
Analytical Comparison Between the Pattern Classifiers Based upon a Multilayered Perceptron and Probabilistic Neural Network in Parallel Implementation

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2022 (ICANN 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13531)


Abstract

It is well known that, while the training mode of a probabilistic neural network completes quickly, since it amounts to a straightforward allocation of units in a single hidden layer, its reference mode is slow in ordinary serial computing environments. Parallel implementation is therefore an attractive way to alleviate this slow operation. In this paper, we first quantify the total number of step-wise operations required for the reference mode of a probabilistic neural network and for that of a multilayered perceptron (or deep) neural network, both implemented in a parallel environment. Second, we derive the necessary condition under which the reference mode of a probabilistic neural network runs as fast as, or faster than, that of a deep neural network. Based upon this condition, we then deduce a comparative relation between the training mode of a probabilistic neural network, in which the k-means clustering algorithm is applied to reduce the number of hidden units, and that of a deep neural network operated in parallel. It is shown that both the training and testing modes of a compact-sized network meeting these criteria can be run in a parallel environment as fast as, or faster than, those of a feed-forward deep neural network, while maintaining reasonably high classification performance.
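To make the setting concrete, the following is a minimal sketch of a compact-sized probabilistic neural network classifier whose hidden layer is reduced with k-means, written in Python. The Gaussian kernel, the shared smoothing parameter `sigma`, the per-class number of centers, and all names are illustrative assumptions for this sketch; they are not the paper's notation, nor its parallel implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch (not the paper's implementation): a PNN classifier
# whose hidden layer is compacted with k-means, as the abstract describes.
class CompactPNN:
    def __init__(self, n_centers_per_class=10, sigma=1.0):
        self.n_centers = n_centers_per_class  # hidden units kept per class (assumption)
        self.sigma = sigma                    # shared kernel width (assumption)

    def fit(self, X, y):
        # "Training mode": allocate hidden units class by class; k-means
        # replaces the raw training patterns with a reduced set of centers.
        self.classes_ = np.unique(y)
        self.centers_ = []
        for c in self.classes_:
            Xc = X[y == c]
            k = min(self.n_centers, len(Xc))
            km = KMeans(n_clusters=k, n_init=10).fit(Xc)
            self.centers_.append(km.cluster_centers_)
        return self

    def predict(self, X):
        # "Reference mode": every test pattern is compared against every
        # hidden-unit center. Each Gaussian kernel evaluation is independent
        # of the others, which is why a parallel implementation can collapse
        # this dominant cost, while serial code pays for it in full.
        scores = np.empty((len(X), len(self.classes_)))
        for j, centers in enumerate(self.centers_):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            scores[:, j] = np.exp(-d2 / (2.0 * self.sigma ** 2)).mean(-1)
        return self.classes_[scores.argmax(axis=1)]
```

In this sketch the per-center kernel evaluations in `predict` are mutually independent, which is the structural property a parallel analysis of the reference mode can exploit; the k-means step in `fit` shrinks the hidden layer from one unit per training pattern to a fixed number of centers per class, trading some accuracy for a smaller step count.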



Author information


Correspondence to Tetsuya Hoya.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Takahashi, K., Morita, S., Hoya, T. (2022). Analytical Comparison Between the Pattern Classifiers Based upon a Multilayered Perceptron and Probabilistic Neural Network in Parallel Implementation. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13531. Springer, Cham. https://doi.org/10.1007/978-3-031-15934-3_45


  • DOI: https://doi.org/10.1007/978-3-031-15934-3_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15933-6

  • Online ISBN: 978-3-031-15934-3

  • eBook Packages: Computer Science, Computer Science (R0)
