Abstract
It is well known that, while the training mode of a probabilistic neural network is completed quickly by the straightforward allocation of units in a single hidden layer, its reference (i.e. testing) mode is slow in ordinary serial computing environments. To alleviate this slow operation, parallel implementation is a desirable option. In this paper, we first quantify the overall number of step-wise operations required for the reference mode of a probabilistic neural network and for that of a multilayered-perceptron (deep) neural network, both implemented in a parallel environment. Second, we derive the necessary condition under which the reference mode of a probabilistic neural network runs as fast as, or faster than, that of a deep neural network. Based upon the condition so derived, we then deduce a comparative relation between the training mode of a probabilistic neural network, in which the k-means clustering algorithm is applied to reduce the number of hidden units, and that of a deep neural network operated in parallel. It is then shown that both the training and testing modes of a compact-sized network meeting these criteria can be run in a parallel environment as fast as or faster than those of a feed-forward deep neural network, while maintaining reasonably high classification performance.
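To make the setting concrete, the following is a minimal sketch of the kind of compact-sized probabilistic neural network the abstract describes: k-means centroids (rather than every training pattern) serve as the hidden units, and the reference mode sums one Gaussian kernel per hidden unit for each class. All names (`kmeans`, `pnn_predict`, the bandwidth `sigma`) are illustrative assumptions, not the paper's notation or implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd-style k-means: returns k centroids of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid ...
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # ... then move each centroid to the mean of its points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def pnn_predict(x, centers_per_class, sigma=0.5):
    """Reference mode of a compact PNN: one Gaussian hidden unit per
    centroid, summed per class; predict the class with the largest score."""
    scores = [np.exp(-((c - x) ** 2).sum(-1) / (2.0 * sigma ** 2)).sum()
              for c in centers_per_class]
    return int(np.argmax(scores))
```

In a full-size PNN the hidden layer holds every training pattern, so the per-pattern kernel sums dominate the reference-mode cost; replacing them with a handful of centroids per class is what shrinks the operation count that the paper's parallel-time comparison is built on. Each class's kernel sum is independent of the others, which is why the reference mode parallelizes so naturally.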
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Takahashi, K., Morita, S., Hoya, T. (2022). Analytical Comparison Between the Pattern Classifiers Based upon a Multilayered Perceptron and Probabilistic Neural Network in Parallel Implementation. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13531. Springer, Cham. https://doi.org/10.1007/978-3-031-15934-3_45
Print ISBN: 978-3-031-15933-6
Online ISBN: 978-3-031-15934-3