Abstract
Neural networks (NNs) are a branch of machine learning (ML) that enables a machine to learn and make new predictions from previously provided data and experience. Notably, this behavior does not need to be programmed explicitly: the algorithm "self-adjusts", evaluating its own performance and tuning its parameters to improve accuracy. Neural networks differ from many other machine learning algorithms in that they do not rely on explicit statistical and mathematical models to make predictions; instead, they mimic the structure and processes of the human brain. This kind of learning is computationally expensive, however, because the number of states and configurations the network can occupy is enormous. Training a neural network can therefore be regarded as an NP-hard optimization problem owing to its vast search space of candidate solutions. Swarm intelligence (SI) algorithms can prune this search space by finding solutions that are not optimal but near-optimal, yielding satisfactory results in a fraction of the time that exhaustive training would require. In this paper, the authors propose a solution to the stated problem based on a hybridized bat algorithm.
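The training scheme the abstract describes — using a swarm-intelligence metaheuristic to search the weight space of a multi-layer perceptron instead of gradient descent — can be sketched as follows. This is a minimal illustration of a plain (non-hybridized) bat algorithm optimizing the nine weights of a tiny 2-2-1 MLP on the XOR problem; the network size, hyperparameters, and function names are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy task: XOR, a classic non-linearly-separable benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mlp_loss(w):
    """Mean squared error of a 2-2-1 MLP whose 9 weights are packed in w."""
    W1 = w[:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden -> output weights
    b2 = w[8]                  # output bias
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return float(np.mean((out - y) ** 2))

def bat_train(n_bats=30, n_iter=300, dim=9, fmin=0.0, fmax=2.0,
              alpha=0.9, gamma=0.9):
    """Basic bat algorithm (Yang, 2010): each bat is a candidate weight vector."""
    pos = rng.uniform(-5, 5, (n_bats, dim))   # bat positions = weight vectors
    vel = np.zeros((n_bats, dim))
    loud = np.ones(n_bats)                    # loudness A_i
    rate = np.zeros(n_bats)                   # pulse emission rate r_i
    fit = np.array([mlp_loss(p) for p in pos])
    best = pos[fit.argmin()].copy()
    best_fit = float(fit.min())
    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            # Frequency-tuned velocity update pulls each bat toward the best.
            freq = fmin + (fmax - fmin) * rng.random()
            vel[i] += (pos[i] - best) * freq
            cand = pos[i] + vel[i]
            if rng.random() > rate[i]:
                # Local random walk around the current best solution.
                cand = best + 0.01 * loud.mean() * rng.standard_normal(dim)
            f_cand = mlp_loss(cand)
            # Accept an improving move with probability given by loudness,
            # then quiet down and pulse faster (exploit more, explore less).
            if f_cand < fit[i] and rng.random() < loud[i]:
                pos[i], fit[i] = cand, f_cand
                loud[i] *= alpha
                rate[i] = 1 - np.exp(-gamma * t)
            if f_cand < best_fit:
                best, best_fit = cand.copy(), f_cand
    return best, best_fit

weights, loss = bat_train()
print(f"best MSE on XOR after bat-algorithm training: {loss:.4f}")
```

Note that each fitness evaluation is a full forward pass over the dataset, which is why the search space reduction the abstract mentions matters: a hybridized variant (as in the paper) adds mechanisms to reach a near-optimal region with fewer such evaluations.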
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Gajic, L., Cvetnic, D., Zivkovic, M., Bezdan, T., Bacanin, N., Milosevic, S. (2021). Multi-layer Perceptron Training Using Hybridized Bat Algorithm. In: Smys, S., Tavares, J.M.R.S., Bestak, R., Shi, F. (eds) Computational Vision and Bio-Inspired Computing. Advances in Intelligent Systems and Computing, vol 1318. Springer, Singapore. https://doi.org/10.1007/978-981-33-6862-0_54
Print ISBN: 978-981-33-6861-3
Online ISBN: 978-981-33-6862-0