Abstract
Deep learning (DL) has achieved substantial practical success and has significantly shaped the conceptual foundations of machine learning and artificial intelligence. The performance of a DL model depends on a number of hyperparameters chosen during model construction, one of which is the learning rate (LR). The main objectives of this work are to examine how the LR of the widely used Adam optimizer affects performance metrics and to identify, without ambiguity, the most suitable LR for future research. The study uses the DDoS SDN dataset and a Long Short-Term Memory (LSTM) DL model for attack detection in fog-based IoT systems. The experiments show that the Adam optimizer's default LR of 0.001 performs best overall; with very large batch sizes, an LR of 0.01 scores higher on several performance metrics but produces excessively noisy training. Consequently, an LR of 0.001 is recommended when building a DL model with the Adam optimizer.
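As a concrete illustration of the setup described above, the following is a minimal sketch in Keras/TensorFlow (the paper does not specify a framework) of an LSTM binary classifier compiled with Adam at the recommended LR of 0.001. The input shape, hidden size, and layer choices are assumptions for illustration, not the authors' exact architecture.

```python
# A minimal sketch, not the authors' exact model: an LSTM binary
# classifier for benign vs. DDoS traffic, compiled with Adam.
import tensorflow as tf

def build_lstm_detector(timesteps=1, n_features=20, lr=0.001):
    # timesteps, n_features, and the hidden size are assumed values.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, n_features)),
        tf.keras.layers.LSTM(64),                       # assumed hidden size
        tf.keras.layers.Dense(1, activation="sigmoid")  # benign vs. DDoS
    ])
    # Adam's default LR is 0.001, the value the study found most reliable;
    # LR 0.01 helped only with very large batches and added noise.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_lstm_detector()
model.summary()
```

Sweeping `lr` over values such as 0.01, 0.001, and 0.0001 while holding the rest of this sketch fixed reproduces the kind of LR comparison the paper reports.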
Ethics declarations
The authors declare that there is no conflict of interest.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Gudla, S.P.K., Bhoi, S.K. (2022). A Study on Effect of Learning Rates Using Adam Optimizer in LSTM Deep Intelligent Model for Detection of DDoS Attack to Support Fog Based IoT Systems. In: Panda, S.K., Rout, R.R., Sadam, R.C., Rayanoothala, B.V.S., Li, K.-C., Buyya, R. (eds.) Computing, Communication and Learning. CoCoLe 2022. Communications in Computer and Information Science, vol. 1729. Springer, Cham. https://doi.org/10.1007/978-3-031-21750-0_3