Abstract
The paper presents a modified version of the Generalized Error Backpropagation algorithm (GBP) merged with the RMSprop optimizer. This solution is compared with an analogous method based on Stochastic Gradient Descent. Both algorithms are used to train MLP and CxNN neural networks on selected benchmark and real-life classification problems. The results indicate that using GBP-RMSprop can be beneficial in terms of increasing classification accuracy as well as decreasing the activity of neurons' connections and the length of training. This suggests that RMSprop can effectively solve optimization problems of variable dimensionality. As a result, merging GBP with RMSprop, as well as with other optimizers such as Adam and AdaGrad, can lead to the construction of better algorithms for training contextual neural networks.
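For context, the standard RMSprop rule maintains an exponential moving average of squared gradients and normalizes each parameter step by it. The sketch below shows only this generic update (the symbols ρ, η and ε denote the usual decay rate, learning rate and stabilizing constant, not values taken from the paper); in GBP-RMSprop the analogous update would presumably be applied only to the connections that are active under the conditional aggregation, which is an assumption rather than the paper's exact formulation:

\[
E[g^2]_t = \rho\, E[g^2]_{t-1} + (1-\rho)\, g_t^2,
\qquad
\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}}\, g_t
\]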