Abstract
For machine learning (ML) to advance, current concepts must be revised in pursuit of the fastest possible learning. Over the past decades, many designers have attempted to find optimal learning rates for their applications through a variety of algorithms, yet the goal of maximizing back-propagation (BP) speed has not been reached. This research proposes a novel BP rule called Instant Learning Ratios-Machine Learning (ILR-ML, or ILRML). Unlike traditional BP algorithms, ILR-ML learns without any notion of a learning rate. Instead, it introduces a new concept called the "Learning Ratio", denoted by Δℓ. ILR-ML performs the full BP algorithm with 100% accuracy in each learning iteration, which makes it particularly well suited to online machine learning.
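For context, a conventional BP weight update is scaled by a learning rate η, the hand-tuned hyperparameter that the proposed ILR-ML paradigm aims to eliminate. The following is a minimal sketch of that conventional update for a single sigmoid neuron (the function names and example values are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(w, x, target, eta):
    """One conventional gradient-descent step on squared error.

    The update magnitude depends entirely on the learning rate eta --
    the quantity ILR-ML replaces with its Learning Ratio concept.
    """
    y = sigmoid(w @ x)                  # forward pass
    delta = (y - target) * y * (1 - y)  # squared-error gradient at the output
    return w - eta * delta * x          # update scaled by eta

# Illustrative run: repeated steps slowly pull the output toward the target.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = np.array([1.0, 0.5, -0.3])
for _ in range(200):
    w = bp_step(w, x, target=1.0, eta=0.5)
```

Note how many iterations the fixed-η update needs even on a single sample; too small an η converges slowly, too large an η diverges. This tuning burden is the motivation for learning-rate-free schemes such as the one the abstract describes.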
Ethics declarations
Conflict of interest
The authors whose names are listed immediately below certify that they have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or nonfinancial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Badr, A. Awesome back-propagation machine learning paradigm. Neural Comput & Applic 33, 13225–13249 (2021). https://doi.org/10.1007/s00521-021-05951-6