
Awesome back-propagation machine learning paradigm

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Advancing machine learning (ML) requires revisiting some of our current concepts in order to achieve faster training. Over the past decades, many designers have attempted to find optimal learning rates for their applications through many algorithms, but the target of maximizing the speed of back-propagation (BP) has not yet been reached. This research proposes a novel BP rule called Instant Learning Ratios-Machine Learning (ILR-ML, or ILRML). Unlike traditional BP algorithms, ILR-ML learns without any notion of a learning rate. Instead, it introduces a new concept called the "learning ratio", denoted by the symbol Δℓ. ILR-ML performs the full BP algorithm with 100% accuracy at each learning iteration, which makes it especially suitable for online machine learning.
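
The full update rule is given only in the article body, but the core idea stated in the abstract, replacing the learning rate with a "learning ratio" Δℓ that fits each training sample exactly in one BP iteration, can be sketched for a single linear neuron. The following Python snippet is a minimal illustration under that assumption; the form of the ratio (t / y) and the names bp_update and ilr_update are hypothetical, not the paper's actual method.

    import numpy as np

    def bp_update(w, x, t, lr=0.01):
        # Conventional BP step: follow the squared-error gradient,
        # scaled by a hand-tuned learning rate.
        y = np.dot(w, x)               # forward pass of one linear neuron
        grad = (y - t) * x             # d/dw of 0.5 * (y - t)**2
        return w - lr * grad           # small, rate-limited correction

    def ilr_update(w, x, t, eps=1e-12):
        # Hypothetical "learning ratio" step (illustration only): rescale
        # the weights by the target/output ratio so this sample is fit
        # exactly, with no learning rate anywhere in the update.
        y = np.dot(w, x)
        ratio = t / (y + eps)          # assumed form of the learning ratio Δℓ
        return w * ratio               # one multiplicative step

    rng = np.random.default_rng(0)
    x, t = rng.normal(size=3), 1.5
    w = rng.normal(size=3)
    print(abs(np.dot(ilr_update(w, x, t), x) - t))   # ~0: sample fit in one step

With a rate-based step, the residual error shrinks gradually over many iterations; a ratio-based step of this kind drives the error on the current sample to zero immediately, which is consistent with the abstract's claim of exact learning per iteration and with its suitability for online learning, where each sample may be seen only once.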


(Figures 1–31 are available in the full-text article; no captions appear on this page.)



Author information


Corresponding author

Correspondence to Assem Badr.

Ethics declarations

Conflict of interest

The author certifies that they have no affiliations with, or involvement in, any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements) or non-financial interest (such as personal or professional relationships, affiliations, or knowledge or beliefs) in the subject matter or materials discussed in this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Badr, A. Awesome back-propagation machine learning paradigm. Neural Comput & Applic 33, 13225–13249 (2021). https://doi.org/10.1007/s00521-021-05951-6
