
Two noise tolerant incremental learning algorithms for single layer feed-forward neural networks

Original Research · Journal of Ambient Intelligence and Humanized Computing

Abstract

This paper focuses on noise resistant incremental learning algorithms for single layer feed-forward neural networks (SLFNNs). In a physical implementation of a well trained neural network, faults or noise are unavoidable. Since biological neural networks are able to tolerate noise, we would like a trained neural network to have a certain ability to tolerate noise as well. This paper first develops a noise tolerant objective function that can handle multiplicative weight noise. We assume that multiplicative weight noise exists both in the weights between the input layer and the hidden layer and in the weights between the hidden layer and the output layer. Based on the developed objective function, we propose two noise tolerant incremental extreme learning machine algorithms, namely the weight deviation tolerant incremental extreme learning machine (WDT-IELM) and the weight deviation tolerant convex incremental extreme learning machine (WDTC-IELM). Compared with the original extreme learning machine algorithms, the two proposed algorithms have a much better ability to tolerate multiplicative weight noise. Several simulations are carried out to demonstrate the superiority of the two proposed algorithms.
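To make the setting concrete, here is a brief sketch of the multiplicative weight noise model and of how it reshapes an incremental ELM update. This is an illustrative reconstruction under simplifying assumptions, not the paper's exact derivation: noise is assumed only on the hidden-to-output weights, with independent zero-mean factors b_i of variance σ_b².

```latex
% Multiplicative weight noise on an output weight beta_i:
%   \tilde{\beta}_i = \beta_i (1 + b_i),  E[b_i] = 0,  E[b_i^2] = \sigma_b^2.
% For a network output f(x) = \sum_i \beta_i h_i(x), averaging the squared
% training error over the noise yields a regularized objective:
\begin{equation}
\mathbb{E}_b\!\left[\sum_{x}\Big(y(x)-\sum_i \beta_i(1+b_i)\,h_i(x)\Big)^{2}\right]
= \sum_{x}\Big(y(x)-\sum_i \beta_i h_i(x)\Big)^{2}
+ \sigma_b^{2}\sum_i \beta_i^{2}\sum_{x} h_i^{2}(x).
\end{equation}
```

Minimizing this expected error over the output weight of a single newly added hidden node with output h_n and current residual e gives a shrunken version of the standard incremental ELM weight, β_n = ⟨e, h_n⟩ / ((1 + σ_b²) ⟨h_n, h_n⟩). The Python sketch below strings these updates together; all names are hypothetical, and the paper's actual algorithms additionally handle noise in the input-to-hidden weights (and, for WDTC-IELM, a convex rescaling of the earlier weights), which this sketch omits.

```python
import numpy as np

def noise_tolerant_ielm(X, y, n_nodes=50, sigma_b2=0.01, seed=0):
    """Illustrative sketch (hypothetical names, not the paper's code).

    Adds sigmoid hidden nodes one at a time, as in incremental ELM,
    but shrinks each new output weight by 1/(1 + sigma_b2) to account
    for multiplicative noise on the hidden-to-output weights only.
    """
    rng = np.random.default_rng(seed)
    e = np.asarray(y, dtype=float).copy()        # current residual
    nodes = []                                   # (a, b, beta) per node
    for _ in range(n_nodes):
        a = rng.normal(size=X.shape[1])          # random input weights
        b = rng.normal()                         # random bias
        h = 1.0 / (1.0 + np.exp(-(X @ a + b)))   # hidden node output
        # Noise-penalized least-squares weight for the new node:
        # beta = <e, h> / ((1 + sigma_b2) * <h, h>)
        beta = (e @ h) / ((1.0 + sigma_b2) * (h @ h))
        e = e - beta * h                         # update the residual
        nodes.append((a, b, beta))
    return nodes

def predict(nodes, X):
    """Sum the contributions of all added hidden nodes."""
    out = np.zeros(X.shape[0])
    for a, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ a + b)))
    return out
```

Setting sigma_b2 = 0 recovers the standard incremental ELM update, so the assumed noise variance directly controls the trade-off between training fit and robustness to weight noise.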



Acknowledgements

This work was supported by a research grant from City University of Hong Kong (No. 9610431).

Author information

Correspondence to Andrew Chi Sing Leung.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Adegoke, M., Wong, H.T., Leung, A.C.S. et al. Two noise tolerant incremental learning algorithms for single layer feed-forward neural networks. J Ambient Intell Human Comput 14, 15643–15657 (2023). https://doi.org/10.1007/s12652-019-01488-8

