Abstract
This paper focuses on noise-resistant incremental learning algorithms for single-layer feed-forward neural networks (SLFNNs). In any physical implementation of a well-trained neural network, faults and noise are unavoidable. Since biological neural networks are able to tolerate noise, we would like a trained artificial neural network to have a similar tolerance. This paper first develops a noise-tolerant objective function that handles multiplicative weight noise. We assume that multiplicative weight noise exists both in the weights between the input layer and the hidden layer and in the weights between the hidden layer and the output layer. Based on the developed objective function, we propose two noise-tolerant incremental extreme learning machine algorithms, namely the weight deviation incremental extreme learning machine (WDT-IELM) and the weight deviation convex incremental extreme learning machine (WDTC-IELM). Compared with the original extreme learning machine algorithms, the two proposed algorithms tolerate multiplicative weight noise much better. Several simulations demonstrate the superiority of the two proposed algorithms.
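To make the setting concrete, the sketch below illustrates the multiplicative weight noise model on a basic extreme learning machine. This is not the paper's WDT-IELM/WDTC-IELM algorithm; it is a minimal illustration, assuming the common noise model in which each trained weight w is perturbed to w(1 + b) with b drawn zero-mean, and using a hypothetical sinc regression task and sigmoid hidden nodes. It shows how test error degrades when such noise is injected into the output weights of an ordinarily trained ELM, which is the degradation the proposed noise-tolerant training is meant to reduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression task: learn y = sinc(x)
X = rng.uniform(-5.0, 5.0, size=(200, 1))
y = np.sinc(X).ravel()

# Basic ELM: random input-layer weights, least-squares output weights
n_hidden = 50
W_in = rng.standard_normal((1, n_hidden))        # input-to-hidden weights (random, fixed)
b_in = rng.standard_normal(n_hidden)             # hidden biases (random, fixed)
H = 1.0 / (1.0 + np.exp(-(X @ W_in + b_in)))     # sigmoid hidden-layer outputs
beta = np.linalg.pinv(H) @ y                     # output weights via pseudoinverse

def mse_under_weight_noise(sigma_b, n_trials=100):
    """Average MSE when each output weight is perturbed multiplicatively:
    beta_i -> beta_i * (1 + b_i), with b_i ~ N(0, sigma_b^2)."""
    errs = []
    for _ in range(n_trials):
        noisy_beta = beta * (1.0 + sigma_b * rng.standard_normal(beta.shape))
        errs.append(np.mean((H @ noisy_beta - y) ** 2))
    return float(np.mean(errs))

clean_mse = float(np.mean((H @ beta - y) ** 2))
noisy_mse = mse_under_weight_noise(sigma_b=0.1)   # 10% relative weight noise
```

Under this model, the expected error grows with the noise variance and with the magnitudes of the trained weights, which is why a noise-tolerant objective penalizes configurations whose output is sensitive to relative weight deviations.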
Acknowledgements
The work is supported by a research grant from City University of Hong Kong (No. 9610431).
Cite this article
Adegoke, M., Wong, H.T., Leung, A.C.S. et al. Two noise tolerant incremental learning algorithms for single layer feed-forward neural networks. J Ambient Intell Human Comput 14, 15643–15657 (2023). https://doi.org/10.1007/s12652-019-01488-8