Abstract
Extreme learning machines (ELMs) have been applied successfully to many real-world applications because of their fast training speed and good generalization performance. However, the standard ELM algorithm initially requires a large number of hidden nodes in order to guarantee convergence, and it suffers from two drawbacks: over-fitting and the sensitivity of its accuracy to the number of hidden nodes. This paper proposes a new extreme learning machine with smoothing \(L_{1/2}\) regularization to overcome these two drawbacks. The main advantage of the proposed approach is that the regularization drives the output weights toward small values during training, so that nodes with sufficiently small weights can be removed after training, yielding a network of suitable size. Numerical experiments have been carried out on approximation problems and multi-class classification problems, and the preliminary results show that the proposed approach works well.
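To make the idea concrete, here is a minimal sketch of the kind of objective involved; it is our illustration, following the piecewise-polynomial smoothing scheme common in the smoothing \(L_{1/2}\) literature, and the paper's exact formulation may differ. The non-smooth, non-convex penalty \(\sum_i |w_i|^{1/2}\) on the output weights is replaced by a smoothed version:

\[
E(\mathbf{w}) = \frac{1}{2}\,\|H\mathbf{w} - T\|^2 + \lambda \sum_i \bigl(f(w_i)\bigr)^{1/2},
\qquad
f(x) =
\begin{cases}
|x|, & |x| \ge a,\\[2pt]
-\dfrac{x^4}{8a^3} + \dfrac{3x^2}{4a} + \dfrac{3a}{8}, & |x| < a,
\end{cases}
\]

where \(H\) is the hidden-layer output matrix, \(T\) the target matrix, \(\lambda > 0\) the regularization coefficient, and \(a > 0\) a small smoothing width. Because \(f(x) \ge 3a/8 > 0\), the penalty is continuously differentiable everywhere, so the whole objective can be minimized by plain gradient descent. The Python sketch below (all function names and hyper-parameters are our own illustrative choices, not the authors' code) trains only the output weights, as in a standard ELM, and then prunes hidden nodes whose outgoing weights have shrunk below a threshold:

```python
import numpy as np

def smooth_abs(w, a=0.1):
    """C^1 smoothing of |w|: exact for |w| >= a, quartic polynomial inside."""
    out = np.abs(w).astype(float)
    m = out < a
    out[m] = -w[m]**4 / (8 * a**3) + 3 * w[m]**2 / (4 * a) + 3 * a / 8
    return out

def smooth_abs_grad(w, a=0.1):
    """Derivative of smooth_abs with respect to w."""
    g = np.sign(w).astype(float)
    m = np.abs(w) < a
    g[m] = -w[m]**3 / (2 * a**3) + 3 * w[m] / (2 * a)
    return g

def train_pruned_elm(X, T, n_hidden=100, lam=1e-3, lr=1e-2,
                     epochs=5000, a=0.1, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.normal(size=n_hidden)                    # random, fixed biases
    H = np.tanh(X @ W_in + b)                        # hidden-layer output matrix
    beta = 0.1 * rng.normal(size=(n_hidden, T.shape[1]))
    for _ in range(epochs):
        err = H @ beta - T
        # gradient of 0.5*||H beta - T||^2 / N + lam * sum f(beta)^{1/2}
        pen_grad = 0.5 * smooth_abs(beta, a) ** (-0.5) * smooth_abs_grad(beta, a)
        beta -= lr * (H.T @ err / len(X) + lam * pen_grad)
    keep = np.abs(beta).max(axis=1) > tol            # drop near-zero hidden nodes
    return W_in[:, keep], b[keep], beta[keep]
```

Since \(f \ge 3a/8\), the factor \(f(\beta)^{-1/2}\) in the penalty gradient never blows up, which is precisely what the smoothing buys over the raw \(|w|^{1/2}\) penalty.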
Acknowledgement
This work was supported by the National Natural Science Foundation of China (No. 11171367).
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Fan, Q.-W., He, X.-S., Yang, X.-S. (2018). Smoothing Regularized Extreme Learning Machine. In: Pimenidis, E., Jayne, C. (eds.) Engineering Applications of Neural Networks. EANN 2018. Communications in Computer and Information Science, vol. 893. Springer, Cham. https://doi.org/10.1007/978-3-319-98204-5_7
Print ISBN: 978-3-319-98203-8
Online ISBN: 978-3-319-98204-5