
Extreme learning machine with errors in variables


Abstract

Extreme learning machine (ELM) is widely used for training single-hidden-layer feedforward neural networks (SLFNs) because of its good generalization performance and fast training speed. However, most improved ELMs address the approximation problem only for sample data with noise in the output values, not for data with noise in both the input and output values, i.e., the errors-in-variables (EIV) model. In this paper, a novel algorithm, called (regularized) TLS-ELM, is proposed to approximate the EIV model based on ELM and the total least squares (TLS) method. The proposed TLS-ELM uses the idea of ELM to choose the hidden weights and applies the TLS method to determine the output weights. Furthermore, the perturbation quantities of the hidden output matrix and the observed values are given simultaneously. Comparison experiments of the proposed TLS-ELM with the least squares method, the TLS method, and ELM show that TLS-ELM achieves better accuracy and shorter training time.
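The abstract only sketches the two stages of TLS-ELM, so the following minimal Python sketch shows how they could fit together: the hidden-layer weights are drawn at random as in standard ELM, and the output weights are obtained from the classical SVD-based total least squares solution of H·β ≈ T. The function names, the sigmoid activation, and the uniform random initialization are illustrative assumptions; the regularized variant mentioned in the abstract and the paper's perturbation estimates are not reproduced here.

```python
import numpy as np

def tls_elm_fit(X, T, n_hidden=50, seed=0):
    """Minimal TLS-ELM sketch: random hidden layer (ELM idea) plus a
    total least squares fit of the output weights, H @ beta ~= T."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    T = T.reshape(n, -1)                      # targets as an n x m matrix
    m = T.shape[1]

    # ELM step: hidden weights and biases are drawn at random and never tuned.
    W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden output matrix, n x L

    # TLS step: SVD of the augmented matrix [H | T]; the output weights come
    # from the right singular vectors of the m smallest singular values
    # (classical TLS solution, assuming the V22 block is invertible).
    _, _, Vt = np.linalg.svd(np.hstack([H, T]), full_matrices=False)
    V = Vt.T
    V12 = V[:n_hidden, n_hidden:]             # L x m block
    V22 = V[n_hidden:, n_hidden:]             # m x m block
    beta = -V12 @ np.linalg.inv(V22)
    return W, b, beta

def tls_elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy EIV-style usage: noise added to both inputs and outputs.
rng = np.random.default_rng(1)
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
X_noisy = X + 0.02 * rng.standard_normal(X.shape)
y_noisy = y + 0.02 * rng.standard_normal(y.shape)
W, b, beta = tls_elm_fit(X_noisy, y_noisy, n_hidden=30)
y_hat = tls_elm_predict(X, W, b, beta)
```

A regularized variant would add a Tikhonov-style penalty to the TLS problem, consistent with the "(regularized) TLS-ELM" name used in the abstract; that extension is omitted from the sketch.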



Author information

Corresponding author

Correspondence to Feilong Cao.

About this article

Cite this article

Zhao, J., Wang, Z. & Cao, F. Extreme learning machine with errors in variables. World Wide Web 17, 1205–1216 (2014). https://doi.org/10.1007/s11280-013-0220-x

