
On extreme learning machine for ε-insensitive regression in the primal by Newton method


Abstract

In this paper, an extreme learning machine (ELM) for the ε-insensitive error loss function-based regression problem, formulated in 2-norm as an unconstrained optimization problem in primal variables, is proposed. Since the objective function of this unconstrained optimization problem is not twice differentiable, the popular generalized Hessian matrix and smoothing approaches are considered, leading to optimization problems whose solutions are determined using a fast Newton–Armijo algorithm. The main advantage of the algorithm is that at each iteration only a system of linear equations is solved. In numerical experiments on a number of interesting synthetic and real-world datasets, the results of the proposed method are compared with those of ELM using additive and radial basis function hidden nodes and of support vector regression (SVR) using the Gaussian kernel. The similar or better generalization performance of the proposed method on test data, obtained in comparable computational time, clearly illustrates its efficiency and applicability relative to ELM and SVR.
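
The abstract states the formulation only in words. In the standard primal 2-norm ε-insensitive ELM objective it describes, one minimizes 0.5*||β||^2 + 0.5*C*Σ_i max(|h(x_i)'β − y_i| − ε, 0)^2 over the output weights β, where h(x) is the random hidden-layer map; this objective is once but not twice differentiable, which is where the generalized Hessian enters. The sketch below is a minimal illustration of such a generalized-Hessian Newton–Armijo loop, not the authors' implementation: the sigmoid hidden nodes, parameter ranges, Armijo constants, and stopping rule are assumptions, and names such as sigmoid_features and fit_elm_eps_newton are hypothetical.

    import numpy as np

    def sigmoid_features(X, W, b):
        # Additive hidden nodes with random parameters: G(a, b, x) = sigmoid(a'x + b).
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))

    def fit_elm_eps_newton(X, y, n_hidden=50, C=10.0, eps=0.1,
                           tol=1e-6, max_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        # ELM step: draw hidden-layer parameters at random and keep them fixed.
        W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))
        b = rng.uniform(-1.0, 1.0, n_hidden)
        H = sigmoid_features(X, W, b)        # N x L hidden-layer output matrix
        beta = np.zeros(n_hidden)            # output weights, learned below

        def loss(v):
            # 0.5*||v||^2 + 0.5*C * sum(max(|H v - y| - eps, 0)^2)
            r = H @ v - y
            return 0.5 * v @ v + 0.5 * C * np.sum(np.maximum(np.abs(r) - eps, 0.0) ** 2)

        for _ in range(max_iter):
            r = H @ beta - y
            slack = np.maximum(np.abs(r) - eps, 0.0)
            g = beta + C * H.T @ (np.sign(r) * slack)   # gradient (exists everywhere)
            if np.linalg.norm(g) < tol:
                break
            active = np.abs(r) > eps                    # points outside the eps-tube
            Ha = H[active]
            # Generalized Hessian: I + C * Ha' Ha, positive definite,
            # so the Newton direction solves one linear system per iteration.
            d = np.linalg.solve(np.eye(n_hidden) + C * Ha.T @ Ha, -g)
            # Armijo line search: largest step in {1, 1/2, 1/4, ...} giving
            # sufficient decrease loss(beta + t*d) <= loss(beta) + 1e-4 * t * g'd.
            t, f0, slope = 1.0, loss(beta), g @ d
            while loss(beta + t * d) > f0 + 1e-4 * t * slope and t > 1e-8:
                t *= 0.5
            beta = beta + t * d
        return W, b, beta

Each pass of the loop solves exactly one L x L linear system, which is the computational advantage the abstract highlights; prediction on new inputs would be sigmoid_features(X_new, W, b) @ beta.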



Acknowledgments

The authors would like to thank the referees for their valuable comments, which greatly improved an earlier version of the paper. Mr. Kapil acknowledges the financial support provided as a scholarship by the Council of Scientific and Industrial Research, India.

Author information


Corresponding author

Correspondence to S. Balasundaram.

About this article

Cite this article

Balasundaram, S., Kapil. On extreme learning machine for ε-insensitive regression in the primal by Newton method. Neural Comput & Applic 22, 559–567 (2013). https://doi.org/10.1007/s00521-011-0798-9
