
An enhanced extreme learning machine based on ridge regression for regression

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

The extreme learning machine (ELM) is a novel single-hidden-layer feedforward neural network with advantages in many respects, most notably training speed; however, some shortcomings still restrict its further development, such as perturbation and multicollinearity in the linear model used to compute the output weights. To counter the adverse effects of perturbation and multicollinearity, this paper proposes an enhanced ELM based on ridge regression (RR-ELM) for regression, in which ridge regression replaces the least-squares method for calculating the output weights. With this additional ridge adjustment, the properties of the model are further improved. Simulation results show that RR-ELM achieves better stability and generalization performance than ELM.
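To make the change described in the abstract concrete, the sketch below contrasts the standard least-squares ELM solution for the output weights with a ridge-regularized one. It is only an illustration, not the authors' implementation: the sigmoid activation, uniform initialization, function names, and the regularization parameter ridge_lambda are all assumptions made for the example.

```python
import numpy as np

def train_rr_elm(X, y, n_hidden=50, ridge_lambda=1e-3, seed=0):
    """Sketch of an ELM whose output weights are obtained by ridge regression."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]

    # Standard ELM step: random input weights and biases, drawn once and left fixed.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)

    # Hidden-layer output matrix H (sigmoid activation assumed here).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

    # Ridge-regularized output weights: beta = (H^T H + lambda I)^{-1} H^T y.
    # A plain ELM would instead use the Moore-Penrose pseudo-inverse, beta = pinv(H) @ y.
    beta = np.linalg.solve(H.T @ H + ridge_lambda * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_rr_elm(X, W, b, beta):
    # Recompute the hidden-layer outputs and apply the learned output weights.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

The only difference from the ordinary ELM training step is the ridge term ridge_lambda * I added before solving for beta, which is what mitigates ill-conditioning when the hidden-layer outputs are nearly collinear or perturbed.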



Acknowledgments

This project was supported by the National Natural Science Foundation of China (Grant No. 60774028) and the Natural Science Foundation of Hebei Province, China (Grant No. F2010001318).

Author information


Correspondence to Guoqiang Li or Peifeng Niu.


About this article

Cite this article

Li, G., Niu, P. An enhanced extreme learning machine based on ridge regression for regression. Neural Comput & Applic 22, 803–810 (2013). https://doi.org/10.1007/s00521-011-0771-7
