Extreme Learning Machine: A Robust Modeling Technique? Yes!

  • Conference paper
Advances in Computational Intelligence (IWANN 2013)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7902)

Abstract

This paper describes the original (basic) Extreme Learning Machine (ELM) and studies its properties, in particular its robustness and its sensitivity to variable selection. Several extensions of the original ELM are then presented and compared. First, the Tikhonov-Regularized Optimally-Pruned Extreme Learning Machine (TROP-ELM) is summarized: it improves on the Optimally-Pruned Extreme Learning Machine (OP-ELM) by adding an L2 regularization penalty within the OP-ELM. Second, a methodology to linearly ensemble ELMs (-ELM) is presented in order to improve the performance of the original ELM. Both methodologies (TROP-ELM and -ELM) are tested against state-of-the-art methods such as Support Vector Machines and Gaussian Processes, as well as against the original ELM and OP-ELM, on ten different data sets. A dedicated experiment testing the sensitivity of these methodologies to variable selection is also presented.
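The basic ELM described above can be sketched in a few lines: a single hidden layer whose input weights and biases are drawn at random and never trained, followed by a least-squares fit of the output weights. A minimal NumPy sketch follows; the function names, the `tanh` activation, and the hyperparameters (`n_hidden`, `reg`) are illustrative choices, not taken from the paper, and `reg > 0` stands in for the Tikhonov (L2) penalty used by regularized variants such as TROP-ELM.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, reg=1e-8, seed=0):
    """Train a basic ELM: random hidden layer, least-squares output weights.

    reg > 0 adds a Tikhonov (L2) penalty on the output weights, in the
    spirit of the regularized ELM variants discussed in the paper.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    # Ridge-regularized least squares: beta = (H^T H + reg*I)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict with a fitted ELM: same random projection, linear readout."""
    return np.tanh(X @ W + b) @ beta
```

Because only the linear output layer is fitted, training reduces to one (regularized) least-squares solve, which is the source of ELM's speed relative to backpropagation-trained networks.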



Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lendasse, A. et al. (2013). Extreme Learning Machine: A Robust Modeling Technique? Yes!. In: Rojas, I., Joya, G., Gabestany, J. (eds) Advances in Computational Intelligence. IWANN 2013. Lecture Notes in Computer Science, vol 7902. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38679-4_2

  • DOI: https://doi.org/10.1007/978-3-642-38679-4_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-38678-7

  • Online ISBN: 978-3-642-38679-4
