Applying Least Angle Regression to ELM

  • Conference paper
Advances in Artificial Intelligence (Canadian AI 2012)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 7310))

Abstract

Basic extreme learning machines (ELMs) compute the neural network's output weights with a least-squares solution. In the presence of outliers and multicollinearity, the least-squares solution becomes unreliable. To address this problem, a new kind of extreme learning machine is proposed. An outlier detection technique is introduced to locate outliers and avoid their interference, and the least-squares solution is replaced by a regularized one for computing the output weights, during which the number of hidden nodes is also chosen automatically. Simulation results show that the proposed model has good prediction performance on both normal datasets and datasets contaminated by outliers.
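To make the pipeline the abstract describes concrete, the following is a minimal sketch of a basic ELM in NumPy: a random hidden layer followed by a regularized least-squares solve for the output weights. The paper's actual method uses least angle regression (LARS), which additionally selects the number of hidden nodes; here a simple ridge penalty stands in as a proxy, and all function names, the `reg` parameter, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def elm_fit(X, y, n_hidden=40, reg=1e-3, seed=0):
    """Train a basic ELM: random hidden layer + regularized least squares.

    NOTE: this is a sketch. The paper replaces the plain least-squares
    solve with LARS (which also prunes hidden nodes); ridge regularization
    is used here only as a simpler stand-in.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # Regularized output weights: beta = (H^T H + reg*I)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the fixed random hidden layer, then the learned output weights."""
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress a 1-D sine wave.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
```

Because the input weights are random and fixed, only the linear output layer is fitted, which is why the choice of solver (plain least squares vs. a regularized method such as LARS) directly determines the model's robustness to outliers and collinear hidden-layer columns.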



Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Shao, H., Japkowicz, N. (2012). Applying Least Angle Regression to ELM. In: Kosseim, L., Inkpen, D. (eds) Advances in Artificial Intelligence. Canadian AI 2012. Lecture Notes in Computer Science(), vol 7310. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-30353-1_15

  • DOI: https://doi.org/10.1007/978-3-642-30353-1_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-30352-4

  • Online ISBN: 978-3-642-30353-1

  • eBook Packages: Computer Science, Computer Science (R0)
