Abstract
Basic extreme learning machines use the least-squares solution to compute the network's output weights. In the presence of outliers and multicollinearity, this solution becomes unreliable. To address this problem, a new kind of extreme learning machine is proposed. An outlier detection technique is introduced to locate outliers and limit their influence, and the least-squares solution is replaced by a regularized estimate of the output weights, during which the number of hidden nodes is also chosen automatically. Simulation results show that the proposed model achieves good prediction performance on both clean datasets and datasets contaminated by outliers.
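The title indicates that the regularization is based on least angle regression (LARS). A minimal sketch of this idea, not the authors' exact algorithm: build a random ELM hidden layer, then fit the output weights with a LARS-driven lasso instead of the pseudoinverse, so that the sparsity pattern implicitly selects the hidden nodes. The toy data, node count, and activation below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoLarsCV

rng = np.random.default_rng(0)

# Hypothetical toy regression data (stand-in for the paper's datasets)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)

# ELM hidden layer: random input weights and biases, sigmoid activation
n_hidden = 50
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output matrix

# Replace the least-squares solution (pseudoinverse of H) with a
# LARS-based lasso fit; cross-validation picks the regularization
# strength, and the nonzero coefficients are the hidden nodes kept.
model = LassoLarsCV(cv=5).fit(H, y)
beta = model.coef_                      # sparse output weights
n_selected = int(np.count_nonzero(beta))

y_hat = model.predict(H)
mse = float(np.mean((y - y_hat) ** 2))
print(f"hidden nodes kept: {n_selected}/{n_hidden}, training MSE: {mse:.4f}")
```

Because LARS computes the entire lasso regularization path cheaply, node selection comes essentially for free compared with refitting the network at each candidate size.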
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Shao, H., Japkowicz, N. (2012). Applying Least Angle Regression to ELM. In: Kosseim, L., Inkpen, D. (eds) Advances in Artificial Intelligence. Canadian AI 2012. Lecture Notes in Computer Science, vol 7310. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-30353-1_15
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-30352-4
Online ISBN: 978-3-642-30353-1