Abstract
Extreme Learning Machines (ELMs) have become a popular tool for the classification of electroencephalography (EEG) signals in Brain-Computer Interfaces, mainly due to their very high training speed and good generalization capabilities. Another important advantage is that they have only one hyperparameter to calibrate: the number of hidden nodes. While most traditional approaches dictate that this parameter should be chosen smaller than the number of available training examples, in this article we argue that, for problems in which the data contain unrepresentative features, such as EEG classification problems, it is beneficial to choose a much larger number of hidden nodes. We characterize this phenomenon, explain why it happens and present several concrete examples illustrating how ELMs behave. Furthermore, as searching for the optimal number of hidden nodes can be time consuming in enlarged ELMs, we propose a new training scheme, including a novel pruning method. This scheme provides an efficient way of finding the optimal number of nodes, making ELMs better suited for real-time EEG classification problems. Experimental results on synthetic data and real EEG data show a major improvement in training time with respect to most traditional and state-of-the-art ELM approaches, without jeopardising classification performance and while producing more compact networks.
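For readers unfamiliar with the model, the standard ELM training procedure referenced in the abstract (random, fixed hidden-layer weights followed by a closed-form least-squares solve for the output weights) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; all function names and the toy data are our own, and the example deliberately uses more hidden nodes than training examples, in line with the article's argument:

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=None):
    """Minimal ELM sketch: random fixed hidden layer, pseudoinverse output solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                     # least-squares output weights (Moore-Penrose)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: a non-linearly separable problem with labels in {-1, +1},
# solved with far more hidden nodes (200) than training examples (4).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
W, b, beta = elm_train(X, y, n_hidden=200, seed=0)
pred = np.sign(elm_predict(X, W, b, beta))
```

Because the only training step is a single pseudoinverse computation, fitting is essentially instantaneous, which is the speed advantage the abstract refers to; the number of hidden nodes remains the sole hyperparameter.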
Information Sharing Statement
The P300-based BCI dataset is publicly available at https://akimpech.izt.uam.mx/p300db/. The DaSalla imagined speech dataset was originally available at http://www.brainliner.jp/data/brainliner-admin/Speech_Imagery_Dataset. The synthetic data, along with Python and MatLab implementations of the Relevance-Based Pruning method, are also publicly available to encourage reproducible research and can be accessed at https://github.com/N-Nieto/Relevance_Base_Pruning. The OP-ELM implementation was downloaded from https://research.cs.aalto.fi/aml/software.shtml.
Acknowledgements
This research was funded in part by Consejo Nacional de Investigaciones Científicas y Técnicas, CONICET, Argentina, through PIP 2014-2016 No. 11220130100216-CO, the Agencia Nacional de Promoción Científica y Tecnológica through PICT-2017-4596 and by Universidad Nacional del Litoral, UNL, through CAI+D-UNL 2016 PIC No.50420150100036LI.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Cite this article
Nieto, N., Ibarrola, F.J., Peterson, V. et al. Extreme Learning Machine Design for Dealing with Unrepresentative Features. Neuroinform 20, 641–650 (2022). https://doi.org/10.1007/s12021-021-09541-8