Exponential filtering technique for Euclidean norm-regularized extreme learning machines

  • Theoretical Advances
Pattern Analysis and Applications

Abstract

The Extreme Learning Machine (ELM) is a single-hidden-layer feedforward neural network that sidesteps the slow, gradient-based training of conventional neural networks, and it has been reported to achieve faster learning and better performance than traditional networks. However, ELM is prone to unreliable solutions when real-world input data carry inconsistent noise, which results in overfitting. To mitigate this limitation, we investigate regularization techniques that can be employed in conjunction with ELM, including Tikhonov regularization, a well-established method in the field. A main drawback of Tikhonov regularization is its assumption that the noise in the input data is white and Gaussian, which may not hold in real-world applications and can lead to suboptimal regularization and poor generalization. We therefore propose an exponential filtering method used in conjunction with ELM to overcome this limitation and improve the model's reliability. We compare our approach with Tikhonov regularization and other existing methods to evaluate its efficacy. The experimental results demonstrate that the proposed strategy achieves superior accuracy and generalization compared with the other methods, and statistical tests support the significance of these findings.
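To make the comparison concrete, the following Python sketch contrasts two ways of computing the ELM output weights: the standard Tikhonov-regularized solution and an SVD-based solution whose singular values are damped by an exponential filter factor. This is an illustrative sketch, not the authors' implementation; the filter factor 1 - exp(-s_i^2/lambda), the sigmoid activation, and all sizes and parameter values are assumptions.

    # Minimal, self-contained sketch (not the authors' code) contrasting Tikhonov
    # regularization with exponential filtering of singular values when solving
    # for the ELM output weights. The factor 1 - exp(-s_i^2 / lam) is an
    # assumption drawn from the exponential-filtering literature; the paper's
    # exact formulation may differ.
    import numpy as np

    def elm_hidden(X, W, b):
        """Hidden-layer output matrix H using a sigmoid activation."""
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))

    def solve_tikhonov(H, T, lam):
        """Tikhonov-regularized output weights: beta = (H^T H + lam*I)^(-1) H^T T."""
        return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ T)

    def solve_exponential(H, T, lam):
        """Output weights via the SVD of H with exponential filter factors
        f_i = 1 - exp(-s_i^2 / lam) replacing the Tikhonov factors
        s_i^2 / (s_i^2 + lam)."""
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        f = 1.0 - np.exp(-(s ** 2) / lam)
        return Vt.T @ ((f / s)[:, None] * (U.T @ T))

    # Toy usage on random data (hypothetical sizes and regularization parameter).
    rng = np.random.default_rng(0)
    X, T = rng.standard_normal((100, 5)), rng.standard_normal((100, 1))
    W, b = rng.standard_normal((5, 50)), rng.standard_normal(50)  # random hidden weights/biases
    H = elm_hidden(X, W, b)
    print(solve_tikhonov(H, T, lam=1e-2).shape, solve_exponential(H, T, lam=1e-2).shape)

For small singular values the exponential factor behaves like s_i^2/lambda, so noisy directions are suppressed much as in Tikhonov regularization, while well-determined directions (large s_i) are passed through almost unfiltered.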

Availability of data and materials

The data sets are available in the public UCI Machine Learning Repository at http://archive.ics.uci.edu/ml.

Code availability

The code is available on request and will be made public on GitHub after acceptance.

Acknowledgements

This research work received support from the Kyungpook National University Research Fund in 2022.

Funding

Not applicable.

Author information

Corresponding author

Correspondence to Anand Paul.

Ethics declarations

Conflict of interest

To the best of our knowledge, the named authors have no conflict of interest, financial or otherwise.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Naik, S.M., Subramani, C., Jagannath, R.K. et al. Exponential filtering technique for Euclidean norm-regularized extreme learning machines. Pattern Anal Applic 26, 1453–1462 (2023). https://doi.org/10.1007/s10044-023-01174-8
