
Optimizing Extreme Learning Machine via Generalized Hebbian Learning and Intrinsic Plasticity Learning

Published in: Neural Processing Letters


The traditional extreme learning machine (ELM) uses random weights between the input layer and the hidden layer. This random feature mapping produces a non-discriminative feature space and unstable classification accuracy, which greatly limits the performance of ELM networks. To obtain well-performing input weights, two biologically inspired, unsupervised learning methods are therefore introduced to optimize the traditional ELM network: the generalized Hebbian algorithm (GHA) and intrinsic plasticity learning (IPL). The GHA extracts the principal components of input data of arbitrary dimension, while the IPL tunes the probability density of each neuron's output towards a desired distribution, such as an exponential or Weibull distribution, thereby maximizing the network's information transmission. By incorporating GHA and IPL, the optimized ELM network generates a discriminative feature space and preserves far more of the characteristics of the input data, and accordingly achieves better task performance. Building on these two unsupervised methods, a simple yet effective hierarchical feature mapping extreme learning machine (HFMELM) is further proposed. With almost no information loss in the layer-wise feature mapping process, the HFMELM is able to learn high-level representations of the input data. To evaluate the effectiveness of the proposed methods, extensive experiments on several datasets are presented; the results show that the proposed methods significantly outperform traditional ELM networks.
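As a concrete illustration of the two unsupervised rules named above, the following sketch implements the classic online forms of Sanger's generalized Hebbian algorithm and Triesch's intrinsic plasticity rule (with an exponential target distribution). This is not the authors' implementation; the function names, learning rates, and target mean are illustrative assumptions.

```python
import numpy as np

def gha(X, n_components, eta=0.01, epochs=100, seed=0):
    """Generalized Hebbian Algorithm (Sanger's rule), minimal sketch.

    Learns the top principal components of the (centered) data X online:
        y = W x,   dW = eta * (y x^T - tril(y y^T) W)
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x  # component activations
            # The lower-triangular term decorrelates each output from the
            # ones above it, so the rows converge to successive PCs.
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def ip_adapt(inputs, mu=0.2, eta=0.001, a=1.0, b=0.0):
    """Intrinsic plasticity (Triesch's online rule), minimal sketch.

    Adapts the slope a and bias b of a sigmoid neuron y = sigm(a*x + b)
    so that the output distribution approaches an exponential with mean mu,
    which maximizes information transmission under a fixed mean firing rate.
    """
    for x in inputs:
        y = 1.0 / (1.0 + np.exp(-(a * x + b)))
        db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
        a += eta / a + db * x
        b += db
    return a, b
```

In an IPL-optimized ELM, a rule like `ip_adapt` would be run per hidden neuron on its pre-activations before the output weights are solved analytically; `gha` plays the role of replacing the random input weights with principal directions of the training data.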





Acknowledgements

This research is supported by the National Science and Technology Major Projects (No. 2013ZX03005013), and the Opening Foundation of the State Key Laboratory for Diagnosis and Treatment of Infectious Diseases (No. 2014KF06).

Author information



Corresponding author

Correspondence to Xinyu Jin.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chen, C., Jin, X., Jiang, B. et al. Optimizing Extreme Learning Machine via Generalized Hebbian Learning and Intrinsic Plasticity Learning. Neural Process Lett 49, 1593–1609 (2019).
