Two Novel Versions of Randomized Feed Forward Artificial Neural Networks: Stochastic and Pruned Stochastic

Abstract

Although artificial neural networks (ANNs) have achieved high accuracies, determining the optimal number of neurons in the hidden layer and the optimal activation function is still an open issue. In this paper, the applicability of assigning the number of neurons in the hidden layer and the activation function randomly was investigated. Based on the findings, two novel versions of randomized ANNs, namely stochastic and pruned stochastic, were proposed to achieve higher accuracy without any time-consuming optimization stage. The proposed approaches were evaluated and validated against the basic versions of the popular randomized ANNs [1]: the random weight neural network [2], the random vector functional link network [3], and the extreme learning machine [4]. In the stochastic version of randomized ANNs, not only the weights and biases of the hidden-layer neurons but also the number of neurons in the hidden layer and the activation function of each neuron were assigned randomly. In the pruned stochastic version of these methods, the winner networks were pruned according to a novel strategy in order to produce a faster response. The proposed approaches were validated on 60 datasets (30 classification and 30 regression datasets). The obtained accuracies and time usages showed that both versions of randomized ANNs can be employed for classification and regression.
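
As a concrete illustration of the pipeline described above, the following Python sketch builds one "stochastic" randomized network in the ELM style [4]: the input weights and biases are drawn at random, the hidden-layer size and the activation function of each hidden neuron are drawn at random as well, and the output weights are solved in closed form by least squares. The candidate activation set, the sampling ranges, and the function names are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Candidate activation functions to sample from; the actual candidate set
# used in the paper is an assumption here.
ACTIVATIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh": np.tanh,
    "relu": lambda x: np.maximum(0.0, x),
}

def train_stochastic_net(X, y, max_hidden=200, rng=None):
    """Train one randomized network with a random hidden-layer size and a
    randomly assigned activation function for each hidden neuron."""
    rng = rng or np.random.default_rng()
    n_hidden = int(rng.integers(1, max_hidden + 1))          # random hidden-layer size
    acts = rng.choice(list(ACTIVATIONS), size=n_hidden)      # random activation per neuron
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random biases
    Z = X @ W + b
    H = np.column_stack([ACTIVATIONS[a](Z[:, j]) for j, a in enumerate(acts)])
    beta = np.linalg.pinv(H) @ y    # least-squares output weights, as in ELM [4]
    return W, b, acts, beta

def predict(model, X):
    """Propagate X through the stored random hidden layer and output weights."""
    W, b, acts, beta = model
    Z = X @ W + b
    H = np.column_stack([ACTIVATIONS[a](Z[:, j]) for j, a in enumerate(acts)])
    return H @ beta
```

In the pruned stochastic version, several such networks would be trained, the most accurate ("winner") networks retained, and their hidden layers pruned for a faster response; the pruning criterion itself is the paper's novel contribution and is given in the full text.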

References

  1. Zhang L, Suganthan PN (2016) A survey of randomized algorithms for training neural networks. Inf Sci 364:146–155

  2. Schmidt WF, Kraaijveld MA, Duin RPW (1992) Feed forward neural networks with random weights. In: Proceedings of the 11th IAPR international conference on pattern recognition, IEEE, pp 1–4

  3. Pao YH, Park GH, Sobajic DJ (1994) Learning and generalization characteristics of the random vector functional-link net. Neurocomputing 6(2):163–180

  4. Huang G-B, Zhu Q, Siew C-K (2006) Extreme learning machine: theory and applications. Neurocomputing 70(1–3):489–501

  5. Hornik K (1991) Approximation capabilities of multilayer feedforward networks. Neural Netw 4(2):251–257

  6. Wang D (2016) Editorial: Randomized algorithms for training neural networks. Inf Sci 364–365:126–128

  7. Li M, Wang D (2017) Insights into randomized algorithms for neural networks: practical issues and common pitfalls. Inf Sci 382–383:170–178

  8. Zhang L, Suganthan PN (2016) A comprehensive evaluation of random vector functional link networks. Inf Sci 367–368:1094–1105

  9. Huang G, Huang G-B, Song S, You K (2015) Trends in extreme learning machines: a review. Neural Netw 61:32–48

  10. Huang G-B (2015) What are extreme learning machines? filling the gap between Frank Rosenblatt’s dream and John von Neumann’s puzzle. Cogn Comput 7(3):263–278

  11. Huang G-B (2014) An insight into extreme learning machines: random neurons, random features and kernels. Cogn Comput 6(3):376–390

  12. Hernández-Aguirre A, Koutsougeras C, Buckles BP (2002) Sample complexity for function learning tasks through linear neural networks. Lect Notes Comput Sci 2313:262–271

  13. Porwal A, Carranza EJM, Hale M (2003) Artificial neural networks for mineral-potential mapping: a case study from Aravalli Province, Western India. Nat Resour Res 12(3):155–171

  14. Das DP, Panda G (2004) Active mitigation of nonlinear noise processes using a novel filtered-s LMS algorithm. IEEE Trans Speech Audio Process 12(3):313–322

  15. Dudek G (2016) Extreme learning machine as a function approximator: initialization of input weights and biases. Adv Intell Syst Comput 403:59–69

  16. Ertuğrul ÖF (2016) Forecasting electricity load by a novel recurrent extreme learning machines approach. Int J Electr Power Energy Syst 78:429

  17. Ertuğrul ÖF, Kaya Y (2016) Smart city planning by estimating energy efficiency of buildings by extreme learning machine. In: 4th international Istanbul smart grid congress and fair, ICSG 2016

  18. Ertuğrul ÖF, Tağluk ME, Kaya Y, Tekin R (2013) EMG signal classification by extreme learning machine. In: 21st signal processing and communications applications conference (SIU 2013)

  19. Li B, Li Y, Rong X (2013) The extreme learning machine learning algorithm with tunable activation function. Neural Comput Appl 22(3–4):531–539

  20. Rychetsky M, Ortmann M, Glesner S (1998) Pruning and regularization techniques for feed forward nets applied on a real world data base. In: International symposium on neural computation, pp 603–609

  21. Huang G-B, Chen L, Siew C-K (2006) Universal approximation using incremental constructive feedforward networks with random hidden nodes. Trans Neural Netw 17(4):879–892

  22. Huang G-B, Chen L (2008) Enhanced random search based incremental extreme learning machine. Neurocomputing 71(16–18):3460–3468

  23. Kohonen T (1990) The self-organizing map. Proc IEEE 78(9):1464–1480

  24. Duin RPW, Juszczak P, Paclik P, Pekalska E, de Ridder D (2004) PRTools 4.0, a Matlab toolbox for pattern recognition. Delft University of Technology, The Netherlands

  25. Lichman M (2013) UCI machine learning repository. http://archive.ics.uci.edu/ml. University of California, School of Information and Computer Sciences, Irvine, CA

  26. http://mldata.org/repository/data/viewslug/banana-ida/

  27. Huang G, Zhu Q (2006) Extreme learning machine: a new learning scheme of feedforward neural networks. Neurocomputing 70:489–501

  28. http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html

  29. Ertuğrul ÖF, Altun Ş (2016) Developing correlations by extreme learning machine for calculating higher heating values of waste frying oils from their physical properties. Neural Comput Appl 28:3145

  30. Zhu Q-Y, Qin AK, Suganthan PN, Huang G-B (2005) Evolutionary extreme learning machine. Pattern Recogn 38(10):1759–1763

  31. Huang G-B, Wang DH, Lan Y (2011) Extreme learning machines: a survey. Int J Mach Learn Cybern 2(2):107–122

  32. Feng G, Huang G-B, Lin Q, Gay R (2009) Error minimized extreme learning machine with growth of hidden nodes and incremental learning. IEEE Trans Neural Netw 20(8):1352–1357

  33. Cantú-Paz E (2003) Pruning neural networks with distribution estimation algorithms. Lect Notes Comput Sci 2723:790–800

  34. Huang G-B, Chen L (2007) Convex incremental extreme learning machine. Neurocomputing 70(16–18):3056–3062

  35. Rong H-J, Ong Y-S, Tan A-H, Zhu Z (2008) A fast pruned-extreme learning machine for classification problem. Neurocomputing 72(1–3):359–366

  36. Alpaydın E (2010) Introduction to machine learning, 2nd edn. The MIT Press, Cambridge, MA, London, England, pp 39, 79

  37. Rätsch G, Onoda T, Müller KR (1998) An improvement of AdaBoost to avoid overfitting. In: Proceedings of the international conference on neural information processing, Kitakyushu, pp 506–509

  38. Raymer ML, Doom TE, Kuhn LA, Punch WF (2003) Knowledge discovery in medical and biological datasets using a hybrid Bayes classifier/evolutionary algorithm. IEEE Trans Syst Man Cybern 33:802–814

  39. Ertuğrul ÖF, Tağluk ME (2016) A novel machine learning method based on generalized behavioral learning theory. Neural Comput Appl 28:3921

  40. Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30

  41. Tang J, Deng C, Huang G-B (2015) Extreme learning machine for multilayer perceptron. IEEE Trans Neural Netw Learn Syst 27:1–13

  42. Liang N-Y, Huang G-B, Saratchandran P, Sundararajan N (2006) A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans Neural Netw 17(6):1411–1423

  43. Cao J, Lin Z, Huang GB (2012) Self-adaptive evolutionary extreme learning machine. Neural Process Lett 36(3):285–305

  44. Wang Y, Yuan Y, Yang X (2012) Bidirectional extreme learning machine for regression problem and its learning effectiveness. IEEE Trans Neural Netw Learn Syst 23(9):1498–1505

  45. Alamo T, Tempo R, Luque A, Ramirez DR (2015) Randomized methods for design of uncertain systems: sample complexity and sequential algorithms. Automatica 52:160–172

Author information

Corresponding author

Correspondence to Ömer Faruk Ertuğrul.

Appendix

See Tables 8, 9, 10, 11, 12, 13, 14 and 15.

Table 8 Performances obtained by randomized ANNs on the classification datasets
Table 9 Performances obtained by the stochastic versions of randomized ANNs on the classification datasets
Table 10 Performances obtained by the pruned stochastic versions of randomized ANNs on the classification datasets
Table 11 Performances obtained by randomized ANNs on the regression datasets
Table 12 Performances obtained by the stochastic versions of randomized ANNs on the regression datasets
Table 13 Performances obtained by the pruned stochastic versions of randomized ANNs on the regression datasets
Table 14 Time usages of randomized ANNs and their stochastic and pruned stochastic versions on the classification datasets
Table 15 Time usages of randomized ANNs and their stochastic and pruned stochastic versions on the regression datasets

About this article

Cite this article

Ertuğrul, Ö.F. Two Novel Versions of Randomized Feed Forward Artificial Neural Networks: Stochastic and Pruned Stochastic. Neural Process Lett 48, 481–516 (2018). https://doi.org/10.1007/s11063-017-9752-x
