
Fast learning network: a novel artificial neural network with a fast learning speed

Original Article
Neural Computing and Applications

Abstract

This paper proposes a novel artificial neural network called the fast learning network (FLN). In the FLN, the input weights and hidden-layer biases are randomly generated, while the output weights, both those connecting the hidden layer to the output layer and those connecting the input layer directly to the output layer, are determined analytically by the least squares method. To test its validity, the FLN is applied to nine regression problems. Experimental results show that, compared with the support vector machine, back-propagation, and the extreme learning machine, the FLN achieves very good generalization performance and stability with much more compact networks, at a very fast training speed and with a quick response of the trained network to new observations. To test its validity further, the FLN is also applied to model the thermal efficiency and NOx emissions of a 330 MW coal-fired boiler, where it achieves very good prediction precision and generalization ability at a high learning speed.
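In algorithmic terms, the training procedure the abstract describes reduces to one random draw and one linear solve. Below is a minimal NumPy sketch of such an FLN-style learner; the tanh activation, the uniform initialization range, the appended bias column, and all function names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def fln_train(X, T, n_hidden, seed=None):
    """Fit an FLN-style network: random hidden layer, least-squares output.

    Sketch only: tanh, uniform init in [-1, 1], and the bias column
    are illustrative choices, not necessarily the paper's.
    """
    rng = np.random.default_rng(seed)
    n_inputs = X.shape[1]
    # Input weights and hidden biases are drawn at random and left fixed.
    W_in = rng.uniform(-1.0, 1.0, size=(n_inputs, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W_in + b)  # hidden-layer output matrix
    # The output layer sees both the hidden activations and the raw
    # inputs (the direct input-to-output connections), plus a bias column.
    G = np.hstack([H, X, np.ones((X.shape[0], 1))])
    # All output weights are obtained in one shot by least squares.
    W_out, *_ = np.linalg.lstsq(G, T, rcond=None)
    return W_in, b, W_out

def fln_predict(X, W_in, b, W_out):
    H = np.tanh(X @ W_in + b)
    return np.hstack([H, X, np.ones((X.shape[0], 1))]) @ W_out

# Usage: approximate a noisy sinc function with 20 hidden nodes.
X = np.linspace(-10, 10, 200).reshape(-1, 1)
T = np.sinc(X) + 0.05 * np.random.default_rng(0).normal(size=X.shape)
params = fln_train(X, T, n_hidden=20, seed=1)
print(np.mean((fln_predict(X, *params) - T) ** 2))  # training MSE
```

Because the only fitted parameters come from a single least-squares solve, training cost is dominated by one pseudo-inverse rather than iterative weight updates, which is what gives this family of methods its speed.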



Acknowledgments

This project is supported by the National Natural Science Foundation of China (Grant No. 60774028) and the Natural Science Foundation of Hebei Province, China (Grant No. F2010001318).

Author information


Corresponding author

Correspondence to Peifeng Niu.


About this article

Cite this article

Li, G., Niu, P., Duan, X. et al. Fast learning network: a novel artificial neural network with a fast learning speed. Neural Comput & Applic 24, 1683–1695 (2014). https://doi.org/10.1007/s00521-013-1398-7
