
An exploration of the uncertainty relation satisfied by BP network learning ability and generalization ability

  • Published in: Science in China Series F: Information Sciences

Abstract

This paper analyses the intrinsic relationship between the learning ability and the generalization ability of a BP network, together with other influencing factors, when overfitting occurs, and introduces the multiple correlation coefficient to describe the complexity of the samples. Following the calculation uncertainty principle and the minimum principle of neural network structural design, it draws an analogy with the general uncertainty relation in the information transfer process and establishes an uncertainty relation between the training relative error on the training sample set, which reflects the network's learning ability, and the test relative error on the test sample set, which represents the network's generalization ability. Numerical overfitting experiments with BP networks on functions of different types show that the overfitting parameter q in this relation generally lies in the range 7 × 10⁻³ to 7 × 10⁻². From the uncertainty relation, a formula is then derived for the number of hidden nodes of a network with good generalization ability, under the conditions that the multiple correlation coefficient describes the sample complexity and the given approximation-error requirement is satisfied; the rationality of this formula is verified. The paper also identifies, for a BP network trained on a given sample set, the best point at which to stop training so as to improve generalization ability.
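The multiple correlation coefficient used above to quantify sample complexity is a standard statistic: the correlation between the target and its best linear fit on all input variables. The following is a minimal sketch of its computation (my own construction, not the paper's code); the data set and its dimensions are purely illustrative assumptions:

```python
# Sketch: multiple correlation coefficient R of a target against several
# input variables, from an ordinary least-squares linear fit.
# The data below are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))                        # hypothetical inputs
y = 2 * X[:, 0] - X[:, 1] + 0.3 * rng.standard_normal(100)  # hypothetical target

A = np.column_stack([np.ones(len(X)), X])                # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)             # least-squares fit
resid = y - A @ coef
R = np.sqrt(1 - resid.var() / y.var())                   # multiple correlation coefficient
print(round(R, 3))
```

A value of R near 1 indicates the sample set is well described by a linear relation (low complexity); smaller R indicates a more complex sample set.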
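The stopping criterion discussed above rests on watching the two relative errors move in opposite directions as overfitting sets in. The sketch below illustrates that idea on a toy one-hidden-layer BP network trained by gradient descent; the network size, learning rate, target function, and data split are illustrative assumptions, not the paper's experimental setup:

```python
# Sketch: track the training relative error (learning ability) and the test
# relative error (generalization ability) of a small BP network, and record
# the epoch at which the test relative error is lowest -- the early-stopping
# point. All hyperparameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.05 * rng.standard_normal(X.shape)  # noisy target function
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

H, lr = 10, 0.1                                   # hidden nodes, learning rate (ad hoc)
W1 = rng.standard_normal((1, H)) * 0.5; b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5; b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                      # hidden layer, tanh activation
    return h, h @ W2 + b2                         # linear output layer

def rel_error(X, y):
    _, out = forward(X)
    return np.linalg.norm(out - y) / np.linalg.norm(y)

best_test, best_epoch = np.inf, 0
for epoch in range(3000):
    h, out = forward(X_tr)
    err = out - y_tr                              # backpropagate squared error
    gW2 = h.T @ err / len(X_tr); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)                # tanh derivative
    gW1 = X_tr.T @ dh / len(X_tr); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    e_te = rel_error(X_te, y_te)
    if e_te < best_test:                          # test error still improving
        best_test, best_epoch = e_te, epoch
print(best_epoch, round(best_test, 3), round(rel_error(X_tr, y_tr), 3))
```

In practice one would save the weights at `best_epoch` and stop once the test relative error has not improved for some patience window, rather than training to a fixed epoch count.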




Author information


Correspondence to Peng Lihong.


Cite this article

Li, Z., Peng, L. An exploration of the uncertainty relation satisfied by BP network learning ability and generalization ability. Sci China Ser F 47, 137–150 (2004). https://doi.org/10.1360/02yf0331
