
A Modification to the Nguyen–Widrow Weight Initialization Method

  • Conference paper
  • In: Intelligent Systems, Technologies and Applications

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 910)

Abstract

Weight initialization is an important factor in determining the training speed of feedforward neural networks. In this paper, a variable parameter \(\alpha\) is identified in the Nguyen–Widrow weight initialization method through statistical analysis. The value of \(\alpha\) is varied from 1 to 10 and tested on nine function approximation tasks, and the results for each value of \(\alpha\) are compared using a one-tailed t-test. An optimal value of \(\alpha\) is derived, and a new weight initialization technique is thereby proposed.
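For context, the sketch below illustrates how such a parameter could enter a Nguyen–Widrow-style initialization. The classic method draws small uniform input-to-hidden weights and rescales each hidden unit's weight vector to a norm of 0.7·H^(1/n), with H hidden units and n inputs; here that constant is exposed as a tunable α. This is a minimal illustration of the published Nguyen–Widrow scheme only, not the authors' modified method: the function name and the placement of α are assumptions made for this example.

import numpy as np

def nguyen_widrow_init(n_in, n_hidden, alpha=0.7, rng=None):
    """Nguyen-Widrow-style initialization for an input-to-hidden layer.

    The classic scheme uses a scale factor beta = 0.7 * n_hidden**(1/n_in);
    here the constant 0.7 is exposed as a tunable parameter `alpha`
    (a hypothetical placement of the paper's alpha, for illustration).
    """
    rng = np.random.default_rng() if rng is None else rng

    # Scale factor: alpha * H^(1/n), with H hidden units and n inputs.
    beta = alpha * n_hidden ** (1.0 / n_in)

    # Start from small uniform random weights in [-0.5, 0.5].
    W = rng.uniform(-0.5, 0.5, size=(n_hidden, n_in))

    # Rescale each hidden unit's weight vector to have norm beta.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W = beta * W / norms

    # Biases drawn uniformly from [-beta, beta].
    b = rng.uniform(-beta, beta, size=n_hidden)
    return W, b

# Example: 2 inputs, 10 hidden units, sweeping alpha over 1..10 as in the paper.
for alpha in range(1, 11):
    W, b = nguyen_widrow_init(n_in=2, n_hidden=10, alpha=float(alpha))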



Acknowledgements

This publication is an outcome of R&D work undertaken in a project under the Visvesvaraya PhD Scheme of the Ministry of Electronics & Information Technology, Government of India, implemented by Digital India Corporation.

Author information

Corresponding author: Apeksha Mittal


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Mittal, A., Singh, A.P., Chandra, P. (2020). A Modification to the Nguyen–Widrow Weight Initialization Method. In: Thampi, S., et al. Intelligent Systems, Technologies and Applications. Advances in Intelligent Systems and Computing, vol 910. Springer, Singapore. https://doi.org/10.1007/978-981-13-6095-4_11
