Feedforward Neural Networks

Chapter in:
Computational Mechanics with Neural Networks

Abstract

The feedforward neural network is the most fundamental type of neural network, both from a historical viewpoint and in terms of its wide applicability. This chapter discusses several aspects of this type of neural network in detail. Section 2.1 describes its fundamental structure and algorithm, Sect. 2.2 various types of layers, Sect. 2.3 some techniques for regularization, Sect. 2.4 acceleration techniques for training, Sect. 2.5 methods for weight initialization, and finally Sect. 2.6 the model averaging technique and Dropout.
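The topics the abstract lists (layered structure, weight initialization, Dropout) can be illustrated with a minimal sketch of a two-layer feedforward network in NumPy. This is not the book's own code; the layer sizes, helper names, and hyperparameters (e.g. `p=0.5`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_init(n_in, n_out):
    # Glorot/Xavier uniform initialization (cf. Sect. 2.5): weights drawn
    # from U(-limit, limit) with limit = sqrt(6 / (n_in + n_out)).
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def relu(x):
    # Elementwise rectified linear activation.
    return np.maximum(0.0, x)

def dropout(x, p=0.5, train=True):
    # Inverted dropout (cf. Sect. 2.6): during training, randomly zero
    # units with probability p and rescale so the expected activation
    # is unchanged; at inference time, pass activations through as-is.
    if not train:
        return x
    mask = (rng.random(x.shape) > p).astype(x.dtype)
    return x * mask / (1.0 - p)

# Two-layer feedforward network: input(4) -> hidden(8, ReLU) -> output(2).
W1, b1 = glorot_init(4, 8), np.zeros(8)
W2, b2 = glorot_init(8, 2), np.zeros(2)

def forward(x, train=False):
    # Forward pass: affine map, nonlinearity, optional dropout, affine map.
    h = relu(x @ W1 + b1)
    h = dropout(h, p=0.5, train=train)
    return h @ W2 + b2

x = rng.standard_normal((3, 4))   # batch of 3 input vectors
y = forward(x)
print(y.shape)                    # (3, 2)
```

Training such a network with backpropagation and adaptive optimizers (AdaGrad, Adam) is the subject of Sects. 2.1 and 2.4.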



Author information

Correspondence to Genki Yagawa.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Yagawa, G., Oishi, A. (2021). Feedforward Neural Networks. In: Computational Mechanics with Neural Networks. Lecture Notes on Numerical Methods in Engineering and Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-66111-3_2

  • DOI: https://doi.org/10.1007/978-3-030-66111-3_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66110-6

  • Online ISBN: 978-3-030-66111-3

  • eBook Packages: Engineering (R0)
