Abstract
The feedforward neural network is the most fundamental type of neural network, both historically and in terms of its wide applicability. This chapter discusses several aspects of this type of network in detail. Section 2.1 describes its fundamental structure and algorithm, Sect. 2.2 various types of layers, Sect. 2.3 some techniques for regularization, Sect. 2.4 acceleration techniques for training, Sect. 2.5 methods for weight initialization, and finally Sect. 2.6 the model averaging technique and Dropout.
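The fundamental structure mentioned above can be illustrated with a minimal forward pass through a two-layer feedforward network. This is a sketch only: the layer sizes, the ReLU hidden activation, and the softmax output are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear unit, a common hidden-layer activation.
    return np.maximum(0.0, x)

def softmax(x):
    # Numerically stable softmax: outputs are non-negative and sum to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Illustrative weights for a 4-input, 8-hidden-unit, 3-output network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def forward(x):
    # One hidden layer followed by a softmax output layer.
    h = relu(W1 @ x + b1)
    return softmax(W2 @ h + b2)
```

Calling `forward` on a 4-dimensional input vector yields a 3-dimensional vector of class probabilities; training such a network (e.g., by back-propagation) is the subject of the sections that follow.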
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Yagawa, G., Oishi, A. (2021). Feedforward Neural Networks. In: Computational Mechanics with Neural Networks. Lecture Notes on Numerical Methods in Engineering and Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-66111-3_2
Print ISBN: 978-3-030-66110-6
Online ISBN: 978-3-030-66111-3