Abstract
Research on neural networks began long ago and has since grown into a broad, interdisciplinary field. Although neural networks have various definitions across disciplines, this book adopts a widely used one: “Artificial neural networks are massively parallel interconnected networks of simple (usually adaptive) elements and their hierarchical organizations which are intended to interact with the objects of the real world in the same way as biological nervous systems do” (Kohonen 1988). In the context of machine learning, “neural networks” refers to neural network learning, that is, the intersection of machine learning research and neural networks research.
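To make the “simple (usually adaptive) elements” in this definition concrete, here is a minimal sketch, not taken from the chapter itself, of a single threshold unit that adapts its connection weights from labeled examples with a perceptron-style update rule. The class name SimpleNeuron, the learning rate, and the AND-function demo are illustrative assumptions, not the book's notation.

```python
import numpy as np

# A minimal sketch of one "simple adaptive element": a unit that computes a
# weighted sum of its inputs, fires if the sum exceeds a threshold, and
# adjusts its weights from examples (perceptron-style learning rule).
class SimpleNeuron:
    def __init__(self, n_inputs, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=n_inputs)  # connection weights
        self.b = 0.0                                   # bias (negative threshold)
        self.lr = lr                                   # learning rate

    def predict(self, x):
        # Step activation: output 1 if the weighted sum exceeds the threshold.
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def update(self, x, y):
        # Adapt the weights in proportion to the prediction error on one example.
        err = y - self.predict(x)
        self.w += self.lr * err * np.asarray(x, dtype=float)
        self.b += self.lr * err

# Usage: learn the logical AND function from its truth table.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 0, 0, 1]
neuron = SimpleNeuron(n_inputs=2)
for _ in range(20):                  # a few passes over the data suffice here
    for x, y in zip(X, Y):
        neuron.update(x, y)
print([neuron.predict(x) for x in X])  # expected: [0, 0, 0, 1]
```

Networks built from many such interconnected units, trained jointly rather than one at a time, are the subject of the chapter.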
References
Aarts E, Korst J (1989) Simulated annealing and Boltzmann machines: a stochastic approach to combinatorial optimization and neural computing. Wiley, New York
Ackley DH, Hinton GE, Sejnowski TJ (1985) A learning algorithm for Boltzmann machines. Cogn Sci 9(1):147–169
Barron AR (1991) Complexity regularization with application to artificial neural networks. In: Roussas G (ed) Nonparametric functional estimation and related topics; NATO ASI series, vol 335. Kluwer, Amsterdam, pp 561–576
Bishop CM (1995) Neural networks for pattern recognition. Oxford University Press, New York
Broomhead DS, Lowe D (1988) Multivariate functional interpolation and adaptive networks. Complex Syst 2(3):321–355
Carpenter GA, Grossberg S (1987) A massively parallel architecture for a self-organizing neural pattern recognition machine. Comput Vis Graph Image Process 37(1):54–115
Carpenter GA, Grossberg S (eds) (1991) Pattern recognition by self-organizing neural networks. MIT Press, Cambridge
Chauvin Y, Rumelhart DE (eds) (1995) Backpropagation: theory, architecture, and applications. Lawrence Erlbaum Associates, Hillsdale
Elman JL (1990) Finding structure in time. Cogn Sci 14(2):179–211
Fahlman SE, Lebiere C (1990) The cascade-correlation learning architecture. Technical report CMU-CS-90-100, School of Computer Sciences, Carnegie Mellon University, Pittsburgh, PA
Gerstner W, Kistler W (2002) Spiking neuron models: single neurons, populations, plasticity. Cambridge University Press, Cambridge
Girosi F, Jones M, Poggio T (1995) Regularization theory and neural networks architectures. Neural Comput 7(2):219–269
Goldberg DE (1989) Genetic algorithms in search, optimization and machine learning. Addison-Wesley, Boston
Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge
Gori M, Tesi A (1992) On the problem of local minima in backpropagation. IEEE Trans Pattern Anal Mach Intell 14(1):76–86
Haykin S (1998) Neural networks: a comprehensive foundation, 2nd edn. Prentice-Hall, Upper Saddle River
Hinton G (2010) A practical guide to training restricted Boltzmann machines. Technical report UTML TR 2010-003, Department of Computer Science, University of Toronto
Hinton G, Osindero S, Teh Y-W (2006) A fast learning algorithm for deep belief nets. Neural Comput 18(7):1527–1554
Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward networks are universal approximators. Neural Netw 2(5):359–366
Kohonen T (1982) Self-organized formation of topologically correct feature maps. Biol Cybern 43(1):59–69
Kohonen T (1988) An introduction to neural computing. Neural Netw 1(1):3–16
Kohonen T (2001) Self-organizing maps, 3rd edn. Springer, Berlin
LeCun Y, Bengio Y (1995) Convolutional networks for images, speech, and time-series. In: Arbib MA (ed) The handbook of brain theory and neural networks. MIT Press, Cambridge
LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
MacKay DJC (1992) A practical Bayesian framework for backpropagation networks. Neural Comput 4(3):448–472
McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 5(4):115–133
Minsky M, Papert S (1969) Perceptrons. MIT Press, Cambridge
Orr GB, Müller K-R (eds) (1998) Neural networks: tricks of the trade. Springer, London
Park J, Sandberg IW (1991) Universal approximation using radial-basis-function networks. Neural Comput 3(2):246–257
Pineda FJ (1987) Generalization of back-propagation to recurrent neural networks. Phys Rev Lett 59(19):2229–2232
Reed RD, Marks RJ (1998) Neural smithing: supervised learning in feedforward artificial neural networks. MIT Press, Cambridge
Rumelhart DE, Hinton GE, Williams RJ (1986a) Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL (eds) Parallel distributed processing: explorations in the microstructure of cognition, vol 1. MIT Press, Cambridge, pp 318–362
Rumelhart DE, Hinton GE, Williams RJ (1986b) Learning representations by backpropagating errors. Nature 323:533–536
Schwenker F, Kestler HA, Palm G (2001) Three learning phases for radial-basis-function networks. Neural Netw 14(4–5):439–458
Tickle AB, Andrews R, Golea M, Diederich J (1998) The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks. IEEE Trans Neural Netw 9(6):1057–1067
Werbos P (1974) Beyond regression: new tools for prediction and analysis in the behavioral sciences. PhD thesis, Harvard University, Cambridge, MA
Yao X (1999) Evolving artificial neural networks. Proc IEEE 87(9):1423–1447
Zhou Z-H (2004) Rule extraction: using neural networks or for neural networks? J Comput Sci Technol 19(2):249–253
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Zhou, ZH. (2021). Neural Networks. In: Machine Learning. Springer, Singapore. https://doi.org/10.1007/978-981-15-1967-3_5
DOI: https://doi.org/10.1007/978-981-15-1967-3_5
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-1966-6
Online ISBN: 978-981-15-1967-3