Abstract
Like multi-layer perceptrons, radial basis function networks are feed-forward neural networks with a strictly layered structure. However, the number of layers is always three, that is, there is exactly one hidden layer. In addition, radial basis function networks differ from multi-layer perceptrons in their network input and activation functions, especially in the hidden layer. This hidden layer employs radial basis functions, which give this type of neural network its name. Through these functions, each neuron is assigned a kind of “catchment region” in which it mainly influences the output of the neural network.
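The architecture described above can be illustrated with a minimal sketch. All names and parameter choices below are illustrative assumptions, not taken from the chapter: Gaussian radial basis functions serve as hidden-layer activations, each with a center and a width, and the linear output weights are computed by least squares via the Moore-Penrose pseudoinverse.

```python
import numpy as np

def gaussian_rbf(x, center, sigma):
    """Gaussian radial basis function: the activation falls off with distance
    from the center, so the neuron's 'catchment region' is a neighborhood
    of its center whose size is controlled by sigma."""
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * sigma ** 2))

class RBFNetwork:
    """Illustrative three-layer RBF network: inputs, one hidden layer of
    Gaussian RBF neurons, and a linear output layer."""

    def __init__(self, centers, sigma):
        self.centers = np.asarray(centers, dtype=float)
        self.sigma = sigma
        self.weights = None  # output-layer weights, set by fit()

    def _hidden(self, X):
        # Hidden-layer activations: one Gaussian RBF per center.
        return np.array(
            [[gaussian_rbf(x, c, self.sigma) for c in self.centers]
             for x in np.asarray(X, dtype=float)]
        )

    def fit(self, X, y):
        # Solve for the output weights in the least-squares sense using
        # the Moore-Penrose pseudoinverse of the hidden-layer matrix.
        H = self._hidden(X)
        self.weights = np.linalg.pinv(H) @ np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.weights

# Usage: approximate XOR, with one RBF center placed on each training point.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]
net = RBFNetwork(centers=X, sigma=0.5).fit(X, y)
```

With one center per training point the hidden-layer matrix is square and invertible, so the pseudoinverse reproduces the targets exactly; in practice, fewer centers would typically be chosen, for instance by clustering the training inputs.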
Copyright information
© 2013 Springer-Verlag London
Cite this chapter
Kruse, R., Borgelt, C., Klawonn, F., Moewes, C., Steinbrecher, M., Held, P. (2013). Radial Basis Function Networks. In: Computational Intelligence. Texts in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-5013-8_6
Publisher Name: Springer, London
Print ISBN: 978-1-4471-5012-1
Online ISBN: 978-1-4471-5013-8