Abstract
This paper investigates the functional invariance of neural network learning methods. By functional invariance we mean the property of producing functionally equivalent minima as the size of the network grows, when the smoothing parameters are held fixed. We study three different principles on which functional invariance can be based, and try to delimit the conditions under which each of them acts. We find that, surprisingly, some of the most popular neural learning methods, such as weight decay and input noise addition, exhibit this interesting property.
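The two smoothing techniques named in the abstract, weight decay and input noise addition, can be illustrated with a minimal sketch. The code below is not the paper's method; it is a generic NumPy implementation of full-batch gradient descent on a one-hidden-layer network, where `weight_decay` adds an L2 penalty on the weights and `noise_std` perturbs the inputs with Gaussian noise at each step. All names and hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative data, not from the paper)
X = rng.uniform(-1.0, 1.0, size=(64, 1))
y = np.sin(3.0 * X)

def init(n_hidden):
    """Random parameters for a 1-n_hidden-1 tanh network."""
    return {
        "W1": rng.normal(0.0, 0.5, (1, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])
    return h, h @ p["W2"] + p["b2"]

def train(n_hidden, weight_decay=1e-3, noise_std=0.05, steps=3000, lr=0.1):
    """Gradient descent on squared error with the two smoothers."""
    p = init(n_hidden)
    n = len(X)
    for _ in range(steps):
        # Input noise addition: perturb inputs at every step
        Xn = X + rng.normal(0.0, noise_std, X.shape)
        h, out = forward(p, Xn)
        err = out - y
        # Backprop; weight decay adds the L2 gradient term on W1, W2
        gW2 = h.T @ err / n + weight_decay * p["W2"]
        gb2 = err.mean(axis=0)
        dh = (err @ p["W2"].T) * (1.0 - h ** 2)
        gW1 = Xn.T @ dh / n + weight_decay * p["W1"]
        gb1 = dh.mean(axis=0)
        p["W1"] -= lr * gW1; p["b1"] -= lr * gb1
        p["W2"] -= lr * gW2; p["b2"] -= lr * gb2
    return p

def mse(p):
    _, out = forward(p, X)
    return float(((out - y) ** 2).mean())
```

With the smoothing parameters (`weight_decay`, `noise_std`) held fixed, one can train networks of different hidden-layer sizes, e.g. `train(8)` and `train(32)`, and compare their fitted functions; functional invariance, in the paper's sense, would mean the two minima implement essentially the same input-output mapping.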
© 2001 Springer-Verlag Berlin Heidelberg
de Angulo, V.R., Torras, C. (2001). Neural Learning Invariant to Network Size Changes. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_6
Print ISBN: 978-3-540-42486-4
Online ISBN: 978-3-540-44668-2