Abstract
In this paper, we address the problem of active training data selection in the presence of noise. We formalize learning in neural networks as an inverse problem within a functional analytic framework and adopt the Averaged Projection criterion as the optimization criterion for learning. Within this framework, we approach training data selection with two objectives: improving generalization ability and reducing noise variance, so as to achieve better learning results. The final result exploits a priori correlation information on the noise characteristics and the original function ensemble to devise an efficient sampling scheme, which can be used in conjunction with the incremental learning schemes devised in our earlier work to achieve optimal generalization.
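To make the formulation concrete, the following is a minimal sketch of the sampling model assumed in this functional analytic setting; the symbols (Hilbert space $H$, sampling operator $A$, learning operator $X$, noise correlation matrix $Q$) are illustrative notation for the standard inverse-problem treatment, not definitions taken from the paper body:

\[
y = Af + n, \qquad \hat{f} = Xy,
\]

where the target function $f$ belongs to a Hilbert space $H$ of learnable functions, $A \colon H \to \mathbb{C}^{m}$ maps $f$ to its $m$ sampled training outputs, $n$ is additive zero-mean noise with correlation matrix $Q = E_{n}[nn^{*}]$, and $X$ is the learning operator reconstructing $\hat{f}$ from the noisy samples $y$. A projection-type learning criterion then chooses $X$ to minimize the reconstruction error averaged over the function ensemble and the noise,

\[
J(X) = E_{f,n} \, \bigl\| Xy - f \bigr\|_{H}^{2},
\]

and active training data selection corresponds to choosing the sample points, and hence $A$ and $Q$, so that the generalization error attainable under this criterion is minimized while the noise variance term $E_{n} \| Xn \|_{H}^{2}$ is simultaneously reduced.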
Cite this paper
Vijayakumar, S., Sugiyama, M., Ogawa, H. (1999). Training Data Selection for Optimal Generalization with Noise Variance Reduction in Neural Networks. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets WIRN VIETRI-98. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0811-5_14