Abstract
Selective learning is an active learning strategy in which the neural network selects the most informative patterns during training. This paper investigates a selective learning strategy in which the informativeness of a pattern is measured as the sensitivity of the network output to perturbations in that pattern. This sensitivity approach to selective learning is then compared with an error selection approach, in which pattern informativeness is defined as the approximation error.
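The paper itself defines the precise sensitivity measure; the sketch below only illustrates the general idea under stated assumptions: a single-hidden-layer network, a finite-difference approximation of the output Jacobian norm as the informativeness score, and training each epoch on only the most sensitive fraction of patterns. All names (forward, pattern_sensitivity, train_selective) and the architecture are illustrative assumptions, not taken from the paper.

import numpy as np

def forward(W1, W2, X):
    """Forward pass: sigmoid hidden layer, linear output."""
    H = 1.0 / (1.0 + np.exp(-X @ W1))           # hidden activations
    return H @ W2, H                             # outputs, hidden layer

def pattern_sensitivity(W1, W2, X, eps=1e-4):
    """Per pattern, approximate the norm of d(output)/d(input)
    by central finite differences over each input dimension."""
    n, d = X.shape
    sens = np.zeros(n)
    for j in range(d):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        dy = (forward(W1, W2, Xp)[0] - forward(W1, W2, Xm)[0]) / (2 * eps)
        sens += np.sum(dy ** 2, axis=1)
    return np.sqrt(sens)

def train_selective(X, Y, hidden=10, epochs=200, frac=0.5, lr=0.05, seed=0):
    """Each epoch, train only on the fraction of patterns with the
    highest output sensitivity (the 'most informative' subset)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, Y.shape[1]))
    k = max(1, int(frac * len(X)))
    for _ in range(epochs):
        idx = np.argsort(-pattern_sensitivity(W1, W2, X))[:k]
        Xs, Ys = X[idx], Y[idx]
        out, H = forward(W1, W2, Xs)
        err = out - Ys                           # approximation error
        W2 -= lr * H.T @ err / k                 # gradient step, output layer
        W1 -= lr * Xs.T @ ((err @ W2.T) * H * (1 - H)) / k
    return W1, W2

The error selection approach compared in the paper would, in this sketch, amount to replacing the pattern_sensitivity score with the per-pattern approximation error, selecting the patterns the network currently fits worst.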
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Engelbrecht, A.P. (2001). Selective Learning for Multilayer Feedforward Neural Networks. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_45
DOI: https://doi.org/10.1007/3-540-45720-8_45
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42235-8
Online ISBN: 978-3-540-45720-6