
Selective Learning for Multilayer Feedforward Neural Networks

  • Conference paper
Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence (IWANN 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2084)


Abstract

Selective learning is an active learning strategy in which the neural network selects the most informative patterns during training. This paper investigates a selective learning strategy where the informativeness of a pattern is measured as the sensitivity of the network output to perturbations in that pattern. The sensitivity approach to selective learning is then compared with an error selection approach, where pattern informativeness is defined as the approximation error.
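As a rough illustration of the idea described above, the sketch below scores each training pattern by the norm of the Jacobian of the network output with respect to that pattern's inputs, then keeps the highest-scoring fraction for the next epoch. This is an assumption-laden sketch, not the paper's algorithm: the network shape, function names, selection fraction, and random weights are all illustrative.

```python
import numpy as np

# Illustrative sketch only: a sensitivity-based pattern selector for a
# one-hidden-layer sigmoid MLP. The 50% selection fraction and the
# random weights are assumptions, not the paper's actual parameters.

rng = np.random.default_rng(0)

def output_sensitivity(X, W1, b1, W2, b2):
    """Per-pattern norm of the Jacobian d(output)/d(input): how strongly
    the network output reacts to small perturbations of that pattern."""
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # hidden activations (N, J)
    o = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # outputs (N, K)
    do = o * (1.0 - o)                          # sigmoid' at output layer
    dh = h * (1.0 - h)                          # sigmoid' at hidden layer
    sens = np.empty(len(X))
    for n in range(len(X)):
        # chain rule: J = diag(do) W2^T diag(dh) W1^T, shape (K, I)
        J = (do[n][:, None] * W2.T) @ (dh[n][:, None] * W1.T)
        sens[n] = np.linalg.norm(J)
    return sens

def select_most_informative(X, y, params, frac=0.5):
    """Keep the fraction of patterns with the highest output sensitivity."""
    sens = output_sensitivity(X, *params)
    k = max(1, int(frac * len(X)))
    idx = np.argsort(sens)[::-1][:k]            # most sensitive first
    return X[idx], y[idx]

# Toy data: 20 patterns, 3 inputs, 5 hidden units, 1 output
X = rng.normal(size=(20, 3))
y = rng.integers(0, 2, size=(20, 1)).astype(float)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)

Xs, ys = select_most_informative(X, y, (W1, b1, W2, b2), frac=0.5)
print(Xs.shape)  # (10, 3): half the patterns selected for this epoch
```

By contrast, the error selection approach the paper compares against would rank patterns by their current approximation error (e.g. per-pattern squared error) instead of the Jacobian norm; only the scoring function changes.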




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Engelbrecht, A.P. (2001). Selective Learning for Multilayer Feedforward Neural Networks. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_45


  • DOI: https://doi.org/10.1007/3-540-45720-8_45


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42235-8

  • Online ISBN: 978-3-540-45720-6
