Input selection with partial retraining

  • Piërre van de Laar
  • Stan Gielen
  • Tom Heskes
Part III: Learning: Theory and Algorithms
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1327)

Abstract

In this article, we describe how input selection can be performed with partial retraining. Detecting and removing irrelevant input variables saves resources, tends to improve generalization, and yields an architecture that is easier to interpret. In our simulations, partial retraining correctly separated the relevant input variables from the irrelevant ones on both a regression and a classification problem.
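The procedure itself is developed in the full text; as a rough illustration of the general idea, the sketch below scores each input of a small regression network by clamping that input to its mean and then retraining only the input-to-hidden weights while the rest of the network stays frozen. This is a minimal numpy sketch under assumptions of ours, not the authors' exact algorithm: the toy data, the network size, the helper names (mlp_forward, train, mse), and the choice of which weights to retrain are all illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_forward(X, W1, b1, W2, b2):
        # One-hidden-layer network with tanh hidden units.
        H = np.tanh(X @ W1 + b1)
        return H @ W2 + b2, H

    def mse(y, t):
        return float(np.mean((y - t) ** 2))

    def train(X, t, W1, b1, W2, b2, lr=0.05, epochs=500, first_layer_only=False):
        # Plain batch gradient descent on the mean squared error. With
        # first_layer_only=True the hidden-to-output weights stay frozen,
        # which is the "partial" part of the retraining in this sketch.
        for _ in range(epochs):
            y, H = mlp_forward(X, W1, b1, W2, b2)
            err = 2.0 * (y - t) / len(X)          # dE/dy for the MSE
            dH = err @ W2.T * (1.0 - H ** 2)      # back through tanh
            W1 -= lr * (X.T @ dH)
            b1 -= lr * dH.sum(axis=0)
            if not first_layer_only:
                W2 -= lr * (H.T @ err)
                b2 -= lr * err.sum(axis=0)
        return W1, b1, W2, b2

    # Toy regression problem: the target depends on inputs 0 and 1 only;
    # input 2 is pure noise and should come out as irrelevant.
    X = rng.normal(size=(200, 3))
    t = (np.sin(X[:, 0]) + 0.5 * X[:, 1])[:, None]

    n_hidden = 8
    W1 = rng.normal(scale=0.5, size=(3, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
    W1, b1, W2, b2 = train(X, t, W1, b1, W2, b2)
    baseline = mse(mlp_forward(X, W1, b1, W2, b2)[0], t)

    # Score each input: clamp it to its mean ("remove" it), partially
    # retrain, and take the remaining error as a relevance measure.
    for i in range(X.shape[1]):
        Xr = X.copy()
        Xr[:, i] = X[:, i].mean()
        W1r, b1r, _, _ = train(Xr, t, W1.copy(), b1.copy(), W2, b2,
                               epochs=200, first_layer_only=True)
        loss = mse(mlp_forward(Xr, W1r, b1r, W2, b2)[0], t)
        print(f"input {i}: error {loss:.4f} (baseline {baseline:.4f})")

On this toy problem, clamping input 0 or 1 should leave a large residual error even after partial retraining, whereas clamping the pure-noise input 2 should barely move the error above the baseline, so a simple threshold on the scores separates relevant from irrelevant inputs. Freezing the later layers is what keeps the test cheap: only a small part of the network is refit per candidate input, rather than a whole new network per input subset.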

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Piërre van de Laar (1)
  • Stan Gielen (1)
  • Tom Heskes (1)

  1. RWCP Novel Functions SNN Laboratory, Dept. of Medical Physics and Biophysics, University of Nijmegen, The Netherlands
