
ICANN 98, pp. 881–886

Architecture Optimization in Feedforward Connectionist Models

  • M. Yacoub
  • Y. Bennani
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

Given a set of training examples, determining the number of free parameters is a fundamental problem in neural network modeling, because the number of such parameters influences the quality of the solution obtained. This paper addresses the problem of adapting the effective network complexity to the information contained in the training data set and to the difficulty of the task. The method we propose consists of choosing an oversized network architecture, training it until it is assumed to be close to a minimum of the training error, and then selecting the most important input variables and pruning irrelevant hidden neurons. The method is an extension of our previous one for input variable selection; it is simple, cheap, and effective. We demonstrate its effect experimentally on one classification problem and one regression problem.
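The abstract only sketches the procedure (train an oversized network, rank input variables and hidden units by importance, prune the irrelevant ones). The following minimal Python sketch illustrates the general idea with a weight-magnitude relevance score; the scoring rule, threshold, and array names are illustrative assumptions, not the authors' HVS criterion or their exact pruning method.

```python
# Minimal sketch (not the authors' exact criterion): rank the hidden units
# of a trained single-hidden-layer network by a weight-magnitude relevance
# score, prune the least relevant ones, then rank the input variables.
import numpy as np

rng = np.random.default_rng(0)

# Assume a trained network with weights W1 (inputs -> hidden, shape d x h)
# and W2 (hidden -> outputs, shape h x c); random values stand in here.
d, h, c = 8, 20, 2
W1 = rng.normal(size=(d, h))
W2 = rng.normal(size=(h, c))

# Relevance of hidden unit j: total absolute incoming weight times total
# absolute outgoing weight (an assumed proxy, chosen only to make the
# pruning loop concrete).
hidden_relevance = np.abs(W1).sum(axis=0) * np.abs(W2).sum(axis=1)

# Keep hidden units whose relevance exceeds a fraction of the maximum;
# in practice one would retrain after pruning and validate the threshold.
keep = hidden_relevance > 0.25 * hidden_relevance.max()
W1_pruned, W2_pruned = W1[:, keep], W2[keep, :]
print(f"kept {keep.sum()} of {h} hidden units")

# Input-variable relevance (again an illustrative proxy, not HVS itself):
# contribution of each input routed through the surviving hidden units.
input_relevance = np.abs(W1_pruned) @ np.abs(W2_pruned).sum(axis=1)
print("inputs ranked by relevance:", np.argsort(input_relevance)[::-1])
```

After pruning, the reduced network would be retrained and the relevance ranking recomputed, repeating until performance on a validation set starts to degrade.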

Keywords

Hidden neuron · Hidden unit · Input unit · Discrimination problem · Pruning method



Copyright information

© Springer-Verlag London 1998

Authors and Affiliations

  • M. Yacoub
  • Y. Bennani

  1. LIPN-CNRS, Institut Galilée, Université Paris 13, Villetaneuse, France
