Correlated activity pruning (CAPing)

  • Poster Abstracts
  • Conference paper
Computational Intelligence Theory and Applications (Fuzzy Days 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1226)

Abstract

The generalisation ability of an Artificial Neural Network (ANN) is dependent on its architecture. An ANN with the correct architecture will learn the task presented by the training set but also acquire rules that are general enough to correctly predict outputs for unseen test set examples. Obtaining this optimum network architecture often requires a laborious ‘trial and error’ approach. One approach that helps to achieve optimum network architecture in a more intelligent way is pruning. Such methods benefit from the learning advantages of larger networks while reducing the amount of overtraining or memorisation within these networks. Sietsma and Dow (1988) describe an interactive pruning method that uses several heuristics to identify units that fail to contribute to the solution and can therefore be removed with no degradation in performance. This approach removes units with constant outputs over all the training patterns, as these are not participating in the solution. Also, units with identical or opposite activations for all patterns can be combined. The approach to merging hidden units detailed in Sietsma and Dow's paper is useful; however, it only covers perfectly correlated, binary activations.
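As a rough illustration (not the authors' implementation), the two heuristics described above can be expressed over a matrix of hidden-unit activations recorded across the training set. The function name, correlation threshold, and NumPy usage below are assumptions made for this sketch; relaxing the perfect-correlation test to a threshold below 1 is what lets the idea extend beyond the binary, perfectly correlated case that the abstract identifies as the limitation of Sietsma and Dow's method.

```python
import numpy as np

def cap_candidates(activations, r_thresh=0.95, var_tol=1e-8):
    """Flag hidden units for pruning or merging from their activations
    over the training set (rows = patterns, columns = units).

    Returns (constant, pairs): `constant` lists units whose output never
    varies across patterns (removable outright, as they carry no
    pattern-dependent information); `pairs` lists (i, j, r) for unit
    pairs whose activities are strongly correlated (r near +1, i.e.
    near-identical) or anti-correlated (r near -1, i.e. near-opposite),
    which are candidates for merging into a single unit.
    """
    n_units = activations.shape[1]
    variances = activations.var(axis=0)
    constant = [j for j in range(n_units) if variances[j] < var_tol]
    varying = [j for j in range(n_units) if variances[j] >= var_tol]
    pairs = []
    if len(varying) >= 2:
        # Pairwise Pearson correlations between the varying units.
        r = np.corrcoef(activations[:, varying], rowvar=False)
        for a in range(len(varying)):
            for b in range(a + 1, len(varying)):
                if abs(r[a, b]) >= r_thresh:
                    pairs.append((varying[a], varying[b], float(r[a, b])))
    return constant, pairs

# Example: unit 0 is constant, units 2 and 3 are identical.
acts = np.array([[0.2, 1.0, 0.5, 0.5],
                 [0.2, 0.1, 0.9, 0.9],
                 [0.2, 0.7, 0.3, 0.3]])
constant, pairs = cap_candidates(acts)
# constant == [0]; pairs flags units 2 and 3 with r close to +1
```

With a threshold of exactly 1 and binary activations this reduces to the Sietsma and Dow heuristics; the correlated-activity generalisation is the motivation for CAPing.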

References

  • Balls GR, Palmer-Brown D & Sanders GE. 1996. Investigating microclimate influences on ozone injury in clover (Trifolium subterraneum) using artificial neural networks. New Phytologist 132, 271–280.

  • Roadknight CM, Palmer-Brown D & Sanders GE. 1995. Learning the equations of data. Proceedings of the 3rd Annual SNN Symposium on Neural Networks (eds. Kappen B & Gielen S). Springer-Verlag, 253–257.

  • Sietsma J & Dow RJF. 1988. Neural net pruning — why and how. Proc. IEEE Int. Conf. on Neural Networks, Vol. 1, pp. 325–333.

Editor information

Bernd Reusch

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Roadknight, C.M., Palmer-Brown, D., Mills, G.E. (1997). Correlated activity pruning (CAPing). In: Reusch, B. (eds) Computational Intelligence Theory and Applications. Fuzzy Days 1997. Lecture Notes in Computer Science, vol 1226. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-62868-1_176

  • DOI: https://doi.org/10.1007/3-540-62868-1_176

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-62868-2

  • Online ISBN: 978-3-540-69031-3

  • eBook Packages: Springer Book Archive
