Neural net connection estimates applied for feature selection & improved linear classifier design

  • Alvin J. Surkan
Neural Nets
Part of the Lecture Notes in Computer Science book series (LNCS, volume 313)


Estimates of connection strengths are obtained to derive the weights used in building a linear classifier of complex binary patterns. Bootstrap resampling permits small, large, and highly disparate-in-size training sets to be used with equal ease. Using ideas suggested by methods proposed for learning in neural net systems [Hopfield 1982; Cruz-Young et al. 1986], it is now possible to obtain connection values without prohibitive or arbitrarily terminated computations. The weight values so derived are identified as equivalent to those at the middle layer of three-layer neural net models. The model serving as the springboard for this study was developed by Kanerva [1986–87] and functions as a sparse distributed memory (SDM). In the various implementations of connection-based memories, both linear and nonlinear relationships within patterns play a role in the classification process. Experimental data used in this research were derived from psychological profiles obtained by coding responses elicited by question-like items. Such items are traditionally designed and chosen to be linear predictors of class membership or performance. The phase of the study reported here addresses the problems and successes of initial efforts toward a practical application of neural network computational concepts. The main focus is on the approaches found to be particularly effective in obtaining adequate estimates of effective linear functions for classifying binary patterns of 64 to 256 bits.
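The combination described in the abstract can be illustrated with a minimal sketch: bootstrap resampling draws equal-sized samples from each class (so highly disparate class sizes pose no problem), and a Hebbian-style connection estimate, here simply the difference of the class-mean bipolar patterns, serves as the linear classifier's weight vector. This is an assumption-laden illustration, not the paper's actual procedure; the function names, the bipolar (+1/−1) coding, and all parameter values are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_connection_weights(patterns_a, patterns_b,
                                 n_resamples=200, sample_size=50):
    """Hypothetical sketch: estimate linear-classifier weights as averaged
    connection strengths. Each bootstrap round draws equal-sized samples
    (with replacement) from both classes and accumulates the difference of
    the class-mean bipolar patterns, a simple Hebbian-style estimate."""
    d = patterns_a.shape[1]
    w = np.zeros(d)
    for _ in range(n_resamples):
        a = patterns_a[rng.integers(0, len(patterns_a), sample_size)]
        b = patterns_b[rng.integers(0, len(patterns_b), sample_size)]
        # bipolar coding (+1/-1), as in Hopfield-style connection estimates
        w += (2 * a - 1).mean(axis=0) - (2 * b - 1).mean(axis=0)
    return w / n_resamples

def classify(pattern, w, threshold=0.0):
    """Linear decision: class A if the weighted sum exceeds the threshold."""
    return (2 * pattern - 1) @ w > threshold

# Toy demo: 64-bit binary patterns, two classes with disparate training sizes.
bits = 64
proto_a = rng.integers(0, 2, bits)
proto_b = 1 - proto_a

def noisy(proto, n, flip=0.2):
    """Generate n corrupted copies of a prototype (each bit flipped w.p. flip)."""
    mask = rng.random((n, bits)) < flip
    return np.where(mask, 1 - proto, proto)

train_a = noisy(proto_a, 12)    # small class
train_b = noisy(proto_b, 300)   # much larger class
w = bootstrap_connection_weights(train_a, train_b)
print(classify(proto_a, w))     # prototype of class A
```

Because each bootstrap round is balanced, the 12-pattern class contributes to the weight estimate on equal footing with the 300-pattern class, which is one plausible reading of the abstract's claim that disparate-in-size training sets are handled "with equal ease".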


Keywords (machine-generated): Class Membership, Connection Strength, Memory Line, Sparse Memory


References


  1. Kanerva, P. [1987] Self-propagating search: A unified theory of memory (Report No. CSLI-84-7). Stanford, CA: Stanford University, Center for the Study of Language and Information.
  2. Kanerva, P. [1986] Parallel structures in human and computer memory. AIP Conference Proceedings 151, Neural Networks for Computing, Snowbird, Utah, 1986, pp. 247–258.
  3. Hopfield, J. J. [1982] Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 79, 2554–2558.
  4. Cruz-Young, C. A., Hanson, W. A., Tam, J. Y. [1986] Flow-of-activation processing: Parallel Associative Networks (PAN). AIP Conference Proceedings 151, Neural Networks for Computing, Snowbird, Utah, 1986, pp. 115–120.

Copyright information

© Springer-Verlag Berlin Heidelberg 1988

Authors and Affiliations

  • Alvin J. Surkan
    Department of Computer Science, University of Nebraska-Lincoln
