Learning and Using Specific Instances

  • Steven E. Hampson


Various tradeoffs between time and space efficiency are possible in connectionistic models. One obvious effect is that while an ability to generalize effectively permits improved space efficiency, attempted generalization increases learning time when generalization is not appropriate. For example, a maximal generalization over observed positive instances includes, by definition, as many unobserved (possibly negative) instances as possible. This is a significant drawback when specific instance learning is known to be appropriate: in order to recognize a particular stimulus (e.g., a picture of a person) as having been seen before, the representation should be as specific as possible to avoid incorrectly responding to similar stimuli. A second problem with generalization is that, even when generalization is appropriate, an incremental learning system that stores only a single generalization hypothesis can make repeated mistakes on the same input patterns, a situation which need not occur with specific instance learning. Perceptron training is good at learning generalizations, but poor at learning specific instances.
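The perceptron behavior described above can be illustrated with a minimal sketch (a hypothetical example, not code from the chapter): training maintains a single weight-vector hypothesis that is revised on every mistake, so the learned hyperplane classifies all patterns, including unobserved ones, rather than memorizing the specific instances seen.

```python
# Minimal perceptron training sketch (illustrative; names and data are
# assumptions, not taken from the chapter).  A single weight-vector
# hypothesis is updated incrementally on each mistake, so the same input
# can be misclassified on several passes before the hypothesis settles.

def perceptron_train(samples, epochs=20):
    """samples: list of (inputs, label) pairs with label in {+1, -1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation > 0 else -1
            if predicted != y:          # revise the single hypothesis on error
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:               # consistent with the training set
            break
    return w, b

# AND-like concept: only (1, 1) is positive.  The learned hyperplane also
# assigns a class to every unseen pattern, i.e. it generalizes beyond the
# observed instances rather than matching them specifically.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = perceptron_train(data)
```

Because the final hypothesis is a single hyperplane, it cannot respond only to the exact instances it was trained on; that is precisely why, as the abstract notes, perceptron training is poorly suited to specific instance learning.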


Keywords: Input Pattern, Specific Instance, Positive Instance, Negative Instance, Connectionistic Problem



Copyright information

© Birkhäuser Boston 1990

Authors and Affiliations

  • Steven E. Hampson
    1. Department of Information & Computer Science, University of California, Irvine, USA
