Learning and Using Specific Instances
Various tradeoffs between time and space efficiency are possible in connectionistic models. One obvious effect is that while the ability to generalize effectively permits improved space efficiency, attempted generalization increases learning time when generalization is not appropriate. A maximal generalization over observed positive instances includes, by definition, as many unobserved (possibly negative) instances as possible, which is a significant drawback when specific-instance learning is known to be appropriate. For example, to recognize a particular stimulus (e.g., a picture of a person) as having been seen before, the representation should be as specific as possible to avoid incorrectly responding to similar stimuli. A second problem is that, even when generalization is appropriate, an incremental learning system that stores only a single generalization hypothesis can make repeated mistakes on the same input patterns, a situation that need not occur with specific-instance learning. Perceptron training is good at learning generalizations but poor at learning specific instances.
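The repeated-mistakes claim can be illustrated with a small sketch (illustrative code, not from the paper; the function names and the choice of XOR as a non-separable stand-in problem are assumptions): an incremental learner holding a single weight-vector hypothesis, trained by the standard perceptron rule, can err more than once on the same pattern, while an exact-match instance store errs at most once per stored pattern.

```python
# Illustrative comparison (not the paper's code): single-hypothesis
# perceptron vs. exact-match specific-instance storage.

def perceptron_run(patterns, labels, epochs=5):
    """Incremental perceptron; returns mistake counts per pattern index."""
    w = [0.0] * len(patterns[0])
    b = 0.0
    mistakes = {i: 0 for i in range(len(patterns))}
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(patterns, labels)):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0
            if pred != y:
                mistakes[i] += 1
                delta = 1 if y == 1 else -1          # standard perceptron update
                w = [wj + delta * xj for wj, xj in zip(w, x)]
                b += delta
    return mistakes

def instance_run(patterns, labels, epochs=5):
    """Specific-instance memory: once a pattern is stored with its label,
    it is never misclassified again."""
    memory = {}
    mistakes = {i: 0 for i in range(len(patterns))}
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(patterns, labels)):
            if memory.get(tuple(x)) != y:            # unseen or wrong: one mistake
                mistakes[i] += 1
                memory[tuple(x)] = y                 # store the specific instance
    return mistakes

# XOR is not linearly separable, so a single linear hypothesis cannot
# fit it and keeps erring; the instance store errs once per pattern.
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]
p_mistakes = perceptron_run(patterns, labels)
i_mistakes = instance_run(patterns, labels)
```

On this input, some pattern accrues multiple perceptron mistakes across epochs, whereas the instance learner's per-pattern mistake count never exceeds one; the tradeoff is that the instance store spends space on every distinct pattern and generalizes to nothing it has not seen.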
Keywords: Input Pattern, Specific Instance, Positive Instance, Negative Instance, Connectionistic Problem