Volume 45, Issue 1, pp 35-41

A neural model for category learning

We present a general neural model for supervised learning of pattern categories that can resolve pattern classes separated by nonlinear, essentially arbitrary boundaries. The concept of a pattern class develops from storing in memory a limited number of class elements (prototypes). Associated with each prototype is a modifiable scalar weighting factor (λ) that effectively defines the threshold for categorizing an input with the class of the given prototype. Learning involves (1) commitment of prototypes to memory and (2) adjustment of the various λ factors to eliminate classification errors. In tests, the model defined classification boundaries that largely separated complicated pattern regions. We discuss the role that divisive inhibition might play in a possible implementation of the model by a network of neurons.
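The learning scheme described above, committing prototypes to memory and shrinking each prototype's λ threshold on misclassification, can be sketched in Python. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the class name, the use of Euclidean distance, the shrink-to-distance rule, and the default λ value are all assumptions for the sake of a runnable example.

```python
import numpy as np

class PrototypeClassifier:
    """Sketch of a prototype-based category learner.

    Each stored prototype carries a class label and a scalar
    threshold (lambda) defining the region of input space it
    claims. Training (1) commits new prototypes to memory and
    (2) shrinks lambdas to eliminate classification errors.
    """

    def __init__(self, initial_lambda=1.0):
        # Assumed default threshold for newly committed prototypes.
        self.initial_lambda = initial_lambda
        self.protos = []  # list of (vector, label, lam) triples

    def predict(self, x):
        # An input is assigned the class of any prototype whose
        # distance falls below that prototype's lambda; None means
        # the input lies outside every claimed region.
        for p, label, lam in self.protos:
            if np.linalg.norm(x - p) < lam:
                return label
        return None

    def train_step(self, x, label):
        covered_correctly = False
        for i, (p, plabel, lam) in enumerate(self.protos):
            d = np.linalg.norm(x - p)
            if d < lam:
                if plabel == label:
                    covered_correctly = True
                else:
                    # Error correction: shrink the offending
                    # prototype's threshold so it no longer claims
                    # this input.
                    self.protos[i] = (p, plabel, d)
        if not covered_correctly:
            # Commit the input itself as a new prototype.
            self.protos.append(
                (np.asarray(x, float), label, self.initial_lambda)
            )

    def fit(self, X, y, epochs=5):
        for _ in range(epochs):
            for x, lbl in zip(X, y):
                self.train_step(np.asarray(x, float), lbl)
```

Because class membership is decided locally by whichever prototypes cover the input, the union of these threshold regions can trace out nonlinear, essentially arbitrary class boundaries as more prototypes are committed.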

This work was supported in part by the Alfred P. Sloan Foundation and the Ittleson Foundation, Inc.