Distributed representation of perceptual categories in the auditory cortex
Categorical perception is a process by which a continuous stimulus space is partitioned to represent discrete sensory events. Early experience has been shown to shape categorical perception and enlarge cortical representations of experienced stimuli in the sensory cortex. The present study examines the hypothesis that enlargement in cortical stimulus representations is a mechanism of categorical perception. Perceptual discrimination and identification behaviors were analyzed in model auditory cortices that incorporated sound exposure-induced plasticity effects. The model auditory cortex with over-representations of specific stimuli exhibited categorical perception behaviors for those specific stimuli. These results indicate that enlarged stimulus representations in the sensory cortex may be a mechanism for categorical perceptual learning.
Keywords: Categorical perception · Sensory cortex · Learning
While sensory stimuli may vary continuously along their physical dimensions, the behaviorally significant events that they represent are often discrete. Through a process called categorical perception, the sensory system maps continuous stimulus spaces onto discrete perceptual events (Harnad 2003). For instance, lights of gradually changing wavelength are perceived as having discrete hues (Bornstein et al. 1976), and gradual shifts in sound frequency may lead to categorical changes in the perceived musical intervals (Burns and Ward 1978). Categorically perceived stimuli may be recognized more quickly in the presence of distortions and contextual interference. This efficient sensory processing provides the basis for higher-level cognitive functions such as verbal communication and music appreciation (Harnad 1987).
Categorical perception was first discovered in speech research and was thought to involve language-specific, higher-level brain mechanisms, but not the basic sensory processing mechanisms of the auditory system (Liberman et al. 1957, 1967). Later research indicated that categorical perception occurs in a variety of non-speech sounds (Ehret 1992; Ehret and Haack 1981; Nelson and Marler 1989; Wyttenbach et al. 1996). In addition, speech sounds are categorically perceived by animals of many species (Kluender et al. 1987; Kuhl and Miller 1975; Kuhl and Padden 1982, 1983). These findings suggest that categorical perception may be an auditory, rather than a purely phonetic, process and may be mediated by the auditory sensory system.
Neural mechanisms underlying categorical perception are not well understood. Investigations of such mechanisms often involve searching for categorical neurons: those that respond preferentially to all stimuli in one category, but not to stimuli in any other category, showing sigmoidal stimulus selectivity. Such categorical neurons have been found in the frontal cortex (Freedman et al. 2001; Romo et al. 1997). Although behavioral and psychophysical evidence suggests that sensory systems may mediate categorical perception, neurons in the sensory cortex, which typically respond to a broad range of stimuli and exhibit bell-shaped tuning curves, are not considered categorical.
Categorical perception may arise both through innate mechanisms and as a result of sensory experiences and learning (Livingston et al. 1998). Some human speech sounds, for instance, are categorically perceived in newborn human infants (Eimas 1974) and in some model animals that have never been exposed to the speech sounds (Ehret and Haack 1981; Kluender et al. 1987; Kuhl and Miller 1975; Nelson and Marler 1989; Wyttenbach et al. 1996). It has been suggested that the auditory systems of both humans and the model animals are innately sensitive to the acoustic distinctions of those speech sounds, and our vocal communication system simply exploits this sensitivity (Holt et al. 2004; Steinschneider et al. 2003). On the other hand, language experience can also alter the perceptual sensitivity of the auditory system to speech sounds and change their categorical boundaries (Lasky et al. 1975; MacKay et al. 2001; Williams 1977). This language-specific reshaping of the phonetic perceptual categories occurs in the first year of life (Kuhl et al. 1992), presumably as a result of acoustic exposure to the speech sound environment. Categorical perception of pitch is also shaped by musical experiences (Burns and Ward 1978).
Sensory experience in a limited window of early life has a profound influence on the development of cortical sensory representations (Wiesel 1982). Recent studies indicate that repeated exposure to a stimulus results in enlarged cortical representations of the experienced stimulus, i.e., more neurons become selectively responsive to that stimulus (Chang and Merzenich 2003; Erickson et al. 2000; Sengpiel et al. 1999; Zhang et al. 2001). Similar preferential representations of experienced speech sounds and musical notes have also been shown in humans (Naatanen et al. 1997; Pantev et al. 1998). Given the profound impact of early experience on categorical perception of speech sounds and music, as well as on cortical sound representations, it is possible that experience-driven reorganization of the auditory cortex plays a role in forming perceptual categories (Crozier 1997; Lasky et al. 1975; MacKay et al. 2001; Takeuchi and Hulse 1993; Williams 1977). In this study, we construct models of acoustic representations in the primary auditory cortex and examine how experience-induced reorganization of those representations affects the perceptual discrimination and identification performance of the model primary auditory cortex. We show that categorical perception may arise as a result of enlarged cortical representations induced, for instance, by early experience.
2 Materials and methods
2.1 Modeling the frequency representations in the primary auditory cortex
2.2 Modeling frequency discrimination
2.3 Modeling frequency identification
In a typical behavioral identification task, the subject is presented with an unknown stimulus (fx) and asked to make a forced choice on which of two fixed stimuli (f1 and f2) is more likely to be the unknown stimulus. In our simulation, the model AI was presented with an unknown frequency (fx). The response of the model AI to fx was denoted as Rx. The task was to determine which of the two known frequencies (f1 and f2) was more likely to be the one that elicited Rx. We modeled the perceptual decision process in the frequency identification task with a stochastic process and a deterministic process.
Each simulation was run 100 times, and the percentage of runs in which the model AI chose f1 was used as the identification index. Each point in the graphs is the mean of 200 individually calculated identification indices for the specific testing condition. The variability of the performance was measured with 95% confidence intervals, which span the 2.5th to 97.5th percentiles of the identification indices.
2.4 Testing stimulus discrimination in adult rats
2.5 Testing stimulus identification in adult rats
Animals were first trained to recognize two prototype tonal frequencies. In each trial, 100-ms tone pips of a prototype frequency were played at a rate of five pips per second at 60 dB SPL. The animal was trained to make a nose-poke in one of two nosing holes (on the left or on the right) depending on which of the two prototype frequencies (6 kHz or 12 kHz) was being played, i.e., an identification task. A nose-poke in the correct hole within 10 s of sound onset was considered a “hit” and rewarded with a food pellet. A nose-poke in the wrong hole, or inaction during the 10-s period, was counted as a miss and not rewarded. Naive animals took approximately 10 days to reach an asymptotic performance level of approximately 80% correct recognition. We then tested how animals perceived and categorized a series of nine tones of intermediate frequencies, spaced logarithmically equally between the two prototype frequencies. The prototype sounds were tested in regular trials (80% of all trials), in which correct responses were rewarded to keep the animals motivated. The intermediate sounds were tested in probe trials (the remaining 20%), in which the animal received no food pellet regardless of its response; we withheld reinforcement in these trials to avoid biasing the responses, which could have interfered with the perceptual tests. The percentage of trials in which animals made a nose-poke in the left nosing hole (corresponding to the lower frequency) was used to construct the identification function.
3.1 Psychometric function of the model AI
We first examined the model performance as a function of the input frequency difference and the total number of neurons in the model AI. As shown in Fig. 3, the psychometric function of performance versus frequency difference was approximately sigmoidal. Adding more model neurons improved the model performance, as indicated by a leftward shift of the psychometric function; the shape of the function, however, did not change with the number of neurons. As predicted (Seung and Sompolinsky 1993), the discrimination threshold of the model AI, measured as the frequency difference at the half-height of the psychometric function, was inversely proportional to the square root of the number of neurons (Fig. 4(a)).
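The inverse-square-root scaling can be sketched numerically for a population of Poisson neurons with Gaussian tuning curves. This is an illustrative sketch only; the tuning width, peak rate, and frequency range below are assumed values, not the parameters of the model AI.

```python
import numpy as np

def fisher_info(s, prefs, sigma=0.3, rmax=20.0):
    """Total Fisher information about stimulus s (log2 frequency, in octaves)
    carried by Poisson neurons with Gaussian tuning curves centered at prefs."""
    r = rmax * np.exp(-(s - prefs) ** 2 / (2 * sigma ** 2))  # mean firing rates
    dr = r * (prefs - s) / sigma ** 2                        # tuning-curve slopes
    return np.sum(dr ** 2 / r)                               # Poisson: sum f'^2 / f

# Discrimination threshold scales as 1 / sqrt(Fisher info), hence ~ 1 / sqrt(N):
for n in (100, 400, 1600):
    prefs = np.linspace(-2.0, 2.0, n)     # evenly spaced preferred frequencies
    print(n, 1.0 / np.sqrt(fisher_info(0.0, prefs)))
```

Quadrupling the number of neurons roughly quadruples the total Fisher information and therefore halves the threshold, consistent with the square-root law of Fig. 4(a).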
We examined animal performance in a frequency discrimination task, in which discrimination of various frequency differences was tested in adult rats that had not been exposed to specific sounds (hereafter referred to as naïve animals, in contrast to sound-exposed animals with altered frequency representations). The psychometric function of naïve rats was sigmoidal, similar to that of the model AI. Furthermore, the performance of the model AI with 800 neurons fit the animal performance well. The total number of neurons in the primary auditory cortex of the rat (1–2 mm2 in size) is on the order of 100,000, including local and inhibitory neurons (Cherniak 1990; O’Kusky and Colonnier 1982). The relatively small number of neurons required for the model to reach the performance levels of the animals is consistent with earlier modeling results (Paradiso 1988). All simulations presented in the subsequent sections used model AIs with 800 neurons.
The tuning bandwidth, response magnitude and spontaneous firing rate of the model neurons were also varied to examine how these properties influence the perceptual discrimination behaviors of the model AI. The frequency discrimination threshold decreased with greater response magnitude, narrower tuning bandwidth and lower spontaneous firing rate (Fig. 4(b–d)). These results provide constraints for further comparisons between model and animal performances.
3.2 Perceptual discrimination by sound-exposed model AI
One of the two behavioral traits of categorical perception is that the perceptual discrimination ability is worse within a category than between different categories. If a perceptual category forms around the experienced stimulus, perceptual discrimination would be relatively poor within the category. We constructed a sound-exposed model AI, incorporating sound exposure-induced plasticity effects: over-representation of the experienced frequency and under-representation of neighboring frequencies in the range of ±1 octave (see Fig. 1(b) and Chang and Merzenich 2003). Simulation results indicate that discrimination of 0.1-octave frequency differences in the over-represented frequency range was significantly impaired. By contrast, discrimination of neighboring frequencies was improved (Fig. 5).
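Such a redistribution of preferred frequencies can be sketched as follows. The retuning fraction, spread, and frequency range here are assumptions chosen for illustration, not the measured plasticity parameters of Chang and Merzenich (2003).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800
f_exp = np.log2(7.1)                 # exposed frequency on a log2 (octave) axis

# Naive AI: preferred frequencies spread uniformly over a 4-octave range.
naive = rng.uniform(f_exp - 2.0, f_exp + 2.0, n)

# Exposed AI: an assumed fraction of the neurons tuned within +/-1 octave
# of the exposed tone are retuned to cluster tightly around it.
exposed = naive.copy()
near = np.abs(exposed - f_exp) < 1.0
retuned = near & (rng.random(n) < 0.5)           # hypothetical 50% retuning
exposed[retuned] = f_exp + rng.normal(0.0, 0.1, retuned.sum())

# Result: over-representation at f_exp, under-representation of neighbors.
print((np.abs(exposed - f_exp) < 0.1).sum(), (np.abs(naive - f_exp) < 0.1).sum())
```

The retuned map has many more neurons within 0.1 octave of the exposed tone and correspondingly fewer in the neighboring bands, qualitatively matching the exposure effect incorporated in the model.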
These results may be understood in terms of the amount of Fisher information the model neurons provide for frequency decoding (Dayan and Abbott 2001). Sensory neurons contribute to stimulus decoding by changing their firing rates (Bala et al. 2003; Luna et al. 2005; Paradiso 1988). Two similar stimuli that are near the center of a Gaussian-shaped tuning curve of a neuron will elicit similar firing rates (close to the maximum response magnitude). However, two similar stimuli that fall on the slopes of a neuron’s tuning curve, where the firing rate is most sensitive to stimulus differences, will elicit responses with very different firing rates. In the sound-exposed AI, a large number of neurons become tuned near the experienced frequency. These retuned neurons are less sensitive to changes in frequencies near the experienced tone, because those frequencies fall near the centers of their tuning curves. Instead, these neurons become sensitive to frequency changes in the neighboring frequency bands, where the slopes of their tuning curves lie. The limit of decoding accuracy set by the Fisher information measure can be attained by maximum likelihood estimation (MLE) when a large number of neurons are involved in coding (Dayan and Abbott 2001). Thus, discrimination thresholds derived from Fisher information should be similar to those calculated with MLE.
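The slope argument can be checked directly: for a Poisson neuron with a Gaussian tuning curve, the rate-based Fisher information is f'(s)^2/f(s), which vanishes at the tuning-curve peak and is maximal on the flanks. A small sketch (parameter values are illustrative, not the model's):

```python
import numpy as np

def neuron_fisher(s, pref=0.0, sigma=0.3, rmax=20.0):
    """Fisher information f'(s)^2 / f(s) for one Poisson neuron
    with a Gaussian tuning curve centered at pref."""
    r = rmax * np.exp(-(s - pref) ** 2 / (2 * sigma ** 2))
    dr = r * (pref - s) / sigma ** 2
    return dr ** 2 / r

s = np.linspace(-1.0, 1.0, 2001)
info = neuron_fisher(s)
print(neuron_fisher(0.0))        # zero information at the tuning-curve peak
print(abs(s[np.argmax(info)]))   # maximal near pref +/- sigma * sqrt(2)
```

At the very peak the tuning-curve slope is zero, so small frequency changes there produce no change in the mean firing rate; the information maximum sits on the flanks, at pref ± sigma·sqrt(2) for a Gaussian curve.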
3.3 Perceptual identification by sound-exposed model AI
The second behavioral trait of categorical perception is a sigmoidal identification function, whereby stimuli on one side of a categorical boundary are classified as members of the same category. Behaviorally, it is often tested with an identification task, in which subjects are required to classify a series of equally spaced stimuli into two categories. We performed frequency identification tests in naïve animals and observed a near-linear frequency identification function (Fig. 6). Using this result as a constraint, we explored three methods to model the stimulus identification process: a Bernoulli-stochastic process method, a likelihood-ratio threshold method and a maximum-likelihood estimation method (see Section 2 for details). Among the three, only the Bernoulli-stochastic process method produced a near-linear identification function for the naïve model AI. The performances of the likelihood-ratio threshold (LR) and maximum-likelihood estimation (MLE) methods were almost identical and were pooled together (Fig. 6). The LR/MLE methods produced an inverted sigmoidal identification function that diverges from the corresponding animal behavior. The identification function generated with these two methods shows a complete categorical transition within a 0.2-octave frequency distance, similar to the frequency discrimination threshold shown in Fig. 3. This is not surprising, because these methods essentially perform frequency decoding and then make perceptual decisions based on the decoded frequency. The finding that the model AI performed equally well in identification and discrimination tasks when the LR/MLE methods were used is inconsistent with experimental findings that animals generally perform worse in identification than in discrimination tasks (for a discussion, see Massaro 1987), suggesting that the LR/MLE methods are inappropriate as models of the perceptual identification process.
The difference between the Bernoulli-stochastic and LR/MLE methods is likely due to their different assumptions about the decision-making process: the Bernoulli-stochastic method assumes that decision-making is stochastic, whereas the LR/MLE methods assume that it is deterministic (see Section 2).
Comparison of likelihood measures has been proposed as a model of perceptual decision processes (Green and Swets 1966). In simple stimulus difference detection tasks (e.g., stimulus discrimination), subjects may compare the likelihood of having perceived a stimulus difference with a threshold value to make a perceptual decision (as in the frequency discrimination process described above); the performance is thus limited by the frequency decoding ability. In a perceptual identification task, however, the stimulus differences are often supra-threshold, i.e., fx is perceived as different from both f1 and f2. Deciding which of f1 and f2 is closer to the unknown frequency fx is likely a probabilistic process, not a simple comparison of an index value to a fixed threshold. The notion that the discrimination and identification tasks involve different perceptual decision processes is consistent with the findings that performance is generally worse in identification than in discrimination tasks (Massaro 1987). Figure 6 indicates that the performances of the MLE/LR methods are as good as those of the model AI in a discrimination task, but deviate from the animal performance. Instead, a Bernoulli random process with choice probabilities given by the linearly scaled log-likelihood ratio may capture some aspects of perceptual identification behavior.
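This two-stage identification process can be sketched as follows. The linear gain and the clipping used here to map the log-likelihood ratio onto a choice probability are assumptions for illustration; the actual scaling is defined in Section 2.

```python
import numpy as np

rng = np.random.default_rng(2)

def identification_index(log_lr, trials=100, gain=0.5):
    """Stage 1: linearly scale the log-likelihood ratio log[L(f1)/L(f2)]
    into a choice probability. Stage 2: a Bernoulli draw decides each trial."""
    p_f1 = np.clip(0.5 + gain * log_lr, 0.0, 1.0)
    choices = rng.random(trials) < p_f1
    return 100.0 * choices.mean()    # percent of trials identified as f1

# Evidence slightly favoring f1 still yields probabilistic, graded choices
# rather than a step-like, all-or-none classification:
print(identification_index(0.3))
```

Because the choice probability varies gradually with the log-likelihood ratio, the resulting identification function is graded rather than step-like, unlike the deterministic LR/MLE rules.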
We analyzed the perceptual identification behaviors of the model 7.1-kHz-exposed AI using the Bernoulli-stochastic process method. The results showed that the tone-exposed AI consistently classified frequencies near 7.1 kHz as the lower of the two prototypes (i.e., 5.9 kHz) (Fig. 7). This behavior, together with the reduced discrimination performance near 7.1 kHz, indicates that frequencies near 7.1 kHz were grouped into a perceptual category. This grouping was a result of the sound exposure, because it occurred only near the exposed frequency and not for frequencies above 8.3 kHz.
3.4 Representations of two perceptual categories
Simulation results indicate that when the two experienced frequencies were two octaves apart, the model two-tone-exposed AI showed categorical perceptual behaviors: a sigmoidal identification function and a peaked discrimination function. The discrimination function is similar to that of categorical discrimination of phonemes observed in animals (Kuhl and Padden 1983). These results indicate that categorical perception may be mediated by populations of neurons with bell-shaped tuning curves. In addition, the prototypes of the categorically perceived stimuli are over-represented, e.g., more neurons were tuned to the categorically perceived frequencies near 3.5 and 14 kHz, as shown in Fig. 8(a–b). Interestingly, when the two frequencies were only 0.5 octave apart, no categorical perception was observed. Categorical perception would be established in this case if the tuning bandwidths of the neurons became narrower (data not shown). These results suggest that the properties of the cortical circuits constrain categorical learning processes: certain stimuli may be more learnable as categorical prototypes than others.
Categorical perception may be learned by exposure to specific stimuli during early development, or by extensive training in adulthood (Goldstone 1994; Lasky et al. 1975; MacKay et al. 2001; Williams 1977). After learning, the stimuli within a stimulus category are perceived as more similar, and stimuli from different categories as more different. These two forms of perceptual alteration are referred to as acquired perceptual equivalence and acquired perceptual distinctiveness, respectively (Liberman et al. 1957). They are believed to underlie categorical perceptual behaviors, e.g., peaked discrimination functions and sigmoidal identification functions. Recently, electrophysiological studies have revealed that sensory exposure and perceptual training often enlarge cortical representations of the relevant stimuli by retuning neuronal selectivity to those stimuli. In the present study, we examined the possibility that enlargement of cortical representations is a cortical mechanism of categorical perception. Our computational simulation results indicate that the perceptual contrast of the over-represented stimuli may be reduced, analogous to acquired perceptual equivalence, and the perceptual contrast of the neighboring under-represented stimuli may be enhanced, resulting in acquired perceptual distinctiveness. Thus, a perceptual category may form around the over-represented stimuli. Further analysis of a model AI with two over-represented stimulus ranges revealed behaviors characteristic of categorical perception: a peaked discrimination function and a sigmoidal identification function. These results support the notion that enlargement of cortical representations mediates learned categorical perception.
Previous electrophysiological studies have investigated neural mechanisms of categorical perception by identifying categorical neurons, i.e., those that respond to all members of one category but not to any members of other categories. These neurons may be regarded as category readout neurons. It remains unclear what kind of transformation of sensory information gives rise to this category selectivity and where the transformation takes place. The results of the present study suggest that experience-dependent reorganization of stimulus representations in the primary sensory cortex could provide the transformation underlying learned categorical perception. In the sensory cortex, sensory information, and hence perceptual categories, are represented in populations of neurons, each of which shows graded responses to a large range of stimuli. There must be readout mechanisms to transform this distributed categorical representation into categorical responses in single neurons. In the present study, we obtained categorical perceptual behaviors in models of AI using analyses of likelihood measures. Whether and how neural systems perform likelihood analysis is still under active investigation, and some models have been proposed (Jazayeri and Movshon 2006; Zhang et al. 1998). These models may provide the readout mechanisms needed to transform distributed categorical representations into categorical responses in single neurons.
Several computational models of categorical learning have been investigated in earlier studies, such as unsupervised, auto-associative feedback networks (Anderson et al. 1977) and supervised, multi-layered networks with a hidden layer and back-propagating error signals (Harnad et al. 1991). The construction of these models was based primarily on theoretical considerations, and the biological plausibility of some of the mechanisms (e.g., the back-propagation of error signals) is unclear. In the present study, the model auditory representations were based on findings of electrophysiological studies, e.g., that more neurons become tuned to more frequently experienced frequencies. We considered only the cortical decoding capacity and how it would influence animals’ perceptual performance. We did not address how the experience-altered cortical decoding capacity can be transformed into categorical neuronal responses and guide perceptual behaviors (i.e., the readout problem). The shaping of categorical perception by sensory exposure described in the present study is similar to the learning of perceptual categories by the auto-associative network in that both are unsupervised and both represent the learned perceptual categories in distributed population responses (Anderson et al. 1977). The acoustic representations modeled in the present study may also be analogous to the hidden layers of the multilayer network models, which may be altered by experience in animals and by learning in the multilayer network models (Harnad et al. 1991). Studies of sensory plasticity may provide insights for constructing biologically plausible models of categorical learning.
The results of this study provide some insights into cortical mechanisms of perceptual learning. Enlarged cortical representations of relevant stimuli have been observed after extensive training of adult animals to discriminate tonal frequencies (Recanzone et al. 1993), sound levels (Polley et al. 2004, 2006), temporal modulation rates (Bao et al. 2004), or somatosensory stimuli (Recanzone et al. 1992). Some of these studies show that representational sizes are highly correlated with tonal frequency discrimination performance after perceptual training (Recanzone et al. 1993). These results led to the notion that greater cortical representations are the neural basis for better perceptual discrimination performance. Such a simplistic view, however, has been challenged by opposite results showing that perceptual discrimination training sometimes does not alter the cortical feature representational map (Brown et al. 2004). Furthermore, animals with cortical representations of certain tonal frequencies enlarged by intracortical electrical stimulation did not show any improvement in stimulus discrimination performance in the over-represented frequency range (Talwar and Gerstein 2001). These results suggest that perceptual discrimination capability may be determined by many cortical neuronal properties, not just by representational sizes. This is consistent with the simulation results of the present study, which show that enlarged representations of a very narrow frequency range may impair discrimination of the over-represented frequencies. Our modeling results also indicate that over-represented frequencies may be discriminated better if the tuning bandwidths of the neurons become narrower (Fig. 10(a)), or if a large range of frequencies is over-represented (not shown). These results help to reconcile the seemingly contradictory results reviewed above.
Maximum likelihood estimation is an optimal population decoding method. It is not considered a biologically realistic decoding mechanism, although certain neuronal architectures are thought to be able to perform similar computations (Jazayeri and Movshon 2006; Zhang et al. 1998). In the limit of large numbers of encoding neurons and for Poisson firing rate distributions, its performance saturates the Cramér-Rao bound on the variance of the estimate, and thus sets the upper limit of the performance of biological systems (Dayan and Abbott 2001; Seung and Sompolinsky 1993). In essence, maximum likelihood estimation measures the maximum decoding capacity of a representational system. It has been used to model visual discrimination processes (Paradiso 1988). Although such successful applications of the method do not imply that the brain decodes sensory information with a similar maximum likelihood procedure, they do indicate that perceptual behaviors are correlated with the stimulus decoding capacity of the neuronal network as revealed by the method. We followed the same rationale in our analysis of the impact of cortical plasticity effects on perceptual discrimination performance.
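This rationale can be illustrated with a minimal grid-search MLE decoder for a Poisson population (all parameter values are assumed for illustration, not those of the model AI): the spread of the maximum-likelihood estimates approaches the Cramér-Rao limit set by the population's Fisher information.

```python
import numpy as np

rng = np.random.default_rng(3)
prefs = np.linspace(-2.0, 2.0, 800)      # preferred frequencies (octaves)
sigma, rmax = 0.3, 20.0
grid = np.linspace(-1.0, 1.0, 401)       # candidate stimulus values

def rates(s):
    """Mean Poisson rates of the whole population for stimulus value(s) s."""
    return rmax * np.exp(-(np.atleast_1d(s)[:, None] - prefs) ** 2
                         / (2 * sigma ** 2))

R = rates(grid)                          # (401, 800) table of mean rates
logR = np.log(R)

def mle_decode(counts):
    # Poisson log-likelihood up to a constant: sum_i (k_i log r_i - r_i)
    ll = logR @ counts - R.sum(axis=1)
    return grid[np.argmax(ll)]

s_true = 0.2
estimates = [mle_decode(rng.poisson(rates(s_true)[0])) for _ in range(200)]

r0 = rates(s_true)[0]
fisher = np.sum((r0 * (prefs - s_true) / sigma ** 2) ** 2 / r0)
print(np.std(estimates), 1.0 / np.sqrt(fisher))  # MLE spread vs Cramér-Rao bound
```

The empirical standard deviation of the estimates is close to the Cramér-Rao limit 1/sqrt(Fisher information), consistent with MLE measuring the maximum decoding capacity of the representation.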
The information processing events underlying perceptual identification behavior are unknown. The traditional view is that discrimination and identification are mediated by the same perceptual processes, so that their performances should match each other. Later experiments showed that stimulus identification performance is generally worse than what would be predicted from discrimination functions (Massaro 1987). In the present study, animals showed a nearly linear identification function across a large frequency range. Such a linear identification function is inconsistent with a purely discrimination-based identification process, which would have yielded a sigmoidal identification function like that of the MLE/LR group in Fig. 6. We modeled identification behaviors in two steps: first, the choice probability is determined by the log-likelihood ratio, and second, a Bernoulli random process determines the identification choice. The two steps may correspond to two separate processes underlying identification behaviors: sensory decoding and decision-making.
In this study, we simplified the neuronal tuning properties: all neurons had the same firing rate, tuning bandwidth and spontaneous firing rate. Essentially the same results were obtained with model neurons whose properties followed the same distributions as those of recorded neurons (data not shown). The sound exposure-induced cortical plasticity effects were also simplified, and only changes in the tuning frequencies were included in the analysis. Other neuronal response properties, such as the shapes of the tuning curves, the maximum response magnitudes, spontaneous firing rates, and spike timing/correlation, can also be altered by sound exposure or by perceptual learning (Bao et al. 2001; Beitel et al. 2003; Blake et al. 2006; Brown et al. 2004; Chang and Merzenich 2003; Chowdhury and Suga 2000; Edeline and Weinberger 1993; Engineer et al. 2004; Fritz et al. 2003; Kilgard and Merzenich 1998; Kilgard et al. 2001; Ma and Suga 2003; Ohl and Scheich 1996; Polley et al. 2004; Recanzone et al. 1993; Schoups et al. 2001; Zhang et al. 2001). Those forms of cortical plasticity could also contribute to the learning of categorical perception. Nevertheless, our analysis demonstrates that the enlargement of cortical representations could be a mechanism for categorical perception. Systematic examination of categorical perception in animals that have been exposed to controlled sensory input would provide new insights into the neural mechanisms of categorical perceptual learning.
This work was supported by a grant from the US National Institutes of Health.
- Beitel, R. E., Schreiner, C. E., Cheung, S. W., Wang, X., & Merzenich, M. M. (2003). Reward-dependent plasticity in the primary auditory cortex of adult monkeys trained to discriminate temporally modulated signals. Proceedings of the National Academy of Sciences of the United States of America, 100(19), 11070–11075.
- Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA: The MIT Press.
- Eimas, P. D. (1974). Auditory and linguistic processing of cues for place of articulation by infants. Perception & Psychophysics, 16, 564–570.
- Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
- Harnad, S. R. (1987). Categorical perception: The groundwork of cognition. Cambridge: Cambridge University Press.
- Harnad, S. (2003). Categorical perception. In L. Nadel (Ed.), Encyclopedia of cognitive science. London: Macmillan.
- Harnad, S., Hanson, S. J., & Lubin, J. (1991). Categorical perception and the evolution of supervised learning in neural nets. In L. Reeker (Ed.), Working papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (pp. 65–74). Stanford, CA.
- Kuhl, P. K., & Padden, D. M. (1982). Enhanced discriminability at the phonetic boundaries for the voicing feature in macaques. Perception & Psychophysics, 32(6), 542–550.
- Massaro, D. W. (1987). Categorical partition: A fuzzy logical model of categorical behavior. In S. Harnad (Ed.), Categorical perception: The groundwork of cognition (pp. 254–283). Cambridge, UK: Cambridge University Press.
- Pollack, L., & Norman, D. A. (1964). A non-parametric analysis of recognition experiments. Psychonomic Science, 1, 125–126.
- Polley, D. B., Heiser, M. A., Blake, D. T., Schreiner, C. E., & Merzenich, M. M. (2004). Associative learning shapes the neural code for stimulus magnitude in primary auditory cortex. Proceedings of the National Academy of Sciences of the United States of America, 101(46), 16351–16356.
- Powell, M. J. D. (1977). A fast algorithm for nonlinearly constrained optimization calculations. In G. A. Watson (Ed.), Numerical analysis. New York: Springer.
- Williams, L. (1977). The perception of stop consonant voicing by Spanish–English bilinguals. Perception & Psychophysics, 21, 289–297.