Deviance Detection Based on Regularity Encoding Along the Auditory Hierarchy: Electrophysiological Evidence in Humans
Cite this article as: Escera, C., Leung, S. & Grimm, S. Brain Topogr (2014) 27: 527. doi:10.1007/s10548-013-0328-4
Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100–200 ms from deviance onset. Given its long latency and cerebral generators, both regularity encoding and deviance detection have been assumed to be cortical in nature. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have revealed much earlier (circa 20–30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding—rather than on refractoriness—occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in auditory cortical regions separate from those generating the MMN, and even at the level of the human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location), fail to elicit MLR correlates but elicit sizable MMNs.
Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection are organized in ascending levels of complexity along the auditory pathway, extending from the brainstem up to higher-order areas of the cerebral cortex.
Keywords: Mismatch negativity · MMN · Change detection · Middle-latency response · MLR · Oddball · Stimulus-specific adaptation · SSA · Inferior colliculus · Frequency following response (FFR)
The ability to detect the occurrence of unexpected novel or deviant stimuli in the acoustic environment is critical for survival, as it prevents potentially relevant stimuli from going unnoticed. In the auditory modality, the detection of deviant events has been associated with a particular brain response derived from the human electroencephalogram (EEG), the mismatch negativity (MMN; Näätänen et al. 1978; for a recent review, see Näätänen et al. 2007) auditory evoked potential (AEP). Typically, the MMN is obtained with the auditory oddball paradigm, in which a repeated “standard” stimulus is occasionally replaced by a sound (the “deviant” stimulus) differing from the repeating one in any of its attributes. The MMN is isolated as the difference waveform obtained by subtracting the AEP elicited by the standard sound from that elicited by the deviant, and reaches its maximum peak amplitude at 100–200 ms from change onset. The MMN has a frontocentral scalp distribution, with positive voltages at electrode positions below the Sylvian fissure, indicating generator sources located bilaterally in the supratemporal plane of the auditory cortex (Escera et al. 2000a). Generator sources located in prefrontal regions are also often observed (Giard et al. 1990; Deouell 2007). Due to its reliability as an EEG signal (Escera et al. 2000b), the MMN has become a valuable tool to study auditory perceptual resolution and regularity representations (Snyder and Alain 2007; Winkler et al. 2009), and it has recently been claimed to index NMDA neurotransmission dysfunction in a range of neurological and psychiatric diseases (Näätänen et al. 2011), based on its abnormal elicitation in a broad range of clinical conditions (Näätänen and Escera 2000; Näätänen et al. 2012).
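The derivation of the MMN just described can be sketched in a few lines of code (a toy illustration with synthetic epochs; the array names and sampling rate are our own assumptions, not the authors' analysis pipeline): the difference wave is the deviant-minus-standard average, and the MMN peak is the most negative point within the 100–200 ms window after change onset.

```python
# Sketch: derive an MMN-like difference waveform from averaged epochs.
# Synthetic data; in practice, epochs come from baseline-corrected EEG.

SRATE = 500  # Hz (assumed sampling rate)

def average(epochs):
    """Point-by-point average across a list of equal-length epochs."""
    n = len(epochs)
    return [sum(vals) / n for vals in zip(*epochs)]

def mmn_peak(standard_epochs, deviant_epochs, tmin=0.100, tmax=0.200):
    """Deviant-minus-standard difference wave, plus the most negative
    value (the MMN peak) and its latency within tmin-tmax after change onset."""
    diff = [d - s for d, s in zip(average(deviant_epochs),
                                  average(standard_epochs))]
    lo, hi = int(tmin * SRATE), int(tmax * SRATE)
    window = diff[lo:hi]
    peak_amp = min(window)                        # the MMN is a negativity
    peak_lat = (lo + window.index(peak_amp)) / SRATE
    return diff, peak_amp, peak_lat
```

With a synthetic negativity placed at 150 ms in the deviant epochs, `mmn_peak` recovers that amplitude and latency, mirroring how the MMN is read off the deviant-minus-standard subtraction.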
The critical role of the MMN-generating process in involuntary attention has been shown by studies using so-called auditory distraction paradigms. In these paradigms, participants are instructed to perform a primary auditory (Schröger 1996; Schröger and Wolff 1998) or visual (Escera et al. 1998, 2001, 2003; Domínguez-Borràs et al. 2008; SanMiguel et al. 2008) task while ignoring rare, task-irrelevant deviant stimuli. In all these studies, the unexpected occurrence of deviant, or novel, sounds prolonged the response times and reduced the hit rates to target stimuli in the primary task (Escera et al. 2000a; Escera and Corral 2007), thus demonstrating involuntary attention switching to these task-irrelevant sound changes. Confirmatory evidence is also provided by the P3a or novelty-P3, a positive event-related brain potential (ERP) that follows the MMN (see Escera and Corral 2007) and is taken as a neural correlate of involuntary attention (Knight 1996; Escera et al. 2000a). All these results fit well with theoretical models of the role of the MMN in involuntary attention (Näätänen and Michie 1979; Näätänen 1990), including the existence of its frontal generator, which would reflect the activation of the attention-switching mechanism following the detection of a stimulus change in the auditory cortex (Rinne et al. 2000). Yet, some studies have found that the frontal MMN generator gets engaged even earlier than the supratemporal contribution (Yago et al. 2001; Tse and Penney 2008), leaving open the nature of the underlying generator neurophysiology. For instance, it may well be that deviance detection at the level of the auditory thalamus (Kraus et al. 1994a, b; Mäkelä et al. 1998) feeds the frontal cortex directly (Martínez-Moreno et al. 1987), as discussed by Yago et al. (2001), yielding deviance-related activity there before change detection is represented in auditory cortex (at least at the latencies and in the regions generating the MMN).
In the present account, we will review recent evidence from our lab indicating the existence of neurophysiologic correlates of deviance detection at much earlier latencies, and involving hierarchically lower structures, than those of the MMN. These data support the view that deviance detection based on regularity encoding is a basic principle of the functional organization of the whole auditory system, and that regularity encoding and deviance detection are organized in ascending levels of hierarchical complexity along the auditory pathway, extending from the brainstem up to higher-order areas of the cerebral cortex.
Auditory Deviance Detection and MMN
Although self-evident, it is worth stressing that the MMN is elicited by a deviant stimulus occurring when the physical standard stimulus is no longer present, so that the brain’s neurophysiologic response to such a rare sound requires a kind of memory trace of the preceding repeating stimulus for comparison. This reasoning led to the so-called sensory-memory hypothesis (Näätänen 1990, 1992), according to which the MMN is generated by a mismatch in the comparison between the sensory input from a deviant stimulus and a neural sensory-memory trace representing the physical features of the standard stimulus. Yet, a more parsimonious interpretation was put forward in opposition, based on adaptation of the neural population responding to the standard stimulus features: because this population is stimulated with a higher probability than the neural population responding to the deviant feature, the latter stays “fresh” and less adapted, hence producing a larger response to the deviant (the N1 adaptation hypothesis; Jääskeläinen et al. 2004; May and Tiitinen 2010). More recently, however, evidence has accumulated that a fixed, frequently repeated standard sound is not necessary: the MMN can also be elicited with paradigms involving regularities more complex than simple standard repetitions (e.g., when the relationship between consecutive sounds is regular, but the characteristics of each single sound event vary; Paavilainen 2013). This led to the notion of a “regularity representation” underlying the MMN, rather than a memory trace for a particular standard tone (e.g., Cowan et al. 1993; Winkler 1993). Depending on the paradigm, the regularity can have different degrees of complexity (see Picton et al. 2000), ranging from simple ones in which the regularity is a feature repetition (the classic oddball paradigm), to more complex paradigms in which a certain feature combination, a complex sound pattern, or a certain relationship between sounds is kept regular.
A recent account (Bendixen et al. 2012) assumes that the auditory system actively makes sensory predictions based on the extracted regularity representations, and that the MMN signals a mismatch between the representation of the predicted event and the incoming one (a prediction error, e.g., Garrido et al. 2009). This view is supported, among other findings, by the fact that even in the absence of auditory regularities (for instance, when predictions for a particular sound are derived from a visual signal, such as musical notation), an MMN-like auditory response is elicited when a sound occurs that does not match the corresponding visual symbol (Widmann et al. 2004). These notions emphasize that the MMN certainly reflects more than a simple form of adaptation to the repetition of a particular standard stimulus; yet, in a simple-feature oddball paradigm, particular care is required to disentangle which aspect of the electrophysiological response is based on mere adaptation and which on the representation of the context regularity kept in sensory memory. For this purpose, the so-called controlled protocol has been proposed (Schröger and Wolff 1996), which allows comparing the brain response to the deviant stimulus from the oddball block with that to the same stimulus occurring amongst a series of other, different equiprobable stimuli, thus controlling for refractoriness (Jacobsen and Schröger 2001; Ruhnau et al. 2012; see “Methodological controls” section).
In general, the MMN can be used as an electrophysiological measure of the development of regularity representations. By means of dynamic sequences, in which regular sub-sequences of different durations are embedded in irregular portions, it has been shown, for instance, that regularity extraction occurs after as few as two standard presentations for simple regularities (frequency repetition: Bendixen et al. 2007; frequency relationship: Bendixen and Schröger 2008). More abstract regularities, such as a sequence in which the frequency of the next tone is predicted by the duration of the previous tone, need more than ten successive events conforming to the contingent relation in order to be extracted (Bendixen et al. 2008).
In order to find a more direct measure of how regularity representations evolve, recent studies have focused on the effects of standard stimulus repetition on brain responses. These studies have identified a putative correlate of acoustic regularity representation, namely the repetition positivity (RP; Haenschel et al. 2005; Costa-Faidella et al. 2011a, b), supporting the regularity-encoding view of the MMN. The RP appears as an amplitude modulation of the P50, N1 and P2 components of the long-latency AEP, all three riding on a slow positive waveform (Haenschel et al. 2005; Baldeweg 2007). The RP increases with the number of repetitions of the standard stimulus (Haenschel et al. 2005), and correlates with stimulus probability at multiple time scales (Costa-Faidella et al. 2011b), showing behavior comparable to that of single neurons exhibiting stimulus-specific adaptation (SSA) in primary auditory cortex (Ulanovsky et al. 2004). However, early repetition effects (in the P50 latency range) are only observed when the stimulation timing is isochronous; under random timing conditions, these effects appear only later, in the P2 latency range (Costa-Faidella et al. 2011a).
That stimulus repetition effects, that is, regularity encoding, can be observed as early as 70 ms from sound onset suggests that neural correlates of deviance detection could be detected at early stages of the auditory processing hierarchy. A more direct suggestion for the existence of such early neural correlates of the MMN in humans comes from animal studies of single-neuron activity elicited by stimulus repetition and sound changes. In their seminal study, Ulanovsky et al. (2003) recorded individual neurons—and multiunit activity—in the primary auditory cortex, and found that a high proportion of cortical neurons reduced their responses after a few repetitions of the same tone. Interestingly, these neurons restored their firing rate in response to a tone of a different pitch, that is, they showed SSA. The authors employed a design that resembles the oddball paradigm used to obtain the MMN in humans. In their experiments, the probability of the standard/deviant stimuli was manipulated (i.e., 90/10 %, 70/30 %, including a control 50/50 % condition), as were the intensity and the normalized frequency difference between the two tones (∆f = 0.37, 0.10, 0.04, defined as ∆f = (f2 − f1)/(f2 × f1)^(1/2)). Moreover, as the study did not reveal SSA in the MGB, the authors went further to claim that the origin of the MMN would be above the thalamus, these cortical neurons exhibiting SSA lying upstream of MMN generation (Ulanovsky et al. 2003; see also Taaseh et al. 2011). Subsequent studies, together with the seminal recording of multiunit activity to consonant–vowel contrasts in the MGB of guinea pigs (Kraus et al. 1994a, b; King et al. 1995), challenge the attribution of deviance detection to cortical regions. Indeed, the single-unit recording studies by Malmierca and colleagues have described individual neurons in the inferior colliculus (IC; Pérez-González et al. 2005, 2012; Malmierca et al. 2009; Duque et al. 2012; Ayala and Malmierca 2013; Ayala et al. 2013) and the MGB (Antunes et al. 2010; Antunes and Malmierca 2011) of the rat that exhibit SSA and novelty responses similar in many respects to those found by Ulanovsky et al. (2003) in the cat’s primary auditory cortex, thereby supporting the earlier finding of Kraus et al. (1994a, b), that is, the subcortical origin of some deviance-related responses in the auditory system. Whether SSA in subcortical auditory stations is inherited from the auditory cortex (Nelken and Ulanovsky 2007) via the cortico-fugal pathway (Suga et al. 2002) or whether it is generated de novo in the MGB or IC is still an open question (see, however, Antunes and Malmierca 2011 and Anderson and Malmierca 2013), and has been recently reviewed elsewhere (Escera and Malmierca 2013).
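The normalized frequency difference used in such SSA experiments follows directly from the definition above; as a quick sketch (the tone values below are arbitrary examples of our own, not stimuli from the cited studies):

```python
import math

def delta_f(f1: float, f2: float) -> float:
    """Normalized frequency difference between two tones,
    Delta-f = (f2 - f1) / sqrt(f2 * f1), as defined in the text."""
    return (f2 - f1) / math.sqrt(f2 * f1)

# Example: two hypothetical tones about a semitone apart give a small
# contrast, comparable in scale to the 0.04-0.37 range cited above.
print(round(delta_f(880.0, 988.0), 3))
```

Because the difference is normalized by the geometric mean of the two frequencies, the measure is independent of the absolute frequency region in which the tone pair is placed.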
The existence of individual neurons in the IC, the MGB and the primary auditory cortex of several animal species exhibiting SSA is suggestive of the idea that deviance detection based on regularity encoding is a property of the whole auditory system, spanning all its hierarchical levels, from the lower auditory pathway (or at least from the IC downstream) to higher-order areas of the auditory cortex. Yet, the latency differences between the MMN recorded in humans—but also in animals (see Ruusuvirta et al. 1998; Astikainen et al. 2011)—and the deviance-related responses of individual neurons, that is, the firing onset of novelty units at circa 20–30 ms (e.g., Ulanovsky et al. 2003, for cortical; Pérez-González et al. 2005, for subcortical responses), make this suggestion somewhat puzzling. Here, we argue that if SSA cannot account directly for the MMN and only lies upstream of MMN generation (see Nelken and Ulanovsky 2007), it should nevertheless yield electrophysiological activity visible at the scalp. In other words, if SSA is not the scalp correlate of the MMN, there should be correlates of deviance-related single-neuron activity at earlier latencies of the AEP, in AEP components generated at lower stages of the auditory system’s hierarchy. Of course, these correlates cannot directly reflect the action potentials triggered by the novelty units released from adaptation by the deviant stimuli, as AEPs, and EEG in general, are assumed to originate from synchronized synaptic currents in large cell assemblies (Nunez and Srinivasan 2006); rather, they would reflect the (postsynaptic) outcome of the novelty-unit activity in their target neural populations. This hypothesis can be tested by making use of the whole human AEP, which includes three groups of well-characterized waveforms (Picton et al. 1974; Picton 2010): the auditory brainstem response (ABR), the middle latency response (MLR) and the long-latency AEP, which includes the MMN.
At present, it is well established that the successive waveforms of the ABR, ranging from 1 to 10 ms from sound onset (waves I, II, III, IV, V, and A), originate from lower-to-upper structures of the subcortical auditory pathway, from the auditory nerve (waves I–II) up to the IC (waves V–A; Stochard et al. 1979). The MLR, on the other hand, is characterized by a sequence of waveforms in the 12–50 ms range, labeled N0, P0, Na, Pa, and Nb (sometimes Pb, equivalent to P50, is included; Picton 2010), and represents the earliest cortical responses to a sound; for example, the P0 waveform peaking at 16–19 ms is generated in primary auditory cortex (Yvert et al. 2001, 2005). However, when implementing any oddball experiment to record the electrophysiological correlates of auditory deviance detection in the ABR or MLR latency ranges, one has to take into account several methodological considerations, as described in the next section.
A further critical test is whether, independently of low stimulus probability, deviant sounds occurring within a regular context of standard tones elicit differential responses compared with the same low-probability sound occurring within an irregular, unpredictable context. As described above, this comparison is conceptually important to distinguish a simpler mechanism of change detection based on feature adaptation from a higher-order mechanism based on regularity encoding, often referred to as ‘genuine’ or ‘true’ deviance detection. The typical paradigm used in this regard is the controlled paradigm introduced by Schröger and Wolff (1996), in which the response to the deviant stimuli from the oddball block is compared with the response to the physically identical sound presented in a context of different, randomly intermixed, equiprobable sounds. In this way, physical stimulus features and sound probability are kept constant, whereas the regularity of the context is varied. Using this controlled block, genuine deviance detection for simple feature changes in the latency range of the MMN has been demonstrated for location (Schröger and Wolff 1996), frequency (Jacobsen and Schröger 2001), intensity (Jacobsen et al. 2003), and duration (Jacobsen and Schröger 2003).
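The logic of this comparison can be sketched as sequence generation (a toy illustration under our own assumptions about tone sets and probabilities; the cited studies used specific stimuli, timings and counterbalancing not reproduced here):

```python
import random

def oddball_block(standard, deviant, n=400, p_dev=0.1, seed=0):
    """Oddball sequence: one repeating standard, rare deviants."""
    rng = random.Random(seed)
    return [deviant if rng.random() < p_dev else standard
            for _ in range(n)]

def control_block(tones, n=400, seed=0):
    """Controlled sequence: several tones presented equiprobably and
    randomly intermixed, so each tone keeps the deviant's low
    probability but the context holds no regularity to violate."""
    rng = random.Random(seed)
    return [rng.choice(tones) for _ in range(n)]

# Ten equiprobable tones (hypothetical frequencies, Hz), one of which is
# physically identical to the deviant of the oddball block.
tones = [400, 450, 500, 550, 600, 650, 700, 750, 800, 850]
odd = oddball_block(standard=500, deviant=700)
ctrl = control_block(tones)
```

Comparing the response to the 700 Hz tone in `odd`, where it violates a regularity, with its response in `ctrl`, where it has the same low probability but there is no regularity, is what isolates genuine deviance detection from mere refractoriness.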
Recently, the controlled paradigm has been refined by proposing the regular, rather than randomly intermixed, presentation of the different low-probability sounds, for example in a cascade sequence in which tone frequency rises and then falls again (Ruhnau et al. 2012). This allows comparing the responses to a physically identical, equally low-probability sound in a context in which it either mismatches (oddball condition) or matches the predictions based on the context regularity (cascade control condition).
Early Correlates of Deviance Detection
In a series of studies conducted in our laboratory, we have examined whether the MMN is the earliest electrophysiological response to regularity violations in humans. These studies have employed paradigms that can be grouped into two main categories (see Picton et al. 2000): simple and complex regularity violations. Here, we define a simple regularity violation as a feature change, in which the violation is a change in any discernible sound feature, for example frequency, location, intensity, or SOA, within a repetitive stream of uniform auditory features. Paradigms used to examine such regularities include the typical oddball paradigm and the multi-feature paradigm (Näätänen et al. 2004). We define a complex regularity as one based on the relationship or rule between discrete sounds, rather than on a specific feature per se. A typical example of a complex sequence is the tone-alternation paradigm, in which a low-frequency tone (A) alternates with a high-frequency tone (B) in an ABABAB… sequence, and the deviant event is a tone repetition, such as at the end of this train: ABABABB… (e.g., Alain et al. 1994). Another example of a complex sequence is the feature-conjunction paradigm implemented by Gomes et al. (1997). In this paradigm, standard events were defined by high-pitch/high-intensity and low-pitch/low-intensity combinations, whereas deviant events were stimuli violating this rule, that is, high-pitch/low-intensity and low-pitch/high-intensity combinations.
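The alternation paradigm above makes the simple/complex distinction concrete: in an ABABAB… stream, both tones are equally probable, so a detector of the deviant repetition must track the relationship between successive tones rather than any single tone's probability. A minimal sketch of such a rule check (our own illustration; tone labels are arbitrary):

```python
def alternation_deviants(seq):
    """Indices where an A/B alternation rule is violated, i.e. where a
    tone repeats its predecessor (e.g. the final B in ABABABB)."""
    return [i for i in range(1, len(seq)) if seq[i] == seq[i - 1]]

# Only the repetition at the end violates the alternation regularity,
# even though A and B occur with near-equal overall probability.
print(alternation_deviants(list("ABABABB")))  # -> [6]
```

No comparable one-tone rule exists for this sequence, which is why adaptation of a single feature channel cannot, by itself, explain a response to the repetition deviant.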
Deviants differing in various simple features other than frequency or spectral content have also led to early deviance-related responses. Perceived location changes have been consistently linked to an enhanced Na component, at circa 25 ms from change onset. This effect was first associated with a location change when Sonnadara et al. (2006) employed band-pass-filtered noise bursts and varied the perceived location via head-related transfer functions. It was later confirmed to reflect a genuine deviance-detection process when the same effect was found in the same component using clicks in free-field stimulation, in a study that included a condition controlling for refractoriness confounds (Fig. 2c; Grimm et al. 2012). Confirmatory evidence also comes from the enhanced Na response to an occasional change in the interaural time difference (ITD) of stimuli presented binaurally through headphones (Cornella et al. 2012). On the other hand, intensity deviants were associated with a more negative potential at the transition from the Na to the Pa component (Fig. 2d; Althen et al. 2011), and temporal regularity violations led to enhanced Pa and Nb responses (Leung et al. 2013). However, the story was different when more complex auditory stimuli were involved. Using a controlled oddball paradigm, Cornella et al. (2013) manipulated the direction of frequency-modulated (FM) sweeps while keeping the frequency content of the stimuli constant. Instead of an enhanced response to deviants, the authors observed an enhancement of the Pa component of the MLR to repeated standards. This suggests that the auditory system is sensitive to the physical characteristics of repetitive complex stimuli at a very early processing stage, but that regularity violations of such stimuli can only be encoded with the involvement of a higher-order mechanism, eliciting the MMN that was observed to the deviant stimuli in this study.
Among the studies reviewed above, two have attempted to examine whether deviance-related responses could be observed at even earlier stages, at the level of wave V of the ABR. Using broadband noise tokens (Slabu et al. 2010) and clicks (Althen et al. 2011), however, no deviance-related effect could be observed at such an early latency. This is intriguing, as wave V of the ABR is thought to be generated in the IC (Stochard et al. 1979; Picton 2010), and it is at odds with the SSA results from animal studies (e.g., Pérez-González et al. 2005). However, the very short latency of wave V, at 5–10 ms, contrasts with the timing at which deviance-related responses have been described in single-unit recordings in the IC, at 20–30 ms (Pérez-González et al. 2005; Malmierca et al. 2009; Duque et al. 2012). More importantly, the short latency of wave V suggests that it is generated in the ascending, lemniscal portions of the IC, whereas novelty units in this nucleus are predominantly located in the non-lemniscal regions (i.e., the dorsal and lateral cortices; Malmierca et al. 2009; Duque et al. 2012). Therefore, in a follow-up study we set out to measure another aspect of the ABR, namely the complex ABR (cABR), which is based on the frequency-following response (FFR). The FFR follows the phasic ABR after waves V and A, usually starting at circa 15–20 ms from sound onset, and reflects the tonic brainstem response: bursts of activity matching the repetitive peaks present in the acoustic signal (Chandrasekaran and Kraus 2010; Skoe and Kraus 2010). By measuring the FFR elicited by consonant–vowel stimuli (/ba/ and /wa/) presented in oddball, reversed-oddball and controlled paradigms, Slabu et al. (2012) demonstrated genuine deviance-related responses, based on regularity encoding, in the human auditory brainstem.
This finding was further confirmed by our recent fMRI study, which revealed the involvement of the left IC and the right MGB in the appropriate deviant-versus-control statistical contrast (Cacciaglia et al. 2013).
In a further MEG study, Recasens et al. (2013) set out to examine differential responses to violations of local (simple feature repetition) versus global (pattern) rules. In this study, two tones (an ‘A’ tone of 988 Hz and a ‘B’ tone of 880 Hz) were arranged in two types of short-term sequences or patterns (‘AAAB’ as the standard pattern; ‘AAAAB’ as the deviant pattern), which formed the global condition. The ‘B’ tone served as the local deviant, as it differed in frequency from the repetitive ‘A’ tone. Consistent with the findings of the two studies mentioned above, MMNm responses were generated by both local and global rule violations, but deviance-related Nbm and Pbm responses were observed only for the local rule violations. Taken together, these studies suggest that simple regularities are encoded at early processing stages, whereas encoding complex regularities requires the higher levels of the auditory hierarchy. Together with the oddball studies reviewed above, these findings support the notion of a hierarchical organization of the auditory novelty system.
The studies reviewed in this account provide compelling evidence for the existence of very early auditory deviance-detection correlates in the human AEP. Specifically, changes in the frequency of a repetitive tone elicit enhancements of the Nb (Grimm et al. 2011; Alho et al. 2012; Leung et al. 2012; Recasens et al. 2012; Althen et al. 2013) and sometimes Pa (Slabu et al. 2010) components of the MLR; changes in the physical or perceived (i.e., via ITDs) location of a sound source enhance the Na component of the MLR (Sonnadara et al. 2006; Grimm et al. 2012; Cornella et al. 2012); and stimulus-intensity changes shift the Na–Pa slope towards negative potentials (Althen et al. 2011). Critically, most of these effects were observed not only in comparison to the responses elicited by the standard stimulus in the reversed oddball block, but also in comparison to the very same physical stimulus occurring with low probability amongst other equiprobable stimuli in the so-called controlled block. These latter effects support the proposal that very early deviance-related responses originate from the encoding of regularity in the acoustic environment, rather than reflecting mere refractoriness. In addition, the cABR study by Slabu et al. (2012) revealed the involvement of the human inferior colliculus in encoding the acoustic regularity, and detecting its violation, in stimuli of a linguistic nature.
All these findings are well in agreement with the single-neuron activity studies in several animal species that have revealed SSA to repetitive auditory stimuli along the auditory pathway, including the IC (Pérez-González et al. 2005, 2012; Malmierca et al. 2009; Duque et al. 2012; Ayala and Malmierca 2013; Ayala et al. 2013), the MGB (Antunes et al. 2010; Antunes and Malmierca 2011) and the primary auditory cortex (Ulanovsky et al. 2003, 2004; Taaseh et al. 2011). In fact, we suggest here that these early human deviance-related responses, in the MLR and cABR latency ranges and generating structures, are the putative correlates of the single-neuron activity revealed in SSA experiments, and that MMN generation lies downstream of these deviance-related activities. However, further experiments should aim at bridging the gap between the different temporal and neurophysiological scales of the single-neuron, cABR, MLR and MMN correlates of auditory deviance detection. This could be achieved by simultaneously recording scalp potentials, epidural potentials, local-field potentials or multi-unit activity in animals, or in neurological patients, with the paradigms discussed here.
With regard to the simple-feature deviance-detection experiments, two aspects are worth mentioning. First, appropriate methodological controls must be implemented with care. To begin with, the stimulus parameters have to be carefully selected, as the early (ABR, cABR and MLR) AEPs are remarkably sensitive to the physical dimensions of the stimulus (see Fig. 1). This requires comparing the response to the oddball stimulus with the response to the same physical stimulus presented with high probability in the so-called reversed block. It is also important to control for stimulus probability, so that any effect can no longer be explained by mere refractoriness. This is achieved by implementing the so-called controlled block, in which the deviant stimulus is presented amongst other, different equiprobable stimuli.
Second, an intriguing question is why the deviance-related effects observed in the reviewed experiments emerge in different components of the MLR, namely Nb for frequency (although sometimes Pa), Na for location, and the Na–Pa transition for intensity. There is no straightforward explanation for these dissociations. One possibility is that the specific parameters used in the different experiments, such as the stimulus type (click, pure tone, broadband noise, chirp), the stimulus-onset asynchrony, and even the presentation mode (free field through loudspeakers, or through headphones, binaurally or monaurally), influence the latency of the effects revealed in the different experiments (i.e., on Na, Pa, etc.). An alternative explanation is that the components showing deviance-related effects actually reflect the activity of neural populations encoding those specific features, so that, for instance, Na generators would encode location information and Nb generators pitch information, as suggested elsewhere (Woods et al. 1995; Alho et al. 2012).
The experiments that have attempted to find early correlates of complex regularity violations have so far provided negative results, even when using controls that were appropriate to reveal early traces of deviance detection for simple feature violations. Indeed, while violations of feature conjunctions, pattern alternations and a global rule (defined as three tones of frequency A plus one tone of frequency B) failed to elicit early correlates of deviance detection, they nevertheless elicited sizeable MMNs. These results clearly indicate that the encoding of complex acoustic regularities requires high-order regions of the auditory hierarchy, beyond the primary auditory cortex and the areas around Heschl’s gyrus where the MLR components are generated (Yvert et al. 2001, 2005).
In summary, the human studies reviewed here, together with the animal studies on SSA, support the emerging view that deviance detection based on regularity encoding is a basic principle of the functional organization of the auditory system (see Grimm and Escera 2012; Escera and Malmierca 2013). Moreover, the fact that violations of complex regularities fail to elicit any trace of deviance detection at the early processing stages, while clear MMNs are elicited, suggests that deviance detection based on regularity encoding is organized in ascending levels of complexity along the auditory pathway, extending from the brainstem up to the higher-order areas of the auditory cortex. We are convinced that this novel theoretical point of view has opened new research avenues towards the understanding of human auditory function.
This work was supported by the Spanish Ministry of Economy and Knowledge: Project PSI2012-37174, Programa Euroinvestigación-EUI2009-04086 awarded to the ERANET-NEURON Project PANS, and Consolider-Ingenio 2010 program (CDS2007-00012). Funds were also received from a grant from the Catalan Government (SGR2009-11) and the ICREA Academia Distinguished Professorship awarded to Carles Escera.