
Brain Topography, Volume 31, Issue 6, pp 972–984

Increased Early Sensitivity to Eyes in Mouthless Faces: In Support of the LIFTED Model of Early Face Processing

  • Roxane J. Itier
  • Frank Preston
Original Paper

Abstract

The N170 ERP component is a central neural marker of early face perception usually thought to reflect holistic processing. However, it is also highly sensitive to eyes presented in isolation and to fixation on the eyes within a full face. The lateral inhibition face template and eye detector (LIFTED) model (Nemrodov et al. in NeuroImage 97:81–94, 2014) integrates these views by proposing a neural inhibition mechanism that perceptually glues features into a whole, in parallel to the activity of an eye detector that accounts for the eye sensitivity. The LIFTED model was derived from a large number of results obtained with intact and eyeless faces presented upright and inverted. The present study provided a control condition to the original design by replacing eyeless with mouthless faces, thereby enabling testing of specific predictions derived from the model. Using the same gaze-contingent approach, we replicated the N170 eye sensitivity regardless of face orientation. Furthermore, when eyes were fixated in upright faces, the N170 was larger for mouthless compared to intact faces, while inverted mouthless faces elicited smaller amplitude than intact inverted faces when fixation was on the mouth and nose. The results are largely in line with the LIFTED model, in particular with the idea of an inhibition mechanism involved in holistic processing of upright faces and the lack of such inhibition in processing inverted faces. Some modifications to the original model are also proposed based on these results.

Keywords

Faces · Eyes · N170 · ERPs · Gaze-contingent procedure · Inhibition

Introduction

Understanding the neural activity underlying the perception and recognition of faces has been a major focus in Cognitive Neuroscience over the past few decades, due to the central role faces bear in many aspects of social cognition. Studies have used the Event-Related Potential (ERP) technique to study the temporal dynamics of face processing, and many have focused on the first 500 ms of visual processing during which numerous facial cues are extracted. In particular, since its discovery over 20 years ago (Bentin et al. 1996; George et al. 1996), the scalp-recorded N170 has been one of the most studied ERP components in the field. This negative component is seen between 130 and 200 ms after the onset of face presentations over occipito-temporal sites (see Eimer 2011; Rossion and Jacques 2012 for reviews on this component). The most widespread view is that the N170 represents the earliest reliable stages of face perception, where features are perceptually “glued” into an indecomposable holistic whole (Eimer 2000b; Sagiv and Bentin 2001). Holistic processing is regarded as the hallmark of efficient face processing (Rossion 2009) while other visual objects seem processed mostly on the basis of their parts (featural processing, Tanaka and Gordon 2011). The N170 is thus considered an early marker of this face-specific holistic processing, a view in part due to its sensitivity to plane inversion. Indeed, inversion impacts the perception and recognition of faces more so than objects and this Face Inversion Effect (FIE) is assumed to reflect disruption to holistic processing (Maurer et al. 2002; Rossion 2009). At the neural level, the N170 is larger and delayed for inverted compared to upright faces (Bentin et al. 1996; Eimer 2000a; Itier and Taylor 2002; Rossion et al. 2000) while no such effect is seen for inverted objects or nonhuman faces (de Haan et al. 2002; Itier et al. 2006, 2007, 2011; Rossion et al. 2000; Wiese et al. 2009). 
The exact mechanisms behind these modulations are, however, still unclear, despite the N170 FIE being one of the most replicated phenomena in the field.

The holistic processing view of the N170 component contrasts with other evidence supporting an early sensitivity to the eyes, the N170 being larger for isolated eye regions compared to full faces [e.g. (Bentin et al. 1996; Itier et al. 2006, 2007, 2011; Kloth et al. 2013; Taylor et al. 2001)]. This eye sensitivity does not appear to be a general feature sensitivity or a simple sensitivity to the disruption of the face configuration, as the N170 elicited by isolated noses or mouths is either smaller than, or of similar amplitude as that elicited by faces, and smaller than the N170 elicited by eye regions (Bentin et al. 1996; Nemrodov and Itier 2011; Taylor et al. 2001). No such effect is seen with objects such as car fronts from which car lights were taken out to mimic the eye region (Kloth et al. 2013), suggesting it is not a mere part-whole effect and seems particular to faces. Recent studies have shown that an eye sensitivity is also seen within full faces. Controlling gaze fixation on the features of whole faces by means of a gaze-contingent eye-tracking procedure revealed that the N170 was larger when fixation was on an eye compared to the nose or mouth (de Lissa et al. 2014; Nemrodov et al. 2014; see also; McPartland et al. 2010 for similar results using a cuing procedure), and this eye sensitivity has been found regardless of the facial expression (Neath and Itier 2015; Neath-Tavares and Itier 2016). Other studies, using reverse correlation and classification techniques, have also reported a sensitivity for the contralateral eye even before the N170 peak, between the P1 and the N170 (Rousselet et al. 2014; Schyns et al. 2003).

In an attempt to reconcile the holistic and the eye sensitivity accounts of the N170 component, Nemrodov et al. (2014) proposed the Lateral Inhibition Face Template and Eye Detector (LIFTED) model. The LIFTED model assumes that the N170 reflects the neural activity generated in face sensitive areas. It also assumes an existing upright human face template against which the visual percept is compared, and an eye detector mechanism that enables the eyes to become anchor points from which the position and distance of the other features are coded. The key mechanism proposed by the model is the neural inhibition of neurons coding visual information at the fovea by neurons coding the parafoveal information (Fig. 1). These “lateral inhibitions” would take place only for upright faces in front view, once the eyes have been detected and the other features coded in the correct arrangement in comparison to these anchor points (e.g. nose below the eyes and mouth below the nose or, in other words, the typical upright face configuration based on the face template). This neural inhibition would decrease the over-representation of the fovea that is assumed to arise with cortical magnification, allowing an equal representation of the facial information in fovea and in parafovea. This inhibition is the neural mechanism that enables the perceptual “gluing” of the features together, i.e. holistic processing of upright faces. In inverted faces, the face configuration is disrupted, so the inhibition mechanism does not kick in. In this framework, the N170 amplitude increase that is typically seen for inverted faces compared to upright faces (Itier and Taylor 2002; Rossion et al. 2000) is explained in terms of the neural activity elicited by the foveal information combined with the neural activity elicited by the parafoveal information.

Fig. 1

Predictions regarding the modulations of the N170 component recorded to intact and mouthless faces depending on fixation location. Those predictions are derived from the LIFTED model (Nemrodov et al. 2014) whose core mechanism is schematized. Lateral inhibitions coming from neurons coding for parafoveal visual information onto the foveated visual information are represented by the red arrows and are strong or weak depending on the distances to fovea (here represented by the small blue circle; parafovea is represented by the larger blue circle). Neural inhibition occurs only in upright faces. See text for details on the LIFTED model and its predictions. The green check marks indicate which predictions were supported by the results, and the red cross indicates the prediction that was not supported by the results

The LIFTED model was originally developed based on a large set of ERP results obtained with the use of intact faces and eyeless faces presented upright and upside down (Nemrodov et al. 2014). The use of eyeless faces was central to the development of the model as it revealed the behaviour of the N170 when the eyes were or were not in fovea or in parafovea, and whether this varied depending on face orientation. However, because no other control condition was used, from this study alone, it is not possible to ascertain whether the results obtained reflect specific mechanisms disrupted by the removal of the eyes per se, or are simply due to the removal of a facial feature. To address this gap, the main goal of the present study was to provide a control condition to the eyeless face category previously used. We chose to use mouthless faces given the mouth is also an important face feature and is the second most salient feature after the eyes (Shepherd et al. 1981). In order to directly compare our results to the Nemrodov et al. (2014) study, we used the exact same gaze-contingent design, task and face stimuli, with the sole exception of mouthless faces replacing the previously used eyeless faces. This allowed us to replicate, in a larger sample, the main findings of Nemrodov et al. (2014) concerning intact faces, and also to test a few predictions concerning mouthless faces, based on the LIFTED model.

First, we expected larger N170 amplitude when fixation was on the eyes compared to anywhere else on the face. This eye sensitivity was expected for both upright and inverted intact faces (prediction #1), as reported by Nemrodov and colleagues (2014), reflecting a true eye sensitivity rather than an upper/lower visual field effect (e.g. Zerouali et al. 2013).

Second, we sought to test whether, like the eyes, the removal of the mouth would lead to an amplitude reduction when the mouth is in fovea (fixated) in upright faces. In the Nemrodov et al. (2014) study, fixation on the area where the eyes would normally be elicited a reduced N170 amplitude in eyeless faces compared to intact faces. This finding led to the conclusion that the presence of an eye at the fovea was necessary to reveal the eye sensitivity, as that sensitivity disappeared when eyes were removed. The question is whether this effect was due to the removal of the eyes per se, or whether the removal of any feature from fovea would also lead to a decreased amplitude. According to the LIFTED model, the reduction in N170 amplitude for eye fixations in eyeless faces is specific to the eyes, because eyes are special and elicit a larger response than any other feature. Besides the eye sensitivity, the model assumes holistic processing of the rest of the face, and fixating anywhere else on the face should elicit a similar N170 amplitude, as there is enough facial information to promote this holistic processing. Accordingly, when fixation is on the mouth in an upright mouthless face, the N170 amplitude should remain unchanged compared to the same fixation in an intact face (prediction #2; Fig. 1).

Third, we sought to test further the inhibition mechanism central to the LIFTED model. According to the model, when fixation is on one feature in an upright intact face, the neurons coding for that feature in fovea are inhibited by the neurons coding for the other facial features situated in parafovea. When the nose is fixated, the neurons coding for that nose should thus be inhibited by the neurons coding for the rest of the face, including neurons coding for the mouth. Therefore, when the mouth is lacking in mouthless faces, this “nose inhibition” should be diminished because mouth-coding neurons no longer contribute to the inhibition. This leads to the prediction that, when fixation is on the nose in upright mouthless faces, the N170 should be larger than when fixation is on the nose in intact faces (prediction #3), as the response to this nose should be less inhibited in mouthless compared to intact faces (Fig. 1). This inhibition logic accounted for the unusual finding in the Nemrodov et al. (2014) study of larger N170 for nose fixation in eyeless compared to intact faces, explained by a diminished inhibition of nose-coding neurons once the eyes were removed (as eye-coding neurons would no longer contribute to this inhibition). The same logic should apply for fixation on one eye, with N170 amplitude expected to be larger for eye fixation in mouthless compared to intact faces, as mouth-coding neurons would no longer contribute to the inhibition of the eye. However, the LIFTED model proposes that the strength of the inhibition depends on the angular distance between the foveated feature and the features situated in parafovea. The mouth is the farthest away from the eyes, and that distance has been invoked to explain some of the findings in Nemrodov et al. (2014). It is thus possible that the mouth-coding neurons do not in fact inhibit eye-coding neurons with those stimuli (identical size as in the original study), in which case the N170 amplitude for eye fixations would remain the same between intact and mouthless faces (prediction #4; Fig. 1).

Finally, we sought to test further the LIFTED model idea according to which the increase in N170 amplitude seen for inverted faces (Bentin et al. 1996; Eimer 2000a; Itier and Taylor 2002; Rossion et al. 2000), the N170 FIE, is the result of the activity elicited by the feature in fovea combined with the activity elicited by the features in parafovea, as there is no more inhibition with inversion. When fixation is on the mouth in inverted intact faces, the N170 amplitude should represent the activity of the mouth-coding neurons combined with the activity of the neurons coding for the other features (Fig. 1). The main prediction is that in inverted mouthless faces, the mouth-coding neurons no longer contribute to this activity so when fixation is on the mouth, the N170 amplitude should be smaller for inverted mouthless compared to inverted intact faces (prediction #5). This amplitude reduction would be in sharp contrast to the lack of amplitude variation between intact and mouthless faces predicted in the upright format for the same mouth fixation condition, as described above. The same logic applies for nose fixation which is expected to elicit smaller amplitude for inverted mouthless than inverted intact faces (prediction #6). However, for an eye fixation, again, the possibility remains that the distance between eyes and mouth is too large for mouth-coding neurons to contribute to the overall signal, especially given the model’s idea that eyes elicit the strongest response in the first place, which could mask a weak contribution of mouth-coding neurons in intact faces. Thus, for eye fixations, the N170 amplitude may be slightly smaller for inverted mouthless than inverted intact faces, like for the other fixations, or, most probably, not different between the two conditions (prediction #7; Fig. 1).

Materials and Methods

Participants

Undergraduate students from the University of Waterloo (UW) participated in exchange for course credit; the study was approved by a UW Research Ethics Committee. Participants reported normal or corrected-to-normal vision, no history of head injury or neurological disease, and no medication use, and provided written informed consent prior to the study. Of the initial 48 participants tested, 14 were rejected due to eye-tracking calibration issues (N = 7), too many eye movements picked up by the eye tracker (N = 2), too few blocks completed (N = 3), or too many artifacts (N = 2), the latter three reasons resulting in too few trials per condition. The final sample included 34 participants (20.3 ± 1.5 years, 21 female, 30 right-handed).

Stimuli

The intact faces were the exact same ones used in Nemrodov et al. (2014) and comprised 20 male and 20 female identities originally created using FACES™ 4.0 (IQBiometrix Inc) by combining different features displayed in the same location within the same bald outline. Mouth removal was done in Adobe™ Photoshop CS5 and inverted stimuli were created by rotating all stimuli by 180°. Stimuli subtended 9.5° × 13.6° of visual angle and were presented on a white background (Fig. 2).
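For readers who want to reproduce the display geometry: given the 70 cm viewing distance reported in the Procedure, the on-screen stimulus size implied by these visual angles can be recovered with basic trigonometry. The centimetre values below are derived here for illustration, not reported in the paper.

```python
import math

def deg_to_cm(deg, distance_cm):
    """On-screen extent (cm) of a stimulus subtending `deg` degrees
    of visual angle at viewing distance `distance_cm`."""
    return 2 * distance_cm * math.tan(math.radians(deg) / 2)

# 9.5 deg x 13.6 deg at the 70 cm viewing distance used in this study
width_cm = deg_to_cm(9.5, 70)    # roughly 11.6 cm
height_cm = deg_to_cm(13.6, 70)  # roughly 16.7 cm
```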

Fig. 2

Left panel: examples of a mouthless face presented at each fixation location in the inverted and upright conditions. Each white rectangle represents the monitor in the center of which participants fixated. Each face was presented offset so that gaze fixated the facial portion of interest (forehead, nasion, left eye, right eye, tip of the nose, or mouth). Eye positions are from a viewer perspective (i.e. left eye is on the left of the picture). Right panel: angular distances between facial features and stimulus size, on an intact face example. For each fixation location condition, trials were rejected if gaze fixation was outside the 1.8° interest areas centered on each location (yellow circles)

Intact and mouthless faces were presented upright and inverted with six fixation-locations (forehead, nasion, left-eye, right-eye, nose, mouth). Upright and inverted house pictures were also presented as in Nemrodov et al. (2014), but were not analyzed and will not be reported here (please see Nemrodov et al. (2014) for upright and inverted house results and their Fig. 1 for examples of the house stimuli). There were a total of 24 conditions of interest (6 fixation locations × 2 face categories × 2 orientations).

Design and Procedure

Participants performed an orientation-detection task in a dimly lit, sound-attenuated Faraday-cage booth. Stimuli were presented using Experiment Builder (SR Research, http://sr-research.com) on a CRT monitor 70 cm in front of participants, whose head movements were restricted by the use of a chin-rest. Two game-controller buttons were used, one for upright and one for inverted stimuli (buttons were counterbalanced across participants). Instructions emphasized that participants were to remain fixated on a centered fixation cross, avoid moving their eyes, and respond as quickly and accurately as possible. Practice trials were given at the beginning to familiarize participants with the task. Stimuli were presented offset so as to center the desired feature on the centered fixation cross (Fig. 2).

Using an SR Research EyeLink 1000 eye tracker sampling at 1000 Hz, a gaze-contingent procedure required participants to fixate the cross for 250 ms before the stimulus could be presented. The faces were then presented for 250 ms, followed by a response screen for 900 ms. If 10 s elapsed without the fixation trigger being satisfied, a drift correction was engaged. After two drift corrections, a mid-block recalibration was performed. A nine-point automatic calibration was used at the beginning of every block with the participant's dominant eye (as determined by the Miles test). Across the eight blocks presented, there were 80 trials per condition (10 pictures per condition in each block). Breaks were given between blocks as needed.
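The trigger logic above was implemented in Experiment Builder using the EyeLink's own fixation triggers; the following is only an illustrative pure-Python sketch of the rule (250 ms of continuous fixation within the interest area at one sample per millisecond, with a drift correction after a 10 s timeout), not SR Research code.

```python
# Illustrative sketch (not vendor code) of the gaze-contingent trigger rule,
# assuming one gaze sample per millisecond (1000 Hz).

FIX_RADIUS_DEG = 0.9      # half of the 1.8 deg interest area
TRIGGER_MS = 250          # required continuous fixation before stimulus onset
TIMEOUT_MS = 10_000       # drift correction after 10 s without a trigger

def run_trigger(gaze_samples):
    """gaze_samples: iterable of gaze distances (deg) from the fixation
    cross, one per ms. Returns ('trigger', t) once 250 consecutive
    in-area samples are seen, or ('drift_correct', t) after 10 s."""
    consecutive = 0
    for t, dist in enumerate(gaze_samples):
        consecutive = consecutive + 1 if dist <= FIX_RADIUS_DEG else 0
        if consecutive >= TRIGGER_MS:
            return ('trigger', t)
        if t + 1 >= TIMEOUT_MS:
            return ('drift_correct', t)
    return ('no_event', None)
```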

Electrophysiological Recordings

The EEG was continuously acquired at 516 Hz with a 64-channel BioSemi ActiveTwo system. Two extra electrodes (PO9/PO10) were embedded in the custom electrode cap (extended 10/20 system) and three pairs of additional electrodes were used (at the outer canthi and infra-orbital ridges to monitor eye movements, and over the mastoids), for a total of 72 recording sites. During recording, a Common Mode Sense (CMS) active electrode and a Driven Right Leg (DRL) passive electrode acted as the ground, and the average reference was computed offline. Electrode direct-current offsets were kept under 20 mV, as recommended by the manufacturer.

Data Analysis

Trials with incorrect responses or with micro-saccades made beyond the pre-defined Interest Area (IA) of 1.8° centred on the fixation cross were discarded. The IA size was identical to that of Nemrodov et al. (2014) and covered non-overlapping portions of the faces (Fig. 2). Data were analyzed using the EEGLAB (Delorme and Makeig 2004) and ERPLAB (http://erpinfo.org/erplab) toolboxes in Matlab. EEG data were epoched in − 100 to + 400 ms segments around face onsets and band-pass filtered (0.01–30 Hz). Trials with artifacts exceeding ± 70 µV were automatically rejected; trials were further inspected visually and those with remaining artifacts were manually rejected. Across participants and conditions, the average number of trials was 57.7 (SD = 13.2) out of the 80 initial trials per condition. Bonferroni-corrected paired t tests revealed no significant difference in trial numbers between conditions, except for the inverted intact face forehead fixation condition, which included a lower number of trials compared to the upright intact face nose (p = 0.026), upright intact face nasion (p = 0.023) and inverted intact face nose (p = 0.029) conditions (see Table 1 for mean trial numbers per condition).
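The automatic rejection step was performed in ERPLAB; purely as an illustration, the ± 70 µV criterion on epoched data amounts to something like the NumPy sketch below (the array shape and planted artifact are hypothetical, not the study's data).

```python
import numpy as np

def reject_artifacts(epochs, threshold_uv=70.0):
    """epochs: array (n_trials, n_channels, n_samples) in microvolts,
    already epoched from -100 to +400 ms around face onset.
    Drops any trial whose absolute voltage exceeds the threshold on
    any channel, mirroring the +/-70 uV automatic criterion."""
    peak = np.abs(epochs).max(axis=(1, 2))   # worst excursion per trial
    keep = peak <= threshold_uv
    return epochs[keep], keep

# Hypothetical data: 5 trials, 72 channels, 258 samples (516 Hz, 500 ms)
rng = np.random.default_rng(0)
data = rng.normal(0, 10, size=(5, 72, 258))
data[2, 0, 100] = 150.0                      # plant one artifact
clean, keep = reject_artifacts(data)         # trial 2 is dropped
```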

Table 1

Mean number of trials for each of the 24 conditions (displayed by fixation locations, face type and orientation), with standard deviations in parentheses

Fixation location   Mouthless inverted (SD)   Intact inverted (SD)   Mouthless upright (SD)   Intact upright (SD)
Forehead            55.8 (13.9)               53.8 (13.7)            54.8 (14.6)              55.2 (14.8)
Nasion              58.1 (12.6)               58.2 (12.0)            59.4 (11.7)              60.2 (12.7)
Nose                60.0 (12.1)               59.8 (12.3)            58.8 (12.5)              59.4 (12.6)
Mouth               59.7 (13.1)               59.3 (11.9)            56.6 (13.8)              57.0 (14.2)
Left eye            59.2 (13.0)               58.9 (13.1)            57.9 (13.7)              56.6 (15.5)
Right eye           57.0 (14.5)               54.6 (14.5)            57.4 (11.3)              57.3 (13.4)

Using automatic peak detection between 120 and 220 ms, the N170 amplitudes were extracted at the electrode at which the component peak was maximal for each participant. Table 2 gives the number of participants for whom the peak was extracted at a given electrode (see Itier and Neath-Tavares 2017; Neath and Itier 2015; Neath-Tavares and Itier 2016 for similar approaches). Given our hypotheses, repeated measures analyses of variance (ANOVA) were run separately for upright and inverted orientations, using the within-subject factors of Face Category (2: intact, mouthless), Fixation location (6: forehead, nasion, left-eye, right-eye, nose, mouth) and Hemisphere (2: left, right). Greenhouse-Geisser adjusted degrees of freedom were used when necessary. Pairwise comparisons were Bonferroni corrected. Data were analysed using SPSS Statistics 21.
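The per-participant peak extraction described above can be sketched as follows (illustrative NumPy code, not the actual analysis pipeline; the waveforms and candidate electrode set below are hypothetical).

```python
import numpy as np

def extract_n170(erp, times, candidates, window=(0.120, 0.220)):
    """erp: dict mapping electrode name -> 1-D ERP waveform (uV);
    times: 1-D array of epoch times (s) aligned with the waveforms.
    Returns (electrode, amplitude) for the most negative peak within
    the 120-220 ms window across candidate occipito-temporal sites."""
    mask = (times >= window[0]) & (times <= window[1])
    peaks = {ch: erp[ch][mask].min() for ch in candidates}
    best = min(peaks, key=peaks.get)          # most negative peak wins
    return best, peaks[best]

# Hypothetical example: N170-like dips of different depths at 170 ms
times = np.linspace(-0.1, 0.4, 258)
make = lambda amp: amp * np.exp(-((times - 0.17) / 0.02) ** 2)
erp = {'PO9': make(-6.0), 'P9': make(-8.5), 'PO7': make(-4.0)}
ch, amp = extract_n170(erp, times, ['PO9', 'P9', 'PO7'])
```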

Table 2

Number of subjects for whom the N170 was maximal at left and right occipitotemporal electrodes

Left hemisphere electrodes (N = 34)   Right hemisphere electrodes (N = 34)
PO9    15                             PO10   13
P9     11                             P10    10
PO7     6                             PO8     6
P7      1                             P8      5
TP9     1

Results

Behavioural Results

Participants categorized upright and inverted stimuli well, with an overall hit rate of 91%. A 2 Orientation (upright, inverted) × 2 Face Category (intact, mouthless) × 6 Fixation location (forehead, nasion, left eye, right eye, nose, mouth) ANOVA on correct responses revealed only a main effect of Fixation [F(1.5, 49.5) = 4.45, p = 0.025, ηp2 = 0.119], with overall best responses when fixation was on the nose or mouth; however, none of the paired comparisons were significant.

Participants responded faster to upright than inverted stimuli [effect of Orientation, F(1,33) = 7.25, p = 0.011, ηp2 = 0.180], and this inversion effect was least pronounced when fixation was on the mouth, and most pronounced when fixation was on the nose (Orientation × Fixation location, F(3.89,128.44) = 3.56, p = 0.009, ηp2 = 0.097). No other effects were found.

N170 Amplitude

We first ran an omnibus repeated measures ANOVA using Hemisphere (2: left, right), Orientation (2: upright, inverted), Face Category (2: intact, mouthless) and Fixation location (6: forehead, nasion, left eye, right eye, nose, mouth) as within-subject factors, as was originally done by Nemrodov et al. (2014).

No main effect of Face Category was found (F = 0.61, p = 0.43, ηp2 = 0.018) but the main effects of fixation location [F(2.4,78.8) = 36.77, p < 0.0001, ηp2 = 0.53], orientation [F(1,33) = 109.1, p < 0.0001, ηp2 = 0.768] and hemisphere [F(1,33) = 7.71, p = 0.009, ηp2 = 0.189] were significant. These effects were modulated by significant interactions between fixation location and face category [F(5,165) = 8.68, p < 0.0001, ηp2 = 0.21], orientation and face category [F(1,33) = 22.69, p < 0.0001, ηp2 = 0.407], orientation and fixation location [F(3.9,130.4) = 6.01, p < 0.0001, ηp2 = 0.154], hemisphere and fixation location [F(2.1,67.8) = 6.78, p = 0.002, ηp2 = 0.17], hemisphere and orientation [F(1,33) = 9.93, p = 0.003, ηp2 = 0.231] and by a three-way interaction between hemisphere, fixation location and orientation [F(3.5,116.7) = 8.69, p < 0.0001, ηp2 = 0.209].

Given all these interactions, we analyzed upright and inverted faces separately to test our specific predictions.

Replicating the Eye Sensitivity and Testing the Inhibition Mechanism: Upright Intact Versus Upright Mouthless Faces

We assessed the impact of removing the mouth in upright faces using a 2 (Hemisphere) × 2 (Face Category: intact vs. mouthless faces) × 6 (Fixation location: left eye, right eye, nose, mouth, nasion and forehead) repeated measures ANOVA.

The main effect of Fixation location [F(3.4,112.8) = 21.76, p < 0.0001, ηp2 = 0.397] was due to larger amplitudes for the left and right eyes (Figs. 3, 4) compared to all other fixation locations (0.001 ≤ ps ≤ 0.015 for all Bonferroni-corrected paired comparisons). Paired comparisons also revealed that fixation on the forehead elicited the smallest amplitude (0.001 ≤ ps ≤ 0.05 for paired comparisons) while the N170 was not statistically different for fixations on the nasion, nose and mouth.

Fig. 3

Mean group N170 waveforms obtained by averaging the N170 across left hemisphere (LH) and right hemisphere (RH) electrodes at which it was recorded maximally for each participant, displayed for intact upright (upper panels) and mouthless upright (lower panels) faces. Note the larger N170 for eye fixations compared to the other fixation locations

Fig. 4

a Mean N170 peak amplitude (with standard errors to the means) for upright and inverted intact and mouthless faces at each fixation location (averaged across both hemispheres). Note the largest amplitudes for eyes regardless of orientation and face type, and the clear inversion effect for each face type. b Direct comparison of the N170 amplitude between intact and mouthless faces depending on fixation location, for upright (left) and inverted (right) orientations. Note the larger amplitudes for mouthless than intact faces at nasion and eye fixations in upright faces, and the smaller amplitudes for mouthless than intact faces at nose and mouth fixations for inverted faces (Asterisk represents a statistically significant difference)

The main effect of Face Category [F(1,33) = 10.39, p = 0.003, ηp2 = 0.239] reflected an overall slightly larger N170 amplitude for mouthless than for intact upright faces. Most importantly, the Face Category by Fixation location interaction [F(4.61,152.1) = 5.2, p < 0.001, ηp2 = 0.136] reflected a more pronounced effect of Fixation for mouthless than intact faces, driven by the largest responses to the eyes in mouthless faces (Fig. 4b). Separate analyses (2 Face Category × 2 Hemisphere ANOVAs) for each fixation location confirmed significantly larger N170 for mouthless than intact faces for nasion fixation [F(1,33) = 12.6, p = 0.001, ηp2 = 0.136], left eye fixation [F(1,33) = 24.9, p < 0.001, ηp2 = 0.431] and right eye fixation [F(1,33) = 6.23, p = 0.018, ηp2 = 0.159], but no significant differences between face categories for forehead (F = 0.019, p = 0.89, ηp2 = 0.001), nose (F = 2.15, p = 0.151, ηp2 = 0.061) and mouth (F = 1.33, p = 0.257, ηp2 = 0.039) fixations (Fig. 4b).

No main effect of hemisphere was found (F = 3.3, p = 0.078, ηp2 = 0.091) but the hemisphere by fixation interaction was significant [F(3.1,102.4) = 8.025, p < 0.0001, ηp2 = 0.196]. Separate analysis of each fixation revealed larger amplitude for the right than left hemisphere for the right eye [F(1,33) = 13.44, p = 0.001, ηp2 = 0.289] and mouth fixations [F(1,33) = 5.73, p = 0.023, ηp2 = 0.148], while no effect of hemisphere was seen for the other fixations (left eye: F = 1.83, p = 0.18; nose: F = 2.59, p = 0.11; nasion: F = 0.94, p = 0.33; forehead: F = 0.001, p = 0.97). The three-way interaction between Hemisphere, Fixation and Face Category, was not significant (F = 0.84, p = 0.50, ηp2 = 0.025).

In summary, this analysis revealed a clear eye sensitivity for both intact and mouthless upright faces (Figs. 3, 4). However, this eye sensitivity was even more pronounced in mouthless faces, to the point that the N170 was larger for mouthless than intact faces when fixation was on the eyes and nasion. In other words, taking the mouth out of an upright face elicited an increase in N170 amplitude but only for eye and nasion fixations.

Testing the Lack of Inhibition with Inversion: Inverted Intact Versus Inverted Mouthless Faces

We assessed the impact of removing the mouth in inverted faces using a 2 (Hemisphere) × 2 (Face Category: intact vs. mouthless faces) × 6 (Fixation location: left eye, right eye, nose, mouth, nasion and forehead) repeated measures ANOVA.

The main effect of Fixation location [F(3.5,117.4) = 30.15, p < 0.0001, ηp2 = 0.477] was due to overall largest amplitudes for the eyes while the main effect of Face Category reflected slightly smaller amplitudes for mouthless than for intact faces [F(1,33) = 8.99, p = 0.003, ηp2 = 0.214]. However, these effects were modulated by a Face Category by Fixation location interaction [F(4.01,132.4) = 5.37, p < 0.001, ηp2 = 0.140], which reflected smaller N170 amplitude for mouthless compared to intact faces only for nose and mouth fixations (Fig. 4b). Separate analyses (2 Face Category × 2 Hemisphere) for each fixation location confirmed significantly smaller N170 for mouthless than intact faces for nose [F(1,33) = 20.47, p < 0.001, ηp2 = 0.383] and mouth [F(1,33) = 14.67, p < 0.001, ηp2 = 0.308] fixations, but no effect of face category for left eye (F = 2.61, p = 0.11, ηp2 = 0.073), right eye (F = 1.83, p = 0.185, ηp2 = 0.053), forehead (F = 0.67, p = 0.41, ηp2 = 0.02) and nasion (F = 0.45, p = 0.5, ηp2 = 0.014) fixations.

Amplitudes were overall larger over the right hemisphere [main effect of Hemisphere, F(1,33) = 11.12, p = 0.002, ηp2 = 0.252], although this was seen for all fixation locations except the forehead fixation [Hemisphere by Fixation location interaction, F(3.03,100.08) = 7.13, p < 0.0001, ηp2 = 0.178]. Separate analysis of each fixation location confirmed larger amplitudes on the right compared to the left hemisphere for nasion (F = 7.4, p = 0.01, ηp2 = 0.184), nose (F = 7.3, p = 0.01, ηp2 = 0.182), mouth (F = 6.5, p = 0.015, ηp2 = 0.166), left eye (F = 31.1, p < 0.001, ηp2 = 0.486) and right eye (F = 9.1, p = 0.005, ηp2 = 0.217) fixations, but a lack of hemisphere effect for forehead fixation (F = 3.08, p = 0.088, ηp2 = 0.085). The three-way interaction between Hemisphere, Fixation and Face Category was not significant (F = 1.41, p = 0.22, ηp2 = 0.041).

In summary, this analysis confirmed that the eye sensitivity was seen for both types of inverted faces. Most importantly, it revealed smaller amplitude for mouthless than intact inverted faces but only when fixation was on the nose and the mouth. In other words, taking the mouth out of an inverted face triggered a reduction in N170 amplitude but only for nose and mouth fixations. This effect contrasts sharply with the results obtained for the upright format.

Note that the Orientation by Face Category by Fixation location interaction was not significant in the large omnibus ANOVA [F(4.3,145.0) = 1.95, p = 0.094, ηp2 = 0.057], which is the main difference from the Nemrodov et al. (2014) findings. However, we believe this reflects the small amplitude differences at play for each of the six fixation locations, especially given that the Fixation location by Face Category interactions were significant for each orientation analyzed separately. The significant Orientation by Face Category interaction in the omnibus ANOVA also confirmed opposite effects of face type as a function of orientation, which we likewise found when each orientation was analyzed separately, with overall larger amplitudes for mouthless than intact faces in the upright format and overall smaller amplitudes for mouthless than intact faces in the inverted format.

Finally, as done by Nemrodov et al. (2014), we also calculated N170 amplitude difference scores (inverted–upright) for each fixation location and face type to index the FIE (Fig. 4a). Bonferroni corrected paired comparisons between fixation locations were performed for each face type. For intact faces, the FIE was larger for nasion than forehead (p = 0.019) and mouth (p = 0.001) fixations but not significantly different between the other fixation locations. For mouthless faces, the FIE was larger for forehead and nasion fixations compared to nose (p = 0.032, and 0.005 respectively) and mouth fixations (p = 0.019 and 0.026 respectively). The other comparisons did not reach significance.
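The FIE index described above is simply the per-fixation difference in N170 amplitude between inverted and upright presentations. As a minimal sketch of that computation (the microvolt values below are hypothetical and chosen only to mimic the qualitative pattern reported here, not the study's actual data):

```python
# FIE difference score per fixation location: inverted minus upright N170
# amplitude. Amplitudes are hypothetical microvolt values for illustration;
# the N170 is a negative component, so a more negative score = larger FIE.
upright = {"forehead": -4.0, "nasion": -5.0, "left_eye": -6.5,
           "right_eye": -6.3, "nose": -5.1, "mouth": -5.0}
inverted = {"forehead": -6.2, "nasion": -7.9, "left_eye": -8.8,
            "right_eye": -8.5, "nose": -6.9, "mouth": -6.3}

def fie_scores(upr, inv):
    """Compute the face inversion effect (FIE) for each fixation location."""
    return {loc: inv[loc] - upr[loc] for loc in upr}

fie = fie_scores(upright, inverted)
# With these toy numbers the FIE is most negative at the nasion and least
# negative at the mouth, mirroring the gradient described in the text.
```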

Discussion

In this study we aimed at replicating the main findings from Nemrodov et al. (2014) regarding an eye sensitivity seen regardless of face orientation. We used the same design, task, gaze-contingent procedure and stimuli as Nemrodov et al. (2014), with the exception that eyeless faces were replaced with mouthless faces. This design allowed us to confirm previous results with intact faces and to test several predictions derived from the lateral inhibition, face template and eye detector (LIFTED) model proposed by the authors to reconcile the holistic processing and eye sensitivity accounts of the N170 ERP component.

Eye Sensitivity Within Faces

We replicated the eye sensitivity in upright faces (prediction #1), with the largest N170 for eye fixations compared to all other fixations (Nemrodov et al. 2014), as also reported in other studies using a gaze-contingent approach (de Lissa et al. 2014; Neath and Itier 2015; Neath-Tavares and Itier 2016). As in Nemrodov et al. (2014), the largest amplitudes for eye fixations were also seen when faces were inverted (prediction #1). These results argue against the upper/lower hemifield hypothesis (e.g. Zerouali et al. 2013), according to which sensitivity to the eyes would only be seen in upright faces because the eyes are then in the upper visual field, while a sensitivity to the mouth would be seen when faces are inverted, as the mouth would then be in the upper visual field. In contrast, the present data confirm a true eye sensitivity that is independent of face orientation, supporting the idea of an eye detector/processor at play around the N170 timing.

It is worth highlighting that, although the eye sensitivity seems to start as early as the P1 component (Fig. 3), at this early latency the effect is likely driven by changes in face position inherent to the design used, and by local low-level variations such as contrast, which is typically highest around the eyes. When fixation is on one eye, most of the face is situated in one hemifield; when fixation is on the mouth, the face is situated mostly in the upper visual field; and when fixation is on the forehead, the face is situated mostly in the lower visual field (see Fig. 1). These position variations impact the P1, as reflected by the largest P1 amplitude for mouth fixation and contralateral effects for eye fixations (for intact upright faces), visible most clearly at occipital sites O1 and O2 (Supplementary Figure). We have analyzed and reported those early effects in previous studies (see Neath and Itier 2015 and Neath-Tavares and Itier 2016 for P1 analyses at occipital sites, and the P1 analysis at P9/P10 in the supplementary data of Nemrodov et al. 2014) and refer the interested reader to these papers. The fact that the greater P1 amplitude for mouth fixation seen with intact faces disappears for mouthless faces (Supplementary Figure) also suggests that local low-level effects are at play at this latency, while this effect is not seen on the N170 (see discussion below). Other studies have suggested that eye-related information starts to be coded between the P1 and N170 peaks (Schyns et al. 2003; Rousselet et al. 2014). The present study, and the LIFTED model, both focus on the peak of the N170 component, at which time the eye sensitivity seems to reflect higher-level processes during which the eyes are integrated with the rest of the face.

Compared to full faces, isolated eye regions elicit a larger N170, while isolated mouths and noses usually elicit a smaller and delayed N170 (Bentin et al. 1996; Itier et al. 2006, 2007, 2011; Nemrodov and Itier 2011; Taylor et al. 2001), and this effect was sometimes taken as evidence for eye detector activity (e.g. Bentin et al. 1996). However, the eye detector idea was refuted on the grounds that the N170 amplitude did not vary when eyes were removed from an upright face (Eimer 1998), a result reported in several subsequent studies (Itier et al. 2007, 2011; Kloth et al. 2013). Nemrodov et al. (2014) demonstrated that the lack of amplitude variation with upright eyeless faces in fact depended on where participants fixated. In eyeless faces, when participants fixated the area where the eyes would normally be, the N170 amplitude was reduced compared to eye fixations in intact faces. In contrast, for the other fixation locations, the N170 did not vary between intact and eyeless faces, except for nose fixation, a result we come back to below. These results were taken as evidence that an eye needs to be in the fovea to elicit a maximal response. The present results support this idea, as even fixation on the nasion, although in between the two eyes, elicited the same amplitude as nose and mouth fixations in upright faces.

In contrast to Nemrodov et al. (2014), however, we found that fixation on the forehead elicited an N170 of smaller amplitude than all other fixations, a result that was not statistically significant in Nemrodov et al. (2014) although in the same direction (see their Fig. 5). Together the present results point to a gradient in amplitude as a function of fixation location, with the smallest N170 amplitudes for forehead fixation, intermediate amplitudes for nose, mouth and nasion fixations, and the largest amplitudes for eye fixations. This amplitude gradient is in line with the LIFTED model’s assumption that the eyes act as anchor points from which the rest of the face is coded (Nemrodov and Itier 2011; Nemrodov et al. 2014). However, it goes against a strict holistic processing view for upright faces at the level of the N170 component, assumed to happen in parallel with the eye sensitivity. A strict holistic view entails that all features are perceptually “glued” into an indecomposable whole; accordingly, the N170 amplitude should not vary as a function of fixation location on the face, whether on features or not (as on the forehead). The LIFTED model suggests that, besides the eye sensitivity, the rest of the face is processed holistically, which would predict the same amplitudes for all other, non-eye fixations (as originally found by Nemrodov et al. 2014). The present data show that the N170 is not only largest when fixation is on the eyes, but also varies across other face fixations, indicating that holistic processing is not uniform across face locations.

In the present study, the mouth was removed from the original intact faces to create mouthless faces. In addition to higher-level changes in face configuration, this removal introduced low-level changes in the overall image contrast and pixel intensity compared to intact faces, just as low-level changes were present for eyeless faces in the Nemrodov et al. (2014) study. However, just as in that study, N170 amplitude variations were registered for some fixation locations but not others, arguing against a general effect of those overall low-level variations. Here, when participants fixated on the mouth of an upright face, there was no significant change in the N170 amplitude between mouthless and intact faces (Fig. 4b; prediction #2), despite those low-level changes being right there in the fovea (and modulating the occipital P1, see Supplementary Figure). In contrast, the amplitude changes that were found concerned the eye-region fixations, despite the visual information in the fovea being kept identical between intact and mouthless faces for these conditions. These amplitude variations can only be explained by changes in the visual content of the parafovea, which is more in line with the inhibition mechanism described in the LIFTED model than with simple low-level effects.

Neural Inhibition of the Eyes

The LIFTED model proposes that when the face is in the normal upright template configuration, the neural inhibition mechanism kicks in and neurons coding for visual information at the fovea are inhibited by neurons coding for visual information in parafovea, allowing holistic processing to occur. We reasoned that if this were the case, then when fixation was on the eyes, there should be less inhibition from the parafovea in mouthless faces compared to intact faces, because neurons coding for the mouth would not take part in this inhibition (the mouth no longer being present). This diminished neural inhibition should result in a larger N170 for mouthless than intact faces for these eye fixation locations (prediction #4), which is what we found. The amplitude increase ranged from about 0.6 µV (right eye) to 1.1 µV (left eye) and was seen even for fixation on the nasion (0.7 µV), situated in between the two eyes, quite a large amplitude variation overall.

The same logic should apply for nose fixation (prediction #3), the nose being situated in between the eyes and the mouth. When fixation is on the nose, nose-coding neurons should be less inhibited by neurons coding for parafoveal information in mouthless faces compared to intact faces, due to the missing contribution of mouth-coding neurons. Although in the right direction (0.4 µV increase, Fig. 4b), this effect was not statistically significant for nose fixation in the present study. In the Nemrodov et al. (2014) study, the N170 amplitude was actually larger for eyeless compared to intact faces for nose fixation, a result interpreted as a diminished inhibition of the nose-coding neurons due to the missing contribution of eye-coding neurons to this inhibition. However, both eyes were removed in eyeless faces, while only one feature was missing in mouthless faces. The inhibition of nose-coding neurons might thus not be sufficiently diminished in mouthless faces to elicit a significant change in N170 amplitude. If that is the case, however, why was the inhibition sufficiently diminished for eye fixations?

The present results point to a possible differential inhibition based on which feature is fixated and which one is in parafovea. According to the feature saliency hypothesis (Shepherd et al. 1981), the eyes are the most salient feature, followed by the mouth and then by the nose. The differential inhibition might map onto this feature saliency. As eyes are so salient in upright faces, they likely need to be inhibited the most to promote holistic processing, an idea put forward by Nemrodov et al. (2014). This inhibition is not complete, though, as the N170 is largest for eye fixations in the first place (the eye sensitivity), and the LIFTED model proposes that this inhibition mechanism depends on the distances between features and thus on face size. In the present study we used faces of the same size as those used in Nemrodov et al. (2014) and kept the same viewing distance. Mouth-coding neurons might thus exert a stronger inhibition on eye-coding neurons than on nose-coding neurons, despite the nose being closer to the mouth than the eyes, because the eyes are more salient than the nose. This inhibition is strong enough that when it is cancelled in mouthless faces, the N170 amplitude increases for fixations on the eyes and nasion. In turn, the results of Nemrodov et al. (2014) might reflect the fact that eye-coding neurons strongly inhibit all other feature-coding neurons, including nose-coding neurons, such that the removal of the eyes diminished that inhibition enough to allow an increase in N170 amplitude for nose fixation in eyeless faces. Alternatively, the latter result might be due solely to the fact that two eyes were removed in eyeless faces, rather than just one. It is possible that removal of a single eye would not be sufficient to yield an increase in N170 amplitude for nose fixation, just as removal of the mouth was not sufficient in the present study, an idea that future studies could test.
Future studies will also need to test further whether feature saliency is indeed related to inhibition strength.
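The inhibition logic discussed in this section can be made concrete with a purely illustrative toy model. The feature "drives" and the inhibition constant `k` below are invented numbers; the LIFTED model does not specify them, and this uniform-`k` sketch deliberately omits the saliency-dependent weighting the text argues for:

```python
# Toy sketch of the upright-face inhibition idea: neurons coding the foveated
# feature are inhibited in proportion to the total parafoveal drive.
# All numbers are hypothetical, chosen only to show the qualitative logic.
FEATURE_DRIVE = {"left_eye": 3.0, "right_eye": 3.0, "nose": 1.0, "mouth": 1.5}

def upright_foveal_response(fixated, present, k=0.3):
    """Response to the foveated feature after lateral inhibition from the
    parafoveal features that are actually present in the image."""
    parafoveal = sum(FEATURE_DRIVE[f] for f in present if f != fixated)
    return FEATURE_DRIVE[fixated] - k * parafoveal

intact = set(FEATURE_DRIVE)
mouthless = intact - {"mouth"}
# Removing the mouth removes part of the parafoveal inhibition, so the
# response to a fixated eye increases (prediction #4). Note that with a
# uniform k the model predicts an equal-sized release for nose fixation,
# whereas the data suggest inhibition strength may instead scale with the
# saliency of the features involved.
```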

Lack of Neural Inhibition with Face Inversion: Contribution of Foveal and Parafoveal Information to the Neural Response to Inverted Faces

The last predictions derived from the LIFTED model that we sought to test concerned the contribution of foveal and parafoveal information to the N170 recorded to inverted faces. The LIFTED model assumes that the inhibition mechanism does not kick in when faces are presented upside-down, because the normal face configuration is disrupted. As the eyes are detected by the eye detector (starting earlier than the N170 peak) and the locations of the other features start being coded in relation to these anchoring points, the system presumably detects that the other features are situated in the wrong place compared to an upright face template, and the inhibition mechanism is not triggered. Therefore, the overall activity recorded on the scalp represents the activity elicited by the neurons coding for the foveated feature combined with the activity elicited by the neurons coding for the parafoveal features.
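This summation account can be sketched as a toy model. The feature drives and the foveal gain below are invented for illustration (the model specifies neither), and the sketch is linear, so it captures predictions #5-6 but not the nonlinear masking idea discussed for eye fixations:

```python
# Toy sketch of the no-inhibition (inverted-face) account: scalp activity is
# proxied by the sum of feature-coding responses, with the foveated feature
# weighted more strongly. All numbers are hypothetical.
FEATURE_DRIVE = {"left_eye": 3.0, "right_eye": 3.0, "nose": 1.0, "mouth": 1.5}

def inverted_response(fixated, present, foveal_gain=2.0):
    """Summed, uninhibited activity: foveal feature gets extra weight,
    parafoveal features contribute at full strength."""
    return sum((foveal_gain if f == fixated else 1.0) * FEATURE_DRIVE[f]
               for f in present)

intact = set(FEATURE_DRIVE)
mouthless = intact - {"mouth"}
# Removing the mouth lowers the summed response for nose and mouth fixations
# (predictions #5-6). In this linear sketch the drop is identical for eye
# fixations too; the observed null effect there would require a nonlinearity
# (the strong eye response masking the mouth's small, distant contribution).
```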

We predicted that when fixation was on the mouth, or on the nose, in inverted mouthless faces, the mouth-coding neurons would not contribute to the overall activity, and thus the N170 amplitude would be smaller for these conditions compared to the same fixations in inverted intact faces (predictions #5 and 6). However, for an eye fixation, there were two possibilities (prediction #7). The first was that the N170 amplitude would be smaller for mouthless than intact faces, as for the mouth and nose fixations. The other possibility, which we thought most likely, was that the strong response to the eyes (no longer inhibited) might mask the weaker response to the mouth, situated the farthest away, so that in the end the N170 amplitude might not differ between inverted mouthless and inverted intact faces. The results supported the latter possibility: the N170 was smaller for inverted mouthless than inverted intact faces for nose and mouth fixations, but did not differ between the two face types for the other fixations, including eye fixations. These results contrast sharply with those obtained for upright faces, where the exact opposite was found, with larger amplitude for mouthless than intact faces only for the eye fixations. The fact that the same mouthless faces elicited opposite effects depending on face orientation is another argument against a simple low-level difference account. Rather, all these results are very much in line with the predictions derived from the LIFTED model.

Although the brains of human and non-human primates differ in anatomical and functional ways that require careful consideration (e.g. Rossion and Taubert 2017), the LIFTED model is in line with cell recordings from the macaque inferotemporal cortex that favor a face feature space framework over cells coding for whole faces (Freiwald et al. 2009). Face cells from the monkey middle face patch detect distinct constellations of face parts, with a larger sensitivity for eye cues (e.g. iris size and inter-ocular distance), yet features are interpreted according to their position in an upright face template (Freiwald et al. 2009), which echoes the eye anchoring and upright human face template ideas stipulated by the LIFTED model. Recent intracranial recordings in humans also suggest that the majority of sites recorded over both the ventral and lateral occipitotemporal cortex are eye-region selective rather than face-selective (Engell and McCarthy 2014), with category selectivity defined as a response of at least − 50 µV that was twice as large as the response to the control flower category. That study also found that a majority of sites were eye-specific while only a few were face-specific, category specificity being defined as a response meeting the selectivity criteria compared to all categories tested (not just flowers). These results highlight the possibly greater sensitivity of the human cortical face perception network to the eyes than to the whole face, in line with the eye sensitivity reported on the scalp.

The inhibition mechanism central to the model is based on the idea that (GABA-mediated) lateral inhibition from neighbouring neurons seems to determine the characteristics of neurons’ feature selectivity within the inferotemporal cortex (Wang et al. 2000). Inhibition of category-specific cells by non-preferred stimuli has also been proposed as a general characteristic of the object-processing extrastriate cortex in humans, explaining some of the properties of the intracranial N200 and P200 ERP components (Allison et al. 2002). Building upon these ideas, the LIFTED model further proposes that lateral inhibition of neurons coding foveally-presented features by neurons coding peripheral features acts as a major mechanism enabling holistic face perception. This inhibition can be captured at the neuron population level as variations of the scalp-recorded N170 component. Finally, within a face feature space where holistic processing is possible thanks to the inhibition mechanism, there is no need for additional object-selective neurons to account for the increase in N170 amplitude with inverted faces, as suggested previously (e.g. Rossion et al. 1999; Sadeh and Yovel 2010); the lack of inhibition suffices to explain the effect. According to the model, any manipulation that disrupts holistic processing disrupts the inhibition mechanism and thus elicits an increase in N170 amplitude, a phenomenon reported for inverted faces, but also for misaligned faces in the face composite task (Jacques and Rossion 2009) and for faces with jumbled features (George et al. 1996). Interestingly, the N170 FIE seems to vary with fixation location, a new result not reported for intact faces in the original study by Nemrodov et al. (2014), being largest around the nasion area (the center of the eye region) and smallest for mouth fixation.
If the FIE reflects holistic processing disruption, then this result suggests holistic processing is maximally disrupted when fixation is around the eyes in inverted faces, and less disrupted when fixation is around the mouth. Thus, holistic processing in upright faces is not uniform across face locations, in line with the N170 amplitude gradient obtained for intact upright faces, as discussed earlier.

Conclusions

The present results support the idea of a neural sensitivity to the eyes within the face at the level of the N170 ERP component, seen regardless of face orientation. They are also largely in line with the LIFTED model’s central idea of an inhibition mechanism at play in upright but not inverted faces. However, they also suggest some modifications to the original model. Instead of assuming that the inhibition mechanism is a linear function of the angular distance between the foveated feature and the features situated in parafovea, the present results suggest a possible differential inhibition depending on which feature is fixated and which one is in parafovea. It appears that the eyes are indeed inhibited in upright faces and that the mouth contributes to this inhibition, while the nose does not seem to be inhibited by the mouth. In turn, the finding reported in the original paper of a larger N170 for eyeless than intact faces when fixation was on the nose (Nemrodov et al. 2014) might suggest that the eyes exert a strong inhibition on the other features, although it is at present unclear whether this is because there are two eyes, and thus twice as much inhibition as if only one eye were present. The amplitude gradient found for upright faces as a function of fixation location also argues against the view of a strict holistic processing of upright faces assumed to happen in parallel with the eye sensitivity. Finally, the results support the LIFTED model’s idea that the activity recorded for inverted faces reflects the combined activity elicited by the feature fixated in fovea and by the other features situated in parafovea.

Acknowledgements

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC Discovery Grants #418431); the Ontario government (Early Researcher Award, ER11-08-172); the Canada Foundation for Innovation (Grant #213322); and the Canada Research Chairs program (Grants #213322 and #230407) to RJI. We would also like to thank Marina Ren for help with testing, supported by an NSERC Undergraduate Studies Research Award (USRA).

Supplementary material

Supplementary material 1 (DOCX 292 KB)

References

  1. Allison T, Puce A, McCarthy G (2002) Category-sensitive excitatory and inhibitory processes in human extrastriate cortex. J Neurophysiol 88(5):2864–2868
  2. Bentin S, Allison T, Puce A, Perez E, McCarthy G (1996) Electrophysiological studies of face perception in humans. J Cogn Neurosci 8:551–565
  3. de Haan M, Pascalis O, Johnson MH (2002) Specialization of neural mechanisms underlying face recognition in human infants. J Cogn Neurosci 14(2):1–11
  4. de Lissa P, McArthur G, Hawelka S, Palermo R, Mahajan Y, Hutzler F (2014) Fixation location on upright and inverted faces modulates the N170. Neuropsychologia 57:1–11
  5. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134(1):9–21
  6. Eimer M (1998) Does the face-specific N170 component reflect the activity of a specialized eye processor? Neuroreport 9:2945–2948
  7. Eimer M (2000a) Effects of face inversion on the structural encoding and recognition of faces. Evidence from event-related brain potentials. Brain Res Cogn Brain Res 10(1–2):145–158
  8. Eimer M (2000b) The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport 11(10):2319–2324
  9. Eimer M (2011) The face-sensitive N170 component of the event-related brain potential. In: Calder AJ, Rhodes G, Johnson MH, Haxby JV (eds) The Oxford handbook of face perception. Oxford University Press, Oxford, pp 329–344
  10. Engell AD, McCarthy G (2014) Face, eye, and body selective responses in fusiform gyrus and adjacent cortex: an intracranial EEG study. Front Hum Neurosci 8:642
  11. Freiwald WA, Tsao DY, Livingstone MS (2009) A face feature space in the macaque temporal lobe. Nat Neurosci 12(9):1187–1196
  12. George N, Evans J, Fiori N, Davidoff J, Renault B (1996) Brain events related to normal and moderately scrambled faces. Brain Res Cogn Brain Res 4(2):65–76
  13. Itier RJ, Neath-Tavares KN (2017) Effects of task demands on the early neural processing of fearful and happy facial expressions. Brain Res 1663:38–50
  14. Itier RJ, Taylor MJ (2002) Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: a repetition study using ERPs. NeuroImage 15(2):353–372
  15. Itier RJ, Latinus M, Taylor MJ (2006) Face, eye and object early processing: what is the face specificity? NeuroImage 29(2):667–676
  16. Itier RJ, Alain C, Sedore K, McIntosh AR (2007) Early face processing specificity: it’s in the eyes! J Cogn Neurosci 19(11):1815–1826
  17. Itier RJ, Van Roon P, Alain C (2011) Species sensitivity of early face and eye processing. NeuroImage 54(1):705–713
  18. Jacques C, Rossion B (2009) The initial representation of individual faces in the right occipito-temporal cortex is holistic: electrophysiological evidence from the composite face illusion. J Vis 9(6):8, 1–16
  19. Kloth N, Itier RJ, Schweinberger SR (2013) Combined effects of inversion and feature removal on N170 responses elicited by faces and car fronts. Brain Cogn 81(3):321–328
  20. Maurer D, Le Grand R, Mondloch CJ (2002) The many faces of configural processing. Trends Cogn Sci 6(6):255–260
  21. McPartland J, Cheung CH, Perszyk D, Mayes LC (2010) Face-related ERPs are modulated by point of gaze. Neuropsychologia 48(12):3657–3660
  22. Neath KN, Itier RJ (2015) Fixation to features and neural processing of facial expressions in a gender discrimination task. Brain Cogn 99:97–111
  23. Neath-Tavares KN, Itier RJ (2016) Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach. Biol Psychol 119:122–140
  24. Nemrodov D, Itier RJ (2011) The role of eyes in early face processing: a rapid adaptation study of the inversion effect. Br J Psychol 102(4):783–798
  25. Nemrodov D, Anderson T, Preston FF, Itier RJ (2014) Early sensitivity for eyes within faces: a new neuronal account of holistic and featural processing. NeuroImage 97:81–94
  26. Rossion B (2009) Distinguishing the cause and consequence of face inversion: the perceptual field hypothesis. Acta Psychol 132(3):300–312
  27. Rossion B, Jacques C (2012) The N170: understanding the time course of face perception in the human brain. In: Luck SJ, Kappenman ES (eds) The Oxford handbook of event-related potential components. Oxford University Press, Oxford, pp 115–141
  28. Rossion B, Taubert J (2017) Commentary: the code for facial identity in the primate brain. Front Hum Neurosci 11:550
  29. Rossion B, Delvenne JF, Debatisse D, Goffaux V, Bruyer R, Crommelinck M et al (1999) Spatio-temporal localization of the face inversion effect: an event-related potentials study. Biol Psychol 50:173–189
  30. Rossion B, Gauthier I, Tarr MJ, Despland P, Bruyer R, Linotte S et al (2000) The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: an electrophysiological account of face-specific processes in the human brain. Neuroreport 11:69–74
  31. Rousselet GA, Ince RA, van Rijsbergen NJ, Schyns PG (2014) Eye coding mechanisms in early human face event-related potentials. J Vis 14(13):1–24
  32. Sadeh B, Yovel G (2010) Why is the N170 enhanced for inverted faces? An ERP competition experiment. NeuroImage 53(2):782–789
  33. Sagiv N, Bentin S (2001) Structural encoding of human and schematic faces: holistic and part-based processes. J Cogn Neurosci 13(7):937–951
  34. Schyns PG, Jentzsch I, Johnson M, Schweinberger SR, Gosselin F (2003) A principled method for determining the functionality of brain responses. Neuroreport 14:1665–1669
  35. Shepherd J, Davies G, Ellis H (1981) Studies of cue saliency. In: Davies G, Ellis HD, Shepherd J (eds) Perceiving and remembering faces. Academic Press, New York, pp 105–131
  36. Tanaka JW, Gordon I (2011) Features, configuration, and holistic face processing. In: Calder AJ, Rhodes G, Johnson MJ, Haxby JV (eds) The Oxford handbook of face perception. Oxford University Press, New York, pp 177–194
  37. Taylor MJ, Edmonds GE, McCarthy G, Allison T (2001) Eyes first! Eye processing develops before face processing in children. Neuroreport 12(8):1671–1676
  38. Wang Y, Fujita I, Murayama Y (2000) Neuronal mechanisms of selectivity for object features revealed by blocking inhibition in inferotemporal cortex. Nat Neurosci 3(8):807–813
  39. Wiese H, Stahl J, Schweinberger SR (2009) Configural processing of other-race faces is delayed but not decreased. Biol Psychol 81(2):103–109
  40. Zerouali Y, Lina JM, Jemel B (2013) Optimal eye-gaze fixation position for face-related neural responses. PLoS ONE 8(6):e60128

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Psychology, University of Waterloo, Waterloo, Canada
