Discrimination and recognition of faces with changed configuration

Abstract

Subtle metric differences in facial configuration, such as between-person variation in the distances between the eyes, have been used widely in psychology to explain face recognition. However, these studies of configuration have typically utilized unfamiliar faces rather than the familiar faces that the process of recognition ultimately seeks to explain. This study investigates whether face recognition relies on the metric information presumed in configural theory, by manipulating the interocular distance in both unfamiliar and familiar faces. In Experiment 1, observers were asked to detect which face in a pair was presented with its configuration intact. In Experiment 2, this discrimination task was repeated with faces presented individually, and observers were also asked to make familiarity categorizations to the same stimuli. In both experiments, familiarity facilitated detection of faces in their original configuration, and also enhanced identity categorization in Experiment 2. However, discrimination of configuration was generally low. In turn, recognition accuracy was generally high irrespective of configuration condition. Moreover, observers most sensitive to configuration during discrimination did not appear to rely on this information for recognition of familiar faces. These results demonstrate that configural theory provides limited explanatory power for the recognition of familiar faces.

Introduction

The recognition of familiar faces, of people who are well known to observers, appears to be consistently accurate across a range of viewing conditions (for reviews, see Burton, Jenkins, & Schweinberger, 2011; Johnston & Edmonds, 2009; Young & Burton, 2017). Familiar face recognition is robust, for example, from degraded or blurred images (e.g., Burton, Wilson, Cowan, & Bruce, 1999; Sandford, Sarker, & Bernier, 2018). In comparison, the recognition of unfamiliar faces is often poor, even in seemingly favorable tasks (for reviews, see Burton & Jenkins, 2011; Hancock, Bruce, & Burton, 2000; Jenkins & Burton, 2011; Young & Burton, 2017). For example, this process is impaired by common variation in viewpoint (Longmore, Liu, & Young, 2008), facial expression (Mian & Mondloch, 2012), and lighting (Adini, Moses, & Ullman, 1997). In the context of this discrepancy between unfamiliar and familiar faces, it is imperative to understand the characteristics that underpin robust face recognition.

An influential concept that has been researched widely in psychology to explain face recognition is that of configural processing (see review by Maurer, Le Grand, & Mondloch, 2002). Though different definitions of configuration have been employed (Piepers & Robbins, 2012; Rakover, 2002; Sandford, 2017), most studies have adopted Diamond and Carey’s (1986) concept of second-order relational processing. This definition stipulates that the metric distances between different facial features, such as the eyes, nose and mouth, vary systematically from person to person. Therefore, these inter-feature differences between people can be used to distinguish one face from another and form the building blocks of cognitive face representations underlying recognition (Richler, Mack, Gauthier, & Palmeri, 2009).

Paradigms that examine the role of configuration in person recognition typically require participants to learn a set of unfamiliar faces in a frontal view. Recognition of these faces is then tested with the exact same image or a modified version, in which the distance between some of the facial features has been changed. For example, participants may be asked to decide which one of two faces in a stimulus pair was seen during the initial learning phase (Rhodes, Brake, & Atkinson, 1993). These studies show consistently that participants can distinguish learned faces from their modified counterparts, even when only small changes to metric distances between facial features distinguish one face image from the other (e.g., Crookes & Hayward, 2012; Freire, Lee, & Symons, 2000; Leder & Bruce, 2000; Leder & Carbon, 2006; Mondloch, Le Grand, & Maurer, 2002; Rhodes et al., 1993; Rhodes, Hayward, & Winkler, 2006). The sensitivity that observers exhibit to such spatial manipulations, often comprising only a difference of a few pixels in an image, is typically interpreted as evidence that configuration mediates the recognition of faces (Richler et al., 2009).

However, an important discrepancy exists in this field. Whereas the existing studies typically assess recognition with images of unfamiliar faces that were learned in the course of an experiment (Sandford, 2017; but see Hosie, Ellis, & Haig, 1988; Itz, Schweinberger, & Kaufmann, 2018), the concept of configuration ultimately seeks to explain the processes underlying the recognition of familiar faces, of the people that we know. If configural accounts are to speak meaningfully to the processes that enable robust recognition of faces, then the principles that underlie these theories must therefore be demonstrated with familiar faces. Paradoxically, despite extensive research on configuration in the face domain, it remains untested whether observers are sensitive to subtle metric changes in the feature spacing of familiar faces, and utilize this for recognition.

Whereas configural theories predict such sensitivity to fine metric distances between features, and have demonstrated this with unfamiliar faces, there are reasons why this information should not be beneficial to the recognition of familiar faces. It is straightforward to demonstrate, for example, that faces vary naturally in appearance in ways that distort the metric distances between features, indicating that these are not stable diagnostic indices of facial identity under realistic conditions (see, e.g., Balas & Pearson, 2017; Jenkins, White, Van Montfort, & Burton, 2011; Kramer, Manesi, Towler, Reynolds, & Burton, 2018; Zhou & Mondloch, 2016). Moreover, whereas changes in view, expressions, or distortion by different camera lenses induce changes in faces that contort configural metrics, identification of familiar faces appears to proceed unhindered (see, e.g., Noyes & Jenkins, 2017). Even drastic changes in configural information, by stretching faces in a horizontal or vertical plane to 150% of their original size, appear to leave recognition accuracy, response time, and neural responses intact (Baseler, Young, Jenkins, Burton, & Andrews, 2016; Bindemann, Burton, Leuthold, & Schweinberger, 2008; Gilad-Gutnick, Harmatz, Tsourides, Yovel, & Sinha, 2018; Hole, George, Eaves, & Rasek, 2002; Sandford & Rego, 2019; Sandford et al., 2018). This is a striking finding considering that this manipulation not only alters the horizontal or vertical distances between features but also changes metric angles between features and distance ratios crossing these planar dimensions, thus producing severe configural distortion.

These studies provide converging evidence for a surprising tolerance to configural changes in the recognition of familiar faces. Moreover, variation in within-person appearance, which also affects configural metrics, actually appears to be important for enhancing recognition. The learning of previously unknown faces, for example, is more accurate from sets of photographs that afford a large variance in a person’s appearance, including changes to facial metrics that occur naturally with head movements and across different expressions (see, e.g., Baker, Laurence, & Mondloch, 2017; Ritchie & Burton, 2017; Robins, Susilo, Ritchie, & Devue, 2018). Similarly, observers are more accurate at associating names and faces when the latter have been learned with sets of highly variable images (Ritchie & Burton, 2017). Thus, these studies suggest that face recognition is enhanced under conditions in which configuration is not stable, casting doubt on this metric as a reliable index of identity. In turn, when observers are asked to resize randomly scaled images of faces to their identity-specific configuration by adjusting the height-to-width aspect ratio, familiar faces are not resized more accurately than their unfamiliar counterparts, indicating only limited sensitivity related to face familiarity (Sandford & Burton, 2014).

Taken together, the available evidence suggests that the recognition of familiar faces is tolerant to and might even benefit from within-person variation that induces changes in configural information. This contrasts with studies of configural processing, which demonstrate that observers are highly sensitive to this information, but have only tested this with unfamiliar faces. Recently, a study combined these separate developments by investigating whether the distance between the eyes (interocular distance) or between the nose and the mouth is important for the recognition of familiar faces (Itz et al., 2018). In this study, observers were shown famous faces, which were presented either in their original or a modified configuration during an initial priming phase. The effect of these configural changes was then assessed at a subsequent recognition test, which required famous versus non-famous judgments to the famous and unfamiliar faces. In this paradigm, changes to interocular distance during priming produced a subsequent recognition cost, indicating sensitivity to this aspect of configuration. However, these priming costs were only observed among individuals with poor face-processing ability. This suggests a limited role for configuration, whereby observers who are most adept at familiar face recognition do not rely on the spatial relationships under investigation.

So far, however, this evidence is still limited and none of the existing studies have examined sensitivity to configuration in familiar faces with paradigms that are directly comparable to previous research with unfamiliar faces. Until this discrepancy is resolved, a parsimonious theoretical account of the role of configuration in face recognition remains elusive. The aim of this study was to address this important gap in knowledge, by testing the perception of configurationally changed and original (unchanged) familiar faces with paradigms that have been employed extensively with unfamiliar faces in this domain. Thus, we investigate the role of configuration with a spacing manipulation that has been employed in previous studies with unfamiliar faces, which involved subtle changes to the interocular distance (Freire et al., 2000; Leder & Bruce, 2000; Mondloch et al., 2002). To investigate whether configural information is important for the recognition of familiar faces, we modified this task so that identity-specific knowledge about familiar faces was beneficial (Experiment 1) or required (Experiment 2) to maximize performance.

In this manner, two experiments tested two central tenets of configural theory, namely that observers are sensitive to inter-feature metric distances for familiar faces (Hosie et al., 1988; Itz et al., 2018), and that these are important for successful recognition (Maurer et al., 2002; Richler et al., 2009). Specifically, if observers are sensitive to the inter-feature metric distances (i.e., interocular distance) of familiar faces, then they should be able to determine consistently whether a face is presented in its original or a changed configuration. And if inter-feature metric distances are also useful for face recognition, then identification of faces should be best in their original configuration. In turn, however, and considering that the recognition of familiar faces proceeds unhindered from highly varied (e.g., Baker et al., 2017; Ritchie & Burton, 2017; Robins et al., 2018) and distorted images (e.g., Baseler et al., 2016; Bindemann et al., 2008; Hole et al., 2002), it is also possible that sensitivity for distinguishing configurally changed and unchanged familiar faces will not emerge.

Experiment 1

In this experiment, participants were shown pairs of famous or unfamiliar faces, in which the configuration of one face was changed whereas the other was unaltered. This paradigm closely mimics previous paradigms in the study of facial configuration (as in Freire et al., 2000; Leder & Bruce, 2000; Mondloch et al., 2002; Searcy & Bartlett, 1996), but differs in one important respect. Rather than deciding whether two images present identical or different configurations based on an initial learning phase for the target faces, participants had to access their stored representations of familiar faces to select the configurationally unchanged image in each pair. If such configural information forms an integral part of the cognitive representations for familiar faces, then observers should be able to classify these types of faces consistently on this basis. We also included unfamiliar faces as comparison stimuli, for which pre-existing cognitive identity representations, and hence stored configural information, do not exist. Thus, detection of configurationally unchanged images should be low for unfamiliar faces.

We also presented these images upside-down as an additional control condition. There is substantial evidence that identity information from faces is processed differently when these are inverted, with recognition accuracy declining (e.g., de Gelder & Rouw, 2000; Farah, Wilson, Drain, & Tanaka, 1995; Moscovitch, Winocur, & Behrmann, 1997). One explanation for this effect is that such planar face inversion disrupts sensitivity to configuration (see Lewis & Glenister, 2003), and therefore the detection of spatial modifications to feature spacing. For this reason, face inversion has been applied as a control condition in many previous studies on facial configuration (e.g., Freire et al., 2000; Leder & Bruce, 2000; Mondloch et al., 2002). If observers are sensitive to changes in the configuration of familiar faces based on the stored cognitive representations for these stimuli, then such sensitivity to configuration should therefore be reduced disproportionately with inverted familiar compared to unfamiliar faces here.

Method

Participants

A total of 24 students or staff (18 female; mean age = 28.1 years, SD = 10.9) affiliated with the University of Guelph-Humber and Humber Institute of Technology and Advanced Learning participated in Experiment 1 in exchange for a gift card. This sample size is comparable to or exceeds previous studies examining facial configuration (see, e.g., Freire et al., 2000; Leder & Bruce, 2000; Leder & Carbon, 2006; Searcy & Bartlett, 1996). Each participant had normal or corrected-to-normal vision, signed consent forms, and was debriefed at the end of their participation in the study. The experiment was conducted in compliance with ethics guidelines outlined by the institutional research ethics board.

Stimuli

Images of 120 celebrities (60 male, 60 female) were downloaded from Google Image searches. Half (60) of these celebrities were well known in Canada and were therefore used as familiar faces, while the other half were British or Australian B-list celebrities who served as unfamiliar face stimuli. Each face was front facing, rotated so that the head appeared upright, and sized to 360 × 504 px. Extraneous background was retained in each image, and all images were converted to grayscale, as in previous studies of configural processing (Freire et al., 2000; Mondloch et al., 2002). We duplicated the resulting face images and manipulated eye distance to produce two additional face images of each celebrity. Eye distance in one of these images was reduced by 8 pixels, and increased by 8 pixels in the other image. An illustration of this manipulation can be viewed in Fig. 1. To create the stimulus displays for the experiment, the original image of each identity was paired with its changed counterparts. The faces in each pair were placed side-by-side, with a 72-px (approximately 1 cm) gap between the nearest face contours.

Fig. 1
figure1

Examples of stimuli used in Experiment 1 (images were in full color in Experiment 2). Left image = original configuration; middle image = shorter interocular distance; right image = longer interocular distance. Interocular distances were changed by 8 pixels

In this way, four versions were created of the stimulus array for a given identity, which showed a face in its configurationally unchanged (original) form and either with the longer or shorter interocular distance, which could be presented on the left or right side of a face pair. In total, we generated 480 pairs of upright faces, plus an additional 480 pairs of upside-down faces by vertically flipping the image display. In the experiment, each stimulus array was presented centrally on a 15-in. monitor display at a resolution of 1,920 × 1,080 px with a black background, using Superlab 5.0 software. An additional image of each identity was downloaded and printed in a paper booklet for a post-experiment familiarity check.
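The paper specifies only the total ±8-px change to interocular distance, not how it was implemented. A minimal sketch of the geometry, assuming each eye region is shifted horizontally and symmetrically by half the total change (the function name and bounding-box representation are hypothetical), is:

```python
def shift_eye_boxes(left_eye, right_eye, delta):
    """Given (x, y, w, h) bounding boxes for the left and right eye
    regions, return new boxes whose horizontal centre-to-centre
    separation changes by `delta` pixels in total (positive = further
    apart, negative = closer together), split equally between the eyes."""
    half = delta / 2
    lx, ly, lw, lh = left_eye
    rx, ry, rw, rh = right_eye
    # Move the left eye leftward and the right eye rightward (or the
    # reverse for a negative delta); vertical position is unchanged.
    return (lx - half, ly, lw, lh), (rx + half, ry, rw, rh)

# Hypothetical eye boxes for one face image: the +8-px (longer)
# and -8-px (shorter) versions used in the experiment.
left, right = (140, 200, 40, 20), (180, 200, 40, 20)
longer = shift_eye_boxes(left, right, +8)
shorter = shift_eye_boxes(left, right, -8)
```

In an actual stimulus pipeline, the shifted regions would then be re-composited into the image with blending at the region borders; that step is omitted here.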

Procedure

Participants were asked to decide which of two faces in each stimulus display was depicted in its original configuration, by pressing one of two buttons on a standard computer keyboard. In cases where participants did not know the identity on-screen, they were encouraged to guess which of the two images presented the correct appearance. No specific cues as to which aspect of the face was changed were provided and no feedback for responses was given. To counterbalance whether a pair of faces showed a configurationally unchanged version and the longer or shorter interocular distance version, and on the left or right side of a face pair, four experimental scripts were created, each of which contained only one upright and one inverted face pairing for each identity. The application of these scripts was counterbalanced across participants, with an equal number (N = 6) completing each version. In this manner, each participant viewed 240 trials, comprising a block of upright face pairs (120 trials) and a block with inverted face pairs (120 trials). Within each block, familiar and unfamiliar faces occurred with equal frequency but trial order was randomized and block order was counterbalanced across participants. Following the completion of this discrimination task, participants were given the post-experiment familiarity check. For this, they were shown a photograph of each identity from the experiment and asked to provide names or specific semantic details (e.g., occupation) associated with the person. These responses were recorded in writing.
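The four counterbalanced scripts described above can be illustrated with a simple Latin-square rotation of the four within-pair arrangements (change direction × side of the original face). This is a hypothetical sketch: the paper does not describe how identities were assigned to arrangements, and the orientation blocks are omitted for brevity.

```python
from itertools import product

# The four within-pair arrangements rotated across scripts: which
# altered version is shown, and on which side the original appears.
ARRANGEMENTS = list(product(["longer", "shorter"], ["left", "right"]))

def build_scripts(identities, n_scripts=4):
    """Assign each identity one arrangement per script, rotating
    arrangements so that every identity appears in each of the four
    arrangements exactly once across the four scripts."""
    scripts = [[] for _ in range(n_scripts)]
    for i, identity in enumerate(identities):
        for s in range(n_scripts):
            change, original_side = ARRANGEMENTS[(i + s) % n_scripts]
            scripts[s].append({"identity": identity,
                               "change": change,
                               "original_side": original_side})
    return scripts

# 120 identities, as in the experiment (placeholder labels).
scripts = build_scripts([f"face_{n:03d}" for n in range(120)])
```

With 120 identities and four arrangements, each arrangement occurs 30 times within every script, and each participant group sees each identity in a different arrangement, matching the counterbalancing logic in the text.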

Results

Accuracy was measured by calculating the percentage of correct responses. Response times (RTs) were also analyzed for correct responses. For each participant, any reported unknown famous faces and known “unfamiliar” faces were removed from their data prior to this analysis. This accounted for 7% of the total number of trials. The cross-subject means of the remaining data for both accuracy and RTs are illustrated in Fig. 2. A 2 (familiarity: familiar, unfamiliar) × 2 (orientation: upright, inverted) within-subjects ANOVA of accuracy data revealed a main effect of familiarity, F(1, 23) = 13.70, p < .01, ηp2 = 0.37, due to higher discrimination accuracy for familiar faces, and a main effect of orientation, F(1, 23) = 23.72, p < .001, ηp2 = 0.51, with higher accuracy on upright trials. These effects were qualified by an interaction between these factors, F(1, 23) = 16.45, p < .001, ηp2 = 0.42. Analysis of simple main effects revealed higher discrimination accuracy for upright familiar compared with upright unfamiliar faces, F(1, 46) = 29.34, p < .001, ηp2 = 0.39, whereas performance was comparable for inverted familiar and unfamiliar faces, F(1, 46) = 0.16, p = .691, ηp2 < 0.01. In addition, an inversion effect was observed for familiar faces, F(1, 46) = 40.16, p < .001, ηp2 = 0.47, with higher accuracy for upright face pairs, but not for unfamiliar faces, F(1, 46) = 1.46, p = .233, ηp2 = 0.03. Finally, although accuracy was low for inverted familiar faces and both unfamiliar face conditions, a series of one-sample t-tests showed that this did exceed chance in all conditions, all ts(23) ≥ 3.684, ps ≤ .001, Cohen’s d ≥ 0.75.

Fig. 2
figure2

Left panel shows percentage correct and right panel shows mean response time in familiar and unfamiliar face trials in Experiment 1. Light bars show upright trials and dark bars show inverted trials. Error bars show 95% confidence intervals

A 2 × 2 within-subjects ANOVA on RT data, with the same factors as before, revealed a main effect of orientation, F(1, 23) = 6.57, p < .05, ηp2 = 0.22, due to slower RTs for faces presented upright. There was no main effect of familiarity, F(1, 23) = 0.14, p = .712, ηp2 = 0.01, and no interaction of factors, F(1, 23) = 3.15, p = .089, ηp2 = 0.12. Given the moderate effect size of the interaction, we note faster RTs for upright familiar faces (M = 5.69 s) compared with upright unfamiliar faces (M = 6.04 s), and faster RTs for upside-down unfamiliar faces (M = 4.81 s) compared with upside-down familiar faces (M = 4.95 s).

Discussion

This experiment reveals sensitivity to the configuration of familiar faces, whereby observers could distinguish veridical interocular distances of these faces from images in which these had been altered. This sensitivity to configuration was supported further by the presence of an inversion effect, whereby discrimination declined when familiar faces were presented upside-down, in part due to a speed-accuracy tradeoff. In contrast, performance was much worse for unfamiliar faces, and comparable for upright and inverted stimuli. These results indicate that the stored cognitive representations of familiar faces incorporate the metric interocular distance that was manipulated here, and that observers can access this knowledge during the discrimination of these faces. We note, however, that the results also indicate a limit in configural sensitivity, as the altered familiar faces were selected by mistake on nearly 25% of trials and classification of familiar faces was slow. Moreover, Experiment 1 measured only whether configural changes can be detected, but not whether this information is essential for recognition of familiar faces to occur. To assess this directly, we conducted a second experiment.

Experiment 2

In Experiment 1, we found that observers were sensitive to configuration, to some extent, with familiar faces compared with unfamiliar faces, and that inversion limits this sensitivity. This pattern of results generally supports previous studies that used the spacing paradigm (see Rakover, 2002) with unfamiliar faces. However, this experiment does not show that sensitivity to configuration directly influences recognition of faces. Therefore, in Experiment 2, observers again performed a discrimination task that required decisions as to whether the configuration of faces had been changed. As in Experiment 1, this required observers to access their stored cognitive representation of a given identity to determine whether the configuration had been manipulated (i.e., the image was “changed”) or not (i.e., the image was “unchanged”). In addition, however, they were also presented with the same face images in a speeded familiarity categorization task, to establish whether configural changes directly affect the recognition of known faces. To make these tasks comparable, the discrimination task was modified so that only one face, as opposed to the pairs of faces in Experiment 1, was shown at a time. This modification also allowed us to probe the relationship between sensitivity to configuration in a discrimination task and its use in recognition, by enabling comparisons on a by-item basis.

Consistent with Experiment 1, we predicted observers to exhibit some sensitivity to the configuration of familiar faces in the discrimination task, as informed by configural theory (e.g., Richler et al., 2009). The question of main interest here was whether configurationally unaltered familiar faces were also categorized more efficiently in the recognition task, and whether this effect was related across tasks. Given that configural theory holds that inter-feature distances are encoded and used in recognition, we predicted that familiarity categorization would be best when familiar faces (i.e., faces with stored cognitive representations) are presented in their original (unchanged) configuration, and that accurate discrimination of configuration would be directly related to recognition. As in Experiment 1, unfamiliar faces served as controls because observers do not possess stored cognitive representations of these faces. Therefore, if an advantage for familiar faces in their original configuration is obtained in the categorization task compared to when inter-feature distances have been altered, then this should not translate to unfamiliar faces. To our knowledge, this is the first time that sensitivity to configuration and recognition of familiar faces have been directly tested in this way.

Method

Participants

A total of 36 students or staff (31 female; mean age = 22.4 years, SD = 5.7) affiliated with the University of Guelph-Humber and Humber Institute of Technology and Advanced Learning participated in Experiment 2 in exchange for a gift card. This sample size is comparable to or exceeds previous studies examining facial configuration (see, e.g., Freire et al., 2000; Leder & Bruce, 2000; Leder & Carbon, 2006; Searcy & Bartlett, 1996). Each participant had normal or corrected-to-normal vision, signed consent forms, and was debriefed at the end of their participation in the study. The experiment was conducted in compliance with ethics guidelines outlined by the institutional research ethics board.

Stimuli and procedure

All participants completed the discrimination task and the familiarity categorization task. As in Experiment 1, these were followed by the post-experiment familiarity check. The stimuli of the discrimination task were identical to Experiment 1, except for the following differences. All images were now presented in full color, only upright faces were displayed, and these were shown one at a time. On each trial, participants were first presented with a central fixation cross for 500 ms, followed by a face, which remained on screen until a button-press response was registered. In the discrimination task, participants were asked to decide whether the configuration of each face had been changed or not by pressing one of two buttons on a standard computer keyboard. As in Experiment 1, participants were encouraged to guess whether the appearance had been changed in cases where they did not know the identity on screen. Participants were not provided with specific cues as to which aspects of the face had changed and no feedback was provided for responses. In addition, they were not informed of the frequency with which they should expect changes to occur across trials. In this manner, each participant completed two blocks of 60 trials. Within blocks, half of the images depicted familiar faces and half unfamiliar faces, and likewise half showed configurally changed faces and half original faces. In addition, each identity was seen only once, in either the configurally changed or the original condition. However, the familiar and unfamiliar identities were rotated around the configuration conditions across participants, so that each occurred equally often in each condition over the course of the experiment. Finally, trials were blocked by sex of face, but presentation of blocks was counterbalanced, and trial order was randomized for each participant.

The same images were presented to each participant in the familiarity categorization task, which followed the same format except that participants were instead required to decide whether faces were familiar or unfamiliar. All participants completed the familiarity categorization task before the discrimination task, because we did not want participants to be cued, during the categorization task, to the fact that some configurations had been changed. This is worth noting because configural theory assumes that configural processing occurs automatically. Following the completion of these tasks, participants were provided with the post-experiment familiarity check. As before, they were shown a photograph of each identity from the experiment and asked to provide names or specific semantic details (e.g., occupation) associated with the person. Responses were recorded in writing.

Results

Discrimination

Accuracy was measured by calculating the percentage of correct responses. For each participant, any reported unknown famous faces and known “unfamiliar” faces were removed from their data prior to this analysis. This accounted for 9.8% of the total number of trials. The cross-subject means of the remaining data are illustrated in Fig. 3. A 2 (familiarity: familiar, unfamiliar) × 2 (configuration: original, changed) within-subjects ANOVA revealed a main effect of familiarity, F(1, 35) = 22.16, p < .001, ηp2 = 0.39, reflecting higher discrimination accuracy for known faces. This indicates that familiarity facilitated detection of configuration. A main effect of configuration, F(1, 35) = 3.27, p = .079, ηp2 = 0.09, and an interaction between these factors were not found, F(1, 35) = 1.26, p = .270, ηp2 = 0.03. Figure 3 also shows that accuracy was generally low, though one-sample t-tests demonstrated that it exceeded chance in the familiar original, t(35) = 7.92, p < .001, Cohen’s d = 1.32, familiar changed, t(35) = 3.95, p < .001, Cohen’s d = 0.66, and unfamiliar original conditions, t(35) = 5.11, p < .001, Cohen’s d = 0.85, but not with changed unfamiliar faces, t(35) = 0.943, p = .352, Cohen’s d = 0.16.

Fig. 3
figure3

Top left panel shows discrimination accuracy as a proportion of “unchanged” responses to configurationally unchanged (original) faces and “changed” responses to configurationally changed faces. Top right panel shows response times for correct discrimination of configuration. Bottom left panel shows categorization accuracy as a proportion of “familiar” responses to famous faces and “unfamiliar” responses to non-famous faces. Bottom right panel shows response times for familiarity categorization of faces. Error bars show 95% confidence intervals

Response times were analyzed only for correct responses. Cross-subject mean RTs are illustrated in Fig. 3. A within-subjects ANOVA, with the same factors as above, showed no main effects of familiarity, F(1, 35) = 2.60, p = .116, ηp2 = 0.07, or configuration, F(1, 35) = 3.70, p = .063, ηp2 = 0.10, and no interaction, F(1, 35) = 0.43, p = .516, ηp2 = 0.01. The moderate effect sizes reflect slower RTs in discriminating familiar faces (M = 2.10 s) compared with unfamiliar faces (M = 1.89 s), and images in their original configuration (M = 2.16 s) compared with the changed configuration (M = 1.83 s).

For completeness, we converted the accuracy data into signal detection measures of sensitivity (d’) and response bias (criterion). A paired-sample t-test revealed that d’ was higher for familiar (mean d’ = 0.882) compared with unfamiliar faces (mean d’ = 0.369), t(35) = 4.755, p < .001, Cohen’s d = 0.79. Criterion was comparable for discriminating familiar and unfamiliar faces, t(35) = 0.860, p = .396, Cohen’s d = 0.15, and was close to zero in both cases (familiar: t(35) = 0.833, p = .410, Cohen’s d = 0.14; unfamiliar: t(35) = 1.910, p = .064, Cohen’s d = 0.32). These analyses suggest that the above-chance discrimination of configuration was not due to response bias, particularly for familiar faces.
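These signal detection measures follow from the standard formulas, with d’ = z(H) − z(F) and criterion c = −(z(H) + z(F))/2, where H and F are the hit and false-alarm rates. A minimal sketch, treating “changed” responses to changed faces as hits and “changed” responses to original faces as false alarms (the correction for extreme proportions is an assumption; the paper does not state which one was used):

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', criterion) from trial counts in a yes/no task."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Clamp rates away from 0 and 1 so z-scores remain finite
    # (one common convention; an assumption here).
    h = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    f = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    d_prime = Z(h) - Z(f)
    criterion = -(Z(h) + Z(f)) / 2
    return d_prime, criterion

# Hypothetical counts: 45 hits in 60 changed trials, 30 false alarms
# in 60 original trials (H = .75, F = .50).
d, c = dprime_criterion(45, 15, 30, 30)
```

A criterion near zero, as reported above, indicates no overall preference for “changed” over “unchanged” responses, so above-chance d’ reflects genuine sensitivity rather than bias.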

Familiarity categorization

The data for the familiarity categorization task were analyzed in the same way and are illustrated in Fig. 3. A 2 (familiarity: familiar, unfamiliar) × 2 (configuration: original, changed) within-subjects ANOVA of these data did not find main effects of familiarity, F(1, 35) = 0.53, p = .472, ηp2 = 0.01, or configuration, F(1, 35) = 3.70, p = .063, ηp2 = 0.10, but did reveal an interaction between these factors, F(1, 35) = 11.01, p < .01, ηp2 = 0.24. Analysis of simple main effects revealed an effect of configuration for familiar faces, F(1, 70) = 13.44, p < .001, ηp2 = 0.16, with higher recognition accuracy for original compared with changed familiar faces. None of the other simple main effects were significant, all Fs ≤ 3.00, all ps ≥ .088, ηp2 ≤ 0.04.

The mean RTs were subjected to a within-subjects ANOVA with the same factors as above. This ANOVA revealed a main effect of familiarity, F(1, 35) = 17.39, p < .001, ηp2 = 0.33, due to faster RTs for familiar compared with unfamiliar faces, and a main effect of configuration, F(1, 35) = 9.01, p < .01, ηp2 = 0.20, due to faster RTs for original compared with changed images. There was no interaction of factors, F(1, 35) = 2.50, p = .123, ηp2 = 0.07.

Discrimination and familiarity categorization

In a final step of the analysis, we examined whether sensitivity to configuration in the discrimination task relates to performance in the recognition task. For this purpose, we reanalyzed the familiarity categorization data separately for trials on which faces were correctly or incorrectly discriminated. These data are illustrated in Fig. 4. A 2 (familiarity: familiar, unfamiliar) × 2 (configuration: original, changed) × 2 (discrimination accuracy: correct, incorrect) within-subjects ANOVA supported our initial findings by returning only a significant interaction between familiarity and configuration, F(1, 35) = 5.51, p < .05, ηp2 = 0.14. Analysis of simple main effects confirmed that categorization accuracy was significantly higher for original compared with changed familiar faces, F(1, 70) = 4.69, p < .05, ηp2 = 0.06 (other simple main effects involving these interacting factors were not significant, Fs ≤ 2.57, ps ≥ .113, ηp2 ≤ 0.04). By contrast, a main effect of discrimination accuracy was not found, F(1, 35) = 0.84, p = .366, ηp2 = .02, nor were interactions between discrimination accuracy and the other factors, all Fs ≤ 1.10, ps ≥ .301, ηp2 ≤ 0.03.

Fig. 4

Left panels show accuracy and right panels show response times of categorizing correctly or incorrectly discriminated faces. Correct trials were those where discrimination was accurate (i.e., participant responded “changed” when image was configurationally changed or “unchanged” when image was in its original configuration) and incorrect trials were those where discrimination was inaccurate. Top panels show data for familiar faces and bottom panels show data for unfamiliar faces. Error bars show 95% confidence intervals

We also reanalyzed RTs in the categorization task for trials on which faces were correctly or incorrectly discriminated. When faces were incorrectly discriminated, their RTs were still included provided the same faces were correctly categorized according to participants’ familiarity with the presented identity. These data are illustrated in Fig. 4. A 2 (familiarity: familiar, unfamiliar) × 2 (configuration: original, changed) × 2 (discrimination accuracy: correct, incorrect) within-subjects ANOVA supported our initial findings by returning main effects of familiarity, F(1, 35) = 16.70, p < .001, ηp2 = .32, and of configuration, F(1, 35) = 4.95, p = .033, ηp2 = .12. In addition, the interaction between discrimination accuracy and configuration was significant, F(1, 35) = 5.60, p = .024, ηp2 = .14. Analysis of simple main effects confirmed faster mean RTs for correctly discriminated faces in their original configuration compared with those in the changed configuration, F(1, 70) = 10.27, p = .002, ηp2 = .13, and for changed compared with original images that were incorrectly discriminated, F(1, 70) = 6.25, p = .015, ηp2 = .08 (other simple main effects involving these interacting factors were not significant, Fs ≤ 2.12, ps ≥ .150, ηp2 ≤ .03). A main effect of discrimination accuracy was not found, F(1, 35) = 0.91, p = .347, ηp2 = .03, nor were interactions between discrimination accuracy and the other factors, all Fs ≤ 3.71, ps ≥ .062, ηp2 ≤ 0.10.

Lastly, we computed each observer’s difference in accuracy between the changed and original configuration conditions of the discrimination task, and correlated this with their recognition accuracy for the original familiar faces in the familiarity task (see Fig. 5). We expected a positive correlation in this analysis if perceptual sensitivity to configuration supports recognition of familiar faces, but this was not found, r(34) = .196, p = .251. Taken together, these analyses suggest that accurate discrimination of configuration does not relate to categorization performance in the familiarity task.
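This individual-differences analysis reduces to correlating a per-observer difference score with recognition accuracy. A minimal sketch from first principles, using hypothetical per-observer scores rather than the study's data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-observer scores for illustration:
# sensitivity = accuracy(original) - accuracy(changed) in the
#               discrimination task;
# recognition = accuracy for original familiar faces in the
#               categorization task.
sensitivity = [0.10, -0.05, 0.20, 0.00, 0.15]
recognition = [0.95, 0.90, 0.92, 0.96, 0.93]
r = pearson_r(sensitivity, recognition)
```

The reported test then evaluates r against zero with n − 2 degrees of freedom (here r(34), i.e., 36 observers).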

Fig. 5

Scatter plot of original-minus-changed accuracy in the discrimination task against accuracy for original faces in the categorization task. A positive value refers to better performance on original compared with changed trials

Discussion

This experiment replicates the familiarity advantage in a discrimination task, by showing higher accuracy rates for the detection of original and changed images of known compared with unfamiliar faces. In addition, Experiment 2 examined whether these configural changes directly affect the recognition of known faces, with a familiarity categorization task. This revealed a recognition benefit for faces depicted with the correct configuration, which was present only for familiar faces. These results therefore converge to provide further evidence that stored cognitive representations of familiar faces incorporate configural metrics, such as interocular distance, and demonstrate that this information enhances identification. We note, however, that discrimination accuracy was similarly low (< 70%) to that in Experiment 1, indicating that configural sensitivity was also limited. Moreover, whereas recognition accuracy in the familiarity task was much higher (> 90%), the difference between familiar faces with the original and changed configuration was numerically small (< 5%). This indicates that recognition was not fundamentally impaired by the configural changes here but, rather, that it proceeded largely unhindered. Finally, we found no evidence that sensitivity to configuration in the discrimination task is related to recognition of familiar faces through processing of configuration.

General discussion

This study examined whether face recognition relies on some of the metric information presumed in configural theory, by manipulating interocular distance in unfamiliar and familiar faces. Sensitivity to changes in configuration was then examined with discrimination (Experiments 1 and 2) and familiarity categorization tasks (Experiment 2). In Experiment 1, sensitivity to configuration was demonstrated by more accurate discrimination of familiar face pairs in which one image was configurally changed. This was evident in comparison with unfamiliar faces and also with upside-down face pairs, for which configural processing is held to be disrupted (Lewis & Glenister, 2003). Experiment 2 replicated the advantage for discriminating configuration in familiar faces with a different paradigm in which faces were presented individually, and also showed that familiarity categorization is sensitive to configural information for this type of face. Taken together, these findings demonstrate that familiarity determines detection of faces in their original configuration, and that configuration also enhances identity categorization of familiar faces.

However, the experiments also provide evidence that any sensitivity to, and benefits of, configuration are limited. In the discrimination task, for instance, the effects of configuration were numerically moderate (14.8% in Experiment 1, 4.6% in Experiment 2) and accuracy was fairly low (e.g., 74.6% for original familiar faces in Experiment 1, 67.7% in Experiment 2, against a chance level of 50%). In turn, recognition performance in the familiarity task was generally high (89.4% or above) even when configuration had been changed. Moreover, further analyses that combined accuracy across the discrimination and familiarity tasks, and that correlated these measures, suggest that sensitivity to configuration during discrimination was not related to recognition of familiar faces. In addition, observers were not faster to discriminate images of familiar than unfamiliar faces based on configuration in either experiment, but they were faster to categorize familiar faces based on identity-specific familiarity. Therefore, the RT data also suggest that configuration does not have a specific role in processing familiar faces (over unfamiliar faces) in the tasks reported here. Taken together, these results indicate that sensitivity to the configural information manipulated here was limited, and that this information did not appear to be the primary means of face identification in the familiarity task.

We note the above-chance performance in discriminating unfamiliar faces in both experiments, except for faces with a changed configuration in Experiment 2. This above-chance performance was not due to response bias (Experiment 2), although there was a moderate effect size in criterion with unfamiliar faces, which might suggest observers were somewhat inclined to respond “unchanged” in this task. These data might reflect artefacts introduced when creating the stimuli with graphics software, which may have alerted observers to the instances in which the faces had been changed. Alternatively, it is possible that the changed faces deviated from a generic face template by having the eyes set closer together or further apart than occurs naturally in faces. However, as configural theory suggests that encoded inter-feature metric distances are useful for individuating and recognizing faces (Richler et al., 2009), a generic face template may not be helpful in discriminating the configuration of individual faces. Moreover, had the manipulation of interocular distance been a consistent cue for observer responses, then we would have observed higher accuracy in discriminating the configuration of familiar faces.

These findings help to bridge seemingly disparate literatures on face recognition. On one hand, our results appear consistent with previous studies that have shown sensitivity to configuration (e.g., Freire et al., 2000; Leder & Bruce, 2000; Leder & Carbon, 2006; Rhodes et al., 1993; see also Mondloch et al., 2002). Those studies demonstrated such effects with unfamiliar faces that were briefly learned at the start of the experiments. In the current study, we observed similar effects with faces already familiar to participants, which did not require prior learning, but not with completely unfamiliar (i.e., not learned) faces. One way to reconcile these results is that stored cognitive representations of both well-known and newly learned faces incorporate subtle interocular distances, and that observers can access this knowledge during the discrimination of these faces.

On the other hand, the observation that categorization of changed familiar faces proceeded largely unhindered here also resonates with findings that recognition is robust when configuration is altered subtly by factors such as lens distortion (Noyes & Jenkins, 2017) or drastically through geometric distortions (Baseler et al., 2016; Bindemann et al., 2008; Gilad-Gutnick et al., 2018; Hole et al., 2002; Sandford et al., 2018). One way to reconcile these results, of some sensitivity to configuration but largely intact recognition even when this is changed, is that some metric information must be encoded inadvertently when faces are learned. At the same time, considering that faces vary naturally in appearance through rigid (e.g., view) and non-rigid (e.g., expressions) transformations (see, e.g., Jenkins et al., 2011; Kramer et al., 2018; Zhou & Mondloch, 2016), one should also expect configuration to change with this natural variation. Of course, such variation does not change the identity of a person per se, so the cognitive face recognition system must be capable of dealing with such differences (e.g., Jenkins et al., 2011; Redfern & Benton, 2017). This is an important characteristic of face recognition that is missed in studies of configuration that employ highly controlled images of unfamiliar or computer-generated faces, which feature only superficial changes between learning and test (e.g., Leder & Bruce, 2000; Leder & Carbon, 2006; Rhodes et al., 1993). This scenario does not resemble face learning in the real world, where exposure to variation in a person’s appearance appears to be particularly important for robust recognition (e.g., Dowsett, Sandford, & Burton, 2016; Ritchie & Burton, 2017; Robins et al., 2018).

We are, of course, not the first to raise concerns about the role of configuration in the recognition of familiar faces (see, e.g., Baseler et al., 2016; Burton, 2013; Burton, Schweinberger, Jenkins, & Kaufmann, 2015; Gilad-Gutnick et al., 2018; Hole et al., 2002; Itz et al., 2018), but to our knowledge we are the first to provide this direct test of configural processing in familiar face recognition. Thus, it is possible that other studies, perhaps focusing on spatial relationships other than interocular distance, will find stronger evidence for a role of configuration in the recognition of familiar faces. We focused on the eye region in the experiments reported here because previous studies suggest an important role for the eyes in the recognition of familiar faces (see, e.g., Brooks & Kemp, 2007; Gilad, Meng, & Sinha, 2009; Sormaz, Andrews, & Young, 2013; but see also Kramer et al., 2018), and because this manipulation is in line with those reported for unfamiliar faces in the configural processing literature (Barton, Keenan, & Bass, 2001; Crookes & Hayward, 2012; Freire et al., 2000; Leder & Bruce, 2000; Mondloch et al., 2002; Rhodes et al., 2006; Sekunova & Barton, 2008).

To summarize, this study manipulated interocular distance in familiar and unfamiliar faces to investigate whether face recognition relies on this metric information. Configural theory suggests recognition of known faces relies on encoded inter-feature distances (Richler et al., 2009), but while previous studies show that observers can detect changes to configuration with unfamiliar faces (e.g., Freire et al., 2000; Mondloch et al., 2002), others have found recognition of familiar faces to be largely unimpaired by large configural changes (e.g., Bindemann et al., 2008; Hole et al., 2002; Sandford et al., 2018). Here, we found some support for configural theory whereby familiarity determined detection of faces in their original configuration and enhanced identity categorization. However, discrimination of configuration was generally low, whereas recognition was high even when configuration had been changed. Moreover, discrimination of configuration did not relate directly to recognition of familiar faces at the level of the individual, which contrasts with configural theory’s suggestion that inter-feature distances are encoded and relied upon for recognition of faces. Overall, these results suggest that configuration theory provides limited explanatory power for the recognition of familiar faces.

References

  1. Adini, Y., Moses, Y., & Ullman, S. (1997). Face recognition: The problem of compensating for changes in illumination direction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 721-732. https://doi.org/10.1109/34.598229

  2. Baker, K. A., Laurence, S., & Mondloch, C.J. (2017). How does a newly encountered face become familiar? The effect of within-person variability on adults’ and children’s perception of identity. Cognition, 161, 19-30. https://doi.org/10.1016/j.cognition.2016.12.012

  3. Balas, B., & Pearson, H. (2017). Intra- and extra-personal variability in person recognition. Visual Cognition, 25(4-6), 456-469. https://doi.org/10.1080/13506285.2016.1274809

  4. Barton, J. J. S., Keenan, J. P., & Bass, T. (2001). Discrimination of spatial relations and features in faces: Effects of inversion and viewing duration. British Journal of Psychology, 92(3), 527-549. https://doi.org/10.1348/000712601162329

  5. Baseler, H. A., Young, A. W., Jenkins, R., Burton, A. M., & Andrews, T. J. (2016). Face-selective regions show invariance to linear, but not to non-linear, changes in facial images. Neuropsychologia, 93(A), 76-84. https://doi.org/10.1016/j.neuropsychologia.2016.10.004

  6. Bindemann, M., Burton, A. M., Leuthold, H., & Schweinberger, S. R. (2008). Brain potential correlates of face recognition: Geometric distortions and the N250r brain response to stimulus repetitions. Psychophysiology, 45(4), 535-544. https://doi.org/10.1111/j.1469-8986.2008.00663.x

  7. Brooks, K. R., & Kemp, R. I. (2007). Sensitivity to feature displacement in familiar and unfamiliar faces: Beyond the internal/external feature distinction. Perception, 36(11), 1646-1659. https://doi.org/10.1068/p5675

  8. Burton, A.M. (2013). Why has research in face recognition progressed so slowly? The importance of variability. The Quarterly Journal of Experimental Psychology, 66(8), 1467-1485. https://doi.org/10.1080/17470218.2013.800125

  9. Burton, A. M., & Jenkins, R. (2011). Unfamiliar face perception. In A. J. Calder, G. Rhodes, M. H. Johnson, & J. V. Haxby (Eds.), The Oxford Handbook of Face Perception (pp. 287-306). Oxford: Oxford University Press

  10. Burton, A. M., Jenkins, R., & Schweinberger, S. R. (2011). Mental representations of familiar faces. British Journal of Psychology, 102(4), 943-958. https://doi.org/10.1111/j.2044-8295.2011.02039.x

  11. Burton, A. M., Schweinberger, S. R., Jenkins, R., & Kaufmann, J. M. (2015). Arguments against a ‘configural processing’ account of face recognition. Perspectives on Psychological Science, 10(4), 482-496. https://doi.org/10.1177/1745691615583129

  12. Burton, A. M., Wilson, S., Cowan, M., & Bruce, V. (1999). Face recognition in poor-quality video: Evidence from security surveillance. Psychological Science, 10(3), 243-248. https://doi.org/10.1111/1467-9280.00144

  13. Crookes, K., & Hayward, W. G. (2012). Face inversion disproportionately disrupts sensitivity to vertical over horizontal changes in eye position. Journal of Experimental Psychology: Human Perception and Performance, 38(6), 1428-1437. https://doi.org/10.1037/a0027943

  14. de Gelder, B., & Rouw, R. (2000). Paradoxical configuration effects for faces and objects in prosopagnosia. Neuropsychologia, 38(9), 1271-1279. https://doi.org/10.1016/s0028-3932(00)00039-7

  15. Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115(2), 107-117. https://doi.org/10.1037/0096-3445.115.2.107

  16. Dowsett, A. J., Sandford, A., & Burton, A. M. (2016). Face learning with multiple images leads to fast acquisition of familiarity for specific individuals. The Quarterly Journal of Experimental Psychology, 69(1), 1-10. https://doi.org/10.1080/17470218.2015.1017513

  17. Farah, M. J., Wilson, K. D., Drain, H. M., & Tanaka, J. R. (1995). The inverted faces inversion effect in prosopagnosia: Evidence for mandatory, face-specific perceptual mechanisms. Vision Research, 35(14), 2089-2093. https://doi.org/10.1016/0042-6989(94)00273-o

  18. Freire, A., Lee, K., & Symons, L. A. (2000). The face-inversion effect as a deficit in the encoding of configural information: Direct evidence. Perception, 29(2), 159-170. https://doi.org/10.1068/p3012

  19. Gilad, S., Meng, M., & Sinha, P. (2009). Role of ordinal contrast relationships in face encoding. Proceedings of the National Academy of Sciences of the United States of America, 106(13), 5353-5358. https://doi.org/10.1073/pnas.0812396106

  20. Gilad-Gutnick, S., Harmatz, E. S., Tsourides, K., Yovel, G., & Sinha, P. (2018). Recognizing facial slivers. Journal of Cognitive Neuroscience, 30(7), 951-962. https://doi.org/10.1162/jocn_a_01265

  21. Hancock, P. J. B, Bruce V., & Burton, A. M. (2000). Recognition of unfamiliar faces. Trends in Cognitive Sciences, 4(9), 330-337. https://doi.org/10.1016/S1364-6613(00)01519-9

  22. Hole, G. J., George, P. A., Eaves, K., & Rasek, A. (2002). Effects of geometric distortions on face-recognition performance. Perception, 31(10), 1221-1240. https://doi.org/10.1068/p3252

  23. Hosie, J. A., Ellis, H. D., & Haig, N. D. (1988). The effect of feature displacement on the perception of well-known faces. Perception, 17(4), 461-474. https://doi.org/10.1068/p170461

  24. Itz, M. L., Schweinberger, S. R., & Kaufmann, J. M. (2018). Familiar face priming: The role of second-order configuration and individual face recognition abilities. Perception, 47(2), 185-196. https://doi.org/10.1177/0301006617742069

  25. Jenkins, R., & Burton, A. M. (2011). Stable face representations. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1571), 1671-1683. https://doi.org/10.1098/rstb.2010.0379

  26. Jenkins, R., White, D., Van Montfort, X., & Burton, A. M. (2011). Variability in photos of the same face. Cognition, 121(3), 313-323. https://doi.org/10.1016/j.cognition.2011.08.001

  27. Johnston, R. A., & Edmonds, A. J. (2009). Familiar and unfamiliar face recognition: A review. Memory, 17(5), 577-596. https://doi.org/10.1080/09658210902976969

  28. Kramer, R. S. S., Manesi, Z., Towler, A., Reynolds, M. G., & Burton, A. M. (2018). Familiarity and within-person facial variability: The importance of internal and external features. Perception, 47(1), 3-15. https://doi.org/10.1177/0301006617725242

  29. Leder, H., & Bruce, V. (2000). When inverted faces are recognised: The role of configural information in face recognition. The Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 53(2), 513-536. https://doi.org/10.1080/713755889

  30. Leder, H., & Carbon, C-C. (2006). Face-specific configural processing of relational information. British Journal of Psychology, 97, 19-29. https://doi.org/10.1348/000712605X54794

  31. Lewis, M. B., & Glenister, T. E. (2003). A sideways look at configural encoding: Two different effects of face rotation. Perception, 32(1), 7-14. https://doi.org/10.1068/p3404

  32. Longmore, C. A., Liu, C. H., & Young, A. W. (2008). Learning faces from photographs. Journal of Experimental Psychology: Human Perception and Performance, 34(1), 77-100. https://doi.org/10.1037/0096-1523.34.1.77

  33. Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6(6), 255-260. https://doi.org/10.1016/S1364-6613(02)01903-4

  34. Mian, J. F., & Mondloch, C. J. (2012). Recognizing identity in the face of change: The development of an expression-independent representation of facial identity. Journal of Vision, 12(7), 1-11. https://doi.org/10.1167/12.7.17

  35. Mondloch, C. J., Le Grand, R., & Maurer, D. (2002). Configural face processing develops more slowly than featural face processing. Perception, 31(5), 553-566. https://doi.org/10.1068/p3339

  36. Moscovitch, M., Winocur, G., & Behrmann, M. (1997). What is special about face recognition? Nineteen experiments on a person with visual object agnosia and dyslexia but normal face recognition. Journal of Cognitive Neuroscience, 9(5), 555-604. https://doi.org/10.1162/jocn.1997.9.5.555.

  37. Noyes, E., & Jenkins, R. (2017). Camera-to-subject distance affects face configuration and perceived identity. Cognition, 165, 97-104. https://doi.org/10.1016/j.cognition.2017.05.012

  38. Piepers, D. W., & Robbins, R. A. (2012). A review and clarification of the terms “holistic”, “configural”, and “relational” in the face perception literature. Frontiers in Psychology, 3(559). https://doi.org/10.3389/fpsyg.2012.00559

  39. Rakover, S. S. (2002). Featural vs. configurational information in faces: A conceptual and empirical analysis. British Journal of Psychology, 93(1), 1-30. https://doi.org/10.1348/000712602162427

  40. Redfern, A. S., & Benton, C. P. (2017). Expressive faces confuse identity. i-Perception, 8(5), 1-21. https://doi.org/10.1177/2041669517731115

  41. Rhodes, G., Brake, S., & Atkinson, A. P. (1993). What’s lost in inverted faces? Cognition, 47(1), 25-57. https://doi.org/10.1016/0010-0277(93)90061-Y

  42. Rhodes, G., Hayward, W. G., & Winkler, C. (2006). Expert face coding: Configural and component coding of own-race and other-race faces. Psychonomic Bulletin & Review, 13(3), 499-505. https://doi.org/10.3758/BF03193876

  43. Richler, J. J., Mack, M. L., Gauthier, I., & Palmeri, T. J. (2009). Holistic processing of faces happens at a glance. Vision Research, 49(23), 2856-2861. https://doi.org/10.1016/j.visres.2009.08.025

  44. Ritchie, K. L., & Burton, A. M. (2017). Learning faces from variability. The Quarterly Journal of Experimental Psychology, 70(5), 897-905. https://doi.org/10.1080/17470218.2015.1136656

  45. Robins, E., Susilo, T., Ritchie, K. L., & Devue, C. (2018). Within-person variability promotes learning of internal facial features and facilitates perceptual discrimination and memory. https://doi.org/10.31219/osf.io/5scnm

  46. Sandford, A. (2017). Configural processing and the recognition of familiar faces. In M. Bindemann & A. M. Megreya (Eds.), Face processing: Systems, disorders and cultural differences (pp. 121-136). New York: Nova Science Publishers, Inc.

  47. Sandford, A., & Burton, A. M. (2014). Tolerance for distorted faces: Challenges to a configural processing account of familiar face recognition. Cognition, 132(2), 262-268. https://doi.org/10.1016/j.cognition.2014.04.005

  48. Sandford, A., & Rego, S. (2019). Recognition of deformed familiar faces: Contrast negation and nonglobal stretching. Perception, 48(10), 992-1012. https://doi.org/10.1177/0301006619872059

  49. Sandford, A., Sarker, T., & Bernier, T. (2018). Effects of geometric distortions, Gaussian blur, and contrast negation on recognition of familiar faces. Visual Cognition, 26(3), 207-222. https://doi.org/10.1080/13506285.2017.1407853

  50. Searcy, J. H., & Bartlett, J. C. (1996). Inversion and processing of component and spatial-relational information in faces. Journal of Experimental Psychology: Human Perception and Performance, 22(4), 904-915. https://doi.org/10.1037/0096-1523.22.4.904

  51. Sekunova, A., & Barton, J. S. S. (2008). The effects of inversion on the perception of long-range and local spatial relations in eye and mouth configuration. Journal of Experimental Psychology: Human Perception and Performance, 34(5), 1129-1135. https://doi.org/10.1037/0096-1523.34.5.1129

  52. Sormaz, M., Andrews, T. J., & Young, A. W. (2013). Contrast negation and the importance of the eye region for holistic representations of facial identity. Journal of Experimental Psychology: Human Perception and Performance, 39(6), 1667-1677. https://doi.org/10.1037/a0032449

  53. Young, A. W., & Burton, A. M. (2017). Recognizing faces. Current Directions in Psychological Science, 26(3), 212-217. https://doi.org/10.1177/0963721416688114

  54. Zhou, X., & Mondloch, C. J. (2016). Recognizing “Bella Swan” and “Hermione Granger”: No own-race advantage in recognizing photos of famous faces. Perception, 45(2), 1426-1429. https://doi.org/10.1177/0301006616662046

Acknowledgements

The authors would like to thank Afnan Khan, Courtney Rende, Kavita Brijpaul, and Skylar Rego for assistance with data collection.

Funding

This research was supported by a University of Guelph-Humber grant.

Data repository

Data for both reported experiments are readily available at: https://osf.io/m4gf2/?view_only=50a068e1b37c44b096259a1b3b10a373

Author information

Corresponding author

Correspondence to Adam Sandford.

Ethics declarations

Declaration of conflicting interests

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article

Sandford, A., Bindemann, M. Discrimination and recognition of faces with changed configuration. Mem Cogn 48, 287–298 (2020). https://doi.org/10.3758/s13421-019-01010-7

Keywords

  • Face
  • Recognition
  • Configuration