
Motivation and Emotion, Volume 36, Issue 1, pp 16–26

The role of the amygdala in face perception and evaluation

Alexander Todorov

Open Access | Original Paper

Abstract

Faces are one of the most significant social stimuli, and the processes underlying face perception lie at the intersection of cognition, affect, and motivation. Vision scientists have had tremendous success in mapping the regions for perceptual analysis of faces in posterior cortex. Based on evidence from (a) single unit recording studies in monkeys and humans; (b) human functional localizer studies; and (c) meta-analyses of neuroimaging studies, I argue that faces automatically evoke responses not only in these regions but also in the amygdala. I also argue that (a) a key property of faces represented in the amygdala is their typicality; and (b) one of the functions of the amygdala is to bias attention to atypical faces, which are associated with higher uncertainty. This framework is consistent with a number of other amygdala findings not involving faces, suggesting a general account of the role of the amygdala in perception.

Keywords

Amygdala · Face perception · Face evaluation · Social cognition

Introduction

In one of the first attempts to formulate a model of the social brain, Brothers (1990) considered a few regions, primarily focusing on the amygdala, orbitofrontal cortex, and the superior temporal sulcus. Since then, the number of regions implicated in social cognition has rapidly proliferated (Adolphs 2009; Lieberman 2010; Todorov et al. 2011a). The two major reasons for this proliferation are methodological advances in functional neuroimaging research and the introduction of multiple experimental paradigms tapping diverse aspects of social cognition. These aspects range from the study of perception of emotional expressions to the study of representation of others’ mental states and actions. At the same time, various proposals have been made about the core region(s) underlying social cognition. Depending on one’s interests, the seat of social cognition is either in the medial prefrontal cortex (Amodio and Frith 2006), the temporoparietal junction (Saxe and Wexler 2005), or in the inferior frontal gyrus and the inferior parietal lobule (Gallese et al. 2004). Although these proposals have great merit, they have been derived by narrowing social cognition down to specific tasks such as understanding beliefs and understanding goal-directed actions. Ultimately, multiple functional brain networks underlie the complexity of social cognition.

Arguably, a good starting point for building a comprehensive model of social cognition is the ability to represent others as distinct individuals. Understanding actions, beliefs, and intentions presupposes the ability to perceive and represent other people as agents. For most people, face perception and memory is critical for representing others, and people are extremely adept at that task. Decades of computer science research have yet to produce a computer model that approximates human performance in face recognition (Bowyer et al. 2006; Sinha et al. 2006). Moreover, faces are not only used to represent and track individuals over time, but also provide a wealth of social information ranging from the individual’s membership in social categories (e.g., age, sex, race) to his or her mental and emotional states (e.g., bored, anxious, etc.). Not surprisingly, after extremely brief exposures or from highly degraded visual input, people can identify faces (Grill-Spector and Kanwisher 2005; Yip and Sinha 2002) and their race and gender (Cloutier et al. 2005; Martin and Macrae 2007), recognize their emotional expressions (Esteves and Öhman 1993), and make a variety of social judgments such as aggressiveness (Bar et al. 2006), trustworthiness (Todorov et al. 2009), and sexual orientation (Rule and Ambady 2008). Perception of faces is inherently imbued with affect (Todorov et al. 2008).

Yet, until recently face perception has been generally considered a cognitive area of research with forays into other areas only when emotional expressions or affective associations with faces are the focus of research. Standard cognitive models (Bruce and Young 1986) and their corresponding neural equivalents (Haxby et al. 2000) are not framed in social terms, and regions that are dedicated to face processing are rarely framed as “social regions.” To a large extent, this probably reflects disciplinary divisions and interests. The neural underpinnings of face perception have been primarily studied by vision scientists for whom faces are a well-defined category of complex stimuli that can be contrasted to other categories of complex stimuli such as houses.

Face selectivity in the brain

Vision scientists have had tremendous success in mapping the regions responsible for face perception. Until the middle of the twentieth century, it was not even established that the inferior temporal (IT) cortex is involved in vision and, in particular, object recognition (Gross 1994). Face selective neurons were discovered in the IT cortex of the macaque brain in the 1970s (Bruce et al. 1981; Desimone 1991; Perrett et al. 1982). Consistent with these findings, Positron Emission Tomography studies of humans in the early 1990s reported face responsive regions in fusiform and inferior temporal regions (Haxby et al. 1993; Sergent et al. 1992). Electrophysiological studies recording from the same regions in epileptic patients found negative potentials (N200) evoked by faces (Allison et al. 1994). Subsequent functional Magnetic Resonance Imaging (fMRI) studies using a variety of categories established a face selective region in the fusiform gyrus (Kanwisher et al. 1997; McCarthy et al. 1997). This region—labeled the fusiform face area (FFA; Kanwisher et al. 1997)—can be reliably identified in individual subjects and its location is robust with respect to task demands (Berman et al. 2010). Two other regions can be consistently identified across most subjects: a region in the posterior Superior Temporal Sulcus (pSTS) and a region in the occipital gyrus—labeled the occipital face area (OFA). These regions are usually referred to as comprising the core system for perceptual analysis of faces (Haxby et al. 2000; Said et al. 2011).

Two of the most exciting recent developments in the field are the combination of fMRI and single cell recordings in macaques (Tsao et al. 2006) and the use of Transcranial Magnetic Stimulation (TMS) in humans (Pitcher et al. 2007). Tsao and her colleagues used fMRI to identify face selective patches in the macaque brain and then recorded from these patches. They identified a stunning number of face selective neurons in these patches. Whereas in previous studies face selective neurons rarely exceeded 20% of the recorded sample, Tsao and her colleagues found that more than 90% of the neurons in some of the patches were face selective. Pitcher and his colleagues used TMS to transiently disrupt the activity of the right OFA (it is not possible to target the FFA) and found that this affected performance on face perception tasks.

Undoubtedly, we have accumulated rich evidence for the importance of the “core” regions in face perception. However, given the affective and social significance of faces, the question is whether the core regions are sufficient to describe face perception. Of course, researchers have acknowledged the participation of other regions, including both subcortical and prefrontal, but these regions are usually considered as part of the “extended” as opposed to “core” system of face processing (Haxby et al. 2000).

In the rest of the paper, I argue that faces automatically evoke responses not only in the core regions but also in regions in the medial temporal lobe (MTL). In particular, I focus on the amygdala and argue that it is an integral part of the functional network dedicated to face processing. In the next section of the paper, I review evidence consistent with a general role of the amygdala in face processing. This evidence comes from (a) single cell recording studies in both monkeys and humans; (b) human functional localizer studies; and (c) meta-analyses of neuroimaging studies involving faces. In the last section of the paper, I propose a hypothesis that the key property of faces represented in the amygdala is their typicality. I also attempt to place this hypothesis into an overall framework that accommodates not only face findings but also findings from other stimuli and other modalities.

Face selective responses in the primate amygdala

The importance of the amygdala for perception, learning, memory, and behavior is well established (Aggleton 2000). In almost all cases, the role of the amygdala is related to the affective significance of stimuli. In this context, it is not surprising that the first functional neuroimaging studies that targeted the amygdala and face perception used faces expressing emotions (Breiter et al. 1996; Morris et al. 1996). However, it is unlikely that the role of the amygdala in face processing is limited to processing of emotional expressions.

At about the time of the discovery of face selective neurons in IT cortex, it was also discovered that there are visually responsive neurons in the macaque’s amygdala and that some of these neurons respond to faces (Sanghera et al. 1979). A number of subsequent neurophysiology studies reported face responsive neurons in the amygdala (Perrett et al. 1982; Leonard et al. 1985; Rolls 1984; Wilson and Rolls 1993; for a review see Rolls 2000). Later studies confirmed and extended these findings. Nakamura et al. (1992) showed that the amygdala responds to visual stimuli that are not relevant to the immediate task, and that a high proportion of the visual neurons are category selective, with some neurons preferring monkey faces and a smaller proportion preferring human faces. Other studies have found selective responses for emotional expressions and identity (Gothard et al. 2007) and supramodal neurons responding to both visual (faces) and auditory (sounds) social cues (Kuraoka and Nakamura 2007).

Importantly, the monkey neurophysiology findings have been confirmed in human studies (Fried et al. 1997; Kreiman et al. 2000). Fried and his colleagues recorded from neurons in the MTL of patients undergoing treatment for epilepsy. They found face selective neurons in the amygdala, hippocampus, and entorhinal cortex. Subsequent studies have shown that the responses of some of these neurons are modulated by face familiarity (Quiroga et al. 2005; Viskontas et al. 2009). These findings are consistent with findings from patients with amygdala lesions who show impairments in face recognition (Aggleton and Shaw 1996), although the most studied patient with bilateral amygdala damage, SM (Adolphs and Tranel 2000), seems to be primarily impaired at recognition of fearful expressions.

The logic of neurophysiology studies on category selectivity is to present stimuli representing different categories (e.g., faces, everyday objects, novel objects, etc.) and look for neurons that show preference for one or more categories. The same logic underlies neuroimaging studies that use functional localizers. In such studies, human subjects are presented with faces and a number of other categories such as houses, hands, chairs, flowers, etc. Such studies identified the FFA (Kanwisher et al. 1997; McCarthy et al. 1997; Tong et al. 2000), the OFA (Gauthier et al. 2000; Puce et al. 1996), and face selective regions in the pSTS (Allison et al. 2000; Puce et al. 1996). Despite some controversy about the value of functional localizers (Friston et al. 2006; Saxe et al. 2006), they are an excellent tool for identifying category selective regions and then probing the response properties of these regions. A recent meta-analysis also shows that, at least in the case of localizing the FFA, the results are robust with respect to task demands and control categories (Berman et al. 2010).

If there are face selective neurons in the amygdala, as suggested by neurophysiology studies, why is it that fMRI studies that use functional localizers do not detect face selective voxels in the amygdala? There are at least two reasons. First, the amygdala is a very small structure that is difficult to image not only because of its size but also because of its location (LaBar et al. 2001; Zald 2003). Moreover, in almost all neurophysiology studies, the number of face selective neurons is small, rarely exceeding 10% of the recorded neurons. This suggests that there would be only a few face selective voxels in the amygdala. Second, given the expected small size of face selective clusters, it would be difficult to find these clusters unless one is looking for them. In fact, there is large variation across individual subjects in functional localizer studies. The typical approach in such studies is to threshold the statistical map of the contrast of faces versus the control category (or categories) at a specified probability value (e.g., p = .005) and then to record the locations of face selective regions for each subject. However, the number of observed peaks can vary from a few or none in some subjects to a few dozen in other subjects. Researchers would typically record peaks from the fusiform gyri, occasionally from the occipital gyri and pSTS, and rarely from other regions. Some of this individual variation in observed peaks is due to measurement error, which can be reduced by averaging across subjects. However, functional localizers were specifically introduced to map category selective regions in individual brains and, hence, to avoid the need to conduct group analyses (Kanwisher et al. 1997). The rationale for using localizers is that brains are individually different and, hence, group alignment can distort the data.
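The per-subject localizer analysis described above (thresholding a statistical map and recording the locations of face selective peaks) can be sketched on synthetic data. This is a minimal illustration, not the pipeline of any cited study; the volume shape and the planted cluster are arbitrary choices for the example.

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)

# Illustrative per-subject statistical map for the faces-vs-control contrast:
# a 3D volume of z values (real pipelines produce these with SPM/FSL/AFNI).
zmap = rng.normal(size=(20, 20, 20))
zmap[8:11, 8:11, 8:11] += 4.0  # a small simulated "face selective" cluster

# Threshold at p = .005 (one-tailed), as in a typical localizer analysis.
z_thresh = stats.norm.isf(0.005)
suprathreshold = zmap > z_thresh

# Label contiguous suprathreshold clusters and record each cluster's peak.
labels, n_clusters = ndimage.label(suprathreshold)
peaks = ndimage.maximum_position(zmap, labels=labels,
                                 index=range(1, n_clusters + 1))
print(n_clusters, peaks)
```

Note that on pure noise this procedure still yields scattered single-voxel "clusters," which is one reason the number of recorded peaks varies so much across subjects.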

Not surprisingly, researchers who use face localizers rarely report group analyses, although these analyses can be informative and more reliable than individual level analyses (Poldrack et al. 2009). In a recent meta-analysis of studies that used functional localizers to localize the FFA, Berman and colleagues (2010) selected 49 out of 339 papers. These were studies on healthy adults that reported both the coordinates of the localized FFA and the localization task. Out of these papers, only nine reported the group analysis from the face localizer (Chen et al. 2007; Downing et al. 2006; Eger et al. 2004, 2005; Henson and Mouchlianitis 2007; Kesler-West et al. 2001; Maurer et al. 2007; Pourtois et al. 2005; Zhang et al. 2008). Four of the nine studies reported amygdala activation (see Table 1). Another study did not report a group analysis but reported that face selective voxels in the amygdala were identified by anatomical location and contrast between intact and scrambled faces (Ganel et al. 2005). Occasionally, researchers would report that they observed amygdala activation in face localizer contrasts but would not investigate this further or report the coordinates (Berman et al. 2010, p. 69; Jiang et al. 2009, p. 1085). It should be noted that the opposite is also true: emotion researchers interested in the amygdala would compare faces with another category of stimuli but not report group analyses or activations in posterior areas (Goossens et al. 2009; Hariri et al. 2002). In other cases, researchers would perform a group analysis but not individual level analyses (Fitzgerald et al. 2006; Wright and Liu 2006).
Table 1

Coordinates of face selective voxels in the amygdala from fMRI studies that compared activation to faces with activation to other categories

| Study | Control category | N | R. amygdala (x, y, z) | L. amygdala (x, y, z) |
|---|---|---|---|---|
| Fitzgerald et al. (2006) | Portable radios | 20 | — | −24, −3, −20 |
| Ganel et al. (2005) | Scrambled faces | 11 | 18, −7, −9 | — |
| Hariri et al. (2002) | Emotional scenes | 12 | 16, −5, −13 | — |
| Kesler-West et al. (2001) | Scrambled faces | 21 | 17, −7, −8 | −17, −9, −8 |
| Maurer et al. (2007) | Houses and common household objects | 12 | 20, −9, −20 | — |
| Pourtois et al. (2005) | Houses | 14 | — | −21, −15, −9 |
| Goossens et al. (2009) | Houses | 20 | 22, −3, −12 | −18, −3, −16 |
| Said et al. (2010) | Chairs | 37 | 17, −5, −10 | −17, −2, −10 |
| Wright and Liu (2006) | Pixilated patterns | 12 | 22, −7, −10 | −9, −3, −10 |
| Zhang et al. (2008) | Chinese characters, common objects, and scrambled images | 16 | 18, −1, −18 | — |

The coordinates are reported in Talairach space. Four of the studies (Kesler-West et al. 2001; Maurer et al. 2007; Pourtois et al. 2005; Zhang et al. 2008) are from the sample of studies in Berman et al. (2010). Two of the studies (Fitzgerald et al. 2006; Wright and Liu 2006) compared emotional and neutral faces with a control category. To extract face selective voxels in the amygdala, they performed a conjunction analysis of the individual face contrasts with the control category

Comparing the studies that found amygdala activation in response to faces and those that did not shows that the former had greater statistical power to detect such activations. First, studies that found amygdala activation tended to have larger samples (mean n = 14.8 vs. 12.4). Second, these studies used a less stringent statistical criterion in the group analysis (the most stringent threshold was p < .001 uncorrected, which was the minimum criterion in the other studies). To take two extreme examples, Kesler-West et al. (2001) and Chen et al. (2007) used the same contrast (faces vs. scrambled faces) but only Kesler-West et al. reported amygdala activation in the group analysis. However, whereas Kesler-West et al.’s study had 21 subjects and used uncorrected p < .001, Chen et al.’s study had 5 subjects and used Bonferroni corrected p < .001 across all voxels. In principle, it is better to be statistically conservative, but conservative procedures would penalize small regions, particularly when the sample size of the study is small. As shown in Table 1, many human studies report amygdala activation in functional localizer tasks. This is consistent with high resolution fMRI studies of monkeys that also find face selective voxels in the amygdala (Hoffman et al. 2007; Logothetis et al. 1999).

In one of our recent studies (Said et al. 2010), we used a face localizer and, following standard practice, recorded the peaks in fusiform gyri, occipital gyri, and pSTS. These data are revisited here.1 In the localizer task, subjects were presented with blocks of faces and chairs and asked to press a button when an image was repeated (one back task). As shown in Fig. 1, in addition to the clusters in the fusiform gyri (Fig. 1a), the group analysis showed large clusters in bilateral amygdala that were more active for faces than chairs (Fig. 1b, c). An analysis of individual subjects’ data showed that 30 out of 37 subjects had face responsive voxels in the amygdala. For this analysis, the map of the faces-greater-than-chairs contrast was liberally thresholded at p < .05 for each individual and then intersected with an anatomical mask of the amygdala. As with the FFA, there was individual variation across subjects with respect to the size and location of the clusters of face selective voxels (Fig. 2).
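The individual-subject analysis can be sketched as follows, again on synthetic data: each simulated subject's contrast map is liberally thresholded, intersected with an anatomical amygdala mask, and subjects with any surviving voxels are counted. The mask location and the effect sizes are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, vol_shape = 37, (20, 20, 20)

# Illustrative anatomical amygdala mask (in practice, from an atlas).
amygdala_mask = np.zeros(vol_shape, dtype=bool)
amygdala_mask[4:7, 9:12, 5:8] = True

# Liberal threshold p < .05 one-tailed, i.e., z > 1.645, applied to each
# subject's faces-greater-than-chairs contrast map.
z_thresh = 1.645
n_with_face_voxels = 0
for _ in range(n_subjects):
    zmap = rng.normal(size=vol_shape)
    zmap[amygdala_mask] += rng.normal(1.0, 1.0)  # subject-varying face response
    face_voxels = (zmap > z_thresh) & amygdala_mask
    if face_voxels.any():
        n_with_face_voxels += 1

print(f"{n_with_face_voxels} of {n_subjects} subjects")
```

The liberal threshold is deliberate: with a small structure and small clusters, a stringent per-subject threshold would miss most of the face responsive voxels.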
Fig. 1

Brain regions responding more strongly to faces than to chairs: bilateral fusiform gyri (a) and bilateral amygdala (b, c). The regions were identified in a group analysis (n = 37), p < .001 (uncorrected)

Fig. 2

Clusters of voxels in the amygdala of individual subjects responding more strongly to faces than to chairs. The statistical maps for individual subjects were thresholded at p < .05 and intersected with an anatomical mask of the amygdala. Different colors indicate different clusters within the amygdala. The clusters are shown on a standardized brain image

These findings suggest that standard functional localizers can be used to identify face selective voxels in the amygdala. However, the conclusions may be limited given that we used a single control category. At the same time, using a single control category (e.g., scrambled faces, houses, etc.) to localize the posterior face selective network is a common practice and it seems that the type of category does not seriously affect the localization (Berman et al. 2010; Downing et al. 2006). Nevertheless, we need more targeted studies that use multiple categories to test for face selectivity in the amygdala.

In addition to data from single unit recordings and functional localizer studies, data from meta-analyses of functional neuroimaging studies also support a general role of the amygdala in face processing. Two large meta-analyses of PET and fMRI studies on emotional processing showed that faces are one of the classes of stimuli that most consistently elicit responses in the amygdala (Costafreda et al. 2008; Sergerie et al. 2008). The only stimuli more potent in eliciting amygdala responses were gustatory and olfactory stimuli (Costafreda et al. 2008).

Two other meta-analyses (Bzdok et al. in press; Mende-Siedlecki, Said, and Todorov, under review) analyzed fMRI studies on face evaluation. These studies typically presented emotionally neutral faces that varied either on attractiveness or perceived trustworthiness. Using an Activation Likelihood Estimation approach, Bzdok and colleagues analyzed 16 studies. Using a Multi-level Kernel Density Analysis (MKDA) approach, which treats contrast maps rather than individual activation peaks as the unit of analysis (Wager et al. 2008), Mende-Siedlecki and colleagues analyzed 30 studies. In both meta-analyses, one of the most consistently activated regions across studies was the amygdala (see Table 2).
Table 2

Coordinates of voxels in the amygdala identified in (a) meta-analyses of fMRI studies on face evaluation; (b) face localization studies (see Table 1); and (c) meta-analyses of studies on emotion processing irrespective of faces

| Meta-analysis | R. amygdala (x, y, z) | L. amygdala (x, y, z) |
|---|---|---|
| *Face evaluation studies* | | |
| Bzdok et al. (in press) (n = 16) | 26, −1, −18 | — |
| | 18, −8, −11 | −18, −7, −15 |
| Mende-Siedlecki et al. (under review) (n = 30) | 20, −3, −12 | −18, −3, −12 |
| Average coordinates for face selective voxels weighted by sample size (Table 1) | 18.5, −5.2, −11.9 | −18.0, −5.1, −12.1 |
| *Emotional processing studies* | | |
| Costafreda et al. (2008) (n = 94) | 22, −6, −12 | −22, −6, −12 |
| Sergerie et al. (2008) (n = 148) | 22, −5, −12 | −21, −6, −14 |

The coordinates are reported in Talairach space
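The sample-size-weighted averages in Table 2 can be reproduced directly from the Table 1 entries. The sketch below does so for the right amygdala; studies without a right-amygdala peak are simply omitted.

```python
import numpy as np

# (N, x, y, z) for the right-amygdala peaks listed in Table 1.
right = np.array([
    (11, 18, -7,  -9),   # Ganel et al. (2005)
    (12, 16, -5, -13),   # Hariri et al. (2002)
    (21, 17, -7,  -8),   # Kesler-West et al. (2001)
    (12, 20, -9, -20),   # Maurer et al. (2007)
    (20, 22, -3, -12),   # Goossens et al. (2009)
    (37, 17, -5, -10),   # Said et al. (2010)
    (12, 22, -7, -10),   # Wright and Liu (2006)
    (16, 18, -1, -18),   # Zhang et al. (2008)
], dtype=float)

# Weight each study's peak by its sample size N.
n = right[:, 0]
weighted = (right[:, 1:] * n[:, None]).sum(axis=0) / n.sum()
print(weighted.round(1))  # [18.5, -5.2, -11.9], matching Table 2
```

The same computation over the left-amygdala entries of Table 1 yields the −18.0, −5.1, −12.1 average reported in Table 2.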

To sum up, both single unit recording data and neuroimaging data suggest that the primate amygdala contains neurons that respond to faces.

The role of the amygdala in face processing

The question about the computational role of the amygdala in face processing is much harder than the question about establishing face selectivity in the amygdala. Although initial fMRI studies focused on the role of the amygdala in processing of fearful expressions (Morris et al. 1996; Whalen et al. 1998), subsequent studies supported a much broader role in face processing. First, many studies have observed amygdala responses not only to fearful but also to other emotional expressions, including positive expressions (e.g., Pessoa et al. 2006; Sergerie et al. 2008; Winston et al. 2003; Yang et al. 2002). Second, as described above, meta-analyses of fMRI studies on face evaluation that typically use emotionally neutral faces show that the amygdala is one of the most consistently activated regions in these studies (Bzdok et al. in press; Mende-Siedlecki et al. under review). Moreover, many studies have observed non-linear amygdala responses with stronger responses to both negative and positive faces than to faces at the middle of the continuum (Said et al. 2009, 2010; Todorov et al. 2011b; Winston et al. 2007). Third, amygdala responses have been observed to bizarre faces (faces with inverted features; Rotshtein et al. 2001) and to novel faces (Kosaka et al. 2003; Schwartz et al. 2003).

To start answering the question about the computational role of the amygdala in face processing, one needs to have a working model of how faces are represented. According to the idea of face space (Valentine 1991), faces are represented as points in a multi-dimensional face space (MDFS). Face space is a high dimensional space in which every face can be approximated as a point defined by its coordinates on the face dimensions. These dimensions define abstract, global properties of the faces. Valentine (1991) used this idea to account for a number of face recognition findings, including effects of distinctiveness (recognition advantage for distinctive faces) and race (recognition advantage for own race faces). Subsequently, face space models have been successfully used to account for a number of other face perception findings (Rhodes and Jeffery 2006; Tsao and Freiwald 2006) and to model social perception of faces (Oosterhof and Todorov 2008; Todorov and Oosterhof 2011; Walker and Vetter 2009). Finally, both single unit recording and fMRI studies have shown increased responses in face selective regions as a function of the distance from the average face (Leopold et al. 2006; Loffler et al. 2005).
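The face space idea can be made concrete with a toy example: faces as points in a high-dimensional space, with typicality inversely related to distance from the average face. The dimensionality and the random face vectors here are arbitrary; actual models (e.g., Oosterhof and Todorov 2008) use statistically derived shape and reflectance dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_faces, n_dims = 1000, 50  # toy face space; real models use shape/reflectance dimensions

faces = rng.normal(size=(n_faces, n_dims))  # each row is one face
average_face = faces.mean(axis=0)           # the "norm" of the face space

# Typicality is inversely related to distance from the average face.
distance = np.linalg.norm(faces - average_face, axis=1)
most_typical = faces[np.argmin(distance)]
least_typical = faces[np.argmax(distance)]
```

In this representation, distinctiveness effects fall out naturally: distinctive (atypical) faces sit in sparsely populated regions of the space, far from the norm.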

Recently, using a MDFS model, we studied whether the amygdala and the FFA respond to social properties of faces or more general properties related to the distance of the faces from the average face in the model (Said et al. 2010). In terms of perception, the distance from the average face could be described as indicating the typicality of the face, where more distant faces are less typical. We used a parametric face model (Oosterhof and Todorov 2008) to generate faces that varied on valence and faces that differed on valence to a much smaller extent. Importantly, both types of faces were matched on their distance from the average face. Behavioral studies also confirmed that the faces were matched on their perceived typicality.

Contrary to our initial expectation, we found that both the FFA and the amygdala responded to the distance from the average face rather than to changes in valence. With hindsight, coding faces according to their typicality is more parsimonious than coding faces according to their social value, because the former requires only statistical learning that extracts the average and variance of the faces encountered in one’s life. Further, in real life, social attributions from facial appearance and face typicality are highly correlated (Fig. 3). Finally, the typicality explanation resolved a previous puzzle in the literature on face evaluation. Whereas some studies have observed linear responses to face valence with stronger responses to negative faces (Engell et al. 2007; Todorov and Engell 2008; Winston et al. 2002), others have observed non-linear responses with stronger responses to both positive and negative faces than to faces at the middle of the continuum (Todorov et al. 2011b). It turned out that in studies that observed linear responses, face typicality was linearly related to face valence (with more negative faces perceived as less typical). In studies that observed non-linear responses, face typicality was non-linearly related to face valence (with more negative and more positive faces perceived as less typical). Both patterns of responses could be explained by the hypothesis that the amygdala responds more strongly to less typical faces.
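A small simulation illustrates how a single typicality-driven amygdala response could produce both patterns: a linear valence effect when typicality is linearly related to valence, and a U-shaped effect when the relation is non-linear. The response function (response proportional to atypicality) is an assumption made purely for illustration.

```python
import numpy as np

valence = np.linspace(-2, 2, 41)  # negative .. positive faces

# Assumed amygdala response: proportional to atypicality
# (distance from the norm face).
def response(atypicality):
    return atypicality

# Stimulus set A: negative faces are less typical (linear relation).
atyp_linear = -valence + 2
resp_a = response(atyp_linear)

# Stimulus set B: both extremes are less typical (non-linear relation).
atyp_ushape = np.abs(valence)
resp_b = response(atyp_ushape)

# Set A yields a linear valence effect (negative slope);
# set B a quadratic, U-shaped one (positive quadratic term).
lin_fit_a = np.polyfit(valence, resp_a, 1)
quad_fit_b = np.polyfit(valence, resp_b, 2)
```

The same response rule thus produces seemingly contradictory valence effects across studies, depending only on how typicality and valence covary in the stimulus set.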
Fig. 3

Scatter plots of judgments of face typicality (“How likely would you be to see a person who looks like this walking down the street?”) and judgments of face weirdness (a), attractiveness (b), trustworthiness (c), and dominance (d). Each point represents a face. Judgments are in standardized units. Typicality judgments were correlated with 13 out of 14 social judgments (Said et al. 2010). The only exception was judgments of dominance (panel D)

What is the functional value of coding face typicality? Atypical faces, by definition, are less likely to be encountered and as such are less predictable. That is, they are associated with higher uncertainty and may require deployment of additional attentional resources to resolve uncertainty. The amygdala, which receives input from IT cortex and projects back not only to IT but also to striate and extrastriate cortex (Amaral et al. 2003), is in the perfect position to modulate attention to infrequent, unexpected stimuli that have motivational significance. In other words, salient, unexpected stimuli can trigger amygdala responses, which in turn can bias attention to these stimuli (Vuilleumier 2005). There is a large body of animal work showing that the amygdala is critical for regulation of attention (Davis and Whalen 2001; Gallagher 2000; Holland and Gallagher 1999). Recent work also shows that unpredictable sound sequences evoke sustained activity in the amygdala in both mice and humans (Herry et al. 2007).

The typicality findings suggest that in the context of face perception, one of the functions of the amygdala is to regulate attention. This proposal is consistent with several other proposals about the role of the amygdala in maintaining vigilance (Whalen 2007) and detection of salient or motivationally relevant stimuli (Adolphs 2010; Sander et al. 2003). This hypothesis could account for stronger responses to bizarre faces (Rotshtein et al. 2001), novel faces (Kosaka et al. 2003; Schwartz et al. 2003), and emotional expressions (Whalen et al. 2009). It is important to note that both expressions and differences in identity could be represented within the same MDFS model (Calder and Young 2005). Finally, this hypothesis is also consistent with findings about the importance of individual differences in amygdala functioning (Aleman et al. 2008; Bishop 2008; Hariri 2009). According to the MDFS model, typicality of faces and emotional expressions can vary across individuals and such differences can result in different amygdala responses to the same face stimuli. This is an important research question to pursue in future studies.

In this framework, face information processed in face selective regions (e.g., the FFA) is further processed in the amygdala, where faces that are atypical or unexpected augment the amygdala’s responses, which in turn augment responses in face selective regions via feedback projections. Such general principles can also account for a variety of other non-face findings. These include stronger responses to both highly positive and negative visual stimuli (Sabatinelli et al. 2005), high intensity positive and negative odors (Anderson et al. 2003) and tastes (Small et al. 2003); loud sounds (Bach et al. 2008); and unpredictable sound sequences (Herry et al. 2007).

Conclusions

Although this article started with the proliferation of neural systems involved in social cognition, I focused on one specific region, the amygdala, and one category of stimuli, faces. A justification for this choice is that both the amygdala and perception of faces are at the intersection of cognition, affect, and motivation. I argued that faces robustly activate the amygdala and that one of its functions is to regulate attention to salient, atypical faces.

Undoubtedly, this proposal is an oversimplification. The amygdala consists of several nuclei with different structures, connectivity, and functions (Aggleton 2000; Amaral et al. 2003) that may play different roles in face processing. In fact, it is likely that the population of neurons that are face selective is different from the population of neurons that participate in the regulation of attention. Face selective neurons are usually located in the basolateral amygdala, whereas neurons involved in attention are located in the central nucleus. Unfortunately, current fMRI techniques do not have a sufficient spatial resolution to study subdivisions in the amygdala. It should be noted that although the activation peaks from our meta-analysis of face evaluation studies (Mende-Siedlecki et al. under review) and the face selective peaks were different (Table 2), they were in close proximity (about 3 mm distance).

At a larger scale, the network underlying face processing includes a number of regions in addition to IT cortex and the amygdala. In fact, studies have shown face selectivity in lateral orbitofrontal cortex (Ó Scalaidhe et al. 1997; Rolls et al. 2006; Tsao et al. 2008). In our meta-analysis of face evaluation studies, in addition to the amygdala, we observed consistent activations across studies in ventromedial prefrontal cortex, pregenual anterior cingulate cortex, and left caudate/nucleus accumbens. Understanding face perception will require understanding the cognitive functions of all these regions and how they interact in the context of perceiving (and evaluating) faces.

Footnotes

  1. For this analysis, 13 new subjects were added to the 24 subjects from Said et al. (2010). These 13 subjects participated in a different experimental task but completed the same face localizer task at the end of the scanning session.

Notes

Acknowledgments

I thank Andy Engell, Charlie Gross, and Winrich Freiwald for comments on previous versions of this manuscript. This work was supported by National Science Foundation grant 0823749 and the Russell Sage Foundation.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

  1. Adolphs, R. (2009). The social brain: Neural basis of social knowledge. Annual Review of Psychology, 60, 693–716.
  2. Adolphs, R. (2010). What does the amygdala contribute to social cognition? Annals of the New York Academy of Sciences, 1191, 42–61.
  3. Adolphs, R., & Tranel, D. (2000). Emotion, recognition, and the human amygdala. In J. P. Aggleton (Ed.), The amygdala: A functional analysis (pp. 587–630). New York: Oxford University Press.
  4. Aggleton, J. P. (Ed.). (2000). The amygdala: A functional analysis. New York: Oxford University Press.
  5. Aggleton, J. P., & Shaw, C. (1996). Amnesia and recognition memory: A re-analysis of psychometric data. Neuropsychologia, 34, 51–62.
  6. Aleman, A., Swart, M., & van Rijn, S. (2008). Brain imaging, genetics and emotion. Biological Psychology, 79, 58–69.
  7. Allison, T., McCarthy, G., Nobre, A., Puce, A., & Belger, A. (1994). Human extrastriate visual cortex and the perception of faces, words, numbers, and colors. Cerebral Cortex, 4, 544–554.
  8. Allison, T., Puce, A., & McCarthy, G. (2000). Social perception from visual cues: Role of the STS region. Trends in Cognitive Sciences, 4, 267–278.
  9. Amaral, D. G., Behniea, H., & Kelly, J. L. (2003). Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience, 118, 1099–1120.
  10. Amodio, D. M., & Frith, C. D. (2006). Meeting of minds: The medial frontal cortex and social cognition. Nature Reviews Neuroscience, 7, 268–277.
  11. Anderson, A. K., Christoff, K., Stappen, I., Panitz, D., Ghahremani, D. G., Glover, G., et al. (2003). Dissociated neural representations of intensity and valence in human olfaction. Nature Neuroscience, 6, 196–202.
  12. Bach, D. R., Schächinger, H., Neuhoff, J. G., Esposito, F., Di Salle, F., et al. (2008). Rising sound intensity: An intrinsic warning cue activating the amygdala. Cerebral Cortex, 18, 145–150.
  13. Bar, M., Neta, M., & Linz, H. (2006). Very first impressions. Emotion, 6, 269–278.
  14. Berman, M. G., Park, J., Gonzalez, R., Polk, T. A., Gehrke, A., Knaffla, S., et al. (2010). Evaluating functional localizers: The case of the FFA. NeuroImage, 50, 56–71.
  15. Bishop, S. J. (2008). Neural mechanisms underlying selective attention to threat. Annals of the New York Academy of Sciences, 1129, 141–152.
  16. Bowyer, K. W., Chang, K., & Flynn, P. (2006). A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition. Computer Vision and Image Understanding, 101, 1–15.
  17. Breiter, H. C., Etcoff, N. L., Whalen, P. J., Kennedy, W. A., Rauch, S. L., Buckner, R. L., et al. (1996). Response and habituation of the human amygdala during visual processing of facial expression. Neuron, 17, 875–887.
  18. Brothers, L. (1990). The social brain: A project for integrating primate behavior and neurophysiology in a new domain. Concepts in Neuroscience, 1, 27–51.
  19. Bruce, C., Desimone, R., & Gross, C. G. (1981). Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology, 46, 369–384.
  20. Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327.
  21. Bzdok, D., Langner, R., Caspers, S., Furth, F., Habel, U., Zilles, K., et al. (2011). ALE meta-analysis on facial judgments of trustworthiness and attractiveness. Brain Structure and Function, 215, 209–223.
  22. Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6, 641–651.
  23. Chen, A. C., Kao, K. L. C., & Tyler, C. W. (2007). Face configuration processing in the human brain: The role of symmetry. Cerebral Cortex, 17, 1423–1432.
  24. Cloutier, J., Mason, M. F., & Macrae, C. N. (2005). The perceptual determinants of person construal: Reopening the social-cognitive toolbox. Journal of Personality and Social Psychology, 88, 885–894.
  25. Costafreda, S. G., Brammer, M. J., David, A. S., & Fu, C. H. Y. (2008). Predictors of amygdala activation during the processing of emotional stimuli: A meta-analysis of 385 PET and fMRI studies. Brain Research Reviews, 58, 57–70.
  26. Davis, M., & Whalen, P. J. (2001). The amygdala: Vigilance and emotion. Molecular Psychiatry, 6, 13–34.
  27. Desimone, R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience, 3, 1–8.
  28. Downing, P. E., Chan, A. W. Y., Peelen, M. V., Dodds, C. M., & Kanwisher, N. (2006). Domain specificity in visual cortex. Cerebral Cortex, 16(10), 1453–1461.
  29. Eger, E., Schweinberger, S. R., Dolan, R. J., & Henson, R. N. (2005). Familiarity enhances invariance of face representations in human ventral visual cortex: fMRI evidence. NeuroImage, 26(4), 1128–1139.
  30. Eger, E., Schyns, P. G., & Kleinschmidt, A. (2004). Scale invariant adaptation in fusiform face-responsive regions. NeuroImage, 22(1), 232–242.
  31. Engell, A. D., Haxby, J. V., & Todorov, A. (2007). Implicit trustworthiness decisions: Automatic coding of face properties in human amygdala. Journal of Cognitive Neuroscience, 19, 1508–1519.
  32. Esteves, F., & Öhman, A. (1993). Masking the face: Recognition of emotional facial expressions as a function of the parameters of backward masking. Scandinavian Journal of Psychology, 34, 1–18.
  33. Fitzgerald, D. A., Angstadt, M., Jelsone, L. M., Nathan, P. J., & Phan, K. L. (2006). Beyond threat: Amygdala reactivity across multiple expressions of facial affect. NeuroImage, 30, 1441–1448.
  34. Fried, I., MacDonald, K. A., & Wilson, C. (1997). Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron, 18, 753–765.
  35. Friston, K. J., Rotshtein, P., Geng, J. J., Sterzer, P., & Henson, R. N. (2006). A critique of functional localisers. NeuroImage, 30, 1077–1087.
  36. Gallagher, M. (2000). The amygdala and associative learning. In J. P. Aggleton (Ed.), The amygdala: A functional analysis (pp. 311–330). New York: Oxford University Press.
  37. Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of the basis of social cognition. Trends in Cognitive Sciences, 8, 396–403.
  38. Ganel, T., Valyear, K. F., Goshen-Gottstein, Y., & Goodale, M. A. (2005). The involvement of the fusiform face area in processing facial expression. Neuropsychologia, 43(11), 1645–1654.
  39. Gauthier, I., Tarr, M. J., Moylan, J., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). The fusiform face area is part of a network that processes faces at the individual level. Journal of Cognitive Neuroscience, 12(3), 495–504.
  40. Goossens, L., Kukolja, J., Onur, O. A., Fink, G. R., Maier, W., et al. (2009). Selective processing of social stimuli in the superficial amygdala. Human Brain Mapping, 30, 3332–3338.
  41. Gothard, K. M., Battaglia, F. P., Erickson, C. A., Spitler, K. M., & Amaral, D. G. (2007). Neural responses to facial expression and face identity in the monkey amygdala. Journal of Neurophysiology, 97, 1671–1683.
  42. Grill-Spector, K., & Kanwisher, N. (2005). Visual recognition: As soon as you know it is there, you know what it is. Psychological Science, 16, 152–160.
  43. Gross, C. G. (1994). How inferior temporal cortex became a visual area. Cerebral Cortex, 5, 455–469.
  44. Hariri, A. R. (2009). The neurobiology of individual differences in complex behavioral traits. Annual Review of Neuroscience, 32, 225–247.
  45. Hariri, A. R., Tessitore, A., Mattay, V. S., Fera, F., & Weinberger, D. R. (2002). The amygdala response to emotional stimuli: A comparison of faces and scenes. NeuroImage, 17, 317–323.
  46. Haxby, J. V., Grady, C. L., Horwitz, B., Salerno, J., Ungerleider, L. G., Mishkin, M., et al. (1993). Dissociation of object and spatial visual processing pathways in human extra-striate cortex. In B. Gulyas, D. Ottoson, & P. E. Roland (Eds.), Functional organisation of the human visual cortex (pp. 329–340). Oxford: Pergamon.
  47. Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233.
  48. Henson, R. N., & Mouchlianitis, E. (2007). Effect of spatial attention on stimulus-specific haemodynamic repetition effects. NeuroImage, 35(3), 1317–1329.
  49. Herry, C., Bach, D. R., Esposito, F., Di Salle, F., Perrig, W. J., et al. (2007). Processing of temporal unpredictability in human and animal amygdala. The Journal of Neuroscience, 27, 5958–5966.
  50. Hoffman, K. L., Gothard, K. M., Schmid, M. C., & Logothetis, N. K. (2007). Facial-expression and gaze-selective responses in the monkey amygdala. Current Biology, 17, 766–772.
  51. Holland, P. C., & Gallagher, M. (1999). Amygdala circuitry in attentional and representational processes. Trends in Cognitive Sciences, 3, 65–73.
  52. Jiang, F., Dricot, L., Blanz, V., Goebel, R., & Rossion, B. (2009). Neural correlates of shape and surface reflectance information in individual faces. Neuroscience, 163, 1078–1091.
  53. Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
  54. Kesler-West, M. L., Andersen, A. H., Smith, C. D., Avison, M. J., Davis, C. E., Kryscio, R. J., et al. (2001). Neural substrates of facial emotion processing using fMRI. Cognitive Brain Research, 11(2), 213–226.
  55. Kosaka, H., Omori, M., Iidaka, T., Murata, T., Shimoyama, T., Okada, T., et al. (2003). Neural substrates participating in acquisition of facial familiarity: An fMRI study. NeuroImage, 20, 1734–1742.
  56. Kreiman, G., Koch, C., & Fried, I. (2000). Category-specific visual responses of single neurons in the human medial temporal lobe. Nature Neuroscience, 3, 946–953.
  57. Kuraoka, K., & Nakamura, K. (2007). Responses of single neurons in monkey amygdala to facial and vocal emotions. Journal of Neurophysiology, 97, 1379–1387.
  58. LaBar, K. S., Gitelman, D. R., Mesulam, M.-M., & Parrish, T. B. (2001). Impact of signal-to-noise on functional MRI of the human amygdala. Neuroreport, 12, 3461–3464.
  59. Leonard, C. M., Rolls, E. T., Wilson, F. A. W., & Baylis, G. C. (1985). Neurons in the amygdala of the monkey with responses selective for faces. Behavioural Brain Research, 15, 159–176.
  60. Leopold, D. A., Bondar, I. V., & Giese, M. A. (2006). Norm-based face encoding by single neurons in the monkey inferotemporal cortex. Nature, 442(7102), 572–575.
  61. Lieberman, M. D. (2010). Social cognitive neuroscience. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., pp. 143–193). New York, NY: McGraw-Hill.
  62. Loffler, G., Yourganov, G., Wilkinson, F., & Wilson, H. R. (2005). fMRI evidence for the neural representation of faces. Nature Neuroscience, 8(10), 1386–1390.
  63. Logothetis, N. K., Guggenberger, H., Peled, S., & Pauls, J. (1999). Functional imaging of the monkey brain. Nature Neuroscience, 2, 555–562.
  64. Martin, D., & Macrae, C. N. (2007). A boy primed Sue: Feature-based processing and person construal. European Journal of Social Psychology, 37, 793–805.
  65. Maurer, D., O’Craven, K. M., Le Grand, R., Mondloch, C. J., Springer, M. V., Lewis, T. L., et al. (2007). Neural correlates of processing facial identity based on features versus their spacing. Neuropsychologia, 45(7), 1438–1451.
  66. McCarthy, G., Puce, A., Gore, J. C., & Allison, T. (1997). Face-specific processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 9, 605–610.
  67. Mende-Siedlecki, P., Said, C. P., & Todorov, A. (under review). The social evaluation of faces: A meta-analysis of functional neuroimaging studies.
  68. Morris, J. S., Frith, C. D., Perrett, D. I., Rowland, D., Young, A. W., Calder, A. J., et al. (1996). A differential neural response in the human amygdala to fearful and happy expressions. Nature, 383, 812–815.
  69. Nakamura, K., Mikami, A., & Kubota, K. (1992). Activity of single neurons in the monkey amygdala during performance of a visual discrimination task. Journal of Neurophysiology, 67, 1447–1463.
  70. Ó Scalaidhe, S. P., Wilson, F. A., & Goldman-Rakic, P. S. (1997). Areal segregation of face-processing neurons in prefrontal cortex. Science, 278, 1135–1138.
  71. Oosterhof, N. N., & Todorov, A. (2008). The functional basis of face evaluation. Proceedings of the National Academy of Sciences of the USA, 105, 11087–11092.
  72. Perrett, D. I., Rolls, E. T., & Caan, W. (1982). Visual neurons responsive to faces in the monkey temporal cortex. Experimental Brain Research, 47, 329–342.
  73. Pessoa, L., Japee, S., Sturman, D., & Ungerleider, L. G. (2006). Target visibility and visual awareness modulate amygdala responses to fearful faces. Cerebral Cortex, 16, 366–375.
  74. Pitcher, D., Walsh, V., Yovel, G., & Duchaine, B. (2007). TMS evidence for the involvement of the right occipital face area in early face processing. Current Biology, 17, 1568–1573.
  75. Poldrack, R. A., Halchenko, Y., & Hanson, S. J. (2009). Decoding the large-scale structure of brain function by classifying mental states across individuals. Psychological Science, 20, 1364–1372.
  76. Pourtois, G., Schwartz, S., Seghier, M. L., Lazeyras, F., & Vuilleumier, P. (2005). Portraits or people? Distinct representations of face identity in the human visual cortex. Journal of Cognitive Neuroscience, 17(7), 1043–1057.
  77. Puce, A., Allison, T., Asgari, M., Gore, J. C., & McCarthy, G. (1996). Differential sensitivity of human visual cortex to faces, letterstrings, and textures: A functional magnetic resonance imaging study. Journal of Neuroscience, 16(16), 5205–5215.
  78. Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C., & Fried, I. (2005). Invariant visual representation by single neurons in the human brain. Nature, 435, 1102–1107.
  79. Rhodes, G., & Jeffery, L. (2006). Adaptive norm-based coding of facial identity. Vision Research, 46, 2977–2987.
  80. Rolls, E. T. (1984). Neurons in the cortex of the temporal lobe and in the amygdala of the monkey with responses selective for faces. Human Neurobiology, 3, 209–222.
  81. Rolls, E. (2000). Neurophysiology and function of the primate amygdala, and neural basis of emotion. In J. P. Aggleton (Ed.), The amygdala: A functional analysis (pp. 447–478). Oxford: Oxford University Press.
  82. Rolls, E. T., Critchley, H. D., Browning, A. S., & Inoue, K. (2006). Face-selective and auditory neurons in the primate orbitofrontal cortex. Experimental Brain Research, 170, 74–87.
  83. Rotshtein, P., Malach, R., Hadar, U., Graif, M., & Hendler, T. (2001). Feeling or features: Different sensitivity to emotion in high-order visual cortex and amygdala. Neuron, 32, 747–757.
  84. Rule, N. O., & Ambady, N. (2008). Brief exposures: Male sexual orientation is accurately perceived at 50 ms. Journal of Experimental Social Psychology, 44, 1100–1105.
  85. Sabatinelli, D., Bradley, M. M., Fitzsimmons, J. R., & Lang, P. J. (2005). Parallel amygdala and inferotemporal activation reflect emotional intensity and fear relevance. NeuroImage, 24, 1265–1270.
  86. Said, C. P., Baron, S., & Todorov, A. (2009). Nonlinear amygdala response to face trustworthiness: Contributions of high and low spatial frequency information. Journal of Cognitive Neuroscience, 21, 519–528.
  87. Said, C. P., Dotsch, R., & Todorov, A. (2010). The amygdala and FFA track both social and non-social face dimensions. Neuropsychologia, 48, 3596–3605.
  88. Said, C. P., Haxby, J. V., & Todorov, A. (2011). Brain systems for the assessment of the affective value of faces. Philosophical Transactions of the Royal Society B, 366, 1660–1670.
  89. Sander, D., Grafman, J., & Zalla, T. (2003). The human amygdala: An evolved system for relevance detection. Reviews in the Neurosciences, 14(4), 303–316.
  90. Sanghera, M. F., Rolls, E. T., & Roper-Hall, A. (1979). Visual responses of neurons in the dorsolateral amygdala of the alert monkey. Experimental Neurology, 63, 610–626.
  91. Saxe, R., Brett, M., & Kanwisher, N. (2006). Divide and conquer: A defense of functional localizers. NeuroImage, 30, 1088–1096.
  92. Saxe, R., & Wexler, A. (2005). Making sense of another mind: The role of the right temporo-parietal junction. Neuropsychologia, 43, 1391–1399.
  93. Schwartz, C. E., Wright, C. I., Shin, L. M., Kagan, J., Whalen, P. J., McMullin, K. G., et al. (2003). Differential amygdalar response to novel versus newly familiar neutral faces: A functional MRI probe developed for studying inhibited temperament. Biological Psychiatry, 53, 854–862.
  94. Sergent, J., Ohta, S., & MacDonald, B. (1992). Functional neuroanatomy of face and object processing: A positron emission tomography study. Brain, 115, 15–36.
  95. Sergerie, K., Chochol, C., & Armony, J. L. (2008). The role of the amygdala in emotional processing: A quantitative meta-analysis of functional neuroimaging studies. Neuroscience and Biobehavioral Reviews, 32, 811–830.
  96. Sinha, P., Balas, B., Ostrovsky, Y., & Russell, R. (2006). Face recognition by humans: Nineteen results all computer vision researchers should know about. Proceedings of the IEEE, 94, 1948–1962.
  97. Small, D. M., Gregory, M. D., Mak, Y. E., Gitelman, D., Mesulam, M. M., & Parrish, T. (2003). Dissociation of neural representation of intensity and affective valuation in human gustation. Neuron, 39, 701–711.
  98. Todorov, A., & Engell, A. (2008). The role of the amygdala in implicit evaluation of emotionally neutral faces. Social Cognitive and Affective Neuroscience, 3, 303–312.
  99. Todorov, A., Fiske, S. T., & Prentice, D. (Eds.). (2011a). Social neuroscience: Toward understanding the underpinnings of the social mind. Oxford: Oxford University Press.
  100. Todorov, A., & Oosterhof, N. N. (2011). Modeling social perception of faces. IEEE Signal Processing Magazine, 28, 117–122.
  101. Todorov, A., Pakrashi, M., & Oosterhof, N. N. (2009). Evaluating faces on trustworthiness after minimal time exposure. Social Cognition, 27, 813–833.
  102. Todorov, A., Said, C. P., Engell, A. D., & Oosterhof, N. N. (2008). Understanding evaluation of faces on social dimensions. Trends in Cognitive Sciences, 12, 455–460.
  103. Todorov, A., Said, C. P., Oosterhof, N. N., & Engell, A. D. (2011b). Task-invariant brain responses to the social value of faces. Journal of Cognitive Neuroscience. Advance online publication: January 21, 2011. doi: 10.1162/jocn.2011.21616.
  104. Tong, F., Nakayama, K., Moscovitch, M., Weinrib, O., & Kanwisher, N. (2000). Response properties of the human fusiform face area. Cognitive Neuropsychology, 17, 257–279.
  105. Tsao, D. Y., & Freiwald, W. A. (2006). What’s so special about the average face? Trends in Cognitive Sciences, 10(9), 391–393.
  106. Tsao, D. Y., Freiwald, W. A., Tootell, R. B. H., & Livingstone, M. S. (2006). A cortical region consisting entirely of face-selective cells. Science, 311, 670–674.
  107. Tsao, D. Y., Schweers, N., Moeller, S., & Freiwald, W. A. (2008). Patches of face-selective cortex in the macaque frontal lobe. Nature Neuroscience, 11, 877–879.
  108. Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. Quarterly Journal of Experimental Psychology, 43(2), 161–204.
  109. Viskontas, I. V., Quiroga, R. Q., & Fried, I. (2009). Human medial temporal lobe neurons respond preferentially to personally relevant images. Proceedings of the National Academy of Sciences of the USA, 106, 21329–21334.
  110. Vuilleumier, P. (2005). How brains beware: Neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9, 585–594.
  111. Wager, T. D., Barrett, L. F., Bliss-Moreau, E., Lindquist, K., Duncan, S., Kober, H., et al. (2008). The neuroimaging of emotion. In M. Lewis, J. M. Haviland-Jones, & L. F. Barrett (Eds.), Handbook of emotion (3rd ed., pp. 249–271). New York: Guilford.
  112. Walker, M., & Vetter, T. (2009). Portraits made to measure: Manipulating social judgments about individuals with a statistical face model. Journal of Vision, 9(11), 12, 1–13.
  113. Whalen, P. J. (2007). The uncertainty of it all. Trends in Cognitive Sciences, 11(12), 499–500.
  114. Whalen, P. J., Davis, F. C., Oler, J. A., Kim, H., Kim, M. J., & Neta, M. (2009). Human amygdala responses to facial expressions of emotion. In P. J. Whalen & E. A. Phelps (Eds.), The human amygdala (pp. 265–288). New York: Guilford Press.
  115. Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., & Jenike, M. A. (1998). Masked presentations of emotional facial expression modulate amygdala activity without explicit knowledge. Journal of Neuroscience, 18, 411–418.
  116. Wilson, F. A. W., & Rolls, E. T. (1993). The effects of stimulus novelty and familiarity on neuronal activity in the amygdala of monkeys performing recognition memory tasks. Experimental Brain Research, 93, 367–382.
  117. Winston, J., O’Doherty, J., & Dolan, R. J. (2003). Common and distinct neural responses during direct and incidental processing of multiple facial emotions. NeuroImage, 20, 84–97.
  118. Winston, J., O’Doherty, J., Kilner, J. M., Perrett, D. I., & Dolan, R. J. (2007). Brain systems for assessing facial attractiveness. Neuropsychologia, 45, 195–206.
  119. Winston, J., Strange, B., O’Doherty, J., & Dolan, R. (2002). Automatic and intentional brain responses during evaluation of trustworthiness of faces. Nature Neuroscience, 5, 277–283.
  120. Wright, P., & Liu, Y. (2006). Neutral faces activate the amygdala during identity matching. NeuroImage, 29, 628–636.
  121. Yang, T. T., Menon, V., Eliez, S., Blasey, C., White, C. D., et al. (2002). Amygdalar activation associated with positive and negative facial expressions. NeuroReport, 13, 1737–1741.
  122. Yip, A. W., & Sinha, P. (2002). Contribution of color to face recognition. Perception, 31(8), 995–1003.
  123. Zald, D. H. (2003). The human amygdala and the emotional evaluation of sensory stimuli. Brain Research Reviews, 41, 88–123.
  124. Zhang, H., Liu, J., Huber, D. E., Rieth, C. A., Tian, J., & Lee, K. (2008). Detecting faces in pure noise images: A functional MRI study on top-down perception. Neuroreport, 19(2), 229–233.

Copyright information

© The Author(s) 2011

Authors and Affiliations

  1. Department of Psychology, Princeton University, Princeton, USA
  2. Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
