Psychonomic Bulletin & Review, Volume 25, Issue 3, pp 1035–1042

Intensity dependence in high-level facial expression adaptation aftereffect

Brief Report

Abstract

Perception of a facial expression can be altered or biased by prolonged viewing of other facial expressions, a phenomenon known as the facial expression adaptation aftereffect (FEAA). Recent studies using antiexpressions have demonstrated a monotonic relation between the magnitude of the FEAA and adaptor extremity, suggesting that facial expressions are opponent coded and represented continuously from one expression to its antiexpression. However, it is unclear whether the opponent-coding scheme can account for the FEAA between two facial expressions. In the current study, we demonstrated that the magnitude of the FEAA between two facial expressions increased monotonically as a function of the intensity of the adapting facial expressions, consistent with predictions based on the opponent-coding model. Further, the monotonic increase in the FEAA occurred even when the intensity of an adapting face was too weak for its expression to be recognized. Together, these results suggest that multiple facial expressions are encoded and represented by the balanced activity of neural populations tuned to different facial expressions.

Keywords

Facial expressions · Adaptation · Intensity dependence · Opponent coding

Prolonged viewing of a visual stimulus induces a reduction in sensitivity to that stimulus, which results in a change or bias in the perceptual experience of a subsequently presented stimulus. For example, adaptation to a reddish-appearing light makes a subsequently viewed neutral (achromatic) light appear greenish. In addition to low-level visual features such as color, orientation, and direction of motion, visual adaptation aftereffects can be observed with high-level visual stimuli, such as faces. Indeed, adaptation aftereffects have been reported for various facial properties, including identity (Leopold, O’Toole, Vetter, & Blanz, 2001), gender (Webster, Kaping, Mizokami, & Duhamel, 2004), ethnicity (Webster et al., 2004), and facial expressions (Benton et al., 2007; Butler, Oruc, Fox, & Barton, 2008; C. J. Fox & Barton, 2007; Pell & Richards, 2013; Skinner & Benton, 2010; Webster et al., 2004). The face adaptation aftereffect (FAA) is robustly observed even when the adapting face and the test face differ in size (Yamashita, Hardy, De Valois, & Webster, 2005) or are presented at different locations (Kovács, Zimmer, Harza, Antal, & Vidnyánszky, 2005). The FAA thus seems to occur at a later stage of visual processing and is referred to as high-level adaptation (Webster & MacLeod, 2011).

The FAA is particularly interesting because it can provide insight into how faces are encoded. Using antifaces in a visual adaptation paradigm, it has been demonstrated that the identity of a face is encoded by a norm-based, opponent-coding mechanism (Jeffery et al., 2010; Jeffery et al., 2011; McKone, Jeffery, Boeing, Clifford, & Rhodes, 2014; Rhodes & Jeffery, 2006; Susilo, McKone, & Edwards, 2010). The two-pool norm-based opponent-coding model posits that the responses of two pools of neurons tuned to opposite extremes of a given face dimension adaptively determine the norm; the FAA occurs because the position of the norm shifts after adaptation to a particular face. On the other hand, a recent computational simulation revealed that the FAA from antifaces can also be qualitatively and quantitatively predicted by an exemplar-based model (Ross, Deroche, & Palmeri, 2014). The exemplar-based model holds that faces are encoded by their locations relative to exemplars of previously experienced faces, rather than relative to the norm (Lewis, 2004; Valentine, 1991). Unlike the two-pool opponent-coding model, which predicts that the FAA increases monotonically as the intensity (extremity) of an adapting face increases (McKone et al., 2014; Robbins, McKone, & Edwards, 2007), the exemplar-based model predicts a nonmonotonic change in the magnitude of the FAA as a function of adaptor extremity.

A monotonic relation between adaptor extremity and aftereffect magnitude has also been demonstrated for the facial expression adaptation aftereffect (FEAA) with antiexpressions, indicating that facial expressions may likewise be encoded by a norm-based opponent-coding mechanism (Burton, Jeffery, Calder, & Rhodes, 2015; Burton, Jeffery, Skinner, Benton, & Rhodes, 2013; Rhodes, Pond, Jeffery, Benton, Skinner, & Burton, 2017; Skinner & Benton, 2010, 2012a). However, the FEAA with an antiexpression can support opponent coding only for each individual facial expression, represented along a single, selected dimension within a facial expression space (see Fig. 1a). An antiexpression is artificially created by distorting facial features away from a norm face in the direction opposite to a corresponding facial expression (Sato & Yoshikawa, 2009), and thus does not exist in the real world. In the real world, the FEAA would occur between two expressions. However, it is unknown whether the opponent-coding mechanism can account for the FEAA between two expressions.
Fig. 1

Perceptual space of facial expression. The locations of happy and angry faces in the space are arbitrary because the dimensions of the space are not specified. a Direction of a shift in the norm caused by adaptation to a happy face based on opponent coding model. b Direct transition between happy and angry faces

The FEAA has been observed between two facial expressions (Webster et al., 2004; Yang, Hong, & Blake, 2010). For example, adaptation to one expression (e.g., happy) of a pair (e.g., a happy–angry pair) shifts the perception of an average of the two expressions toward the nonadapted expression (e.g., angry). Previous research with two facial expressions, however, has not demonstrated a monotonic increase in the magnitude of the FEAA with increasing intensity of the adapting face. Adaptation to a facial expression selectively affects sensitivity to the adapted expression but has only a marginal influence, if any, on the processing of other expressions (Hsu & Young, 2004; Juricevic & Webster, 2012). This specificity of the FEAA suggests that different facial expressions may be processed by distinct neural populations (Hsu & Young, 2004). Thus, we hypothesize that the FEAA between two facial expressions can be accounted for by adaptive changes in the responses of two pools of neural populations, each tuned to one of the two expressions. This is an extension of the opponent-coding model with two important changes. First, the two pools of neural populations are tuned to two different facial expressions rather than to a facial expression and its antiexpression. Second, we did not specifically assume a norm facial expression, because the average of two facial expressions is not necessarily the same as the norm (see the example in Fig. 1b). If the FEAA between two expressions is determined by balanced neural activity between two distinct neural populations, its magnitude should increase monotonically as a function of the intensity of the adapting facial expression, as the opponent-coding model predicts.
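To make this prediction concrete, the following minimal simulation sketches the two-pool account (an illustration of the logic, not a model fitted to our data; the sigmoid slope and the gain-loss constant are arbitrary assumptions): each pool responds sigmoidally along the happy–angry morph continuum, adaptation multiplicatively reduces the adapted pool's gain in proportion to adaptor intensity, and the PSE is the morph level at which the two pools' responses balance.

```python
import numpy as np

def pool_response(x, gain=1.0):
    # Sigmoidal response of one pool; x runs from -1 (100% angry) to +1 (100% happy).
    return gain / (1.0 + np.exp(-4.0 * x))

def pse_shift(adaptor_intensity, k=0.4):
    # Adapting to a happy face reduces the happy pool's gain in proportion to
    # adaptor intensity (k is an arbitrary constant); the PSE is the morph level
    # at which the two pools' responses balance.
    happy_gain = 1.0 - k * adaptor_intensity
    morphs = np.linspace(-1.0, 1.0, 2001)
    happy = pool_response(morphs, gain=happy_gain)
    angry = pool_response(-morphs)  # mirror-tuned pool
    return morphs[np.argmin(np.abs(happy - angry))]

for intensity in (0.1, 0.3, 0.5, 0.7, 0.9):
    # The balance point moves steadily toward the happy pole, so a 50:50 morph
    # is increasingly judged as angry after stronger happy adaptation.
    print(f"{intensity:.0%} happy adaptor -> PSE shift {pse_shift(intensity):+.3f}")
```

Because the gain loss grows with adaptor intensity, the balance point, and hence the aftereffect, shifts monotonically; this is the signature that the experiments below test for.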

In two experiments, we assessed the intensity dependence of the FEAA when the adapting and test faces were presented at the same location (Experiment 1) and when the adapting and test faces were presented at different locations (Experiment 2). One of the distinctive characteristics of the high-level adaptation aftereffect, including the FEAA, is that it does not require retinotopic locations of the adapting and test stimuli to be the same (Kovács et al., 2005; Yamashita et al., 2005). Thus, if the intensity dependence truly characterizes the FEAA, it should be observed even when the adapting and test faces appear at different locations (i.e., Experiment 2).

Antiexpressions do not correspond to obvious emotional labels (Skinner & Benton, 2010, 2012b). Thus, the FEAA from antiexpressions suggests that the FEAA may arise from purely perceptual processes and may not require conscious recognition of expressions. Consistent with this idea, the FEAA is observed even when adapting facial expressions are rendered invisible by continuous flash suppression (Adams, Gray, Garner, & Graf, 2010; Yang et al., 2010). Thus, subtle changes in facial features that are too small to be recognized as a specific facial expression may be potent enough to induce the FEAA. To test this hypothesis, we assessed the minimum intensity of facial expression required for the recognition of happy and angry faces and compared it with the lowest intensity that could elicit the FEAA.

Method

Participants

Undergraduate students participated in the study in exchange for course credit. A total of 83 students participated in the two experiments (Experiment 1: same-location adaptation; Experiment 2: different-location adaptation). Each experiment included two conditions (angry adaptation and happy adaptation), each with six blocks (five intensity blocks and one baseline, no-adaptation block). Each participant completed either the angry or the happy adaptation condition. Participants who did not complete all blocks were excluded from all analyses (15 participants from Experiment 1 and eight from Experiment 2).1 As a result, data from 18 participants (14 females) in the happy adaptation condition and 18 participants (15 females) in the angry adaptation condition were analyzed in Experiment 1. In Experiment 2, data from 12 participants (five females) in the happy adaptation condition and 12 participants (eight females) in the angry adaptation condition were analyzed. All participants signed an informed consent form approved by the Florida Atlantic University Institutional Review Board before participating.

Apparatus and stimuli

Stimulus presentation on a Sony CPD-G520 21-in. CRT monitor (100-Hz refresh rate) and the collection of behavioral responses were controlled by the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Stimuli were presented to participants positioned 90 cm from the CRT monitor, whose luminance had been linearized from black (0.5 cd/m²) to white (70 cd/m²).

Two female and two male faces displaying angry, happy, and neutral expressions were chosen from the Karolinska Directed Emotional Faces set (KDEF; Lundqvist, Flykt, & Öhman, 1998). Each facial picture was resized to 1.45° × 2° of visual angle. All facial pictures were adjusted to have identical root-mean-square (RMS) contrast, so that the physical contrast was the same for all adapting faces. Happy adapting faces were created by morphing happy and neutral faces while systematically varying the proportion of the emotional face (10% happy and 90% neutral, 30% happy and 70% neutral, 50% happy and 50% neutral, 70% happy and 30% neutral, 90% happy and 10% neutral), using Abrosoft FantaMorph 3 (www.fantamorph.com). Angry adapting faces were created by morphing angry and neutral faces in the same manner. Test faces were created by morphing happy and angry faces in 10% steps (10% happy and 90% angry, 20% happy and 80% angry, etc.). Examples of happy and angry adapting faces and of test faces are presented in Fig. 2a and b, respectively. To assess the minimum intensity at which a facial expression could be recognized as happy or angry, each emotional face was morphed with a neutral face in 1% increments for each of the four identities.
Fig. 2

a Adapting faces created by morphing a neutral face and a 100% happy (top)/angry (bottom) face with systematically varied proportions (10%, 30%, 50%, 70%, and 90% emotional face). One of the four identities used in all experiments is shown. b Test faces presented after 5 s of adaptation. Test faces were created by morphing a happy face and an angry face of the same model with systematically varied proportions
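The RMS-contrast equalization described above can be sketched as follows (a minimal illustration assuming grayscale images with pixel values in [0, 1]; the target contrast and mean-luminance values are placeholders, not the values used in the study):

```python
import numpy as np

def set_rms_contrast(img, target_rms=0.15, mean_lum=0.5):
    # Recenter a grayscale image (pixel values in [0, 1]) and rescale its
    # deviations so the root-mean-square contrast equals target_rms, then
    # restore a common mean luminance.
    img = np.asarray(img, dtype=float)
    centered = img - img.mean()
    current_rms = np.sqrt(np.mean(centered ** 2))
    equalized = centered * (target_rms / current_rms) + mean_lum
    return np.clip(equalized, 0.0, 1.0)  # keep pixel values displayable
```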

Tasks and procedure

Adaptation aftereffects to happy and angry facial expressions were assessed using a two-alternative forced-choice (2-AFC) task. Each adaptation condition (happy or angry) was composed of six adaptation blocks: a blank-adaptation block (no adapting face, serving as a baseline) and 10%, 30%, 50%, 70%, and 90% facial expression (either happy or angry) adaptation blocks. Each block began with a 2-minute initial adaptation period, during which the four adapting faces were each presented twice (15 s per presentation) in random order. After the initial adaptation period, the main task began. The main task of each block was composed of 108 trials (4 identities × 3 repetitions × 9 test faces); thus, each of the nine test faces was presented 12 times. Each trial began when the participant pressed the spacebar, followed by a 5-s adaptation to an adapting face (happy or angry) at a specific intensity level. Then a mask (200 ms) and a test face (200 ms) were presented in succession. In the blank-adaptation block, the test face was presented after a 5-s blank screen. Participants indicated whether the test face was a happy or an angry face after the offset of the test face. Once participants responded, a blank screen was presented until they pressed the spacebar to proceed. A schematic diagram of a trial is presented in Fig. 3 (left: same location; right: different location). An experimental session always began with the blank-adaptation block, while the order of the five intensity blocks was randomized.
Fig. 3

a Schematic illustration of a same-location adaptation trial (Experiment 1). b Schematic illustration of a different-location adaptation trial (Experiment 2)
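The composition and randomization of each block's 108 test trials can be sketched as follows (a minimal illustration; the identity labels are hypothetical placeholders):

```python
import itertools
import random

identities = ["F01", "F02", "M01", "M02"]  # the four KDEF models (labels hypothetical)
test_levels = range(10, 100, 10)           # % happy in the happy-angry test morphs
repetitions = 3

# 4 identities x 9 test faces x 3 repetitions = 108 trials per block,
# so each of the nine test faces appears 12 times.
trials = [{"identity": iden, "pct_happy": lvl}
          for iden, lvl, _ in itertools.product(identities, test_levels,
                                                range(repetitions))]
random.shuffle(trials)
assert len(trials) == 108
```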

After the completion of all six blocks, the minimum intensity of facial expression required for the recognition of an emotion was examined. On each trial, a morphed face (happy and neutral, or angry and neutral) was presented. Participants adjusted the proportion of happy (angry) to neutral until the face was just recognizable as happy (angry). Two buttons were used to increase and decrease the proportion of the emotional face, and participants ended a trial by pressing another button when they reached a morphed face that they could barely recognize as happy or angry. The initial expression intensity of each face was randomly determined. Each of the four identities was presented five times in random order. Participants completed either the angry or the happy face version, whichever matched the adaptation condition they had completed.
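In essence, the adjustment procedure amounts to the loop sketched below (a console stand-in for the button-press task; the key names are assumptions, and the 1% step matches the morph increments described above):

```python
import random

def adjust_to_threshold(step=1):
    # Method-of-adjustment trial: raise or lower the morph percentage until the
    # face is just barely recognizable as emotional, then confirm to end the trial.
    pct = random.randint(0, 100)  # random starting intensity, as in the task
    while True:
        key = input(f"{pct}% emotional  [u]p / [d]own / [c]onfirm: ").strip()
        if key == "u":
            pct = min(100, pct + step)
        elif key == "d":
            pct = max(0, pct - step)
        elif key == "c":
            return pct  # minimum intensity judged recognizable
```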

Results

Experiment 1: Same-location adaptation

To examine the influence of adaptor intensity on the FEAA, a best-fitting Weibull function (Wichmann & Hill, 2001) was fit to each participant's responses to the test faces. Then the point of subjective equality (PSE) was calculated from the individual fit. The PSE indicates the point at which the test face was equally likely (50%) to be judged happy or angry (see Fig. 4a). The shift in PSEs (the difference between the PSEs for the 10% to 90% intensity adaptation blocks and the PSE for the baseline blank-adaptation block) is presented in Fig. 4b. Positive values in Fig. 4b indicate that participants were more likely to perceive a 50:50 morph between the happy and angry faces as happy; negative values indicate that participants were more likely to perceive the same 50:50 morph as angry.
Fig. 4

Results from Experiment 1 (same-location adaptation). a Points of subjective equality (PSEs) for happy (white bars) and angry (gray bars) adaptation conditions as a function of the intensity of adapting facial expressions. Error bars represent ±1 standard error. b The shift in PSEs relative to the baseline block. Positive values indicate that participants were more likely to perceive a 50:50 morphed face as happy, and negative values indicate that participants were more likely to perceive a 50:50 morphed face as angry. Error bars represent ±1 standard error
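The psychometric fitting and PSE computation can be sketched as follows (an illustration with made-up response proportions; guess and lapse rates are fixed at zero for brevity, whereas Wichmann and Hill, 2001, recommend estimating them):

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

def weibull(x, alpha, beta):
    # Weibull psychometric function with guess/lapse rates fixed at zero.
    return 1.0 - np.exp(-(x / alpha) ** beta)

x = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)  # % happy in test morph
p_happy = np.array([0.05, 0.08, 0.20, 0.35, 0.55, 0.75, 0.90, 0.95, 0.99])  # made up

(alpha, beta), _ = curve_fit(weibull, x, p_happy, p0=[50.0, 3.0],
                             bounds=([1.0, 0.5], [100.0, 10.0]))

# PSE: the morph level judged "happy" and "angry" with equal probability.
pse = brentq(lambda v: weibull(v, alpha, beta) - 0.5, 1.0, 99.0)
print(f"alpha = {alpha:.1f}, beta = {beta:.1f}, PSE = {pse:.1f}% happy")
```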

The PSEs for the six adaptation blocks (a blank and five intensity levels) were subjected to repeated-measures analyses of variance (ANOVAs), separately for the angry and the happy adaptation conditions. For both emotions, the effect of the intensity of the adapting facial expressions was significant: for happy, F(5, 85) = 8.38, p < .001, ηp² = .33; for angry, F(5, 85) = 23.12, p < .001, ηp² = .576. For the happy adaptation, follow-up tests revealed that the PSE of the blank adaptation was significantly different from the PSEs when the adapting stimuli were 50%, F(1, 17) = 5.18, p = .036, ηp² = .234; 70%, F(1, 17) = 19.65, p < .001, ηp² = .536; and 90%, F(1, 17) = 31.35, p < .001, ηp² = .648, happy faces. For the angry adaptation, follow-up tests revealed that all PSEs were significantly different from the PSE of the blank adaptation (smallest F value: 8.19, p = .011, ηp² = .325, for the 10% angry face).

To test whether the shift in the PSEs increased linearly as a function of the intensity of the adapting facial expressions, we calculated linear trend L scores (Rosenthal, Rosnow, & Rubin, 2000) for each participant by multiplying the PSEs for the 10%, 30%, 50%, 70%, and 90% blocks by contrast weights of −3, −1, 0, +1, and +3 and summing the products. A positive L score indicates a linear increase in the PSEs, and a negative L score indicates a linear decrease. The L scores for both the angry adaptation condition (.453) and the happy adaptation condition (−.396) were significantly different from zero, t(17) = 6.32, p < .001, and t(17) = 7.00, p < .001, respectively. These results indicate that the adaptation aftereffect became stronger as a function of the intensity of the adapting facial expressions, suggesting that adaptive changes in the responses of two pools of neural populations may be responsible for the FEAA between two facial expressions.
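The per-participant trend analysis can be sketched as follows (the PSE values are fabricated for illustration; the contrast weights are those given above):

```python
import numpy as np
from scipy import stats

weights = np.array([-3, -1, 0, 1, 3])  # contrast weights as given in the text

# Rows: participants; columns: PSEs for the 10/30/50/70/90% adaptation blocks
# (values fabricated; the decreasing pattern mimics happy adaptation).
pses = np.array([[0.52, 0.50, 0.46, 0.42, 0.38],
                 [0.55, 0.53, 0.49, 0.44, 0.40],
                 [0.50, 0.49, 0.47, 0.43, 0.41]])

l_scores = pses @ weights                 # one linear-trend L score per participant
t, p = stats.ttest_1samp(l_scores, 0.0)   # does the mean trend differ from zero?
print(f"mean L = {l_scores.mean():+.3f}, t = {t:.2f}, p = {p:.4f}")
```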

Experiment 2: Different-location adaptation

Recent studies have demonstrated that adaptation to a curved line presented at the location of a mouth can affect the perception of a subsequently presented facial expression (Dickinson & Badcock, 2013; Xu, Dayan, Lipkin, & Qian, 2008), suggesting that adaptation aftereffects can propagate along the visual processing hierarchy. Thus, the intensity dependence of the FEAA reported in Experiment 1 could have resulted from local adaptation, because the adapting and test faces were presented at the same retinotopic location. To rule out this local-adaptation account, we assessed the adaptation aftereffect while presenting the adapting and test faces at different locations (see Fig. 3b). If intensity dependence truly characterizes the FEAA, it should not be constrained by the retinotopic location of low-level features (e.g., the curved line of a mouth). That is, the intensity dependence should be observed even when the adapting and test faces are presented at different retinotopic locations.

Replicating the results of Experiment 1 (same-location adaptation), the strength of the FEAA increased as a function of the intensity of the adapting facial expressions (see Fig. 5). For both facial expressions, the effect of the intensity of the adapting facial expressions was significant: for happy, F(5, 55) = 10.76, p < .001, ηp² = .494; for angry, F(5, 55) = 15.69, p < .001, ηp² = .588. For the happy adaptation, follow-up tests revealed that the PSE of the blank adaptation was significantly different from the PSEs when the adapting stimuli were 70%, F(1, 11) = 21.13, p = .001, ηp² = .658; and 90%, F(1, 11) = 11.33, p = .006, ηp² = .507, happy faces. For the angry adaptation, follow-up tests revealed that all PSEs were significantly different from the PSE of the blank adaptation (smallest F value: 10.99, p = .007, ηp² = .500, for the 10% angry face). Again, the L score for the angry adaptation condition (.362) was significantly higher than zero, t(17) = 5.35, p < .001, and the L score for the happy adaptation condition (−.561) was significantly lower than zero, t(17) = 5.03, p < .001. Thus, the intensity dependence of the FEAA cannot be entirely inherited from low-level visual feature adaptation. Overall, the results of both experiments consistently revealed a monotonic relation between the FEAA and adaptor intensity, suggesting that the opponent-coding model is not limited to a single dimension of facial expression but also accounts for the FEAA between two facial expressions.
Fig. 5

Results from Experiment 2 (different-location adaptation). a Points of subjective equality (PSEs) for happy (white bars) and angry (gray bars) adaptation conditions as a function of the intensity of adapting facial expressions. Error bars represent ±1 standard error. b The shift in PSEs relative to the baseline block. Positive values indicate that participants were more likely to perceive a 50:50 morphed face as happy, and negative values indicate that participants were more likely to perceive a 50:50 morphed face as angry. Error bars represent ±1 standard error

Minimum intensity of facial expression and FEAA

To examine whether the FEAA is based on a perceptual process that does not require recognition or labeling of facial expressions, we examined the minimum intensity of a facial expression required for its recognition. The mean minimum intensity required for an expression to be recognized was 39.76% (SD = 15.5) for happy and 58.3% (SD = 18.12) for angry. As presented earlier, for the happy adaptation, a significant shift in the PSE was observed only when the intensity of the adapting faces was at least 50%, which exceeded the minimum intensity required for a face to be recognized as happy. For the angry adaptation, however, the FEAA was observed even at the 10% intensity, which was well below the minimum intensity required for a face to be perceived as angry. These results indicate that, at least for angry faces, recognition of the emotion may not be critical for the FEAA to occur.

Discussion

Consistent with the prediction of the opponent-coding model, the current findings demonstrate that the magnitude of the FEAA between two facial expressions increased monotonically as the intensity of the adapting facial expressions increased. This result thus extends the scope of the opponent-coding model to the FEAA between two facial expressions. We also demonstrated that biases in the perception of a facial expression can be induced by adaptation to a face whose expression is too weak in intensity to be recognized as angry, suggesting that subtle changes in facial features are potent enough to cause the FEAA.

A monotonic increase in the FEAA as a function of the extremity of an adapting antiexpression suggests that a single facial expression may be represented by a balanced activity between two pools of neural populations tuned to opposite ends of a single facial expression dimension (Burton et al., 2015; Burton et al., 2013; Rhodes et al., 2017; Skinner & Benton, 2010, 2012a). Antiexpressions are created by morphing a facial expression (e.g., a happy face) along a linear trajectory through the overall norm face to a point opposite the original expression (see Fig. 1a). Although antiexpressions generally do not represent any particular emotion (Sato & Yoshikawa, 2009), the two-pool opponent-coding model provides a useful scheme for understanding how multiple facial expressions may be encoded and represented. Consistent with the prediction of the opponent-coding model, the current study demonstrated a monotonic relation between adaptor extremity and the magnitude of the FEAA between happy and angry faces. Thus, these two facial expressions may also be encoded and represented by a balanced activity between two neural populations, each tuned to one expression. In the current study, the test faces were morphed between happy and angry faces, and participants were instructed to indicate whether the faces were happy or angry. Because of the 2-AFC paradigm, a shift away from the adapting facial expression always resulted in a response favoring the other facial expression. Nevertheless, the monotonic increase of the FEAA as a function of the intensity of the adapting face demonstrates that the findings from previous studies using antiexpressions can be extended to two facial expressions.

Processing of facial expressions relies on both identity-dependent and identity-independent mechanisms (Campbell & Burke, 2009; C. J. Fox & Barton, 2007). The size of the FEAA is generally larger when the adapting and test faces share the same identity than when they do not, indicating that an identity-dependent mechanism is involved in the FEAA. The fact that the FEAA is often observed even when the adapting and test faces are of different identities, however, indicates that an identity-independent mechanism is also involved. The intensity dependence of the FEAA with antiexpressions has been observed both when the identity of the adapting face differs from that of the test face (Skinner & Benton, 2012a) and when the identities are the same (Burton et al., 2015; Burton et al., 2013; Skinner & Benton, 2010). Furthermore, the magnitude of the FEAA with antiexpressions is modulated simultaneously by the intensity of the adapting faces and by the identity of the adapting/test faces (Skinner & Benton, 2012a). Thus, the identity-independent and identity-dependent mechanisms of facial expression processing may share a common encoding scheme based on the opponent-coding mechanism. The FEAA between two facial expressions, however, has been studied only with the same identity for the adapting and test faces (Webster et al., 2004; Yang et al., 2010). Future studies should examine the intensity dependence of the FEAA between two facial expressions when the identities of the adapting and test faces are different (vs. the same).

The FEAA was observed even when the intensity of an adapting facial expression was too weak to be recognized as angry, and the monotonic increase in the magnitude of the FEAA began from 10% intensity of both angry and happy expressions. These results indicate that subtle changes in facial features that are not sufficient for recognition or labeling of facial expressions are potent enough to cause the FEAA. Thus, the current results suggest that perceptual processing of facial expressions and recognition of emotions are separate constructs represented by distinct systems (Skinner & Benton, 2010). Our results are also consistent with previous studies demonstrating that the FEAA occurs without conscious recognition of adapting expressions (Adams et al., 2010; Yang et al., 2010). However, it is worth noting that interocular suppression using continuous flash suppression may impact visual signals selectively (Yang & Blake, 2012), implying that selective perceptual processing of facial expressions outside visual awareness may result in the FEAA without conscious recognition of adapting faces. Thus, it is not clear whether the FEAA without awareness of adapting faces results from selective perceptual processing of suppressed adapting faces or from adaptation to emotional information represented without awareness (Killgore & Yurgelun-Todd, 2004; Vuilleumier et al., 2002; Whalen et al., 1998).

Interestingly, the FEAA with a very low-intensity adapting facial expression occurred only with angry, not happy, faces. The FEAA with happy adapting faces was observed only when the intensity of the adapting faces surpassed the minimum intensity required for a face to be recognized as happy. Although the reason is unclear, attention might play a role in this difference between angry and happy faces in the FEAA. Increased attention tends to enhance neural adaptation for faces (Rhodes et al., 2011) and for low-level visual features (Ling & Carrasco, 2006). Considering that angry faces attract more attention than happy faces (e.g., E. Fox et al., 2000; Pinkham, Griffin, Baron, Sasson, & Gur, 2010), increased attention to angry adapting faces might lower the intensity of expression required for the FEAA. Although this speculation is consistent with previous findings that the FEAA without awareness of adapting stimuli occurs only when spatial attention is allocated to the location of the invisible adapting face (Yang et al., 2010), future research should examine the role of attention in the intensity dependence of the FEAA.

In sum, the current study demonstrates that the FEAA between two faces increases monotonically as a function of the intensity of the adapting facial expression. This result extends the scope of the opponent-coding model to the encoding and representation of two facial expressions. The FEAA elicited by subtle changes in facial features further supports the perceptual nature of the FEAA. Thus, recognition or labeling of facial expressions may not be critical for the FEAA.

Footnotes

  1. Participants were allowed to rest for as long as they wanted after completing each block. As a result, about 28% of the participants could not complete all six blocks within the 2-hour session.

References

  1. Adams, W. J., Gray, K. L. H., Garner, M., & Graf, E. W. (2010). High-level face adaptation without awareness. Psychological Science, 21(2), 205–210. doi: 10.1177/0956797609359508
  2. Benton, C. P., Etchells, P. J., Porter, G., Clark, A. P., Penton-Voak, I. S., & Nikolov, S. G. (2007). Turning the other cheek: The viewpoint dependence of facial expression after-effects. Proceedings of the Royal Society B: Biological Sciences, 274(1622), 2131–2137. doi: 10.1098/rspb.2007.0473
  3. Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10(4), 433–436. doi: 10.1163/156856897X00357
  4. Burton, N., Jeffery, L., Calder, A. J., & Rhodes, G. (2015). How is facial expression coded? Journal of Vision, 15(1), 1. doi: 10.1167/15.1.1
  5. Burton, N., Jeffery, L., Skinner, A. L., Benton, C. P., & Rhodes, G. (2013). Nine-year-old children use norm-based coding to visually represent facial expression. Journal of Experimental Psychology: Human Perception and Performance, 39(5), 1261–1269. doi: 10.1037/a0031117
  6. Butler, A., Oruc, I., Fox, C. J., & Barton, J. J. S. (2008). Factors contributing to the adaptation aftereffects of facial expression. Brain Research, 1191, 116–126. doi: 10.1016/j.brainres.2007.10.101
  7. Campbell, J., & Burke, D. (2009). Evidence that identity-dependent and identity-independent neural populations are recruited in the perception of five basic emotional facial expressions. Vision Research, 49(12), 1532–1540. doi: 10.1016/j.visres.2009.03.009
  8. Dickinson, J. E., & Badcock, D. R. (2013). On the hierarchical inheritance of aftereffects in the visual system. Frontiers in Psychology, 4. doi: 10.3389/fpsyg.2013.00472
  9. Fox, C. J., & Barton, J. J. S. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Research, 1127, 80–89. doi: 10.1016/j.brainres.2006.09.104
  10. Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A., & Dutton, K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition & Emotion, 14(1), 61–92.
  11. Hsu, S., & Young, A. (2004). Adaptation effects in facial expression recognition. Visual Cognition, 11(7), 871–899. doi: 10.1080/13506280444000030
  12. Jeffery, L., McKone, E., Haynes, R., Firth, E., Pellicano, E., & Rhodes, G. (2010). Four-to-six-year-old children use norm-based coding in face-space. Journal of Vision, 10(5), 18. doi: 10.1167/10.5.18
  13. Jeffery, L., Rhodes, G., McKone, E., Pellicano, E., Crookes, K., & Taylor, E. (2011). Distinguishing norm-based from exemplar-based coding of identity in children: Evidence from face identity aftereffects. Journal of Experimental Psychology: Human Perception and Performance, 37(6), 1824–1840. doi: 10.1037/a0025643
  14. Juricevic, I., & Webster, M. A. (2012). Selectivity of face aftereffects for expressions and anti-expressions. Frontiers in Psychology, 3. doi: 10.3389/fpsyg.2012.00004
  15. Killgore, W. D. S., & Yurgelun-Todd, D. A. (2004). Activation of the amygdala and anterior cingulate during nonconscious processing of sad versus happy faces. NeuroImage, 21(4), 1215–1223. doi: 10.1016/j.neuroimage.2003.12.033
  16. Kovács, G., Zimmer, M., Harza, I., Antal, A., & Vidnyánszky, Z. (2005). Position-specificity of facial adaptation. NeuroReport, 16(17), 1945–1949.
  17. Leopold, D. A., O’Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4(1), 89–94.
  18. Lewis, M. (2004). Face-space-R: Towards a unified account of face recognition. Visual Cognition, 11(1), 29–69. doi: 10.1080/13506280344000194
  19. Ling, S., & Carrasco, M. (2006). When sustained attention impairs perception. Nature Neuroscience, 9(10), 1243–1245. doi: 10.1038/nn1761
  20. Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska directed emotional faces (KDEF) [CD-ROM]. Stockholm, Sweden: Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet.
  21. McKone, E., Jeffery, L., Boeing, A., Clifford, C. W. G., & Rhodes, G. (2014). Face identity aftereffects increase monotonically with adaptor extremity over, but not beyond, the range of natural faces. Vision Research, 98, 1–13. doi: 10.1016/j.visres.2014.01.007
  22. Pell, P. J., & Richards, A. (2013). Overlapping facial expression representations are identity-dependent. Vision Research, 79, 1–7. doi: 10.1016/j.visres.2012.12.009
  23. Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10(4), 437–442.
  24. Pinkham, A. E., Griffin, M., Baron, R., Sasson, N. J., & Gur, R. C. (2010). The face in the crowd effect: Anger superiority when using real faces and multiple identities. Emotion, 10(1), 141–146. doi: 10.1037/a0017387
  25. Rhodes, G., & Jeffery, L. (2006). Adaptive norm-based coding of facial identity. Vision Research, 46(18), 2977–2987. doi: 10.1016/j.visres.2006.03.002
  26. Rhodes, G., Jeffery, L., Evangelista, E., Ewing, L., Peters, M., & Taylor, L. (2011). Enhanced attention amplifies face adaptation. Vision Research, 51(16), 1811–1819. doi: 10.1016/j.visres.2011.06.008
  27. Rhodes, G., Pond, S., Jeffery, L., Benton, C., Skinner, A., & Burton, N. (2017). Aftereffects support opponent coding of expression. Journal of Experimental Psychology: Human Perception and Performance, 43, 619–628.
  28. Robbins, R., McKone, E., & Edwards, M. (2007). Aftereffects for face attributes with different natural variability: Adapter position effects and neural models. Journal of Experimental Psychology: Human Perception and Performance, 33(3), 570–592. doi: 10.1037/0096-1523.33.3.570
  29. Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and effect sizes in behavioral research: A correlational approach. Cambridge, UK: Cambridge University Press.
  30. Ross, D. A., Deroche, M., & Palmeri, T. J. (2014). Not just the norm: Exemplar-based models also predict face aftereffects. Psychonomic Bulletin & Review, 21(1), 47–70. doi: 10.3758/s13423-013-0449-5
  31. Sato, W., & Yoshikawa, S. (2009). Anti-expressions: Artificial control stimuli for the visual properties of emotional facial expressions. Social Behavior and Personality: An International Journal, 37(4), 491–501. doi: 10.2224/sbp.2009.37.4.491
  32. Skinner, A. L., & Benton, C. P. (2010). Anti-expression aftereffects reveal prototype-referenced coding of facial expressions. Psychological Science, 21(9), 1248–1253. doi: 10.1177/0956797610380702
  33. Skinner, A. L., & Benton, C. P. (2012a). The expressions of strangers: Our identity-independent representation of facial expression. Journal of Vision, 12(2), 12. doi: 10.1167/12.2.12
  34. Skinner, A. L., & Benton, C. P. (2012b). Visual search for expressions and anti-expressions. Visual Cognition, 20(10), 1186–1214. doi: 10.1080/13506285.2012.743495
  35. Susilo, T., McKone, E., & Edwards, M. (2010). What shape are the neural response functions underlying opponent coding in face space? A psychophysical investigation. Vision Research, 50(3), 300–314. doi: 10.1016/j.visres.2009.11.016
  36. Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. The Quarterly Journal of Experimental Psychology, A: Human Experimental Psychology, 43(2), 161–204.
  37. Vuilleumier, P., Armony, J. L., Clarke, K., Husain, M., Driver, J., & Dolan, R. J. (2002). Neural response to emotional faces with and without awareness: Event-related fMRI in a parietal patient with visual extinction and spatial neglect. Neuropsychologia, 40(12), 2156–2166.
  38. Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004). Adaptation to natural facial categories. Nature, 428(6982), 557–561. doi: 10.1038/nature02420
  39. Webster, M. A., & MacLeod, D. I. A. (2011). Visual adaptation and face perception. Philosophical Transactions of the Royal Society, B: Biological Sciences, 366(1571), 1702–1725. doi: 10.1098/rstb.2010.0360
  40. Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., & Jenike, M. A. (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. The Journal of Neuroscience, 18(1), 411–418.
  41. Wichmann, F. A., & Hill, N. J. (2001). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63(8), 1293–1313. doi: 10.3758/BF03194544
  42. Xu, H., Dayan, P., Lipkin, R. M., & Qian, N. (2008). Adaptation across the cortical hierarchy: Low-level curve adaptation affects high-level facial-expression judgments. Journal of Neuroscience, 28(13), 3374–3383. doi: 10.1523/JNEUROSCI.0182-08.2008
  43. Yamashita, J. A., Hardy, J. L., De Valois, K. K., & Webster, M. A. (2005). Stimulus selectivity of figural aftereffects for faces. Journal of Experimental Psychology: Human Perception and Performance, 31(3), 420–437. doi: 10.1037/0096-1523.31.3.420
  44. Yang, E., & Blake, R. (2012). Deconstructing continuous flash suppression. Journal of Vision, 12(3), 8. doi: 10.1167/12.3.8
  45. Yang, E., Hong, S.-W., & Blake, R. (2010). Adaptation aftereffects to facial expressions suppressed from visual awareness. Journal of Vision, 10(12), 24. doi: 10.1167/10.12.24

Copyright information

© Psychonomic Society, Inc. 2017

Authors and Affiliations

  1. Department of Psychology and Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, USA
  2. Department of Psychology, University of Notre Dame, Notre Dame, USA