
Attention, Perception, & Psychophysics, Volume 80, Issue 6, pp 1461–1473

Serial dependence promotes the stability of perceived emotional expression depending on face similarity

  • Alina Liberman
  • Mauro Manassi
  • David Whitney

Abstract

Individuals can quickly and effortlessly recognize facial expressions, which is critical for social perception and emotion regulation. This sensitivity to even slight facial changes could result in unstable percepts of an individual’s expression over time. The visual system must therefore balance accuracy with maintaining perceptual stability. However, previous research has focused on our sensitivity to changing expressions, and the mechanism behind expression stability remains an open question. Recent results demonstrate that perception of facial identity is systematically biased toward recently seen visual input. This positive perceptual pull, or serial dependence, may help stabilize perceived expression. To test this, observers judged random facial expression morphs ranging from happy to sad to angry. We found a pull in perceived expression toward previously seen expressions, but only when the 1-back and current face had similar identities. Our results are consistent with the existence of the continuity field for expression, a specialized mechanism that promotes the stability of emotion perception, which could help facilitate social interactions and emotion regulation.

Keywords

Serial dependence · Face perception · Perceptual stability

Introduction

The perception of emotional expression is fundamental for successful social interactions, personal emotion regulation, the experience of empathy, and many other vital activities (Salovey & Mayer, 1990). Individuals with Parkinson’s disease, schizophrenia, traumatic brain injury, and other cognitive deficits have impairments in recognizing facial affect, which may have deleterious effects on their personal and social interactions (Croker & McDonald, 2005; Jacobs, Shuren, Bowers, & Heilman, 1995; Martin, Baudouin, Tiberghien, & Franck, 2005). Most research on emotion perception focuses on the speed and accuracy of emotion categorization and recognition (Edwards, 1998; Ekman & Friesen, 1971; Kirouac & Dore, 1984; Stel & van Knippenberg, 2008; Tracy & Robins, 2008; Tracy & Randles, 2011). The visual system is very sensitive to emotional expression; observers can discriminate expressions above chance at presentations as brief as 30–50 ms (Calvo & Esteves, 2005; Kirouac & Dore, 1984; Milders, Sahraie, & Logan, 2008). Yet emotional expressions do not change constantly or spontaneously. It is therefore important to balance the ability to detect new facial expressions against the need to maintain perceived stability of an individual’s emotional state. However, no study has addressed how the visual system promotes the perception of expression stability over time.

From moment to moment, we perceive the identities of objects and people in the world as stable and continuous even though their image properties frequently change due to factors like occlusion, visual noise, changes in viewpoint, and eye movements. Previous studies have shown that the perception of orientation, numerosity, and other low-level stimulus features is serially dependent—systematically biased (i.e., pulled) towards similar visual input from the recent past (Cicchini, Anobile, & Burr, 2014; Corbett, Fischer, & Whitney, 2011; Fischer & Whitney, 2014). This serial dependence is tuned over distance (Fischer & Whitney, 2014; Manassi, Liberman, Kosovicheva, Zhang, & Whitney, 2018) and time (Fischer & Whitney, 2014; Manassi et al., 2018; Taubert, Alais, & Burr, 2016; Xia, Leib, & Whitney, 2016), as well as in feature space (object similarity; Fischer & Whitney, 2014; Fritsche, Mostert, & de Lange, 2017; Liberman, Fischer, & Whitney, 2014; Manassi, Liberman, Chaney, & Whitney, 2017). The spatio-temporal region over which current object features, such as orientation, are pulled by previously seen features is known as the Continuity Field (CF).

Beyond orientation (Fischer & Whitney, 2014; Fritsche et al., 2017; Liberman, Zhang, & Whitney, 2016; Manassi et al., 2017) and other basic features (motion: Alais, Leung, & Van der Burg, 2017; position: Manassi et al., 2018), serial dependence occurs at higher levels of perception as well. We have recently demonstrated that the continuity field is object-selective by showing that the perception of face identity is systematically biased towards identities seen up to several seconds prior, even across changes in viewpoint (Liberman et al., 2014; see also Taubert, Alais, & Burr, 2016; Taubert, Van der Burg, & Alais, 2016; Xia et al., 2016). If the continuity field promotes the perceived stability of emotional expression as well as identity, then there should be serial dependence not just in identity, but also in facial expression. We therefore predicted that perceived emotional expression would be biased towards recently seen emotional expressions. Here, we tested this using a psychophysical task, and we also determined whether this serial dependence in perceived emotional expression depended on the similarity of sequential identities. Because expressions within an individual may be more autocorrelated than across individuals, we expected that there should be a larger perceptual pull from previously seen faces if the previous face was more similar to the current face.

Experiment 1: Serial dependence of perceived emotional expression

If the CF facilitates the perceptual stability of perceived emotional expression, then there should be a measurable serial dependence in judged expression; the perception of a facial expression at one moment in time should be pulled towards recently seen expressions. To test this, we presented a series of random facial expressions drawn from an expression morph continuum (Fig. 1a-b) and had observers report the facial expression that they last saw through a method of adjustment task (Fig. 1c-d). The question was whether the perceived expression at a given moment was serially dependent on the expressions of the faces seen several seconds previously.
Fig. 1

Stimuli and trial sequence from Experiments 1 and 2. (a-b) Face morphs used in Experiments 1 and 2. These morphs were based on original female (a) and male (b) Ekman identities displaying a happy, sad, or angry face (Ekman, 1976). For each gender, a set of 48 morphs was created between each pair of expressions, resulting in a face morph continuum of 147 faces. In Experiment 1, only the female identity was presented (a). In Experiment 2, both the male and female identities were presented (a-b). (c) Trial sequence for the method of adjustment task. On each trial, a randomly selected target expression (female only in Experiment 1; male or female in Experiment 2) was presented for 250 ms, followed by a 1000-ms noise mask of black and white pixels to reduce afterimages, and a 250-ms fixation cross. Participants then saw a test screen containing a random adjustment face, which they modified by scrolling through the continuous expression wheel to match the target expression. After picking a match expression, participants saw a 1000-ms noise mask followed by a 1000-ms fixation cross before the next trial began. (d) In addition to the female morphs (c), a set of male face morphs was also used in Experiment 2

Methods

Participants

Six participants (three female) ranging in age from 19 to 33 years (M = 26.7, SD = 5.5 years) participated in Experiment 1. One of the participants in Experiment 1 was not naïve to the experiment. All experimental procedures were approved by the UC Berkeley Institutional Review Board. Participants were affiliates of UC Berkeley and provided written informed consent before participation. All participants had normal or corrected-to-normal vision.

Stimuli and procedure

We used a set of 147 Caucasian female face morphs with different expressions (Fig. 1a), which were generated using Morph 2.5 (Gryphon Software) from one original Ekman identity displaying a happy, sad, or angry face (Ekman, 1976), cropped by an oval aperture to remove the hairline. Each presented face subtended 5.9 x 7.3° of visual angle. During the experiment, participants were tested on their ability to identify randomly chosen target expressions with a method of adjustment (MOA) task. We measured participants’ identification errors on the MOA task to determine whether a participant’s perception of each target expression was influenced by previously seen target expressions. For all experiments, faces were centered on a white background and overlaid with a central fixation cross. All experiments were programmed in MATLAB (The MathWorks, Natick, MA) using Psychophysics Toolbox (Brainard, 1997). Participants viewed stimuli at a distance of 56 cm on a monitor with a resolution of 1024 x 768 and a refresh rate of 100 Hz. Participants used a keyboard or mouse for all responses.

On each trial, a random target expression was presented for 250 ms, followed by a 1000-ms noise mask of randomly shuffled black and white pixels, to reduce afterimages, and then a 250-ms fixation cross prior to the response (Fig. 1c). Participants then saw a test screen containing a random adjustment expression, which they adjusted to match the target facial expression. After picking a match expression, participants saw a 1000-ms noise mask followed by a 1000-ms fixation cross before the next trial began. Here, we use the terms “target expression” to mean the face that participants tried to match, “adjustment expression” to denote the randomly-selected face used as the starting point for matching the target, and “match expression” for the facial expression that participants selected as most similar to the target expression. The experiment was self-paced and participants were allowed to take as much time as necessary to respond. We recorded responses based on the numerical value of the match expression along the morph continuum, with possible values ranging from 1 to 147. Six participants each completed 500 trials.
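To make this trial timeline concrete, a minimal Psychtoolbox-style sketch of one trial is given below. It assumes Psychtoolbox is installed and that win, faceTex (a preloaded 1 × 147 array of face textures), noiseTex, and drawFixation already exist; these names are illustrative placeholders, not the authors' code.

```matlab
% One trial of the adjustment task (illustrative sketch, not the authors' code)
target = randi(147);                      % random target expression on the wheel
Screen('DrawTexture', win, faceTex(target));
Screen('Flip', win);  WaitSecs(0.250);    % target face, 250 ms
Screen('DrawTexture', win, noiseTex);
Screen('Flip', win);  WaitSecs(1.000);    % black-and-white pixel-noise mask, 1000 ms
drawFixation(win);                        % assumed helper drawing the fixation cross
Screen('Flip', win);  WaitSecs(0.250);    % fixation cross, 250 ms
match = randi(147);                       % random starting point for the adjustment face
% ...participant scrolls through the wheel (e.g., with GetMouse or KbCheck)
% until confirming a match; the chosen index (1-147) is recorded as the response.
```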

Analysis

Response (perceptual) error was computed as the shortest distance along the morph wheel between the match expression and the target expression. Response error was compared to the difference in expressions between the current and previous trial, computed as the shortest distance along the morph wheel between the previous target expression (1-back) and the current target expression. For each participant’s data, trials were considered lapses and excluded if error exceeded 3 standard deviations from the mean or if the response time was longer than 10 s (less than 5% of data excluded on average). We fitted a simplified derivative-of-Gaussian (DoG) curve to each participant’s data of the form:
$$ y = xab\frac{\sqrt{2}}{e^{-0.5}}\,e^{-{(bx)}^2} $$
where y is the identification error on each trial (match expression − current target expression), x is the difference along the wheel between the current and 1-back target expression (1-back target expression − current target expression), a is half the peak-to-trough amplitude of the DoG, b scales the width of the DoG, and the constant √2/e^−0.5 scales the curve so that the a parameter equals the peak amplitude (Fig. 2b). We fitted the DoG using constrained nonlinear minimization of the residual sum of squares. When fitting each participant’s data with a von Mises function instead, the a and b parameters yielded very similar values.
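For concreteness, the sketch below shows how this error computation and fit could be implemented in MATLAB, the language the experiments were programmed in. The variable names (target, match), starting values, and parameter bounds are illustrative assumptions, not the authors' code; fmincon requires the Optimization Toolbox.

```matlab
% Signed shortest distance along the 147-step morph wheel (assumes expressions
% are coded 1..147, stored as column vectors, and the wheel wraps around)
wheelDist = @(a, b) mod(a - b + 73.5, 147) - 73.5;

prevTarget = [NaN; target(1:end-1)];   % 1-back target expression
y = wheelDist(match, target);          % response error on each trial
x = wheelDist(prevTarget, target);     % 1-back minus current target expression

% Simplified DoG; the constant c makes p(1) the peak amplitude,
% which occurs at x = 1/(sqrt(2)*p(2))
c = sqrt(2) / exp(-0.5);
dog = @(p, q) q .* p(1) .* p(2) .* c .* exp(-(p(2) .* q).^2);

% Constrained least-squares fit (starting values and bounds are assumptions)
ok  = ~isnan(x);                       % drop the first trial (no 1-back)
sse = @(p) sum((y(ok) - dog(p, x(ok))).^2);
pHat = fmincon(sse, [1, 0.05], [], [], [], [], [-30, 0.005], [30, 1]);
```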
Fig. 2

Experiment 1 results. (a) Collapsed data from all participants for 1-back trials, with each data point showing performance on one trial. The x-axis is the difference between the current and 1-back target expression (1-back target expression - current target expression, in units of expression morph steps), and the y-axis is the difference between the selected match expression and target expression (match expression - current target expression). Black lines show the DoG fit for single observers. (b) Half-amplitude of the serial dependence for each participant in Experiment 1 for one, two, three, and four trials back. On average, all participants had a significant, positive perceptual pull of the current facial expression towards the expression seen one or two trials previously (p<.05, permuted null distribution). Error bars are bootstrapped 95% confidence intervals

For each participant’s data, we generated confidence intervals by calculating a bootstrapped distribution of the model-fitting parameter values by resampling the data with replacement 5000 times (Efron & Tibshirani, 1986). On each iteration, we fitted a new DoG to obtain a bootstrapped half-amplitude and width for each participant. We used the half amplitude of the DoG—the a parameter in the above equation—to measure the degree to which participants’ reports of facial expression were pulled in the direction of n-back expressions. If participants’ perception of facial expression was repelled by the 1-back expression (e.g., because of a negative aftereffect; Clifford et al., 2007; Webster, Kaping, Mizokami, & Duhamel, 2004) or not influenced by the 1-back expression (because of independent, bias-free perception on each trial), then the half-amplitude of the DoG should be negative or close to zero, respectively. In order to further confirm the reliability of our effects across participants, we also separately ran an additional bootstrap analysis by resampling with replacement the amplitudes within each group (1–4 back for Experiment 1; Same and Different for Experiments 2 and 3). The results were equivalent to the group bootstrapped amplitudes that we report across the manuscript.
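Continuing the fitting sketch above, the bootstrap could look as follows (same assumed variable names; percentile indices are computed directly so no additional toolbox is needed):

```matlab
% Resample trials with replacement and refit to get a bootstrapped
% distribution of the DoG half-amplitude
nBoot  = 5000;
idxAll = find(ok);
ampBoot = zeros(nBoot, 1);
for i = 1:nBoot
    idx  = idxAll(randi(numel(idxAll), numel(idxAll), 1));  % resample with replacement
    sseB = @(p) sum((y(idx) - dog(p, x(idx))).^2);
    pB   = fmincon(sseB, [1, 0.05], [], [], [], [], [-30, 0.005], [30, 1]);
    ampBoot(i) = pB(1);
end
sortedAmp = sort(ampBoot);
ci95 = sortedAmp(round([0.025, 0.975] * nBoot));  % bootstrapped 95% CI on amplitude
```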

In order to calculate significance, we also generated a null distribution of half-amplitude (a) values for each participant using a permutation analysis. We randomly shuffled each participant’s response errors relative to the differences between the current and 1-back target expressions and recalculated the DoG fit for each iteration of the shuffled data. We ran this procedure for 5000 iterations to generate a within-participant null distribution of half-amplitude values. P-values were calculated as the proportion of half-amplitudes in each participant’s null distribution that were greater than or equal to the observed half-amplitude. To test significance at the group level, we drew a random a parameter value (without replacement) from each participant’s null distribution and averaged those values across all participants. We repeated this procedure for 5000 iterations to generate a group null distribution of average half-amplitude values, and calculated the p-value as described above.
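The within-participant permutation test could be sketched as follows, again reusing the assumed variables from the fitting sketch; shuffling y relative to x destroys any trial-to-trial pairing while preserving the marginal error distribution:

```matlab
% Null distribution of half-amplitudes under shuffled trial pairings
nPerm = 5000;
ampNull = zeros(nPerm, 1);
yOk = y(ok);  xOk = x(ok);
for i = 1:nPerm
    yShuf = yOk(randperm(numel(yOk)));                    % shuffle errors vs. 1-back diffs
    sseP  = @(p) sum((yShuf - dog(p, xOk)).^2);
    pP    = fmincon(sseP, [1, 0.05], [], [], [], [], [-30, 0.005], [30, 1]);
    ampNull(i) = pP(1);
end
pVal = mean(ampNull >= pHat(1));   % proportion of null amplitudes >= observed
```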

Results

All participants displayed a positive DoG half-amplitude, indicating that perceived expression on a given trial was significantly pulled in the direction of expressions presented in the previous trial (p < 0.001, n=6, group permuted null, Fig. 2b), with four of the six participants showing significant serial dependence (p < .05, permuted null). The largest attraction of perceived expression occurred when the 1-back target expression was, on average, ±23.1 morph frames away from the current target expression, which resulted in an average perceptual pull towards the 1-back face of ±3.02 face morph frames. Most participants also showed an influence of expressions seen two trials back (p < .05, n=6, group permuted null). Average response time (RT) across participants was 2695 ms (SD = 1523 ms), so the 1-back face occurred, on average, ~6,195 ms prior to the current trial face, and the 2-back face occurred ~12,390 ms prior to the current face. Perceived facial expression was therefore pulled toward the expression of a random target expression seen more than 6–12 s prior.

Experiment 2: Is serial dependence for emotional expression gender dependent?

In Experiment 1, we found serial dependence in perceived emotional expression. This is consistent with the idea that the continuity field facilitates the stability of perceived expression by echoing the physical autocorrelations of facial expressions (Liberman et al., 2014). Individual faces convey expressions that vary over time, but these expressions do not randomly or suddenly change.

Therefore, emotional expressions might be more physically autocorrelated within an individual face than across different identities. If the visual system mirrors this, we would expect stronger serial dependence in perceived expression when sequential faces share the same gender than when they do not. In Experiment 2, we tested whether the amplitude of serial dependence for facial expression was modulated by the similarity between the current and the 1-back face gender.

Methods

Participants

Seven participants (four female) ranging in age from 20 to 37 years (M = 29.14, SD = 5.5 years) participated in Experiment 2. One participant was excluded because their response error SD was more than two SDs away from the other participants’; including their data did not change the pattern or significance of the results. One of the participants in Experiment 2 was not naïve to the experiment, and five of the participants also participated in Experiment 1. All experimental procedures were approved by the UC Berkeley Institutional Review Board. Participants were affiliates of UC Berkeley and provided written informed consent before participation. All participants had normal or corrected-to-normal vision.

Stimuli and procedure

The faces used in Experiment 2 consisted of two emotional morph continuums: the first morph continuum was the set of female faces used in Experiment 1 (Fig. 1a), and the second set was based on a male face (Fig. 1b; Ekman, 1976). We created the male face morph continuum between three facial expressions (happy, sad, and angry) using the same morph procedures as described in Experiment 1 (Fig. 1b).

During the experiment, participants were tested on their ability to identify randomly chosen target emotions with a MOA matching task, similar to the task in Experiment 1. However, on each trial, participants now saw a randomly chosen male or female target expression for 250 ms, followed by a 1000-ms noise mask of randomly shuffled black and white pixels, and then a 250-ms fixation cross prior to the response (Fig. 1c-d). Participants then saw a test screen containing a random adjustment expression with the same gender as the target expression, which they adjusted to match the last expression they saw. After picking a match expression, participants saw a 1000-ms noise mask followed by a 1000-ms fixation cross before the next trial began. Six participants each completed 400 trials. All other experiment procedures were identical to Experiment 1.

Analysis

Response error was computed as the shortest distance along the morph wheel between the match expression and the target expression. Response error was compared to the difference in expressions between the current and previous trial, computed as the shortest distance along the morph wheel between the previous target expression (1-back) and the current target expression. Trials where the 1-back target expression and the current target expression shared the same gender were labeled “Same 1-back Gender,” and trials where they had different genders were labeled “Different 1-back Gender.” For each participant, we fitted separate simplified DoG functions to Same 1-back and Different 1-back trials, according to the fitting and significance-testing procedures described in Experiment 1. We then determined whether there was a significant difference in the serial dependence amplitude between same 1-back gender and different 1-back gender trials using a permutation analysis.

For the permutation analysis, we shuffled the Same 1-back and Different 1-back trial labels, recalculated the DoG fits for the new, randomly assigned trial types, and took the difference between the Same and Different 1-back DoG amplitudes. We ran this procedure for 5000 iterations to generate a within-participant null distribution of difference scores. We calculated a p-value as the proportion of difference values in each participant’s null distribution that were greater than or equal to the observed difference between amplitudes. To test significance at the group level, we drew a random a-difference value (without replacement) from each participant’s null distribution of differences and averaged those values across all participants. We repeated this procedure for 5000 iterations to generate a group null distribution of average difference values, and calculated the p-value as described above.
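A sketch of this label-shuffling test in the same MATLAB style is shown below. Here isSame is an assumed logical vector marking trials whose 1-back face shared the current face's gender, and fitDoGAmp is an assumed local helper wrapping the constrained DoG fit from the Experiment 1 sketch:

```matlab
% Observed amplitude difference between Same and Different 1-back trials
obsDiff = fitDoGAmp(y(isSame), x(isSame)) - fitDoGAmp(y(~isSame), x(~isSame));

% Null distribution: shuffle the Same/Different labels and refit
nPerm = 5000;
diffNull = zeros(nPerm, 1);
for i = 1:nPerm
    lab = isSame(randperm(numel(isSame)));   % randomly reassign trial labels
    diffNull(i) = fitDoGAmp(y(lab), x(lab)) - fitDoGAmp(y(~lab), x(~lab));
end
pVal = mean(diffNull >= obsDiff);            % one-tailed p for the observed difference

% Assumed helper: constrained DoG fit returning the half-amplitude a
function a = fitDoGAmp(err, dx)
    c   = sqrt(2) / exp(-0.5);
    dog = @(p, q) q .* p(1) .* p(2) .* c .* exp(-(p(2) .* q).^2);
    sse = @(p) sum((err - dog(p, dx)).^2);
    p   = fmincon(sse, [1, 0.05], [], [], [], [], [-30, 0.005], [30, 1]);
    a   = p(1);
end
```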

Results

The half-amplitude of serial dependence was significantly larger for the Same 1-back trials compared to the Different 1-back trials (p < 0.001, n=6, group permuted null, Fig. 3b), with three of the six participants individually showing significantly larger serial dependence (p < .05, permuted null). Additionally, the Same 1-back trials showed an overall positive serial dependence effect, replicating the results from Experiment 1 (p < 0.001, n=6, group permuted null, Fig. 3b). The Different 1-back trials showed no overall serial dependence effect for perceived expression (p=.8, n=6, group permuted null, Fig. 3b). Therefore, perceived facial expression was pulled towards the facial expression seen one trial ago, but only if the current and previous faces had similar identities. Average response time (RT) across participants was 2874 ms (SD = 1553 ms) for this experiment. This result suggests that the object-selective continuity field maintains the stability of perceived facial expressions over time in a gender-dependent manner. These results also demonstrate that the positive serial dependence found in the Same 1-back trials is not entirely due to previous motor responses or general response biases (Luce & Green, 1974; Tanner, Rauk, & Atkinson, 1970; Wiegersma, 1982a, 1982b): a response was made on every trial, yet serial dependence emerged only in the same-gender condition.
Fig. 3

Experiment 2 results. (a) Collapsed data from all participants for trials with the Same 1-back gender (left), and collapsed data from all participants for all trials with a different 1-back gender (right). Each data point shows performance on one trial. Black lines show the DoG fit for single observers. (b) Half-amplitude of the serial dependence for each individual participant in Experiment 2 for same 1-back and different 1-back trials. On average, there was a significantly larger amplitude of serial dependence for Same 1-back trials compared to Different 1-back trials (p<.001, permuted null distribution). Error bars are bootstrapped 95% confidence intervals and p-value is based on group permuted null distribution. Participant 2 and Participant 6 also participated in Experiment 1

Experiment 3: Is serial dependence for emotional expression ethnicity dependent?

In Experiment 2, we found that serial dependence in perceived emotional expression was selective for the gender of previously seen faces. However, gender was confounded with identity, because the male and female faces were also different individuals. To disentangle whether serial dependence is selective for gender or for identity, we tested whether the amplitude of serial dependence for facial expression was modulated by the similarity between the current and the 1-back face ethnicity. The rationale is that, if serial dependence is as selective for ethnicity as it is for gender (Experiment 2), then serial dependence is identity-dependent. Conversely, if serial dependence is not selective for ethnicity, then serial dependence is at least gender-dependent.

Methods

Participants

Five participants (three female) ranging in age from 20 to 37 years (M = 27.6, SD = 4.07 years) participated in Experiment 3. One of the participants in Experiment 3 was not naïve to the experiment, and three of the participants also participated in Experiments 1 and 2. All experimental procedures were approved by the UC Berkeley Institutional Review Board. Participants were affiliates of UC Berkeley and provided written informed consent before participation. All participants had normal or corrected-to-normal vision.

Stimuli, procedure, and analysis

First, we created two prototype faces, one Asian and one Caucasian, using FaceGen Modeller. Second, for each prototype we created a happy, a sad, and an angry facial expression, cropped by an oval aperture to remove the hairline. Third, we generated 48 face morphs between each pair of expressions within each ethnicity, using Morph 2.5 (Gryphon Software). As a result, we obtained two sets of 147 facial expressions (48 + 48 + 48 morphs + 3 originals), one set of Asian faces (Fig. 4a) and one set of Caucasian faces (Fig. 4b). Each presented face subtended 5.9 x 7.3° of visual angle.
Fig. 4

Stimuli and trial sequence from Experiment 3. (a-b) Face morphs used in Experiment 3. These morphs were based on an Asian (a) and a Caucasian (b) identity, each displaying a happy, sad, or angry face. For each ethnicity, a set of 48 morphs was created between each pair of expressions, resulting in a face morph continuum of 147 faces. (c-d) On each trial, a randomly selected target expression of either ethnicity, Asian (c) or Caucasian (d), was presented for 250 ms, followed by a 1000-ms noise mask of black and white pixels to reduce afterimages, and a 250-ms fixation cross. Participants then saw a test screen containing a random adjustment face, which they modified by scrolling through the continuous expression wheel to match the target expression. After picking a match expression, participants saw a 1000-ms noise mask followed by a 1000-ms fixation cross before the next trial began

During the experiment, participants were tested on their ability to identify randomly chosen target emotions with a MOA matching task, similar to the task in Experiment 2. On each trial, participants saw a randomly chosen Asian or Caucasian target expression for 250 ms, followed by a 1000-ms noise mask of randomly shuffled black and white pixels, and then a 250-ms fixation cross prior to the response (Fig. 4c-d). Participants then saw a test screen containing a random adjustment expression with the same ethnicity as the target expression, which they adjusted to match the last expression they saw. After picking a match expression, participants saw a 1000-ms noise mask followed by a 1000-ms fixation cross before the next trial began. Five participants each completed 400 trials. All other experiment procedures were identical to Experiment 2.

Results

The Same 1-back ethnicity trials and Different 1-back ethnicity trials showed an overall positive serial dependence effect, replicating the results from Experiment 1 (Same 1-back trials: p < 0.01; Different 1-back trials: p<0.01; n=5, group permuted null, Fig. 5a). Therefore, perceived facial expression was pulled towards the facial expression seen one trial ago independent of the ethnicity of the face. Crucially, the half-amplitude of serial dependence was not significantly different for the Same 1-back trials compared to the Different 1-back trials (p = 0.46, n=5, group permuted null, Fig. 5b).
Fig. 5

Experiment 3 results. (a) Collapsed data from all participants for trials with the Same 1-back ethnicity (left), and collapsed data from all participants for trials with a Different 1-back ethnicity (right). Each data point shows performance on one trial. Black lines indicate the DoG fit for single observers. (b) Half-amplitude of the serial dependence for each individual participant in Experiment 3 for Same 1-back and Different 1-back trials. On average, there was no significant difference in the amplitude of serial dependence for Same 1-back trials compared to Different 1-back trials (p = 0.46, permuted null distribution). Error bars are bootstrapped 95% confidence intervals and the p-value is based on the group permuted null distribution. Participants 3 and 4 also participated in Experiment 1

Additionally, average response time (RT) across participants was 2774 ms (SD = 1328 ms) for this experiment. Importantly, the difference in serial dependence between same and different gender (Experiment 2) was significantly larger than the difference in serial dependence between same and different ethnicity (Experiment 3; p < 0.01, group permuted null). Taken together, these results suggest that the object-selective continuity field maintains the stability of perceived facial expressions over time independent of ethnicity (Experiment 3), whereas it is selective for gender (Experiment 2).

Discussion

Our experiments demonstrated that perceived emotional expression was pulled by expressions seen up to two trials previously (6–12 s ago, Experiment 1). Furthermore, this serial dependence effect was selective for the gender of the previously seen faces (Experiment 2), whereas a pull on the current perceived expression occurred even when the previously seen face had a different ethnicity (Experiment 3). The results provide an existence proof for serial dependence in emotion perception. The continuity field is therefore a mechanism that helps maintain the perceptual stability of emotional expression, in addition to facial identity and low-level features (Alais et al., 2017; Cicchini et al., 2014; Cicchini, Mikellidou, & Burr, 2017; Corbett et al., 2011; Fischer & Whitney, 2014; Liberman et al., 2014; Manassi et al., 2017).

In Experiment 3, serial dependence was not ethnicity selective. This may seem at odds with the second experiment, which revealed a gender selectivity in the serial dependence of perceived emotion. However, the degree of ethnicity (or gender) invariance of serial dependence in emotion perception may be related to the perceived similarity of the faces. The two ethnicities (Fig. 4a-b) may have appeared more similar than the two genders (Fig. 1a-b). It is still possible that serial dependence may be selective for ethnicity or identity with two sufficiently different faces. Given the results, a conservative interpretation is that serial dependence in perceived emotional expression is tuned for face similarity.

Several alternative explanations for our results can be ruled out. Neither a generalized response bias nor a motor serial dependence would predict the serial dependence we report (Luce & Green, 1974; Tanner et al., 1970; Wiegersma, 1982a, 1982b), since participants did not show expression serial dependence when they responded to a 1-back face of a different gender. Furthermore, adaptation and its associated negative aftereffects, priming, and other phenomena show a type of perceptual dependence on the recent past, yet remain distinct from serial dependence and the CF. Adaptation studies show that prior exposure to a variety of stimulus features (Anstis, Verstraten, & Mather, 1998; Campbell & Maffei, 1971; Webster et al., 2004) results in a stimulus-specific negative aftereffect, or perceptual repulsion, away from the adapting stimulus (for reviews, see Thompson & Burr, 2009; Webster, 2012). Additionally, both emotional expression and facial identity have previously been reported to exhibit negative aftereffects (Carbon & Leder, 2005; Fox & Barton, 2007; Fox, Oruc, & Barton, 2008; Leopold, Rhodes, Muller, & Jeffery, 2005; Rhodes, Jeffery, Clifford, & Leopold, 2007; Tillman & Webster, 2012; Webster & MacLin, 1999; Webster et al., 2004).

However, our experiments show a positive perceptual pull towards the recent past and are therefore not a result of known forms of adaptation. The reason we find serial dependence rather than a negative aftereffect (Taubert, Alais, et al., 2016) is likely due to (1) the brief exposure duration in our experiments (adaptation studies generally expose observers to an image for many seconds or even minutes), (2) the long inter-stimulus intervals in our experiments, and (3) the fact that each trial had a random expression, which would tend to wash out adaptation and reduce negative aftereffects. With that in mind, both adaptation and serial dependence are likely operating here and in previous studies, albeit with different time courses, as found in the orientation domain (Alais et al., 2017; Fischer & Whitney, 2014; Fritsche et al., 2017; Liberman et al., 2016; Manassi et al., 2017). In addition, adaptation and positive serial dependencies share some similarities. For example, like positive serial dependencies, adaptation to emotional expression and the associated negative aftereffects can be tuned to the identity of the face (Fox & Barton, 2007; Fox et al., 2008; Schweinberger & Soukup, 1998; Schweinberger, Burton, & Kelly, 1999; Wild-Wall, Dimigen, & Sommer, 2008). Thus, although serial dependence and adaptation for facial expression show different time scales and opposite perceptual effects, they do both seem to conform to the identity of the face. More generally, it is important to be cautious about interpreting negative and positive aftereffects. Positive and negative serial dependencies can simultaneously contribute to perceptual outcomes (Alais et al., 2017; Fischer & Whitney, 2014; Fritsche et al., 2017; Maus, Chaney, Liberman, & Whitney, 2013; Taubert, Alais, et al., 2016), and finding one does not rule out the presence of the other in this study or in any other (see Bliss, Sun, & D'Esposito, 2017).

The perceptual serial dependence we report may be related to priming effects (Kouider, Berthet, & Faivre, 2011; Kristjansson, Ingvarsdottir, & Teitsdottir, 2008; Kristjansson, Bjarnason, Hjaltason, & Stefansdottir, 2009; Maljkovic & Nakayama, 1994, 1996), but there are important differences. Priming generally manifests in reaction time (Kahneman, Treisman, & Gibbs, 1992; Maljkovic & Nakayama, 1994, 1996) and, where relevant, can improve discriminability of primed stimuli (Sigurdardottir, Kristjansson, & Driver, 2008); serial dependence does not affect reaction time and instead reduces the discriminability of similar objects (Fischer & Whitney, 2014; Liberman et al., 2014), in the service of perceptual stability. The CF is a spatiotemporal operator that may influence perception, memory, decision, and action (Kiyonaga, Scimeca, Bliss, & Whitney, 2017). It can affect appearance: it makes (even slightly different) objects look the same over time (Cicchini et al., 2017; Fischer & Whitney, 2014). The CF is one mechanism (of potentially many) that could generate priming-like effects, as long as our understanding of priming is broadened. This is not to say that priming, adaptation, and serial dependence are unrelated, as they may play complementary roles in helping to establish or maintain perceptual stability.

The exact mechanism(s) of serial dependence are still under debate (Fischer & Whitney, 2014; Fritsche et al., 2017). Serial dependence was shown to act directly on perception (Cicchini et al., 2017; Fischer & Whitney, 2014), biasing the appearance of our current percept and, hence, it has a perceptual component. Perceptual decisions were also proposed as a necessary component of serial dependence (Fritsche et al., 2017; but see Cicchini et al., 2017), but our results cannot be entirely explained by this. On each trial, observers make a decision regarding the emotional expression they are presented with. Accordingly, if the serial dependence mechanism were only based on sequential decisions, serial dependence should occur independent of face identity, and should not be gender-specific (as shown in Experiment 2). Memory was also proposed as a component of serial dependence, manifesting in terms of proactive interference (Bliss et al., 2017; Kiyonaga et al., 2017). Interestingly, proactive interference may simply be a special kind of serial dependence which occurs in memory. Future research should investigate whether proactive interference displays the same temporal, spatial, and featural tuning exhibited in classic perceptual serial dependence (Fischer & Whitney, 2014).

Although there is a great deal of debate surrounding the origin of sequential dependencies in perception (Bliss et al., 2017; Cicchini et al., 2017; Fischer & Whitney, 2014; Fritsche et al., 2017; Kiyonaga et al., 2017; Liberman et al., 2014; Pascucci et al., 2017; Suárez-Pinilla, Anil, & Roseboom, 2018), serial dependence may very likely happen at every level of visual and cognitive processing (perception, attention, decision, memory, and motor systems), and future research should investigate the interaction (and degree of independence) between all these components. Although our data cannot clearly disentangle these different components, they provide further insights regarding the types of information and selectivity that serial dependence can exhibit. Whether serial dependence is due to perception, decision, memory, or a combination of these, our results show that serial dependence does not indiscriminately occur with all kinds of stimuli, but can be selective for face similarity. This builds on previous work showing that serial dependence effects can be selective for different object or stimulus categories (Taubert, Alais, et al., 2016; Xia et al., 2016; Kok, Taubert, Van der Burg, Rhodes, & Alais, 2017).

Our results also have important implications for models of face processing. Some models assume a dissociation between the perception of identity (gender/ethnicity) and expression (Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000), suggesting that the analysis of facial identity occurs largely independently of the analysis of facial expression. Our results suggest that serial dependence in perceived facial expression is selective for face similarity and hence support the hypothesis that these two levels of analysis are linked. In accordance with this hypothesis (Bruce & Young, 1986; Calder & Young, 2005), it was shown that negative adaptation to facial expressions is stronger when the adaptor and test stimuli share the same identity (Campbell & Burke, 2009; Ellamil, Susskind, & Anderson, 2008; Fox & Barton, 2007). Furthermore, judging facial expression can be influenced by concomitant changes in identity (Schweinberger & Soukup, 1998), and functional magnetic resonance imaging (fMRI) studies have found a link between identity and facial expression (Andrews & Ewbank, 2004; Davies-Thompson, Gouws, & Andrews, 2009).

In summary, our results demonstrate a serial dependence in perceived emotional expression that is selective for face similarity. A continuity field may therefore operate on perceived expression of faces for the purpose of perceptual stability across similar identities. By recycling previously perceived identities and expressions, the object-selective CF decreases the neural computations necessary for the identification of similar objects over time.

Acknowledgements

Supported by National Eye Institute Grant No. 2RO1EY018216 to D.W., Kirschstein National Research Service Award under Grant No. 1F31EY025942 and National Science Foundation Graduate Research Fellowship under Grant No. 1106400 to A.L. This work was presented, in part, at the Vision Sciences Society meeting in 2015.

References

  1. Alais, D., Leung, J., & Van der Burg, E. (2017). Linear summation of repulsive and attractive serial dependencies: Orientation and motion dependencies sum in motion perception. The Journal of Neuroscience, 37(16), 4381-4390. https://doi.org/10.1523/JNEUROSCI.4601-15.2017
  2. Andrews, T. J., & Ewbank, M. P. (2004). Distinct representations for facial identity and changeable aspects of faces in the human temporal lobe. NeuroImage, 23(3), 905-913.
  3. Anstis, S., Verstraten, F. A., & Mather, G. (1998). The motion aftereffect. Trends in Cognitive Sciences, 2(3), 111-117. https://doi.org/10.1016/S1364-6613(98)01142-5
  4. Bliss, D. P., Sun, J. J., & D'Esposito, M. (2017). Serial dependence is absent at the time of perception but increases in visual working memory. Scientific Reports, 7(1), 14739. https://doi.org/10.1038/s41598-017-5-7
  5. Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10(4), 433-436. https://doi.org/10.1163/156856897X00357
  6. Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77(3), 305-327.
  7. Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6(8), 641-651.
  8. Calvo, M. G., & Esteves, F. (2005). Detection of emotional faces: Low perceptual threshold and wide attentional span. Visual Cognition, 12(1), 13-27. https://doi.org/10.1080/13506280444000094
  9. Campbell, J., & Burke, D. (2009). Evidence that identity-dependent and identity-independent neural populations are recruited in the perception of five basic emotional facial expressions. Vision Research, 49(12), 1532-1540.
  10. Campbell, F. W., & Maffei, L. (1971). The tilt after-effect: A fresh look. Vision Research, 11(8), 833-840. https://doi.org/10.1016/0042-6989(71)90005-8
  11. Carbon, C.-C., & Leder, H. (2005). Face adaptation: Changing stable representations of familiar faces within minutes? Advances in Cognitive Psychology, 1(1), 1-7. https://doi.org/10.5709/acp-0038-8
  12. Cicchini, G. M., Anobile, G., & Burr, D. C. (2014). Compressive mapping of number to space reflects dynamic encoding mechanisms, not static logarithmic transform. Proceedings of the National Academy of Sciences of the United States of America, 111(21), 7867-7872. https://doi.org/10.1073/pnas.1402785111
  13. Cicchini, G. M., Mikellidou, K., & Burr, D. (2017). Serial dependencies act directly on perception. Journal of Vision, 17(14), 6. https://doi.org/10.1167/17.14.6
  14. Clifford, C. W., Webster, M. A., Stanley, G. B., Stocker, A. A., Kohn, A., Sharpee, T. O., & Schwartz, O. (2007). Visual adaptation: Neural, psychological and computational aspects. Vision Research, 47(25), 3125-3131. https://doi.org/10.1016/j.visres.2007.08.023
  15. Corbett, J. E., Fischer, J., & Whitney, D. (2011). Facilitating stable representations: Serial dependence in vision. PLoS One, 6(1), e16701. https://doi.org/10.1371/journal.pone.0016701
  16. Croker, V., & McDonald, S. (2005). Recognition of emotion from facial expression following traumatic brain injury. Brain Injury, 19(10), 787-799. https://doi.org/10.1080/02699050500110033
  17. Davies-Thompson, J., Gouws, A., & Andrews, T. J. (2009). An image-dependent representation of familiar and unfamiliar faces in the human ventral stream. Neuropsychologia, 47(6), 1627-1635.
  18. Edwards, K. (1998). The face of time: Temporal cues in facial expressions of emotion. Psychological Science, 9(4), 270-276. https://doi.org/10.1111/1467-9280.00054
  19. Efron, B., & Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science, 1(1), 54-75.
  20. Ekman, P. (1976). Pictures of facial affect. Consulting Psychologists Press.
  21. Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124-129. https://doi.org/10.1037/h0030377
  22. Ellamil, M., Susskind, J. M., & Anderson, A. K. (2008). Examinations of identity invariance in facial expression adaptation. Cognitive, Affective, & Behavioral Neuroscience, 8(3), 273-281.
  23. Fischer, J., & Whitney, D. (2014). Serial dependence in visual perception. Nature Neuroscience, 17(5), 738-743. https://doi.org/10.1038/nn.3689
  24. Fox, C. J., & Barton, J. J. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Research, 1127(1), 80-89. https://doi.org/10.1016/j.brainres.2006.09.104
  25. Fox, C. J., Oruc, I., & Barton, J. J. (2008). It doesn't matter how you feel. The facial identity aftereffect is invariant to changes in facial expression. Journal of Vision, 8(3), 11. https://doi.org/10.1167/8.3.11
  26. Fritsche, M., Mostert, P., & de Lange, F. P. (2017). Opposite effects of recent history on perception and decision. Current Biology, 27(4), 590-595. https://doi.org/10.1016/j.cub.2017.01.006
  27. Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4(6), 223-233.
  28. Jacobs, D. H., Shuren, J., Bowers, D., & Heilman, K. M. (1995). Emotional facial imagery, perception, and expression in Parkinson's disease. Neurology, 45(9), 1696-1702. https://doi.org/10.1212/WNL.45.9.1696
  29. Kahneman, D., Treisman, A., & Gibbs, B. J. (1992). The reviewing of object files: Object-specific integration of information. Cognitive Psychology, 24(2), 175-219. https://doi.org/10.1016/0010-0285(92)90007-O
  30. Kirouac, G., & Dore, F. Y. (1984). Judgment of facial expressions of emotion as a function of exposure time. Perceptual and Motor Skills, 59(1), 147-150. https://doi.org/10.2466/pms.1984.59.1.147
  31. Kiyonaga, A., Scimeca, J. M., Bliss, D. P., & Whitney, D. (2017). Serial dependence across perception, attention, and memory. Trends in Cognitive Sciences, 21(7), 493-497. https://doi.org/10.1016/j.tics.2017.04.011
  32. Kok, R., Taubert, J., Van der Burg, E., Rhodes, G., & Alais, D. (2017). Face familiarity promotes stable identity recognition: Exploring face perception using serial dependence. Royal Society Open Science, 4(3), 160685.
  33. Kouider, S., Berthet, V., & Faivre, N. (2011). Preference is biased by crowded facial expressions. Psychological Science, 22(2), 184-189. https://doi.org/10.1177/0956797610396226
  34. Kristjansson, A., Bjarnason, A., Hjaltason, A. B., & Stefansdottir, B. G. (2009). Priming of luminance-defined motion direction in visual search. Attention, Perception, & Psychophysics, 71(5), 1027-1041. https://doi.org/10.3758/71.5.1027
  35. Kristjansson, A., Ingvarsdottir, A., & Teitsdottir, U. D. (2008). Object- and feature-based priming in visual search. Psychonomic Bulletin & Review, 15(2), 378-384. https://doi.org/10.3758/pbr.15.2.378
  36. Leopold, D. A., Rhodes, G., Muller, K. M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society B: Biological Sciences, 272(1566), 897-904. https://doi.org/10.1098/rspb.2004.3022
  37. Liberman, A., Fischer, J., & Whitney, D. (2014). Serial dependence in the perception of faces. Current Biology, 24(21), 2569-2574. https://doi.org/10.1016/j.cub.2014.09.025
  38. Liberman, A., Zhang, K., & Whitney, D. (2016). Serial dependence promotes object stability during occlusion. Journal of Vision, 16(15), 16. https://doi.org/10.1167/16.15.16
  39. Luce, R. D., & Green, D. M. (1974). Detection, discrimination, and recognition. Handbook of Perception, 2, 299-342.
  40. Maljkovic, V., & Nakayama, K. (1994). Priming of pop-out: I. Role of features. Memory & Cognition, 22(6), 657-672.
  41. Maljkovic, V., & Nakayama, K. (1996). Priming of pop-out: II. The role of position. Perception & Psychophysics, 58(7), 977-991. https://doi.org/10.3758/BF03206826
  42. Manassi, M., Liberman, A., Chaney, W., & Whitney, D. (2017). The perceived stability of scenes: Serial dependence in ensemble representations. Scientific Reports, 7(1), 1971. https://doi.org/10.1038/s41598-017-02201-5
  43. Manassi, M., Liberman, A., Kosovicheva, A., Zhang, K., & Whitney, D. (2018). Serial dependence in position occurs at the time of perception. Psychonomic Bulletin & Review. https://doi.org/10.3758/s13423-018-1454-5
  44. Martin, F., Baudouin, J. Y., Tiberghien, G., & Franck, N. (2005). Processing emotional expression and facial identity in schizophrenia. Psychiatry Research, 134(1), 43-53. https://doi.org/10.1016/j.psychres.2003.12.031
  45. Maus, G. W., Chaney, W., Liberman, A., & Whitney, D. (2013). The challenge of measuring long-term positive aftereffects. Current Biology, 23(10), R438-R439. https://doi.org/10.1016/j.cub.2013.03.024
  46. Milders, M., Sahraie, A., & Logan, S. (2008). Minimum presentation time for masked facial expression discrimination. Cognition & Emotion, 22(1), 63-82. https://doi.org/10.1080/02699930701273849
  47. Pascucci, D., Mancuso, G., Santandrea, E., Della Libera, C., Plomp, G., & Chelazzi, L. (2017). Laws of concatenated perception: Vision goes for novelty, decisions for perseverance. bioRxiv.
  48. Rhodes, G., Jeffery, L., Clifford, C. W., & Leopold, D. A. (2007). The timecourse of higher-level face aftereffects. Vision Research, 47(17), 2291-2296. https://doi.org/10.1016/j.visres.2007.05.012
  49. Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition and Personality, 9(3), 185-211. https://doi.org/10.2190/DUGG-P24E-52WK-6CDG
  50. Schweinberger, S. R., Burton, A. M., & Kelly, S. W. (1999). Asymmetric dependencies in perceiving identity and emotion: Experiments with morphed faces. Perception & Psychophysics, 61(6), 1102-1115. https://doi.org/10.3758/bf03207617
  51. Schweinberger, S. R., & Soukup, G. R. (1998). Asymmetric relationships among perceptions of facial identity, emotion, and facial speech. Journal of Experimental Psychology: Human Perception and Performance, 24(6), 1748-1765. https://doi.org/10.1037/0096-1523.24.6.1748
  52. Sigurdardottir, H. M., Kristjansson, A., & Driver, J. (2008). Repetition streaks increase perceptual sensitivity in visual search of brief displays. Visual Cognition, 16(5), 643-658. https://doi.org/10.1080/13506280701218364
  53. Stel, M., & van Knippenberg, A. (2008). The role of facial mimicry in the recognition of affect. Psychological Science, 19(10), 984-985. https://doi.org/10.1111/j.1467-9280.2008.02188.x
  54. Suárez-Pinilla, M., Anil, S., & Roseboom, W. (2018). Serial dependence in visual variance. PsyArXiv. https://doi.org/10.17605/OSF.IO/GZDR6
  55. Tanner, T. A., Rauk, J. A., & Atkinson, R. C. (1970). Signal recognition as influenced by information feedback. Journal of Mathematical Psychology, 7(2), 259. https://doi.org/10.1016/0022-2496(70)90048-9
  56. Taubert, J., Alais, D., & Burr, D. (2016). Different coding strategies for the perception of stable and changeable facial attributes. Scientific Reports, 6, 32239. https://doi.org/10.1038/srep32239
  57. Taubert, J., Van der Burg, E., & Alais, D. (2016). Love at second sight: Sequential dependence of facial attractiveness in an on-line dating paradigm. Scientific Reports, 6, 22740. https://doi.org/10.1038/srep22740
  58. Thompson, P., & Burr, D. (2009). Visual aftereffects. Current Biology, 19(1), R11-R14. https://doi.org/10.1016/j.cub.2008.10.014
  59. Tillman, M. A., & Webster, M. A. (2012). Selectivity of face distortion aftereffects for differences in expression or gender. Frontiers in Psychology, 3(14), 14. https://doi.org/10.3389/fpsyg.2012.00014
  60. Tracy, J. L., & Randles, D. (2011). Four models of basic emotions: A review of Ekman and Cordaro, Izard, Levenson, and Panksepp and Watt. Emotion Review, 3(4), 397-405. https://doi.org/10.1177/1754073911410747
  61. Tracy, J. L., & Robins, R. W. (2008). The automaticity of emotion recognition. Emotion, 8(1), 81-95. https://doi.org/10.1037/1528-3542.8.1.81
  62. Webster, M. A. (2012). Evolving concepts of sensory adaptation. F1000 Biology Reports, 4, 21. https://doi.org/10.3410/B4-21
  63. Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004). Adaptation to natural facial categories. Nature, 428(6982), 557-561. https://doi.org/10.1038/nature02420
  64. Webster, M. A., & MacLin, O. H. (1999). Figural aftereffects in the perception of faces. Psychonomic Bulletin & Review, 6(4), 647-653. https://doi.org/10.3758/bf03212974
  65. Wiegersma, S. (1982a). A control theory of sequential response production. Psychological Research, 44(2), 175-188. https://doi.org/10.1007/Bf00308449
  66. Wiegersma, S. (1982b). Sequential response bias in randomized response sequences: A computer simulation. Acta Psychologica, 52(3), 249-256. https://doi.org/10.1016/0001-6918(82)90011-7
  67. Wild-Wall, N., Dimigen, O., & Sommer, W. (2008). Interaction of facial expressions and familiarity: ERP evidence. Biological Psychology, 77(2), 138-149. https://doi.org/10.1016/j.biopsycho.2007.10.001
  68. Xia, Y., Leib, A. Y., & Whitney, D. (2016). Serial dependence in the perception of attractiveness. Journal of Vision, 16(15), 28. https://doi.org/10.1167/16.15.28

Copyright information

© The Psychonomic Society, Inc. 2018

Authors and Affiliations

  1. Helen Wills Neuroscience Institute, University of California, Berkeley, USA
  2. Department of Psychology, University of California, Berkeley, USA
  3. Vision Science Group, University of California, Berkeley, USA
