In Experiments 1 and 2, we found that flicker adaptation did not decrease the precision of averaging orientations or sizes. In Experiment 3, we tested whether the adaptation influenced the averaging of more complex features, such as facial expressions. In this experiment, we measured and compared participants’ precision in discriminating mean facial expressions with and without adaptation. As in Experiments 1 and 2, we included a heterogeneity condition. In addition, we used two emotion types, happy and angry, to test whether the adaptation effect differed depending on emotion.
Methods
Participants
Fourteen participants (Mage = 25.57 years, SDage = 1.80; seven males) participated in Experiment 3. This sample size was calculated on the basis of the results of Experiments 1 and 2. As in Experiments 1 and 2, we conducted a simulation-based power analysis for a three-way repeated-measures ANOVA; the estimated power was 1.00 for the main effect of adaptation, .99 for the heterogeneity effect, and .61 for their interaction, based on a Type I error rate of .05. Two participants had also participated in Experiment 1, and 10 participants, including the first author, had participated in both Experiments 1 and 2. Because one participant dropped out after the second session, 13 participants completed the entire experiment. All participants reported normal or corrected-to-normal visual acuity. The experimental procedures were approved by the Yonsei University Institutional Review Board, and we obtained written informed consent from all participants prior to participation. After the experiments were completed, participants other than the first author were paid 5,000 KRW per 30 min.
Stimuli
For the mean facial expression judgment task, we used a morphed-face stimulus set from Sun and Chong (2020). The set comprised two vectors corresponding to the emotion types (happy and angry), and each vector included 201 faces. A vector represents an emotional scale based on the norm-based coding account of facial expressions (Gwinn, Matera, O’Neil, & Webster, 2018; Palermo et al., 2018). Each scale ranges from the full-blown expression to the anti-emotion, and the intensity of an emotion was expressed as the extent to which a facial expression deviated from a face with a neutral expression. Specifically, the intensity of each emotion was defined by the ratio of full-blown to neutral expression in a morphed face. Each vector includes a neutral face (0%) and 100 emotional faces ranging from 1% to 100% (full-blown). In addition, each vector includes 100 levels (-100% to -1%) of anti-emotion faces, created by morphing from the neutral face in the direction opposite to the emotional faces. Thus, emotional and anti-emotional faces of the same intensity level (e.g., 50% and -50%) correspond to the same physical difference from the neutral face but are symmetrical to each other with respect to it. In the mean facial expression judgment task, eight oval-shaped faces were used. Each face subtended 1.46 × 1.92 dva and was located at the virtual vertices of two concentric squares. The vertices were 1.5 dva (inner square) and 3 dva (outer square) from the center of the screen. In every trial, the inner square array was rotated by a random angle, and the outer array was rotated 40–50° more than the inner one. Each vertex position was randomly jittered up to 0.1 dva horizontally and vertically to prevent the stimuli from appearing in an overly regular arrangement. The test facial expression intensity had seven levels (0%, 13%, 26%, 39%, 52%, 65%, and 78%).
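The stimulus geometry described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the use of Python's random module are our own, and we assume the vertices of each square lie at 90° intervals at the stated distances from the screen center.

```python
import math
import random

def face_positions(seed=None):
    """Sketch of the eight face positions (in dva) on two concentric squares."""
    rng = random.Random(seed)
    inner_rot = rng.uniform(0.0, 360.0)              # inner square: random rotation
    outer_rot = inner_rot + rng.uniform(40.0, 50.0)  # outer square: 40-50 deg more
    positions = []
    for radius, rot in ((1.5, inner_rot), (3.0, outer_rot)):
        for k in range(4):                           # four vertices per square
            angle = math.radians(rot + 90.0 * k)
            jx = rng.uniform(-0.1, 0.1)              # jitter up to 0.1 dva
            jy = rng.uniform(-0.1, 0.1)
            positions.append((radius * math.cos(angle) + jx,
                              radius * math.sin(angle) + jy))
    return positions
```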
In the homogeneous condition, all faces had the same intensity as the test mean. In the heterogeneous condition, the intensities of individual faces were sampled from a uniform distribution and normalized, as closely as possible, to the test mean intensity and a target standard deviation of 10%. Note that the intensity of facial expressions had to be an integer because it indexed discrete faces in the vector (a one-step change in the index corresponded to a 1% change in intensity). Thus, we first randomly sampled eight numbers from a uniform distribution, normalized them to the test mean and the target standard deviation (10%), and rescaled them as follows: we rounded the eight sampled numbers and averaged them; if the mean of the rounded numbers did not equal the test mean, we randomly picked one of the rounded numbers and adjusted it so that the mean matched the test mean as closely as possible. The reference array always had a mean intensity of 0%, and the test array had one of the seven test mean intensities. Note that anti-emotional faces (-1% to -100%) were not used as test means because a pilot study showed that sensitivity to anti-emotional faces was lower than to emotional faces, consistent with the results of Sun and Chong (2020). Nevertheless, individual faces in a heterogeneous block (particularly in displays with lower test means, such as 0% and 13%) were sometimes sampled from the anti-emotional part of the vector. For example, on a trial with a test mean of 0%, the individual faces might be -17%, -10%, -6%, 1%, 4%, 7%, 9%, and 12% (M = 0%, SD = 10.11). For adaptation, the general methods were the same as in Experiments 1 and 2. Each adaptation area was 2.89 times larger than the area of the stimulus array to maximize the effect of adaptation.
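The sampling-and-rescaling procedure for the heterogeneous condition can be sketched as below. This is our own minimal illustration under assumptions the text leaves open: the function name, the particular uniform range before normalization, and the use of the population standard deviation are ours, and the single adjusted value is not clipped to the vector's -100% to 100% range.

```python
import random
import statistics

def sample_intensities(test_mean, n=8, target_sd=10.0, seed=None):
    """Sample n integer intensities with mean ~= test_mean and SD ~= target_sd."""
    rng = random.Random(seed)
    # draw n values from a uniform distribution
    raw = [rng.uniform(0.0, 1.0) for _ in range(n)]
    # normalize to the test mean and the target standard deviation
    m = statistics.mean(raw)
    s = statistics.pstdev(raw) or 1.0
    scaled = [(x - m) / s * target_sd + test_mean for x in raw]
    # round to integers, since intensities index discrete morph levels
    ints = [round(x) for x in scaled]
    # if rounding shifted the mean, adjust one randomly chosen value
    diff = round(test_mean * n) - sum(ints)
    if diff != 0:
        ints[rng.randrange(n)] += diff
    return ints
```

With an integer test mean, the final adjustment makes the sample mean match the test mean exactly; the SD only approximates 10% because of rounding, consistent with the "as closely as possible" wording above.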
Design and procedure
The experimental design was the same as in Experiments 1 and 2, except that participants completed four sessions because two emotion types (happy and angry) were used.
The general procedures in Experiment 3 were the same as those in Experiments 1 and 2, except that the reference and test stimuli were presented sequentially (depicted in Fig. 5). In the adaptation phase, a flickering dynamic-noise adaptor was presented for 10 s on the first trial and for 5 s on subsequent trials. After adaptation, the reference and test stimuli were presented serially for 250 ms each with an ISI of 500 ms, and their order was randomized on every trial. When the face stimuli disappeared and the crosshair turned red, participants indicated which interval (first or second) contained faces with the stronger facial expression on average, using the keyboard number pad (1 for the first interval, 2 for the second). Before each main session, a practice session with auditory feedback was provided, as in Experiments 1 and 2. One difference was that the practice session could be repeated up to three times because participants in a pilot experiment reported the task of averaging facial expressions to be difficult. No feedback was provided in the main sessions.
Analysis
As in Experiments 1 and 2, the proportion of “stronger” responses was fitted with a cumulative Gaussian function. Note that in Experiments 1 and 2 we used the slope, a parameter inversely proportional to the differential threshold, because the test levels could be either larger or smaller than the reference level. By comparison, the test levels in Experiment 3 were always equal to or larger than the reference level, and the chance level was 50%. We therefore obtained the threshold at which the function reached 75% and defined the reciprocal of that threshold as the sensitivity for each condition. The sensitivity in Experiment 3 was thus inversely proportional to the absolute threshold for detecting the presence of an average expression against the neutral reference. Therefore, sensitivity in Experiment 3 can also be considered a parameter denoting the precision of the mean representation, like the slopes in Experiments 1 and 2.
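Given fitted parameters of the cumulative Gaussian, the 75% threshold and its reciprocal can be computed as in this sketch. The function name and the parameter values in the test are hypothetical; the text above does not specify the fitting routine, only how sensitivity is derived from the fitted function.

```python
from statistics import NormalDist

def sensitivity_from_fit(mu, sigma, p=0.75):
    """Reciprocal of the test level at which the fitted cumulative Gaussian reaches p."""
    threshold = NormalDist(mu, sigma).inv_cdf(p)  # intensity level yielding p "stronger" responses
    return 1.0 / threshold
```

Because the 75% point of a cumulative Gaussian is mu + 0.674 * sigma, a lower threshold (steeper, earlier-rising function) yields a higher sensitivity value.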
We conducted both frequentist and Bayesian versions of three-way repeated-measures ANOVAs on sensitivities with a 2 (adaptation) × 2 (heterogeneity) × 2 (emotion) design.
Results and discussion
The average sensitivities for each condition are plotted in Fig. 6. We examined whether the adaptation influenced the averaging of facial expressions and whether the pattern of the effect changed depending on the heterogeneity and the emotion type of a set. We found that participants were more sensitive to mean facial expression after adaptation than without adaptation. In addition, they were more sensitive to sets of happy faces than to sets of angry faces. Sensitivity did not differ between the homogeneous and heterogeneous conditions, and the pattern of adaptation did not differ depending on either heterogeneity or emotion. These results were supported by the following statistical analyses. We observed a significant main effect of adaptation, F(1, 12) = 6.82, p = .023, ηp² = .36, BFinclusion = 40.78, showing higher sensitivity for discriminating mean facial expressions after adaptation (M = 6.46 × 10⁻², SD = 2.22 × 10⁻²) than at baseline (no adaptation; M = 5.29 × 10⁻², SD = 1.83 × 10⁻²). Participants showed higher sensitivity to happy faces (M = 6.95 × 10⁻², SD = 2.11 × 10⁻²) than to angry faces (M = 4.80 × 10⁻², SD = 1.47 × 10⁻²), F(1, 12) = 23.19, p < .001, ηp² = .66, BFinclusion = 2.83 × 10⁶. However, sensitivity did not differ significantly between the homogeneous (M = 6.04 × 10⁻², SD = 1.79 × 10⁻²) and heterogeneous conditions (M = 5.71 × 10⁻², SD = 2.39 × 10⁻²), F(1, 12) = 1.13, p = .309, ηp² = .01, BFinclusion = 0.29. In addition, we did not observe any significant interactions, emotion type × adaptation: F(1, 12) = 0.54, p = .475, ηp² = .04, BFinclusion = 0.34; emotion type × heterogeneity: F(1, 12) = 2.90, p = .114, ηp² = .19, BFinclusion = 0.97; adaptation × heterogeneity: F(1, 12) = 0.55, p = .473, ηp² = .04, BFinclusion = 0.31; emotion type × adaptation × heterogeneity: F(1, 12) = 0.36, p = .561, ηp² = .03, BFinclusion = 0.44, indicating that the pattern of the adaptation effect did not differ depending on the other conditions.
Consistent with the results for averaging orientations and sizes in Experiments 1 and 2, the performance of averaging facial expressions did not decrease after the flicker adaptation. Rather, we found that the precision of averaging increased after adaptation, and this pattern did not vary with the heterogeneity or the emotion of a set. These results suggest that reducing LSF information helps observers form a more precise mean representation. Furthermore, the adaptation effect did not differ between emotions, suggesting that the ability to average facial expressions may be similar across emotions. Similarly, Sun and Chong (2020) showed that the influence of individual faces on the mean facial expression did not differ between emotions, even though perceiving individual emotions relies on different facial features (e.g., the eyes for angry faces, the mouth for happy faces). In addition, they showed that increasing the number of inverted faces gradually disrupted the precision of averaging facial expressions, suggesting that the precision of individual faces is related to the precision of averaging them. Overall, our results suggest that ensemble computation does not rely solely on coarse information based on the M-pathway but might also draw on the fine information of visual inputs.