Participants’ responses when classifying the stimuli were recorded using a response box. The experiment ran on a computer with a 19-inch screen at a resolution of 1280 × 1024. The experimental task was programmed in E-Prime 2.0. Facial muscle activity was recorded and processed with MindWare Technologies EMG Application software (version 2.5).
Data preparation and analysis
Behavioral measures
Emotion detection accuracy was calculated as a percentage per stimulus type for both tasks. Positive and negative words (task 1) and happy and angry facial expressions (task 2) served as target stimuli; classifying these stimuli as emotional was considered accurate, while classifying them as non-emotional was considered inaccurate.
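The scoring rule above can be sketched as follows; the trial records and field names here are hypothetical illustrations, not the study's actual data format.

```python
# Minimal sketch of the accuracy scoring: a trial counts as accurate when
# an emotional target is classified as "emotional", and accuracy is
# expressed as a percentage per stimulus type. (Hypothetical records.)
trials = [
    {"stimulus_type": "positive_word", "response": "emotional"},
    {"stimulus_type": "positive_word", "response": "non-emotional"},
    {"stimulus_type": "negative_word", "response": "emotional"},
    {"stimulus_type": "negative_word", "response": "emotional"},
]

def accuracy_per_type(trials):
    """Return percent-correct per stimulus type for emotional targets."""
    counts, correct = {}, {}
    for t in trials:
        s = t["stimulus_type"]
        counts[s] = counts.get(s, 0) + 1
        correct[s] = correct.get(s, 0) + (t["response"] == "emotional")
    return {s: 100.0 * correct[s] / counts[s] for s in counts}

print(accuracy_per_type(trials))
# {'positive_word': 50.0, 'negative_word': 100.0}
```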
Facial EMG
Facial muscle activity at the corrugator and zygomaticus sites was measured using bipolar placements of Ag/AgCl miniature surface electrodes filled with electrode gel and attached to the left side of the face. The skin was cleansed and prepared with alcohol prep pads and semi-abrasive lotion. The electrodes were placed following the methods described by Fridlund and Cacioppo (1986), and all pairs were referenced to a forehead electrode placed near the midline. The raw EMG signal was measured with a BioNex Bio-Potential amplifier and stored at a sampling frequency of 1000 Hz. Raw data were filtered with a 30–300 Hz band-pass filter and a 50 Hz notch filter and then rectified. Facial muscle activity recorded during the last 500 ms of each blank screen shown before the fixation point served as the baseline measure for that trial, and difference scores were calculated relative to this baseline. Prior to statistical analysis, data were collapsed per trial type and averaged over the first 1000 ms of stimulus presentation (Footnote 3). One participant’s data were excluded because excessive task-irrelevant facial movements, due to tiredness, rendered the EMG measures unusable.
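The preprocessing pipeline described above (band-pass filter, notch filter, rectification, baseline difference scores) can be sketched with SciPy. The filter order and notch quality factor are assumptions, as the text does not report them, and for simplicity this sketch takes the baseline as the 500 ms immediately preceding stimulus onset, whereas in the study the baseline came from the blank screen before the fixation point.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 1000  # sampling frequency in Hz, as reported above

def preprocess_emg(raw):
    """Band-pass 30-300 Hz, 50 Hz notch, then full-wave rectify."""
    b, a = butter(4, [30, 300], btype="bandpass", fs=FS)  # order 4 assumed
    x = filtfilt(b, a, raw)
    b_n, a_n = iirnotch(50, Q=30, fs=FS)  # quality factor assumed
    x = filtfilt(b_n, a_n, x)
    return np.abs(x)  # rectification

def trial_score(signal, stim_onset):
    """Mean activity over the first 1000 ms of the stimulus minus the
    mean of a 500 ms baseline window (indices in samples at 1000 Hz)."""
    baseline = signal[stim_onset - 500:stim_onset].mean()
    return signal[stim_onset:stim_onset + 1000].mean() - baseline
```

Zero-phase filtering via `filtfilt` avoids shifting the EMG response in time relative to stimulus onset, which matters when averaging a fixed post-onset window.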
Statistical analyses
Because the two tasks used different visual targets (words or faces), we examined the behavioral data of each task separately by subjecting emotion detection accuracy levels to repeated measures ANOVAs with contextual support level and valence of the targets as within-subject variables. We first tested the main and interaction effects on emotion detection accuracy in an omnibus ANOVA following the experimental design. To gain more insight into the relationship between contextual support and emotion detection accuracy, we examined the linear and quadratic trend effects of contextual support. We furthermore examined the physiological data (facial EMG at the zygomaticus and corrugator muscles) for each task. Here the effects of interest were a main effect of valence and a possible interaction between valence and contextual support level. Although emotion detection accuracy was the main focus of the current studies, for exploratory purposes we also subjected the decision times to repeated measures ANOVAs for each task. In addition to the frequentist statistical tests, Bayesian analyses were performed to quantify the evidence for the hypotheses under investigation (main effect of contextual support) given the data. Bayes factors (BF) are reported; a larger BF represents more evidence in the data for the hypothesis under consideration. Where sphericity was violated, Greenhouse–Geisser corrections were applied and adjusted degrees of freedom are reported.
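The omnibus tests described above rest on the standard repeated-measures F ratio, which partitions out between-subject variance before testing the condition effect. A minimal NumPy sketch for a single within-subject factor is shown below with toy data; the original analyses were two-factor designs run in standard statistical software, so this is only an illustration of the underlying computation.

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA.
    data: (n_subjects, k_conditions) array, e.g. accuracy per support level.
    Returns the F statistic and its degrees of freedom."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    return F, df_cond, df_err

# Toy data: 3 participants x 3 contextual support levels (none, partial, full)
toy = np.array([[1.0, 2.0, 3.0],
                [2.0, 3.0, 4.0],
                [0.0, 2.0, 4.0]])
print(rm_anova_oneway(toy))  # F ≈ 16.0 with df = (2, 4)
```

The linear and quadratic trend tests correspond to the standard orthogonal polynomial contrasts over the three support levels, with weights (−1, 0, 1) and (1, −2, 1) respectively.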
Experimental task 1: emotion detection in written words
Emotion detection accuracy in written words
Participants’ emotion detection accuracy when classifying written words was analyzed with a repeated measures ANOVA with contextual support level (none, partial, or full) and valence of the written word (positive vs. negative) as within-participants factors.
The main effect of contextual support level was significant, F(2,52) = 4.25, p = 0.020, ηp2 = 0.14. As can be seen in Fig. 2, emotion detection was highest for fully supported written words (M = 90.1%, SD = 16.6), while partially supported (M = 86.0%, SD = 16.2) and contextually unsupported written words (M = 86.3%, SD = 15.1) had lower detection levels. A Bayesian analysis of variance showed that the data were 2.50 times more likely to reflect a main effect of contextual support level than not to reflect such an effect (BF10 = 2.50). There was no main effect of valence, F(1,26) = 2.32, p = 0.140, ηp2 = 0.08. Lastly, there was no interaction between valence and contextual information, F(2, 52) = 1.07, p = 0.349, ηp2 = 0.04.
Furthermore, specific trend tests revealed a significant linear effect of contextual support level (F(1,26) = 4.78, p = 0.038, ηp2 = 0.16), while the quadratic effect was not significant (F(1,26) = 3.41, p = 0.076, ηp2 = 0.12). Finally, the analyses did not yield an interaction effect between valence and contextual support for the linear trend (F(1,26) = 1.18, p = 0.287, ηp2 = 0.04) or the quadratic trend (F(1,26) = 1.01, p = 0.324, ηp2 = 0.04) (Footnote 4).
Decision times for emotion detection in written words
Participants’ decision times when classifying written words were analyzed with a repeated measures ANOVA with contextual support level (none, partial, or full) and valence of the written word (positive vs. negative) as within-participants factors.
No effect of contextual support level was found (F(1.49, 35.72) = 2.03, p = 0.156, ηp2 = 0.08), with similar decision times for contextually unsupported (M = 1037.9 ms, SD = 469.7), partially supported (M = 1083.1 ms, SD = 403.2), and fully supported (M = 1028.2 ms, SD = 371.6) written words; see Fig. 3. There was no main effect of valence (F(1,24) = 0.47, p = 0.499, ηp2 = 0.02). Lastly, there was no interaction between valence and contextual information (F(1.31, 31.33) = 2.52, p = 0.115, ηp2 = 0.10).
Zygomaticus activity to written words
Zygomaticus activity during the first 1000 ms of stimulus presentation was analyzed with a repeated measures ANOVA with contextual support level (none, partial, or full) and valence of the written word (positive vs. negative) as within-participants factors.
The main effect of valence on zygomaticus activity did not reach significance, F(1,26) = 2.92, p = 0.099, ηp2 = 0.10. A Bayesian paired samples t-test showed that the data were 1.37 times more likely to reflect a null effect than to reflect a difference based on valence (BF01 = 1.37). Furthermore, the interaction between valence of the written word and level of contextual support was not significant, F(1.63, 42.24) = 0.54, p = 0.551, ηp2 = 0.02.
Corrugator activity to written words
Corrugator activity during the first 1000 ms of stimulus presentation was analyzed with a repeated measures ANOVA with contextual support level (none, partial, or full) and valence of the written word (positive vs. negative) as within-participants factors.
This analysis revealed no significant main effect of valence of the written word on corrugator activity, F(1,26) = 3.02, p = 0.094, ηp2 = 0.10. A Bayesian paired samples t-test showed that the data were 1.30 times more likely to reflect a null effect than to reflect a difference based on valence (BF01 = 1.30). No interaction was found between valence of written word and level of contextual support, F(1.31, 35.10) = 0.62, p = 0.542, ηp2 = 0.02.
Experimental task 2: emotion detection in facial expressions
Emotion detection accuracy in facial expressions
Participants’ emotion detection accuracy when classifying the facial expressions was analyzed with a repeated measures ANOVA. Contextual support level (none, partial, or full) and valence of the facial expression (positive vs. negative) were the within-participants factors.
This analysis revealed no main effect of contextual support level, F(2, 52) = 1.20, p = 0.309, ηp2 = 0.04; see Fig. 4. A Bayesian analysis of variance showed that the data were 3.66 times more likely to reflect a null effect than a main effect of contextual support level (BF01 = 3.66). Furthermore, there was no main effect of valence, F(1,26) = 0.18, p = 0.671, ηp2 = 0.01. Lastly, there was no interaction between valence and contextual support level, F(2,52) = 0.59, p = 0.559, ηp2 = 0.02. Specific trend tests revealed neither a linear effect of contextual support level (F(1,26) = 1.18, p = 0.287, ηp2 = 0.04) nor a quadratic effect (F(1, 26) = 0.16, p = 0.696, ηp2 = 0.01). Finally, the analyses did not yield an interaction effect between valence and contextual support for the linear trend (F(1,26) = 1.15, p = 0.294, ηp2 = 0.04) or the quadratic trend (F(1,26) = 0.24, p = 0.878, ηp2 = 0.00) (Footnote 5).
Decision times for emotion detection in facial expressions
Participants’ decision times when classifying facial expressions were analyzed with a repeated measures ANOVA with contextual support level (none, partial, or full) and valence of the facial expression (positive vs. negative) as within-participants factors.
The main effect of contextual support level was significant (F(2,52) = 4.22, p = 0.020, ηp2 = 0.14). Decision times were longest for partially supported faces (M = 1045.0 ms, SD = 284.0), while decision times for contextually unsupported (M = 965.3 ms, SD = 316.7) and fully supported (M = 996.3 ms, SD = 268.6) facial expressions were shorter. There was no main effect of valence (F(1,26) = 0.56, p = 0.462, ηp2 = 0.02). Lastly, the interaction between valence and contextual information was also significant (F(2,52) = 4.73, p = 0.013, ηp2 = 0.15). As can be seen in Fig. 5, the difference appears to lie in decision times for negative, but not positive, facial expressions, with longer decision times for context-supported negative facial expressions.
Zygomaticus activity to facial expressions
Zygomaticus activity during the first 1000 ms of stimulus presentation was analyzed with a repeated measures ANOVA. Contextual support level (none, partial, or full) and valence of the facial expression (positive vs. negative) were the within-participants factors.
This analysis revealed a main effect of valence of the facial expression, F(1,26) = 4.56, p = 0.042, ηp2 = 0.15; see Fig. 6. In line with expectations, zygomaticus activity was stronger when participants saw positive (M = − 0.20 mV, SD = 0.65) than when they saw negative facial expressions (M = − 0.37 mV, SD = 0.75). A Bayesian paired samples t-test showed that the data were 1.43 times more likely to reflect such a difference than to reflect a null effect (BF10 = 1.43). No interaction was found between valence of the facial expression and level of contextual support, F(1.19, 30.94) = 2.95, p = 0.090, ηp2 = 0.10.
Corrugator activity to facial expressions
Corrugator activity during the first 1000 ms of stimulus presentation was analyzed with a repeated measures ANOVA. Contextual support level (none, partial, or full) and valence of the facial expression (positive vs. negative) were the within-participants factors.
No main effect of valence of the facial expression was found for the corrugator, F(1,26) = 2.83, p = 0.105, ηp2 = 0.10. A Bayesian paired samples t-test showed that the data were 1.42 times more likely to reflect a null effect than to reflect a difference based on valence (BF01 = 1.42). No interaction between valence of the facial expression and level of contextual support was found, F(1.17, 30.45) = 0.04, p = 0.877, ηp2 = 0.00.