Introduction

Imagine yourself walking down the street when suddenly a stranger looks directly at you with an angry expression. Now imagine a nearly identical scenario in which the stranger directs the same angry expression at a person beside you. Although the input to the visual system is nearly identical in both cases, most people would probably agree that being the target of another's expression is a far more engaging affective experience, one that holds more direct relevance to the perceiver. In this study we sought to explore the mechanisms underlying this distinction. Specifically, we examine whether mu suppression, a well-established electrophysiological measure of motor simulation, is sensitive to being the target versus a mere spectator of facial expressions.

Early accounts of mirror neurons (MNs) pointed to their characteristic discharging both when a monkey performs a specific goal-directed action and when it observes someone else performing that very action (Di Pellegrino, Fogassi, Gallese & Rizzolatti, 1992). This finding was interpreted as an evolutionary mechanism enabling action understanding through direct matching as a reenactment of the observed action (Rizzolatti & Craighero, 2004; Rizzolatti, Fogassi & Gallese, 2001). It was put forward as a possible biological mechanism enabling the simulation proposed by simulation theory (Goldman, 1989), allowing us to understand the intentions of others through the observation of their actions (Gallese & Goldman, 1998). This mechanism was also proposed in the case of facial action processing, enabling us to understand facial expressions (Casile, Caggiano & Ferrari, 2011).

MNs and mirroring behavior were originally reported as being present at birth, as evidenced by automatic imitation of facial movements by neonates (Bard, 2007; Meltzoff & Moore, 1977), suggesting a hardwired, genetic predisposition (Casile et al., 2011; Ferrari et al., 2012; Ferrari, Paukner, Ionica & Suomi, 2009). More recent accounts, however, stress the flexible nature of MNs. These accounts suggest that neurons of the visual system are only weakly connected to neurons in the motor system at birth, and that learning plays an important role in their development. Specifically, it has been posited that some of these visual-motor connections are established through associative learning processes in which a specific route is strengthened when a motor action tends to be correlated with, and predictive of, a specific observed action (Cook, Bird, Catmur, Press & Heyes, 2014). According to this approach, the properties of MNs evolve to a large extent through social interaction, as the system learns to relate events that are likely to occur together.

A growing body of knowledge points to mu rhythm desynchronization, the suppression of EEG activity over the sensorimotor cortex in the 8–13 Hz range, as a valid marker of MN activity (for a review see Pineda, 2005). Suppression of these rhythms is evident when one performs a goal-directed action, and also when one observes someone else performing a similar action. Moreover, suppression of mu rhythms has been linked to a wide range of social information processing tasks (Cheng, Yang, Lin, Lee & Decety, 2008; Perry, Bentin, Bartal, Lamm, & Decety, 2010a; Perry, Troje, & Bentin, 2010b; Pineda & Hecht, 2009; Whitmarsh, Nieuwenhuis, Barendregt & Jensen, 2011).

Mu suppression has also been found to be modulated by social relevance and by participants' involvement in a social game: as stimuli became more relevant and participants became more involved, larger suppression was seen (Oberman & Ramachandran, 2007; Perry, Stein, & Bentin, 2011). Perry et al. (2010a, b) found mu suppression to be modulated by point-light displays conveying social information (approaching or withdrawing from the observer), pointing once again to the sensitivity of MNs to the social relevance of observed actions, not only to the actions per se. These findings are in line with Kilner, Marchant and Frith’s (2006) suggestion that MNs filter the observed actions surrounding us so that only the actions most socially relevant to us enter the system.

A number of studies have laid the groundwork for assuming involvement of MNs in the decoding of emotional expressions (Ferrari et al., 2012; Keuken et al., 2011; Molenberghs, Cunnington & Mattingley, 2012; van der Gaag, Minderaa & Keysers, 2007), yet relatively few studies have examined mu rhythm suppression in response to facial expression perception. Pineda and Hecht (2009) reported greater mu suppression while making social perceptual judgments about emotional facial expressions than during a gender discrimination task or a social cognitive Theory of Mind (ToM) task. In a more recent study, Moore, Gorodnitsky, and Pineda (2012) found mu suppression in response to the perception of happy and disgusted face photos. While it seems clear that merely observing an emotional face triggers the MNs, an associative learning approach would predict differential mirroring as a function of social relevance. Indeed, Trilla Gros, Panasiti and Chakrabarti (2015) used an evaluative conditioning paradigm to associate faces with rewarding or non-rewarding value. They subsequently presented all faces portraying happy emotional expressions and found greater mu suppression in response to rewarding than to non-rewarding faces.

Facial relevance may also be determined by the dynamics of the face, such as gaze or head direction (Emery, 2000), in at least two ways. First, studies have demonstrated that gaze may differentially facilitate or hinder the perception of emotional faces (Adams & Kleck, 2003, 2005; Sander, Grandjean, Kaiser, Wehrle & Scherer, 2007; Schrammel, Pannasch, Graupner, Mojzisch & Velichkovsky, 2009). For example, Adams and Kleck (2003, 2005) demonstrated that approach-oriented facial expressions (e.g., anger) are more rapidly classified and judged as more intense when the faces display direct rather than averted gaze, while avoidance-oriented expressions (e.g., fear) display the opposite pattern. Interestingly, these effects are most prominent when the faces displaying the emotions are ambiguous and/or of weak intensity (Graham & LaBar, 2007; N’Diaye, Sander, & Vuilleumier, 2009; Sander et al., 2007). By contrast, when facial expressions are intense and unambiguous, the impact of gaze is greatly reduced or even non-existent (Bindemann, Mike Burton, & Langton, 2008; N’Diaye et al., 2009).

The second way gaze may influence facial relevance is by indicating to the perceiver that she is the target of the expression. For example, Van der Schalk, Hawk, Fischer and Doosje (2011) studied the effect of head-turning (towards vs. away from the observer) on the interpretation of dynamic emotional facial displays. They showed that viewing facial expressions turning towards the observer increased the phenomenological perception of the expression as directed towards the self. This in turn enhanced one’s sense of having caused the other’s emotion, thereby rendering it more relevant to the self than an expression turning away from the observer. Importantly, in that study the expressions were intense and unambiguous, such that emotion recognition itself was not affected by head direction.

It is this experience of being the target of another’s emotional display – irrespective of the specific emotion displayed – that triggered the current study. The fact that two highly similar dynamic facial displays differ dramatically in their social relevance as a mere function of direction is intriguing. Given the role of MNs in facial expression perception, it seems plausible that mu suppression may be sensitive to facial expression directionality. Using the same set of stimuli developed by van der Schalk et al. (2011), we hypothesized that MNs serve as a possible mechanism processing the enhanced relevance of facial expressions directed towards the observer. As these stimuli are highly intense and prototypical, they are equally recognizable and intense whether turning away from or towards the observer (van der Schalk et al., 2011). Nevertheless, only when the expressions turn towards the observer would they be perceived as directly relevant. Consequently, we predicted that MN activation, as evidenced by mu rhythm suppression, would be stronger in response to facial expressions turning towards the observer than in response to facial expressions turning away from the observer.

Methods

Participants

Thirty-one participants (17 females, five left-handed, M age = 23.6 years, SD = 2.7) took part in an EEG experiment and subsequently were asked to recognize the emotions and rate the perceived intensity of all the clips shown. An additional 31 participants (21 females, M age = 24.2 years, SD = 2.8) in the first and 25 participants (18 females, M age = 24 years, SD = 2.2) in the second took part in two separate behavioral experiments, rating their sense of being the target of the expressions and their sense of feeling involved in an interaction with the expressers. Participants were recruited from the Hebrew University and were either paid or given course credit for their participation. Participants had normal or corrected-to-normal vision and were selected based on self-report of neurological and psychiatric health.

Stimuli

Video clips of facial expressions were selected from the “Amsterdam Dynamic Facial Expressions Set” (van der Schalk et al., 2011). The stimuli consisted of 5-s-long video clips of four male and four female actors, filmed from the shoulders up, depicting the following emotions: happiness, disgust, fear, sadness, pride, anger, surprise, and neutral. Each video clip appeared in two forms, manipulating the directionality of the expression: facing away from or towards the viewer. In facing-away expressions, the clip started with the actor facing the viewer with a neutral expression, then turning sideways to a 45° angle and expressing an emotion. In facing-towards expressions, the clip started with the actor facing away from the observer at a 45° angle with a neutral face, then turning towards the viewer and expressing an emotion. All clips started with a neutral expression that developed into the depicted emotional expression and ended while the expression was at its peak (see Fig. 1). Subjects saw two clips of each emotion from every actor, one turning towards and one turning away from them, 128 clips in total. This set was previously validated in our laboratory to confirm that all expressions are well recognized by Israeli viewers.

Fig. 1

The dynamic facial expressions clips were of two kinds: (a) Turn Away – the clip started with the actor facing the viewer, continued with them turning away from the viewer before making the expression. (b) Turn Forward – the clip started with the actor facing away from the viewer, continued with them turning towards the viewer before making the expression. ADFES images were reproduced with permission

Procedure

EEG experiment

The experiment started with a 3.5-min resting-state baseline condition during which participants were instructed to look at a fixation point at the center of the screen (see Huffmeijer, Alink, Tops, Bakermans-Kranenburg & van IJzendoorn, 2012; Popov, Miller, Rockstroh & Weisz, 2013 for similar procedures). The second block consisted of all facial expression clips shown one at a time, in randomized order. To keep subjects attentive and engaged, they were requested to keep count of the surprise expressions. The number of surprise expressions varied between subjects, ranging from 8 to 12 (see Fig. 2). These trials were removed from the analysis.

Fig. 2

A 3.5-min fixation block served as baseline, after which participants viewed 112 non-target short video clips of male and female actors depicting the following emotions: happiness, disgust, fear, sadness, pride, anger, and neutral. Surprise expressions were also shown and served as target stimuli. ADFES images were reproduced with permission

During the EEG task, participants were instructed to refrain from any movement including face movement and were monitored via a video camera throughout the experiment to make sure they complied with instructions.

Behavioral ratings of emotion and intensity

After completing the EEG experiment, participants viewed the expression videos a second time and categorized the emotion as well as rated the intensity of each expression on a scale of 1 (low) to 7 (high). All clips were shown in randomized order. These tasks enabled us to examine whether expressions turning towards or away from the viewer differed in perceived intensity or recognizability in a manner that could have influenced the EEG results.

Behavioral ratings of felt involvement

A second group of participants (Footnote 1) viewed the same set of videos in order to assess the degree to which they felt involved in an interaction with the viewed expressions. Participants viewed all clips, shown in randomized order, and were asked to imagine they were walking down the street when encountering the character shown on the screen. They rated, on a scale of 1–7, to what extent they felt involved in the interaction (“On a scale of 1–7, how much did you feel involved in the interaction?”).

Behavioral ratings of social relevance

A third group of participants (Footnote 1) viewed the same set of videos in order to assess the degree to which they felt that they were the target of the viewed expressions. Participants viewed all clips, shown in randomized order. They were asked to rate the relevance of the stimuli to them on a scale of 1 (low) to 7 (high) according to their sense that the expression was directed at them ("To what extent, on a scale of 1–7, was the observed expression directed at you?").

Data acquisition and processing

The EEG signal was recorded from 64 Ag-AgCl pin-type active electrodes mounted on an elastic cap (ECI), and from an additional two electrodes placed behind the ears (mastoids). Blinks and eye movements were monitored using bipolar horizontal and vertical EOG derivations via two pairs of electrodes, one pair attached to the external canthi, and the other to the infraorbital and supraorbital regions of the right eye. EEG and EOG were sampled using a Biosemi Active II digital 24-bit amplification system. Off-line analysis was performed using Brain Vision Analyzer II.

Data records were initially high-pass filtered at 0.5 Hz and re-referenced offline to the average of the two mastoids. Eye movements were corrected using an ICA procedure (Jung et al., 2000). Remaining artifacts exceeding 100 μV in amplitude were detected at the relevant sites (C3, C4, O1, O2), and epochs containing these artifacts were excluded. Based on previous literature, the EEG activity at the central sites was attributed to motor system activity yielding mu suppression (Pfurtscheller, Stancák, & Neuper, 1996; Pineda, 2005). This was compared to alpha suppression at occipital sites, which is attributed to visual-attentional mechanisms (Sauseng & Klimesch, 2008).
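As a rough illustration of two of the steps above, mastoid re-referencing and amplitude-based artifact rejection, consider the following numpy sketch. The function names and array layout are our own; the actual pipeline used Brain Vision Analyzer II.

```python
import numpy as np

def rereference_to_mastoids(eeg, m1, m2):
    """Re-reference all channels to the average of the two mastoid signals.

    eeg : (n_channels, n_samples) scalp data in microvolts
    m1, m2 : (n_samples,) left and right mastoid recordings
    """
    ref = (m1 + m2) / 2.0
    return eeg - ref  # broadcasting subtracts the reference from every channel

def reject_artifact_epochs(epochs, threshold_uv=100.0):
    """Drop epochs whose amplitude exceeds the threshold on any channel.

    epochs : (n_epochs, n_channels, n_samples); the channels passed in
    would be the sites of interest (here C3, C4, O1, O2).
    Returns the surviving epochs and a boolean keep-mask.
    """
    peak = np.abs(epochs).max(axis=(1, 2))  # worst absolute sample per epoch
    keep = peak <= threshold_uv
    return epochs[keep], keep
```

The mask returned by `reject_artifact_epochs` can be logged to track how many trials each participant lost to artifacts.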

Wavelet analysis

Motion throughout the clips was not uniformly balanced: the actors started by subtly moving forwards or sideways, continued by conveying one of the emotions, and ended while producing very subtle changes at peak. As we expected the suppression in the 8–13 Hz range to be affected by the observed motion, we validated the timing of the actions using a wavelet analysis. The wavelet analysis was performed on single trials at each recording site (C3, C4, O1, O2). A complex Gaussian Morlet wavelet was used, with the wavelet width determined by a Morlet parameter of 5, in steps of 1 Hz. We then averaged the amplitudes at each time-frequency point at each recording site across trials for each subject in each condition. Finally, we calculated the suppression index for each point as the logarithm of the ratio of the power during the experimental conditions relative to the power during the baseline condition. The ratio, rather than a simple subtraction, was used to control for variability in absolute EEG power between subjects resulting from scalp thickness and electrode impedance. Moreover, since ratio data are not normally distributed as a result of lower bounding, a log transform was used for analysis. A log ratio of less than zero indicates suppression of EEG amplitude, whereas a value of more than zero indicates enhancement (see Oberman, Pineda & Ramachandran, 2007; Perry et al., 2011; Perry et al., 2010a, b). Alpha suppression was calculated in a similar fashion.
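The time-frequency computation and the log-ratio suppression index can be sketched in numpy as follows. This is a minimal illustration built around a plain complex Morlet convolution; the parameter names are ours, and the published analysis used Brain Vision Analyzer's wavelet module rather than this code.

```python
import numpy as np

def morlet_wavelet(freq, sfreq, n_cycles=5.0):
    """Complex Morlet wavelet at `freq` Hz; n_cycles plays the role of
    the Morlet parameter (5 in the text)."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)          # temporal width
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / sfreq)
    gauss = np.exp(-t ** 2 / (2.0 * sigma_t ** 2))
    return gauss * np.exp(2j * np.pi * freq * t)

def tf_amplitude(signal, sfreq, freqs):
    """Time-frequency amplitude: convolve the signal with one Morlet
    wavelet per frequency (1-Hz steps in the original analysis)."""
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        w = morlet_wavelet(f, sfreq)
        out[i] = np.abs(np.convolve(signal, w, mode="same"))
    return out

def suppression_index(task_power, baseline_power):
    """log(task / baseline): < 0 means suppression, > 0 enhancement."""
    return np.log(task_power / baseline_power)
```

Averaging `tf_amplitude` across trials per condition, then applying `suppression_index` point by point against the baseline block, reproduces the logic described above.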

FFT analysis

Based on the wavelet analysis described above, experimental blocks were segmented into 3-s epochs beginning 2 s after the onset of the video clip. The data from the full 5-s video clips were also analyzed (see Supplementary Fig. 5). The baseline block was segmented into epochs of 3 s or 5 s accordingly. To extract mu suppression we first computed the integrated power in the 8–13 Hz range using a Fast Fourier transform (FFT) at 0.5-Hz intervals. Using the FFT we extracted the power at each frequency in each of the collected epochs, and then averaged across epochs, leaving the average power at each frequency for each participant. A mu suppression index was calculated as described for the wavelet analysis above, as the log ratio of the power during the experimental conditions relative to the power during the baseline condition. This served as the dependent variable. Alpha suppression was calculated in a similar fashion. We compared the mu and alpha suppression elicited by observing emotional faces turning towards the observer to the suppression elicited by emotional faces turning away from the observer.
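In numpy terms, the per-epoch band power and the resulting suppression index might look like the sketch below. This is a simplified illustration with hypothetical function names; windowing details and the exact 0.5-Hz binning of the original FFT are not reproduced.

```python
import numpy as np

def band_power(epoch, sfreq, fmin=8.0, fmax=13.0):
    """Integrated spectral power of one epoch in the [fmin, fmax] Hz band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / sfreq)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    return psd[(freqs >= fmin) & (freqs <= fmax)].sum()

def mu_suppression_index(task_epochs, baseline_epochs, sfreq):
    """Log ratio of mean task band power to mean baseline band power.

    Values below zero indicate suppression; above zero, enhancement.
    """
    task = np.mean([band_power(e, sfreq) for e in task_epochs])
    base = np.mean([band_power(e, sfreq) for e in baseline_epochs])
    return np.log(task / base)
```

Run once per condition and electrode (C3, C4 for mu; O1, O2 for alpha), this yields one suppression value per participant per cell of the design.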

Statistical analysis

Behavioral

Differences in recognition accuracy, intensity ratings, felt involvement, and relevance ratings were analyzed using repeated-measures ANOVA. The independent variables were Direction (turn forward/turn away) and Emotion (happiness, disgust, fear, sadness, pride, anger, surprise, and neutral). The dependent variable was computed separately for each participant by averaging accuracy (of recognition) and ratings (of perceived intensity, felt involvement, and perceived relevance).

EEG

Differences in mu suppression across conditions were analyzed using repeated-measures ANOVA. The independent variables were Hemisphere (left/right) and Direction (turn forward/turn away). The dependent variable was the average of the log-transformed values pertaining to the same experimental condition for each participant.
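Because Direction has only two levels, its main effect in such a design reduces to a paired comparison (F = t²). The sketch below illustrates that equivalence on hypothetical per-subject suppression values; the numbers are simulated, not the study's data.

```python
import numpy as np
from scipy import stats

def direction_main_effect(turn_forward, turn_away):
    """Repeated-measures main effect for a two-level within-subject
    factor, computed via the equivalent paired t-test (F = t ** 2)."""
    t, p = stats.ttest_rel(turn_forward, turn_away)
    return t ** 2, p

# Hypothetical log-ratio suppression values for 31 subjects:
# "turn forward" simulated as slightly more negative (more suppressed).
rng = np.random.default_rng(7)
turn_away = rng.normal(-0.23, 0.04, 31)
turn_forward = turn_away - 0.05 + rng.normal(0.0, 0.02, 31)
F, p = direction_main_effect(turn_forward, turn_away)
```

With both Hemisphere and Direction in the model, a full two-way repeated-measures ANOVA (e.g., statsmodels' `AnovaRM`) would be used instead.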

Results

Behavioral results

Recognition and intensity

Overall, the emotional expressions were well recognized by the viewers (M = 0.93, SE = 0.008), in good accordance with the published norms (van der Schalk et al., 2011). We ran a repeated-measures ANOVA examining the factors direction (Turn Away, Turn Forward) and emotion (happiness, disgust, fear, sadness, pride, anger, surprise, and neutral). In line with prior work, we found a main effect of emotion [F(7,30) = 9.01, p < 0.001, ηp² = 0.231]: some facial expression categories were better recognized than others (see Supplementary Fig. 1 and the accompanying table for the scores by emotion). Importantly, no other significant effects were found: directionality did not influence recognition accuracy [F(1,30) = 1.35, p > 0.2] and the direction × emotion interaction was not significant [F(7,30) = 0.79, p > 0.5] (Fig. 3A). As the current study was not designed to detect mu suppression to specific facial expressions, we averaged recognition of the different expression categories into a single facial expression recognition score (Footnote 2).

Fig. 3

Recognition accuracy (A), intensity ratings (B), felt involvement (C), and relevance ratings (D) (as reported by the independent validation samples) shown as a function of direction

Turning to intensity ratings, a repeated-measures ANOVA found a significant main effect of emotion [F(7,30) = 12.82, p < 0.001, ηp² = 0.299]. No other significant effects were found: directionality did not influence intensity ratings [F(1,30) = 1.18, p > 0.2] (Fig. 3B) and the direction × emotion interaction was not significant [F(7,30) = 2, p > 0.08] (see Supplementary Fig. 2 and the accompanying table for the ratings by emotion).

Felt involvement

A repeated-measures ANOVA examining the factors direction (Turn Forward, Turn Away) and emotion (happiness, disgust, fear, sadness, pride, anger, surprise, and neutral) revealed main effects of direction [F(1,24) = 44.784, p < 0.001, ηp² = 0.651] and emotion [F(7,168) = 13.49, p < 0.001, ηp² = 0.36], and a significant interaction between the two [F(7,168) = 2.597, p < 0.05, ηp² = 0.098]. These results from the first independent validation sample show that when observing expressions turning towards them, participants felt more strongly that they were involved in the interaction (Fig. 3C). Felt involvement differed between emotion categories, and the advantage of expressions turning forward was stronger for some emotions than for others (see Supplementary Fig. 3 and the accompanying table for differences between emotion categories).

Relevance

A repeated-measures ANOVA examining the factors direction (Turn Forward, Turn Away) and emotion (happiness, disgust, fear, sadness, pride, anger, surprise, and neutral) revealed main effects of direction [F(1,30) = 166.3, p < 0.001, ηp² = 0.847] and emotion [F(7,210) = 7.073, p < 0.001, ηp² = 0.191], and a significant interaction between the two [F(7,210) = 7.697, p < 0.001, ηp² = 0.204]. These results from the second independent validation sample show that when observing expressions turning towards them, observers felt more strongly that they themselves were the target of the expression (Fig. 3D). Felt relevance differed between emotions, and the advantage of expressions turning forward was stronger for some emotions than for others (see Supplementary Fig. 4 and the accompanying table for the ratings by emotion).

Fig. 4

Wavelet spectrographs for the critical segments of the videos as seen in central and occipital sites (scaled separately) (We thank the anonymous reviewer for suggesting the different scaling.)

To summarize the behavioral results, the directionality of the expressions did not alter recognition accuracy or perceived intensity. This finding is important because it excludes a potential confound: emotional ambiguity or intensity may themselves influence mu suppression. By contrast, and in good accordance with prior work, participants who rated faces turning towards them (vs. away from them) felt more strongly that they were the target of the expressions, and felt more involved in the interaction with them.

EEG results

In all of the following analyses, the suppression index was analyzed using repeated-measures ANOVA, Bonferroni-corrected wherever multiple comparisons were made. Degrees of freedom were corrected using Greenhouse-Geisser epsilon values (G-GE) where needed. We found no significant main effect of gender [F(1,29) = 1.7, p > 0.1] and no interactions with gender; therefore, data were pooled across genders.

Locating the critical segments of the videos

A visual inspection of the wavelet spectrographs showed that suppression in the 8–13 Hz range was strongest between the second and fifth seconds of the presented stimuli (Fig. 4). An examination of the videos confirmed that this was the window in which the actors expressed the facial expression and in which most of the motion occurred. We therefore analyzed the last 3 s of each clip.

Turn away versus forward

A repeated-measures ANOVA examining the factors hemisphere (Left, Right) and direction (Turn Away, Turn Forward) in the central electrodes (C3, C4) revealed, as predicted, a significant main effect of direction: faces turning towards the viewer induced more suppression [M = −0.28, SE = 0.04] than faces turning away from the viewer [M = −0.23, SE = 0.04], [F(1,30) = 9.15, p = 0.005, ηp² = 0.234]. No effect was found for hemisphere [F(1,30) = 1.68, p > 0.2] or for the direction × hemisphere interaction [F(1,30) = 1.32, p > 0.2] (see Fig. 5A).

Fig. 5

Mu suppression for the turn away and turn forward conditions, as measured in the last 3 s, in central (A) and occipital (B) sites

To strengthen the notion that we were observing mu suppression over sensorimotor cortex, and not a general attentional effect, we conducted the same analysis on occipital sites (O1, O2). A significant main effect of hemisphere indicated more suppression in the right hemisphere [M = −0.84, SE = 0.09] than in the left [M = −0.71, SE = 0.08], [F(1,30) = 10.9, p < 0.005, ηp² = 0.267]. No effect was found for direction [F(1,30) = 1.39, p > 0.2] or for the hemisphere × direction interaction [F(1,30) = 0.32, p > 0.8] (Fig. 5B; Footnote 3).

Discussion

In the present study we investigated the effects of social relevance on mu suppression (8–13 Hz), an established measure of neural mirroring activity, by manipulating the direction of facial expressions turning towards or away from the observer. The results confirmed our main hypothesis: mu suppression was stronger for facial expressions turning towards the observer than for those turning away. This finding supports the notion that MNs are sensitive to the relevance of observed cues and may be involved in an observer's ability to evaluate the relevance of facial expressions: the more relevant the stimuli are to the observer, the greater the observed MN activation.

As previously described, facial expressions turning towards and away from the observer were identical except for being filmed from two different angles. The finding that such similar stimuli evoke different MN activation supports the notion that MNs are involved not only in low-level perceptual processing of actions but also in higher cognitive processes, such as social interaction (Oberman et al., 2007; Perry et al., 2011). Our findings are in line with accumulating data presenting MNs as a mechanism supporting the subtleties and complexities of interpersonal interaction.

Previous reports have suggested an innate preference for direct as opposed to averted gaze in newborns looking at neutral faces (Farroni, Csibra, Simion, & Johnson, 2002). However, it remains unclear what happens throughout life, and specifically in the perception of emotional facial expressions. Associative accounts postulate that the mirror properties of human MNs are not wholly innate or fixed and thus may be modulated by experience (Cook et al., 2014). One possibility is that an innate preference for direct gaze aids in learning the heightened relevance of emotional expressions directed towards, as opposed to away from, an individual. Viewers learn that although these emotional displays are perceptually similar, they convey very different socio-emotional relevance (George, Driver & Dolan, 2001; Kampe, Frith & Frith, 2003).

Our findings make good ecological sense, because it is economically beneficial to have a brain mechanism that not only simulates motor actions but also filters and prioritizes the relevant information. Hamilton (2013) suggests that MNs play a role in our ability to respond to the social world around us in real time and in a socially appropriate fashion. The appropriate response to a facial expression facing the observer will, in most instances, differ from the response to one targeted at someone standing beside them: in the first case one would most probably act rather quickly, whereas in the latter one may not act at all.

As previously suggested, being the direct target of an expression may induce more attention than not being the target. When studying mu suppression one must therefore be especially careful to distinguish between motor-originated activation and occipital attentional mechanisms (see also Perry et al., 2011). In the current study we took several measures to rule out attentional confounds.

First, we measured suppression not only over motor regions but also over occipital ones, enabling a better assessment of whether differences in mu suppression truly reflect differences in motor activation. Had our effects been driven by attentional mechanisms, they would have been expressed as significant differences in occipital alpha suppression between the turn-forward and turn-away conditions; the absence of any occipital difference between the two directions suggests that our findings are not merely attentional.

Second, poorly recognized or ambiguous expressions may recruit more attention. However, our findings confirm that recognition accuracy was not affected by directionality, suggesting that differences in mu suppression did not result from attentional differences driven by recognition difficulty.

Finally, mu suppression is believed to be a manifestation of neural motor resonance, which is thought to reflect the simulation of observed action. The more intense an expression, the more motor activation it should induce, and hence the stronger the motor resonance we would expect; more intense stimuli may also engage more attention, simply because more is happening. We therefore evaluated whether there were any systematic differences in perceived intensity between the two directions of the expressions. Our analysis showed none, confirming that the observed effects were not driven by intensity levels.

While previous work suggested that the recognition of still emotional expressions may be differentially influenced by gaze direction (e.g., Adams & Kleck, 2003; Hess, Adams & Kleck, 2007), such findings are typically observed with low-intensity or ambiguous expressions (e.g., Adams & Kleck, 2005; Sander et al., 2007). By contrast, when expressions are intense and unambiguous, as in the current study, the effect of gaze on emotion recognition may be significantly diminished or absent (e.g., Bindemann et al., 2008; Graham & LaBar, 2007; N’Diaye et al., 2009). Nevertheless, while our participants recognized the expressions irrespective of direction, the relevance of the expressions to them differed dramatically when they were the target.

Although not the focus of this paper, we found right lateralization at occipital sites. This may be due to the right hemisphere's dominance in the perception of faces, and of emotional expressions in particular (see, for example, Adolphs, Damasio, Tranel, & Damasio, 1996; Coolican, Eskes, McMullen, & Lecky, 2008).

In future studies it would be interesting to examine whether observing different emotion categories yields differences in mu suppression. Following Adams and Kleck's findings (Adams & Kleck, 2003), it would be intriguing to investigate, with low-intensity or ambiguous expressions, whether patterns of mu suppression differ between approach-oriented emotions (anger and joy) and avoidance-oriented emotions (fear and sadness) facing the observer or turning away from the observer. A second baseline of neutral movements (such as an actor chewing gum) would allow further exploration of the contribution of the motor system to the cognitive aspects of a social interaction, by comparison with affective movements (facial expressions).

In addition, previous work has discussed the interaction between head and gaze direction and the possible evolutionary advantage of the latter (Langton, 2000; Tomasello, Hare, Lehmann, & Call, 2007). The structure of the human eye enables rapid analysis of another's point of focus, constituting a possible evolutionary advantage in humans that promotes social interaction. Still, understanding where another individual is directing attention is a more complex task, also involving head orientation (Langton, Watt, & Bruce, 2000). In this context, an intriguing question is whether it is the gaze or the head orientation that drives our effects. The present study did not allow us to tease the two apart, but future work could look further into this interaction.

Our general aim was to learn about the processing of socially relevant stimuli in real-life interactive engagements. The stimuli we used in this study have some important advantages: the expressions are prototypical, easily recognized, and highly consistent across models. However, they also carry disadvantages, as the expressions are exaggerated and artificially posed. Additionally, although we instructed participants to imagine that they were engaged in a real interaction with the observed characters, we do not know to what extent they were actually able to comply with this request.

To summarize, our findings are in line with previous studies suggesting that MNs play a role in our social processing abilities. Specifically, we show that the relevance of the social stimuli to the observer plays an important role in activating the MN system.