Cognitive, Affective, & Behavioral Neuroscience

Volume 17, Issue 1, pp 174–184

Are you looking at me? Mu suppression modulation by facial expression direction

  • Noga S. Ensenberg
  • Anat Perry
  • Hillel Aviezer


Although we encounter numerous expressive faces on a daily basis, those that are not aimed at us will often be disregarded. Facial expressions aimed at our direction appear far more relevant and evoke an engaging affective experience, while the exact same expressions aimed away from us may not. While the importance of expression directionality is intuitive and commonplace, the neural mechanisms underlying this phenomenon are largely unknown. In the current study we measured EEG mu rhythm suppression, an established measure of mirror neuron activity, while participants viewed short video clips of dynamic facial expressions. Critically, the videos portrayed facial emotions which turned towards or away from the viewer, thus manipulating their degree of social relevance. Mirroring activity increased as a function of social relevance such that expressions turning toward the viewer resulted in increased sensorimotor activation (i.e., stronger mu suppression) compared to identical expressions turning away from the viewer. Additional analyses confirmed that expressions turning toward the viewer were perceived as more relevant and engaging than expressions turning away from the viewer, a finding not explained by perceived intensity or recognition accuracy. Mirror sensorimotor mechanisms may play a key role in determining the relevance of perceived facial expressions.


Keywords: Emotion · EEG · Mu rhythms · Cognition · Facial expressions


Imagine yourself walking down the street when suddenly a stranger looks directly at you and expresses an angry face. Now imagine a nearly identical scenario in which the stranger looks at a person beside you and expresses the same angry facial expression. Although the input to the visual system is nearly identical in both cases, most people would probably agree that being the target of one’s expression is a far more engaging affective experience which holds more direct relevance to the target. In this study we sought to explore the mechanisms underlying this distinction. Specifically, we examine whether mu suppression, a well-established electrophysiological measure of motor simulation, is sensitive to being the target of a facial expression versus being a mere spectator to it.

Early accounts of mirror neurons (MNs) pointed to their characteristic discharging both when a monkey performs a specific goal-directed action and when it observes someone else performing that very action (Di Pellegrino, Fogassi, Gallese & Rizzolatti, 1992). This finding was interpreted as an evolutionary mechanism enabling action understanding through direct matching as a reenactment of the observed action (Rizzolatti & Craighero, 2004; Rizzolatti, Fogassi & Gallese, 2001). It was put forward as a possible biological mechanism enabling the simulation proposed by simulation theory (Goldman, 1989), allowing us to understand the intentions of others through the observation of their actions (Gallese & Goldman, 1998). This mechanism was also proposed in the case of facial action processing, enabling us to understand facial expressions (Casile, Caggiano & Ferrari, 2011).

MNs and mirroring behavior were originally reported as being present at birth, as evident by automatic imitation of facial movements by neonates (Bard, 2007; Meltzoff & Moore, 1977), suggesting a hardwired, genetic predisposition (Casile et al., 2011; Ferrari et al., 2012; Ferrari, Paukner, Ionica & Suomi, 2009). More recent accounts, however, stress the flexible nature of the MNs. These accounts suggest that neurons of the visual system are weakly connected to neurons in the motor system from birth, and that learning plays an important role in their development. Specifically, it has been posited that some of these visual-motor neuron connections are established through associative learning processes in which a specific route is strengthened when a motor action tends to be correlated with, and predicts a specific observed action (Cook, Bird, Catmur, Press & Heyes, 2014). According to this approach, the properties of MNs evolve to a large extent through social interaction as the system learns to relate between events that are likely to occur together.

A growing body of knowledge points to mu rhythms’ desynchronization, suppression of EEG activity over the sensorimotor cortex in the 8–13 Hz range, as a valid marker of MN activity (for a review see Pineda, 2005). Suppression of these rhythms is evident when one performs a goal directed action, and also when one sees someone else performing a similar action. Moreover, suppression of mu rhythms has been linked to a wide range of social information processing tasks (Cheng, Yang, Lin, Lee & Decety, 2008; Perry, Bentin, Bartal, Lamm, & Decety, 2010a; Perry, Troje, & Bentin, 2010b; Pineda & Hecht, 2009; Whitmarsh, Nieuwenhuis, Barendregt & Jensen, 2011).

Mu suppression was also found to be modulated by social relevance and participants' involvement in a social game: as stimuli became more relevant and participants became more involved, larger suppression was seen (Oberman & Ramachandran, 2007; Perry, Stein & Bentin 2011). Perry et al. (2010a, b) found mu suppression to be modulated by point-light displays conveying social information, approaching or withdrawing from the observer, pointing once again to the sensitivity of MNs to the social relevance of observed actions, not only to the actions per se. These findings are in line with Kilner, Marchant and Frith’s (2006) suggestion that MNs filter the observed actions surrounding us so that only actions most socially relevant to us will enter the system.

A number of studies set ground for assuming involvement of MNs in the decoding of emotional expressions (Ferrari et al., 2012; Keuken et al., 2011; Molenberghs, Cunnington & Mattingley, 2012; van der Gaag, Minderaa & Keysers, 2007), yet relatively few studies have examined mu rhythm suppression in response to facial expression perception. Pineda and Hecht (2009) report greater mu suppression while making social perceptual judgments about emotional facial expressions in comparison to a gender discrimination task or a social cognitive Theory of Mind (ToM) task. In a more recent study, Moore, Gorodnitsky, and Pineda (2012) found mu suppression in response to the perception of happy and disgusted face photos. While it seems clear that merely observing an emotional face triggers the MNs, an associative learning approach would predict differential mirroring as a function of social relevance. Indeed, Trilla Gros, Panasiti and Chakrabarti (2015) used an evaluative conditioning paradigm to associate faces with rewarding or non-rewarding value. They subsequently presented all faces portraying happy emotional expressions and found greater mu suppression in response to rewarding than to non-rewarding faces.

Facial relevance may also be determined by the dynamics of the face such as gaze or head direction (Emery, 2000) in at least two ways. First, studies have demonstrated that gaze may differentially facilitate or hinder the perception of emotional faces (Adams & Kleck, 2003, 2005; Sander, Grandjean, Kaiser, Wehrle & Scherer, 2007; Schrammel, Pannasch, Graupner, Mojzisch & Velichkovsky, 2009). For example, Adams and Kleck (2003, 2005) demonstrated that approach-oriented facial expressions (e.g., anger) are more rapidly classified and judged as more intense when the faces display direct gaze than averted gaze, while avoidance oriented expressions (e.g., fear) display the opposite pattern. Interestingly, these effects are most prominent when the faces displaying the emotions are ambiguous and/or of weak intensity (Graham & LaBar, 2007; N’Diaye, Sander, & Vuilleumier, 2009; Sander et al., 2007). By contrast, when facial expressions are intense and unambiguous, the impact of gaze is greatly reduced or even non-existent (Bindemann, Mike Burton, & Langton, 2008; N’Diaye et al., 2009).

The second way gaze may influence facial relevance is by indicating to the perceiver that she is the target of the expressions. For example, Van der Schalk, Hawk, Fischer and Doosje (2011) studied the effect of head-turning (towards vs. away the observer) on the interpretation of dynamic emotional facial displays. They showed that viewing facial expressions turning towards the observer increased the phenomenological perception of that expression as directed towards the self. This in turn enhanced one’s sense of having caused the other’s emotion, thereby deeming it more relevant to the self than an expression turning away from the observer. Importantly, in that study the expressions were intense and unambiguous such that emotion recognition itself was not affected by head direction.

It is this experience of being the target of another’s emotional display – irrespective of the specific emotion displayed – that triggered the current study. The fact that two highly similar dynamic facial displays differ dramatically in their social relevance as a mere function of direction is intriguing. Given the role of MNs in facial expression perception, it seems plausible that mu suppression may be sensitive to facial expression directionality. Using the same set of stimuli developed by van der Schalk et al. (2011), we hypothesized that MNs serve as a possible mechanism processing the enhanced relevance of facial expressions directed towards the observer. As these stimuli are highly intense and prototypical, they are equally recognizable and intense whether turning away from or towards the observer (van der Schalk et al. 2011). Nevertheless, only when the expressions turn towards the observer would they be perceived as directly relevant. Consequently, we predicted that MN activation, as evident by mu rhythm suppression, will be stronger in response to facial expressions turning towards the observer than in response to facial expressions turning away from the observer.



Participants

Thirty-one participants (17 females, five left-handed, M age = 23.6 years, SD = 2.7) took part in an EEG experiment and subsequently were asked to recognize the emotions and rate the perceived intensity of all the clips shown. Two additional groups took part in separate behavioral experiments: 31 participants (21 females, M age = 24.2 years, SD = 2.8) rated their sense of being the target of the expressions, and 25 participants (18 females, M age = 24 years, SD = 2.2) rated their sense of feeling involved in the interaction with the expressions. Participants were recruited from the Hebrew University and were either paid or given course credit for their participation. All had normal or corrected-to-normal vision and were selected based on self-report of neurological and psychiatric health.


Stimuli

Video clips of facial expressions were selected from the “Amsterdam Dynamic Facial Expressions Set” (van der Schalk et al., 2011). The stimuli consisted of 5-s long video clips of four male and four female actors, filmed from the shoulders up, depicting the following emotions: happiness, disgust, fear, sadness, pride, anger, surprise, and neutral. Each video clip appeared in two forms, manipulating the directionality of the expression: facing away from or towards the viewer. In facing-away expressions, the clip started with the actor facing the viewer with a neutral expression, who then turned sideways to a 45° angle and expressed an emotion. In facing-toward expressions, the clip started with the actor facing away from the observer at a 45° angle with a neutral face, who then turned towards the viewer and expressed an emotion. All clips started with a neutral expression that developed into the depicted emotional expression and ended while the expression was at its peak (see Fig. 1). Subjects saw each actor expressing each emotion twice, once turning towards and once turning away from them, 128 clips in total. This set has been validated in our laboratory in the past to confirm that all expressions are well recognized by Israeli viewers.
Fig. 1

The dynamic facial expressions clips were of two kinds: (a) Turn Away – the clip started with the actor facing the viewer, continued with them turning away from the viewer before making the expression. (b) Turn Forward – the clip started with the actor facing away from the viewer, continued with them turning towards the viewer before making the expression. ADFES images were reproduced with permission


EEG experiment

The experiment started with a 3.5-min resting-state baseline condition during which participants were instructed to look at a fixation point at the center of the screen (see Huffmeijer, Alink, Tops, Bakermans-Kranenburg & van IJzendoorn, 2012; Popov, Miller, Rockstroh & Weisz, 2013 for similar procedures). The second block consisted of all facial expression clips shown one at a time, in randomized order. To keep subjects attentive and engaged, they were requested to keep count of the surprise expressions. The number of surprise expressions varied between subjects, ranging from 8 to 12 (see Fig. 2). These trials were removed from the analysis.
Fig. 2

A 3.5-min fixation block served as baseline after which participants viewed 112 non-target short video clips of male and female actors depicting the following emotions: happiness, disgust, fear, sadness, pride, and neutral. Surprised expressions were also shown and served as target stimuli. ADFES images were reproduced with permission

During the EEG task, participants were instructed to refrain from any movement including face movement and were monitored via a video camera throughout the experiment to make sure they complied with instructions.

Behavioral ratings of emotion and intensity

After completing the EEG experiment, participants viewed the expression videos a second time, categorized the emotion, and rated the intensity of the expressions on a scale of 1 (low) to 7 (high). All clips were shown in randomized order. These tasks enabled us to examine whether expressions turning towards or away from the viewer differed in perceived intensity or recognizability in a manner that may have influenced the EEG results.

Behavioral ratings of felt involvement

A second group of participants1 viewed the same set of videos in order to assess the degree to which they felt involved in an interaction with the viewed expressions. Participants viewed all clips, shown in randomized order, and were asked to imagine they were walking down the street when encountering the character shown on the screen. They then rated, on a scale of 1 (low) to 7 (high), the extent to which they felt involved in the interaction (“On a scale of 1–7, how much did you feel involved in the interaction?”).

Behavioral ratings of social relevance

A third group of participants1 viewed the same set of videos in order to assess the degree to which they felt that they were the target of the viewed expressions. Participants viewed all clips which were shown in a randomized order. They were asked to rate the relevance of the stimuli to them on a scale of 1 (low) to 7 (high) according to their sense of feeling the expression was directed at them ("to what extent, on a scale of 1–7, was the expression observed directed at you?").

Data acquisition and processing

The EEG signal was recorded from 64 Ag-AgCl pin-type active electrodes mounted on an elastic cap (ECI), and from an additional two electrodes placed behind each ear (mastoids). Blinks and eye movements were monitored using bipolar horizontal and vertical EOG derivations via two pairs of electrodes, one pair attached to the external canthi, and the other to the infraorbital and supraorbital regions of the right eye. EEG and EOG were sampled using a Biosemi Active II digital 24-bit amplification system. Off-line analysis was done using Brain Vision Analyzer II.

Data records were initially high-pass filtered at 0.5 Hz and re-referenced offline to the average of the two mastoids. Eye movements were corrected using an ICA procedure (Jung et al., 2000). Remaining artifacts exceeding 100 μV in amplitude were detected at the relevant sites (C3, C4, O1, O2), and epochs containing these artifacts were excluded. Based on previous literature, EEG activity at the central sites was attributed to motor system activity yielding mu suppression (Pfurtscheller, Stancák, & Neuper, 1996; Pineda, 2005). This was compared to alpha suppression at occipital sites, which is attributed to visual-attentional mechanisms (Sauseng & Klimesch, 2008).
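For illustration, the re-referencing and amplitude-based artifact-rejection steps above can be sketched in a few lines. This is a minimal NumPy/SciPy sketch under assumed data shapes (channels × samples, values in μV), not the Brain Vision Analyzer II procedure actually used; the ICA-based ocular correction is omitted, and all function names are ours.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass(eeg, fs, cutoff=0.5):
    """Zero-phase 0.5-Hz high-pass filter; eeg is (n_channels, n_samples)."""
    b, a = butter(2, cutoff, btype="high", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

def rereference_to_mastoids(eeg, mastoids):
    """Subtract the average of the two mastoid channels from every channel."""
    return eeg - mastoids.mean(axis=0, keepdims=True)

def clean_epoch_mask(epochs, reject_uv=100.0):
    """True for epochs whose amplitude range stays within reject_uv on every
    channel of interest (e.g., C3, C4, O1, O2); the peak-to-peak criterion
    is our assumption about how the 100-uV threshold was applied."""
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)  # (n_epochs, n_channels)
    return (ptp <= reject_uv).all(axis=1)
```

In practice the boolean mask would be used to drop contaminated epochs before the spectral analyses described below.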

Wavelet analysis

Motion throughout the clips was not uniformly balanced, as the actors started by subtly moving forwards or sideways, continued by conveying one of the emotions, and ended while producing very subtle changes at peak. As we expected the suppression in the 8–13 Hz range to be affected by the observed motion, we validated the timing of the actions using a wavelet analysis. The wavelet analysis was performed on single trials at each recording site (C3, C4, O1, O2). A complex Gaussian Morlet wavelet was used, with the wavelet width determined by a Morlet parameter of 5, in steps of 1 Hz. We then averaged the amplitudes at each time-frequency point at each recording site across trials for each subject in each condition. Finally, we calculated a suppression index for each point, as the logarithm of the ratio of the power during the experimental condition to the power during the baseline condition. A ratio, instead of a simple subtraction, was used in order to control for variability in absolute EEG power between subjects resulting from scalp thickness and electrode impedance. Moreover, since ratio data are not normally distributed as a result of lower bounding, a log transform was used for analysis. A log ratio of less than zero indicates suppression of the EEG amplitude, whereas a value of more than zero indicates enhancement (see Oberman, Pineda & Ramachandran, 2007; Perry et al., 2011; Perry et al., 2010a, b). Alpha suppression was calculated in a similar fashion.
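The time-frequency suppression index can be illustrated with a hand-rolled complex Morlet convolution. This is a minimal sketch, assuming a unit-energy wavelet of 5 cycles and a base-10 log ratio; it is not the Brain Vision Analyzer implementation, and the function names and sampling rate are illustrative.

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=5):
    """Single-trial time-frequency power via convolution with
    unit-energy complex Morlet wavelets (one row per frequency)."""
    powers = []
    for f in freqs:
        sd = n_cycles / (2 * np.pi * f)            # Gaussian SD in seconds
        t = np.arange(-4 * sd, 4 * sd, 1 / fs)     # wavelet support
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sd**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        powers.append(np.abs(np.convolve(signal, wavelet, mode="same")) ** 2)
    return np.asarray(powers)                      # (n_freqs, n_samples)

def suppression_index(cond_power, base_power):
    """Log ratio of condition to baseline power; < 0 means suppression."""
    return np.log10(cond_power / base_power)
```

Halving the amplitude of an oscillation quarters its power, so the index for such a signal comes out at log10(0.25) ≈ −0.6, i.e., suppression.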

FFT analysis

Based on the wavelet analysis described above, experimental blocks were segmented into 3-s epochs beginning 2 s after the onset of the video clip. The data from the full 5-s video clips were also analyzed (see Supplementary Fig. 5). The baseline block was segmented into 3-s or 5-s epochs accordingly. In order to extract mu suppression, we first computed the integrated power in the 8–13 Hz range using a Fast Fourier transform (FFT) at 0.5-Hz intervals. Using the FFT we extracted the power at each frequency in each of the collected epochs and then averaged, leaving the mean power at each frequency across all epochs for each participant. A mu suppression index was calculated as in the wavelet analysis above, as the log ratio of the power during the experimental conditions to the power during the baseline condition. This index served as the dependent variable. Alpha suppression was calculated in a similar fashion. We compared the mu and alpha suppression elicited by observing emotional faces turning towards the observer to the suppression elicited by emotional faces turning away from the observer.
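The FFT-based index follows the same log-ratio logic. A minimal sketch; the epoching, windowing details, and function names are our assumptions rather than the published pipeline.

```python
import numpy as np

def band_power_fft(epoch, fs, fmin=8.0, fmax=13.0):
    """Integrated spectral power of one epoch in [fmin, fmax] Hz."""
    freqs = np.fft.rfftfreq(epoch.size, d=1 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    band = (freqs >= fmin) & (freqs <= fmax)
    return power[band].sum()

def mu_suppression(cond_epochs, base_epochs, fs):
    """Log ratio of mean 8-13 Hz power, condition vs. baseline;
    values below zero indicate mu suppression."""
    p_cond = np.mean([band_power_fft(np.asarray(e), fs) for e in cond_epochs])
    p_base = np.mean([band_power_fft(np.asarray(e), fs) for e in base_epochs])
    return np.log10(p_cond / p_base)
```

The same routine applied to occipital channels yields the alpha-suppression index used as the attentional control.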

Statistical analysis


Differences in recognition accuracy, intensity ratings, felt involvement, and relevance ratings were analyzed using repeated-measures ANOVAs. The independent variables were Direction (turn forward/turn away) and Emotion (happiness, disgust, fear, sadness, pride, anger, surprise, and neutral). The dependent variable was computed separately for each participant by averaging accuracy (of recognition) or ratings (of perceived intensity, felt involvement, and perceived relevance).


Differences in mu suppression across conditions were analyzed using a repeated-measures ANOVA. The independent variables were Hemisphere (left/right) and Direction (turn forward/turn away). The dependent variable was the average of the log-transformed values pertaining to the same experimental condition for each participant.
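Because Direction has only two levels, its repeated-measures main effect is equivalent to a paired t-test on per-participant means (F = t²). A minimal sketch under that simplification, assuming one suppression value per participant per condition; it is not the analysis software used in the paper.

```python
import numpy as np
from scipy import stats

def direction_effect(toward, away):
    """Main effect of Direction on the suppression index.
    With two within-subject levels, the repeated-measures F equals
    the squared paired t statistic computed on per-participant means."""
    t, p = stats.ttest_rel(toward, away)
    return t ** 2, p
```

The full Direction × Emotion and Hemisphere × Direction designs reported here would require a dedicated routine such as statsmodels' AnovaRM.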


Behavioral results

Recognition and intensity

Overall, the emotional expressions were well recognized by the viewers (M = 0.93, SE = 0.008) and in good accordance with the published norms (van der Schalk et al., 2011). We ran a repeated-measures ANOVA examining the factors direction (Turn Away, Turn Forward) and emotion (happiness, disgust, fear, sadness, pride, anger, surprise, and neutral). In line with prior work, we found a main effect for emotion [F(7,30) = 9.01, p < 0.001, ηp² = 0.231]: some facial expression categories were better recognized than others (see Supplementary Fig. 1 and the accompanying table for the scores by emotion). Importantly, no other significant effects were found: directionality did not influence recognition accuracy [F(1,30) = 1.35, p > 0.2], and the direction × emotion interaction was not significant [F(7,30) = 0.79, p > 0.5] (Fig. 3A). As the current study was not designed to detect mu suppression to specific facial expressions, we averaged recognition of the different expression categories into a single facial expression recognition score.2
Fig. 3

Recognition accuracy (A), intensity (B) ratings, felt involvement (C), and relevance (D) ratings (as reported by independent validation sample) shown as a factor of direction

Turning to intensity ratings, a repeated-measures ANOVA found a significant main effect for emotion [F(7,30) = 12.82, p < 0.001, ηp² = 0.299]. No other significant effects were found: directionality did not influence intensity ratings [F(1,30) = 1.18, p > 0.2] (Fig. 3B), and the direction × emotion interaction was not significant [F(7,30) = 2, p > 0.08] (see Supplementary Fig. 2 and the accompanying table for the ratings by emotion).

Felt involvement

Using a repeated-measures ANOVA examining the factors direction (Turn Forward, Turn Away) and emotion (happiness, disgust, fear, sadness, pride, anger, surprise, and neutral), we found a main effect for direction [F(1,24) = 44.784, p < 0.001, ηp² = 0.651], a main effect for emotion [F(7,168) = 13.49, p < 0.001, ηp² = 0.36], and a significant interaction between the two [F(7,168) = 2.597, p < 0.05, ηp² = 0.098]. These results from the first independent validation sample show that when observing expressions turning towards them, participants felt more strongly that they were involved in the interaction (Fig. 3C). Emotion categories differed, and the increase in felt involvement for expressions turning forward was stronger for some emotions than for others. See Supplementary Fig. 3 and the accompanying table for differences between emotion categories.


Perceived relevance

Using a repeated-measures ANOVA examining the factors direction (Turn Forward, Turn Away) and emotion (happiness, disgust, fear, sadness, pride, anger, surprise, and neutral), we found a main effect for direction [F(1,30) = 166.3, p < 0.001, ηp² = 0.847], a main effect for emotion [F(7,210) = 7.073, p < 0.001, ηp² = 0.191], and a significant interaction between the two [F(7,210) = 7.697, p < 0.001, ηp² = 0.204]. These results from the second independent validation sample show that when observing expressions turning towards them, observers felt more strongly that they, themselves, were the target of the expression (Fig. 3D). Emotions differed, and the increase in felt relevance for expressions turning forward was stronger for some emotions than for others. See Supplementary Fig. 4 and the accompanying table for the ratings by emotion.
Fig. 4

Wavelet spectrographs for the critical segments of the videos as seen in central and occipital sites (scaled separately) (We thank the anonymous reviewer for suggesting the different scaling.)

To summarize the behavioral results, the directionality of the expressions did not alter recognition accuracy or perceived intensity. This finding is important because it rules out a potential confound: emotional ambiguity or intensity may themselves influence mu suppression. By contrast, and in good accordance with prior work, participants who rated faces turning towards them (vs. away from them) felt that they were the target of the expressions and felt more involved in the interaction with those expressions.

EEG results

In all of the following analyses, the suppression index was analyzed using repeated-measures ANOVA, Bonferroni corrected wherever multiple comparisons were made. Degrees of freedom were corrected using the Greenhouse-Geisser epsilon values (G-GE) when needed. We found no significant main effect of gender [F(1,29) = 1.7, p > 0.1] and no interactions with gender; therefore, data were pooled across genders.

Locating the critical segments of the videos

A visual inspection of the wavelet spectrographs showed that suppression in the 8–13 Hz range was strongest between the second and fifth seconds of the presented stimuli (Fig. 4). An examination of the videos confirmed that this was the point at which the actors expressed the facial expression and most of the action occurred. We therefore analyzed the last 3 s of each clip.

Turn away versus forward

Using a repeated-measures ANOVA examining the factors hemisphere (Left, Right) and direction (Turn Away, Turn Forward) in the central electrodes (C3, C4), we found, as predicted, a significant main effect for direction: faces turning towards the viewer induced more suppression [M = −0.28, SE = 0.04] than faces turning away from the viewer [M = −0.23, SE = 0.04], [F(1,30) = 9.15, p = 0.005, ηp² = 0.234]. No effect was found for hemisphere [F(1,30) = 1.68, p > 0.2] or for the direction × hemisphere interaction [F(1,30) = 1.32, p > 0.2] (see Fig. 5A).
Fig. 5

Mu suppression for the turn away and turn forward conditions, as measured in the last 3 s, in central (A) and occipital (B) sites

To verify that we were measuring mu suppression over sensorimotor cortex, and not a general attentional effect, we conducted the same analysis over occipital regions (O1, O2). A significant main effect was found for hemisphere, indicating that the right hemisphere [M = −0.84, SE = 0.09] showed more suppression than the left hemisphere [M = −0.71, SE = 0.08], [F(1,30) = 10.9, p < 0.005, ηp² = 0.267]. No effect was found for direction [F(1,30) = 1.39, p > 0.2] or for the hemisphere × direction interaction [F(1,30) = 0.32, p > 0.8] (Fig. 5B).3


Discussion

In the present study we investigated the effects of social relevance on mu suppression (8–13 Hz), an established measure of neural mirroring activity, by manipulating the direction of facial expressions turning towards or away from the observer. The results affirmed our main hypothesis, that mu suppression is stronger for facial expressions turning towards the observer compared to those turning away from the observer. This finding supports the notion that MNs are sensitive to the relevance of observed cues and may be involved in the ability of an observer to evaluate the relevance of facial expressions: the more relevant the stimuli are to the observer, the more activation of the MNs is seen.

As previously described, facial expressions turning towards and away from the observer were identical except for the fact that they were filmed from two different angles. The finding that such similar stimuli evoke different activation of the MNs supports the notion that the MNs are involved not only in rather low perceptual processes of action perception but also in higher cognitive processes, such as social interactions (Oberman et al., 2007; Perry et al., 2011). Our findings are in line with accumulating data presenting the MNs as a mechanism supporting the subtleties and complexities of interpersonal interaction.

Previous reports have suggested an innate preference for direct as opposed to averted gaze in newborns looking at neutral faces (Farroni, Csibra, Simion, & Johnson, 2002). However, it is yet unclear what happens throughout life and specifically in the case of perception of emotional facial expressions. Associative accounts postulate that the mirror properties of the human MNs are not wholly innate or fixed and thus may be modulated by experience (Cook et al., 2014). One possibility is that an innate preference to direct gaze aids in learning the heightened relevance of emotional expressions directed towards an individual, as opposed to away from an individual. Viewers learn that although these emotional displays are perceptually similar, they convey very different socio-emotional relevance (George, Driver & Dolan, 2001; Kampe, Frith & Frith, 2003).

Our findings make good ecological sense because it is economically beneficial to have a brain mechanism that not only simulates motor actions, but also filters the relevant information and prioritizes it. Hamilton (2013) suggests that MNs play a role in our ability to respond to the social world around us in real time and in a socially appropriate fashion. The appropriate response to a facial expression facing the observer will, in most instances, differ from the response to a facial expression targeted at someone standing beside them. Whereas in the first case one would most probably act rather quickly, in the latter one may even not act at all.

As previously suggested, being the direct target of an expression may potentially induce more attention when compared to not being the target. When studying mu suppression one needs to be especially sensitive to distinguishing between motor originated activation and occipital attentional mechanisms (see also Perry et al., 2011). In the current study we took several measures to try and avoid potential confounding in the form of attentional mechanisms affecting our results.

First, we measured suppression not only over motor regions but also over occipital ones. This analysis enabled a better assessment of whether differences in mu suppression truly reflect differences in motor activation. Attentional effects would have produced significant differences in alpha suppression at occipital sites between the turn-forward and turn-away conditions; the fact that we found no such difference suggests that our findings are not merely due to attentional mechanisms.

Second, poorly recognized and ambiguous expressions may demand more attention. However, our findings confirm that recognition accuracy was not affected by directionality, suggesting that the differences in mu suppression did not result from attentional differences driven by recognition difficulty.

Finally, mu suppression is believed to be a manifestation of neural motor resonance, which is suggested to reflect the simulation of observed action. The more intense an expression is, the more motor activation, and hence motor resonance, it should induce; more intense stimuli may also engage more attention simply because more is happening. We therefore evaluated whether there were any systematic differences in perceived intensity between the two directions of the expressions and found none, confirming that the observed effects were not due to intensity levels.

While previous work suggested that the recognition of still emotional expressions may be differentially influenced by gaze direction (e.g., Adams & Kleck, 2003; Hess, Adams & Kleck, 2007) such findings are typically found with low intensity or ambiguous expressions (e.g., Adams & Kleck, 2005; Sander et al., 2007). By contrast, when expressions are intense and unambiguous, as in the case of the current study, the effect of gaze on emotion recognition may be significantly diminished or absent (e.g, Bindemann et al., 2008; Graham & LaBar, 2007; N’Diaye et al., 2009). Nevertheless, while our participants recognized the expressions irrespective of direction, the relevance of the expression to them differed dramatically when they were the target of the expression.

Although not the focus of this paper, we found right lateralization in occipital sites. This may reflect the right-hemisphere dominance in the perception of faces generally and of emotional expressions in particular (see, for example, Adolphs, Damasio, Tranel, & Damasio, 1996; Coolican, Eskes, McMullen, & Lecky, 2008).

In future studies it would be interesting to examine whether observing different emotion categories yields differences in mu suppression. Following Adams and Kleck's findings (Adams & Kleck, 2003), it would be intriguing to investigate, using low-intensity or ambiguous expressions, whether patterns of mu suppression differ between approach-oriented emotions (anger and joy) and avoidance-oriented emotions (fear and sadness) when the expression faces the observer versus turns away. A second baseline of neutral movements (such as an actor chewing gum) would allow further exploration of the motor system's contribution to the cognitive aspects of social interaction, by comparison with affective movements (facial expressions).

In addition, previous work has discussed the interaction between head and gaze direction and the possible evolutionary advantage of gaze cues (Langton, 2000; Tomasello, Hare, Lehmann, & Call, 2007). The structure of the human eye enables rapid analysis of another's point of focus, a possible evolutionary advantage in humans that promotes social interaction. Still, understanding where another individual is directing attention is a more complex task that also involves head orientation (Langton, Watt, & Bruce, 2000). In this context, an intriguing question is whether it is gaze or head orientation that drives our effects. The present study did not allow us to tease the two apart, but future work could look further into this interaction.

Our general aim was to learn about the processing of socially relevant stimuli in real-life interactive engagements. The stimuli we used have important advantages: the expressions are prototypical, easily recognized, and highly consistent across models. However, they also carry disadvantages, as the expressions are exaggerated and artificially posed. Additionally, although we instructed participants to imagine that they were engaged in a real interaction with the observed characters, we do not know to what extent they were actually able to comply with this request.

To summarize, our findings are in line with previous studies suggesting that mirror neurons (MNs) play a role in our social processing abilities. Specifically, we show that the relevance of the social stimuli to the observer plays an important role in activating the MN system.


  1.

    Due to a technical error, the ratings of social relevance and felt involvement were collected from separate groups of participants. This worked to our benefit, as these ratings were not contaminated by the recognition-accuracy and intensity ratings, which could potentially have biased judgments of social relevance and felt involvement.

  2.

    EEG experiments require averaging over a large number of stimuli; therefore, including two independent variables (emotion as well as direction) would have led to an extremely long experiment.

  3.

    Suppression over sensorimotor cortex is sometimes also reported in the beta rhythm (15–25 Hz; e.g., Hari, 2006). We therefore ran a similar analysis on the beta frequency range, which yielded no significant results for direction [F(1,30) = 1.56, p > 0.2], hemisphere [F(1,30) = 0.12, p > 0.9], or their interaction [F(1,30) = 0.48, p > 0.8].


Author Notes

This work was supported by an Israel Science Foundation grant [ISF #1140/13] and an EU Career Integration Grant [CIG #618597], both to Hillel Aviezer. This research was originally initiated with Prof. Shlomo Bentin, who was sadly killed in a car accident before the completion of this work.


  1. Adams, R. B., & Kleck, R. E. (2003). Perceived gaze direction and the processing of facial displays of emotion. Psychological Science, 14(6), 644–647.
  2. Adams, R. B., & Kleck, R. E. (2005). Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion, 5(1), 3–11. doi: 10.1037/1528-3542.5.1.3
  3. Adolphs, R., Damasio, H., Tranel, D., & Damasio, A. R. (1996). Cortical systems for the recognition of emotion in facial expressions. The Journal of Neuroscience, 16(23), 7678–7687.
  4. Bard, K. (2007). Neonatal imitation in chimpanzees (Pan troglodytes) tested with two paradigms. Animal Cognition, 10(2), 233–242. doi: 10.1007/s10071-006-0062-3
  5. Bindemann, M., Mike Burton, A., & Langton, S. R. H. (2008). How do eye gaze and facial expression interact? Visual Cognition, 16(6), 708–733. doi: 10.1080/13506280701269318
  6. Casile, A., Caggiano, V., & Ferrari, P. F. (2011). The mirror neuron system: A fresh view. The Neuroscientist, 17(5), 524–538. doi: 10.1177/1073858410392239
  7. Cheng, Y., Yang, C.-Y., Lin, C.-P., Lee, P.-L., & Decety, J. (2008). The perception of pain in others suppresses somatosensory oscillations: A magnetoencephalography study. NeuroImage, 40(4), 1833–1840. doi: 10.1016/j.neuroimage.2008.01.064
  8. Cook, R., Bird, G., Catmur, C., Press, C., & Heyes, C. (2014). Mirror neurons: From origin to function. Behavioral and Brain Sciences, 37, 177–192. doi: 10.1017/s0140525x13000903
  9. Coolican, J., Eskes, G. A., McMullen, P. A., & Lecky, E. (2008). Perceptual biases in processing facial identity and emotion. Brain and Cognition, 66(2), 176–187. doi: 10.1016/j.bandc.2007.07.001
  10. di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992). Understanding motor events: A neurophysiological study. Experimental Brain Research, 91, 176–180.
  11. Emery, N. J. (2000). The eyes have it: The neuroethology, function and evolution of social gaze. Neuroscience and Biobehavioral Reviews, 24(6), 581–604.
  12. Farroni, T., Csibra, G., Simion, F., & Johnson, M. H. (2002). Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences of the United States of America, 99(14), 9602–9605. doi: 10.1073/pnas.152159999
  13. Ferrari, P. F., Paukner, A., Ionica, C., & Suomi, S. J. (2009). Reciprocal face-to-face communication between rhesus macaque mothers and their newborn infants. Current Biology, 19(20), 1768–1772. doi: 10.1016/j.cub.2009.08.055
  14. Ferrari, P. F., Vanderwert, R. E., Paukner, A., Bower, S., Suomi, S. J., & Fox, N. A. (2012). Distinct EEG amplitude suppression to facial gestures as evidence for a mirror mechanism in newborn monkeys. Journal of Cognitive Neuroscience, 24(5), 1165–1172. doi: 10.1162/jocn_a_00198
  15. Gallese, V., & Goldman, A. (1998). Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences, 2(12), 493–501.
  16. George, N., Driver, J., & Dolan, R. J. (2001). Seen gaze-direction modulates fusiform activity and its coupling with other brain areas during face processing. NeuroImage, 13, 1102–1112. doi: 10.1006/nimg.2001.0769
  17. Goldman, A. I. (1989). Interpretation psychologized. Mind & Language, 4(3), 161–185. doi: 10.1111/j.1468-0017.1989.tb00249.x
  18. Graham, R., & LaBar, K. S. (2007). Garner interference reveals dependencies between emotional expression and gaze in face perception. Emotion, 7(2), 296–313. doi: 10.1037/1528-3542.7.2.296
  19. Hamilton, A. F. D. C. (2013). The mirror neuron system contributes to social responding. Cortex, 49(10), 2957–2959. doi: 10.1016/j.cortex.2013.08.012
  20. Hari, R. (2006). Action-perception connection and the cortical mu rhythm. Progress in Brain Research, 159, 253–260. doi: 10.1016/S0079-6123(06)59017-X
  21. Hess, U., Adams, R. B., & Kleck, R. E. (2007). Looking at you or looking elsewhere: The influence of head orientation on the signal value of emotional facial expressions. Motivation and Emotion, 31(2), 137–144. doi: 10.1007/s11031-007-9057-x
  22. Huffmeijer, R., Alink, L. R. A., Tops, M., Bakermans-Kranenburg, M. J., & van IJzendoorn, M. H. (2012). Asymmetric frontal brain activity and parental rejection predict altruistic behavior: Moderation of oxytocin effects. Cognitive, Affective, & Behavioral Neuroscience, 12(2), 382–392. doi: 10.3758/s13415-011-0082-6
  23. Jung, T. P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., & Sejnowski, T. J. (2000). Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clinical Neurophysiology, 111(10), 1745–1758. doi: 10.1016/S1388-2457(00)00386-2
  24. Kampe, K. K. W., Frith, C. D., & Frith, U. (2003). "Hey John": Signals conveying communicative intention toward the self activate brain regions associated with "mentalizing," regardless of modality. The Journal of Neuroscience, 23(12), 5258–5263.
  25. Keuken, M. C., Hardie, A., Dorn, B. T., Dev, S., Paulus, M. P., Jonas, K. J., & Pineda, J. A. (2011). The role of the left inferior frontal gyrus in social perception: An rTMS study. Brain Research, 1383, 196–205. doi: 10.1016/j.brainres.2011.01.073
  26. Kilner, J. M., Marchant, J. L., & Frith, C. D. (2006). Modulation of the mirror system by social relevance. Social Cognitive and Affective Neuroscience, 1(2), 143–148. doi: 10.1093/scan/nsl017
  27. Langton, S. R. H., Watt, R. J., & Bruce, V. (2000). Do the eyes have it? Cues to the direction of social attention. Trends in Cognitive Sciences, 4(2), 50–59. doi: 10.1016/S1364-6613(99)01436-9
  28. Langton, S. R. H. (2000). The mutual influence of gaze and head orientation in the analysis of social attention direction. The Quarterly Journal of Experimental Psychology Section A, 53(3), 825–845. doi: 10.1080/713755908
  29. Meltzoff, A. N., & Moore, M. K. (1977). Imitation of facial and manual gestures by human neonates. Science, 198(4312), 74–78. doi: 10.1126/science.897687
  30. Molenberghs, P., Cunnington, R., & Mattingley, J. B. (2012). Brain regions with mirror properties: A meta-analysis of 125 human fMRI studies. Neuroscience and Biobehavioral Reviews, 36(1), 341–349. doi: 10.1016/j.neubiorev.2011.07.004
  31. Moore, A., Gorodnitsky, I., & Pineda, J. (2012). EEG mu component responses to viewing emotional faces. Behavioural Brain Research, 226(1), 309–316. doi: 10.1016/j.bbr.2011.07.048
  32. N'Diaye, K., Sander, D., & Vuilleumier, P. (2009). Self-relevance processing in the human amygdala: Gaze direction, facial expression, and emotion intensity. Emotion, 9(6), 798–806. doi: 10.1037/a0017845
  33. Oberman, L. M., Pineda, J. A., & Ramachandran, V. S. (2007). The human mirror neuron system: A link between action observation and social skills. Social Cognitive and Affective Neuroscience, 2(1), 62–66. doi: 10.1093/scan/nsl022
  34. Oberman, L. M., & Ramachandran, V. S. (2007). The simulating social mind: The role of the mirror neuron system and simulation in the social and communicative deficits of autism spectrum disorders. Psychological Bulletin, 133(2), 310–327. doi: 10.1037/0033-2909.133.2.310
  35. Perry, A., Bentin, S., Bartal, I. B.-A., Lamm, C., & Decety, J. (2010a). "Feeling" the pain of those who are different from us: Modulation of EEG in the mu/alpha range. Cognitive, Affective, & Behavioral Neuroscience, 10(4), 493–504. doi: 10.3758/CABN.10.4.493
  36. Perry, A., Troje, N. F., & Bentin, S. (2010b). Exploring motor system contributions to the perception of social information: Evidence from EEG activity in the mu/alpha frequency range. Social Neuroscience, 5(3), 272–284. doi: 10.1080/17470910903395767
  37. Perry, A., Stein, L., & Bentin, S. (2011). Motor and attentional mechanisms involved in social interaction: Evidence from mu and alpha EEG suppression. NeuroImage, 58(3), 895–904. doi: 10.1016/j.neuroimage.2011.06.060
  38. Pfurtscheller, G., Stancák, A., & Neuper, C. (1996). Event-related synchronization (ERS) in the alpha band: An electrophysiological correlate of cortical idling: A review. International Journal of Psychophysiology, 24(1–2), 39–46.
  39. Pineda, J. (2005). The functional significance of mu rhythms: Translating "seeing" and "hearing" into "doing". Brain Research Reviews, 50(1), 57–68. doi: 10.1016/j.brainresrev.2005.04.005
  40. Pineda, J. A., & Hecht, E. (2009). Mirroring and mu rhythm involvement in social cognition: Are there dissociable subcomponents of theory of mind? Biological Psychology, 80(3), 306–314. doi: 10.1016/j.biopsycho.2008.11.003
  41. Popov, T., Miller, G. A., Rockstroh, B., & Weisz, N. (2013). Modulation of α power and functional connectivity during facial affect recognition. The Journal of Neuroscience, 33(14), 6018–6026. doi: 10.1523/JNEUROSCI.2763-12.2013
  42. Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192. doi: 10.1146/annurev.neuro.27.070203.144230
  43. Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2(9), 661–670.
  44. Sander, D., Grandjean, D., Kaiser, S., Wehrle, T., & Scherer, K. R. (2007). Interaction effects of perceived gaze direction and dynamic facial expression: Evidence for appraisal theories of emotion. European Journal of Cognitive Psychology, 19(3), 470–480. doi: 10.1080/09541440600757426
  45. Sauseng, P., & Klimesch, W. (2008). What does phase information of oscillatory brain activity tell us about cognitive processes? Neuroscience and Biobehavioral Reviews, 32(5), 1001–1013. doi: 10.1016/j.neubiorev.2008.03.014
  46. Schrammel, F., Pannasch, S., Graupner, S.-T., Mojzisch, A., & Velichkovsky, B. M. (2009). Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience. Psychophysiology, 46(5), 922–931. doi: 10.1111/j.1469-8986.2009.00831.x
  47. Tomasello, M., Hare, B., Lehmann, H., & Call, J. (2007). Reliance on head versus eyes in the gaze following of great apes and human infants: The cooperative eye hypothesis. Journal of Human Evolution, 52(3), 314–320. doi: 10.1016/j.jhevol.2006.10.001
  48. Trilla Gros, I., Panasiti, M. S., & Chakrabarti, B. (2015). The plasticity of the mirror system: How reward learning modulates cortical motor simulation of others. Neuropsychologia, 70, 255–262. doi: 10.1016/j.neuropsychologia.2015.02.033
  49. van der Gaag, C., Minderaa, R. B., & Keysers, C. (2007). Facial expressions: What the mirror neuron system can and cannot tell us. Social Neuroscience, 2(3–4), 179–222. doi: 10.1080/17470910701376878
  50. van der Schalk, J., Hawk, S. T., Fischer, A. H., & Doosje, B. (2011). Moving faces, looking places: Validation of the Amsterdam Dynamic Facial Expression Set (ADFES). Emotion, 11(4), 907–920. doi: 10.1037/a0023853
  51. Whitmarsh, S., Nieuwenhuis, I. L. C., Barendregt, H. P., & Jensen, O. (2011). Sensorimotor alpha activity is modulated in response to the observation of pain in others. Frontiers in Human Neuroscience, 5, 91. doi: 10.3389/fnhum.2011.00091

Copyright information

© Psychonomic Society, Inc. 2016

Authors and Affiliations

  • Noga S. Ensenberg (1)
  • Anat Perry (2)
  • Hillel Aviezer (1)

  1. Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
  2. Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, USA
