Measuring Human-Robot Interaction Through Motor Resonance


In the last decades, the introduction of robotic devices in fields such as industry, hazardous environments, and medicine has notably improved working practices. The availability of a new generation of humanoid robots for everyday activities in human-populated environments may entail an even wider revolution. Indeed, not only domestic activities but also social behaviors will adapt to continuous interaction with a completely new kind of social agent.

In light of this scenario, it becomes crucial to design robots suited to natural cooperation with humans and, at the same time, to develop quantitative methods to measure human-robot interaction (HRI). Motor resonance, i.e., the activation of the observer's motor control system during action perception, has been suggested to be a key component of human social behavior, and as such is thought to play a central role in HRI.

In the literature there are reports of robots being used as tools to understand the human brain. The aim of this review is to offer a different perspective, suggesting that human responses can become a tool to measure and improve robots' interactional attitudes. In the first part of the paper the notion of motor resonance and its neurophysiological correlates are introduced. Subsequently we describe motor resonance studies on the perception of robotic agents' behavior. Finally we introduce proactive gaze and automatic imitation, two techniques adopted in human motor resonance studies, and we present the advantages that would follow from their application to HRI.


Social interaction is a crucial element for human progress and evolution because it allows knowledge sharing and cooperation. In turn, the technological progress usually associated with human development greatly influences the way people interact with each other. In particular, the last decades saw the introduction of both new communication tools and new interaction partners. For example, the use of robotic devices in fields such as industry, hazardous environments, and medicine has notably modified working practices. The availability of a new generation of humanoid robots for everyday activities in human-populated environments may entail an even wider revolution [1]. Indeed, in the future people will be faced with an increasing number of these non-biological agents, which are expected to co-exist with humans, sharing the same working space and assisting them while exhibiting the same dexterity and body movement capabilities as humans. Therefore, not only domestic habits but also human social behaviors will evolve toward continuous interaction with this completely new kind of “social agent”: the cognitive humanoid robot.

But how do humans relate to this emerging technology? Much effort is devoted today, from different perspectives, to allowing close interaction between these robots and people. In the robotics domain the main concern is to build safe and robust, technologically innovative and functionally useful devices [2–4]. On the other side, neuroscientists have used robots or similar artificial agents as tools to investigate human brain functions [5, 6], exploiting robotic bodies as artificial, controllable displays of human behavior.

At present, human-robot interaction (HRI) is the subject of substantial research focused on robot design, which tries to understand the physical and behavioral features of a humanoid robot that prompt natural HRI. For instance, the “Uncanny Valley” hypothesis [7] proposes that the emotional response of a human observer becomes increasingly positive as the robot appears more human-like, up to a point beyond which the response quickly turns into strong repulsion. However, if the robot's appearance becomes almost indistinguishable from a human being, the emotional response becomes positive again and approaches human-to-human empathy levels. In this regard, Di Salvo et al. [8], aiming to identify the most appropriate threshold of robot humanness, suggested that robot appearance needs to balance “humanness” and “robotness” in order to stimulate pleasant social interactions and, at the same time, prevent false beliefs about the robot's capabilities.

Recently, Bartneck et al. [9] proposed the use of standardized questionnaires to measure users' perception of robots and estimate HRI based on five concepts: anthropomorphism, animacy, likability, perceived intelligence, and perceived safety. Although interesting, questionnaires only assess conscious evaluations of the robotic devices and do not allow a complete quantification of HRI. Moreover, as suggested by Dehais et al. [10], they do not take into account some cognitive and physical aspects of HRI. To circumvent these issues, Dehais et al. selected three physiological measurements to describe participants' responses when interacting with a mobile manipulator robot: galvanic skin conductance, muscle activity, and ocular activity. In the same vein, in order to assess the efficacy of HRI during robot therapy interventions for patients with dementia, Wada et al. [11] acquired patients' EEG signals. Furthermore, Rani et al. [12] presented a novel methodology for online stress detection based on heart rate measurement, with the aim of giving the robot the ability to understand the user's anxiety during cooperative tasks.

All these innovative studies aim at quantifying the unconscious processes which induce humans to perceive another agent (either human or robot) as an interaction partner. However, the physiological measurements adopted are not tightly related to the mechanisms at the basis of social interaction. One of those basic mechanisms is the coupling between action and perception, also named “motor resonance”, i.e., the automatic activation, during action perception, of the perceiver's motor system [13]. The word “resonance” was chosen because, just as two identical tuning forks placed close together vibrate in unison when one of the two starts vibrating, during action observation the two motor brains “resonate” because they share a similar motor repertoire. This does not exclude the existence, in the brain, of anticipatory mechanisms that give motor resonance a predictive connotation.

Thus, measuring HRI through motor resonance could improve on the existing studies in that it allows investigating specifically the unconscious human responses to robotic agents [14]. Moreover, considering that a given action, in order to be understood, must be shared on both sides of the action-perception representation system, motor resonance operates on a common (shared) motor knowledge that is the basis of any social interactive process.

This review mainly focuses on this theoretical assumption, which appears particularly promising for the future of HRI studies. In the first section we introduce the notion of motor resonance and its neurophysiological correlates. Afterwards, we describe behavioral, neurophysiological, and neuroimaging studies where the measurement of motor resonance has been applied to the perception of robotic agents. Finally, we present two additional behavioral phenomena associated with the motor resonance mechanism, namely proactive gaze and automatic imitation. The application of these techniques, originally adopted in the study of human-human interaction (HHI), could improve HRI protocols as well. Indeed, proactive gaze and automatic imitation allow studying the impact of a robot on human behavior in situations in which subjects are free to move and to interact with a real agent, instead of observing a video or even being constrained in an imaging device, as in the case of fMRI experiments. The use of such more ecological experimental settings could markedly improve the naturalness of the interaction, providing additional information about the natural human perception of robots.

The Neurophysiological Basis of Motor Resonance: The Mirror-Neuron System

Motor resonance is thought to be associated with the activation of the mirror-neuron system (MNS; for a comprehensive review see [15]). This system purportedly gives rise to a series of “resonance behaviors” in which, during the observation of actions performed by others, motor representations congruent with the observed actions become automatically activated in the observer's brain [13]. It has been suggested that this automatic, non-cognitively mediated mechanism could yield the kind of “mutual understanding” at the basis of everyday HHI [16].

Mirror neurons (MN), originally discovered in area F5 of the macaque brain [17], are cells that discharge both when individuals perform a given motor act and when they observe others performing the same motor act [18–20]. Support for an analogous system in humans was provided for the first time by Fadiga et al. [21] and was afterwards confirmed by several neurophysiological and neuroimaging studies (see [15, 22] for reviews). Furthermore, a recent work validated the presence of these cells in the human brain through single-neuron recordings during surgical intervention in patients with severe intractable epilepsy [23, 24].

This mirror mechanism is organized into two main cortical networks. The first is formed by regions in the parietal lobe and premotor cortices [25, 26], and seems to code mostly for the goal of the observed motor acts [6, 27]. The second appears to be located in the insula and anterior cingulate cortex, and is thought to be involved in mirroring others' emotions [28, 29]. Moreover, in humans the MNS is involved in imitative [16, 20] and social/cognitive behaviors [30], and in speech/language processing [31]. Because the MNS is thought to unify action perception and action execution and to mediate a significant part of human social behavior [15, 16, 20], its involvement in HRI needs to be studied with particular attention.

Motor Resonance in HRI

Conflicting data exist about the occurrence of motor resonance, and thus about the response of the MNS, to humanoid artifacts. Indeed, while some authors found behavioral [32], neurophysiological [33], and neuroimaging [6] evidence in favor of a similarity between human-human interactions and HRIs, other groups came to the opposite conclusion [5, 34].

Some doubts about the possibility of motor resonance during the presentation of robotic stimuli come from the early studies on MN in the monkey [18, 19]. These works showed that while the observation of a human grasping induced neural firing, grasping performed with a mechanical tool (a pair of pliers) did not induce the same activation. However, recent findings [35, 36] demonstrated that after prolonged training with a tool, tool-object interactions also become effective in triggering F5 responses. Therefore, the presence of a mechanical device is not sufficient per se to eliminate motor resonance. Nonetheless, in the case of interaction with humanoid robots it would be more interesting to test whether MN can be activated automatically, without previous training. Indeed, this would demonstrate that the robot is perceived as an agent similar to a human being rather than as a tool.

Further neurophysiological and neuroimaging studies have investigated MNS activation in humans in the presence of biological and robotic stimuli. In a Positron Emission Tomography (PET) study, Tai et al. [34] presented subjects with videos in which either a human or a robotic agent grasped objects. They found that observation of human grasping elicited a significant neural response in the left premotor cortex, while observation of the same action performed by the robot model failed to induce the same activation. The authors explained these results by suggesting that the human MNS is biologically tuned, though without specifying which property of the agent (either the shape or the kinematics) caused this difference in neural responses. In agreement with these results, another PET study [37] investigated whether observation of real or virtual hands engages the same perceptual and visuomotor brain processes. The findings showed different brain responses for these two kinds of stimuli, indicating that only the perception of biological agents' actions activates the areas associated with internal motor representations. From these findings, then, motor resonance seems to be evoked only by biological agents.

Nevertheless, this result was partly refuted by an fMRI experiment in which Gazzola et al. [6] compared the neural activations induced by the observation of human and robotic actions. In their task volunteers were shown videos of simple and complex movements performed either by a human actor or by an industrial robot. The kinematics of the robot was clearly non-biological, characterized by single-degree-of-freedom motion at constant velocity. However, the authors found the same activation for the sight of both human and robotic actions. Gazzola and colleagues explained the contrast with earlier results by showing that presenting exactly the same robotic action several times does not activate the MNS; they suggested that previous studies failed to find mirror activations in response to robotic actions because of the repetitiveness of the stimuli presented. The MNS activation observed for robotic actions with non-biological kinematics seems to indicate that the human MNS can encode external events whose goal we understand, even when their kinematics does not exactly match our motor programs. Accordingly, Chaminade et al. [38] showed no differences in the activation of traditional MN brain areas during the observation of robotic and human demonstrators.

A similar result was found in an EEG study by Oberman and colleagues [33]. The goal of their study was to characterize the properties of the visual stimuli that evoke MNS activation. In particular, they assessed whether the observation of a humanoid hand performing either a grasping action, or a pantomimed grasp with no target object, was sufficient to produce EEG μ power suppression, considered a selective measure of MNS activity (for a review see [39]). The results of two experiments revealed that EEG activity was modulated to the same extent by human and robot motions, suggesting no difference in MNS activation due to non-biological stimuli. Furthermore, Shimada [40] measured motor area activity using near-infrared spectroscopy during the observation of computer-graphics renderings of a human or a robot grasping an object with either biological or mechanical kinematics. He found that MNS activity is modulated by the congruency between the appearance and the kinematics of the agent, and that robotic appearance does not automatically hinder motor resonance. Hence, the MNS seems to resonate to purely robotic stimuli, although incongruence between appearance and kinematics reduces its activation.
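As a rough sketch of how μ suppression is commonly quantified (our own illustration, under assumed conventions, not the analysis pipeline of [33]): power in the μ band (8–13 Hz) during action observation is compared with power during a baseline period, often as a log ratio, so that negative values indicate suppression.

```python
import numpy as np
from numpy.fft import rfft, rfftfreq

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Mean periodogram power of `signal` (sampled at `fs` Hz) in [lo, hi] Hz."""
    freqs = rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def mu_suppression(observe, baseline, fs):
    """Log ratio of mu-band power (observation vs. baseline).

    Negative values indicate suppression during observation, the signature
    taken as an index of MNS engagement in studies such as [33].
    """
    return np.log(band_power(observe, fs) / band_power(baseline, fs))
```

In practice the ratio would be computed per electrode over sensorimotor sites and averaged across trials; the sketch keeps only the core log-ratio logic.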

To sum up, although the first neuroimaging studies [34, 37] excluded motor resonance when the action was performed by a non-biological agent, subsequent research [6, 33, 38, 40] has cast doubt on this conclusion. According to these more recent studies, robotic agents can evoke MNS activity similar to that evoked by humans.

The occurrence of motor resonance in the presence of robotic agents has been examined not only with neurophysiological and neuroimaging methods but also behaviorally. The behavioral quantification of motor resonance when observing humanoid and non-humanoid robots performing actions has relied mainly on motion interference and motor priming mechanisms. Motion interference can take place when an agent executes a movement while observing someone else performing a different motion: if the observed motion induces a distortion in the execution, motion interference occurs. Kilner et al. [5] first proposed this measure, in 2003, to estimate motor resonance behaviorally. The motion interference mechanism was quantified by evaluating the variance of the participants' trajectory while they observed external movements. In this work they demonstrated that the observation of horizontal repetitive arm movements increases the variance of simultaneously performed vertical movements. This interference, however, did not occur when the demonstrator was not a biological agent but a robotic, non-humanoid arm. The authors proposed these findings as evidence of separate processing in the brain of biological and non-biological movements. It remained unclear, however, which aspect of the human movement, absent in the robotic one, triggered the interference phenomenon.

To investigate this topic, a series of experiments was conducted by a different group, who replicated Kilner's study while modifying the robotic demonstrator [14, 32, 41]. In particular, the industrial robot adopted by Kilner et al. was replaced by a complete humanoid robot, DB [42], which provided human-like appearance and was able to produce human-like movements. These changes in the robotic platform gave rise to the opposite result with respect to Kilner's study: the observation of incongruent robotic movements increased the variability of subjects' movements, similarly to what happened for HHI [32]. Interestingly, though the interference effect was qualitatively analogous to the one observed between human subjects, it was quantitatively smaller: the movement variance, which doubled during the observation of humans performing incompatible movements, increased by a factor of 1.5 during the observation of incompatible humanoid actions.

The authors went on to examine the roles of robot shape and movement kinematics by comparing the interference effect when the robot's motion violated the biological motion law (a sinusoidal velocity profile) or respected it (kinematics derived from captured human motion) [41]. The increase in variance for the observation of incongruent motion was significant only in the condition in which the robot presented a biological speed profile. This implies that motion interference is not specific to HHI, but can also be observed, though slightly reduced, in human-humanoid interactions. Moreover, for this movement resonance to occur, not only the form of the observed co-actor matters, but also the kind of motion she/he/it performs. Interestingly, a humanoid robot seems to differ from a generic non-biological stimulus such as a moving object. In fact, humanoid robots seemed to generate interference only when they moved according to the biological law of motion and not with non-biological kinematics, while other artificial stimuli, such as the video clip of a moving ball, produce interference with any kind of velocity profile [43].

Another task used to evaluate the motor resonance mechanism behaviorally is motor priming, i.e., the facilitation of an action's execution by the observation of the same action [44]. Press et al. [45] investigated the timing of hand opening (and closing) in response to the presentation of a human or robotic hand in a compatible (closed) or incompatible (opened) posture. The results showed that the action was initiated faster when it was cued by the compatible stimulus, for both the human and the robotic agent. Human primes proved more effective, producing faster responses, but robotic stimuli were nevertheless able to elicit visuomotor priming. Moreover, in a second study Press et al. [46] demonstrated that sensorimotor training with congruent robotic actions (hand opening/closing) eliminated the human bias previously observed [45]: the training canceled the difference between human and robotic stimuli in determining action facilitation.
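In reaction-time terms, the priming (compatibility) effect in paradigms of this kind is simply the mean latency advantage for compatibly cued responses. A minimal sketch, with illustrative variable names and made-up latencies (not data from [45]):

```python
import statistics

def priming_effect(rt_compatible_ms, rt_incompatible_ms):
    """Mean RT difference in milliseconds.

    Positive values indicate visuomotor priming: faster initiation when the
    observed posture is compatible with the required action.
    """
    return statistics.mean(rt_incompatible_ms) - statistics.mean(rt_compatible_ms)

# A stronger effect for human than for robotic primes, as reported in [45],
# would show up as a larger difference in the human condition
# (hypothetical numbers for illustration only):
human_priming = priming_effect([310, 325, 298], [365, 380, 350])
robot_priming = priming_effect([330, 341, 322], [355, 362, 348])
```

The same statistic, computed per participant and submitted to a group-level test, is what distinguishes "human bias" (human effect larger than robot effect) from its elimination after sensorimotor training [46].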

A paradigm similar to that used in [45] was applied to assess the priming effect (named by the authors “automatic imitation”) in adults affected by autism spectrum disorder (ASD), a neurodevelopmental disability characterized by impaired social interaction and communication [47]. After observing a human or a robotic hand, ASD participants showed an intact priming effect. Moreover, the larger animacy effect obtained in ASD participants (i.e., a greater difference between human and robotic stimuli) than in the control group was interpreted by the authors as a deficiency related to inhibition impairments. Aiming to understand whether robotic stimuli could trigger imitation behaviors in children affected by ASD, Pierno et al. [48] compared the visuomotor priming elicited by a robot and by a human demonstrator in a reach-to-grasp experiment. Participants had to observe the demonstrator's motion and then execute it, and the priming effect was evaluated through kinematic analysis. Their results show that observing a robotic, but not a human, arm movement elicits a visuomotor priming effect (reduced movement duration and anticipated velocity peak) in autistic children, while the opposite holds for typically developing children.

In two recent works Liepelt et al. [49, 50] investigated the motor priming effect with non-human agents, testing whether this mechanism is sensitive to the kind of action the observed agent is executing, and whether the direct matching system, associated with the motor resonance mechanism, is influenced by beliefs about movement animacy. In the first study [50] participants observed pictures of two agents (human and wooden hands) performing three kinds of gestures (intransitive, transitive, and communicative), and were asked to perform either congruent or incongruent actions. The results showed that observing incongruent movements affected all the performed motions with the exception of the communicative gestures of the wooden hand. This suggests that motor priming is agent-specific for communicative actions only: the plausibility of a communicative gesture for the agent performing it might influence participants' performance. In the second study [49] the authors manipulated the believed animacy of the stimulus by presenting to two different groups of participants either a human or a wooden hand wearing a leather glove before task execution. Afterwards, during a classical motor priming paradigm [51], the stimulus consisted of an ambiguous hand in a leather glove. The authors found a basic motor priming effect in both groups, but it was stronger when participants attributed the action to the human hand, suggesting a top-down modulation of the motor resonance mechanism [52, 53].

The message that can be derived from this review of neurophysiological, neuroimaging, and behavioral data is that robotic agents can, to a certain degree, evoke motor resonance. The efficacy in activating the resonance mechanism, as expected, varies as a function of both the robot's shape and the way it moves. While at the neurophysiological level MNS activation seems to be present even when a non-biological agent moves with non-biological kinematics [6], some behavioral effects, such as motion interference and priming, apparently require a higher degree of human resemblance or attributed animacy, not only in physical appearance but, in some cases, also in the law of robot motion [14]. It is reasonable to assume that the robotic platforms that evoke a higher degree of resonance may be the best suited to induce humans to interact naturally with them. In favor of this hypothesis, the classic study on the “chameleon effect” [54] showed that participants who had been mimicked by a confederate (a behavior that evokes a high degree of motor resonance in the observer) reported liking him/her more and judged the interaction as smoother. This finding suggests that motor resonance mechanisms may act as a kind of “social glue” [55], stimulating prosocial orientation in general [56] (for a review of human-human joint action studies see [57]). In addition, Chaminade and Cheng [14] have even proposed an explanation of the aforementioned “Uncanny Valley” hypothesis in the framework of motor resonance, suggesting that the sense of eeriness felt in the presence of robots extremely similar, but not identical, to humans derives from missing resonance, due to an imperfect match between the robotic actions and those a human would expect.

Therefore, the study of resonance mechanisms from a robotics point of view could help establish guidelines for the design of new robots explicitly conceived for HRI.

Motor Resonance in HHI: Proactive Gaze and Automatic Imitation Behaviors

In the previous section we described several techniques which were initially adopted to study HHI by examining motor resonance and which, subsequently, have also proved useful for studying HRI. The neurophysiological and neuroimaging approaches have the great advantage of highlighting exactly which brain areas respond to a specific social stimulus. However, these methodologies investigate the human brain in non-ecological situations, which can be a disadvantage when studying social interaction. On the other hand, although behavioral methods do not allow a direct measure of brain activity, they can be performed during more natural interactions. In between lie other techniques that until now have not been extensively applied to HRI, but which could provide interesting insights. These techniques allow the study of two socially relevant phenomena: proactive gaze and automatic imitation. In the following paragraphs we present a brief review of the use of such techniques for the study of human behavior.

Proactive Gaze Behavior

Gaze plays a fundamental role in communication. It continuously informs people about what we are interested in and whether we are sharing the same focus of attention [58]. During the execution of our everyday actions, gaze anticipates the goal of our movements, allowing us to be ready to react to sudden changes in target object position [59]. In addition, a fundamental aspect of human gazing is the ability to anticipate others' goals [60]. Inferring other people's intentions in advance is necessary to interact with them without having to wait for the conclusion of each single sub-movement before cooperating in the achievement of a common goal. This ability is thought to depend on a motor resonance mechanism, based on the activity of the MNS [60, 61]. The observation of other people's actions activates in the observer the motor plan she/he would have used to achieve the same goal. As the observer knows the goal of her/his own covert motor plan well before its completion, she/he also understands in advance the goal of the action she/he is observing. As a consequence, her/his eyes can move directly to the action goal, so that she/he becomes ready to interact with the target object and with the action partner.

Flanagan and Johansson [60] were the first to discover the existence of proactive gaze behavior, demonstrating that when subjects observe an object manipulation task, their gaze predicts significant events (e.g., grasps and contacts) rather than reactively tracking the agent's motion. They asked subjects to perform or observe a block-stacking task and found that the coordination between the observer's gaze and the actor's hand was very similar to the visuo-manual coordination adopted during action execution. In both cases gaze was directed to subsequent action goals, represented in this task by the places where blocks were grasped or released. Interestingly, however, when the hands moving the objects could not be seen, the observer's gaze no longer anticipated the objects' behavior, but started tracking the moving blocks, being reactive rather than predictive. The authors explained these findings in terms of a motor resonance mechanism.

A further study brought evidence in favor of this hypothesis by investigating gaze behavior in infants ranging from 6 to 12 months of age [61]. This particular age range was chosen to compare subjects who have already mastered a grasp-and-transport action (12-month-olds) with a control group who have not (6-month-olds), though the latter are able to show predictive behaviors (6-month-olds can anticipate the reappearance of temporarily occluded objects [62]). According to the MNS hypothesis, proactive gaze reflects the mapping of observed actions onto one's own representation of the same actions [60]; thus, the development of gaze predictivity should follow action learning. The authors found that during the observation of a grasp-and-transport action the 12-month-olds focused on goals as adults do, while the 6-month-olds did not. These results, and the fact that predictivity appeared only when the agent moving the object was visible (and not when objects moved alone), provided further support for the hypothesis of a role of the MNS in the emergence of proactive gaze.

Gaze proactivity seems to be a ubiquitous mechanism, present also in the observation of unpredictable actions. Even if observers do not know in advance the target of the actor's action (for instance, which of two possible blocks she/he will grasp [63], or who the actor will be [64]), they shift their gaze proactively to the target as soon as they become certain about the goal, reaching the contact point with their eyes well before the hand arrives. Anticipatory eye movements during action observation may also occur in a virtual environment, as shown by [65], who replicated the block-stacking task described by [60] on a computer screen. Their results seem to indicate that, rather than the presence of a direct actor-object interaction, gaze prediction depends just on the possibility of identifying an intentional entity causing the movements. The explanation of gaze proactivity in terms of MNS activation has been challenged by Eshuis et al. [66], who suggested that the tendency to anticipate others' goals might not be mediated by a direct matching process (associated with a motor resonance phenomenon), but rather would depend on a general expectation that humans behave in a goal-directed and rational manner (teleological processing). A recent work by Gredebaeck and colleagues indicates that both direct matching and teleological processing are present from early infancy [67]. However, goal anticipation would be mediated only by a direct matching process, while teleological processing would be responsible for the retrospective evaluation of action rationality and would require little or no action experience.
Indeed, the authors demonstrated that while both 6- and 12-month-olds reacted with surprise when presented with feeding actions performed in a non-rational manner (i.e., with food carried toward the hand instead of the mouth), only the 12-month-olds, who had longer experience of being fed, anticipated the goal of the observed action with their eyes. Moreover, the degree of gaze anticipation correlated significantly with their life experience of being fed. Further evidence in favor of the direct matching hypothesis comes from a study which found a tight correlation, in toddlers between 18 and 25 months of age, between manual ability and the ability to anticipate the goal of others' actions [68]. Children who solved a puzzle efficiently were also more proficient at anticipating the goal of similar actions performed by others. This finding is in line with the theory that goal anticipation is facilitated by a matching process that maps the observed action onto one's own motor representation of that action. In the same direction points the interesting study by Rochat et al. [69], which challenged the teleological stance hypothesis by investigating proactive gaze in behaving monkeys observing predictable and unpredictable human actions.

It emerges, therefore, that proactive gaze behavior is tightly connected to the activation of the MNS. As a consequence, evaluating subjects' predictive gaze behavior during action observation could represent an effective indirect measure of motor resonance. In particular, if applied to HRI, the degree of anticipation during the monitoring of robot behaviors would quantify the extent to which humanoids are perceived as intelligent, goal-directed agents rather than just complex machines [70–72].

The simplest implementation of this paradigm would be to replicate the same experiments previously conducted in HHI studies (i.e., observing someone performing a goal-directed action), replacing the human demonstrator with the robotic device to be tested. This way, it would be possible to contrast directly the natural gaze pattern adopted during the observation of human and robot goal-oriented actions [71]. A comparison between the timing of gazing (e.g., the number of predictive saccades) in the two conditions could give an indication of the degree of resonance evoked by the different actors. A proactive gaze behavior during robot observation would be the sign that the robotic platform can activate a motor resonance mechanism, thus indicating its/his/her ability to induce prosocial behaviors [14, 56]. Although gaze monitoring has traditionally been cumbersome and complex, the newest advances in technology have produced light and easy-to-use head-mounted eye trackers, comfortably wearable even by children (see as examples the products by Positive Science, the eye-tracking glasses produced by SMI and Tobii, or the Mobile Eye-XG by ASL). Efforts have also been made to make this kind of device more affordable (see for instance the EyeGuide project by the Grinbath company). Eye tracking is now proposed for applications such as market research in supermarkets and, in general, in natural environments. In the near future, then, it will be possible to easily monitor the gaze of a subject while she/he visits a place where both humans and robots are performing simple actions, as sometimes happens in robotics labs or when robots are tested in everyday environments such as supermarkets or factories.
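In practice, gaze proactivity in such an experiment is usually summarized by the lead of the eyes over the hand at the action target. The following is a minimal illustrative sketch, not code from the cited studies: the timestamps and function names are our own assumptions. Per-trial anticipation is the gaze arrival time minus the hand arrival time, so negative values indicate proactive gaze.

```python
import statistics

def anticipation_times(gaze_arrivals, hand_arrivals):
    """Gaze arrival minus hand arrival at the target, per trial (seconds).
    Negative values mean the eyes reached the goal before the hand did."""
    return [g - h for g, h in zip(gaze_arrivals, hand_arrivals)]

def proactivity_summary(gaze_arrivals, hand_arrivals):
    """Mean anticipation and fraction of predictive (gaze-first) trials."""
    deltas = anticipation_times(gaze_arrivals, hand_arrivals)
    predictive = sum(1 for d in deltas if d < 0)
    return {
        "mean_anticipation_s": statistics.mean(deltas),
        "predictive_fraction": predictive / len(deltas),
    }

# Hypothetical timestamps (s) at which gaze and hand reach the target:
# a strongly anticipated (human) actor vs a mostly reactive (robot) case.
human_actor = proactivity_summary([1.10, 1.25, 1.05], [1.50, 1.55, 1.48])
robot_actor = proactivity_summary([1.60, 1.45, 1.70], [1.50, 1.55, 1.48])
```

Comparing the two summaries (mean anticipation, fraction of predictive saccades) across conditions is exactly the kind of contrast the paradigm above calls for.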

Automatic Imitation Behavior

Imitation is a pervasive phenomenon that influences every aspect of everyday life: from automatic to voluntary behaviors, from the motor to the cognitive domain, from simple human-object interactions to community association [16]. Motor imitation, that is, the possibility of interacting physically with others by sharing a behavioral state, represents a powerful biological resource for cognitive development [73] and social interaction [54]. While in everyday language the word imitation traditionally means “to copy”, i.e., to produce an approximate or precise replica of the observed action, this broad definition covers a large variety of phenomena that can be approached at different levels. At the lowest level, imitation can be described as a sensorimotor transformation: a special case of translation of sensory information into action [74]. Indeed, in order to imitate the actor, the observed action must be translated into the specific motor commands that produce movements visually matching the model’s movements. For this reason, imitation is one of the most interesting behavioral demonstrations of the link between action and perception. Neurophysiological studies suggest that a resonance mechanism (i.e., motor resonance) between action and perception manifests itself, from a behavioral point of view, in the involuntary and automatic contagion that motion observation induces in subsequent action: i.e., automatic imitation.

Automatic imitation [75] can be described as a permeating phenomenon that spreads from the motor to the emotional sphere, and represents the way by which people empathize with and voluntarily decide to imitate others’ actions. Traditionally, the expression automatic imitation has been used as a synonym of motor contagion [76–78], motion compatibility [51], motor mimicry [79], motor priming [45, 46, 80, 81], and visuomotor priming [44].

Here, we would like to retain this important aspect of automatic imitation, but at the same time to approach it from a more general perspective, in which priming mechanisms (responsible for the initial motion facilitation) combine with the subsequent implicit reproduction of some features of the observed action (imitation in the sense of movement reproduction). Indeed, until now most behavioral studies have quantified automatic imitation by means of reaction time and performance error measurements [45, 46, 51, 75, 82–87], thus referring to its “priming component”. Although informative about the facilitation a person experiences in performing a movement after having observed it, these parameters do not allow one to assess which features of the model’s actions are extracted and then used in action planning.

Though traditionally the goal of the observed action was proposed to play a dominant role in action imitation [16, 88, 89], some recent works have focused on motion kinematic features, attributing them a determinant function in automatic imitation. For example, Bove et al. [90] ascertained whether the frequency of self-paced finger movements was modified by prior observation of motion performed at different frequencies. Participants performed a simple finger sequence at different intervals after observing videos of either landscapes or finger opposition movements. Their findings showed that mere action observation influenced participants’ spontaneous movement tempo. Since this modification occurred both with and without explicit instructions to reproduce the stimulus tempo, this result provides an example of off-line automatic imitation, and highlights the importance of the observed timing for action planning and subsequent execution.

Accordingly, Bisio et al. [91] found an off-line automatic imitation of the observed velocity for both abstract and human stimuli. The aim of this study was to understand whether the kinematics of a previously seen stimulus primes the executed action, and whether this effect is sensitive to the kind of stimulus presented. To that end, they proposed a simple imitation paradigm in which a dot or a human demonstrator moved in front of the participant, who was instructed either to reach the final position of the stimulus or to imitate its motion. The results showed that participants’ movements were automatically contaminated by stimulus velocity. This effect was not affected by the kind of stimulus used, i.e., motor responses were influenced in the same manner after dot or human observation. In contrast, automatic imitation was sensitive to the stimulus kinematics: the contagion disappeared when the dot kinematics violated the biological laws of motion, suggesting that automatic imitation mechanisms are tuned by the possibility to match the external movement with an internal motor representation (direct matching hypothesis [92]).

Another example of automatic imitation of the observed movement comes from Tia et al.’s [93] study on postural reactions during the observation of postural imbalance. Participants looked at an upright point-light display (i.e., an impoverished display composed of moving dots, obtained by placing small light sources at the major joints of the human body, first used by Johansson [94]) of a gymnast balancing on a rope in an unstable manner, with a large center of pressure (CoP) excursion area. Participants automatically reacted to this stimulus by increasing both their CoP excursion area and their antero-posterior CoP displacement, suggesting an on-line automatic imitation of the observed posture.
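The two quantities reported in that study are standard posturography indices and are straightforward to compute from force-plate data. A minimal sketch follows, assuming CoP samples are already available as coordinate lists (everything here is an illustrative assumption, not code from [93]); the excursion area is estimated as the 95% prediction ellipse of the CoP trajectory, using the 0.95 chi-square quantile with 2 degrees of freedom (approximately 5.991).

```python
import math

def cop_ellipse_area(cop_x, cop_y, chi2_95=5.991):
    """Area of the 95% prediction ellipse of the CoP trajectory
    (pi * chi2 * sqrt(det(covariance)), in squared input units)."""
    n = len(cop_x)
    mx = sum(cop_x) / n
    my = sum(cop_y) / n
    sxx = sum((x - mx) ** 2 for x in cop_x) / (n - 1)
    syy = sum((y - my) ** 2 for y in cop_y) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(cop_x, cop_y)) / (n - 1)
    det = sxx * syy - sxy ** 2
    return math.pi * chi2_95 * math.sqrt(max(det, 0.0))

def ap_path_length(cop_ap):
    """Total antero-posterior CoP displacement: sum of successive excursions."""
    return sum(abs(b - a) for a, b in zip(cop_ap, cop_ap[1:]))

# Hypothetical CoP samples (cm): medio-lateral (x) and antero-posterior (y)
area = cop_ellipse_area([0.0, 1.0, 0.0, -1.0], [1.0, 0.0, -1.0, 0.0])
ap = ap_path_length([1.0, 0.0, -1.0, 0.0])
```

An increase in either index during observation of the unstable display, relative to a neutral baseline, is the signature of the on-line postural imitation described above.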

All these works gave the opportunity to quantitatively appreciate the effects of off- and on-line imitation triggered by the observation of artificial stimuli or human companions. But what about robot observation? Does the observation of a robotic agent induce similar automatic imitation effects in the human observer? Both concurrent and sequential human-robot joint actions might answer these questions. For instance, one could imagine the human facing a robot while it/she/he is performing different kinds of task, such as pointing toward a target or moving an object. The concurrent or the immediately following execution of the human’s movement would allow verifying the occurrence of automatic imitation, and thus of motor resonance mechanisms. In such a context, motion capture techniques are the appropriate methodology to describe how the HRI evolves at the behavioral level. In particular, the analysis of robot and human movement kinematics would help to quantify the potential influence exerted by the robotic agent on human action, in both its priming and contagion components.
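The contagion component just mentioned can be reduced to a simple index. As a purely illustrative sketch (the index definition and the velocity values below are our own assumptions, not a measure taken from the studies reviewed), one could express contagion as the fraction of the gap between a participant’s baseline movement velocity and the robot’s observed velocity that is absorbed by the executed movement:

```python
def contagion_index(v_baseline, v_observed, v_executed):
    """Fraction of the baseline-to-observed velocity gap absorbed by the
    executed movement: 0 = no contagion, 1 = full velocity imitation."""
    gap = v_observed - v_baseline
    if gap == 0:
        return 0.0  # observed velocity matches baseline: nothing to absorb
    return (v_executed - v_baseline) / gap

# Hypothetical peak velocities (cm/s): baseline 60, robot moves at 90,
# participant then moves at 72, i.e. 40% of the gap is absorbed.
idx = contagion_index(60.0, 90.0, 72.0)
```

An index near zero after robot observation, but not after human observation, would indicate that the robot’s kinematics fail to engage the resonance mechanism; comparable indices in the two conditions would indicate the opposite.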

Starting from the well-known “chameleon effect” (i.e., the “nonconscious mimicry of the postures, mannerisms, facial expressions, and other behaviors of one’s interaction partners, such that one’s behavior passively and unintentionally changes to match that of others in one’s current social environment” [54]), and also following the discovery of MNs, imitation was proposed as an important aspect of human behavior, facilitating social interactions [95]. In particular, the automatic side of imitation represents a powerful tool to assess how humans perceive and interact with external agents, largely free of cognitively applied preconceptions. Indeed, this experimental paradigm makes it possible to describe quantitatively whether and how human actions adapt in the presence of robotic agents, gaining insight into this new form of communication.


The need to produce humanoid robots capable of establishing natural interactions with humans is becoming more and more relevant as the development of humanoid robotic platforms progresses. This requirement implies the necessity of designing not only new controllers for robot behavior, but also new evaluation methods to assess quantitatively how the robot is perceived by its human counterpart. One promising technique could rely on the assessment of motor resonance [14]. Indeed, it does not require a cognitive evaluation of the shape or the behavior of the robot, but directly measures the natural, unconscious effects of the observation of robotic actions. Over the last decade this approach has been applied by means of neuroimaging and neurophysiological studies [6, 33, 34, 40], which however tend to be quite invasive for subjects, and with behavioral experiments focusing on motor priming [45, 46, 49, 50, 52, 53] or motor interference [5, 32, 41]. We suggest two instruments already applied to the study of HHI as convenient behavioral measures of motor resonance: the monitoring of predictive gaze behavior, and a global concept of automatic imitation, in which both priming and “imitation per se” effects contribute to quantitatively describing HRI. These methodologies have several advantages. First, they are not invasive and therefore easily applicable. Second, they guarantee the naturalness of the relationship between humans and robots. Indeed, because these techniques do not require a restricted space (as in the case of neuroimaging investigations), they allow direct interaction with the robotic platforms, avoiding the use of videos, which could induce a sort of “virtual relationship” between the agents. Finally, since these mechanisms have been proposed to be mediated by MNS activity [60, 75], the appearance of proactive gaze and automatic imitation behaviors seems to indicate that we are mapping the action of someone we are observing onto our own motor repertoire.

Of course, several other cognitive processes besides MNS activity might be involved during action observation and interaction, and many factors can affect robot perception, including attention, emotional state, previous experience, and cultural background. However, we believe that the analysis of the resonance phenomenon, which is thought to play such a basic role in human interactions, could represent an important source of information on the unconscious perception of robot behavior, information that would be difficult to obtain by other means.

Therefore, only the combination of these quantitative measures with physiological [10–12] and qualitative information [9] would provide a comprehensive description of HRI. In this way, the outcome of the interaction could be approached both through the conscious judgment provided by the human agent and through the quantification of their automatic response. Altogether, these data would help to understand whether humans recognize the robotic agent as a conspecific, or at least as an individual who could share our goals and actions, rather than as an object.

As a result, these techniques represent an innovative test of humanoid robots’ basic predisposition to interaction, useful both to provide guidelines on how to build robots and to shed further light on how humanoid robots are perceived by the human brain.


References

1. Gates B (2007) A robot in every home. Sci Am 296(1):58–65
2. Bicchi A, Peshkin M, Colgate JE (2008) Safety for physical human-robot interaction. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, pp 1335–1348
3. Haegele M, Nilsson K, Pires JN (2008) Industrial robotics. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, pp 963–985
4. Taylor RH, Menciassi A, Fichtinger G, Dario P (2008) Medical robotics and systems. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, pp 1199–1222
5. Kilner JM, Paulignan Y, Blakemore SJ (2003) An interference effect of observed biological movement on action. Curr Biol 13(6):522–525
6. Gazzola V, Rizzolatti G, Wicker B, Keysers C (2007) The anthropomorphic brain: the mirror neuron system responds to human and robotic actions. NeuroImage 35(4):1674–1684
7. Mori M (1970) Bukimi no tani (The uncanny valley). Energy 7(4):33–35
8. DiSalvo CF, Gemperle F, Forlizzi J, Kiesler S (2002) All robots are not equal: the design and perception of humanoid robot heads. In: DIS2002, proceedings of the 4th conference on designing interactive systems: processes, practices, methods, and techniques
9. Bartneck C, Kulic D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1:71–81
10. Dehais F, Sisbot EA, Alami R, Causse M (2011) Physiological and subjective evaluation of a human-robot object hand-over task. Appl Ergon 42(6):785–791. doi:10.1016/j.apergo.2010.12.005
11. Wada K, Shibata T, Musha T, Kimura S (2005) Effects of robot therapy for demented patients evaluated by EEG. In: Proc IEEE/RSJ int conf intelligent robots and systems (IROS), pp 1552–1557
12. Rani P, Sims J, Brackin R, Sarkar N (2002) Online stress detection using psychophysiological signal for implicit human-robot cooperation. Robotica 20(6):673–686
13. Rizzolatti G, Fadiga L, Fogassi L, Gallese V (1999) Resonance behaviors and mirror neurons. Arch Ital Biol 137(2–3):85–100
14. Chaminade T, Cheng G (2009) Social cognitive neuroscience and humanoid robotics. J Physiol (Paris) 103(3–5):286–295
15. Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27:169–192
16. Iacoboni M (2009) Imitation, empathy, and mirror neurons. Annu Rev Psychol 60:653–670
17. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G (1992) Understanding motor events: a neurophysiological study. Exp Brain Res 91(1):176–180
18. Gallese V, Fadiga L, Fogassi L, Rizzolatti G (1996) Action recognition in the premotor cortex. Brain 119(Pt 2):593–609
19. Rizzolatti G, Fadiga L, Gallese V, Fogassi L (1996) Premotor cortex and the recognition of motor actions. Cogn Brain Res 3(2):131–141
20. Rizzolatti G, Fogassi L, Gallese V (2001) Neurophysiological mechanisms underlying the understanding and imitation of action. Nat Rev Neurosci 2(9):661–670
21. Fadiga L, Fogassi L, Pavesi G, Rizzolatti G (1995) Motor facilitation during action observation: a magnetic stimulation study. J Neurophysiol 73(6):2608–2611
22. Rizzolatti G, Fabbri-Destro M, Cattaneo L (2009) Mirror neurons and their clinical relevance. Nat Clin Pract Neurol 5(1):24–34
23. Mukamel R, Ekstrom AD, Kaplan J, Iacoboni M, Fried I (2010) Single-neuron responses in humans during execution and observation of actions. Curr Biol 20(8):750–756. doi:10.1016/j.cub.2010.02.045
24. Keysers C, Gazzola V (2010) Social neuroscience: mirror neurons recorded in humans. Curr Biol 20(8):R353–R354. doi:10.1016/j.cub.2010.03.013
25. Buccino G, Binkofski F, Fink GR, Fadiga L, Fogassi L, Gallese V, Seitz RJ, Zilles K, Rizzolatti G, Freund HJ (2001) Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. Eur J Neurosci 13(2):400–404
26. Iacoboni M, Dapretto M (2006) The mirror neuron system and the consequences of its dysfunction. Nat Rev Neurosci 7(12):942–951
27. Rizzolatti G, Fogassi L, Gallese V (2002) Motor and cognitive functions of the ventral premotor cortex. Curr Opin Neurobiol 12(2):149–154
28. Gallese V, Keysers C, Rizzolatti G (2004) A unifying view of the basis of social cognition. Trends Cogn Sci 8(9):396–403
29. Singer T (2006) The neuronal basis and ontogeny of empathy and mind reading: review of literature and implications for future research. Neurosci Biobehav Rev 30(6):855–863
30. Gallese V, Goldman A (1998) Mirror neurons and the simulation theory of mind-reading. Trends Cogn Sci 2(12):493–501
31. Fadiga L, Craighero L, Buccino G, Rizzolatti G (2002) Speech listening specifically modulates the excitability of tongue muscles: a TMS study. Eur J Neurosci 15(2):399–402
32. Oztop E, Franklin D, Chaminade T, Cheng G (2005) Human-humanoid interaction: is a humanoid robot perceived as a human? Int J Humanoid Robot 2:537–559
33. Oberman LM, McCleery JP, Ramachandran VS, Pineda JA (2007) EEG evidence for mirror neuron activity during the observation of human and robot actions: toward an analysis of the human qualities of interactive robots. Neurocomputing 70:2194–2203
34. Tai YF, Scherfler C, Brooks DJ, Sawamoto N, Castiello U (2004) The human premotor cortex is ‘mirror’ only for biological actions. Curr Biol 14(2):117–120
35. Umiltà MA, Escola L, Intskirveli I, Grammont F, Rochat M, Caruana F, Jezzini A, Gallese V, Rizzolatti G (2008) When pliers become fingers in the monkey motor system. Proc Natl Acad Sci USA 105(6):2209–2213
36. Rochat MJ, Caruana F, Jezzini A, Escola L, Intskirveli I, Grammont F, Gallese V, Rizzolatti G, Umiltà MA (2010) Responses of mirror neurons in area F5 to hand and tool grasping observation. Exp Brain Res 204(4):605–616
37. Perani D, Fazio F, Borghese NA, Tettamanti M, Ferrari S, Decety J, Gilardi MC (2001) Different brain correlates for watching real and virtual hand actions. NeuroImage 14(3):749–758
38. Chaminade T, Zecca M, Blakemore S-J, Takanishi A, Frith CD, Micera S, Dario P, Rizzolatti G, Gallese V, Umiltà MA (2010) Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS ONE 5(7):e11577
39. Pineda JA (2005) The functional significance of mu rhythms: translating “seeing” and “hearing” into “doing”. Brain Res Rev 50(1):57–68
40. Shimada S (2010) Deactivation in the sensorimotor area during observation of a human agent performing robotic actions. Brain Cogn 72(3):394–399
41. Chaminade T, Franklin D, Oztop E, Cheng G (2005) Motor interference between humans and humanoid robots: effect of biological and artificial motion. In: International conference on development and learning
42. Atkeson CG, Hale JG, Pollick FE, Riley M, Kotosaka S, Schaal S, Shibata T, Tevatia G, Ude A, Vijayakumar S, Kawato M (2000) Using humanoid robots to study human behavior. IEEE Intell Syst, pp 46–56
43. Kilner J, de C Hamilton AF, Blakemore S-J (2007) Interference effect of observed human movement on action is due to velocity profile of biological motion. Soc Neurosci 2(3–4):158–166
44. Craighero L, Fadiga L, Umiltà CA, Rizzolatti G (1996) Evidence for visuomotor priming effect. NeuroReport 8(1):347–349
45. Press C, Bird G, Flach R, Heyes C (2005) Robotic movement elicits automatic imitation. Cogn Brain Res 25(3):632–640
46. Press C, Gillmeister H, Heyes C (2007) Sensorimotor experience enhances automatic imitation of robotic action. Proc Biol Sci 274(1625):2509–2514
47. Bird G, Leighton J, Press C, Heyes C (2007) Intact automatic imitation of human and robot actions in autism spectrum disorders. Proc Biol Sci 274(1628):3027–3031
48. Pierno AC, Mari M, Lusher D, Castiello U (2008) Robotic movement elicits visuomotor priming in children with autism. Neuropsychologia 46(2):448–454
49. Liepelt R, Brass M (2010) Top-down modulation of motor priming by belief about animacy. Exp Psychol 57(3):221–227
50. Liepelt R, Prinz W, Brass M (2010) When do we simulate non-human agents? Dissociating communicative and non-communicative actions. Cognition 115(3):426–434
51. Brass M, Bekkering H, Wohlschlaeger A, Prinz W (2000) Compatibility between observed and executed finger movements: comparing symbolic, spatial, and imitative cues. Brain Cogn 44(2):124–143
52. Longo MR, Bertenthal BI (2009) Attention modulates the specificity of automatic imitation to human actors. Exp Brain Res 192(4):739–744
53. Stanley J, Gowen E, Miall RC (2007) Effects of agency on movement interference during observation of a moving dot stimulus. J Exp Psychol Hum Percept Perform 33(4):915–926
54. Chartrand TL, Bargh JA (1999) The chameleon effect: the perception-behavior link and social interaction. J Pers Soc Psychol 76(6):893–910
55. Lakin JL, Jefferis VE, Cheng CM, Chartrand TL (2003) The chameleon effect as social glue: evidence for the evolutionary significance of nonconscious mimicry. J Nonverbal Behav 27:145–162
56. van Baaren RB, Holland RW, Kawakami K, van Knippenberg A (2004) Mimicry and prosocial behavior. Psychol Sci 15(1):71–74
57. Knoblich G, Butterfill S, Sebanz N (2011) Psychological research on joint action: theory and data. In: Ross B (ed) The psychology of learning and motivation, vol 54. Academic Press, Burlington, pp 59–101
58. Frischen A, Bayliss AP, Tipper SP (2007) Gaze cueing of attention: visual attention, social cognition, and individual differences. Psychol Bull 133(4):694–724
59. Johansson RS, Westling G, Bäckström A, Flanagan JR (2001) Eye-hand coordination in object manipulation. J Neurosci 21(17):6917–6932
60. Flanagan JR, Johansson RS (2003) Action plans used in action observation. Nature 424(6950):769–771
61. Falck-Ytter T, Gredebäck G, von Hofsten C (2006) Infants predict other people’s action goals. Nat Neurosci 9(7):878–879
62. Rosander K, von Hofsten C (2004) Infants’ emerging ability to represent occluded object motion. Cognition 91(1):1–22
63. Rotman G, Troje NF, Johansson RS, Flanagan JR (2006) Eye movements when observing predictable and unpredictable actions. J Neurophysiol 96(3):1358–1369
64. Webb A, Knott A, Macaskill MR (2010) Eye movements during transitive action observation have sequential structure. Acta Psychol 133(1):51–56
65. Gesierich B, Bruzzo A, Ottoboni G, Finos L (2008) Human gaze behaviour during action execution and observation. Acta Psychol 128(2):324–330
66. Eshuis R, Coventry KR, Vulchanova M (2009) Predictive eye movements are driven by goals, not by the mirror neuron system. Psychol Sci 20(4):438–440
67. Gredebäck G, Melinder A (2010) Infants’ understanding of everyday social interactions: a dual process account. Cognition 114(2):197–206
68. Gredebäck G, Kochukhova O (2010) Goal anticipation during action observation is influenced by synonymous action capabilities, a puzzling developmental study. Exp Brain Res 202(2):493–497
69. Rochat MJ, Serra E, Fadiga L, Gallese V (2008) The evolution of social cognition: goal familiarity shapes monkeys’ action understanding. Curr Biol 18(3):227–232
70. Sciutti A, Nori F, Jacono M, Metta G, Sandini G, Fadiga L (2011) Proactive gaze behavior: which observed action features do influence the way we move our eyes? J Vis 11(11):509. doi:10.1167/11.11.509
71. Sciutti A, Nori F, Jacono M, Metta G, Sandini G, Fadiga L (2011) Human and robotic goal-oriented actions evoke motor resonance: a gaze behavior study. Program No. YY6 832.23. Neuroscience Meeting Planner. Society for Neuroscience, Washington, DC. Online
72. Sciutti A, Bisio A, Nori F, Metta G, Fadiga L, Sandini G (2012) Anticipatory gaze in human-robot interactions. In: “Gaze in HRI from modeling to communication” workshop at the 7th ACM/IEEE international conference on human-robot interaction, Boston, MA, USA. Online
73. Meltzoff AN, Moore MK (1977) Imitation of facial and manual gestures by human neonates. Science 198(4312):75–78
74. Wohlschlaeger A, Gattis M, Bekkering H (2003) Action generation and action perception in imitation: an instance of the ideomotor principle. Philos Trans R Soc Lond B Biol Sci 358(1431):501–515
75. Heyes C (2011) Automatic imitation. Psychol Bull 137(3):463–483
76. Blakemore S-J, Frith C (2005) The role of motor contagion in the prediction of action. Neuropsychologia 43(2):260–267
77. Marshall PJ, Bouquet CA, Thomas AL, Shipley TF (2010) Motor contagion in young children: exploring social influences on perception-action coupling. Neural Netw 23(8–9):1017–1025
78. Bouquet CA, Shipley TF, Capa RL, Marshall PJ (2011) Motor contagion: goal-directed actions are more contagious than non-goal-directed actions. Exp Psychol 58(1):71–78
79. Spengler S, Brass M, Kühn S, Schütz-Bosbach S (2010) Minimizing motor mimicry by myself: self-focus enhances online action-control mechanisms during motor contagion. Conscious Cogn 19(1):98–106
80. Castiello U, Paine M, Wales R (2002) Perceiving an entire object and grasping only half of it. Neuropsychologia 40(2):145–151
81. Edwards MG, Humphreys GW, Castiello U (2003) Motor facilitation following action observation: a behavioural study in prehensile action. Brain Cogn 53(3):495–502
82. Brass M, Bekkering H, Prinz W (2001) Movement observation affects movement execution in a simple response task. Acta Psychol 106(1–2):3–22
83. Catmur C, Walsh V, Heyes C (2009) Associative sequence learning: the role of experience in the development of imitation and the mirror system. Philos Trans R Soc Lond B Biol Sci 364(1528):2369–2380
84. Gillmeister H, Catmur C, Liepelt R, Brass M, Heyes C (2008) Experience-based priming of body parts: a study of action imitation. Brain Res 1217:157–170
85. Heyes C, Bird G, Johnson H, Haggard P (2005) Experience modulates automatic imitation. Cogn Brain Res 22(2):233–240
86. Leighton J, Bird G, Heyes C (2010) ‘Goals’ are not an integral component of imitation. Cognition 114(3):423–435
87. Catmur C, Heyes C (2011) Time course analyses confirm independence of imitative and spatial compatibility. J Exp Psychol Hum Percept Perform 37(2):409–421
88. Bekkering H, Wohlschlaeger A, Gattis M (2000) Imitation of gestures in children is goal-directed. Q J Exp Psychol A 53(1):153–164
89. Erlhagen W, Mukovskiy A, Bicho E (2006) A dynamic model for action understanding and goal-directed imitation. Brain Res 1083(1):174–188
90. Bove M, Tacchino A, Pelosin E, Moisello C, Abbruzzese G, Ghilardi MF (2009) Spontaneous movement tempo is influenced by observation of rhythmical actions. Brain Res Bull 80(3):122–127
91. Bisio A, Stucchi N, Jacono M, Fadiga L, Pozzo T (2010) Automatic versus voluntary motor imitation: effect of visual context and stimulus velocity. PLoS ONE 5(10):e13506
92. Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, Rizzolatti G (1999) Cortical mechanisms of human imitation. Science 286(5449):2526–2528
93. Tia B, Saimpont A, Paizis C, Mourey F, Fadiga L, Pozzo T (2011) Does observation of postural imbalance induce a postural reaction? PLoS ONE 6(3):e17799
94. Johansson G (1973) Visual perception of biological motion and a model for its analysis. Percept Psychophys 14:201–211
95. Iacoboni M (2008) Mirroring people: the new science of how we connect with others. Farrar, Straus & Giroux, New York

Acknowledgements


This study was supported by the ITALK project (EU ICT Cognitive systems & robotics integrating project, grant 214668).

Open Access

This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Corresponding author

Correspondence to Ambra Bisio.

Additional information

A. Sciutti and A. Bisio contributed equally to this work.



Sciutti, A., Bisio, A., Nori, F. et al. Measuring Human-Robot Interaction Through Motor Resonance. Int J of Soc Robotics 4, 223–234 (2012).



  • Social interactions
  • Proactive gaze
  • Automatic imitation
  • Humanoid robots