Introduction

In this study, we examined the spatial dependency of action simulation by measuring participants’ engagement in motor imagery. Using two mental rotation tasks, we studied the spatial dependency of effector-specific and object-oriented action simulation by presenting stimuli either near to or far away from the participant. The space immediately surrounding the body is often referred to as the peripersonal space (Rizzolatti et al. 1997). Objects within this peripersonal space (PPS) can be reached, grasped, and manipulated (Holmes and Spence 2004). Objects situated beyond this space, termed the extrapersonal space (EPS), cannot be grasped without moving oneself or the object. According to Gallese (2005), objects presented in PPS, but not those in EPS, are automatically mapped onto an egocentric frame of reference via action simulation (Graziano 1999; Farne et al. 2000; Gallese 2005, 2007). The presence of this action simulation itself, however, has never been tested empirically in a direct manner.

Beyond these suggested properties of the PPS at the phenomenological level, the PPS has been shown to be multimodal in nature (Graziano 1999; Maravita et al. 2003) and neurally dissociable from the EPS in both primates (Rizzolatti et al. 1981a, b; Fogassi et al. 1996, 1999; Graziano et al. 1994, 1997; Murata et al. 1997; Duhamel et al. 1998; Caggiano et al. 2009) and humans (di Pellegrino et al. 1997; Mattingley et al. 1997; Ladavas et al. 1998a, b; Pavani et al. 2000; Makin et al. 2007; Gallivan et al. 2009). Objects observed within PPS are typically mapped in motor terms, i.e., related to an egocentric frame of reference (Graziano 1999; Makin et al. 2007). Furthermore, Costantini et al. (2010) showed that affordances rely not only on the action possibilities of grasping or using an object, but also on the object being within reach. These findings point to the automatic simulation of an action toward an observed object when it is located within PPS. Moreover, the ability to simulate the sensory consequences of potential movements has been shown to be crucial for action simulation (Coello and Delevoye-Turrell 2007).

In 2005, Gallese formulated the action simulation hypothesis, stating that observed objects within PPS are automatically mapped onto an egocentric frame of reference by action simulation (Gallese 2005, 2007; Gallese and Lakoff 2005; Knox 2009). This hypothesis was based, among other findings, on those of Graziano (1999), who showed an egocentric mapping of stimuli observed near a primate’s arm, and on the similar activation patterns of the human ventral premotor cortex during observation, naming, and imagined use of objects (Grafton et al. 1996; Chao and Martin 2000). According to Gallese (2007), the perception of an object within reach automatically triggers a “plan” to act, that is, a simulated potential action. This implicitly induced simulated action would then, in turn, represent the observed object in motor terms, thereby mapping the object onto an egocentric frame of reference (Gallese and Lakoff 2005; Gallese 2007).

Still, the findings supporting the action simulation hypothesis to date do not provide direct empirical evidence for the implicit use of action simulation. That is, despite important findings on the differential firing of visuomotor neurons and on the affordances elicited by objects situated in PPS, no study has focused on behavioral performance that is inherently related to the use of action simulation. In true action simulation, the imagined movement must obey the same biomechanical constraints as the overt movement (Jeannerod 2006). Exploiting this property, the simulation of actions can be studied directly by testing the influence of biomechanical constraints on performance.

A well-established experimental paradigm for studying the possible influence of biomechanical constraints is the mental rotation task with hands or graspable objects (Parsons 1994; de Lange et al. 2008b; ter Horst et al. 2010). In the mental rotation task with hands, participants have to judge the laterality of a presented picture of a rotated hand. The time needed to respond typically increases with increasing angle of rotation (Sekiyama 1982) and is analogous to the time needed to move one’s own hand into the position of the presented hand (Parsons 1987). These features indicate that the mental rotation of one’s own hands is subject to the same biomechanical constraints as overt movement (Parsons 1994). This influence can be found in reaction time differences between laterally and medially rotated hand stimuli. That is, laterally rotated hands are rotated away from the body’s midsagittal plane and result in prolonged RTs compared to medially rotated stimuli (rotated toward the midsagittal plane), as laterally rotating one’s arm is more difficult (Parsons 1994; ter Horst et al. 2010). Besides biomechanical constraints, one’s posture also influences performance on the hand laterality judgment task (de Lange et al. 2005, 2006; Ionta et al. 2007). Ionta et al. (2007) showed that holding one’s hands behind the back decreases performance compared to keeping both hands on the lap. These biomechanical and postural influences point to the use of an underlying embodied process denoted as Motor Imagery (MI) (Ionta et al. 2007).

MI is defined as a process in which participants mentally simulate a movement from a first person perspective, without overtly performing the movement and without the sensory feedback that overt movement would produce (Decety 1996a, b). Moreover, MI has been shown to be a form of action simulation (Currie and Ravenscroft 1997). This fits well with the simulation theory, which states that covert actions are neurally simulated actions and that all aspects of the action are involved in the simulation process, except for the execution of the movement itself (Jeannerod 2001, 2006).

In the present study, we addressed the question of whether action simulation, i.e., MI, during object observation exhibits spatial dependency. Specifically, we aimed to test whether the engagement in MI is enhanced for stimuli presented in the PPS compared to stimuli presented in the EPS, in accordance with the action simulation hypothesis (Gallese and Lakoff 2005). To this end, we conducted two experiments that addressed two consecutive questions. In the first experiment, we tested the spatial dependency of the automatic action simulation of the effector itself. In the second experiment, we tested whether the hypothesized automatically simulated movement of the effector toward an observed object, induced by mere passive observation of the object, exhibits spatial dependency. The two experiments are complementary: experiment 1 focuses on the simulation of motor acts of the effector, whereas experiment 2 focuses on the object-effector interaction. In experiment 1, we used a hand laterality judgment task. Presenting rotated hands typically induces the use of MI to solve the task, even when the hands are presented about 60 cm away from the participant (Parsons 1994; Shenton et al. 2004; Lust et al. 2006; Ionta and Blanke 2009; Ionta et al. 2007; ter Horst et al. 2010). To demonstrate a differential engagement in MI for hand stimuli presented in the PPS compared to the EPS, we needed a set of stimuli that typically does not induce MI when presented in the EPS. We therefore used a stimulus set containing only back view hand stimuli, which were recently shown not to induce the use of MI when presented at a distance of 60 cm, in contrast to stimulus sets combining back and palm view hand stimuli (ter Horst et al. 2010). We expected to replicate the findings of ter Horst et al. (2010), that is, a lack of engagement in MI for back view stimuli presented in the EPS. In contrast, for stimuli presented within PPS, we expected participants to use MI. In experiment 2, we used an experimental design identical to that of experiment 1, but replaced the hand stimuli with stimuli of graspable objects (i.e., cups). Participants were required to judge the laterality of the displayed cups. We hypothesized that the observation of graspable objects within PPS, but not EPS, automatically induces the use of MI. This expectation is in line with the action simulation hypothesis and would provide direct empirical evidence for an automatic coding of observed objects within PPS in motor terms. In sum, we hypothesized a facilitated use of MI for hand and cup stimuli presented in PPS compared to EPS. This hypothesis is confirmed if biomechanical constraints influence performance for stimuli presented within PPS but not for stimuli presented in the EPS.

Experiment 1

Participants

In total, 21 healthy right-handed participants were included in the present study (16 women, age 20.5 ± 3.0 years, mean ± SD). Two participants were excluded from analysis due to an error percentage of more than 15%. All participants had normal or corrected-to-normal vision. No participant had a history of neurological or psychiatric disorder. The study was approved by the local ethics committee and all participants gave written informed consent prior to the experiment, in accordance with the Helsinki declaration.

Stimuli

Stimuli were derived from a 3D hand model designed with a 3D image software package (Autodesk Maya 2009, USA). The stimulus set consisted of back view left and right hand stimuli rotated over six different angles, from 0° to 360° in steps of 60°. The left and right hand stimuli were mirror images of each other, but otherwise identical (Fig. 1). Stimuli were projected on a flat surface of 100 cm by 80 cm by a projector (Sharp NoteVision) with a resolution of 1,024 × 768 pixels at 70 Hz. Stimulus size was 320 × 256 pixels (i.e., 31.25 by 25 cm). The size of the presented hands was realistic, approximately 20 cm by 12 cm. All stimuli were repeated 16 times, resulting in a grand total of 384 stimuli (16 repetitions × 6 angles × 2 sides × 2 locations). Prior to the experiment, a practice run of 24 stimuli was administered to familiarize the participants with the task.

Fig. 1 Shown are all used hand stimuli. Angles represent in-plane angular disparity

Experimental procedure

Participants were seated in a chair positioned in front of the table. Stimulus presentation was controlled using custom-developed software in Presentation (Neurobehavioral Systems, Albany, USA). Prior to the stimulus, a fixation cross was presented at the center of the table, in between the two possible stimulus locations, for a variable duration between 800 and 1,200 ms. Participants were instructed to focus on the fixation cross. Subsequently, the stimulus was presented and remained visible until a response was given. Participants responded by pressing the left button with their left hand for left hand stimuli and vice versa. After the response, a black screen was displayed for 1,000 ms. Participants were instructed to judge the laterality of the hand as quickly and as accurately as possible, without explicit instructions on how to solve the task.
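
The trial timeline described above was implemented in Presentation. Purely as an illustration, a minimal sketch of a single trial in PsychoPy (a different, freely available package, not the software used in the study) could look as follows; the image file name, response keys, and window settings are hypothetical, and only the timing parameters are taken from the text.

```python
# Illustrative single-trial sketch in PsychoPy; the study itself used Presentation
# (Neurobehavioral Systems). File name, keys, and window settings are hypothetical.
import random
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), units='pix', color='black')
fixation = visual.TextStim(win, text='+', height=30)
stimulus = visual.ImageStim(win, image='left_hand_060.png', size=(320, 256))
clock = core.Clock()

# Fixation cross for a jittered 800-1,200 ms interval
fixation.draw()
win.flip()
core.wait(random.uniform(0.8, 1.2))

# Stimulus remains visible until a left- or right-hand response is given
stimulus.draw()
win.flip()
clock.reset()
key, rt = event.waitKeys(keyList=['lshift', 'rshift'], timeStamped=clock)[0]

# Blank (black) screen for 1,000 ms before the next trial
win.flip()
core.wait(1.0)
win.close()
```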

The participants positioned their hands on the table surface with the palms oriented downward, approximately 30 cm in front of their body. Both of the participant’s hands were occluded from view by a black cloth. The stimuli were presented at two locations, namely in between the participants’ hands, referred to as “Near,” and 60 cm in front of the participants’ hands, referred to as “Far” (i.e., 90 cm in front of the participant’s body), see Fig. 2. This resulted in different visual angles for stimuli at the “Near” (~25°) and “Far” (~4°) location. Stimuli at both locations had equal physical size. Stimuli were presented one at a time at only one of the two locations. All stimuli were presented in eight sequential blocks of 48 stimuli each, with breaks in between. The order of stimulus locations was randomized within each block.

Fig. 2 Experimental set-up. Hand stimuli are presented one at a time at the Near or Far location. Participants were seated with their hands on either side of the stimulus presented at the Near location. During the experiment, the participants’ hands were occluded by a cloth

Data analysis

Reaction times shorter than 500 ms and longer than 3,500 ms were excluded from analysis (total loss 4.7% of all trials). These upper and lower boundaries are based on similar studies using a hand laterality judgment task (Sekiyama 1987; Parsons 1994; Ionta et al. 2007; Iseki et al. 2008). Analyses were performed on correct responses; incorrect responses were a “left” response to a “right” hand and vice versa. We expected to find an influence of biomechanical constraints, indicating the use of MI. This influence can be observed in RT differences between laterally and medially rotated hand stimuli (Parsons 1987, 1994; de Lange et al. 2008b; ter Horst et al. 2010), referred to as Direction Of Rotation (DOR). Medially rotated hand stimuli consisted of right hands rotated 240° and 300° and left hands rotated 60° and 120°. Laterally rotated stimuli consisted of right hands rotated 60° and 120° and left hands rotated 240° and 300°. Data analysis was performed using repeated measures analysis of variance (ANOVA).
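
As an illustration of the trial selection and DOR coding described above, a minimal sketch in Python/pandas might look as follows; the file and column names are hypothetical, and the coding follows the lateral/medial definitions given in this paragraph.

```python
# Sketch of trial selection and DOR coding for experiment 1 (hypothetical column names).
import pandas as pd

# One row per trial: subject, location (Near/Far), side (left/right), angle (deg), rt (ms), correct (0/1)
df = pd.read_csv('exp1_trials.csv')

# Exclude RTs outside the 500-3,500 ms window and keep correct responses only
df = df[(df['rt'] >= 500) & (df['rt'] <= 3500) & (df['correct'] == 1)]

def code_dor(row):
    # Medial: right hand at 240/300 deg or left hand at 60/120 deg; lateral: the reverse.
    if row['angle'] in (240, 300):
        return 'medial' if row['side'] == 'right' else 'lateral'
    if row['angle'] in (60, 120):
        return 'medial' if row['side'] == 'left' else 'lateral'
    return 'none'  # 0 and 180 deg are neither laterally nor medially rotated

df['dor'] = df.apply(code_dor, axis=1)
```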

In order to test whether participants mentally rotated the stimuli, we conducted a repeated measures ANOVA with the following design: two within-subjects factors (Location, Angle); with two levels for Location (Near, Far) and four levels for Angle (0°, 60°, 120°, and 180°). The values labeled 60° and 120° are the averaged RTs of 60° and 300°, and 120° and 240° rotated stimuli, respectively. A significant effect of Angle, accounted for by increasing RTs with increasing angles of rotation, would indicate that participants mentally rotated the hand stimuli (Shepard and Metzler 1971; Sekiyama 1982, 1987; Parsons 1994; Kosslyn et al. 1998; Ionta et al. 2007; ter Horst et al. 2010).

To test our hypothesis of a facilitated engagement in MI for stimuli presented at the “Near” location compared to stimuli presented at the “Far” location, we conducted a repeated measures ANOVA that assessed the engagement in MI via the influence of biomechanical constraints. This influence would be evident from a significant DOR effect. This ANOVA had two within-subject factors (Location, DOR), with two levels for Location (Near, Far) and two levels for DOR (Lateral, Medial). The rationale for using two separate ANOVAs is the exclusion of the 0° and 180° stimuli when testing the DOR effect, as these stimuli are neither laterally nor medially rotated. Excluding these two rotational angles precludes a valid test of the typical Angle effect obtained in mental rotation tasks. The latter ANOVA design was also used to analyze the accuracy data. Post hoc analyses were Bonferroni corrected and the alpha level was set at P = 0.05.
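
Assuming the trial-level data frame from the previous sketch, the two repeated measures designs could be set up along the following lines with statsmodels; this is a sketch of the designs only, not the authors’ actual analysis pipeline, which is not reported.

```python
# Sketch of the two repeated measures ANOVAs for experiment 1, assuming the
# data frame 'df' from the previous sketch.
from statsmodels.stats.anova import AnovaRM

# Collapse angular disparity around 180 deg (300 -> 60, 240 -> 120), as described above
df['angle_c'] = df['angle'].replace({300: 60, 240: 120})

# ANOVA 1: Location (Near, Far) x Angle (0, 60, 120, 180) on per-subject mean RTs
angle_means = df.groupby(['subject', 'location', 'angle_c'], as_index=False)['rt'].mean()
print(AnovaRM(angle_means, depvar='rt', subject='subject',
              within=['location', 'angle_c']).fit())

# ANOVA 2: Location (Near, Far) x DOR (lateral, medial), excluding the 0/180 deg stimuli
dor_means = (df[df['dor'] != 'none']
             .groupby(['subject', 'location', 'dor'], as_index=False)['rt'].mean())
print(AnovaRM(dor_means, depvar='rt', subject='subject',
              within=['location', 'dor']).fit())
```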

Results experiment 1

The total number of erroneous responses (i.e., 4.4% of all trials) is comparable to previous studies (Ionta et al. 2007; ter Horst et al. 2010). The ANOVA on the accuracy data revealed a significant DOR effect [F(1,21) = 4.404; P < .05; η2 = .173], accounted for by a larger percentage of erroneous responses for laterally compared to medially rotated stimuli. No other effects were significant.

For the correct responses, the ANOVA on RTs with the factors Location and Angle revealed a significant effect of Angle [F(3,54) = 85.217; P < .001; η2 = .826], reflecting increasing RTs with increasing angles of rotation, see Fig. 3. All angles differed significantly from each other (P < .001), except for 0° and 60°. No other effects were significant (all P > 0.25).

Fig. 3 Reaction times as a function of angular disparity in experiment 1 for both locations, mirrored at 180° (i.e., 60° and 120° represent the average RT for 60° and 300°, and 120° and 240° rotated hand stimuli, respectively). Error bars indicate the standard error of the mean (SEM)

The ANOVA on the influence of biomechanical constraints (i.e., lateral vs. medial rotation) revealed a significant main effect of DOR [F(1,18) = 5.117; P < .05; η2 = .221], accounted for by increased RTs for laterally (893 ms) compared to medially (856 ms) rotated stimuli. Importantly, the interaction of Location by DOR was significant [F(1,18) = 7.221; P < .02; η2 = .286], see Fig. 4. This interaction showed that the difference between lateral and medial rotations was modulated as a function of Location. The DOR effect was present at the “Near” [F(1,18) = 13.157; P < .002; η2 = .422], but not at the “Far” location (P = .432). No significant effect of Location was observed (P > 0.06).

Fig. 4 Reaction times for both locations, divided into lateral and medial rotations. Lateral rotation indicates rotations away from the mid-sagittal plane and medial rotation indicates rotations toward the mid-sagittal plane. The significant interaction of Location by DOR (P < 0.02) shows that the difference in RTs between lateral and medial rotations (i.e., the DOR effect) was modulated by the location at which the stimuli were presented. Double asterisk indicates significance at the P < 0.002 level. Error bars indicate the standard error of the mean (SEM)

Discussion experiment 1

In this first experiment, we tested the spatial dependency of simulated movements of the hand. We hypothesized that the perception of hand stimuli within PPS, but not EPS, would implicitly induce an action simulation of the effector.

Given the low error rates and the increasing RTs with increasing angles of rotation for stimuli in both PPS and EPS, we can conclude that participants used mental rotation to solve the task (Parsons 1994). Overall performance did not differ between the two locations, as shown by the non-significant Location effect in the ANOVA on angular disparity. The ANOVA on biomechanical constraints, however, did reveal a marginally significant effect of Location. These differing results arise from the exclusion of the 0° and 180° rotated stimuli in the latter ANOVA. Consequently, the marginally significant Location effect does not represent a difference in overall performance between the two locations. Importantly, we found an engagement in MI for hand stimuli presented within PPS, but not when the same stimuli were presented within EPS. This is evident from the presence of the DOR effect at the Near but not the Far location, and shows the influence of biomechanical constraints on performance for stimuli presented in PPS (Parsons 1994; ter Horst et al. 2010), see Fig. 4. These findings indicate that the engagement in MI exhibits spatial dependency. The observed effects might be attributed to the experience of moving one’s hands in the PPS, thereby triggering the use of motor-related simulations of actions. Hands observed in EPS, typically not belonging to the self, might instead encourage a third person perspective strategy for judging the hands’ laterality.

To verify whether the observed spatially dependent action simulation is also automatically triggered when a graspable object is observed within PPS, we conducted a second experiment in which we presented stimuli of graspable objects (i.e., cups) within PPS and EPS.

Experiment 2

To study the possible spatial dependency of the engagement in MI, we again focused on measuring the influence of biomechanical constraints on performance. This influence can be found in differences in the difficulty of (mentally) grasping the presented cup. For example, if the left hand is used to grasp a cup, grasping is easier when the handle of that cup is oriented toward the left than toward the right. In the second experiment, we used stimuli of rotated cups, which we defined as “left” and “right” cups. By dissociating between “left” and “right” cups, we were able to test for possible influences of biomechanical constraints. In the literature on the mental rotation of hands, it has been shown that participants make an “estimated guess” of the stimulus laterality prior to the final judgment (Parsons 1987; de Lange et al. 2008a). In other words, participants subconsciously “decide” that they observe, for example, a left hand and perform a mental rotation of their own corresponding hand to verify this decision before making the final judgment (Parsons 1994). For the second experiment, we assumed that participants would mentally grasp the observed cup with the corresponding hand in order to make the final laterality judgment, that is, mentally grasp a left or a right cup with the left or right hand, respectively. This is also in agreement with introspective reports from pilot studies in our lab, in which participants reported mentally grasping the observed cup with the corresponding hand in order to rotate the cup into its canonical position before making the final laterality judgment.

Similar to experiment 1, we hypothesized that biomechanical constraints on mentally grasping a shown cup would be observed only for stimuli presented within PPS, but not EPS. This would be evident from prolonged RTs, within PPS, for rotated cup stimuli that are more difficult to grasp with the corresponding hand compared to rotated cup stimuli that are easier to grasp. For cup stimuli presented in EPS, we expected no biomechanical effects on the RT profile.

Participants

Twenty-five healthy participants took part in this study (24 women, mean age 19.3 ± 1.9 years, mean ± SD). None of the participants had participated in the first study. One participant was excluded from analysis due to an error percentage of more than 15%. All participants had normal or corrected-to-normal vision. No participant reported a history of neurological or psychiatric disorder. The study was approved by the local ethics committee and all participants gave written informed consent prior to the experiment, in accordance with the Helsinki declaration.

Stimuli and procedure

Stimuli were derived from a 3D model designed in a 3D image software package (Autodesk Maya 2009, USA). The cup stimuli consisted of pictures of rotated left and right cups. A left cup was defined as a cup whose handle is oriented to the left when the cup is upright with its face toward the viewer, and vice versa for right cup stimuli, see Fig. 5. The cups were shown in both front view and back view. By including both views, the congruent and incongruent stimuli contained all angular disparities. Prior to the experiment, participants were familiarized with the “left” and “right” cups by showing them a real “left” and “right” cup, identical to the stimuli. Participants were not allowed to touch the cups. Participants were instructed to judge as quickly and as accurately as possible whether a left or right cup was shown by pressing a button with their left or right hand, respectively. The experimental setup of the second experiment was identical to that of the first experiment except for the stimuli used, i.e., graspable cups instead of hands.

Fig. 5 Cup stimuli as used in experiment 2. Shown are pictures of cups denoted as left and right cups depending on the direction of the handle and the view of the cup (i.e., face in front or behind). A cup with the face visible and the handle oriented to the left is a “left cup” and vice versa for “right cups.” The orientation of the handles is also shown for all cup stimuli (i.e., Congruent or Incongruent)

Data analysis

Reaction times shorter than 500 ms and longer than 3,500 ms were excluded from analysis (total loss 1.5% of all trials). Analyses were performed on correct responses; incorrect responses were a “left” response to a “right” cup and vice versa. Our analysis focused on possible RT differences between stimuli with congruently and incongruently oriented handles. Congruent stimuli consisted of left cups with the handle oriented leftward and right cups with the handle oriented rightward. Incongruent stimuli consisted of left cup stimuli with the handle oriented rightward and right cup stimuli with the handle oriented leftward, see Fig. 5. For example, a “left” cup seen from the front (i.e., face in sight) has a rightward-oriented handle when rotated 180° and is hence denoted as incongruent. Data analysis was performed using a repeated measures ANOVA with the factors Location (Near, Far), Direction of Handle (Congruent, Incongruent), and Angle (0°, 60°, 120°, 180°). This ANOVA design was also used to analyze the accuracy data. Post hoc analyses were Bonferroni corrected and the alpha level was set at P = 0.05.
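
Analogously to experiment 1, the congruency coding and the three-factor design could be sketched as follows; the file and column names are hypothetical, the displayed handle orientation is assumed to be available per trial, and the same collapsing of angular disparity around 180° as in experiment 1 is assumed.

```python
# Sketch of the congruency coding and the Location x Direction of Handle x Angle
# ANOVA for experiment 2 (hypothetical column names).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per trial: subject, location, cup_side (left/right), handle_orientation
# (left/right as displayed), angle (deg), rt (ms), correct (0/1)
df = pd.read_csv('exp2_trials.csv')
df = df[(df['rt'] >= 500) & (df['rt'] <= 3500) & (df['correct'] == 1)]

# Congruent: displayed handle orientation matches the cup's laterality
# (e.g., a left cup with a leftward-oriented handle); otherwise incongruent.
df['handle'] = (df['handle_orientation'] == df['cup_side']).map(
    {True: 'congruent', False: 'incongruent'})

# Collapse angular disparity around 180 deg, assumed as in experiment 1
df['angle_c'] = df['angle'].replace({300: 60, 240: 120})

cell_means = df.groupby(['subject', 'location', 'handle', 'angle_c'],
                        as_index=False)['rt'].mean()
print(AnovaRM(cell_means, depvar='rt', subject='subject',
              within=['location', 'handle', 'angle_c']).fit())
```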

Results experiment 2

The total number of erroneous responses was 5.0% of all trials. The ANOVA on the accuracy data did not reveal any significant effects. The ANOVA on RTs revealed significant main effects of Angle [F(3,66) = 54.851; P < 0.001; η2 = .714] and Direction of Handle [F(1,22) = 13.956; P < 0.002; η2 = .388]. The Angle effect was accounted for by an increase in RTs with increasing angles of rotation, see Fig. 6. The effect of Location and the interaction of Location by Angle did not reach significance (P > 0.89 and P > 0.33, respectively). The effect of angular disparity differed between congruent and incongruent trials [F(3,66) = 110.349; P < 0.001; η2 = .834]. Post hoc analyses revealed significant Angle effects for both Congruent [F(3,69) = 175.142; P < 0.001; η2 = .884] and Incongruent stimuli [F(3,69) = 28.843; P < 0.001; η2 = .556]. Crucially, we also obtained a significant interaction of Location by Direction of Handle [F(1,22) = 6.766; P < 0.02; η2 = .235]. Planned simple effect analyses revealed a significant effect of Direction of Handle at the “Near” location [F(1,23) = 21.189; P < 0.001; η2 = .480], but not at the “Far” location (P = 0.19). For stimuli at the “Near” location, mean RTs for Incongruent stimuli (1,206 ms) were longer than RTs for Congruent stimuli (1,064 ms). Importantly, mean RTs for stimuli at the “Far” location were virtually identical and did not differ significantly between the Incongruent (1,188 ms) and Congruent (1,151 ms) stimuli, see Fig. 7.

Fig. 6 Reaction times as a function of angular disparity in experiment 2 for both locations. Error bars indicate the standard error of the mean (SEM)

Fig. 7 Reaction times for both locations, divided into congruently and incongruently oriented handles. The significant interaction of Location by Direction of Handle (P < 0.05) shows that the difference in RTs between congruently and incongruently oriented handles (i.e., the Direction of Handle effect) was modulated by the location at which the stimuli were presented. Asterisk indicates significance at the P < 0.05 level. Error bars indicate the standard error of the mean (SEM)

Discussion experiment 2

In experiment 2, we studied the spatial dependency of action simulation for an observed object. Based on the action simulation hypothesis, we hypothesized that the object stimuli within PPS, but not EPS, would induce action simulation.

Given the low error rates and the increasing RTs with increasing angles of rotation for stimuli at both locations, we can conclude that participants indeed mentally rotated the observed objects. The results of experiment 2 show that the facilitation of the effector-specific engagement in MI for corporeal stimuli within PPS demonstrated in experiment 1 extends to the observation of graspable objects within PPS. This is evident from the observed influence of biomechanical constraints on performance for stimuli within PPS, but not within EPS. Moreover, this finding corresponds closely to the previously observed motoric mapping of objects situated within PPS, as evident from the selective firing of different visuomotor neurons to objects in the macaque area F5 (Murata et al. 1997). Collectively, these results imply that participants simulated a grasping movement toward the observed object in PPS, but not EPS.

Discussion

As a direct test of the action simulation hypothesis, we investigated the spatial dependency of the automatic action simulation toward stimuli observed in PPS. In the first experiment, we tested the spatial dependency of action simulation of the hand. In experiment 2, we studied the spatial dependency of action simulation toward an observed object. Based on the action simulation hypothesis, we hypothesized that the perception of hand stimuli (experiment 1) or object stimuli (experiment 2) within PPS, but not EPS, would implicitly induce an action simulation. In correspondence with our hypotheses, the results from both experiments show a spatial dependency of the use of MI. For both hand and cup stimuli, an action is automatically simulated when they are situated within PPS, but not when they are situated in EPS.

According to the action simulation theory of Gallese (2005), an action is automatically simulated toward an observed object. This simulation, in turn, enables the mapping of the object in motor terms, thereby mapping the object onto an egocentric frame of reference, in accordance with the simulation theory proposed by Jeannerod (2001). This is in line with the notion that observed objects elicit affordances (Gibson 1979). The simulation of an action toward an object might be regarded as the mental rehearsal of the affordances related to that object (Tipper et al. 2006). Costantini et al. (2010) showed that the affordances related to an observed object are elicited only for objects observed in PPS, but not EPS. This was evident from a compatibility effect between an instructed movement of one arm and the affordances elicited by the observed object, which was present only for objects situated within PPS. Our results extend the findings of Costantini et al. (2010) by directly showing the influence of biomechanical constraints at the cognitive level, without any overt movement. The observed spatial dependency of the influence of biomechanical constraints on performance in our study provides direct evidence for an automatically induced action simulation toward objects observed within PPS, but not within EPS. Additionally, the results of experiment 1 show that the automatic action simulation is also present at an effector-specific level and does not necessarily require the observation of graspable objects, but can also be triggered by the observation of corporeal objects. Importantly, the observation of hands or objects within PPS is not a prerequisite for the use of MI. Indeed, the use of MI within EPS has also been shown to be elicited in mental rotation tasks of corporeal objects (Parsons 1994; ter Horst et al. 2010) and non-corporeal objects (Kosslyn et al. 2001; Tomasino and Rumiati 2004). This engagement in MI is likely attributable to task instructions (Tomasino and Rumiati 2004) and stimulus properties (ter Horst et al. 2010). Consequently, the use of MI, or the simulation of an action, does not in itself necessitate the involvement of multisensory PPS mechanisms. However, when objects are presented within PPS, multisensory PPS mechanisms are involved in the action simulation (Graziano et al. 1997; Murata et al. 1997; Duhamel et al. 1998; Ladavas et al. 1998a, b; Makin et al. 2007; Gallivan et al. 2009). The involvement of these multisensory mechanisms is likely to underlie the differential use of MI between stimuli presented within PPS and EPS in our study.

Our results are in apparent contrast with the findings of Coello et al. (2008), which show that action simulation is only used for observed stimuli placed near the transition from PPS to EPS. These findings, however, are likely to cover a different aspect of PPS functionality than the one addressed by the action simulation theory. Coello et al. (2008) studied the use of action simulation in a reachability task, whereas the action simulation theory covers the automatic use of action simulation toward observed graspable objects within PPS. Consequently, task differences are likely to underlie the difference between the use of action simulation observed by Coello et al. (2008) and that observed in our experiments.

Finally, we consider alternative interpretations. First, the results of experiment 1 might also be explained by the influence of visual experience. Lateral hand orientations at the “Near” location are more difficult to adopt than the same orientations at the “Far” location, especially when the elbow is flexed and the upper arms are parallel to the body, as in our set-up. Because this posture is biomechanically difficult, people rarely adopt it. It is therefore likely that the visual experience of one’s own hand in this orientation at the “Near” location is also smaller than at the “Far” location, which, in turn, might explain the observed differences between lateral and medial rotations. Still, this interpretation cannot completely account for our findings for two reasons. First, we obtained similar results in our second experiment, and the visual experience of cups with the handle rotated leftward or rightward is unlikely to differ. Second, for hand laterality judgment tasks, the use of motor-related processes has been demonstrated by, for example, postural influences (de Lange et al. 2005, 2006; Ionta et al. 2007; Ionta and Blanke 2009). Importantly, postural effects have been shown to influence performance for hand stimuli, but not for letter stimuli, which typically induce the use of a visual strategy (de Lange et al. 2005). In addition, when participants are instructed to use a visual strategy, the DOR effect is not obtained in a hand laterality judgment task (Tomasino and Rumiati 2004; ter Horst et al. 2010). In our experiment 1, we did obtain an influence of biomechanical constraints for stimuli in PPS, but not EPS.

A second alternative interpretation of the results of experiments 1 and 2 might be sought in the difference in visual angle between the stimuli presented at the Near and Far location. That is, despite the identical physical size of the stimuli at both locations, the visual angles differed. It may therefore be argued that the larger visual angle of the stimuli at the Near location influenced the engagement in MI. At odds with this explanation is a recent study showing that keeping the visual angle of a cup constant, whether shown nearby or far away, does not alter the relationship between the spatial position of objects and the automatic triggering of potential motor acts (Costantini et al. 2010). Moreover, on a more phenomenological level, maintaining identical visual angles for stimuli presented at the Near and Far location would result in an unrealistic situation, as objects far away project smaller images on the retina than objects situated nearby.

For hand stimuli presented in the EPS, we hypothesized no influence of biomechanical constraints on participants’ performance. As we indeed found no DOR effect for stimuli presented at the “Far” location, we presume that participants used a more visually guided strategy, such as Visual Imagery (VI), to solve the task. VI involves simulating the execution of a movement from a third person perspective. Consequently, VI is not subject to biomechanical constraints and has been shown to be an effective strategy for solving the hand laterality judgment task (Tomasino and Rumiati 2004; ter Horst et al. 2010). It is therefore likely that participants mentally rotated the stimuli presented within EPS in an allocentric frame of reference.

In sum, the present study shows that the presentation of hand and graspable object stimuli within PPS resulted in the engagement in MI, whereas the presentation of the same stimuli in EPS did not. These findings provide direct evidence for the action simulation hypothesis and show that an action is automatically simulated toward objects presented in PPS, but not toward objects presented in EPS.