Introduction

The sense that our body belongs to us (self-attribution) and that we can see, feel and move it is termed body ownership (Gallagher 2000), while our sense of embodiment is the experience of existing within the borders of our body (Arzy et al. 2006). Our senses of ownership and embodiment are crucial aspects of self-awareness, and understanding the mechanisms through which we construct, maintain and utilise an internal representation of our body has implications for understanding how we successfully interact with objects and people in the world. Intuitively it might seem that our internal bodily representation is a fixed and durable construct, but numerous lines of research highlight the modifiable nature of our senses of ownership, embodiment and self-awareness. Disorders of body ownership are well documented in neuropsychology. For example, somatoparaphrenia can occur following damage to a variety of right hemisphere brain structures (including parietal-temporal regions, the insula and basal ganglia) and causes a sense of disownership such that patients will often claim that their limb belongs to another person (see Vallar and Ronchi 2009 for a review). Another disorder with potentially severe consequences is body integrity identity disorder, in which individuals experience a strong sense that a healthy limb does not ‘belong’ and desire to have it amputated so that their physical body matches the body image with which they identify (First 2005).

Phantom limb syndrome is a particularly interesting disorder of body representation, reported in up to 80% of amputees, in which an amputated limb is perceived as still being present (Ramachandran and Hirstein 1998). However, phantom limbs do not only occur in amputees. Supernumerary phantom limb (SPL) disorder (pseudopolymelia) is usually associated with right hemisphere cortical damage and is characterised by the perception of one or more supernumerary limbs on the contralesional side of the patient’s body (Halligan et al. 1993, 1995). For example, a patient reported by Hari et al. (1998) experienced an additional left arm that appeared to be in the previous position of her real left arm during movement. It has been suggested that this phenomenon may be due to a mismatch between efferent and proprioceptive signals, causing a moving limb to be perceived in two places (Giummarra et al. 2008). This disorder raises the possibility that two representations of the same limb can occur simultaneously.

Whilst serious malfunctions in body representation can have a detrimental effect on the way we experience our body, the transitory nature of our senses of body ownership and embodiment may hold some survival value. Indeed, given the succession of physical changes the human body undergoes throughout a lifetime of development, ageing and physical injury, the maintenance of a rigid and concrete bodily representation would not be useful. A degree of flexibility also facilitates the extension of our internal bodily representation beyond the physical borders of the body, enabling the temporary incorporation of external objects (e.g. tools) into the body schema in order to attain goals. Iriki et al. (1996) discovered bimodal neurons in the postcentral gyrus of monkeys that are activated both in response to somatosensory and visual stimuli at or near the hand and in response to stimuli near the end of a rake when it was used as a tool to retrieve food, thus indicating that the tool became represented as an extension of the hand. Prostheses worn by amputees can be thought of as tools that extend the existing limb. Indeed, research suggests that wearing a prosthesis alters amputees’ internal representations of the amputated limb, as they tend to overestimate the length of the remaining part of the limb (McDonnell et al. 1989). Amputees experiencing phantom limbs often report that the phantom limb is perceived to embody the prosthesis (Melzack 1990), suggesting that the prosthesis has been incorporated into the subject’s internal bodily representation.

The flexibility of body ownership has been investigated experimentally in healthy subjects through the illusory embodiment of a rubber limb. In the Rubber Hand Illusion (RHI), subjects watch a paintbrush stroking a rubber hand whilst simultaneously feeling a paintbrush stroking their own hand, which is hidden from sight. When tactile stimulation of the rubber hand and real hand is synchronous, subjects report experiencing a sense of ownership of the rubber hand, and a remapping of the perceived location of the real hand towards the rubber hand location occurs (Botvinick and Cohen 1998). Subjective reports of ownership have been supported by the observation that threatening or contorting the embodied rubber hand elicits increased activity in cortical areas associated with anxiety (i.e. the left insula and anterior cingulate gyrus; Ehrsson et al. 2007) and increased skin conductance (Armel and Ramachandran 2003).

The RHI provides an opportunity to explore the basis of ownership in healthy controls and indicates that ownership may be based on the convergence of multi-sensory inputs. To resolve the multi-sensory conflict created by seeing the tactile stimulation in one location and feeling it in another, visual and tactile inputs are matched. The visual capture of touch results in a unified multi-sensory event experienced in the location of the rubber hand (Makin et al. 2008; Botvinick and Cohen 1998). The notion that ownership in the RHI is related to the integration of multi-sensory inputs is supported by the observation that activity in the premotor cortex occurs at illusion onset and is correlated with the strength of the sense of ownership of the rubber hand (Ehrsson et al. 2004, 2005), as parietal regions that integrate visual, tactile and proprioceptive information send information to the premotor cortex (Graziano and Botvinick 2001).

Armel and Ramachandran (2003) argue that a bottom-up Bayesian process of multi-sensory matching is both sufficient and necessary for the RHI to occur, regardless of top-down knowledge about the plausibility of ownership of the object in question. Indeed, they found that as long as tactile stimulation of the rubber hand and the unseen real hand was synchronous the RHI occurred despite discrepancies in hand size and skin tone, when the rubber arm was placed in anatomically implausible locations and even when the rubber hand was extended by up to 3 feet. Most surprisingly, when subjects viewed a tabletop being stroked synchronously with the real unseen hand, they reported feeling as though the tactile sensations were arising from the table, particularly when stimulation was applied to a texture common to both the hand and the table, such as a plaster. These results have not been well replicated, however, and subsequent research suggests that the RHI can be prevented when the rubber hand is rotated by 180° (Ehrsson et al. 2004; Tsakiris and Haggard 2005) or when it is replaced with a piece of wood or an incongruent hand (i.e. a right hand, when the left had been stimulated) (Tsakiris and Haggard 2005). Costantini and Haggard (2007) found that the effects of matching visual and tactile stimulation on the sense of ownership in the RHI occurred only after the matching of current visual and proprioceptive states. This indicates that whilst a bottom-up process of matching sensory inputs may be necessary to elicit the RHI, it is not sufficient, as the illusion is also dependent on a top-down process of matching the rubber hand with pre-existing internal representations of the body (Tsakiris and Haggard 2005).

A distinction has commonly been made between two types of internal bodily representation: ‘body image’ for perception and ‘body schema’ for action (Paillard 1991, 1999). The body schema is a dynamic representation of the body constructed primarily from the bottom-up sensory input necessary for action and postural configuration, such as proprioceptive information from the muscles, joints and skin (Gallagher 1986; Paillard 1999; Kammers et al. 2006). Body image, however, is an internal mental representation of properties of the body such as its form and external appearance, based on higher order top-down factors such as a person’s perceptual experience, conceptual understanding of their body, and emotional attitude towards it (Gallagher and Cole 1995). This distinction is highlighted by disorders in which disruption of one specific representation occurs. For example, hemispatial neglect is a disorder occurring after damage to the posterior right hemisphere in which patients sometimes act as though the entire contralesional side of space and of their body does not exist. The ability to make automatic movements with the contralesional limb tends to be unimpaired, indicating a disorder of body image rather than body schema (Gallagher 2005). Conversely, Buxbaum et al. (2000) present evidence for an impaired body schema in a patient with apraxia.

The way in which multi-sensory conflict in the RHI is resolved may depend on which particular representation is disrupted. Kammers et al. (2009) asked participants to indicate the position of their real unseen hand using both perceptual judgements and motor judgements (i.e. a fast ballistic pointing movement with either the stimulated or non-stimulated hand to the other hand), with perceptual responses and reaching thought to be based predominantly on the body image and body schema respectively (Kammers et al. 2006). Only perceptual judgements, and not reaching movements, were sensitive to the RHI, indicating a distortion in the underlying body image, but not the body schema. This result is notably different from that found by Botvinick and Cohen (1998), for whom intermanual reaches reflected the perceptual bias of the RHI. However, in that study, participants were required to draw their finger under the table until it was judged to be in alignment with the index finger of the other hand. Thus, rather than make an action response as in the Kammers et al. (2009) and current experiments, participants were effectively asked to use their finger as an indicator in a perceptual judgement task. Unlike ballistic goal-directed reaching movements, this form of judgement may be subject to the same perceptual biases as making verbal or visual perceptual judgements. Kammers et al. (2009) suggest that the visual capture of proprioception in the RHI only occurs for perceptual responses because proprioception is weighted more heavily than vision in the body schema used for action than in the body image used for perceptual judgements. However, one of the drawbacks of traditional RHI studies is that they are based on an illusory representation of a static rubber hand, which, due to its static nature, may prevent incorporation into the body schema; when subjects are asked to make a motor response they may therefore revert to a representation of their real dynamic hand.

The current study used spatially coincident computer-manipulated video feedback that allowed participants to view a live dynamic video image of their own hand in the same location as their real unseen hand. The image could be manipulated so that participants viewed two virtual supernumerary left hands (SNLs), both visually identical to the real left hand, but offset to the left and right of the position of the real (unseen) hand. Subjects were not informed as to which (if either) was in the same location as their actual hand. The dynamic nature of the video image meant that, unlike previous RHI research (Botvinick and Cohen 1998; Kammers et al. 2009) in which a static rubber limb was used, the to-be-embodied limb was dynamic and could be controlled by the participant, potentially facilitating its incorporation into the action-based body schema. Additionally, whilst the classic RHI is usually limited to vision and passive touch, the current set-up allowed for the investigation of whether congruency between active touch and vision is sufficient to elicit a sense of ownership over a spatially offset representation of the limb. A previous study by Tsakiris et al. (2006) demonstrated that using synchronous active movement for the RHI produced a more complete perceptual shift of the real position of the limb, incorporating multiple digits, whereas passive movement (the participant’s finger moved passively by the experimenter) or passive touch (the static hand stroked by a paintbrush) led to more localised perceptual shifts that were confined to the stimulated finger. The suggestion was that action, and so the body schema, influences perceptual body awareness (body image/ownership), with ownership in this case being inferred from an indication of proprioceptive drift. The current study takes this a step further by using direct low- and high-level measurements of ownership to investigate whether synchronous active stroking can inform low-level motor responses (body schema) as well as high-level perceptual judgements of ownership (body image).

In the current study, participants carried out an active touch task in which they were required to stroke a toothbrush with their left index finger. Two representations of the left hand were viewed simultaneously and the visual feedback was manipulated across three conditions. In the ‘Left Synchronous’ (LS) condition, the hand on the left was viewed stroking the brush synchronously with tactile stimulation of the real hand, whereas a half second visual delay was applied to the hand on the right. In the ‘Right Synchronous’ (RS) condition, the hand on the right was viewed stroking the brush synchronously with the tactile stimulation of the real hand, whereas a half second delay was applied to the hand on the left. In the ‘Both Synchronous’ (BS) condition, both hands were viewed stroking the brush synchronously with the tactile stimulation of the real unseen hand. This manipulation allowed the exploration of whether, when two competing representations of the same limb are viewed, the one for which seen and felt touch are synchronous will be incorporated into body representations and, in addition, whether synchronous seen and felt touch of both limbs results in both being incorporated into body representations. Indeed, disorders such as SPL disorder suggest that it is possible to have two simultaneous representations of the same limb, and a recent study by Ehrsson (2009), in which two rubber limbs were presented either side of the unseen real hand, found that stroking the real hand and both rubber limbs synchronously with a paintbrush increased skin conductance to noxious stimuli applied to both fake limbs.

In the current study, the extent of the illusion in each condition was measured using a questionnaire and a motor task to indicate whether either (or both) of the limbs had been incorporated into the body image and body schema, respectively (Kammers et al. 2006, 2009). The questionnaire was an adapted version of that used by Botvinick and Cohen (1998), designed to measure the subjective sense of ownership of each of the hands and provide a more thorough exploration of the subjective reported sense of ownership of two limbs observed by Ehrsson (2009).

Based on previous research indicating that both congruent vision and passive touch (Botvinick and Cohen 1998; Tsakiris and Haggard 2005) and congruent vision and active movement (Tsakiris et al. 2006) elicit the RHI, subjects should experience a stronger sense of ownership for a limb when seen and felt touch is synchronous rather than asynchronous, i.e. a stronger sense of ownership of the left hand should be reported in the LS than in the RS condition, and a stronger sense of ownership of the right hand should be reported in the RS than in the LS condition. This was examined using planned comparisons conducted between ownership scores for both the left and right SNL in the LS and RS conditions. It was also predicted that synchronously stroking both images of the hand would cause both SNLs to be incorporated into the body image, so that a stronger sense of ownership of both hands should be reported in the BS condition than in either the LS or RS condition. Planned comparisons between ownership scores given in response to questions explicitly regarding both SNLs were used to directly test these predictions.

In the motor task participants were required to make an open-loop pointing movement towards a target positioned directly ahead of their real unseen hand and spaced equally between the two seen hands. If the hand on the left is incorporated into the body schema (predicted to occur in the LS condition due to synchronous stroking of the left hand) then the handpath of the pointing movement of the real hand should err to the right of the target, whereas if the hand on the right is incorporated into the body schema (predicted to occur in the RS condition due to synchronous stroking of the right hand) then the handpath should err to the left of the target (Fig. 1a). Planned comparisons were therefore conducted between end-point reaching errors in the LS and RS conditions and baseline reaches.

Fig. 1

a Real and SNL handpaths expected if the left, right, or actual limb is controlled in the pointing task. b Expected handpaths in the BS distractor condition pointing to the target with the left hand if both hands are incorporated into the body schema, demonstrating distractor avoidance behaviour for the right hand, and if only the left hand is incorporated into the body schema, driving the right hand through the distractor

For the BS condition, an additional pointing task was added in which distractor items were presented either side of the target. If both hands were incorporated into the body schema then reaching directly to the target location with one SNL would drive the other SNL through the distractor. Thus, if both left hands are part of the body schema a different handpath must be calculated (compared to the no-distractor condition) in order to ensure that both hands avoid the distractor items (Fig. 1b). Even though the distractor objects were visual images and as such provided no real obstruction to hand movement, alterations to the handpaths should still be observed, as participants have been found to alter their handpaths in the presence of non-target LEDs even when these are not physical obstacles to movement (Tipper et al. 1997). A further planned comparison between mid-point error in the distractor and no-distractor tasks was conducted to directly test this prediction.
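To make the spatial logic of these predictions concrete, the minimal sketch below computes the endpoint error that would be expected if a reach were planned entirely from the position of one of the offset limbs. It is illustrative only and not part of the original analysis; the 6 cm limb offset and 10 cm target distance are taken from the Method section below, and the assumption of complete remapping is a simplification.

```python
# Illustrative only: the endpoint error expected if a reach were planned
# entirely from the position of an offset limb representation. The 6 cm
# offset and 10 cm target distance anticipate the Method section; the
# assumption of complete remapping is a simplification for illustration.
import math

TARGET_DISTANCE_CM = 10.0   # target directly ahead of the real (unseen) hand
SNL_OFFSET_CM = 6.0         # lateral offset of each supernumerary limb image

def expected_endpoint_error_deg(encoded_shift_cm: float) -> float:
    """Endpoint error (deg) if the reach vector is planned from a hand
    representation shifted laterally by encoded_shift_cm (negative = left)
    but executed by the real hand. Positive output = rightward error."""
    # A vector planned from the shifted position to the target, but executed
    # from the true position, lands off-target by the same lateral shift.
    return math.degrees(math.atan2(-encoded_shift_cm, TARGET_DISTANCE_CM))

# Full incorporation of the left SNL (shift of -6 cm) -> ~ +31 deg (rightward);
# full incorporation of the right SNL (+6 cm) -> ~ -31 deg (leftward).
print(expected_endpoint_error_deg(-SNL_OFFSET_CM))  # ~= +31.0
print(expected_endpoint_error_deg(+SNL_OFFSET_CM))  # ~= -31.0
```

Under this simplification, complete incorporation of an SNL would predict an endpoint error of roughly ±31°, so smaller observed errors can be read as partial shifts of the encoded hand position.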

Method

Participants

Twelve University of Nottingham undergraduate volunteers (1 male) aged 19–22 years participated in the study. All participants were right-handed, had normal or corrected-to-normal vision and gave informed consent.

Apparatus

Participants viewed real-time video images of their left hand and arm from the same perspective as if viewing the actual hand using a MIRAGE system (Newport et al. 2009): participants sat at a table and looked into a mirror suspended horizontally above the work surface. A computer monitor was suspended equidistant above the mirror, facing downwards. Live video images of the participant’s moving hand beneath the mirror (delay < 17 ms) were captured and displayed via the monitor and mirror. The location and angles of the camera, monitor and mirror were such that the viewed real-time images of the participant’s limb appeared in the same spatial location and from the same perspective as if viewing the limb directly (Fig. 2). Captured images could be displayed veridically, or manipulated by in-house software. Image manipulation involved duplicating and overlaying images of the hand with a lateral shift to the left and right as required. When duplicate images of the left hand were displayed, the left image (left SNL) was shifted leftwards and the right image (right SNL) rightwards of the real hand such that the index finger of the left SNL was 6 cm to the left of the real index finger and the index finger of the right SNL was 6 cm to the right of the real index finger. All measurements were taken from the centre of the index finger nail tip. Thus the two images were equidistant from the location of the real hand (which could not be seen) and did not overlap. A temporal delay could be applied to either hand image independently. Questionnaire items were also displayed via MIRAGE so that participants were not required to move for this part of the experiment. A 5 mm infra-red reflective marker was attached to the nail of the index finger so that spatial handpaths could be recorded. A black fabric bib ensured that participants were unable to see any portion of their real limb.
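The in-house MIRAGE software itself is not described in further detail here; the sketch below is merely an illustration of the kind of frame-level manipulation outlined above (duplicating the live image, shifting each copy laterally and buffering one copy to impose a temporal delay). The 60 Hz frame rate, the pixels-per-centimetre scaling and all function names are assumptions rather than details of the actual system.

```python
# Illustrative sketch only -- not the actual MIRAGE software. The 60 Hz frame
# rate, the 10 px/cm scaling and all names below are hypothetical.
from collections import deque
from typing import Optional
import numpy as np

FRAME_RATE_HZ = 60                          # assumed capture rate
PX_PER_CM = 10                              # assumed spatial calibration
OFFSET_PX = 6 * PX_PER_CM                   # each SNL offset 6 cm laterally
DELAY_FRAMES = int(0.5 * FRAME_RATE_HZ)     # 0.5 s asynchrony ~= 30 frames

delay_buffer = deque(maxlen=DELAY_FRAMES + 1)

def shift_lateral(frame: np.ndarray, shift_px: int) -> np.ndarray:
    """Translate the image horizontally; positive = rightward on screen."""
    shifted = np.zeros_like(frame)
    if shift_px > 0:
        shifted[:, shift_px:] = frame[:, :-shift_px]
    elif shift_px < 0:
        shifted[:, :shift_px] = frame[:, -shift_px:]
    else:
        shifted = frame.copy()
    return shifted

def compose_display(frame: np.ndarray, delayed_side: Optional[str]) -> np.ndarray:
    """Overlay left and right copies of the live frame, delaying one copy
    ('left' or 'right') or neither (None, as in the BS condition)."""
    delay_buffer.append(frame)
    old = delay_buffer[0]                        # frame from ~0.5 s earlier
    left_src = old if delayed_side == "left" else frame
    right_src = old if delayed_side == "right" else frame
    left_snl = shift_lateral(left_src, -OFFSET_PX)
    right_snl = shift_lateral(right_src, +OFFSET_PX)
    return np.maximum(left_snl, right_snl)       # simple overlay of the copies
```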

Fig. 2

a Schematic representation of the MIRAGE system. The angles of the camera, monitor and mirror enable real-time video images of the participant’s actual limb to be presented in the same spatial position and from the same visual perspective as if viewing the limb directly

Procedure

Upon being seated at the apparatus participants were given a brief acclimatisation period (10 s) during which they viewed an un-manipulated image of their left hand moving freely under the screen (i.e. in real time and in its real location) in order that they could become accustomed to the experimental set-up. The right hand was kept on the participant’s lap, and the index finger of the left hand was placed on an unseen tactile start point, where it remained between tasks.

In all subsequent conditions participants stroked the bristles of a toothbrush at 1 Hz for 20 s with their left index finger while viewing one or two images of their left hand. In the baseline condition only one image of the left hand was shown in the same location as the real hand. In the experimental conditions two images of the moving left hand were shown and a 0.5 s temporal delay was applied to the left SNL, the right SNL or neither SNL so that the stroking was synchronous for the right (RS condition), left (LS) or both (BS) SNLs respectively. Participants were informed that they would be shown an extra hand but were given no information as to which (if any) corresponded to the position of their real hand. For each condition participants performed two 20 s sessions of stroking, verbally responded to questionnaire items (taking approximately two minutes) and pointed three times to a visual target with their left hand (but without vision of the hand or any image of the hand). The order of the pointing and questionnaire tasks was counterbalanced between participants with a separate period of stroking performed prior to each task.

For the pointing task participants made open-loop reaches towards a visual target three times. The target was displayed as a white circle against a black background, presented 10 cm directly in front of the index finger. In an additional pointing-with-distractors task (presented in the BS condition only), red boxes were presented 3 cm to either side of the target. The distance between each distractor and the start point was 5 cm. At the end of each reach all participants were able to successfully return to the tactile start point. For all pointing conditions the hand (real or otherwise) was not visible at any point.

Two questionnaires were presented on-screen. Questionnaire A was presented after the baseline active touch task, in which a single un-manipulated image of the real left hand was viewed stroking the brush for 20 s. Questionnaire A included six questions related to participants’ experience of their hand as viewed through the screen and was included primarily to confirm that participants felt a sense of ownership over a video image of their hand when in its true location. Questionnaire B, presented after active touch in all the other conditions, included 14 questions regarding the subject’s experience of limb ownership in each of the three conditions (LS, RS, BS), with the aim of establishing which of the seen limbs were incorporated into the participant’s body image. The questions were based on those used by Botvinick and Cohen (1998) so that the results would be comparable with previous RHI experiments. Questionnaire B was presented three times, once for each condition, and participants responded verbally according to a 7-point rating scale (1 = strongly disagree, 7 = strongly agree). The questionnaire items were presented on-screen in a random order and the participants gave verbal responses so that the hand could not be viewed and was not required to move.

Results

Questionnaire responses

Questionnaire A: The results of Questionnaire A demonstrate that the image elicited a strong sense of ownership. After only 20 s of stimulation participants were able to treat the video image of their left hand as if it were their actual left hand, showing strong agreement with the statements “it seemed like the image of the hand was my hand” (median = 6) and “it seemed like the image of the hand belonged to me” (median = 6).

Questionnaire B: Questionnaire B items revealed that when one SNL was subjected to a temporal delay (asynchronous), participants felt that the synchronous hand was touching the brush, that it belonged to them and that it was part of their body. When both (left) hands stroked synchronously, participants felt that both left hands belonged to them and that both were part of their body (Fig. 3a). In order to compare the sense of ownership for each hand in each condition statistically, scores from the items ‘It seemed like the left/right/both hand(s) belonged to me’ and ‘It seemed like the left/right/both hand(s) was/were part of my body’ were collapsed such that a measure of ownership of the LEFT, RIGHT and BOTH hands was computed for each condition (LS, RS, BS) (Fig. 3b).
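As a sketch of this collapsing step (the original analysis software and data layout are not reported, so the long-format table, column names and file name below are hypothetical), the two ownership items could be averaged per hand and condition as follows.

```python
# Hypothetical sketch of the collapsing step; the file name, long-format
# layout and column names are assumptions, not the reported data format.
import pandas as pd

# One row per participant x condition (LS/RS/BS) x hand (left/right/both)
# x item ('belonged to me' / 'part of my body'), with a 1-7 rating.
ratings = pd.read_csv("questionnaire_b_ratings.csv")   # hypothetical file

ownership = (
    ratings[ratings["item"].isin(["belonged to me", "part of my body"])]
    .groupby(["participant", "condition", "hand"], as_index=False)["rating"]
    .mean()
    .rename(columns={"rating": "ownership"})
)
```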

Fig. 3

a Questionnaire scores. Medians and IQRs after synchronous stroking by the left (left), right (middle) and both (right) SNLs. b Sense of ownership ratings over the left (light grey), right (dark grey) and both (open) hands when the synchronously stroked hand was either the left (LS), right (RS) or both hands (BS). Note that ownership was claimed over both hands only when both hands were synchronous. Error bars show standard error. c Typical handpaths during open-loop reaching. Left: handpaths of the unseen real hand (indicated by the faint central index finger) following synchronous stroking by the left and right SNL; mean end-point errors were significantly different from baseline reaches. The hand images represent the locations of the left and right SNLs during stroking (not visible during reaching). Right: handpaths following simultaneous synchronous stroking by both SNLs with and without distractors (open squares); distractors made no difference to endpoint errors or mid-path deviations

A 3 × 3 repeated measures ANOVA was carried out on these data with the factors Sense of Ownership (Left, Right, Both) and Hand Synchronicity (LS, RS, BS). A significant main effect of Sense of Ownership was found (F(2,22) = 8.511, p < 0.05), as was a significant main effect of Hand Synchronicity (F(2,22) = 9.649, p < 0.05) and a significant interaction (F(4,44) = 38.811, p < 0.001).
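A minimal sketch of this analysis is given below, continuing from the hypothetical ownership table above; statsmodels' AnovaRM is one convenient way to run a two-factor repeated measures ANOVA, but it was not necessarily the package used for the original analysis.

```python
# Sketch of the 3 x 3 repeated measures ANOVA on the collapsed ownership
# scores; `ownership` is the hypothetical table built in the sketch above.
# statsmodels' AnovaRM is one way to run it (not necessarily the original
# analysis software).
from statsmodels.stats.anova import AnovaRM

aov = AnovaRM(
    data=ownership,                 # columns: participant, condition, hand, ownership
    depvar="ownership",
    subject="participant",
    within=["hand", "condition"],   # Sense of Ownership x Hand Synchronicity
).fit()
print(aov)                          # F tests for both main effects and the interaction
```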

Planned comparisons revealed that the mean ownership of the left hand was significantly greater in the LS condition than in the RS condition (F(1,22) = 71.845, p < 0.0001); ownership of the right hand was significantly greater in the RS condition than the LS condition (F(1,22) = 64.574, p < 0.0001) and ownership of both hands was greater in the BS condition than either the RS (F(1,22) = 32.766, p < 0.0001) or LS (F(1,22) = 29.094, p < 0.0001) conditions.

Pointing data

End-point and mid-point errors (in degrees) were calculated for each experimental condition (BS, LS and RS) and the baseline condition. End-point error was calculated as the angle between a straight line from the start point to the target and a straight line from the start point to the end-point of the movement. The same procedure was used for the mid-point calculation, except that the mid-point hand location (as the hand passed the distractors) was used instead of the end-point location. Due to a recording error the pointing data were lost for one participant in the LS condition. As a consequence their data were omitted from all reaching planned comparisons.
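As a sketch of this calculation (the coordinate convention, with x increasing rightwards and y increasing away from the body in centimetres, and the function name are assumptions; the original analysis code is not reported), the signed angular error can be computed from the recorded marker positions as follows.

```python
# Sketch of the signed angular error computation; the coordinate convention
# (x = rightward, y = away from the body, in cm) and names are assumptions.
import numpy as np

def angular_error_deg(start, target, probe):
    """Signed angle (deg) between the start->target and start->probe lines,
    where probe is the reach end-point (end-point error) or the hand position
    as it passed the distractors (mid-point error).
    Positive = rightward of the target line, negative = leftward."""
    start, target, probe = (np.asarray(p, dtype=float) for p in (start, target, probe))
    d_t = target - start
    d_p = probe - start
    ang_target = np.arctan2(d_t[0], d_t[1])   # arctan2(dx, dy): 0 deg = straight ahead
    ang_probe = np.arctan2(d_p[0], d_p[1])
    return float(np.degrees(ang_probe - ang_target))

# Example: target 10 cm straight ahead, reach ending 2 cm to the right
# gives an error of ~ +11.3 degrees.
print(angular_error_deg(start=(0, 0), target=(0, 10), probe=(2, 10)))
```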

The mean baseline end-point error (0.13°, SD 6.51) was compared to the mean end-point error for each of the three conditions (RS, LS and BS) using a priori pairwise comparisons. A rightward mean end-point error was observed in the LS condition (mean = 11.44°, SD = 11.37), which was significantly more rightward than the mean baseline end-point error (F(1,40) = 17.630, p = 0.0001). A leftward mean end-point error was observed in the RS condition (mean = −21.53°, SD = 6.09) and was significantly more leftward than the mean baseline end-point error (F(1,40) = 54.152, p = 0.0001). Finally, a leftward mean end-point error was found in the BS condition (mean = −16.08°, SD = 12.45) and was significantly more leftward than the baseline mean end-point error (F(1,40) = 31.668, p = 0.0001). A priori pairwise comparisons revealed that there was no significant difference between the mean mid-point error in the distractor task (mean = 22.32°, SD = 11.38) and the no-distractor task (mean = 22.32°, SD = 9.27) in the BS condition (F(1,40) = 0.659, p = 0.4216), indicating that participants did not take distractor avoidance into account when executing their reaches (see Fig. 3c for example handpaths).

For the purposes of comparison, a deviation of 11.44° (recorded for the LS condition) equates to a rightward deviation of 2 cm at the target distance, suggesting that for the action response the encoded position of the real limb had been shifted 2 cm towards the position of the left (synchronous) SNL. For the RS condition, an error of −21.53° corresponds to a leftward deviation of 3.9 cm at the target distance, suggesting that the position of the real limb had been shifted ~4 cm towards the position of the right (synchronous) SNL, and an end-point error of −16.08° (BS condition) would be equivalent to a lateral shift of 2.9 cm towards the right SNL. As can be seen from Fig. 4, the observed pattern of results was very similar for all participants, with high ownership ratings for the left hand resulting in rightward reaches and high ownership ratings for the right hand resulting in leftward reaches.
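The conversions quoted above follow from simple trigonometry at the 10 cm target distance (lateral deviation = 10 × tan(end-point error)); the short check below reproduces the ~2 cm, ~3.9 cm and ~2.9 cm figures and is arithmetic verification only, not part of the original analysis.

```python
# Arithmetic check of the angle-to-centimetre conversions at the 10 cm target
# distance: lateral deviation = distance * tan(end-point error).
import math

TARGET_DISTANCE_CM = 10.0

for condition, error_deg in [("LS", 11.44), ("RS", -21.53), ("BS", -16.08)]:
    shift_cm = TARGET_DISTANCE_CM * math.tan(math.radians(error_deg))
    print(f"{condition}: {shift_cm:+.1f} cm at the target distance")
# Output: LS +2.0 cm, RS -3.9 cm, BS -2.9 cm (positive = rightward).
```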

Fig. 4

The relationship between reach accuracy and ownership ratings for all participants following left and right synchronous stroking for the questions ‘it seemed as though the hand on the left belonged to me’ (top left), ‘it seemed as though the hand on the right belonged to me’ (top right), ‘it seemed like the hand on the left was part of my body’ (bottom left) and ‘it seemed like the hand on the right was part of my body’ (bottom right). Low ownership ratings for the left SNL reliably produced leftward (−ve) errors consistent with control of the right SNL while high left SNL ownership ratings produced responses consistent with control of the left SNL (two leftmost panels). Low ownership ratings for the right SNL, on the other hand, produced rightward (+ve) errors consistent with control of the left SNL and high ownership ratings for the right SNL produced leftward (−ve) reaches consistent with control of the right SNL (two rightmost panels)

Discussion

Through the use of computer-manipulated real-time visual feedback subjects were exposed to two visual representations of their left hand, both visually identical to the real hand but offset to either side of the real hand location. Visual feedback of active touch was manipulated such that seen and felt touch was either synchronous for the left hand only, the right hand only or both hands. Questionnaire items related to ownership of the hand demonstrated that when only one hand was seen and felt to be synchronous there was a sense of ownership over that hand, but not the asynchronous hand. When both left hands stroked simultaneously and synchronously a sense of ownership was claimed over both hands.

As predicted, the questionnaire data indicated that synchronous active touch and vision can elicit a sense of ownership of an offset video image of a limb: a stronger sense of ownership was experienced for the limb for which seen and felt touch were synchronous than for the limb for which they were asynchronous. This finding is in agreement with previous research, which has demonstrated that illusory ownership of both a rubber limb (Botvinick and Cohen 1998) and a virtual limb (Slater et al. 2008) can be elicited by synchronous passive touch and vision. Additionally, active movement and visual feedback of that movement have been found to elicit a shift in perceived hand position towards a video image of a hand (Tsakiris et al. 2006). The current study corroborates and extends these findings and is the first to explore direct competition between simultaneously presented synchronous and asynchronous limbs in addition to presenting two synchronously stroking hands.

Ownership of the rubber hand in the RHI can be explained by a process of visual, tactile and proprioceptive matching (Botvinick and Cohen 1998). In the current study active touch was felt in the location of the real hand, but seen in the location of the synchronously stroked offset limb. Visual and tactile inputs were matched and, due to the greater weighting of visual information, a unified multi-sensory event was experienced in the location of the synchronously stroking seen hand.

The questionnaire results indicate that when both limbs stroked the brush synchronously with the real unseen hand a strong sense of ownership of both occurred. When asked directly about feelings of ownership over both of the left hands (“it seemed like both (left) hands were part of my body”, “it seemed like both (left) hands belonged to me”) a significantly greater sense of ownership was reported when both hands were synchronous (BS condition) compared to when only the left (LS condition) or right (RS condition) hand was synchronous. This finding is consistent with recent research by Ehrsson (2009), in which two rubber hands were stroked synchronously with the real unseen hand, eliciting an increased skin conductance response when noxious stimuli were applied to either limb, and indicates that integration of visual and tactile inputs may be sufficient to elicit the incorporation of two representations of the same limb into the subject’s body image.

The use of both a questionnaire and a motor response measure allowed investigation of whether each limb had been incorporated into the body image and body schema, respectively (Kammers et al. 2006, 2009), based on the distinction between the body schema as a representation of the body built from the bottom-up sensory input necessary for action, and the body image as a top-down representation for perception (Paillard 1991, 1999). The pointing task required participants to point towards a target without vision of either hand. In the LS condition a rightward end-point error was observed which was significantly greater than the baseline error. This indicated that the hand on the left had been incorporated into the subjects’ body schema, as the subjects had attempted to point to the target using the hand located on the left. Conversely, in the RS condition a leftward end-point error was observed that was significantly greater than the baseline error. This indicated that the hand on the right had been incorporated into the subjects’ body schema, as the subjects had attempted to point to the target using the hand located on the right.

As predicted, the results indicated that when two offset representations of the same limb were viewed, the one for which touch and vision were congruent was incorporated into the body schema. As in the classic RHI (Botvinick and Cohen 1998), it seems that a proprioceptive re-mapping of the perceived location of the real hand took place towards the location of the seen touch. The significant error demonstrated in the RS and LS conditions has implications for our understanding of the extent of the RHI. Kammers et al. (2009) reported that perceptual judgements about limb ownership, but not action movements, were sensitive to distortion in the RHI and argued that this demonstrated that visual capture of touch only occurs for the body image (for perceptual judgements) and that the body schema (for action) resists the illusion due to proprioception being weighted more heavily than vision. In the current experiment, however, when an offset representation of the left hand was seen to stroke a toothbrush synchronously with the unseen real left hand, the perceived real hand location was indeed remapped towards the location of the offset limb, as revealed by pointing errors consistent with a remapped limb position. In Kammers et al.’s (2009) study, however, the resistance of action responses to the RHI may have been an artefact of the task, which required subjects to incorporate a static fake limb into their body schema. Whilst subjects may have answered perceptual questions related to the ownership of the rubber limb, when asked to make a motor response they may have reverted to a representation of their real dynamic limb. It is not currently known whether the representation of the fake limb replaces, or exists in parallel to, the representation of the real limb. The dynamic nature of the visual representation of the limb used in the current study may have facilitated its incorporation into an action-based body schema, allowing the body schema to be updated in a way that is not possible with static limbs, however life-like in appearance.

Although participants claimed a sense of ownership over two limbs, no differences in handpath were observed between the distractor and no-distractor tasks in the BS condition. This suggests that while both limbs could be incorporated into the body image, they were not simultaneously incorporated into the body schema. Participants were apparently happy to allow one of the representations of their hand to pass through a visual distractor. Whilst this result indicates that both SNLs were not simultaneously incorporated into the body schema, a significant leftward end-point error was observed in the BS pointing task. This suggests that when both limbs synchronously stroked the toothbrush, and were in direct competition for assimilation into the body schema, it was the hand on the right that was incorporated into the body schema, rather than the left, and this was the case for all but one participant.

The conclusion that two supernumerary limbs can be assimilated into the body image, but that only one can be incorporated into the body schema, must be tempered with the following caveat: while two limbs could be embodied if both moved synchronously, participants may have felt that pointing with two hands was an impossible task and therefore chose to control only the one over which they felt most ownership, treating as irrelevant the perceived (but unseen) handpath of the other hand. Having said that, although the two hands were yoked in reality by the duplication of the video image, participants never saw them move together in a yoked fashion and the index fingers had already been seen to operate independently, so it is also possible that participants were under the impression that they could move them in independent directions such that both avoided the distractor. Nonetheless, even if participants made a conscious, or unconscious, decision to control only one hand, the vast majority of them chose the one on the right.

It is interesting to note that a bias towards the right SNL was apparent throughout the experiment for both perceptual ownership and reaching control. In the RS condition, participants made reaching movements as though the location of their hand had shifted ~4 cm towards the location of the right SNL. This is a considerable shift following only 20 s of synchronous stimulation, given that the right SNL was located just 6 cm to the right of the real hand. In contrast, the shift in the LS condition was not so large, with reaches being made as though the real hand had shifted ~2 cm towards the left SNL (i.e. only half the shift observed in the equivalent RS condition). Reaches in the BS condition were executed as though the actual limb location had shifted towards the right SNL, but to a lesser extent (~3 cm vs. ~4 cm). That is, a synchronously stroking right SNL seemed to be favoured by the body schema over a synchronously stroking left SNL in all conditions. Furthermore, when asked about each hand independently in the BS condition (“it seemed like the right/left hand was part of my body” etc.), ownership scores were also marginally higher for the right compared to the left SNL, suggesting that the right SNL may also be favoured by the body image when in direct competition.

It is unclear whether the asymmetries observed for the ownership scores in the BS condition also account for why it is the right SNL that is controlled during reaching, that is, whether top-down processes stemming from greater feelings of ownership dictate which limb is controlled. There are several reasons to suggest that this is not the case. First, if this were the case then the end-point error observed in the LS condition should be equivalent to that observed in the RS condition, as feelings of ownership between the two conditions do not appear to differ. The magnitude of the error in the RS condition, however, was twice that in the LS condition, suggesting some dissociation between the two. In addition, the single participant who reached rightwards rather than leftwards in the BS condition counter-intuitively gave higher ownership ratings for the right hand compared to the left hand (means = 6 and 3, respectively). It might be speculated that if there were a general preference for the right hand in the BS condition, it could be accounted for by the asymmetry of the distance from the body. The right SNL was always closer to the body than either the left SNL or the real hand and it seems plausible that a limb that is closer to the body (i.e. the right SNL) might more readily be accepted into body representations (although ownership scores, measured via questionnaire, do not appear to support this in the LS and RS conditions). It should also be noted that proprioceptive drift, a natural drift towards the body in the perceived position of body parts when occluded from view (Wann and Ibrahim 1992), is unlikely to be a confound. Baseline recordings, in which the active touch task was performed with video feedback of the hand in its true location, revealed a very small (0.13°) positive error, suggesting that any drift took the hand away from the body rather than towards it. Finally, and at the request of a reviewer, post hoc t tests were conducted on the questionnaire ratings in the BS condition. These analyses did not reveal significant differences in ownership between the right and left SNLs, or between the right and both SNLs, when both had been stroked synchronously (max. t(11) = 2.57, p = 0.52 Bonferroni corrected). Even if this had been significant it would merely have indicated that ownership and control were similarly affected, not that one was the result of the other. Whatever the mechanisms involved, the asymmetric nature of both the questionnaire and pointing data suggests that conscious and unconscious decisions about multiple representations of the limb may not treat them as equally as Ehrsson (2009) suggests. Indeed, given the distributed network involved in bodily representations (Vallar and Ronchi 2009) it is likely that the dissociation of effects observed here corresponds to separable underlying neural processes for the body image for ownership and the body schema for control.

In summary, after only 20 s of synchronous active stimulation participants incorporated a synchronous fake limb into both the body image and the body schema, as indicated by the reported sense of ownership and by pointing movements. Participants could also be made to embody multiple fake limbs into their body image representation. While the pointing task did not indicate that both fake representations of the limb were taken into consideration when planning the handpath, a subjective sense of ownership of both was reported. The current study provides insight into the way the body is experienced and highlights the transitory nature of body representations, which can be so readily modified. The current experimental set-up provides an important means of testing the extent to which body representation can be distorted and, as such, may provide an important tool for research into disorders of body representation.