Experimental Brain Research, Volume 204, Issue 3, pp 343–352

Hands only illusion: multisensory integration elicits sense of ownership for body parts but not for non-corporeal objects

Authors

  • Manos Tsakiris
    • Department of Psychology, Royal Holloway, University of London
  • Lewis Carpenter
    • Department of Psychology, Royal Holloway, University of London
  • Dafydd James
    • Department of Psychology, Royal Holloway, University of London
  • Aikaterini Fotopoulou
    • Department of Psychology, Institute of Cognitive Neuroscience, University College London
Research article

DOI: 10.1007/s00221-009-2039-3

Cite this article as:
Tsakiris, M., Carpenter, L., James, D. et al. Exp Brain Res (2010) 204: 343. doi:10.1007/s00221-009-2039-3

Abstract

The experience of body ownership can be successfully manipulated during the rubber hand illusion using synchronous multisensory stimulation. The hypothesis that multisensory integration is both a necessary and sufficient condition for body ownership is debated. We systematically varied the appearance of the object that was stimulated in synchrony or asynchrony with the participant’s hand. A viewed object that was transformed in three stages from a plain wooden block to a wooden hand was compared to a realistic rubber hand. Introspective and behavioural results show that participants experience a sense of ownership only for the realistic prosthetic hand, suggesting that not all objects can be experienced as part of one’s body. Instead, the viewed object must fit with a reference model of the body that contains important structural information about body parts. This body model can distinguish between corporeal and non-corporeal objects, and it therefore plays a critical role in maintaining a coherent sense of one’s body.

Keywords

Rubber hand illusion, Body ownership, Multisensory integration, Body model, Body representations

Introduction

Body ownership refers to the special perceptual status of one’s own body, which makes bodily sensations seem unique to oneself, that is, the feeling that ‘‘my body’’ belongs to me (Tsakiris et al. 2007b). Body ownership gives somatosensory signals a special phenomenal quality, and it is fundamental to self-consciousness: the relation between my body and ‘‘me’’ differs from both the relation between my body and other people’s bodies and the relation between ‘‘me’’ and external objects. The rubber hand illusion (RHI), first introduced by Botvinick and Cohen (1998), is an experimental paradigm that allows the controlled manipulation of body ownership. Watching a rubber hand being stroked synchronously with one’s own unseen hand causes the rubber hand to be attributed to one’s own body, to “feel like it is my hand” (Botvinick and Cohen 1998). This illusion does not occur when the rubber hand is stroked asynchronously with respect to the participant’s own hand. One behavioural correlate of the RHI is an induced change in the perceived location of the participant’s own hand towards the rubber hand. Botvinick and Cohen (1998) showed that, after synchronous visuo-tactile stimulation of the rubber hand and the participant’s hand, intermanual reaches with the participant’s unstimulated hand to the felt position of her own unseen, stimulated hand indicated a displacement of the felt position towards the rubber hand. In other words, participants perceived the position of their hand to be closer to the rubber hand than it really was. Similar patterns of mislocalizations and proprioceptive drifts have been obtained with different response methods (Tsakiris and Haggard 2005; see Kammers et al. 2008 for a dissociation between different response types). Interestingly, the prevalence of the illusion over time (Botvinick and Cohen 1998) and the subjective intensity of the experience of body ownership during the RHI (Longo et al. 2008) are positively correlated with drifts in the felt location of the subject’s own hand towards the rubber hand.

The manipulation of body ownership with the RHI has been well established in several replications (Armel and Ramachandran 2003; Ehrsson et al. 2004; Longo et al. 2008; Moseley et al. 2008; Tsakiris and Haggard 2005; Tsakiris et al. 2007a, b) and modifications of the classic paradigm (Austen et al. 2004; Capelari et al. 2009; Durgin et al. 2007; Ehrsson 2007; Ehrsson et al. 2005, 2007, 2009; Hägni et al. 2008; Kammers et al. 2009; Lenggenhager et al. 2007; Petkova and Ehrsson 2008; Schütz-Bosbach et al. 2006, 2009; Slater et al. 2008; Tsakiris et al. 2006; Tsakiris 2008). One key research question across these studies has focussed on the necessary and sufficient conditions for inducing a sense of ownership. Botvinick and Cohen (1998) suggested that intermodal matching between vision and touch is both necessary and sufficient for self-attribution of the rubber hand. Indeed, the first RHI studies showed that the presence of synchronized visual and tactile stimulation is a necessary condition for the induction of the RHI, since the RHI does not occur after asynchronous stimulation (Botvinick and Cohen 1998; Ehrsson et al. 2004; Tsakiris and Haggard 2005). But does this make intermodal matching sufficient for the experience of body ownership?

Armel and Ramachandran (2003) adopted a strong version of the Botvinick and Cohen view by arguing that any object can be experienced as part of one’s body if the appropriate intermodal matching is present. They stimulated the participant’s hand and the rubber hand synchronously. After the stimulation period, the experimenter ‘injured’ the rubber hand (e.g. by bending one of the rubber fingers backwards), and skin conductance responses (SCRs) were measured from the subject’s unstimulated hand. As predicted, SCRs were significantly higher after synchronous stimulation than after the control condition. Similar differences, albeit smaller in magnitude, between SCRs for synchronous versus asynchronous conditions were found when participants observed a table, instead of a rubber hand, being stroked while tactile stimulation was delivered on the participant’s own hand. According to Armel and Ramachandran (2003), both the rubber hand and the table, and in principle any other object, can be experienced as part of one’s body, provided that strong visuo-tactile correlations are present. It was argued that the illusion that “the fake hand/table is my hand” is the result of a purely bottom-up mechanism, which associates synchronous visuo-tactile events: any object can become part of “me”, simply because strong statistical correlations between different sensory modalities are both necessary and sufficient conditions for self-attribution (see Makin et al. 2008).

This “bottom-up” view has been challenged by recent studies that found no evidence for an induced sense of ownership when the viewed object does not resemble a body part. For example, replacing the realistic rubber hand with a neutral, non-corporeal, object abolishes the positive effect of synchronous stimulation (Tsakiris et al. 2008; Tsakiris and Haggard 2005; see also Graziano et al. 2000; Holmes et al. 2006). More recently, Haans et al. (2008) assessed the strength of the RHI for a viewed object that could have a hand shape or not (e.g. hand glove or a flat sheet). The results, contrary to what Armel and Ramachandran (2003) predicted, showed that only a hand-shaped object induced a sense of ownership as measured with a questionnaire.

The evidence on the lack of RHI for non-corporeal objects suggests that multisensory integration is a necessary but not a sufficient condition for the experience of ownership during the RHI. Instead, factors other than the mere correlation between synchronized visual and tactile events modulate the RHI. Current multisensory input may be modulated by anatomical and structural representations of the body, arising from prior experience, but also from innate body representations. Body representations involve the interpretation of peripheral inputs in the context of a rich internal model of the body’s structure; body-related percepts are not simply correlated, but they are integrated against a set of background conditions that preserve the coherence of bodily experience (Graziano and Botvinick 2001). These background conditions may modulate the integration of the multisensory input in a top-down manner. On this view, intermodal matching may not be sufficient for the experience of body ownership. Instead, intermodal matching, and in the case of the RHI, synchronous visuo-tactile stimulation, is a necessary condition for causing the onset of the RHI, but only if an internal reference model of the body is not violated. Tsakiris et al. (2008) suggested that the brain maintains a coherent sense of one’s body by a test-for-fit process that underpins the distinction between corporeal and non-corporeal objects on the basis of visuo-tactile evidence. However, all previous studies have used either non-corporeal objects or rubber hands, without a direct comparison between these two classes of stimuli.

In the present study, we investigate in greater detail the extent to which the match between the external object and the internal reference description of body parts modulates the experience of ownership in the RHI. Thus, a series of five different objects, ranging from a non-corporeal wooden object to hand-like wooden objects, and to a prosthetic hand, was used. A viewed object that was transformed in three stages from a plain wooden block to a wooden hand was compared to a realistic rubber hand. Five different groups of participants were exposed to one of five different objects being stroked with a paintbrush while receiving synchronous or asynchronous tactile stimulation on their own hand. The experience of ownership was quantified, first, by questionnaires adapted from Longo et al. (2008) and, second, by a behavioural proxy of the RHI, namely the change in the felt location of one’s hand after multisensory stimulation (see Tsakiris and Haggard 2005). Proprioceptive mislocalizations can be observed in the absence of experienced ownership after mere visual exposure to rubber hands (Holmes et al. 2006). However, under conditions of visuo-tactile stimulation that can elicit the RHI, the drift in the felt location of one’s hand towards or away from the viewed object in the classic RHI manipulations has been shown to correlate with the sense of body ownership (Botvinick and Cohen 1998; Longo et al. 2008), suggesting that proprioceptive drifts can be used as a behavioural proxy of ownership: proprioceptive drifts towards the viewed object indicate incorporation and experienced ownership, while proprioceptive drifts away from the viewed object indicate failure of incorporation and disownership.

On the basis of the existing accounts of RHI, three different predictions can be generated. The “bottom-up” hypothesis predicts that all stimuli will elicit comparable feelings of ownership (see Armel and Ramachandran 2003). The “body model” hypothesis predicts that only objects that match the structural model of the body will elicit a sense of ownership (Tsakiris et al. 2008). The “proportional hypothesis” predicts that the sense of ownership will increase proportionally to the structural similarity of the external object to a hand.

Methods

Experimental design

A 2 × 5 mixed design was used. Synchronous or asynchronous visuo-tactile stimulation between the viewed object and the participant’s left hand was a within-subjects factor. The form of the viewed object was a between-subjects factor with five levels (see Fig. 1). Stimulus 1 was a plain wooden block, pale and beige in colour, sharing no structural features with a hand. Stimulus 2 was built upon Stimulus 1 by adding a thumb-like feature to the right-hand side of the block. In Stimulus 3, we added one more structural feature to the shape of the object by creating a wrist shape. Stimulus 4 was a ‘wooden hand’, possessing the features of Stimuli 2 and 3, as well as the outlines of four fingers in addition to the thumb. Stimulus 5 was a realistic prosthetic hand. All stimuli had a comparable overall size.
Fig. 1

The five stimuli used as a between-subjects factor

RHI questionnaire

We adopted a total of 16 questions (see Appendix) from Longo et al. (2008). The questions referred to four different components of the experience of embodiment during the RHI paradigm: (a) five statements referring to a sense of ownership, (b) four statements referring to perceived location, (c) four statements referring to the experience of loss of one’s hand, and (d) three statements referring to the sense of agency (see Appendix). Participants completed two versions of the questionnaire, one for the synchronous and one for the asynchronous condition. Participants answered each statement by choosing a number on a 7-point Likert scale, ranging from −3 (“strongly disagree”) to +3 (“strongly agree”). The questions appeared in a random order.
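Reducing these questionnaire data to the four component scores amounts to averaging the relevant items per participant and condition. The following is a minimal illustrative sketch in Python only; the item-to-component grouping, file name and column names used here are hypothetical placeholders (the actual statements and their grouping are those listed in the Appendix).

import pandas as pd

# Hypothetical grouping of the 16 statements into the four embodiment components;
# the real assignment is the one given in the Appendix.
components = {
    "ownership": ["q1", "q2", "q3", "q4", "q5"],
    "location":  ["q6", "q7", "q8", "q9"],
    "loss":      ["q10", "q11", "q12", "q13"],
    "agency":    ["q14", "q15", "q16"],
}

# One row per participant and condition; columns q1..q16 hold Likert ratings (-3 to +3).
ratings = pd.read_csv("rhi_questionnaire.csv")  # hypothetical file name

# Mean rating per component, per participant and condition (synchronous vs. asynchronous).
scores = ratings[["subject", "stimulation"]].copy()
for name, items in components.items():
    scores[name] = ratings[items].mean(axis=1)

print(scores.groupby("stimulation")[list(components)].mean())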

Procedure

Participants sat in front of a table. At the beginning of each block, the experimenter placed the participant’s left hand at a fixed point inside a frame, the top side of which was covered by one-way and two-way mirrors. The two-way mirror was used to make the viewed object appear (during stimulation) and disappear (during judgment). At the beginning of each block, both the participant’s left hand and the viewed object were out of sight. A pre-test baseline estimate of finger position was obtained prior to stimulation. Participants saw a ruler reflected in the mirror. The ruler was positioned 18 cm above the mirror, so as to appear at the same gaze depth as the rubber hand. Participants were asked, “Where is your index finger?” and in response, they verbally reported a number on the ruler. They were instructed to judge the position of their finger by projecting a parasagittal line from the centre of their fingertip to the ruler. During the judgments, there was no tactile stimulation, and the lights under the two-way mirror were switched off to make the object invisible, leaving only the ruler visible. After the judgment, the ruler was removed, and the lights under the two-way mirror were turned on to make the object appear, aligned with the participant’s midline. Participants viewed the object in the same depth plane as their own hand. The distance between the real hand and the viewed object was 17.5 cm. Stimulation was delivered manually by the experimenter with the use of two identical paintbrushes. In the synchronous visuo-tactile stimulation blocks, the experimenter stroked both the participant’s hand and the viewed object with the two paintbrushes at the same time and in the same locations. In the asynchronous visuo-tactile stimulation blocks, the experimenter stroked the participant’s hand first, while the viewed object was stroked with a latency of 500–1,000 ms in the corresponding location. Each stimulation period lasted 240 s and was timed with a stopwatch.

After the stimulation period, the lights were turned off. The ruler was always presented with a random offset to ensure that participants judged finger position anew on each trial and that they could not simply repeat previous responses. Participants were asked, “Where is your index finger?” After their answer, the ruler was removed, and they were asked to move their left hand and have a rest for a few moments. Following the rest period, their left hand was again passively placed at the same predetermined point, under the frame and out of sight. The same process was followed for each block. Each participant completed two synchronous, followed by two asynchronous conditions, or vice versa, resulting in a total of four blocks. The order of presentation was counterbalanced across participants. After the first block of synchronous and asynchronous stimulation, participants were asked to fill in the RHI questionnaire.

Participants

A total of 40 healthy naive participants (mean age 20.5, 24 females), with normal or corrected-to-normal vision, took part in this study after giving their informed written consent. Eight participants were randomly assigned to each of the five visual stimulus conditions. The study was approved by the Departmental Ethics Committee, Department of Psychology, Royal Holloway, University of London.

Results

Proprioceptive drift

We obtained a baseline pre-test judgment about the felt location of the participant’s index finger prior to visuo-tactile stimulation and a post-test judgment after stimulation. The pre-test judgments were subtracted from the post-test judgments. The resulting values show the change in the perceived position of the hand between the start and end of the stimulation period, across conditions. We use the term proprioceptive drift to refer to this quantity (see Fig. 2). A positive proprioceptive drift represents a mislocalization of the participant’s hand toward the viewed object, while a negative drift represents a mislocalization of the participant’s hand away from the viewed object.
Fig. 2

Mean proprioceptive drifts towards or away from the viewed object in each condition. Error bars indicate standard errors. Zero represents the felt position of the participant’s hand prior to stimulation
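For illustration, the drift measure defined above can be computed as in the following minimal Python sketch; the numbers and the sign convention are hypothetical examples, not data from the study.

import numpy as np

# Hypothetical pre- and post-stimulation judgments (ruler readings in cm)
# for one participant, one value per block.
pre_test = np.array([30.0, 29.5, 30.2, 29.8])
post_test = np.array([32.5, 30.0, 29.6, 30.1])

# Proprioceptive drift: post-test judgment minus pre-test judgment.
drift = post_test - pre_test

# Sign convention assumed for this example: the viewed object lies in the positive
# direction along the ruler, so a positive drift indicates mislocalization towards
# the object and a negative drift indicates mislocalization away from it.
print(drift)          # approximately [ 2.5  0.5 -0.6  0.3]
print(drift.mean())   # mean drift for this participant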

The mean proprioceptive drifts per subject for each condition were submitted to a 2 × 5 mixed ANOVA. The main effect of the within-subjects factor of type of stimulation (i.e. synchronous vs. asynchronous) was significant (F(1,35) = 18.11, p < 0.05). The main effect of the between-subjects factor of viewed object (i.e. stimuli 1–5) was significant (F(4,35) = 4.56, p < 0.05). The interaction between the two factors was significant (F(4,35) = 2.89, p < 0.05). Post hoc comparisons with Bonferroni correction showed that the mean differences between stimuli 5 and 4, 5 and 3, and 5 and 1 were significant (p < 0.05), whereas the mean difference between stimuli 5 and 2 was not (p > 0.05). None of the other comparisons between stimuli 1, 2, 3 and 4 reached significance.
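A 2 × 5 mixed ANOVA of this kind could be run along the following lines; this is a sketch only, assuming a long-format table with one mean drift per participant and condition and the pingouin package, and the file and column names are illustrative rather than those of the original analysis.

import pandas as pd
import pingouin as pg

# Long-format data: one row per participant and stimulation condition, with columns
# (assumed for this example) subject, stimulus (1-5, between-subjects),
# stimulation ('sync' or 'async', within-subjects) and drift (cm).
df = pd.read_csv("drift_data.csv")  # hypothetical file name

aov = pg.mixed_anova(data=df, dv="drift",
                     within="stimulation", subject="subject",
                     between="stimulus")
print(aov[["Source", "F", "p-unc"]])  # two main effects and their interaction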

We then used simple effects analysis (Howell 1997) to compare the proprioceptive drift between synchronous and asynchronous conditions for each visual stimulus condition. Differences in proprioceptive drifts between synchronous and asynchronous visuo-tactile stimulation were significant for stimulus 5, that is, when subjects saw the rubber hand (t(7) = 2.7, p < 0.05, two-tailed), for stimulus 3 (t(7) = 4.32, p < 0.05, two-tailed), and for stimulus 1 (t(7) = 2.4, p < 0.05), whereas differences for stimulus 4 (t(7) = 1.4, p > 0.05) and stimulus 2 (t(7) = −0.154, p > 0.05) were not significant. Importantly, only the proprioceptive drift after the synchronous condition when looking at stimulus 5 (i.e. the rubber hand) was significantly different from zero, that is, from the felt position of the hand prior to stimulation (t(7) = 3.67, p < 0.01, Bonferroni correction). None of the other comparisons of proprioceptive drift against zero reached significance (t(7) = 1.84, p > 0.01 for stimulus 1 after synchronous stimulation; t(7) = −0.97, p > 0.01 for stimulus 1 after asynchronous stimulation; t(7) = 2.55, p > 0.01 for stimulus 2 after synchronous stimulation; t(7) = 1.85, p > 0.01 for stimulus 2 after asynchronous stimulation; t(7) = 2.5, p > 0.01 for stimulus 3 after synchronous stimulation; t(7) = −0.63, p > 0.01 for stimulus 3 after asynchronous stimulation; t(7) = 1.01, p > 0.01 for stimulus 4 after synchronous stimulation; t(7) = −0.95, p > 0.01 for stimulus 4 after asynchronous stimulation; t(7) = 1.48, p > 0.01 for stimulus 5 after asynchronous stimulation).
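The simple-effects comparisons and the tests against zero follow the same pattern for every stimulus group; the sketch below shows the two test types with scipy, under the same assumed long-format table as in the previous sketch (column names remain hypothetical).

import pandas as pd
from scipy import stats

df = pd.read_csv("drift_data.csv")  # same hypothetical long-format table as above
n_tests = 5  # one test per stimulus group, used for a Bonferroni-corrected alpha

for stim, group in df.groupby("stimulus"):
    group = group.sort_values("subject")  # keep sync/async values paired by subject
    sync = group.loc[group["stimulation"] == "sync", "drift"].to_numpy()
    async_ = group.loc[group["stimulation"] == "async", "drift"].to_numpy()

    # Simple effect: paired t-test of synchronous vs. asynchronous drift (n = 8 per group).
    t_pair, p_pair = stats.ttest_rel(sync, async_)

    # Drift after synchronous stimulation against zero (the felt hand position before
    # stimulation), evaluated against a Bonferroni-corrected alpha of 0.05 / n_tests.
    t_zero, p_zero = stats.ttest_1samp(sync, 0.0)

    print(stim, round(t_pair, 2), round(p_pair, 3),
          round(t_zero, 2), p_zero < 0.05 / n_tests)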

Introspective evidence

Participants answered a total of 16 questions for each synchronous and asynchronous condition on a −3 to +3 scale, referring to four different components of the experience of embodiment during the RHI paradigm: (a) ownership, (b) location, (c) loss of one’s hand, and (d) agency.

Ownership questions

The mean ratings (see Fig. 3a) for the ownership questions per condition were submitted to a 2 × 5 mixed ANOVA. The main effect of stimulation was significant (F(1,35) = 24, p < 0.05). The main effect of viewed stimulus was not significant (F(4,35) = 1.84, p > 0.05). The interaction between the two factors was significant (F(4,35) = 5.31, p < 0.05). Post hoc comparisons with Bonferroni correction showed that the mean differences between stimuli 5 and 4, 5 and 3, and 5 and 1 were significant (p < 0.05, two-tailed), whereas the mean difference between stimuli 5 and 2 was not (p > 0.05). None of the other comparisons between stimuli 1, 2, 3 and 4 reached significance.
Fig. 3

Mean ratings for the ownership statements (a), for the location statements (b), for the loss of one’s hand statements (c) and for the agency statements (d). Error bars indicate standard errors

Simple effects analysis (Howell 1997) was used to compare the ratings for the ownership questions between synchronous and asynchronous conditions for each visual stimulus condition. Differences in the ratings for the ownership questions between synchronous and asynchronous conditions were significant for stimulus 5 (t(7) = 3.19, p < 0.05, two-tailed), stimulus 4 (t(7) = 3, p < 0.05, two-tailed), but not for stimulus 3 (t(7) = 0.83, p > 0.05), stimulus 2 (t(7) = 1.55, p > 0.05), and stimulus 1 (t(7) = 2.29, p > 0.05).

In spite of significant differences between synchronous and asynchronous stimulation for stimuli other than the rubber hand, only the rubber hand produced positive ratings for the experience of ownership after synchronous stimulation, which were also significantly different from those after asynchronous stimulation. Because we observed positive ratings only for stimulus 5 after synchronous stimulation, we statistically examined whether these positive ratings (i.e. agreement) were significantly different from the “neither agree/disagree” response (i.e. point zero on the Likert scale). The ownership ratings after the synchronous condition were significantly higher than zero (t(7) = 4.45, p < 0.01, Bonferroni correction) only for stimulus 5 (i.e. the rubber hand), indicating that for this condition only, participants showed significant agreement with the ownership statements, given that point zero on the Likert scale represents a “neither agree, nor disagree” response. None of the ownership ratings after synchronous conditions for the other levels of viewed stimulus were significantly higher than zero.

Location questions

The mean ratings for the location questions per condition (see Fig. 3b) were submitted to a 2 × 5 mixed ANOVA. The main effect of stimulation was significant (F(1,35) = 39.35, p < 0.05). The main effect of viewed stimulus was significant (F(4,35) = 2.9, p < 0.05). The interaction between the two factors was not significant (F(4,35) = 0.99, p > 0.05). Post hoc comparisons with Bonferroni correction showed that the mean difference between stimuli 5 and 1 was significant (p < 0.05). None of the other comparisons reached significance.

Simple effects analysis (Howell 1997) was used to compare the ratings for the location questions between synchronous and asynchronous conditions for each visual stimulus condition. Differences in the ratings for the location questions between synchronous and asynchronous conditions were significant for stimulus 5 (t(7) = 3.37, p < 0.05, two-tailed), stimulus 4 (t(7) = 3.14, p < 0.05, two-tailed), stimulus 2 (t(7) = 3.39, p < 0.05, two-tailed), and stimulus 1 (t(7) = 2.96, p < 0.05, two-tailed), but not for stimulus 3 (t(7) = 1.59, p > 0.05). However, only the location ratings after the synchronous condition when looking at stimulus 5 (i.e. the rubber hand) were significantly higher than zero (t(7) = 12.77, p < 0.01, Bonferroni correction), indicating significant agreement with these statements. None of the location ratings after synchronous conditions for the other levels of viewed stimulus were significantly higher than zero.

Loss of one’s hand questions

The mean ratings for the loss of one’s hand questions per condition (see Fig. 3c) were submitted to a 2 × 5 mixed ANOVA. The main effect of stimulation was significant (F(1,35) = 13.16, p < 0.05). The main effect of viewed stimulus was not significant (F(4,35) = 0.32, p > 0.05). The interaction between the two factors was not significant (F(4,35) = 1.73, p > 0.05). None of the post hoc comparisons between different stimulus levels reached significance.

Simple effects analysis (Howell 1997) was used to compare the ratings for the loss of one’s hand questions between synchronous and asynchronous conditions for each visual stimulus condition. Differences in the ratings for the loss of one’s hand questions between synchronous and asynchronous conditions were not significant for any stimulus level (all t(7) < 2.29, p > 0.05).

Agency questions

The mean ratings for the agency questions per condition (see Fig. 3d) were submitted to a 2 × 5 mixed ANOVA. The main effect of stimulation was significant (F(1,35) = 19.19, p < 0.05). The main effect of viewed stimulus was not significant (F(4,35) = 2.35, p > 0.05). The interaction between the two factors was not significant (F(4,35) = 2.23, p > 0.05). Post hoc comparisons with Bonferroni correction showed that the mean difference between stimuli 5 and 1 was significant (p < 0.05, two-tailed). None of the other comparisons reached significance.

Simple effects analysis (Howell 1997) was used to compare the ratings for the agency questions between synchronous and asynchronous conditions for each visual stimulus condition. Differences in the ratings for the agency questions between synchronous and asynchronous conditions were significant for stimulus 5 (t(7) = 4.40, p < 0.05, two-tailed) and stimulus 4 (t(7) = 2.37, p < 0.05, two-tailed), but not for stimuli 3, 2 and 1 (all t(7) < 2.29, p > 0.05).

Discussion

The present experiment investigated the relation between the visual form of the external object and the experience of ownership, by systematically varying the appearance of the external object. Consistent introspective and behavioural results suggest that participants experienced a sense of ownership when the object was a realistic prosthetic rubber hand, whereas objects that shared some structural features with hands, but not their overall form, did not elicit an illusion of ownership. These findings corroborate previous results and suggest that only objects that have the same visual form as body parts can be experienced as part of one’s body. The ownership questions clearly show that participants experienced a sense of ownership only when they saw the rubber hand, whereas a “wooden object” that had hand-like features such as fingers and a wrist did not elicit a sense of ownership. Similarly, the pattern of proprioceptive drifts shows that the rubber hand elicited significant changes in the felt location of the participant’s hand between synchronous and asynchronous visuo-tactile stimulation conditions, and importantly, this was the only condition in which changes after synchronous stimulation were significantly different from the felt location of the participant’s hand prior to stimulation. While other conditions elicited changes in proprioception that differed significantly between synchronous and asynchronous stimulation, none of these other conditions showed significant differences in the felt location of the participant’s hand before and after stimulation. Overall, the analysis shows that not all objects can be experienced as part of one’s body, as the “bottom-up” hypothesis would predict. Instead, the viewed object must fit with a reference model of the body that contains structural information about body parts (Tsakiris et al. 2008) and bodies in general (Lenggenhager et al. 2007). In addition, it was only when the viewed object matched this reference description that sensations of touch referral were observed on the basis of introspective evidence (see the analysis of the location statements).

One potential confound in the present study is the difference in texture between the wooden stimuli, the rubber hand and the participant’s own hand. This difference may have resulted in significant differences in the perceived sensory quality of the tactile stimulation delivered on the viewed objects, and as a result it could have affected the experience of ownership. Two recent studies suggest that this difference by itself is unlikely to prevent the RHI for neutral objects. Haans et al. (2008) assessed the strength of the RHI for a viewed object that could have a hand shape or not, and a natural-skin texture or not. The results showed that a hand-shaped object induced a stronger RHI, even when both the hand-shaped and the neutral object had the same skin-like texture. Importantly, the difference between the hand-like object and the neutral object was significant even when the hand-like object did not have a skin-like texture whereas the neutral object did. Similarly, Schütz-Bosbach et al. (2009) asked participants to watch a rubber hand being stroked by either a piece of soft or rough fabric while participants received synchronous or asynchronous tactile stimulation that was either congruent or incongruent with respect to the sensory quality of the material touching the rubber hand (e.g. soft or rough fabric). Results showed that discrepancies in the sensory quality of visual and tactile stimulation did not affect the strength of the RHI as measured behaviourally (e.g. proprioceptive drift) and introspectively (e.g. RHI questionnaire). Therefore, differences in the surface texture between the external object and the participant’s hand, or differences in the sensory quality of tactile stimulations, do not seem to affect the experience of ownership, and therefore they cannot sufficiently account for the observed differences.

Could the present results be accounted for by differences in spatial correspondence between felt and seen touch? During visuo-tactile stimulation, we stroked the viewed object and the participant’s hand in spatially corresponding locations. For certain stimuli that lacked any resemblance to a hand (e.g. stimulus 1), the spatial correspondence of the tactile stimulation would be more difficult to achieve than for stimuli that had all the features of a hand (e.g. stimulus 4). However, for each stimulus level, we delivered stimulation in the corresponding spatial location; for example, if we stimulated the little finger of the participant’s hand, we stimulated the left side of the wooden block, and if we stimulated the participant’s middle finger from the knuckle to the fingertip, we stimulated along the midline of the wooden block in the same direction and over the same length. For stimuli that included finger-like features, this correspondence was more easily achieved, and perhaps more easily perceived by the participants. Importantly, if the precision of spatial correspondence modulated the effectiveness of the RHI, then we would expect to see a proportional increase of the RHI as the wooden stimulus became more like a real hand, but this hypothesis is not supported by the present introspective and behavioural results.

The results of the present study, taken together with previous studies on the RHI, suggest that multisensory integration by itself is not sufficient for body ownership. Instead, other factors, such as the visual form congruency between the viewed object and the felt body part (Tsakiris and Haggard 2005; Haans et al. 2008; see also Holmes et al. 2006), the anatomical congruency between the viewed and felt body part (Tsakiris and Haggard 2005; Graziano et al. 2000; Pavani et al. 2000), the postural congruency between the viewed and felt body part (Austen et al. 2004; Tsakiris and Haggard 2005; Costantini and Haggard 2007; Ehrsson et al. 2004; Pavani et al. 2000), the volumetric congruency between the viewed and the felt body part (Pavani and Zampini 2007), and the spatial relation between the viewed and felt body part (Lloyd 2007), modulate the induction of the RHI and the experience of body ownership. Tsakiris (2009) suggested that during the RHI the visual form of the external object is compared against a body model that contains a reference description of the visual, anatomical and structural properties of one’s own body (Tsakiris et al. 2008; Costantini and Haggard 2007; Tsakiris and Haggard 2005). This critical comparison tests the viewed object’s fitness for incorporation. Objects that do not pass this test will not be experienced as part of one’s body even if visuo-tactile stimulation is synchronous (Haans et al. 2008; Tsakiris et al. 2008; Tsakiris and Haggard 2005; but see Armel and Ramachandran 2003). Features such as skin colour or skin texture do not enter into this comparison (see also Longo et al. 2009), and this is a further argument why this body model should not be equated with a conscious body image or with a primary sensory representation. In addition, the kind of reference model of the body postulated here is not the same as higher-level explicit knowledge regarding the appearance of the body. In the context of the present experiment, we specifically used the type of viewed object as a between-subjects factor to prevent potential cognitive biases about the shape or realism of the object from influencing the participant’s judgment from one visual stimulus condition to the other. Following Graziano and Botvinick (2001), the body model postulated here encompasses an implicit knowledge structure that encodes the body’s form and the constraints on how the body’s parts can be configured.

The brain could use the body model to decide on the compatibility and eventual incorporability of the viewed object. The present experimental manipulation relates to this comparison between objects that may or may not be experienced as part of one’s body. This decision may be based on a process that monitors whether sensations, events and objects should be attributed to one’s body or not. The behavioural (Tsakiris and Haggard 2005) and electrophysiological (Press et al. 2008) data suggest that the process of filtering what may or may not become part of one’s body is not the same as the process of multisensory integration that drives the RHI (for a discussion see Tsakiris 2009). Tsakiris et al. (2008) hypothesized that disrupting activity in the right temporo-parietal junction (rTPJ) would impair the test-for-fit process that underpins the distinction between corporeal and non-corporeal objects on the basis of visuo-tactile evidence. Single-pulse TMS delivered over rTPJ 350 ms after synchronous visuo-tactile stimulation between either the participant’s own hand and a rubber hand, or the participant’s own hand and a neutral object, reduced the extent to which the rubber hand was incorporated into the mental representation of one’s own body, and it also increased the incorporation of a neutral object, as measured by the proprioceptive drift towards or away from the viewed object. An object (i.e. a rubber hand) that would normally have been perceived as part of the subject’s own body was no longer significantly distinguished from a clearly neutral object, suggesting that the disruption of neural activity over rTPJ blocked the contribution of the body model in the assimilation of current sensory input, making the discrimination between what may or may not be part of one’s body ambiguous.

Even though the present study did not directly investigate the neural basis of the distinction between the different objects used, it provides evidence about the type of information used by the body model. Interestingly, the comparison between the rubber hand (i.e. stimulus 5) and the wooden hand (i.e. stimulus 4) shows that the mere presence of body-like features in the viewed object is not by itself sufficient for inducing a sense of ownership when multisensory stimulation is synchronous. Thus, in addition to structural descriptions of the body, the body model seems to encode more global visual representations that refer to the overall form and configuration of the body. Other authors have suggested the existence of a stored, rather than stimulus-evoked, body structural description that would contain representations of (a) the shape and contours of the human body, (b) a detailed plan of the body surface, and (c) the location of body parts, the boundaries between them, and their internal part-relations (De Vignemont et al. 2005; Graziano and Botvinick 2001; Schwoebel and Coslett 2005; Sirigu et al. 1991; Tsakiris and Fotopoulou 2008). It is important to note that the role of the body model would be critical for instances of incorporation, but not for excorporation (e.g. tool-use; see Holmes and Spence 2006; de Preester and Tsakiris 2009 for a discussion of the differences between tool-use and incorporation).

Overall, the findings of the present study do not lend support to the “bottom-up” hypothesis (see Armel and Ramachandran 2003) or to the proportional hypothesis. The experience of ownership is induced only when the form of the object-to-be-incorporated fits with the structural representation of the stimulated body part. In addition, the analysis of the introspective evidence suggests that the uniqueness of corporeal-like objects in eliciting a sense of ownership is not simply a quantitative difference in the magnitude of similar experiences; instead, there are important qualitative differences in the kinds of embodied experiences that corporeal and non-corporeal objects elicit. The test-for-fit between sensory input and the body model can provide a criterion for distinguishing between ownership and disownership. The proposed distinction between corporeal and non-corporeal objects seems to be of relevance to neurological cases of impairments in body awareness. Daprati et al. (2000) reported a case of a 50-year-old man who, after a right thalamic-temporo-parietal lesion, developed hemispatial neglect and somatoparaphrenia. During a self-recognition task, the patient often produced confabulatory responses when seeing a hand, saying that it was a non-corporeal object (e.g. a needle, a crucifix). Vallar and Ronchi (2009) point out that, on the basis of the case studies available to date, one main feature of somatoparaphrenia may be a blurred distinction between corporeal and extracorporeal objects. This observation points to the critical role of the body model in maintaining a coherent sense of one’s body.

Acknowledgments

Dr. Tsakiris and Dr. Fotopoulou were supported by the “European Platform for Life Sciences, Mind Sciences, and the Humanities” grant by the Volkswagen Stiftung for the “Body-Project: interdisciplinary investigations on bodily experiences”. The authors would like to thank Philip Roberts for developing the stimuli.

Copyright information

© Springer-Verlag 2009