Alleviating the ‘crossed-hands’ deficit by seeing uncrossed rubber hands
Azañón, E. & Soto-Faraco, S. Exp Brain Res (2007) 182: 537. doi:10.1007/s00221-007-1011-3
Localizing and reacting to tactile events on our skin requires the coordination between primary somatotopic projections and an external representation of space. Previous research has attributed an important role to early visual experience in shaping this mapping. Here, we addressed the role played by immediately available visual information about body posture. We asked participants to determine the temporal order of two successive tactile events delivered to the hands while they adopted a crossed or an uncrossed-hands posture. As previously found, hand-crossing led to a dramatic impairment in tactile localization, a phenomenon attributed to a mismatch between somatotopic and externally-based frames of reference. In the present study, however, participants watched a pair of rubber hands that were placed either in a crossed or uncrossed posture (congruent or incongruent with the posture of their own hands). The results showed that the crossed-hands deficit can be significantly ameliorated by the sight of uncrossed rubber hands (Experiment 1). Moreover, this visual modulation seemed to depend critically on the degree to which the visual information about the rubber hands can be attributed to one’s own actions, in a process revealing short-term adaptation (Experiment 2).
Keywords: Multisensory integration · Rubber-hand illusion · Somatosensation · Temporal order judgements · Vision
Abbreviations
JND Just noticeable difference
PSS Point of subjective simultaneity
SI Primary somatosensory cortex
SOA Stimulus onset asynchrony
TOJ Temporal order judgement
Locating external objects that contact our skin and eventually initiating an appropriate behavioural response implies the engagement of different sensory systems using disparate spatial frames of reference. For instance, in order to maintain constancy regarding the perceived location of objects across changes in posture, several sources of sensory information about the spatial position of the tactile stimulus on the skin (somesthesia) and about the posture of the body (arising from proprioception, vision and even audition) must be integrated. If successful, this integration leads to a representation of the stimulus in an abstract spatial frame of reference. In other words, a representation that is beyond the anatomical projections found in primary somatosensory cortex (SI; Penfield and Rasmussen 1950; Sutherland 2006) and independent of the actual skin area being stimulated (e.g. Aglioti et al. 1999; Craig 2003; Driver and Grossenbacher 1996; Lakatos and Shepard 1997; Moscovitch and Behrmann 1994; Rinker and Craig 1994; Schicke and Röder 2006; Smania and Aglioti 1995; Soto-Faraco et al. 2004). The literature contains many curious demonstrations of tactile mislocalization that arise, precisely, when a strong incongruence between these reference frames is introduced (such as in the Japanese or the Aristotle illusions; Benedetti 1985, 1988; Henri 1898; Ponzo 1910; Zampini et al. 2005b).
The crossed-hands effect
Recently, the processes underlying the build-up of the frame of reference for tactile representations have been addressed in healthy humans using a variety of temporal order judgement (TOJ) tasks (e.g. Shore et al. 2002, 2005; Röder et al. 2004; Wada et al. 2004; Yamamoto and Kitazawa 2001). For example, Yamamoto and Kitazawa (2001) measured the minimal temporal interval required for the accurate perception of the order of two tactile stimuli delivered to analogous fingers on opposite hands. The results showed that observers’ accuracy in ordering the two stimuli in time (i.e. simply responding with the finger that had been stimulated first) was dramatically reduced when their arms were placed in a crossed posture, as compared to an uncrossed position. In fact, the time interval yielding 84% accuracy was 300 ms in the crossed-hands condition, whereas it was only about 70 ms in the (normal) uncrossed posture. Furthermore, in the crossed position, a number of participants systematically reversed the order of responses (more often than expected by mere chance) when the two tactile events were presented at short intervals (<300 ms). The results of Yamamoto and Kitazawa support the idea that tactile stimuli delivered at the hands are referred to external spatial locations, suggesting the existence of a rapid process of remapping of the initial somatotopically-based representations onto a more abstract (perhaps allocentric) frame of reference (see also Shore et al. 2005; Soto-Faraco et al. 2004). If tactile representations were encoded solely in terms of the sensory receptor surface (i.e. a somatotopic frame of reference), crossing the hands over the midline should not affect judgements of temporal order, as stimulus encoding and motor response output would be independent of hand location in space.
Thus, the crossed-hands deficit has been interpreted as originating from a failure in the process whereby the location of tactile stimuli is mapped onto an external frame of reference (e.g. Shore et al. 2002; Yamamoto and Kitazawa 2001). Indeed, Yamamoto and Kitazawa proposed that there is a default tactile localization system which works under the assumption that the limbs are placed in an uncrossed (canonical1) position. The remapping would be an active process triggered on demand (e.g. when the hands are in a crossed posture), resulting in a time cost (about 300 ms, proposed on the basis of the occurrence of reversal errors at shorter intervals). Hence, when the interval between two tactile events is shorter than the time needed to complete the remapping, the second stimulus arrives before the location of the first stimulus has been consolidated and, as a consequence, the crossed-hands deficit arises.
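The time-cost account described above can be caricatured in a few lines of code. This is a deliberately simplified, deterministic sketch assuming the ~300 ms remapping constant proposed by Yamamoto and Kitazawa; the actual data show graded, probabilistic reversals rather than an all-or-none rule:

```python
def predicted_toj(soa_ms, hands_crossed, remap_ms=300):
    """Toy version of the 'remapping time cost' account of the
    crossed-hands deficit. With the hands crossed, remapping the first
    touch into external coordinates is assumed to take `remap_ms`; if
    the second touch arrives before remapping completes, the judged
    order can be reversed."""
    if hands_crossed and abs(soa_ms) < remap_ms:
        return "reversed"  # second stimulus arrives mid-remapping
    return "correct"
```

On this caricature, uncrossed judgements are unaffected at any SOA, while crossed judgements fail only below the remapping constant, matching the qualitative pattern of reversal errors at intervals shorter than ~300 ms.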
The role of the visual system
The abstract frame of reference where tactile events are represented has been proposed to be strongly based on visual information because of the dominant role of vision in spatial sensory-motor coordination (e.g. Batista et al. 1999; Pouget et al. 2002; Röder et al. 2004). Following up on this argument, Röder and collaborators (2004) recently found that, in stark contrast to what occurs with sighted people, congenitally blind participants’ performance in the bimanual tactile TOJ task was completely unaffected by crossing the hands over the midline. Interestingly, the performance of a group of developmentally (late) blind participants who had been blind for variable periods of time (in some cases as much as 40 years) was indistinguishable from that of sighted participants; that is, their TOJs were rather inaccurate when their hands were crossed. Röder et al. suggested that a crossmodal link between vision and touch (based on a visual frame of reference) is established early in development and, once formed, remains for life. A similar line of argument can be used to explain the reduction in the crossed-hands deficit found when people cross their hands behind their backs (where no prior visual experience leads to the configuration of visual–spatial representations; Kóbor et al. 2006). These patterns of results clearly suggest that early visual experience shapes the way in which touch will be processed (e.g. Tipper et al. 2001).
Given that visually based representations formed in the course of development may be in use during tactile processing (as shown by the late blind results of Röder et al. 2004), the construction of a frame of reference for touch can also depend on the visual input that is immediately available to the observer. In fact, the role of visual input in tactile localization can be assessed by introducing a conflict between visible information about body posture and proprioceptive input (e.g. Gallace and Spence 2005; Holmes and Spence 2005; Maravita et al. 2002c; Soto-Faraco et al. 2004). For example, in the rubber-hand illusion, observers often refer tactile sensations to a fake rubber limb placed in a plausible posture (i.e. according to the posture adopted by the subject; Botvinick and Cohen 1998; Pavani et al. 2000). Furthermore, the improvements in tactile TOJ accuracy and in tactile selective attention often observed when the spatial distance between the hands is increased (Driver and Grossenbacher 1996; Lakatos and Shepard 1997; Shore et al. 2005) can be elicited even when the separation is only apparent, introduced visually while the hands are physically kept at a constant distance (e.g. Gallace and Spence 2005; Soto-Faraco et al. 2004). In the present study we will address the role of sight during the representation of tactile stimuli in space, as indexed by the crossed-hands effect, using both real and rubber hands.
Materials and methods
Nineteen undergraduate students from the University of Barcelona took part in the study in exchange for course credit (mean age 20 years; SD = 2.09). All had normal or corrected-to-normal vision, were right-handed, and reported normal tactile sensitivity. Given previous reports that professional pianists might have better tactile temporal resolution abilities than non-pianists under crossed and uncrossed-hands postures (Kóbor et al. 2006), we ensured that none of our participants had previous experience playing piano or other musical instruments. The experiment was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. All of the participants gave their informed consent prior to their inclusion in the study and were naïve as to the purpose of the experiment.
A two-level wooden box was custom built for the experiment (see Fig. 1). Participants were asked to wear rubber kitchen gloves and to place their hands into the box (lower level), occluded from view, with each index finger resting on a response-button. Additionally, two rubber hands (made of stuffed kitchen gloves) were placed on top of the box (upper level), with the index fingers glued to two response-buttons which were identical to—and placed just above—those used by participants. A green light-emitting diode (LED) was placed just beneath each of the rubber hands’ index fingers (on the visible response-button). The rubber hands were visible to the participants up to the forearm (the proximal end of the rubber gloves was covered with a black cloth extending to the participants’ necks). The box incorporated a system of pulleys and strings connecting the response-buttons at the top and the bottom levels of the box so that the index fingers of the rubber hands moved (i.e. ‘pushed’ the button) every time the participant moved their own finger in order to press the button. The index fingers of the rubber and real hands were connected in an anatomically congruent fashion throughout the experiment. An additional orange LED placed centrally between the two visible (fake) response-buttons was used as a fixation point.
Tactile stimulation was delivered to the dorsal surface of the middle phalanx of each ring finger. It consisted of a 7 ms tap with supra-threshold intensity delivered through 8 mm diameter tappers (Solenoid Tactile Tapper, M&E Solve, UK) driven by a 9 volt square wave. The distance between the two ring fingers was kept constant at 20 cm throughout the different experimental conditions. White noise was presented continuously from a central loudspeaker placed behind the wooden box in order to mask any noise resulting from the operation of the tactile stimulators. A PC running EXPE6 software (Pallier et al. 1997) was used to control stimulus presentation and register button-press responses through a custom-made relay box connected to the parallel port.
Design and procedure
Participants were tested individually in a dimly lit, sound-attenuated, room. They were asked to sit and rest their index fingers (palms down) on the response-buttons inside the wooden box, adopting either a crossed or an uncrossed arms posture. In the experimental blocks where the hands were crossed, the left arm was placed over the right arm in half of the participants, and vice-versa in the other half.
In each trial, two successive tactile stimuli were delivered, one to each ring finger, at varying stimulus onset asynchronies (SOAs) using the method of constant stimuli (−960, −480, −240, −120, −60, −30, −10, 10, 30, 60, 120, 240, 480 and 960 ms, where negative values indicate that the left hand was stimulated earlier than the right, and vice versa for positive values). Participants were asked to judge the order of the stimuli and to press the button of the hand that had been stimulated first (or second; task counterbalanced across participants) as accurately as possible. Each trial began with the illumination of the fixation LED for 500 ms. After an interval randomly varying from 500 to 1,000 ms, the two tappers were activated for 7 ms each, with an interval determined by one of the predefined SOAs. After the response, an inter-trial interval of 1,500 ms led to the next trial.
Participants completed four experimental blocks in which the posture of their own (real) hands (crossed or uncrossed) and the posture of the rubber hands (crossed or uncrossed) were combined factorially (order counterbalanced across subjects using a Latin square design). Thus, there were two congruent conditions, in which both real and rubber hands were uncrossed (II) or both were crossed (XX), and two incongruent conditions: real hands uncrossed and rubber hands crossed (IX), or real hands crossed and rubber hands uncrossed (XI). No feedback was given during the experiment, and observers were reminded to maintain central fixation throughout. A practice block containing a reduced set of SOAs (i.e. −750, −450, −300, −200, −150, −100, 100, 150, 200, 300, 450 and 750 ms) was run before each experimental block, with the corresponding arm posture. Practice trials provided feedback after mistakes (the fixation LED flickered).
Prior to each practice block, participants performed a visual task in order to acquaint them with the real-to-rubber hand mapping (Maravita et al. 2002c). This task consisted of visual trials in which one of the two green LEDs placed beneath the rubber-hand fingers was lit, and the observer’s task was to “push” the visible button corresponding to the LED (no tactile stimulus was presented). The real-to-rubber hand mapping was always anatomical so that visual and motor frames of reference coincided. Thus, if participants’ real hands were uncrossed but the visible rubber hands were crossed, participants had to move their right hand (placed in the right hemispace) in order to “push” the button beneath the right rubber hand (placed on the left). Once the correct response-button had been pressed, the LED was switched off and the next trial began. The participants ran a block of 72 visual trials prior to each of the 4 blocks (one per each combination of hand postures). Some additional visual trials were interspersed (mixed with the tactile TOJ trials) during the practice and the experimental blocks. Therefore, each experimental block contained 160 trials (140 tactile trials and 20 visual) divided into 10 epochs in which all the 14 different SOAs were presented in random order (for a total of 640 trials in the experiment). The practice block consisted of 40 trials (36 tactile and 4 visual).
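The block structure just described (10 epochs of the 14 SOAs in random order, with 20 visual trials interspersed among 140 tactile ones) could be generated along the following lines. This is an illustrative sketch of the design, not the authors’ EXPE6 script:

```python
import random

# SOAs in ms; negative values mean the left hand is stimulated first
SOAS_MS = [-960, -480, -240, -120, -60, -30, -10,
           10, 30, 60, 120, 240, 480, 960]

def build_block(n_epochs=10, n_visual=20, seed=None):
    """Build one experimental block: n_epochs epochs, each containing
    every SOA once in random order, with n_visual LED trials
    interspersed among the tactile TOJ trials."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_epochs):
        epoch = [("tactile", soa) for soa in SOAS_MS]
        rng.shuffle(epoch)          # randomize SOA order within the epoch
        trials.extend(epoch)
    for _ in range(n_visual):       # intersperse the visual (LED) trials
        side = rng.choice(["left", "right"])
        trials.insert(rng.randrange(len(trials) + 1), ("visual", side))
    return trials
```

A call with the defaults yields the 160-trial block (140 tactile plus 20 visual) of Experiment 1; the blocks of Experiment 2, which contained no visual trials, correspond to `n_visual=0`.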
Results and discussion
Regarding the conditions where the participants’ real hands were crossed (see Fig. 2b), when the rubber hands were also placed in a crossed posture (XX), the time constant was 346 ms (CI = 230 to 457 ms). However, when the rubber hands were placed in an uncrossed posture (XI), the time constant decreased significantly (σ = 189 ms; CI = 146 to 231 ms; P = 0.002; see Fig. 2c). The PSS was not significantly different between the two conditions (55 ms, CI = −14 to 125 ms; and 0 ms, CI = −27 to 26 ms; P = 0.071; respectively). The lower asymptotes of the two distributions differed significantly (P = 0.008), whereas no difference was detected in the upper asymptotes (P = 0.759).
The results of Experiment 1 replicate previous findings regarding the dramatic deficit in resolving the order of two successive taps at the fingers when adopting a crossed-hands posture (e.g. Shore et al. 2002; Yamamoto and Kitazawa 2001). More importantly, the present results demonstrate that on-line visual information regarding limb position is capable of modulating tactile processing significantly. This finding suggests that the postural information arising from the sight of the limbs plays an active role in the process of remapping tactile events onto externally defined coordinates. The significant difference found in the time constant between the XX and XI conditions is clearly driven by the postural appearance of the rubber hands: participants adopting a crossed-hands posture needed 157 ms less to resolve the temporal order of two taps if the visible rubber hands were uncrossed than if they were seen in a crossed position. Somewhat surprisingly, but in accordance with the predictions, performance was better when the rubber hands were placed in a position (uncrossed) that was incongruent with the posture adopted by the participants (crossed) than when both rubber and real hands were placed in the same posture (crossed). Therefore, one cannot attribute the observed differences simply to congruency errors produced by the mismatch between felt and seen positions of the hands.
The effect of the rubber-hand position was not as clear when the participants adopted an uncrossed-hands posture. Although there was a trend toward a decline in performance when the seen rubber hands were crossed, the difference in the time constant (accuracy) did not reach significance. One possible explanation for this weaker effect is the ease of the task when the hands are uncrossed (given that this is the canonical or typical posture that people usually adopt), which might have led to ceiling performance. Another possibility is that visual information has no effect at all on tactile spatial localization when our hands are uncrossed, so that the obtained trend would only indicate participants’ confusion in the presence of posturally incongruent rubber hands. However, this explanation seems unlikely given that postural incongruence between real and rubber hands did not seem to have a negative effect when participants adopted a crossed-hands posture (i.e. the XI condition). It is important to point out that, in the uncrossed-hands posture, we actually detected significant differences in the asymptotes of the distributions as a function of rubber-hand placement, with better performance in the II than in the IX condition. These differences, however, are difficult to interpret directly as a worsening in temporal accuracy because they reflect incorrect responses at the large SOAs. These mistakes could be caused by simple output confusion rather than by perceptual mislocalization of touch due to the vision of the rubber hands. It is precisely because of the importance of separating these two sources of variation (perceptual mislocalization vs. output confusion errors) that we chose to use a 4-parameter model.2
The individual data followed, in general, the overall pattern. In particular, JNDs (calculated here as the 75% threshold instead of the 84% threshold, for comparability with other studies; e.g. Shore et al. 2002, 2005; Röder et al. 2004) were larger in the IX condition (mean = 65 ms) than in the II condition (mean = 58 ms), and in the XX condition (mean = 233 ms) than in the XI condition (mean = 127 ms). For example, Subject 1 obtained a JND of 24 ms in the II condition and 36 ms in the IX condition. When he adopted a crossed-hands position, the JND was 61 ms in the XI condition and 105 ms in the XX condition. However, some of the participants’ data sets were far from fitting a Gaussian model well and instead followed an N-shaped pattern. In some cases this rendered the PSS and JND values calculated with the Gaussian model meaningless (in the order of seconds), and therefore statistics based on averaging individual data were not viable (see also Craig and Belser 2006, who pointed out the same problem). Here, in order to better characterize performance at the individual level in the crossed-hands conditions, we used a Gaussian flip model.
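The relation between a fitted time constant σ and the JND thresholds quoted above can be made concrete with a basic cumulative-Gaussian sketch (standard-library Python only; note this is the plain 2-parameter psychometric function, not the 4-parameter or flipped variants used for the actual fits):

```python
import math

def p_right_first(soa_ms, pss_ms, sigma_ms):
    """Cumulative-Gaussian psychometric function for a TOJ task:
    probability of judging 'right hand first' at a given SOA."""
    z = (soa_ms - pss_ms) / (sigma_ms * math.sqrt(2))
    return 0.5 * (1.0 + math.erf(z))

def jnd(sigma_ms, threshold=0.75):
    """JND as the SOA offset from the PSS at which accuracy reaches
    `threshold`: ~0.674*sigma at 75% and ~1*sigma at 84% correct.
    The standard-normal quantile is found by bisection."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2))) < threshold:
            lo = mid
        else:
            hi = mid
    return sigma_ms * (lo + hi) / 2.0
```

This makes explicit why the 84% threshold essentially equals σ itself, whereas the 75% JND reported for comparability is roughly two-thirds of σ.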
Finally, another potentially important aspect of the strong visual modulation found in Experiment 1 is the contribution of the feeling of ownership of the rubber hands (i.e. the rubber-hand illusion; Austen et al. 2004; Botvinick and Cohen 1998; Lloyd 2007; Pavani et al. 2000). In order to facilitate this type of illusion, in Experiment 1 we introduced a visual task that required participants to engage in the coordination between motor and visual input (Maravita et al. 2002c), as well as an anatomical congruence between the movements of the participants’ real and fake rubber hands. An interesting question here is whether the visual modulation observed in tactile processing could arise equally well in an automatic fashion, without the need for prior visuo-motor experience.
In Experiment 2, the active habituation task was removed, keeping only the simple correspondence between the movement of the real finger and that of the rubber hand. Previous literature has revealed important differences in terms of the perception and referral of tactile events depending on whether the observer incorporates the sight of (rubber) hands as part of their own body schema (e.g. Botvinick and Cohen 1998; Pavani et al. 2000). Several studies addressing rubber-hand spatial biases have shown that an active habituation period is necessary to create the rubber-hand illusion (e.g. Armel and Ramachandran 2003; Botvinick and Cohen 1998; Ehrsson et al. 2005; Lloyd et al. 2006). Furthermore, it has also been reported that the active use of tools (as opposed to passively holding them) modifies (extends) effective peri-personal space toward the tip of the tool, as if the tool were somehow able to receive tactile stimulation (Berti and Frassinetti 2000; Farnè and Làdavas 2000; Holmes et al. 2004; Holmes and Spence 2004; Iriki et al. 1996; Maravita et al. 2001, 2002a, b; Maravita and Iriki 2004). Hence, in Experiment 2 we examined the effects of visual information on tactile spatial coding when participants are not previously habituated to the visual consequences of the rubber hands’ movements, and thus the occurrence of the rubber-hand illusion should be less likely.
Materials and methods
Twenty new undergraduate students, naive as to the purpose of the study, took part in the experiment in exchange for course credit (mean age 22 years, SD = 2.23). All had normal or corrected-to-normal vision, were right-handed and reported normal tactile sensitivity.
Apparatus, design and procedure
All aspects of the method were as in Experiment 1 with the exception that the visual habituation task was removed from the experiment (and so were the green LEDs placed on the visible response-buttons). Therefore, only tactile TOJ trials were presented in both the practice and the experimental blocks. Practice blocks contained 36 trials and experimental blocks contained 140 trials, per condition.
Results and discussion
When participants held their hands in a crossed posture (see Fig. 3b), the σ was 219 ms (CI = 155 to 281 ms) if the visible rubber hands were placed in an uncrossed posture (XI), whereas it was 281 ms (CI = 218 to 341 ms) if the visible rubber hands were crossed (XX). However, this 62 ms difference in time constant as a function of the posture of the rubber hands fell short of significance (P = 0.168; see Fig. 3c). The PSS values were 6 ms (CI = −29 to 42 ms) and 93 ms (CI = 55 to 132 ms), respectively (a significant difference, P = 0.021). Finally, the upper asymptotes were significantly different (P < 0.001) whereas the lower asymptotes were not (P = 0.939).
The results of Experiment 2 show that the modulation of tactile processing by available visual information (previously seen in Experiment 1) vanished after removing the visuo-motor task introduced to reinforce the correspondence between the participant’s own actions and the seen movement of the visible rubber hands. The data followed a trend similar to what was found in Experiment 1, albeit greatly reduced to non-significant differences. In fact, when the real hands were crossed, observers needed 62 ms less to achieve the 84% accuracy threshold if they watched a pair of uncrossed rubber hands than when watching them in a crossed posture. In the same way, when adopting an uncrossed posture, participants needed 11 ms more to resolve temporal order if they watched a pair of crossed rubber hands rather than if watching them in an uncrossed fashion. The differences between the two experiments (i.e. non-significant trends in Experiment 2 vs. clearly significant effects in Experiment 1) highlight the potential dependence of this visual modulation of touch localization on the degree to which the movements of the seen rubber hands are linked to one’s motor behaviour. Moreover, the mere synchrony between the movements of real and fake fingers seems insufficient to induce visual modulation of tactile localization in full.
In order to further study the adaptation of the rubber to real hand mapping, we addressed the potential relationship between the proportion of correct responses in the visual trials and the tactile TOJ accuracy in a control experiment. A new group of participants (n = 12) were tested in the XI condition alone, and ran two blocks of trials: one with the visual task (as in Experiment 1) and one without the visual task (as in Experiment 2). First, we analysed the results of the block containing the visual task. Participants were divided into two groups according to their performance in the visual trials: good performers (n = 6; mean 2.5% error rate; ranging from 0 to 5%) and poor performers (n = 6; mean 19.5% error rate; ranging from 15 to 35% errors). The time constant σ for the tactile TOJ task was numerically lower (i.e. more accurate) for the good performers in the visual task than for the poor ones (129 ms vs. 139 ms, respectively), although this difference did not reach significance (P = 0.801). It should be noted, however, that the small sample size (only six subjects per group) might make it difficult to find small between-groups effects. After participants had completed both blocks of the control experiment (with and without visual task) we asked them about their experience of ownership regarding the rubber hands using a questionnaire with seven items adapted from Botvinick and Cohen (1998).3 Each item included a statement to which the participant had to assign a value according to his/her experience in both blocks (with or without visual task). We used a 7-point scale ranging from (−3) “I agree more strongly in test 1 (i.e. with lights) than test 2 (i.e. without lights)” to (3) “I agree more strongly in test 2 than test 1” (order counterbalanced). Point 0 was “Not at all in agreement in either experiment”.
None of the statements resulted in responses significantly different from 0 except for statement #2 “It seemed as if the hand which pressed the top button was my real hand” (mean 1.5; preference for the block containing the visual task; t = 3.32, df = 11; P = 0.007). For statements #4 through #7 (control items), responses were almost always zero. Statements #1 and #3 produced some stronger preferences, but the direction of the preference was equally distributed across the two blocks. These results suggest that subjects experienced some kind of subjective difference between the two blocks (and not simply a choice bias), caused by the visuo-motor link between rubber and real hands. The differences in the time constant between the two blocks did not reach significance (mean in the no-visual task condition = 121 ms; mean in the visual task condition = 130 ms; P = 0.668). This lack of difference in TOJ performance can possibly be explained by the extra experience that all subjects had with the XI task (i.e. the manipulation was introduced as a within-participants factor), given their high overall accuracy (Craig and Belser 2006), as well as by the fact that each participant experienced the visual-task block (half of them prior to the no-visual-task block). However, the main purpose and conclusion of this control experiment relates to the measurement of the stronger impression of ownership of the rubber hands in the block including the visual trials.
This study examined the role that available visual information regarding body posture plays on the localization of tactile events. The data of the two experiments reported here are in line with the previous finding that crossing the arms causes observers to misreport the order of two tactile stimuli delivered at the hands in quick succession (e.g. Kóbor et al. 2006; Shore et al. 2002; Yamamoto and Kitazawa 2001). However, they extend these previous reports in two important ways. First, this robust and reproducible arm-crossing deficit could be significantly ameliorated as a consequence of altered visual information about hand posture. Second, this visual influence seems to depend on the degree to which the visual information about the body is strongly coupled to the observer’s own actions, in a process revealing short-term adaptation. The implications of these two findings are discussed below.
The finding that temporal order accuracy (the time interval necessary for correctly judging the order of two taps) in the crossed-hands tactile TOJ task significantly improved when participants watched a pair of uncrossed rubber hands provides evidence for how visual and somatotopic information are combined during tactile information processing. This visual modulation provides strong support for the argument that the marked arm-crossing deficit seen in previous studies is rooted in a failure during the process whereby the locations of tactile stimuli are mapped from a somatotopic reference frame onto a more external coordinate system. Such a process would be based on, or strongly shaped by, a visual reference frame (e.g. Pouget et al. 2002; Röder et al. 2004). Indeed, the benefit observed when adding visible uncrossed hands further strengthens the claim of incompatible matching (Röder et al. 2004; Yamamoto and Kitazawa 2001) between somatotopic and visual coordinates in a crossed posture. The results, in fact, revealed that when the stimulation is encoded in a consistent manner across a visual reference frame (i.e. the rubber hands; visual coordinates) and somatotopic coordinates (the real hands), performance in the temporal order judgement task is more accurate. In particular, when the participants’ view is directed to uncrossed rubber hands (visual/spatial cues currently aligned with the somatotopic coordinates of the participants’ real crossed hands), performance is even significantly better than when no misalignment between seen posture and somatotopic projections is introduced (i.e. real and rubber hands crossed).
As discussed above, several previous findings support an important role of early visual experience in determining tactile remapping processes. For example, the crossed-hands effect typically occurs to the same extent even when the eyes of the participant are closed (or participants are blindfolded; e.g. Kóbor et al. 2006; Yamamoto and Kitazawa 2001) and even in blind individuals, as long as their blindness is acquired rather than congenital (Röder et al. 2004). In particular, these results have led to the hypothesis that the remapping process might be carried out through an automatic integrated system (acquired early in the course of development). Such a system transforms tactile events at the skin from somatotopic coordinates (typically found in primary somatosensory areas) to a more abstract frame of reference based on visual coordinates. However, these previous studies do not provide information about the role of immediate visual information regarding current body posture. Our results highlight the potentially critical influence that on-line visual information about body posture exerts on tactile spatial localization. And, importantly, the present data suggest that this tactile remapping system has some degree of short-term flexibility that allows for quick adaptation as new postures are adopted. Previous investigations have indeed shown evidence of this kind of multisensory short-term adaptation. For example, manipulation of visual experience (e.g. deprivation) reversibly improves tactile spatial acuity (Facchini and Aglioti 2003) and tactile discrimination (e.g. of Braille characters; Kauffman et al. 2002).
However, this quick adaptation has its limitations, as shown in Experiment 2, where the visuo-tactile remapping process seemed less dependent on the altered visual information when visuo-motor experience with the sight of the hands was weaker, perhaps reflecting a reduction in the feeling of ownership of the rubber hands (rubber-hand illusion).
This study demonstrates the influence that available visual information about body posture exerts during the spatial remapping processes leading to tactile localization. These cross-modal effects accord with previous claims about the existence of a visuo-tactile remapping system established early in development but, crucially, they indicate a certain amount of short-term cross-modal flexibility that allows for an adaptive use of on-line visual information. Finally, the present results also indicate that this immediate visual influence (of body posture) on the tactile remapping process depends strongly on the degree to which the movements of the seen body parts (e.g. hands) are linked to one’s motor behaviour, possibly facilitating their incorporation as part of one’s own body schema.
Footnotes
1. The body is in a typical posture when the left limbs are placed on the left of the body and vice versa for the right limbs.
2. In fact, in order to confirm this, we also explored the data using a 2-parameter model. The analysis of this condition results in a very significant, but clearly artifactual, difference in terms of time constant (P < 0.001).
3. Statement #1 (“I felt as if my hands were uncrossed”); #2 (“It seemed as if the hand which pressed the top button was my real hand”); #3 (“I felt as if the rubber hands were my hands”); #4 (“It appeared as if my hands (under the box) were displacing towards the top (above the box)”); #5 (“It seemed as if I might have two right hands or two left hands”); #6 (“I felt as if my (real) hands were turning ‘rubbery’”); and #7 (“It appeared (visually) as if the rubber hands (on the top of the box) were displacing towards the bottom (towards my real hands)”).
This work was supported by a grant from the Spanish Ministerio de Educación y Ciencia TIN2004-04363-C03-02. E.A. is supported by a fellowship Beca de Formación de Profesorado Universitario from the Spanish Ministerio de Educación y Ciencia. We would like to thank Joan López-Moliner for his valuable help with the data analyses. We would also like to thank Mikel Santesteban and Aida Mallorquí.