The effects of immediate vision on implicit hand maps
Perceiving the external spatial location of the limbs using position sense requires that immediate proprioceptive afferent signals be combined with a stored body model specifying the size and shape of the body. Longo and Haggard (Proc Natl Acad Sci USA 107:11727–11732, 2010) developed a method to isolate and measure this body model in the case of the hand in which participants judge the perceived location in external space of several landmarks on their occluded hand. The spatial layout of judgments of different landmarks is used to construct implicit hand maps, which can then be compared with actual hand shape. Studies using this paradigm have revealed that the body model of the hand is massively distorted, in a highly stereotyped way across individuals, with large underestimation of finger length and overestimation of hand width. Previous studies using this paradigm have allowed participants to see the locations of their judgments on the occluding board. Several previous studies have demonstrated that immediate vision, even when wholly non-informative, can alter processing of somatosensory signals and alter the reference frame in which they are localised. The present study therefore investigated whether immediate vision contributes to the distortions of implicit hand maps described previously. Participants judged the external spatial location of the tips and knuckles of their occluded left hand either while being able to see where they were pointing (as in previous studies) or while blindfolded. The characteristic distortions of implicit hand maps reported previously were clearly apparent in both conditions, demonstrating that the distortions are not an artefact of immediate vision. However, there were significant differences in the magnitude of distortions in the two conditions, suggesting that vision may modulate representations of body size and shape, even when entirely non-informative.
Keywords: Body representation · Body schema · Position sense · Vision
This research was supported by a grant from the European Research Council (ERC-2013-StG-336050) to MRL.