
Visual object localisation in space

Interaction of retinal, eye position, vestibular and neck proprioceptive information

  • Research Article
  • Published in: Experimental Brain Research

Abstract

Perceptual updating of the location of visual targets in space after intervening eye, head or trunk movements requires an interaction between several afferent signals (visual, oculomotor efference copy, vestibular, proprioceptive). The nature of this interaction is still a matter of debate. To address this problem, we presented subjects (n=6) in the dark with a target (light spot) at various horizontal eccentricities (up to ±20°) relative to the initially determined subjective straight-ahead direction (SSA). After a memory period of 12 s in complete darkness, the target reappeared at a random position and subjects reproduced its previous location in space using a remote control. For both the presentation and the reproduction of the target's location, subjects either kept their gaze in the SSA (retinal viewing condition) or fixated the eccentric target (visuo-oculomotor condition). Three experimental series were performed. (A) Visual-only series: reproduction of the target's location in space was close to ideal, independently of viewing condition; estimation curves (reproduced vs presented positions) showed intercepts ≈0° and slopes ≈1. (B) Visual-vestibular series: during the memory period, subjects were horizontally rotated to the right or left by 10° or 18° at 0.8-Hz or 0.1-Hz dominant frequency. Following the 0.8-Hz body rotation, reproduction was close to ideal, while at 0.1 Hz it was partially shifted along with the body, in line with the known vestibular high-pass characteristics. In addition, eccentric target presentation reduced the slopes of the estimation curves (below 1). (C) Visual-vestibular-neck series: a shift toward the trunk also occurred after low-frequency neck stimulation (trunk rotated about the stationary head). When vestibular and neck stimuli were combined (independent head and trunk rotations), their effects summed linearly, such that the errors cancelled each other during head rotation on the stationary trunk.
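The frequency dependence found in the visual-vestibular series is what a high-pass vestibular channel would predict. As a rough numerical illustration (the first-order filter and its 2-s time constant below are arbitrary assumptions for demonstration, not values fitted by the authors):

```python
import math

def highpass_gain(f_hz, tau_s):
    """Magnitude of a first-order high-pass filter jwT / (1 + jwT) at f_hz."""
    wt = 2.0 * math.pi * f_hz * tau_s
    return wt / math.sqrt(1.0 + wt * wt)

# With an illustrative time constant of 2 s, a 0.8-Hz rotation is transmitted
# almost fully, while a 0.1-Hz rotation is clearly attenuated; the unregistered
# low-frequency part of the displacement would leave the remembered target
# partially "shifted along with the body", as observed.
g_fast = highpass_gain(0.8, 2.0)   # close to 1 (near-veridical reproduction)
g_slow = highpass_gain(0.1, 2.0)   # well below 1 (partial shift with the body)
```

Any first-order high-pass stage shows this qualitative pattern; the exact attenuation at 0.1 Hz depends on the time constant chosen.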
Variability of responses was always lowest for targets presented at SSA, irrespective of intervening eye, head or trunk rotations. We conclude that: (1) subjects referenced "space" to pre-rotatory SSA and that the memory trace of the target's location in space was not altered during the memory period; and that (2) they used internal estimates of eye, head and trunk displacements with respect to space to match current target position with the memory trace during reproduction; these estimates would be obtained by inverting the physical coordinate transformations produced by these displacements. We present a model which is able to describe these operations and whose predictions closely parallel the experimental results. In this model the estimate of head rotation in space is not obtained directly from the vestibular head-in-space signal, but from a vestibular estimate of the kinematic state of the body support.
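The linear summation of vestibular and neck effects, and their cancellation during head rotation on the stationary trunk, can be sketched numerically. In this hypothetical sketch (the gains, angles, and the equal-attenuation assumption are illustrative choices, not the authors' fitted model), each channel's low-frequency attenuation leaves a residual localisation error proportional to the unregistered part of the rotation:

```python
def perceived_shift(head_in_space_deg, trunk_in_space_deg,
                    g_vest=0.6, g_neck=0.6):
    """Linear superposition of vestibular and neck localisation errors.

    g_vest, g_neck: assumed low-frequency channel gains (< 1); the part of
    each rotation that the channel fails to register produces a shift of the
    remembered target location.
    """
    neck_deg = head_in_space_deg - trunk_in_space_deg   # head relative to trunk
    vest_error = (1.0 - g_vest) * head_in_space_deg     # shift along with body
    neck_error = -(1.0 - g_neck) * neck_deg             # shift toward the trunk
    return vest_error + neck_error

# Whole-body rotation (head and trunk together): residual shift with the body.
shift_body = perceived_shift(18.0, 18.0)
# Trunk rotated about the stationary head: shift toward the trunk.
shift_trunk = perceived_shift(0.0, 18.0)
# Head rotation on the stationary trunk: with equal channel gains the two
# error terms are equal and opposite, so they cancel.
shift_head_only = perceived_shift(18.0, 0.0)
```

With equal assumed gains, the cancellation during head-on-trunk rotation falls out of the superposition directly, paralleling the experimental finding; unequal gains would leave a residual error proportional to their difference.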



Cite this article

Mergner, T., Nasios, G., Maurer, C. et al. Visual object localisation in space. Exp Brain Res 141, 33–51 (2001). https://doi.org/10.1007/s002210100826
