Recent models of human postural control have focused on the nonlinear properties inherent to fusing sensory information from multiple modalities. In general, these models are underconstrained, requiring additional experimental data to clarify the properties of such nonlinearities. Here we report an experiment suggesting that new or multiple mechanisms may be needed to capture the integration of vision into the postural control scheme. Subjects were presented with visual displays whose motion consisted of two components: a constant-amplitude, 0.2 Hz oscillation, and a constant-velocity translation from left to right at velocities between 0 cm/s and 4 cm/s. Postural sway variability increased systematically with translation velocity, but remained below that observed in the eyes-closed condition, indicating that the postural control system is able to use visual information to stabilize sway even at translation velocities as high as 4 cm/s. Gain initially increased as translation velocity increased from 0 cm/s to 1 cm/s and then decreased. The changes in gain and variability provided a clear indication of nonlinearity in the postural response across conditions, which we interpret in terms of sensory reweighting. The fact that gain did not decrease at low translation velocities suggests that the postural control system is able to decompose relative visual motion into environmental motion and self-motion. The eventual decrease in gain suggests that nonlinearities in sensory noise levels (state-dependent noise) may also contribute to the sensory reweighting involved in postural control. These results provide important constraints and suggest that multiple mechanisms may be required to model the nonlinearities involved in sensory fusion for upright stance control.
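The stimulus described above can be sketched as the sum of its two components. This is a minimal illustrative sketch, not the authors' code: the oscillation amplitude (here 2 cm), trial duration, and sampling rate are assumed values, since the abstract specifies only the 0.2 Hz frequency and the 0–4 cm/s translation velocities.

```python
import numpy as np

def stimulus_position(t, v, amp=2.0, freq=0.2):
    """Horizontal display position (cm) at time t (s):
    a constant-amplitude sinusoidal oscillation at `freq` Hz
    plus a constant-velocity translation at `v` cm/s.
    `amp` is an assumed amplitude; the abstract does not give it."""
    return amp * np.sin(2.0 * np.pi * freq * t) + v * t

# Assumed 60 s trial sampled at 100 Hz.
t = np.arange(0.0, 60.0, 0.01)
trajectories = {v: stimulus_position(t, v) for v in [0.0, 1.0, 2.0, 4.0]}
```

At t = 5 s the oscillatory component completes exactly one cycle (period 1/0.2 = 5 s), so the position equals the translation component alone, which makes the decomposition into environmental motion and self-motion cues easy to see in the signal.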
Keywords: Sensory reweighting · Multisensory integration · Vision · Adaptation · Postural control · Human