Because images of objects scattered in space are projected onto the flat retina, depth (i.e., z-direction) perception can be considerably ambiguous, and position information may therefore be distorted, especially along the z-axis. In this paper, virtual grids using haptic and auditory feedback are proposed to complement ambiguous visual depth cues, and their influence on position identification in a 3-D workspace is investigated experimentally. A haptic grid is generated with the PHANTOM® Omni™, and a sound grid is generated by changing the frequency characteristics of the sound source according to the operator's hand movement. Both grids take the form of virtual planes placed at regular intervals of 10 mm along the three axes (i.e., x, y, and z). The haptic and sound grids are presented to subjects separately or simultaneously, depending on the test condition; in bimodal presentation, the grids are displayed with cross-modal synchrony. The statistically significant results indicate that the presence of the grids increased positioning precision; in particular, errors along the z-axis decreased by more than 50% (F=19.82, p<0.01).
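As an illustration of the grid-plane idea, the following minimal sketch detects when a hand position crosses one of the virtual planes spaced 10 mm apart, and maps depth to a pitch step. The spacing comes from the abstract; the specific helper names, the base frequency, and the semitone mapping are illustrative assumptions, not the paper's actual implementation.

```python
import math

GRID_SPACING_MM = 10.0  # virtual planes every 10 mm along x, y, z (from the abstract)

def nearest_plane_distance(pos_mm):
    """Distance (mm) from a 3-D position to the nearest grid plane on each axis."""
    return [abs(c - GRID_SPACING_MM * round(c / GRID_SPACING_MM)) for c in pos_mm]

def crossed_plane(prev_mm, curr_mm):
    """Per axis, True if the hand crossed a grid plane between two samples.

    A crossing is detected when the grid cell index (floor of position / spacing)
    changes between consecutive samples; this is the event that would trigger
    haptic or auditory feedback.
    """
    return [math.floor(p / GRID_SPACING_MM) != math.floor(c / GRID_SPACING_MM)
            for p, c in zip(prev_mm, curr_mm)]

def z_to_frequency(z_mm, base_hz=220.0, step_ratio=2 ** (1 / 12)):
    """Hypothetical sound-grid mapping: pitch rises one semitone per 10 mm of depth."""
    return base_hz * step_ratio ** round(z_mm / GRID_SPACING_MM)
```

In practice such a check would run inside the haptic device's update loop, emitting a force pulse and/or a pitch change whenever `crossed_plane` reports a crossing, so that both modalities fire in cross-modal synchrony.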
- grid plane
- depth ambiguity