Coordinating spatial referencing using shared gaze
To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a pseudorealistic city scene. The partners were able to communicate using speech alone (shared voice), gaze cursors alone (shared gaze), or both. In the shared-gaze conditions, a gaze cursor representing Partner A’s eye position was superimposed over Partner B’s search display and vice versa. Spatial referencing times (for both partners to find and agree on targets) were faster with shared gaze than with speech, with this benefit due primarily to faster consensus (less time needed for one partner to locate the target after it was located by the other partner). These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information. Supplemental materials for this article may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.
Keywords: Joint attention, Communication condition, Spatial reference, Locate partner, Search error