Volume 27, Issue 6, pp 996-1007
Date: 04 Mar 2013

View dependence in scene recognition after active learning


Abstract

Human spatial encoding of three-dimensional navigable space was studied using a virtual environment simulation, which allowed subjects to become familiar with a realistic scene by making simulated rotational and translational movements during training. Subsequent tests determined whether subjects could generalize their recognition ability by identifying novel-perspective views and topographic floor plans of the scene. Results from picture recognition tests showed that views from familiar directions were most easily recognized, although significant generalization to novel views was observed. Topographic floor plans were also easily identified. In further experiments, novel-view performance diminished when active training was replaced by passive viewing of static images of the scene. However, the ability to make self-initiated movements, as opposed to watching dynamic movie sequences, had no effect on performance. These results suggest that the representation of navigable space is view dependent, and they highlight the importance of spatiotemporal continuity during learning.