Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions
Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear which stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effects of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest that conjunction learning involving such stimuli may be an emergent phenomenon reflecting multiple distinct learning processes, each of which capitalizes on a different type of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target from distractor stimuli; they also learn stimulus identity mappings that contribute to performance improvements.
Keywords: Perceptual learning, Visual search
The authors wish to thank numerous undergraduate research assistants for their help with data collection. E.R. was supported by a National Science Foundation graduate research fellowship during data collection for this project (DGE-1313911). The research was supported by internal Dartmouth funding, Templeton Foundation Grant 14316 to P.T. and National Science Foundation Grant 1632738 to P.T. E.R. is currently supported by a postdoctoral Ruth L. Kirschstein National Research Service Award from the National Institutes of Health (F32MH108317).