
Psychological Research, Volume 75, Issue 4, pp 279–289

Contextual remapping in visual search after predictable target-location changes

  • Markus Conci
  • Luning Sun
  • Hermann J. Müller
Original Article

Abstract

Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance (‘contextual cueing’). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not ‘predictable’ (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively ‘remapped’ to accommodate new task requirements.

Keywords

Visual search, Target location, Recognition test, Proactive interference, Search display


Acknowledgments

This work was supported by Deutsche Forschungsgemeinschaft (DFG) Research Group (FOR 480) and CoTeSys Excellence Cluster (142) grants. We would like to thank Bernhard Hommel, Takatsune Kumada, and Stefan Pollmann for valuable comments on an earlier draft of the manuscript.


Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  1. Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
