
Attention, Perception, & Psychophysics, Volume 77, Issue 6, pp 1841–1847

Automatic capture of attention by conceptually generated working memory templates

  • Sol Z. Sun
  • Jenny Shen
  • Mark Shaw
  • Jonathan S. Cant
  • Susanne Ferber

Abstract

Many theories of attention propose that the contents of working memory (WM) can act as an attentional template, which biases processing in favor of perceptually similar inputs. While support has been found for this claim, it is unclear how attentional templates are generated when searching real-world environments. We hypothesized that in naturalistic settings, attentional templates are commonly generated from conceptual knowledge, an idea consistent with sensorimotor models of knowledge representation. Participants performed a visual search task in the delay period of a WM task, where the item in memory was either a colored disk or a word associated with a color concept (e.g., “Rose,” associated with red). During search, we manipulated whether a singleton distractor in the array matched the contents of WM. Overall, we found that search times were impaired in the presence of a memory-matching distractor. Furthermore, the degree of impairment did not differ based on the contents of WM. Put differently, regardless of whether participants were maintaining a perceptually colored disk identical to the singleton distractor, or whether they were simply maintaining a word associated with the color of the distractor, the magnitude of attentional capture was the same. Our results suggest that attentional templates can be generated from conceptual knowledge, in the physical absence of the visual feature.

Keywords

Visual working memory · Visual short-term memory · Attention · Conceptual knowledge · Semantic memory · Long-term memory · Embodied cognition

When navigating complex environments, the visual system is faced with the problem of selecting a subset of the available information for further processing. In such instances, visual attention is guided by both exogenous factors, such as saliency (Itti & Koch, 2000; Jonides, 1981), and endogenous factors, internal to the observer (Corbetta & Shulman, 2002; Wolfe, 1994). Such endogenous factors include goal-states (Folk, Remington, & Johnston, 1992), reward contingencies (Della Libera & Chelazzi, 2006, 2009), and recent history of attended features or locations (Awh, Belopolsky, & Theeuwes, 2012; Brascamp, Blake, & Kristjánsson, 2011; Maljkovic & Nakayama, 1996). Many theories of attention propose that endogenous factors exert influence on attention through visual working memory (WM) representations. For instance, the biased competition model posits that visual WM can hold an attentional template, which biases attention in favor of perceptually similar objects (Chelazzi, Duncan, Miller, & Desimone, 1998; Chelazzi, Miller, Duncan, & Desimone, 1993; Desimone & Duncan, 1995; Soto, Heinke, Humphreys, & Blanco, 2005; for reviews, see Soto, Hodsoll, Rotshtein, & Humphreys, 2008; Woodman, Carlisle, & Reinhart, 2013). Previous experiments investigating biased competition in humans have employed a dual-task paradigm consisting of a visual search task embedded in the delay period of a WM task (Downing, 2000; Downing & Dodds, 2004; Soto et al., 2005). If the contents of WM automatically guide attention, then attention should be directed to items in the search array that match the contents of memory, even when it is never advantageous to do so (Woodman & Luck, 2007). While the results of earlier studies were mixed, it is now believed that visual WM contents can modulate both search times and eye movements, under certain conditions (Hollingworth & Luck, 2009; Olivers, Meijer, & Theeuwes, 2006; Woodman, Luck, & Schall, 2007; Woodman et al., 2013).

While most previous experiments have explored the role of perceptually encoded WM templates in guiding attention, other studies have examined whether verbal and conceptual information can also exert such guidance. Early work by Potter (1975) showed that object identification in a rapid serial visual presentation (RSVP) stream is about as accurate when the target is verbally cued as when it is cued with a visual object. Similarly, Soto and Humphreys (2007) demonstrated that verbal descriptions of objects maintained in WM (e.g., “red square”) automatically direct attention to their physical referents in an intervening search task. With respect to conceptual knowledge, previous studies have also demonstrated that metaphorical meaning can facilitate processing at congruent regions of space (e.g., “God” facilitates processing of items presented above a central fixation cross; Gozli, Chasteen, & Pratt, 2013; Gozli, Chow, Chasteen, & Pratt, 2013). Finally, Moores, Laiti, and Chelazzi (2003) showed that search times are impaired when distractor items are semantically associated with targets (e.g., searching for a wine bottle when a wine glass is present).

Taken together, these studies converge on the idea that verbal information held in WM can automatically capture attention and that conceptual information can drive implicit shifts of attention. This suggests that attentional templates could be generated by conceptual knowledge in the physical absence of the target-defining feature (e.g., color, orientation; Awh, Vogel, & Oh, 2006), an idea that has not yet been explicitly tested. Although most experiments investigating biased competition employ tasks in which a visual object is encoded to serve as a template, we hypothesized that in naturalistic settings, attentional templates may be more commonly generated by conceptual knowledge than perceptual input. In most situations, convenient sampling of the perceptual features of the target may not be available. In these cases, the goal-state of the observer (e.g., find a blue pen) could result in the retrieval of a conceptual representation (pen) from long-term memory (LTM). Subsequently, the features of this concept (blue, cylindrical) are activated in WM, which in turn guide attention.

This hypothesis is consistent with sensorimotor (or embodied) models of conceptual knowledge (Barsalou, 1999; Warrington & McCarthy, 1987) that assume such knowledge is partially represented by the same neural substrates involved in perceiving or acting on the physical, real-world referents (Hsu, Kraemer, Oliver, Schlichting, & Thompson-Schill, 2011; Simmons et al., 2007). Indeed, a recent study has shown that conceptual color information associated with auditory words can guide overt shifts of attention to objects with similar features (Huettig & Altmann, 2011). Additionally, Goodhew, Kendall, Ferber, and Pratt (2014) showed that attentional sets could orient attention to targets that differed from distractors based on meaning (e.g., searching for the word red in an array of other color words displayed in black type). Finally, Yee, Ahmed, and Thompson-Schill (2012) showed that directing attention to color using a Stroop task (Stroop, 1935) facilitated conceptual color priming in a subsequent word categorization task. To summarize, prior research has shown that conceptual color information can reliably influence behavior (Goodhew et al., 2014; Huettig & Altmann, 2011; Yee et al., 2012) and that verbal information held in WM can influence attentional processes (Hollingworth & Luck, 2009; Olivers et al., 2006; Soto et al., 2005; Woodman et al., 2013). However, it is unclear whether conceptual knowledge associated with verbal information maintained in WM can also influence attentional processes in a seemingly unrelated task.

Here, we tested whether color concepts associated with words maintained in WM could activate an attentional template that causes attentional capture by items with shared color features. To this end, we modified the dual-task paradigm used in past studies (Dombrowe, Olivers, & Donk, 2010; Olivers, 2009; Olivers et al., 2006; Soto et al., 2005; Woodman & Luck, 2007). Participants held either a colored disk (replication of Dombrowe et al., 2010) or a word associated with a color concept in WM while engaging in a visual search task. The search array always contained a color singleton distractor among other items colored gray, and the color of this singleton distractor was either related or unrelated to the contents of memory.

When a colored disk was held in memory, we predicted that a related singleton distractor would impair search times relative to an unrelated distractor. Critically, if attentional templates can be generated by conceptual knowledge in the absence of the physical feature (i.e., color), then we should observe this same pattern of results when participants are maintaining a word associated with a color in WM.

Method

Participants

Twenty-two individuals (19 female, 19 right-handed) participated in the experiment. Participants had a mean age of 19.6 years (range: 18–26 years), reported normal or corrected-to-normal visual acuity and normal color vision, and were native English speakers. Participants provided written informed consent and received partial course credit as compensation. Procedures were approved by the University of Toronto Research Ethics Board.

Stimuli and apparatus

Stimuli were presented using E-Prime 2.0 (Psychology Software Tools, Pittsburgh, PA) on a CRT monitor with a resolution of 1,280 × 960, and a refresh rate of 80 Hz. Displays were viewed at a distance of 40 cm, and head position was secured with a chinrest.

Thirty words associated with either red (e.g., Lipstick), green (e.g., Grass), or blue (e.g., Ocean) served as memory stimuli in the conceptual condition (see Table 1). The majority of the words were selected from the University of South Florida Free Association Norms Database (Nelson, McEvoy, & Schreiber, 1998). However, many of the words listed in the database as strong associates of the three color concepts did not appear to evoke vivid mental imagery of the colors (e.g., “Eyes,” listed as a strong associate of blue). Thus, we supplemented our word list with other words selected on the basis of subjective associations with color (e.g., “Ocean,” associated with blue). We limited the list to 10 words per color in order to include only the words most strongly associated with each color. Words were presented centrally in white 24-point Arial type on a black background and subtended approximately 2.3° of visual angle in height and a mean of 5.2° in width. Disks colored red (CIE L*a*b* coordinates: 54, 81, 70), green (83, −75, 77), or blue (30, 68, −112) served as memory stimuli in the perceptual condition; each subtended 3.1° × 3.1° and was also presented centrally.
Table 1

Working memory stimuli in the conceptual condition

RED          GREEN      BLUE
Rose         Emerald    Sky
Blood        Grass      Sapphire
Ruby         Spinach    Sea
Lipstick     Broccoli   Lake
Tomato       Forest     Lagoon
Flame        Trees      Ocean
Stopsign     Cucumber   Water
Strawberry   Lime       Jeans
Firetruck    Leaf       Policeman
Ketchup      Peas       Cold

Items in the search array consisted of eight disks, each subtending 2.9° × 2.9° (off-center), arranged equidistantly on an imaginary circle around a central fixation cross. The medial edge of each disk deviated 12° from fixation. All items were colored gray (79, 0, 0) except the singleton distractor, which was presented in red, green, or blue. All distractor disks contained a central “X” drawn in black (0, 0, 0) with line segments 1.8° long and 0.2° thick. The target disk contained a single line tilted 45° in orientation toward either the left or the right (see Fig. 1).
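The array geometry described above can be sketched in a few lines. The code below is an illustrative reconstruction, not the authors' experiment script; it computes the eight disk centers in degrees of visual angle, assuming (an inference from the reported dimensions) that the center eccentricity equals the 12° medial-edge offset plus the disk radius.

```python
import math

def array_positions(n_items=8, edge_ecc=12.0, disk_diam=2.9, start_angle=0.0):
    """Centers (x, y) in degrees of visual angle for n_items disks spaced
    equidistantly on an imaginary circle around fixation.

    The paper places the *medial edge* of each disk 12 deg from fixation,
    so the center eccentricity is edge_ecc + disk_diam / 2 (assumption)."""
    r = edge_ecc + disk_diam / 2.0
    step = 360.0 / n_items
    positions = []
    for i in range(n_items):
        theta = math.radians(start_angle + i * step)
        positions.append((r * math.cos(theta), r * math.sin(theta)))
    return positions

centers = array_positions()  # eight (x, y) pairs, all at 13.45 deg eccentricity
```

Converting these coordinates to pixels would additionally require the monitor's physical dimensions and the 40-cm viewing distance reported above.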
Fig. 1

Trials began with a 500 ms fixation cross, followed by the presentation of either a colored disk (perceptual condition) or a word (conceptual condition) to be encoded for 800 ms. After an interstimulus interval (ISI) of 150 ms, the search array was presented until response, with a limit of 4,000 ms. Participants were instructed to find the disk containing a single line (as opposed to an “X”) and make a speeded response to indicate the direction of the line’s tilt (left or right). On related trials, the color of the singleton distractor matched the color of the disk held in memory, or the color concept associated with the word held in memory. On unrelated trials, the color of this distractor was randomly assigned to a different color. After an ISI of 500 ms, a memory probe was presented centrally. Depending on the trial type, the probe was either a colored disk (perceptual condition) or a word (conceptual condition). Participants made a nonspeeded same/different response to indicate whether the probe was identical to the studied item

Design and procedure

Trial types consisted of Modality (Conceptual vs. Perceptual) crossed with Distractor-type (Related vs. Unrelated) for a total of four trial types. Participants completed 75 trials of each type for a total of 300 trials. Manipulation of Modality was blocked with presentation of block order counterbalanced across participants. Each block contained 150 trials. Manipulation of Distractor-type was randomized within blocks.

Trials began with a 500 ms fixation cross, followed by the presentation of either a word (conceptual condition) or colored disk (perceptual condition) to be encoded for 800 ms. The color of the study disk, or color concept associated with the study word, was randomly selected for each trial, with the constraint that each of the three colors/color concepts (red, green, and blue) appeared on one third of trials. In the conceptual condition, the actual word presented was randomly selected out of the list of 10 words associated with the selected color concept. Participants were instructed to remember the presented item until the end of the trial. After an interstimulus interval (ISI) of 150 ms, the search array was presented until response, with a limit of 4,000 ms. Participants were instructed to find the disk containing a single line (as opposed to an “X”) and make a speeded response to indicate the direction of the line’s tilt (left or right). The location of the target disk was randomly selected for each trial. A salient color singleton distractor was always present in the array. On related trials, the color of this distractor matched the color of the disk held in memory, or the color concept associated with the word held in memory. On unrelated trials, the color of this distractor was randomly assigned to one of the two other colors. The distractor location was randomly selected with the constraint that it was separated from the target by at least one item. After an ISI of 500 ms, a single word (conceptual condition) or disk (perceptual condition) was presented centrally as a memory probe and participants made a nonspeeded response indicating whether it was identical to the item studied at the beginning of the trial (see Fig. 1). In the perceptual condition, the color of the memory probe disk was identical to the studied disk on 50 % of all trials. On all other trials, the color of the memory probe disk was randomly chosen from the two other colors. 
In the conceptual condition, the memory probe word was identical to the studied word on 50 % of trials. On all other trials, the memory probe word was randomly selected from the remaining pool of 29 words. Participants responded with their right hand, using the index and middle fingers on the “k” (left) and “l” (right) keys for the search task, and with their left hand on the “a” (same) and “s” (different) keys for the WM task.
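The design constraints above (colors balanced in thirds, distractor type randomized within blocks, and the distractor separated from the target by at least one item) can be sketched as a trial-list generator. This is a hypothetical Python reconstruction with illustrative names; the experiment itself was implemented in E-Prime.

```python
import random

COLORS = ["red", "green", "blue"]

def make_block(modality, n_trials=150, n_locs=8, seed=None):
    """Build one block's trial list under the constraints reported in
    the Method (sketch; field names are illustrative)."""
    rng = random.Random(seed)
    # Balanced factors: each color on 1/3 of trials, each distractor
    # type on 1/2 of trials, independently shuffled.
    colors = COLORS * (n_trials // len(COLORS))
    dtypes = ["related", "unrelated"] * (n_trials // 2)
    rng.shuffle(colors)
    rng.shuffle(dtypes)
    trials = []
    for color, dtype in zip(colors, dtypes):
        if dtype == "related":
            distractor_color = color
        else:
            distractor_color = rng.choice([c for c in COLORS if c != color])
        target_loc = rng.randrange(n_locs)
        # Distractor separated from the target by at least one item:
        # exclude the target position and its two immediate neighbors.
        banned = {(target_loc - 1) % n_locs, target_loc, (target_loc + 1) % n_locs}
        distractor_loc = rng.choice([p for p in range(n_locs) if p not in banned])
        trials.append(dict(modality=modality, memory_color=color,
                           distractor_type=dtype,
                           distractor_color=distractor_color,
                           target_loc=target_loc,
                           distractor_loc=distractor_loc))
    return trials
```

Running `make_block("conceptual")` and `make_block("perceptual")` once each, in counterbalanced order, would reproduce the 300-trial session described above.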

Results

Working memory

Participants were accurate in their WM judgments, averaging 95.5 % correct overall. Additionally, WM accuracy was uniformly at ceiling across all conditions. Specifically, a 2 × 2 repeated measures analysis of variance (ANOVA) with factors of Modality (Conceptual vs. Perceptual) and Distractor-type (Related vs. Unrelated) revealed no significant main effects or interaction (all ps > .05).

Visual search

Targets were correctly identified on 96.1 % of trials. Reaction times (RTs) were included in the analysis only if both the visual search and WM responses were correct on a given trial. For each participant and each condition, RTs more than 2.5 standard deviations from the mean were excluded. Both search accuracy (proportion correct) and mean RTs were submitted to a 2 × 2 repeated measures ANOVA with factors of Modality (Conceptual vs. Perceptual) and Distractor-type (Related vs. Unrelated). For accuracy, performance was uniformly at ceiling across all conditions, resulting in no significant main effects or interaction (all ps > .05; see Table 2).
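The RT exclusion rule can be expressed as a small helper applied to each participant-by-condition cell. A minimal sketch, assuming the cutoff is measured from the cell mean:

```python
import statistics

def trim_rts(rts, criterion=2.5):
    """Exclude RTs more than `criterion` standard deviations from the
    mean of one participant-by-condition cell (sketch of the rule
    described in the text)."""
    m = statistics.mean(rts)
    sd = statistics.stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= criterion * sd]
```

For example, a 3,000-ms response embedded in a cluster of ~600-ms responses falls well outside 2.5 SDs and would be dropped.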
Table 2

Descriptive statistics for all conditions in the visual search task

                          Accuracy (prop. correct)    RT (ms)
Condition                 Mean      SE                Mean       SE
Perceptual - Related      0.963     0.007             639.820    24.315
Perceptual - Unrelated    0.961     0.013             619.551    21.711
Conceptual - Related      0.958     0.008             616.757    22.982
Conceptual - Unrelated    0.963     0.007             601.003    21.237

For RT, visual search times did not differ based on whether the item held in memory was a colored disk or a word, resulting in a nonsignificant main effect of Modality, F(1, 21) = 1.02, p = .32, ηp² = 0.04. More importantly, search times were slower when the singleton distractor was related to the contents of memory than when it was unrelated, resulting in a significant main effect of Distractor-type, F(1, 21) = 10.32, p < .01, ηp² = 0.33. Finally, the degree to which memory-related distractors impaired search times did not differ based on whether the item in memory was a colored disk or a word, resulting in a nonsignificant interaction between Modality and Distractor-type, F(1, 21) = 0.23, p = .64, ηp² = 0.01 (see Fig. 2).
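Because each factor in this design has only two levels, each main effect of the repeated-measures ANOVA reduces to a paired t test on per-participant condition means (with F = t²). A minimal sketch of that statistic, using illustrative numbers rather than the study's raw RTs:

```python
import statistics

def paired_t(x, y):
    """Paired t statistic on matched samples. For a two-level
    within-subject factor, the repeated-measures F equals t squared."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / n ** 0.5)
```

For the Distractor-type main effect, `x` and `y` would be each participant's mean RT on related and unrelated trials, averaged over Modality.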
Fig. 2

Singleton distractors related to the contents of WM impair search times relative to unrelated distractors. The degree of impairment does not differ between conceptually generated and perceptually encoded attentional templates. Error bars represent 1 SEM calculated from within-subject variability (cf. Cousineau, 2005)
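The within-subject error bars follow Cousineau's (2005) normalization: each participant's scores are centered on that participant's own mean (then shifted by the grand mean) before per-condition SEMs are computed, removing between-subject variability. A sketch, with illustrative data layout (one row per participant, one value per condition):

```python
import statistics

def cousineau_sem(data):
    """Within-subject SEMs (Cousineau, 2005). `data` is a list of
    per-participant lists, one value per condition."""
    grand = statistics.mean(v for row in data for v in row)
    # Center each participant on their own mean, restore the grand mean.
    normed = [[v - statistics.mean(row) + grand for v in row] for row in data]
    n = len(data)
    n_cond = len(data[0])
    return [statistics.stdev([row[c] for row in normed]) / n ** 0.5
            for c in range(n_cond)]
```

If participants differ only by a constant offset (slow vs. fast responders), the normalized SEMs shrink to zero, which is the point of the correction.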

To further examine our data, we also conducted Bonferroni-Holm corrected paired t tests between the related and unrelated conditions, separately for the conceptual and perceptual conditions (see Table 3). These analyses revealed that search times were significantly slower in the presence of a memory-related distractor both when a colored disk was held in memory, t(21) = 2.23, p_adj < .05, d = 0.48, and when a word associated with a color concept was held in memory, t(21) = 2.53, p_adj < .01, d = 0.67.
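The Bonferroni-Holm adjustment used here multiplies the smallest of m p-values by m, the next smallest by m − 1, and so on, enforcing monotonicity and capping at 1. A sketch:

```python
def holm_adjust(pvals):
    """Bonferroni-Holm adjusted p-values, returned in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        p_adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, p_adj)  # keep adjusted p monotone
        adjusted[i] = running_max
    return adjusted
```

With only two comparisons, as in this analysis, the smaller p-value is doubled and the larger is left unchanged (up to the monotonicity constraint).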
Table 3

Descriptive statistics for Related minus Unrelated difference scores (RT, ms)

Condition     Mean      SE
Perceptual    20.269    9.073
Conceptual    15.754    4.990

Discussion

Our results demonstrate that attentional templates can be generated from conceptual knowledge in the physical absence of a perceptual feature. Specifically, color concepts associated with words held in WM automatically directed attention toward a distractor of the same color in the search array. This in turn increased search times compared with a condition where the distractor color was unrelated to the color concept held in WM. Our findings are consistent with the biased competition model, which posits that the contents of WM can bias attention toward sensory inputs with similar perceptual features (Chelazzi et al., 1998; Desimone & Duncan, 1995). Additionally, we contribute the novel finding that these attentional templates can be generated from semantic concepts retrieved from LTM. In the perceptual condition, we replicated past studies showing that visual objects held in WM cause attentional capture by perceptually similar objects in the environment (Dombrowe et al., 2010; Olivers et al., 2006; Soto et al., 2005). Interestingly, the magnitude of attentional capture did not differ between our conceptual and perceptual conditions. Thus, regardless of whether participants were maintaining a perceptually colored disk identical to the singleton distractor or whether they were simply maintaining a word associated with the color of the distractor, the magnitude of attentional capture was the same. Moreover, our findings are consistent with the results of Soto and Humphreys (2007), where verbal descriptions of search items (e.g., “red square”) held in WM caused automatic attentional capture by their physical referents. We extend this work by showing that implicit conceptual associations can also drive automatic attentional capture, and that the verbal information held in WM need not explicitly refer to distractors in the search array for automatic capture to occur.

Recent studies have suggested that search for real-world objects can be guided by verbal cues, and that verbal attentional templates are not as efficient as perceptual templates (Nako, Smith, & Eimer, 2015). While these results may initially seem to contradict our findings, Nako and colleagues also reported that guidance of search by verbal cues was positively related to the imageability of the cue words. This idea is related to the target-typicality effect, in which verbal category cues are most efficient in guiding search when targets are highly typical of the category (Maxfield, Stalder, & Zelinsky, 2014). In our experiment, verbal stimuli were selected on the basis that they evoked strong mental imagery of their respective colors, which is likely why we found attentional capture in the conceptual condition to be on par with that in the perceptual condition.

Huettig and Altmann (2011) recently showed that color concepts associated with auditorily presented words could increase the probability of spontaneously fixating on perceptually similar objects. In their task, participants’ eyes were tracked as they viewed a set of four images while listening to sentences. Together with our findings, it seems that conceptual knowledge of perceptual features can drive both overt and covert orienting of attention towards perceptually similar objects in the environment. However, Huettig and Altmann’s (2011) experiments employed an unconstrained free-viewing task, which makes it difficult to ascertain whether their observed effects are driven by automatic attentional capture, or some form of explicit strategy. In our study, we provide strong evidence that conceptually generated attentional templates can cause automatic attentional capture by perceptually similar objects. In our task, the memory-matching item in the search array was always a distractor and never the target. Thus, because participants should actively avoid attending to the memory-matching item, the effects we observed are highly suggestive of automatic attentional capture (Olivers, 2009; Olivers et al., 2006; Woodman & Luck, 2007).

The finding that conceptual knowledge can influence processing of perceptual inputs is predicted by sensorimotor models of knowledge representation (Barsalou, 1999; Warrington & McCarthy, 1987). Indeed, neuroimaging studies have shown that the same regions involved in color perception (i.e., left fusiform gyrus, right lingual gyrus) are activated during the verification of conceptual color properties of words (e.g., GRASS–green; Simmons et al., 2007; also see Hsu et al., 2011). Our findings add to this literature by demonstrating that these shared representations can serve the functional purpose of generating attentional templates, thereby amplifying incoming sensory input with shared perceptual features.

Conclusions

Our results demonstrate that conceptual knowledge of color associated with words maintained in WM can modulate the processing of nonverbal visual inputs with shared color features. Additionally, the degree to which conceptually generated templates caused attentional capture appears to be on par with perceptually encoded templates used in most previous studies. These results add to a growing body of literature that clarifies how attentional capture is guided by WM templates. Moreover, we contribute the novel finding that these templates may be generated conceptually, in the physical absence of the visual perceptual features that define the search target. These findings shed light on how conceptual knowledge can activate WM templates, which in turn guide attention in complex visual environments.


Acknowledgments

This work was supported by a Natural Sciences and Engineering Research Council (NSERC) Alexander Graham Bell Canada Graduate Scholarship and Ontario Graduate Scholarship (OGS) awarded to S. Z. S., Canadian Institutes of Health Research (CIHR) and NSERC Discovery grants awarded to S. F., and an NSERC Discovery grant awarded to J. S. C. The authors would like to thank Jay Pratt, Matthew Lowe, Justin Ruppel, Kristin Wilson, and Celia Fidalgo for valuable discussion. The authors declare no conflict of interest.

References

  1. Awh, E., Vogel, E. K., & Oh, S. H. (2006). Interactions between attention and working memory. Neuroscience, 139(1), 201–208.
  2. Awh, E., Belopolsky, A. V., & Theeuwes, J. (2012). Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in Cognitive Sciences, 16(8), 437–443.
  3. Barsalou, L. W. (1999). Perceptions of perceptual symbols. Behavioral and Brain Sciences, 22(4), 637–660.
  4. Brascamp, J. W., Blake, R., & Kristjánsson, Á. (2011). Deciding where to attend: Priming of pop-out drives target selection. Journal of Experimental Psychology: Human Perception and Performance, 37(6), 1700–1707.
  5. Chelazzi, L., Miller, E. K., Duncan, J., & Desimone, R. (1993). A neural basis for visual search in inferior temporal cortex. Nature, 363, 345–347.
  6. Chelazzi, L., Duncan, J., Miller, E. K., & Desimone, R. (1998). Responses of neurons in inferior temporal cortex during memory-guided visual search. Journal of Neurophysiology, 80(6), 2918–2940.
  7. Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3(3), 201–215.
  8. Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson’s method. Tutorials in Quantitative Methods for Psychology, 1(1), 42–45.
  9. Della Libera, C., & Chelazzi, L. (2006). Visual selective attention and the effects of monetary rewards. Psychological Science, 17(3), 222–227.
  10. Della Libera, C., & Chelazzi, L. (2009). Learning to attend and to ignore is a matter of gains and losses. Psychological Science, 20(6), 778–784.
  11. Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1), 193–222.
  12. Dombrowe, I., Olivers, C. N., & Donk, M. (2010). The time course of working memory effects on visual attention. Visual Cognition, 18(8), 1089–1112.
  13. Downing, P. E. (2000). Interactions between visual working memory and selective attention. Psychological Science, 11(6), 467–473.
  14. Downing, P., & Dodds, C. (2004). Competition in visual working memory for control of search. Visual Cognition, 11(6), 689–703.
  15. Folk, C. L., Remington, R. W., & Johnston, J. C. (1992). Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18(4), 1030–1044.
  16. Goodhew, S. C., Kendall, W., Ferber, S., & Pratt, J. (2014). Setting semantics: Conceptual set can determine the physical properties that capture attention. Attention, Perception, & Psychophysics, 76, 1577–1589.
  17. Gozli, D. G., Chasteen, A. L., & Pratt, J. (2013a). The cost and benefit of implicit spatial cues for visual attention. Journal of Experimental Psychology: General, 142(4), 1028–1046.
  18. Gozli, D. G., Chow, A., Chasteen, A. L., & Pratt, J. (2013b). Valence and vertical space: Saccade trajectory deviations reveal metaphorical spatial activation. Visual Cognition, 21(5), 628–646.
  19. Hollingworth, A., & Luck, S. J. (2009). The role of visual working memory (VWM) in the control of gaze during visual search. Attention, Perception, & Psychophysics, 71(4), 936–949.
  20. Hsu, N. S., Kraemer, D. J., Oliver, R. T., Schlichting, M. L., & Thompson-Schill, S. L. (2011). Color, context, and cognitive style: Variations in color knowledge retrieval as a function of task and subject variables. Journal of Cognitive Neuroscience, 23(9), 2544–2557.
  21. Huettig, F., & Altmann, G. T. (2011). Looking at anything that is green when hearing “frog”: How object surface colour and stored object colour knowledge influence language-mediated overt attention. The Quarterly Journal of Experimental Psychology, 64(1), 122–145.
  22. Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10), 1489–1506.
  23. Jonides, J. (1981). Voluntary versus automatic control over the mind’s eye’s movement. Attention and Performance IX, 9, 187–203.
  24. Maljkovic, V., & Nakayama, K. (1996). Priming of pop-out: II. The role of position. Perception & Psychophysics, 58, 977–991.
  25. Maxfield, J. T., Stalder, W. D., & Zelinsky, G. J. (2014). Effects of target typicality on categorical search. Journal of Vision, 14(12), 1–11.
  26. Moores, E., Laiti, L., & Chelazzi, L. (2003). Associative knowledge controls deployment of visual selective attention. Nature Neuroscience, 6(2), 182–189.
  27. Nako, R., Smith, T. J., & Eimer, M. (2015). Activation of new attentional templates for real-world objects in visual search. Journal of Cognitive Neuroscience, 27(5), 902–912.
  28. Nelson, D. L., McEvoy, C. L., & Schreiber, T. A. (1998). The University of South Florida word association, rhyme, and word fragment norms. Retrieved from http://w3.usf.edu/FreeAssociation/
  29. Olivers, C. N. (2009). What drives memory-driven attentional capture? The effects of memory type, display type, and search type. Journal of Experimental Psychology: Human Perception and Performance, 35(5), 1275–1291.
  30. Olivers, C. N., Meijer, F., & Theeuwes, J. (2006). Feature-based memory-driven attentional capture: Visual working memory content affects visual attention. Journal of Experimental Psychology: Human Perception and Performance, 32(5), 1243–1265.
  31. Potter, M. C. (1975). Meaning in visual search. Science, 187, 965–966.
  32. Simmons, W. K., Ramjee, V., Beauchamp, M. S., McRae, K., Martin, A., & Barsalou, L. W. (2007). A common neural substrate for perceiving and knowing about color. Neuropsychologia, 45(12), 2802–2810.
  33. Soto, D., & Humphreys, G. W. (2007). Automatic guidance of visual attention from verbal working memory. Journal of Experimental Psychology: Human Perception and Performance, 33(3), 730–737.
  34. Soto, D., Heinke, D., Humphreys, G. W., & Blanco, M. J. (2005). Early, involuntary top-down guidance of attention from working memory. Journal of Experimental Psychology: Human Perception and Performance, 31(2), 248–261.
  35. Soto, D., Hodsoll, J., Rotshtein, P., & Humphreys, G. W. (2008). Automatic guidance of attention from working memory. Trends in Cognitive Sciences, 12(9), 342–348.
  36. Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643–662.
  37. Warrington, E. K., & McCarthy, R. A. (1987). Categories of knowledge, further fractionations, and an attempted integration. Brain, 110(5), 1273–1296.
  38. Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1(2), 202–238.
  39. Woodman, G. F., & Luck, S. J. (2007). Do the contents of visual working memory automatically influence attentional selection during visual search? Journal of Experimental Psychology: Human Perception and Performance, 33(2), 363–377.
  40. Woodman, G. F., Luck, S. J., & Schall, J. D. (2007). The role of working memory representations in the control of attention. Cerebral Cortex, 17(Suppl. 1), i118–i124.
  41. Woodman, G. F., Carlisle, N. B., & Reinhart, R. M. (2013). Where do we store the memory representations that guide attention? Journal of Vision, 13(3), 1–17.
  42. Yee, E., Ahmed, S. Z., & Thompson-Schill, S. L. (2012). Colorless green ideas (can) prime furiously. Psychological Science, 23(4), 364–369.

Copyright information

© The Psychonomic Society, Inc. 2015

Authors and Affiliations

  • Sol Z. Sun (1, 2)
  • Jenny Shen (1)
  • Mark Shaw (1)
  • Jonathan S. Cant (2)
  • Susanne Ferber (1, 3)

  1. Department of Psychology, University of Toronto, Toronto, Canada
  2. Department of Psychology, University of Toronto Scarborough, Toronto, Canada
  3. Rotman Research Institute at Baycrest, Toronto, Canada
