Introduction

Limits on the processing capabilities of the human visual system make it impossible to identify more than one or, perhaps, a very small number of objects at the same time (Lachter et al., 2004). To be sure, there are visual processes that can be carried out across many items over a wide portion of the visual field (Whitney & Yamanashi Leib, 2018) and it is possible to respond, at above chance levels, to the gist of a scene (Greene & Oliva, 2009; Oliva, 2005) or to the presence of a category like “animal” (Li et al., 2002; Thorpe et al., 1996; Thorpe et al., 2001). Nevertheless, if you want to pick up a pen or find a woodpecker in a tree, you will need to search for it. In cases where identification of a target item requires that the eyes fixate on that item, the rate of search will be, at best, on the order of three to four items per second – the rate of voluntary saccadic eye movements. If eye movements are not a limiting factor, the rate of search is still constrained. Estimates of the maximum rate will depend on one’s model of search (e.g., are observers searching through distracting items with or without replacement? Horowitz & Wolfe, 2005), but that estimate will fall somewhere in the range of 20–50 items per second (Wolfe, 2021).

If items were identified in a strictly sequential manner, this rapid rate would produce implausibly short object identification times (Johnson & Olshausen, 2003). One solution is to propose parallel processing of groups of stimuli (Hulleman & Olivers, 2017; Palmer et al., 2000; Pashler, 1987). An alternative is to propose a pipeline or “carwash” architecture in which items are selected in series, say, about once every 50 ms, but then identified over the course of 200–300 ms, meaning that several items can be in the process of identification at one time – a serial-parallel hybrid model (Wolfe, 2021).
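To make the arithmetic of such a pipeline concrete, here is a minimal sketch (in Python, with purely illustrative timing parameters rather than fitted values): if a new item is selected every 50 ms and each item takes 250 ms to identify, several items are "in the carwash" at any moment and throughput approaches one item per selection interval.

```python
# Minimal sketch of a serial-parallel "carwash" pipeline (illustrative parameters only):
# items are selected one at a time every 50 ms, but each takes 250 ms to identify,
# so several items are in the process of identification simultaneously.

SELECT_INTERVAL_MS = 50     # one new item enters the pipeline every 50 ms
IDENTIFY_TIME_MS = 250      # each item needs 250 ms to be identified
N_ITEMS = 20

# Item i is selected at i * 50 ms and finishes identification 250 ms later.
finish_times = [i * SELECT_INTERVAL_MS + IDENTIFY_TIME_MS for i in range(N_ITEMS)]

# For long sequences, throughput approaches one identification per selection interval
# (20 items/s), even though any single item takes 250 ms from selection to identification.
total_ms = finish_times[-1]
print(f"{N_ITEMS} items identified in {total_ms} ms "
      f"(~{1000 * N_ITEMS / total_ms:.0f} items/s for this short sequence)")

# How many items are simultaneously "in process" at, say, t = 500 ms?
t = 500
in_flight = sum(1 for i in range(N_ITEMS)
                if i * SELECT_INTERVAL_MS <= t < finish_times[i])
print(f"Items in the pipeline at t = {t} ms: {in_flight}")   # -> 5
```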

Regardless of the precise details, it is clear that visual search proceeds in a capacity-limited manner to deploy attention from object to object until a target is found or the search is abandoned. These searches are often very fast. Many search tasks are so effortless that we don’t typically think of them as searches. The act of eating the next pea from the dinner plate may require searching for the fork and searching for the peas but, even though the visual world contains a vast number of possible objects of attention, such searches will be done without the diner consciously noting the search. This is possible because the deployment of attention is not random. It is “guided” (Wolfe et al., 1989) by multiple sources of guidance (Wolfe & Horowitz, 2017). We can very briefly summarize a few of these. Stimulus salience will guide attention in a “bottom-up” manner (Nothdurft, 1993). One could imagine that a shiny fork in an otherwise matte scene might attract attention in this manner. The searcher’s goals can guide attention in a top-down manner to items that have the basic features of the target (Egeth et al., 1984). If the goal is to acquire some peas, top-down guidance to green items would be useful. Knowledge about the scene would make a notable difference in this fork and peas example (Biederman et al., 1973; Henderson & Ferreira, 2004; Vo et al., 2019). Forks are found next to the plate. Peas are on the plate, not on the walls or the floor. Finally (but not exhaustively), the prior history of search can influence subsequent search. Though it has not been studied in the dining context, other research suggests that, having acquired some peas for the preceding mouthful, searchers would be a little quicker to acquire more peas than they would be if they switched to acquiring potatoes. This last form of guidance, known as “priming,” is the topic of the present paper.

In their classic study, Maljkovic and Nakayama (1994) showed observers a set of diamonds, all of one color (red or green) except for one target item of the other color. The observers needed to indicate if the left or the right vertex of the target diamond was clipped off. The color of the oddball item and the other distractor items could be swapped or could remain the same, randomly, from trial to trial. Maljkovic and Nakayama found that observers were a bit faster if the colors remained the same from one trial to the next than if the colors were swapped. This was the case even though a search for a red item among homogeneous green items is about as simple a search as one can get and even though the color varied randomly from trial to trial. This basic finding has been replicated with a variety of different features including orientation (Hillstrom, 2000), shape (Lamy et al., 2006), and size (Huang et al., 2004). The phenomenon is known as Priming of Pop-out (PoP).

A vast literature has grown up around these effects, closely tied to the literature on “attentional capture” by salient singletons (for reviews of the capture literature, see: Luck et al., 2021; Theeuwes et al., 2010). For present purposes, an important hypothesis, emerging from this work, has been the idea that top-down, goal-directed guidance is a by-product of priming. The most energetic statements of this view probably come from Jan Theeuwes as can be seen in the title of his 2013 paper “Feature-based attention: It is all bottom-up priming” (for the origins of the idea see Theeuwes, 1991, 1992, 2013). Lamy and Kristjánsson (2013) endorse a slightly less absolute view, saying, “We conclude that priming accounts for considerable portions of effects attributed to top-down guidance, but that top-down guidance can be independent of intertrial priming.”

In its broadest interpretation, this claim would be unfortunate if true (Wolfe et al., 2003). Priming, by definition, involves repeated search trials. What you found on the last trial (or what you rejected on the last trial) does influence your next search. However, most searches in the real world are not part of a repeated series of the same type of search. You look in the kitchen drawer for the can opener. If you know that the can opener has a yellow handle, it seems plausible that you can guide your attention in a top-down, goal-directed manner to yellow items regardless of whether your immediately preceding searches were searches for something yellow or not. In fact, the claim is less dramatic than it seems. In most cases, it is a claim about the first deployment of attention and/or the first deployment of the eyes during search. It is not necessarily a claim about all deployments across an extended episode of searching. Because the great bulk of the literature on priming and visual search involves singleton search, the first deployment of attention may be the only deployment of interest. Observers are typically making some response concerning a unique singleton, often a highly salient color singleton (Lamy & Kristjánsson, 2013). For that initial deployment of attention, it makes sense to ask if the top-down intentions of the observer make any difference.

Figure 1 illustrates the point. If the task were to report the orientation of the line in the uniquely colored item, “run” trials like #2 would tend to be somewhat faster than trials where the colors “switch” from the previous trial, as in #3. If the task were to report the orientation of the line in the less salient shape singleton (the diamond), salience and priming would combine to misdirect attention to the color singleton first on many trials.

Fig. 1

Examples of a sequence of singleton searches of the sort popularized by Theeuwes (1991)

Suppose, however, that the task was a more extended visual search task. In Fig. 2, the task would be to report the orientation of the target, T. Color is irrelevant. The slope of the reaction time (RT) × set size function is a standard measure of search efficiency. If all the items were the same color, the slope for an “inefficient” search for a T among Ls would be around 20–40 ms/item for relatively large items that do not require fixation on each item. If it were known that the target T was red (as in trials 1 and 2 of Fig. 2), search would be “guided” to red items and the slope would be ~50% of the unguided version (Egeth et al., 1984; Wolfe et al., 1989).

Fig. 2

A sequence of inefficient search trials in which observers would report on the orientation of the T, present on every trial

This is cartooned in Fig. 3A. If the T is always red, then the red T on one trial could prime search for the red T on the next trial. In principle, no goal-directed, top-down guidance would be required. To test this hypothesis, we need to do priming experiments with something other than singleton search tasks. Accordingly, this paper reports on a series of experiments with a structure like Fig. 2. Observers are asked to report the orientation of the stem of a “T” (left or right) in a succession of trials in which a T is present on every trial. The color (or, in Experiments 4 and 5, the shape) of the T is irrelevant. However, that color can repeat, as in frames 1–2 in Fig. 2, creating a “run” trial, or it can switch, as in frames 2–3 in Fig. 2.

Fig. 3

Schematic Reaction Time vs. Set Size functions, illustrating the effects of guidance (A) and three possible consequences of priming (B–D)

There are three qualitatively different outcomes, as shown in Fig. 3B–D. There might simply be no priming with these more complex search tasks, in which case the Switch/Run manipulation will make no difference to RT (3B). There could be an additive effect of priming (3C). As discussed below, additive effects can arise from several sources. The important point is that the additive result differs from the result shown in 3D, which is what would be found if finding a red T on the last trial biased attention toward all red items on the next trial. In that case, the slope of Run trials should be shallower – more guided – than the slope of Switch trials.
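These qualitative predictions can be made concrete with a few lines of arithmetic. The intercept, slope, and priming values in this sketch are placeholders chosen only to illustrate the three patterns in Fig. 3B–D; they are not estimates from the data.

```python
# Illustrative RT predictions for the outcomes in Fig. 3B-D
# (intercepts, slopes, and priming benefit are placeholders, not fitted values).
set_sizes = [6, 12, 18]

def rts(intercept_ms, slope_ms_per_item):
    return [intercept_ms + slope_ms_per_item * n for n in set_sizes]

switch = rts(500, 30)                 # Switch trials: unguided baseline

run_no_priming = rts(500, 30)         # 3B: Run identical to Switch
run_additive   = rts(500 - 30, 30)    # 3C: constant benefit, same slope
run_guided     = rts(500, 30 * 0.5)   # 3D: shallower slope on Run trials

for label, run in [("3B", run_no_priming), ("3C", run_additive), ("3D", run_guided)]:
    benefit = [s - r for s, r in zip(switch, run)]
    print(label, "Run benefit by set size:", benefit)
# 3B -> [0, 0, 0]; 3C -> [30, 30, 30]; 3D -> benefit grows with set size ([90, 180, 270])
```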

In a series of five experiments, we find strong support for an additive effect of the target feature from the previous trial on the RT on the next trial. The effect of priming on search slopes has been examined before. A comprehensive review of these studies and of the broader feature priming literature can be found in Ramgir and Lamy (2022). Many of the relevant studies involve conjunction searches. In those experiments, observers were typically searching for a target with one of two combinations of color and orientation. For instance, the target might be either red and vertical or green and horizontal while the distractors are green-vertical and red-horizontal items. It is known that such two-target tasks are typically much slower than simple conjunction search tasks (Wolfe, 1992). Several studies found additive (3C) priming effects in such experiments (Hillstrom, 2000; Kristjánsson et al., 2002). Becker and Horstmann (2009) found a modest change in slopes (3D). They used eye-tracking data to argue that this was evidence for guidance which would account for the increased slope on Switch trials.

These conjunction experiments differ in one important way from the standard priming of pop-out (PoP) experiments. In PoP, the observer’s task and target remain the same across trials (e.g., observers would report the orientation of the line in the diamond or, perhaps, in the color singleton (see Fig. 1)). In the conjunction experiments, in contrast, the target changes on Switch trials. This would also change the relevant feature guidance. Imagine that you were searching for a red vertical target. You could guide your attention towards stimuli that are red and/or vertical and you would find the target fairly efficiently (Wolfe et al., 1989). If the target on the next trial was also red and vertical, those guiding settings, if preserved, would continue to work. If you knew that the target was in one color subset, you could make a volitional change in your top-down guidance (e.g., Kaptein et al., 1995). If the target had switched to green and horizontal, the initial guidance for red and vertical would fail. At that point, you could change your guidance to green and horizontal and conduct another efficient search, albeit a search slowed by the failed search for red vertical. Alternatively, one might decide not to guide attention at all, leading to inefficient search. In Wolfe (1992), the two-target conjunction task produced shallow slopes but RTs that were several hundred ms slower than those for search for just one type of conjunction target. This suggests that observers choose to do two sequential guided searches rather than abandoning guidance and performing a single, unguided search for a target. The task switch from one set of guiding signals to another may well account for some of the very large priming effects seen in the earlier conjunction priming papers.

There is nothing wrong with the conjunction priming paradigms. They are simply asking a somewhat different question than is asked by the classic PoP experiments, where the task remains constant over Run and Switch trials. The T among Ls task, shown in Fig. 2, is intended to maintain this task consistency. The target is always simply the letter “T.” The color is present but uninformative. There is no need for any deliberate task switching when the color of the T switches from one color to the other. Any difference between Run and Switch trials would be more akin to the PoP effects seen in the singleton experiments. In the five experiments presented below, the data favor the answer illustrated in 3C. There is evidence for priming in these search tasks, and that priming appears to be transient, speeding responses but not decreasing the slope of the RT × Set Size function. A very similar task was used by Lamy et al. (2008) in a study focused on contextual cueing in clinical populations (schizophrenia and depression). Their observers also looked for a T among Ls in displays with a color variation that was not relevant to the task. Observers were faster when the target color repeated on successive trials, and this priming effect did not interact with the display set size (as in Fig. 3C). The point was mentioned only briefly in that paper as it was not a primary focus. In the present paper, we examine this question in detail.

The key question is whether feature priming produces guidance that persists throughout the next search (Fig. 3D). The answer appears to be that it does not. The effects of priming appear to occur either before or after the main search process in which the target is actually found amid the distractors. Ramgir and Lamy (2022) argue that feature priming will "affect processes that occur after the competition for attention is resolved." Another possibility, detailed in the Discussion section below, is that priming guides the initial deployment of attention but not the whole course of an inefficient search for a T among Ls.

Experiment 1: Color priming

Method

Stimuli and procedure

In Experiment 1, observers searched for a T among Ls. The experiment was written in Matlab with PsychToolBox extensions (Brainard, 1997a, 1997b). Due to the COVID-19 crisis, the experiment was run online using Zoom, giving us a level of supervision similar to the lab but not the same level of control over viewing conditions. Items were displayed on a slightly irregular and invisible 5 × 5 grid. The grid filled a square field with each side equal to 80% of the height of the monitor and each letter subtended 10% of that height. On the host monitor, at an approximate viewing distance of 60 cm, this would correspond to approximately 3.1° letters arranged in a 5 × 5 grid subtending approximately 25° on a side. As noted, the remote viewing conditions varied, but observers were constrained to use a desktop or laptop computer and not a handheld device.

The stimuli were large and salient. Set sizes were 6, 12, and 18 items. Targets were capital Ts. Distractors were capital Ls. Each was composed of a vertical and a horizontal bar subtending 3.1° × 0.7° on the host computer. Items were randomly red (RGB: 200, 0, 0) or green (RGB: 0, 180, 60). More precise descriptions of the color would not be meaningful, given the online nature of the task. A target T was present on every trial. Half the items were red and the other half were green. The color of the T was randomly chosen on every trial so that the chance of two successive trials having the same color was 50%. Color was totally irrelevant to the task and observers were told that this was the case. Observers were tested for 600 trials after 10 practice trials.
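The logic of the trial sequence can be sketched as follows (in Python; the experiment itself was written in Matlab with Psychtoolbox, and the function name and parameters here are ours). With two colors and p_repeat = 0.5, this is equivalent to drawing the target color independently on each trial, as in Experiment 1.

```python
import random

def make_trial_sequence(n_trials, colors=("red", "green"), p_repeat=0.5, seed=None):
    """Draw a target color for each trial and label trials Run or Switch.

    With two colors and p_repeat = 0.5, this is equivalent to choosing the color
    independently on every trial (a hypothetical helper for illustration).
    """
    rng = random.Random(seed)
    seq = [rng.choice(colors)]
    for _ in range(n_trials - 1):
        if rng.random() < p_repeat:
            seq.append(seq[-1])                                           # color repeats
        else:
            seq.append(rng.choice([c for c in colors if c != seq[-1]]))   # color switches
    labels = ["first"] + ["Run" if cur == prev else "Switch"
                          for prev, cur in zip(seq, seq[1:])]
    return seq, labels

colors, labels = make_trial_sequence(600, p_repeat=0.5, seed=1)
print(labels.count("Run"), labels.count("Switch"))   # roughly 300 of each
```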

There were two versions of Experiment 1. In 1a, observers clicked on the location of the T (“localize” condition). In 1b, observers identified the T as red or green in an effort to direct observers’ attention more forcefully to the color of the target (2-alternative, forced-choice (2AFC) condition). Observers responded by moving the cursor to the left (green) or right (red) side of the screen. RT was registered when the cursor was detected anywhere in a large region flanking the left and right sides of the stimulus array. This method appeared to be robust for online studies and produced conventional RT results in pilot studies. Stimuli were visible until the response was made on each trial. The experiment was preregistered on the Open Science Framework (https://osf.io/n79z6/), where the data are also posted.

Observers and power

The experiment aims to see whether the color of the target on one trial influences search on the next; in particular, whether the slope of the RT × Set Size function was shallower on “Run” trials than on “Switch” trials. A standard T-vs-L target-present slope would be around 30 ms/item (the current task actually proved to be a bit easier, but all these calculations are proportional). In our hands, the SD of slope measures tends to be about 0.3 of the slope. If we want to detect a reduction in slope to 20 ms/item using a paired t-test, with an alpha of 0.05 and a power (1 − β) of 0.90, we would need nine observers (as computed by G*Power). We typically run 12 observers but, being uncertain about the size of the priming effect in this paradigm, we ran 16 observers, giving us a theoretical power of > 0.95.
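The power calculation can be approximated as follows. This is a sketch rather than the original G*Power computation: it assumes the SD of the paired slope differences is 0.3 × 30 = 9 ms/item, and the resulting n depends on whether the test is treated as one- or two-sided.

```python
from statsmodels.stats.power import TTestPower

# Detect a slope reduction from 30 to 20 ms/item, assuming the SD of the paired
# differences is about 0.3 * 30 = 9 ms/item (Cohen's d of roughly 1.1).
d = (30 - 20) / (0.3 * 30)

n_two_sided = TTestPower().solve_power(effect_size=d, alpha=0.05, power=0.90,
                                       alternative="two-sided")
n_one_sided = TTestPower().solve_power(effect_size=d, alpha=0.05, power=0.90,
                                       alternative="larger")
print(round(n_two_sided), round(n_one_sided))   # roughly 9-11 observers, depending on settings
```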

The 16 observers were recruited from the Brigham and Women’s Hospital Visual Attention Lab volunteer pool. All had at least 20/25 acuity with correction if needed and passed the Ishihara color vision test (Ishihara, 1980). All gave informed consent in accord with our approval from the Institutional Review Board of Brigham and Women’s Hospital (IRB #2009P001253). Observers were paid $12/h.

Results – Experiment 1a: Localize condition

Reaction times (RTs) were filtered to remove outliers shorter than 250 ms or longer than 4,000 ms. This removed 0.6% of all RTs. Errors were extremely rare, accounting for only seven out of more than 9,500 trials.

Figure 4A shows the RT × Set Size functions for this condition. It is clear that there is a modest priming effect and that it is additive. Repeating the color speeds the RT but does not alter the slope of the RT × Set Size function. This conclusion is supported by a 2-way ANOVA. The main effect of the Run/Switch variable is significant (F (1, 15) = 27.62, p < .0001, partial-eta-squared = 0.648). The main effect of set size is, of course, significant (F (2, 30) = 97.12, p < .0001, partial-eta-squared = .866). An interaction of Run/Switch with set size would indicate that the slopes were different for Run and Switch conditions, but that interaction is not significant (F (2, 30) = 0.3834, p = .6848, partial-eta-squared = .025).
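For readers who wish to reproduce this style of analysis on the posted data, the sketch below computes per-observer cell means and runs the 2 (Run/Switch) × 3 (Set Size) repeated-measures ANOVA. The file and column names are assumptions, not the names used in the OSF files.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

trials = pd.read_csv("exp1a_trials.csv")          # assumed file and column names
trials = trials[(trials.rt_ms >= 250) & (trials.rt_ms <= 4000) & (trials.correct == 1)]

# One mean RT per observer x Run/Switch x set size cell.
cell_means = (trials.groupby(["subject", "trial_type", "set_size"], as_index=False)["rt_ms"]
                    .mean())

# A significant trial_type x set_size interaction would indicate different Run and
# Switch slopes (Fig. 3D); its absence is consistent with the additive pattern (Fig. 3C).
res = AnovaRM(cell_means, depvar="rt_ms", subject="subject",
              within=["trial_type", "set_size"]).fit()
print(res.anova_table)
```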

Fig. 4

Reaction Time (RT) vs. Set Size functions for Experiment 1a (A) and 1b (B). Priming effect is 28 ms for Experiment 1a and 40 ms for 1b

Results – Experiment 1b: 2AFC condition

Again, RTs were filtered to remove outliers shorter than 250 ms or longer than 4,000 ms. This removed 0.6% of all RTs. Errors accounted for 2% of all trials, with no observer having more than 6% errors.

Figure 4B shows the RT × Set Size functions for this condition. Experiment 1b was conducted in order to see whether focusing attention on color would increase the potency of the priming effect. It did not. As before, there is a modest priming effect and it is additive. The main effect of the Run/Switch variable is significant (F (1, 15) = 6.579, p = .0216, partial-eta-squared = .305), as is the effect of set size (F (2, 30) = 116.6, p < .0001, partial-eta-squared = .886). The interaction of Run/Switch with set size is not significant (F (2, 30) = 0.4221, p = .6595, partial-eta-squared = .027).

Comparing localize and 2AFC methods

We used 2AFC and localization methods in Experiment 1 in order to determine whether one method produced stronger priming effects than the other. There was, in fact, no evidence that the method made a difference beyond changing the mean RT. A 2-way ANOVA on the mean RTs showed an obvious main effect of the response method (F (1, 30) = 6.26, p < .0001, partial-eta-squared = .470). The effect of the Run/Switch variable was also significant (F (1, 30) = 9.7, p = .004, partial-eta-squared = .244). This reflects the priming effect. The interaction of Run/Switch with the response method was not significant (F (1, 30) = 1.1, p = .301, partial-eta-squared = .036), suggesting that the size of the priming effect did not reliably differ between methods. The same ANOVA on the slope data showed no significant effects (all F(1,30) < 2, all p > .18, all partial-eta-squared < .06), indicating that search efficiency was not reliably influenced by response type or by the Run/Switch variable.

Discussion

There are two interesting aspects to these results. First, they provide evidence for a modest (20–40 ms), but clear, color priming effect in a relatively inefficient search task where the color is irrelevant to the search. As noted earlier, the bulk of the search priming literature concerns singleton searches where the actual search is extremely efficient. The first deployment of attention is often the only relevant deployment of attention in such tasks. In the present experiment, observers are making multiple deployments of attention. The data are consistent with a priming effect that only influences the first of those deployments, as if the observer were saying (implicitly): "Oh, the last target was red. I guess I will start with a red item." The data are also consistent with an effect that occurs after the search is complete, as if the observer were more willing to commit to the final response if the irrelevant color of the current target matches the color of the previous target. Second, Experiment 1 does not produce the very large priming effects seen in some of the conjunction priming experiments. We assume that this is because, in our task, Switch trials did not require a switch in target type (the target was always a T). Only the target color might change between trials.

Experiment 2: Predictive priming

Methods

In Experiment 1, the color of the T on the current trial did not predict the color on the next trial. There was a 50% chance that the target color on the next trial would match the color on the current trial. In Experiment 2, we reduced the probability of a color switch to 25%, so that the color on the current trial was a fairly reliable predictor of the color on the next trial. If the target was red on one trial, there was a 75% chance that the target would be red on the next trial. In keeping with our power calculations, we aimed for 12 observers, but 13 were tested and are reported here. All other aspects of the methods were the same as in the localization version of Experiment 1. The experiment was pre-registered on the Open Science Framework (https://osf.io/ht5rj/) where the data are also posted.
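The structure of such a sequence, and the run lengths it produces, can be sketched as follows (illustrative code, not the experiment's own). With a 75% repeat probability, the run-length distribution is roughly geometric: many short runs and a useful number of longer ones.

```python
import random
from collections import Counter

# Sketch of an Experiment 2-style sequence (25% switch / 75% repeat) and of the
# run-length tally used in the analysis below; a run of length 1 is a Switch trial.
rng = random.Random(2)
colors = ["red"]
for _ in range(599):
    colors.append(colors[-1] if rng.random() < 0.75
                  else ("green" if colors[-1] == "red" else "red"))

run_lengths, current = [], 1
for prev, cur in zip(colors, colors[1:]):
    if cur == prev:
        current += 1
    else:
        run_lengths.append(current)
        current = 1
run_lengths.append(current)

print(Counter(run_lengths))   # roughly geometric: many short runs, some of length 5 or more
```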

Results

Figure 5 shows that the results for Experiment 2 are very similar to those for Experiment 1. In this case, the main effect of the Run/Switch variable was marginal (F (1, 12) = 4.392, p = .0580, partial-eta-squared = .268). There was the usual effect of set size (F (2, 24) = 31.83, p < .0001, partial-eta-squared = .726). With nearly identical slopes in the Run and Switch conditions, the interaction of Run/Switch with set size is not significant (F (2, 24) = 0.1592, p = .6595, partial-eta-squared = .013).

Fig. 5

Reaction Time (RT) × Set Size functions for Experiment 2. Priming effect is 29 ms for Experiment 2

Because the probability of a Switch was only 0.25 and the chance of a Repeat was 0.75 on each trial, Experiment 2 produced a substantial number of runs with lengths from 1 to 5 (where a run of 1 is a Switch trial). This allowed us to test the possibility that priming might alter the slope of the RT × Set Size function after multiple trials of the same color. However, an ANOVA with run length and set size as factors showed no systematic effect of run length on slopes (Run Length × Set Size interaction: F(8, 96) = 1.704, p = .1072, partial-eta-squared = .124). The analysis does produce a significant main effect of run length (F(4, 48) = 3.258, p = .0192, partial-eta-squared = .214).

Thus, making color on one trial predictive of the color on the next did not strengthen the priming effect. A modest priming effect was present but, as in Experiment 1, it was additive and did not alter the slope of the RT × Set Size functions. We did not explicitly suggest that observers should use the preceding trial as a fairly reliable prediction of the next trial, but it would not have been surprising if they had noticed the tendency of colors to repeat. In any case, they did not make use of this information to guide search.

Experiment 3: Guided versus unguided search

Method

Perhaps color guidance does not work for these particular stimuli. That is, perhaps searching for a T known to be red would be no more efficient, with these stimuli, than searching for a T of unknown color. If that were the case, it would not be surprising if priming failed to improve the efficiency of search. Accordingly, in Experiment 3, we included a standard guided condition as a control. In that condition, observers looked for a red T on every trial. Knowing that the target was always red, they should guide their attention toward red items (Egeth et al., 1984). We also made several other methodological changes that we thought might enhance priming.

Figure 6 cartoons a sequence of trials. In this experiment, observers searched for a T that could be red, green, or black. This means that, if priming guides attention to the primed color for the duration of the next trial, only one-third of the letters will be in the primed color, and it should be easier to see an effect on the slope of the RT × Set Size functions. We also changed the response. Now all letters were tilted, and observers were instructed to report whether the stem of the T was tilted top-left or top-right. Observers answered by pressing the corresponding key on the keyboard (the left or right arrow key). If they responded incorrectly, the trial was removed from analysis. Observers were tested for 20 practice and 300 experimental trials. The probability that the color of the T would remain the same on the next trial was 50%. In addition to this block of trials, observers ran a second block of trials in which all Ts were red (standard guided condition). Observers were told explicitly to search for red Ts in this "Guided Search" condition. Observers were tested for 20 practice and 300 experimental trials in the Guided Search block as well. Block order was randomized across observers. In all other particulars, Experiment 3 replicated Experiments 1 and 2. The experiment was inadvertently not pre-registered. However, the data are posted at OSF (https://osf.io/qvd87/).

Fig. 6

Example of a sequence of trials for Experiment 3

Results

Figure 7 shows that the basic priming results were replicated again. There is a clear priming effect of about 80 ms (F(1, 9) = 27.87, p = .0005, partial-eta-squared = .756), but, as before, it is additive with set size. There is no evidence for an RT × Set Size interaction, as there would be if there were a slope difference between Run and Switch trials (F(2, 18) = 0.09, p = .91, partial-eta-squared = .011). To assess whether these stimuli produce standard color guidance, we use all the trials, Run and Switch, to create an "Unguided" function (dotted line in Fig. 7) and compare that to the results for the Guided block of trials in which all of the targets are red. Note that the distractors were identical in the Guided and Unguided blocks of trials. As can be clearly seen in Fig. 7, there is a very robust effect of color guidance, both as a main effect (F(1, 9) = 36.97, p = .0002, partial-eta-squared = .804) and, importantly, in the interaction of guidance with set size (F(2, 18) = 5.995, p = .0101, partial-eta-squared = .400). The interaction reflects the decrease in the slope of the RT × Set Size function from 34 to 20 ms/item. If guidance were perfect, we would expect the slope to be reduced to about 11 ms/item because only one-third of the items would be relevant in the guided search. Nevertheless, the results clearly show a robust effect of color guidance.

Fig. 7

Reaction Time (RT) × Set Size functions for Experiment 3. Note that "Unguided" results (dotted line) are the average of the "Switch" and "Run" data. Guided data come from a separate block of trials where all targets are red Ts. Error bars are ± 1 SEM. Priming effect is 91 ms for the Unguided condition

Research on priming of pop-out effects has asked whether the priming effect occurs early or late in the course of the trial (Lamy et al., 2010). The sustained feature guidance that we are failing to find here would be an example of an early, feature priming effect, in which the color of the previous target guides the deployment of attention on the current trial. The alternative, response- or retrieval-based account proposes that priming occurs after the target is found but before the response is made (Huang et al., 2004). The observer checks the current response against the last one, and responses are faster when they repeat, slower when they do not. One marker for this late-stage account is an interaction between feature repetition and response repetition. We can look for that interaction in the data for Experiment 3 since the orientation of the target T is unrelated to its color. An ANOVA with stimulus (color) repetition and response (direction) repetition as factors shows the main effect of stimulus priming (F(1,11) = 10.9, p = .007, partial-eta-squared = .50). There is no main effect of response repetition (F(1,11) = 0.80, p = .39, partial-eta-squared = .07). The critical interaction of stimulus and response does not quite reach the 0.05 level of significance (F(1,11) = 4.4, p = .06, partial-eta-squared = .29). This suggestive result can be seen as generally supportive of Lamy et al.'s (2010) "dual-stage account of inter-trial priming effects" in Priming of Pop-out, in which both feature priming and late response checking contribute to the pattern of RTs. We will return to this topic in the Discussion.
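For concreteness, the repetition factors for this analysis can be derived from the trial-by-trial data along the following lines. The text does not specify the analysis software; this Python sketch uses assumed file and column names.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

trials = pd.read_csv("exp3_trials.csv").sort_values(["subject", "trial_index"])

# Code each trial for whether the target color and the response (stem direction)
# repeated from the immediately preceding trial.
g = trials.groupby("subject")
trials["color_repeat"] = g["target_color"].shift() == trials["target_color"]
trials["response_repeat"] = g["response"].shift() == trials["response"]
trials = trials[g.cumcount() > 0]          # drop the first trial of each observer

cell = (trials[trials.correct == 1]
        .groupby(["subject", "color_repeat", "response_repeat"], as_index=False)["rt_ms"]
        .mean())

# Main effects of color_repeat (stimulus priming) and response_repeat, plus their
# interaction, which is the marker for a late, response-checking locus.
print(AnovaRM(cell, depvar="rt_ms", subject="subject",
              within=["color_repeat", "response_repeat"]).fit().anova_table)
```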

Experiment 4: Shape priming

Experiments 1–3 are consistent in showing a priming effect that is additive with the effects of set size. All of those experiments (and, indeed, the majority of the priming literature) involve priming by color. In Experiments 4 and 5, we test whether these results generalize to a different priming feature, in this case, shape.

Methods

Stimuli and procedure

Experiment 4 was designed as a free-standing online app. It was written in JavaScript using the React library. The experiment was hosted on the Firebase platform.

The stimuli are shown in Fig. 8. Observers searched for a T among Ls and, as in Experiment 3, they reported the orientation of the stem of the T that was present on every trial. Two very different shapes (here called “Sharp” and “Dot”) were used. The shape was irrelevant to the task and changed at random from trial to trial; the shape of the T on one trial did not predict its shape on the next trial. Each stimulus fit into a 50 × 50-pixel box. The stimuli were placed in a pseudorandom manner starting from the center column of the display. The first stimulus was placed at a random vertical position within the boundaries of the center column, and each subsequent stimulus was placed in the columns directly adjacent to it. In this way, the display expanded from the center column. This process produced displays of roughly constant density but of different spatial extent: larger set sizes occupied more real estate than smaller set sizes because they filled more columns. This differs from typical methods that hold overall display size constant while allowing density to increase with set size. These display issues are orthogonal to the question of priming that is at the heart of these experiments.
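One plausible reading of this placement scheme is sketched below. This is an assumption for illustration, not the experiment's JavaScript: columns are filled outward from the center, each item at a random row within its column, so larger set sizes span more columns at roughly constant density.

```python
import random

def place_items(n_items, n_rows=5, seed=None):
    """One plausible reading of the center-out placement (an illustration, not the
    experiment's actual code): fill columns outward from the center
    (0, +1, -1, +2, -2, ...) with up to n_rows items per column, each at a
    distinct random row, so larger set sizes span more columns."""
    rng = random.Random(seed)
    positions, col_index = [], 0
    while len(positions) < n_items:
        col = 0 if col_index == 0 else ((col_index + 1) // 2) * (1 if col_index % 2 else -1)
        rows = rng.sample(range(n_rows), min(n_rows, n_items - len(positions)))
        positions.extend((col, row) for row in rows)
        col_index += 1
    return positions

print(place_items(12, seed=3))   # (column, row) cells over the center and flanking columns
```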

Fig. 8

Stimuli (left) and two sample trials (right) for Experiment 4

Because of the unsupervised, online nature of the experiment, we did not have the normal control over viewing distance, lighting, etc. that we would have in the lab, or even with supervised online testing via Zoom. Observers were asked to find the T among the Ls and to report whether the T was tilted to the left or right by pressing the corresponding key (the left or right arrow key, respectively). They were instructed to respond as quickly and accurately as possible. In order to shorten the experiment and make it more compatible with standard online practice, we reduced the number of trials to 10 practice and 200 experimental trials. In other respects, Experiment 4 resembled Experiment 3. The experiment was pre-registered on the Open Science Framework (https://osf.io/8znur/) where the data files are publicly available.

Observers and power

Given the reduced number of trials and the vagaries of online research, we doubled our intended number of observers to 24. We collected data from 37 individuals. Nine of these were eliminated because they did not complete the study or had unacceptably high error rates (greater than 20%). This left 28 observers in the study.

Observers were recruited via Amazon Mechanical Turk (MTurk) and tested on the CloudResearch online platform. Participation was restricted to individuals located in the USA with an approval rate above 95%. Observers attested to 20/25 vision with correction. Procedures were approved by the Institutional Review Board at Brigham and Women’s Hospital (IRB #2007P000646). Observers were paid $8/h.

Results

To our surprise, as is shown in Fig. 9, these stimuli produced a very inefficient search for the T among Ls. As often happens online, there was also substantial noise in the RT data. Accordingly, we performed a more elaborate outlier filtering process for these data. We initially removed observers who did not complete the experiment and the small number of RTs greater than 10 s. Then, for each observer, we separated all of the correct RTs into Run and Switch trials and organized them by set size. Within each cell, we removed any RTs more than 3 SD away from the mean. After this filtering, we removed from the experiment any observer who had less than 80% of trials remaining overall or who had less than 70% of trials remaining in any one cell of the experiment. This left us with 28 observers whose data are plotted in Fig. 9.
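A sketch of this filtering pipeline (with assumed file and column names, not the actual data files) looks like the following.

```python
import pandas as pd

trials = pd.read_csv("exp4_trials.csv")                       # assumed file and column names
trials = trials[(trials.correct == 1) & (trials.rt_ms <= 10_000)]

# Remove RTs more than 3 SD from the mean within each observer x Run/Switch x set size cell.
cells = ["subject", "trial_type", "set_size"]
def drop_3sd(cell):
    m, sd = cell.rt_ms.mean(), cell.rt_ms.std()
    return cell[(cell.rt_ms - m).abs() <= 3 * sd]
kept = trials.groupby(cells, group_keys=False).apply(drop_3sd)

# Exclude observers with < 80% of trials surviving overall, or < 70% in any one cell.
overall = kept.groupby("subject").size() / trials.groupby("subject").size()
per_cell = kept.groupby(cells).size() / trials.groupby(cells).size()
excluded = set(overall[overall < 0.80].index) | set(per_cell[per_cell < 0.70].reset_index()["subject"])
kept = kept[~kept["subject"].isin(excluded)]
```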

Fig. 9

Reaction Time × Set Size functions for shape priming stimuli in Experiment 4. Circles show Run data and squares show Switch data. Solid lines show the results for the “Sharp” targets, while dashed lines show results for the “Dot” targets. Error bars show ± 1 SEM. Priming effect is 111 ms for the Sharp stimuli and 223 ms for the Dot stimuli

Though we had not pre-registered the separate analysis of target type, exploratory data analysis showed that observers were faster to respond to “Dot” stimuli than to “Sharp” stimuli, so we included target type as a factor in a three-way ANOVA with Run/Switch and Set Size as the other factors. The target type main effect is significant (F(1, 27) = 14.5, p < .0001, partial-eta-squared = .350). More importantly, there is a clear priming main effect (F(1, 27) = 31.63, p < .0001, partial-eta-squared = .54). Looking at Fig. 9, there is a hint of a slope difference, with Switch slopes being numerically steeper than Run slopes. However, the corresponding Set Size × Run/Switch interaction is not significant (F(2, 54) = 1.54, p = .22, partial-eta-squared = .054), nor is the triple interaction including target type (F(2, 54) = 0.688, p = .51, partial-eta-squared = .025). The interaction of Set Size × Target Type is significant (F(2, 54) = 3.29, p = .045, partial-eta-squared = .108). This shows that the Sharp slopes are steeper than the Dot slopes and suggests that the analysis can detect a reliable slope difference when one is present.

As in Experiment 3, we can examine the interaction of Stimulus and Response effects. An ANOVA with Stimulus and Response Priming as factors shows the main effect of stimulus priming (F(1,27) = 17.4, p = .0003, partial-eta-squared = .39). There is no main effect of response priming (F(1,25) = 1.74, p = .20, partial-eta-squared = .06). In this case, the critical interaction of stimulus and response factors is not significant (F(1,27) = 0.7, p = .42, partial-eta-squared = .02). This could be taken as more supportive of an early, feature-based origin for the priming seen in Experiment 4.

Discussion

Experiment 4 shows that priming in an inefficient search for a T among Ls is not limited to priming by color. The shape of the preceding target influenced the RT for the next target. Somewhat unintentionally, the experiment also showed that this basic pattern of results continues to hold even with a very inefficient search. We can only speculate about why the search was so inefficient. One possibility is that these stimuli produced particularly severe peripheral crowding effects (Levi, 2008; Strasburger, 2020). Another is that these Ts and Ls were oddly hard to identify. In any case, the data are roughly consistent with a search in which observers needed to fixate on items at random until they stumbled upon the target. In a similar vein, it is surprising that the Sharp stimuli are harder to report than the Dot stimuli. It is possible that it was harder for observers to decide on the orientation of those items. The thin stem of the Dot Ts might have been less ambiguous than the triangular stem of the Sharp Ts.

These stimulus factors, while of some interest, are not germane to the main issue of interest here. Shape priming, like color priming, produced clear priming but no evidence for an increase in the slope of the RT × Set Size function for Switch trials.

Experiment 5: Shape priming replication

Methods

Germane or not, we were puzzled by the inefficiency of the search in Experiment 4 and by the difference between the responses to the two target types. Accordingly, we performed a second shape priming experiment, using the “Blob” and “Arrow” stimuli shown in Fig. 10.

Fig. 10

Stimuli (left) and two sample trials (right) for Experiment 5

The shapes were designed, albeit in an ad hoc manner, with the intent to make search easier. To further that goal, we also doubled the size of the items to 100 × 100 pixels and reduced the set sizes to four, seven, and ten. In all other respects, Experiment 5 replicated Experiment 4. The data are available at https://osf.io/gaxs2/.

We tested 31 observers online. From this group we obtained 23 usable data sets as described below.

Results

RTs were filtered in the same manner as in Experiment 4, eliminating RTs greater than 3 SD from the mean for the RTs in an observer × TrialType × Set Size cell. We then removed observers who did not complete the experiment or who had less than 80% acceptable trials overall. We also removed observers who had less than 70% acceptable trials in any one cell.

The results in Fig. 11 show that our efforts to make the task easier were unsuccessful, as this search was also very inefficient. The results of Experiment 5 constitute a clear replication of Experiment 4. Observers were faster to respond to “Arrow” stimuli than to “Blob” stimuli (F(1, 22) = 18.7, p < .001, partial-eta-squared = .459). As before, there is a solid priming main effect (F(1, 22) = 7.8, p = .011, partial-eta-squared = .262). Again, the Set Size × Run/Switch interaction is not significant (F(2, 44) = 1.54, p = .8249, partial-eta-squared = .036), nor is the triple interaction including Target Type (F(2, 44) = 0.3852, p = .6826, partial-eta-squared = .017). As in Experiment 4, the interaction of Set Size × Target Type is significant (F(2, 44) = 3.064, p = .036, partial-eta-squared = .141). This time, Blob slopes were steeper than Arrow slopes.

Fig. 11

Reaction Time × Set Size functions for the shape priming stimuli in Experiment 5. Circles show Run data and squares show Switch data. Solid lines show results for the “Blob” targets, while dashed lines show results for the “Arrow” targets. Error bars show ± 1 SEM. Priming effect is 47 ms for the Blob stimuli and 68 ms for the Arrow stimuli

An ANOVA with Stimulus and Response Priming as factors shows the main effect of stimulus priming (F(1,23) = 5.9, p = .023, partial-eta-squared = .20). There is no main effect of response priming (F(1,23) = 0.83, p = .37, partial-eta-squared = .04) nor is the interaction significant (F(1,23) = 2.9, p = .10, partial-eta-squared = .11).

General discussion

The five experiments presented here tell a clear story. A task-irrelevant feature of the target on one trial in an inefficient visual search will have an impact on the next trial. If the feature repeats, RT for that subsequent trial will be shorter, on average, than if the feature changes. The efficiency of that search, as measured by the slope of the RT × Set Size function, will not differ significantly between Run and Switch trials. The effect of priming is largely additive with the effects of set size. The magnitude of this additive effect is quite consistent across experiments (in the tens of milliseconds). The usual interpretation of an additive RT component in visual search is that it reflects an effect outside of the search process. If some factor influences each deployment of attention or the rate of parallel processing of all of the items in a search, one would expect a change in the search slope.

While these data reject the hypothesis that the preceding trial sets up automatic feature guidance for the entirety of the current trial, priming could still reflect a form of feature guidance. The idea is illustrated in Fig. 12A. Perhaps the effects of feature priming are transient, guiding only the start of the current trial. If priming had a transient effect on an attention-guiding “priority map,” it could bias the first item selected in search. Guided Search (Wolfe, 2021) and many other models (Miconi et al., 2016; Moran et al., 2013; Schwarz & Miller, 2016; Scolari et al., 2014) propose that attention is deployed to the peak of a priority map (Fecteau & Munoz, 2006; Serences & Yantis, 2006) that receives input from several sources (Wolfe & Horowitz, 2017). Awh et al. (2012) focused on three of those sources: physical salience (bottom-up guidance), current goals (top-down guidance), and selection history (priming). Simple accounts of feature guidance would predict that guidance by a basic feature biases attention toward that feature for the entire guided search. In that case, priming would be expected to bias the entire search toward the primed feature. This would have produced a reduction in slope on Run trials, the result that was not seen here.

Fig. 12

Two accounts of feature priming effects, showing how priming could influence the beginning of search (A) or the end (B)

However, guidance does not need to be sustained over the entire search. One of the puzzles concerning the role of bottom-up salience in visual search is why observers do not remain fixated on salient but irrelevant elements in an image. Consider a chest X-ray, for example. A radiologist may be looking for the faint hints of pneumonia (low contrast and diffuse). The ribs, the spine, and the heart are all much more salient, but the radiologist has no trouble directing attention to more task-relevant features. Similarly, in natural scenes, one would not want attention to get stuck on the highlights on glossy cars while looking for a street name (Einhauser et al., 2008; Henderson et al., 2007). One possibility would be to attend to each salient item and then inhibit that location, allowing attention to move to less salient items (Klein, 1988, 2000). However, the data indicate that we lack the ability to reliably inhibit all rejected distractors (Horowitz & Wolfe, 1998, 2003). Moreover, in the radiology example, one would not want to assume that the radiologist looks at and inhibits all the salient bright spots before turning to the subtle stimuli that are the actual objects of the search.

One solution is provided by the work of Donk and her colleagues (Donk & Soesman, 2010; Van Zoest & Donk, 2008). They argue that bottom-up salience has a transient impact on the priority map, as is cartooned in Fig. 12A (red). Salience effects rise rapidly, producing various attention capture effects, but then fade quickly, declining to a lower level though, presumably, not all the way back to baseline. It is probably a good idea to attend to highly salient items in the current scene, but then to let top-down, user-driven goals come to dominate an ongoing search in a more sustained manner. After all, the observer’s goals for search ought to be sustained across the entire search. If you are looking for a red car, it would be foolish to stop guiding to red after the first deployment of attention. In Fig. 12A (blue), we show sustained top-down guidance by goals rising to a plateau and remaining there. It rises comparatively slowly since there is evidence that it can take a substantial period of time for top-down guidance to reach full strength (E. M. Palmer et al., 2019).

An early-locus account of priming effects would hold that priming behaves like bottom-up salience in having a robust but transient effect on the priority map (cartooned in green in Fig. 12A). The shapes of the time courses in Fig. 12 should not be taken as anything more than a cartoon. The idea is that priming would bias the first deployment of attention and/or of the eyes to items with the primed feature, but that the attention-guiding effects of this bias would fade fairly rapidly during search. An introspective feel for this sort of priming may be experienced in "hybrid foraging" tasks (Wolfe et al., 2016) like searching for specific pieces in a box of LEGO (Hout et al., 2022; Sauter et al., 2020). Initially, the box is a jumble of pieces, but when you find one red window frame, suddenly the other red window frames seem to pop out with increased salience (Theeuwes & Van der Burg, 2013). That salience is useful if you want to collect multiple LEGO windows. It can be overridden by top-down control if it is time to look for something else.

It is also possible to propose a later locus for priming effects. A generic way to think about such a late locus is shown in Fig. 12B. One way of thinking about search is as a series of decisions that can be modeled as diffusion processes (Hawkins & Heathcote, 2020; Ratcliff, 1978). An item is selected and information begins accumulating as to its identity as, in this case, a target, T, or a distractor, L. We can imagine that the threshold for concluding that an item is a "T" is higher when the color switches (orange) and lower when it does not (green). Higher thresholds take longer to reach on average. As a result, Switch trials will be longer, on average, than Run trials. In this account, the color of the target would not be expected to have an effect on the distractor threshold (shown in red). Distractors on each trial are some mix of target and non-target colors. Any effects of priming from the previous target attributes would be similar on each trial, regardless of the features of the target on the current trial. Thus, it would take about the same amount of time to reject a distractor on a Run trial as on a Switch trial. Since priming does not affect the processing of each distractor, it adds a constant to the RT and does not change the effect of set size (the slope).
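A toy simulation makes the logic of this account concrete. In the sketch below, all parameter values are illustrative and the accumulator is a deliberately simplified stand-in for a diffusion process: only the target threshold differs between Run and Switch trials, so the priming benefit is paid once per trial and the RT × Set Size slope is unchanged.

```python
import random

def simulate_trial(set_size, run_trial, drift=1.0, noise=1.0, dt_ms=5,
                   distractor_thresh=20, target_thresh_run=25, target_thresh_switch=30,
                   rng=random):
    """Toy accumulator version of the account in Fig. 12B (illustrative parameters only).

    Items are inspected one at a time and evidence accumulates to a threshold.
    Only the target threshold differs between Run and Switch trials, so priming
    adds a constant to RT without changing the RT x Set Size slope."""
    def time_to_threshold(threshold):
        evidence, t = 0.0, 0
        while evidence < threshold:
            evidence += drift + rng.gauss(0, noise)
            t += dt_ms
        return t

    n_rejected = rng.randrange(set_size)   # on average, about half the items precede the target
    rt = sum(time_to_threshold(distractor_thresh) for _ in range(n_rejected))
    return rt + time_to_threshold(target_thresh_run if run_trial else target_thresh_switch)

for ss in (6, 12, 18):
    run = sum(simulate_trial(ss, True) for _ in range(2000)) / 2000
    switch = sum(simulate_trial(ss, False) for _ in range(2000)) / 2000
    print(ss, round(run), round(switch), round(switch - run))   # Run benefit ~constant across set size
```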

What is this decision process in which runs and switches have an effect on target decisions but not on distractor decisions? A number of researchers have endorsed the idea that there is a checking step before the observer is willing to commit to confirming the presence of a target. One could conceive of this as an act of retrieval from episodic memory as the observer seeks to confirm that this is what they were asked to find (Huang, Holcombe, & Pashler, 2004). The target that was retrieved on the last trial could influence retrieval on the next trial, with faster retrieval for repeated responses. Of course, the priming effect does not need to be entirely attributed to an early process or a late process. As Lamy, Yashar, and Ruderman (2010) propose, both processes could be active. One way to look for evidence of a late, response-checking process is to look for an interaction between feature and response factors. In the present experiments, Experiment 3 shows a marginally significant interaction while Experiments 4 and 5 do not. However, these experiments were not designed to discriminate between early, late, or combined accounts of priming. So, while the data could be seen as favoring an early, transient feature priming account (e.g., Becker, 2007), they should not be seen as definitive on this point.

While the results of the present experiments cannot adjudicate between an early versus a late locus for the effects of feature priming, they do show that priming does not produce sustained guidance throughout an entire, extended search. Other accounts of the results can also be rejected. An additive RT effect in a search experiment could reflect a difference in the perceptual processing required before search begins. Wolfe et al. (2002) invoked this sort of pre-search, perceptual processing to explain why it took longer to search a messy desktop stimulus than a clean one. In the present experiments, a perceptual account seems unlikely, because the visual stimuli are essentially identical (on average) on Run and Switch trials. Given that the task is the same on every trial, it is hard to see how an early visual pre-processing step could be influenced by the status of the preceding trial.

A late locus, based on priming of the motor response, can also be rejected. In the two versions of Experiment 1, the motor response was related to the color in the 2AFC version (Fig. 4B), but not in the Localize version (Fig. 4A). If the additive priming effects were the result of motor response priming, we might expect to see the most robust priming effects when the priming feature was response related (Becker, 2007). In fact, the priming effect is, if anything, a bit smaller in the 2AFC condition of Experiment 1 (avg = 27 ms) than in the Localize condition (avg = 40 ms). The difference is not significant (p = .33), and it is in the opposite direction from the predictions of a motor response priming account. In Experiments 3–5, where motor response and color repetition were decoupled, there was no statistically reliable main effect of response repetition. Moreover, a robust response priming effect might be expected to be larger (Miller, 1998).

Clear evidence for the more plausible of the early and late accounts will require further experimentation. Eye tracking could provide more clarity by showing whether priming influences the first deployment of the eyes in these extended search tasks (e.g., Becker, 2010; Kruijne & Meeter, 2016). Given the vast range of results on this general topic, it seems likely that the conclusions will remain open to debate (Ramgir & Lamy, 2022). If pressed, we would favor the early locus, based on the introspection described above that finding a target seems to enhance the visibility of similar items. We grant, however, that introspection about LEGO search is not a substitute for convincing data.

While these results do argue against the hypothesis that top-down guidance is a form of feature priming, the data and the ideas about transient priming and/or changes in decision thresholds do not contradict the empirical basis of the more sweeping claims about priming. As noted in the introduction, those claims are mostly based on studies where the first deployment of attention is the only relevant deployment. One could reasonably assert that “Feature-based attention … is all bottom-up priming” (Theeuwes, 2013), if one was talking about that first deployment. Even in that case, there is an argument, discussed in the introduction, about whether this bottom-up priming is completely dominant. The conclusion of the present paper is that any dominance of guidance by priming appears to be fleeting. Others would argue that the priming comes after the work of attention is done. The important conclusion is that feature priming does not produce guidance by the primed feature that is sustained across the length of an inefficient search.