Color and Visual Search, Color Singletons
Visual search is a task involving the detection of a unique item within a multi-item display.
Characteristics of the Visual Search Paradigm
Historical Background on Visual Search Tasks in Attention Research
The visual search task is one of the most commonly used paradigms in vision research. There are over 5,000 articles in the Institute for Scientific Information’s (ISI) database that refer to visual search in their title. The popularity of the visual search paradigm stems from the fact that it operationalizes a vital task performed by both humans and nonhuman animals. Eckstein’s review summarizes many examples of everyday search situations. In natural environments, foraging for food involves searching for edible fruit, whilst in man-made environments, operators monitor complex images in order to detect security-relevant or medically relevant information. In many real-world search situations, color is an important determinant of performance due to its ability to make certain features of the scene more or less conspicuous. For example, in order to avoid detection by predators, prey often adopt coloration that acts as camouflage, preventing them from “popping out” when seen in their natural environment.
The visual search task came to prominence in the 1980s, providing the initial evidence base for Treisman’s highly influential Feature Integration Theory (FIT). FIT posited that attentional deployment is guided by multiple, distinct feature maps that are activated in parallel. Visual search provided an excellent paradigm to test this theory, with the potential to reveal the underlying neural representations of feature maps using relatively simple, behavioral methods (for a review, see ). The slope of the reaction–time function (milliseconds per item) was considered to be a particularly important variable, providing insight into the amount of time needed for attention to process one item before moving on to the next item. In parallel search, reaction times were shown to remain roughly constant across different set sizes (a near-zero slope), which according to FIT was due to the target’s unique representation in a retinotopic feature map, activated in parallel with other such basic-level maps. Not all the tenets of FIT held up in the face of the stringent experiments that followed, so FIT was supplanted by other models of search. Of these, the most notable is Wolfe’s Guided Search model, which was initially published in  and revised in .
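The diagnostic logic described above can be made concrete with a small sketch. The code below fits the reaction-time × set-size function by least squares and reads off its slope in milliseconds per item; the RT values are illustrative numbers chosen to mimic the two classic patterns, not data from any actual study, and the function names are my own.

```python
# Sketch: estimating the search slope (ms/item) of the RT x set-size
# function via ordinary least-squares regression. All numbers below are
# illustrative, not taken from any published experiment.

def search_slope(set_sizes, mean_rts):
    """Return (slope_ms_per_item, intercept_ms) of the RT x set-size function."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    var = sum((x - mx) ** 2 for x in set_sizes)
    slope = cov / var
    return slope, my - slope * mx

# "Parallel" (pop-out) pattern: RTs are nearly flat -> slope close to 0 ms/item.
parallel = search_slope([4, 8, 16, 32], [450, 452, 455, 458])

# "Serial" pattern: RTs grow with set size -> slope of tens of ms/item,
# interpreted under FIT as per-item attentional processing time.
serial = search_slope([4, 8, 16, 32], [480, 580, 780, 1180])
```

Under FIT, a near-zero slope was taken as the signature of parallel processing in a feature map, whereas slopes in the tens of milliseconds per item were read as serial deployment of attention from item to item.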
Initially, visual search studies relied on purely behavioral methods, but these were soon joined by neuroscientific methods, which had the potential to confirm and extend the discoveries made about the feature maps underlying attentional deployment. With its millisecond resolution, EEG was a perfect complement to the traditional reaction-time approach of visual search paradigms, allowing a more in-depth look at the timing of processes occurring during search. EEG methods thus extended the scope for testing the diverse competing theories of visual attention, such as FIT and its many successors. While EEG was used to establish the timing of various attentional processes, functional magnetic resonance imaging (fMRI) studies were used to determine the extent of the neural networks activated during visual search (for an overview, see ).
Color Search and Its Underlying Representations: Cone-Opponent or Hue-Based?
Color was considered one of the basic visual features by FIT because it can support parallel visual search. In fact, a long line of studies demonstrated that color is one of the most potent feature dimensions for causing a stimulus to pop-out from its surroundings (for a review, see ). As long as the difference in chromaticity between the target and the distractors is sufficiently large, search for color is efficient. However, in spite of decades of visual search research using color targets and distractors, it is still not fully understood which chromatic representations guide the attentional selection of color. In a seminal early study, D’Zmura  showed that search for equally saturated colors is parallel if target and distractor chromaticities can be linearly separated within a hue-based color space. However, while D’Zmura  led the way in providing support for selection based on relative distances in a hue-based color space, Lindsey et al.’s  more recent findings were strongly in favor of cone-opponent influences on attentional selection. In particular, Lindsey et al.’s study demonstrated that cone-opponent chromatic representations determine the efficiency of attentional selection. These cone-opponent representations originate in two separate retinogeniculate pathways: the first distinguishes between reddish and greenish hues by opposing the signals from the L and M cones (L-M), and the second distinguishes between bluish and yellowish hues by opposing the S-cone signal with a combination of L and M cone signals (S-(L + M)). Lindsey et al. found that search was particularly ineffective for desaturated S-(L + M) increments (bluish), whilst being particularly effective for pinkish colors that combine an L-M increment with some S-(L + M) information.
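The two opponent axes just described amount to simple signed combinations of cone signals. The sketch below makes that arithmetic explicit; the function names and the LMS values are hypothetical illustrations (uncalibrated numbers), not a model from any of the cited studies.

```python
# Minimal sketch of the two cone-opponent axes described in the text.
# L-M opposes long- vs medium-wavelength cone signals ("reddish vs greenish");
# S-(L+M) opposes the S-cone signal against summed L and M cone signals
# ("bluish vs yellowish"). Input values are illustrative, not calibrated
# cone excitations.

def cone_opponent(l, m, s):
    """Return (l_minus_m, s_minus_lm) opponent signals for one stimulus."""
    return l - m, s - (l + m)

def opponent_contrast(target_lms, distractor_lms):
    """Target-distractor difference along each cone-opponent axis."""
    t = cone_opponent(*target_lms)
    d = cone_opponent(*distractor_lms)
    return t[0] - d[0], t[1] - d[1]

# A "pinkish" target: an L-M increment combined with some S-(L+M) signal,
# relative to a neutral distractor (hypothetical numbers).
rg, by = opponent_contrast(target_lms=(0.7, 0.5, 0.4),
                           distractor_lms=(0.6, 0.6, 0.3))
```

On this scheme, a target that differs from its distractors along both axes (as in the pinkish case above) produces a nonzero contrast on each opponent channel, which is the kind of combined signal Lindsey et al. found to support especially effective search.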
Recent visual search experiments demonstrate that absolute featural tuning to color is overruled by relational tuning when targets and distractors can be distinguished on the basis of a relative search criterion, e.g., “redder than” or “yellower than”. For example, in a study by Harris, Remington, and Becker , observers searched for orange among yellow distractors by selecting items that were “more reddish” when the trials were blocked together; they tuned to orange as a particular feature only when search displays of orange singletons among red distractors were randomly mixed with search displays of orange singletons among yellow distractors, rendering such relational search templates ineffective.
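The contrast between relational and featural tuning can be sketched as two toy selection rules. This is a schematic illustration of the logic only, with hypothetical function names and hue values (angles on a red-to-yellow continuum), not an implementation of the cited study's design.

```python
# Toy contrast between a relational search template ("redder than the
# distractors") and an absolute featural template (a fixed hue value).
# Hues are illustrative angles on a red->yellow continuum: 0 = red,
# 30 = orange, 60 = yellow.

def relational_pick(hues):
    """Relational rule: select the 'reddest' item (smallest hue angle)."""
    return min(range(len(hues)), key=lambda i: hues[i])

def featural_pick(hues, template_hue):
    """Featural rule: select the item closest to a fixed target hue."""
    return min(range(len(hues)), key=lambda i: abs(hues[i] - template_hue))

# Orange singleton among yellow: both rules find it at index 2.
orange_among_yellow = [60, 60, 30, 60]

# Orange singleton among red: the relational "reddest" rule now picks a
# distractor, while the featural rule still finds the singleton. Mixing
# the two display types is what renders relational templates ineffective.
orange_among_red = [0, 0, 30, 0]
```

When the yellow-distractor displays are blocked, the cheap relational rule suffices; once red-distractor displays are intermixed, only the absolute featural rule succeeds on every trial, mirroring the shift in observers' search templates.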
As visual search is thought to be driven by feature maps situated in the earliest areas of the cortex , findings of subcortical representations influencing color search over and above hue-based cortical representations will need to be addressed in future research. One particular problem with using visual search to study the representations that underlie attention to color is that the paradigm combines bottom-up (salience-driven) and top-down (goal-driven) influences on attention. The only way to disentangle bottom-up from top-down influences in visual search is to use task-irrelevant color singletons (for a review, see ). A recent study by Ansorge and Becker  used a spatial cueing paradigm instead of classical visual search in order to circumvent the bottom-up/top-down confound inherent in the search task, but the results again failed to support a single representational space being used for color selection. Finally, conflicting experimental findings are likely to be at least in part due to the many methodological differences between studies investigating visual search for color. These studies rely both on different stimulus and task set-ups (search for single or dual targets; differences between stimuli in saturation and lightness) and on different dependent variables meant to reflect performance (manual reaction times, reaction-time slopes, eye movements, event-related potentials). For example, while the study by Lindsey and colleagues strongly suggests that cone-opponent signals are important in driving attentional effects, the relation of these effects to the level of luminance contrast in the stimulus remains unclear. Li, Sampson, and Vidyasagar  demonstrated that while search times for targets defined by L-M contrast benefit from added luminance signals, this is not the case for targets defined by S-cone contrast.
Asymmetrical interactions between luminance and chromatic signals in determining salience would provide a mechanism through which cone-opponent signals can influence visual search performance, without discounting any further potential influences from hue-based representations.
Visual search experiments led the way in researching the deployment of attention to color. While visual search remains a highly useful paradigm for studying attention to color, it may be advantageous to consider the knowledge on color representations gained from visual search tasks in a broader context. This is because of the paradigm’s particular susceptibility to the bottom-up/top-down confounds generated by the search context, e.g., the choice of target/distractor chromoluminance levels.
- 7. Wolfe, J.M.: Visual search. In: Pashler, H. (ed.) Attention. University College London Press, London (1998)