Encyclopedia of Color Science and Technology

2016 Edition
Editors: Ming Ronnier Luo

Color and Visual Search, Color Singletons

  • Jasna Martinovic
  • Amanda Hardman
Reference work entry
DOI: https://doi.org/10.1007/978-1-4419-8071-7_82

Definition

Visual search is a task involving the detection of a unique item within a multi-item display.

Characteristics of the Visual Search Paradigm

In a visual search task, the item that is being searched for is known as the target; other items are known as distractors. Figure 1 presents examples of visual search displays, each containing a target item and a varying number of distractor items. The total number of items in the display is known as the set size. Items in a search display can differ along various feature dimensions, for example, color, orientation, or shape. If items in a visual search task vary along a single dimension such as color, the observer may be looking for a target of a specific hue (e.g., red) among distractors of different hues. This is known as feature search. In feature search, distractors can be heterogeneous – varying in hue – or homogeneous – all of the same hue (e.g., blue, as in the example in Fig. 1a). If all the distractors share the same color, the uniquely colored target is defined as a target singleton. Irrelevant singletons are sometimes also used in visual search tasks; e.g., a single orange distractor can be present in a display with a red target and a number of blue distractors. Detection of color singletons is typically very efficient, with reaction times for singleton color targets that are independent of set size (see Fig. 1a). Such efficient visual search is also known as a pop-out effect or parallel visual search. If items vary along multiple dimensions, the participant may be looking for, e.g., a red vertical bar among a set of distractors that differ in both hue and orientation (see Fig. 1b). This is known as conjunction search. Conjunction search is typically inefficient, producing reaction time costs as set size increases, which is a characteristic of serial visual search.
Color and Visual Search, Color Singletons, Fig. 1

Examples of visual search displays: (a) feature singleton search and (b) conjunction search for a range of set sizes, from 6 to 9 items. The targets are (a) a red circle and (b) a red vertical bar. The relative reaction time for each set size is shown underneath each search display. Reaction times for a feature singleton are most often independent of set size, while reaction times for conjunction search most often increase linearly with each added item.
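
The pattern in Fig. 1 is commonly summarized by a linear model of search performance, RT = base time + slope × set size, with a slope near zero for feature (pop-out) search and a positive slope for conjunction search. The sketch below illustrates this model; the base time and slope values are illustrative assumptions, not values reported in this entry.

```python
# A minimal sketch of the linear search-performance model:
# RT = base_time + slope * set_size. The numbers are illustrative only.

def predicted_rt(set_size, base_time_ms=450.0, slope_ms_per_item=0.0):
    """Predicted mean reaction time (ms) for a display of `set_size` items."""
    return base_time_ms + slope_ms_per_item * set_size

for set_size in (6, 7, 8, 9):  # the set sizes shown in Fig. 1
    feature = predicted_rt(set_size, slope_ms_per_item=0.0)      # pop-out: flat
    conjunction = predicted_rt(set_size, slope_ms_per_item=30.0)  # serial: linear cost
    print(f"set size {set_size}: feature {feature:.0f} ms, "
          f"conjunction {conjunction:.0f} ms")
```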

Historical Background on Visual Search Tasks in Attention Research

The visual search task is one of the most commonly used paradigms in vision research. There are over 5,000 articles in the Institute for Scientific Information’s (ISI) database that refer to visual search in their title. The popularity of the visual search paradigm stems from the fact that it operationalizes a vital task performed by both humans and nonhuman animals. Eckstein’s review [1] summarizes many examples of everyday search situations. In natural environments, foraging for food involves searching for edible fruit, whilst in man-made environments, operators monitor complex images in order to detect security-relevant or medically relevant information. In many real-world search situations, color is an important determinant of performance due to its ability to make certain features of the scene more or less conspicuous. For example, in order to avoid detection by predators, prey animals often adopt coloration that acts as camouflage, preventing them from “popping out” of their natural surroundings.

The visual search task came to prominence in the 1980s, providing the initial evidence base for Treisman’s highly influential Feature Integration Theory (FIT). FIT posited that attentional deployment is guided by multiple, distinct feature maps that are activated in parallel [2]. Visual search provided an excellent paradigm to test this theory, with the potential to reveal the underlying neural representations of feature maps using relatively simple behavioral methods (for a review, see [3]). The slope of the function relating reaction time to set size (milliseconds per item) was considered a particularly important variable, providing insight into the amount of time attention needs to process one item before moving on to the next. In parallel search, reaction times were shown to remain constant across different set sizes – a slope close to zero – which according to FIT was due to the target’s unique representation in a retinotopic feature map, activated in parallel with other such basic-level maps. Not all the tenets of FIT held up in the face of the stringent experiments that followed, and FIT was eventually supplanted by other models of search. Of these, the most notable is Wolfe’s Guided Search model, initially published in [4] and revised in [5].
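
Since the slope in milliseconds per item was FIT’s key diagnostic, it is worth noting how such a slope is obtained in practice: by fitting a line to mean reaction times as a function of set size. A minimal sketch follows; the reaction time values in it are invented purely for illustration.

```python
# A minimal sketch of estimating a search slope (ms/item) via ordinary
# least-squares regression of mean RT on set size. The RTs are hypothetical.
import numpy as np

set_sizes = np.array([6, 7, 8, 9], dtype=float)
mean_rts = np.array([612.0, 641.0, 668.0, 701.0])  # invented conjunction-search data

# np.polyfit with degree 1 returns (slope, intercept) of the best-fitting line.
slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")
# Under FIT's interpretation, a slope near zero indicates parallel ("pop-out")
# search, while slopes well above zero indicate serial search.
```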

Initially, visual search studies relied on purely behavioral methods, but these were soon joined by neuroscientific methods, which had the potential to confirm and extend the discoveries made about the feature maps underlying attentional deployment. With its millisecond resolution, EEG was a perfect complement to the traditional reaction time approach of visual search paradigms, allowing a more in-depth look at the timing of processes occurring during search. EEG methods thus extended the scope for testing the diverse competing theories of visual attention, such as FIT and its many successors. While EEG was used to establish the timing of various attentional processes, functional magnetic resonance imaging (fMRI) studies were used to determine the extent of the neural networks activated during visual search (for an overview, see [6]).

Color Search and Its Underlying Representations: Cone-Opponent or Hue-Based?

FIT considered color one of the basic visual features because it can support parallel visual search. In fact, a long line of studies demonstrated that color is one of the most potent feature dimensions for causing a stimulus to pop out from its surroundings (for a review, see [7]). As long as the difference in chromaticity between the target and the distractors is sufficiently large, search for color is efficient [8]. However, in spite of decades of visual search research using color targets and distractors, it is still not fully understood which chromatic representations guide the attentional selection of color. In a seminal early study, D’Zmura [9] showed that search for equally saturated colors is parallel if target and distractor chromaticities can be linearly separated within a hue-based color space. However, while D’Zmura [9] led the way in providing support for selection based on relative distances in a hue-based color space, Lindsey et al.’s [10] more recent findings were strongly in favor of cone-opponent influences, demonstrating that cone-opponent chromatic representations determine the efficiency of attentional selection. These cone-opponent representations originate in two separate retinogeniculate pathways: the first distinguishes between reddish and greenish hues by opposing the signals from the L and M cones (L-M), and the second distinguishes between bluish and yellowish hues by opposing the S-cone signal with a combined L and M cone signal (S-(L + M)). Lindsey et al. found that search was particularly inefficient for desaturated S-(L + M) increments (bluish), whilst being particularly efficient for pinkish colors that combine an L-M increment with some S-(L + M) information. Recent visual search experiments demonstrate that absolute featural tuning to color is overruled by relational tuning when targets and distractors can be distinguished on the basis of a relative search criterion, e.g., “redder than” or “yellower than”. For example, in a study by Harris, Remington, and Becker [11], observers searched for an orange target among yellow distractors by selecting items that were “more reddish” when trials of this type were blocked together. Observers tuned in to orange as a particular feature only when displays of orange singletons among red distractors were randomly intermixed with displays of orange singletons among yellow distractors, rendering such relational search templates ineffective.
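
The two candidate representations discussed above can be made concrete in a short sketch: the cone-opponent axes are simple combinations of L, M, and S cone excitations, and D’Zmura’s criterion [9] asks whether a line in the chromaticity plane can place the target on one side and all distractors on the other. The sketch below is a simplified illustration under assumed, uncalibrated LMS values; the perceptron test is one generic way to check linear separability, not the procedure used in the cited studies.

```python
# A minimal sketch, under simplifying assumptions, of the two cone-opponent
# signals named above and of a linear-separability test in that plane.
# The LMS triplets are invented; no calibrated color space is implied.
import numpy as np

def cone_opponent(lms):
    """Map LMS cone excitations to the two cone-opponent axes: L-M and S-(L+M)."""
    l, m, s = lms
    return np.array([l - m, s - (l + m)])

def linearly_separable(target, distractors, epochs=1000, lr=0.1):
    """Perceptron test: is there a line with the target on one side and all
    distractors on the other? The loop converges iff the points are separable."""
    pts = np.vstack([target, distractors])
    labels = np.array([1.0] + [-1.0] * len(distractors))
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(pts, labels):
            if y * (w @ x + b) <= 0:  # misclassified point: update the line
                w, b = w + lr * y * x, b + lr * y
                errors += 1
        if errors == 0:
            return True
    return False

# Hypothetical chromaticities: a reddish target among greenish/bluish distractors.
target = cone_opponent((0.7, 0.3, 0.2))
distractors = np.array([cone_opponent(c) for c in
                        [(0.3, 0.7, 0.2), (0.4, 0.6, 0.5), (0.35, 0.65, 0.3)]])
print("linearly separable:", linearly_separable(target, distractors))
```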

As visual search is thought to be driven by feature maps situated in the earliest areas of the cortex [12], findings of subcortical representations influencing color search over and above hue-based cortical representations will need to be addressed in future research. One particular problem with using visual search to study the representations that underlie attention to color is that the paradigm combines bottom-up, salience-driven, and top-down, goal-driven influences on attention. The only way to disentangle bottom-up from top-down influences in visual search is to use task-irrelevant color singletons (for a review, see [13]). A recent study by Ansorge and Becker [14] used a spatial cueing paradigm instead of classical visual search in order to circumvent the bottom-up/top-down confound inherent in the search task, but the results again failed to support a single representational space being used for color selection. Finally, conflicting experimental findings are likely to be at least in part due to the many methodological differences between studies investigating visual search for color. Studies rely on different stimulus and task setups (search for single or dual targets; differences between stimuli in terms of saturation and lightness) and on different dependent variables that are meant to reflect performance (manual reaction times, reaction time slopes, eye movements, event-related potentials). For example, while the study by Lindsey and colleagues strongly suggests that cone-opponent signals are important in driving attentional effects, the relation of these effects to the level of luminance contrast in the stimulus remains unclear. Li, Sampson, and Vidyasagar [15] demonstrated that while search times for targets defined by L-M contrast benefit from added luminance signals, this is not the case for targets defined by S-cone contrast. Such asymmetrical interactions between luminance and chromatic signals in determining salience would provide a mechanism through which cone-opponent signals can influence visual search performance, without discounting further potential influences from hue-based representations.
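
The asymmetry reported by Li, Sampson, and Vidyasagar [15] can be expressed as a toy salience model in which luminance contrast combines with the L-M signal but not with the S-cone signal. The additive form and the numeric values below are assumptions made purely for illustration, not a fitted or published model.

```python
# A purely illustrative toy model of the asymmetry reported in [15]: luminance
# contrast boosts the salience of L-M-defined targets but not of S-cone-defined
# ones. The additive form and all values are assumptions, not fitted parameters.

def target_salience(lm_contrast, s_contrast, lum_contrast):
    """Toy salience: luminance contrast sums with the L-M signal only."""
    lm_salience = lm_contrast + lum_contrast  # L-M benefits from added luminance
    s_salience = s_contrast                   # S-(L+M) does not
    return max(lm_salience, s_salience)

# An L-M-defined target gains salience when luminance contrast is added ...
print(target_salience(lm_contrast=0.5, s_contrast=0.0, lum_contrast=0.3))  # 0.8
# ... while an S-cone-defined target does not.
print(target_salience(lm_contrast=0.0, s_contrast=0.5, lum_contrast=0.3))  # 0.5
```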

Concluding Comments

Visual search experiments led the way in researching the deployment of attention to color. While visual search remains a highly useful paradigm for studying attention to color, it may be advantageous to consider the knowledge on color representations gained from visual search tasks in a broader context. This is because of the paradigm’s particular susceptibility to the bottom-up/top-down confounds generated by the search context, e.g., the choice of target/distractor chromoluminance levels.

References

  1. Eckstein, M.P.: Visual search: a retrospective. J. Vis. 11(5), 14 (2011)
  2. Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cogn. Psychol. 12, 97–136 (1980)
  3. Nakayama, K., Martini, P.: Situating visual search. Vision Res. 51(13), 1526–1537 (2011)
  4. Wolfe, J.M., Cave, K.R., Franzel, S.L.: Guided search – an alternative to the feature integration model for visual search. J. Exp. Psychol. Hum. Percept. Perform. 15(3), 419–433 (1989)
  5. Wolfe, J.M.: Guided search 2.0 – a revised model of visual search. Psychon. Bull. Rev. 1(2), 202–238 (1994)
  6. Muller, H.J., Krummenacher, J.: Visual search and selective attention. Vis. Cogn. 14(4–8), 389–410 (2006)
  7. Wolfe, J.M.: Visual search. In: Pashler, H. (ed.) Attention. University College London Press, London (1998)
  8. Nagy, A.L., Sanchez, R.R.: Critical color differences determined with a visual search task. J. Opt. Soc. Am. A 7(7), 1209–1217 (1990)
  9. D’Zmura, M.: Color in visual search. Vision Res. 31, 951–966 (1991)
  10. Lindsey, D.T., et al.: Color channels, not color appearance or color categories, guide visual search for desaturated color targets. Psychol. Sci. 21(9), 1208–1214 (2010)
  11. Harris, A.M., Remington, R.W., Becker, S.I.: Feature specificity in attentional capture by size and color. J. Vis. 13(3), 12 (2013)
  12. Zhaoping, L.: Understanding Vision: Theory, Models, and Data. Oxford University Press, Oxford (2014)
  13. Theeuwes, J., Olivers, C.N.L., Belopolsky, A.: Stimulus-driven capture and contingent capture. Wiley Interdiscip. Rev. Cogn. Sci. 1(6), 872–881 (2010)
  14. Ansorge, U., Becker, S.I.: Contingent capture in cueing: the role of color search templates and cue-target color relations. Psychol. Res. 78(2), 209–221 (2014)
  15. Li, J.C., Sampson, G.P., Vidyasagar, T.R.: Interactions between luminance and colour channels in visual search and their relationship to parallel neural channels in vision. Exp. Brain Res. 176(3), 510–518 (2007)

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. School of Psychology, University of Aberdeen, Aberdeen, UK