Dynamic Modeling of Visual Search

Abstract

In 1998/1999, three participants trained for up to 74 hour-long sessions to find a target present on half the trials in visual displays of 1, 2, or 4 initially novel objects. There were four targets and four foils that never changed. The display objects appeared simultaneously, or the objects appeared successively, or the four features of each object appeared successively. When presentation was successive, the SOAs were short (17, 33, or 50 ms), so the displays appeared simultaneous, making it likely that the search strategy was the same in all conditions. A 2004 publication examined only the simultaneous condition and found evidence suggesting serial search, along with a small amount of automatic attention to targets and occasional early or late termination of search. A 2021 publication examined only the displays with single objects, obtaining evidence for dynamic perception of features. These studies drew conclusions from modeling subtle aspects of the response time distributions; extending such modeling to all conditions would have been complex, making it difficult to understand the main processes at work. Here, we present a simple way to extend the 2021 model to the conditions with multiple-item displays. It is a hybrid model in which parallel automatic processing of the features of all display items finishes during the first comparison, combined with serial comparisons that terminate when a target is found or when none is found. When objects occur sequentially, there is a tendency to compare the first-presented object first, with that probability rising with SOA. This model gives a good qualitative account of the accuracy and median response times from all the conditions. This success suggests that a more complex model incorporating the dynamic processes of the 2021 model would provide an excellent quantitative account of the accuracy and response time distributions for all the conditions of this visual search study.

Data Availability

The key measures in the data that were analyzed and modeled are provided in Tables 5 and 6 in Appendix 3. Moreover, a version of this submission with additional details and results, and the program used to generate model predictions, has been stored on PsychArchives and can be accessed through https://doi.org/10.23668/psycharchives.12560.

Notes

  1. The search may be highly non-random from the participant’s point of view. It could be, for example, that search proceeds in a fixed spatial order when not governed by onset. It is random for the purpose of modeling since we did not collect and do not know the spatial position of the target on each trial.

  2. If one imagines a study like the present one, but using stimuli with more complex features, such as words, it is still possible that high-level features could be perceived during the time that the first word is compared. Evidence is found in many studies showing semantic priming by words presented incidentally in nearby spatial and temporal positions, even when minimally visible (e.g., Draine & Greenwald, 1998).

  3. The demonstration that this model predicts the data quite well suggests that the medians of the response time distributions are a decent “stand-in” for the entire distribution.

References

  • Bezanson, J., Edelman, A., Karpinski, S., & Shah, V. B. (2017). Julia: A fresh approach to numerical computing. SIAM Review, 59(1), 65–98. https://doi.org/10.1137/141000671

  • Cousineau, D., Donkin, C., & Dumesnil, É. (2015). Unitization of features following extended training in a visual search task. In J. G. W. Raaijmakers, A. H. Criss, R. L. Goldstone, R. M. Nosofsky, & M. Steyvers (Eds.), Cognitive modeling in perception and memory: A festschrift for Richard M. Shiffrin (pp. 3–15). Psychology Press.

  • Cousineau, D., & Shiffrin, R. M. (2004). Termination of a visual search with large display size effects. Spatial Vision, 17, 327–352.

  • Cox, G. E., & Shiffrin, R. M. (2017). A dynamic approach to recognition memory. Psychological Review, 124(6), 795–860. https://doi.org/10.1037/rev0000076

  • Draine, S. C., & Greenwald, A. G. (1998). Replicable unconscious semantic priming. Journal of Experimental Psychology: General, 127, 286–303.

  • Eriksen, C. W. (1988). A source of error in attempts to distinguish coactivation from separate activation in the perception of redundant targets. Perception & Psychophysics, 44(2), 191–193. https://doi.org/10.3758/bf03208712

  • Eriksen, C. W., & Hoffman, J. E. (1972). Temporal and spatial characteristics of selective encoding from visual displays. Perception & Psychophysics, 12(2), 201–204. https://doi.org/10.3758/bf03212870

  • Geisler, W. S., Perry, J. S., & Najemnik, J. (2006). Visual search: The role of peripheral information measured using gaze-contingent displays. Journal of Vision, 6(9), 1. https://doi.org/10.1167/6.9.1

  • Gondan, M., & Heckel, A. (2008). Testing the race inequality: A simple correction procedure for fast guesses. Journal of Mathematical Psychology, 52(5), 322–325. https://doi.org/10.1016/j.jmp.2008.08.002

  • Harding, S. M., Cousineau, D., & Shiffrin, R. M. (2021). Dynamic perception of well-learned perceptual objects. Computational Brain & Behavior, 4(4), 497–518.

  • Huang, C., Vilotijević, A., Theeuwes, J., & Donk, M. (2021). Proactive distractor suppression elicited by statistical regularities in visual search. Psychonomic Bulletin & Review, 28(3), 918–927. https://doi.org/10.3758/s13423-021-01891-3

  • Lefebvre, C., Cousineau, D., & Larochelle, S. (2008). Does training under consistent mapping conditions lead to automatic attention attraction to targets in search tasks? Perception & Psychophysics, 70(8), 1401–1415. https://doi.org/10.3758/PP.70.8.1401

  • Ma, X., & Abrams, R. A. (2023). Feature-blind attentional suppression of salient distractors. Attention, Perception & Psychophysics, 85(5), 1409–1424. https://doi.org/10.3758/s13414-023-02712-6

  • Maxcey, A. M., Shiffrin, R. M., Cousineau, D., & Atkinson, R. C. (2021). Two case studies of very long-term retention. Psychonomic Bulletin & Review, 29(2), 563–567. https://doi.org/10.3758/s13423-021-02002-y

  • Rydell, R. J., Shiffrin, R. M., Boucher, K. L., Van Loo, K., & Rydell, M. T. (2010). Stereotype threat prevents perceptual learning. Proceedings of the National Academy of Sciences, 107(32), 14042–14047. https://doi.org/10.1073/pnas.1002815107

  • Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1–66.

  • Shiffrin, R. M. (1988). Attention. In R. C. Atkinson, R. J. Herrnstein, G. Lindzey, & R. D. Luce (Eds.), Stevens’ handbook of experimental psychology (2nd ed., pp. 739–811). New York: Wiley.

  • Shiffrin, R. M., & Czerwinski, M. P. (1988). A model of automatic attention attraction when mapping is partially consistent. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 562–569.

  • Shiffrin, R. M., & Gardner, G. T. (1972). Visual processing capacity and attentional control. Journal of Experimental Psychology, 93, 72–82.

  • Shiffrin, R. M., & Lightfoot, N. (1997). Perceptual learning of alphanumeric-like characters. In R. L. Goldstone, P. G. Schyns, & D. L. Medin (Eds.), The Psychology of Learning and Motivation (Vol. 36, pp. 45–82). San Diego: Academic Press.

  • Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127–190.

  • Townsend, J. T. (1971). A note on the identifiability of parallel and serial processes. Perception & Psychophysics, 10, 161–163. https://doi.org/10.3758/BF03205778

  • Townsend, J. T., & Nozawa, G. (1995). Spatio-temporal properties of elementary perception: An investigation of parallel, serial, and coactive theories. Journal of Mathematical Psychology, 39(4), 321–359. https://doi.org/10.1006/jmps.1995.1033

  • Treisman, A., & Gelade, G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97–136.

  • Williams, P., Eidels, A., & Townsend, J. T. (2014). The resurrection of Tweedledum and Tweedledee: Bimodality cannot distinguish serial and parallel processes. Psychonomic Bulletin & Review, 21(5), 1165–1173. https://doi.org/10.3758/s13423-014-0599-0

  • Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1(2), 202–238. https://doi.org/10.3758/bf03200774

  • Wolfe, J. M. (1998). What can 1 million trials tell us about visual search? Psychological Science, 9(1), 33–39. https://doi.org/10.1111/1467-9280.00006

  • Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15(3), 419–433. https://doi.org/10.1037/0096-1523.15.3.419

  • Yantis, S., & Jonides, J. (1996). Attentional capture by abrupt onsets: New perceptual objects or visual masking? Journal of Experimental Psychology: Human Perception and Performance, 22(6), 1505–1513. https://doi.org/10.1037/0096-1523.22.6.1505

Author information

Contributions

Zainab Mohamed and Richard Shiffrin did the majority of the new modeling and the writing. Sam Harding produced the previously published model that is used in the present model for the first comparison of every search; he re-fit that model to obtain parameter values that better fit the median response times. Denis Cousineau collected the data in 1998/1999, and he and Sam Harding helped edit the present submission.

Corresponding author

Correspondence to Zainab Rajab Mohamed.

Ethics declarations

Ethics Approval

Not applicable.

Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Competing Interests

The authors declare no competing interests.


Appendices

Appendix 1

Please see Table 4 here.

Table 4 Harding et al. (2021) model predictions for single-object trials

Appendix 2

Predictions for each participant for the many conditions of the present studies require separate equations that take into account simultaneous or sequential presentation, set size, target or foil tests, the order of objects or features, the time between objects or features, target decisions that stop search, foil decisions that continue search, and whether target or foil decisions are correct or wrong. This appendix gives the equations that produce the predictions. To save space, we do not try to explain the derivation of each. Rather, we choose one of the conditions with somewhat complicated equations and explain its derivation for accuracy and median correct response times. The same reasoning is used for all the equations.

Predictions are derived only for probability correct and correct median response times. There are too few errors to produce reliable estimates. In addition, predictions will be distorted because a significant proportion of the few errors will be “glitches” (e.g., random responses caused by lapses of attention). Nonetheless, the predictions for accuracy and correct response time must take errors of comparison into account, because errors of comparison can lead to correct responses. For example, for a target trial with set size four, with the target object compared last, a correct response will occur if an error is made on any of the first three foil objects (thereby terminating search correctly) or when a correct comparison is made for the last object (a target).
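
In the notation defined below, with pe and pc the probabilities of erroneous and correct comparisons, F and T marking foil and target comparisons, and the subscript 1 marking the first comparison, the probability of a correct response in this example is the sum over those four paths:

$${p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)$$

This expression reappears as the final term of the set size 4 simultaneous target equation given later in this appendix.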

Let us denote any condition by the sextuplet [(T,F); (M,QO,QF); (1,2,4); (Sj); (Zj); (E,L)]. T/F denotes a target or foil trial; M, QO, and QF denote, respectively, siMultaneous, seQuential Object, and seQuential Feature conditions; 1, 2, and 4 denote the set size; Sj (j = 1, 2, 3) refers to SOAs of 17, 33, and 50 ms; Zj gives the probability that the first onset in the sequential object condition, with SOA Sj, causes the first comparison to be of that object; E denotes target objects or features Early, and L denotes target objects or features Late. A dash indicates that the value does not apply to that condition. For example,

  • [T, M, 2, -, -, -] denotes a target trial in the simultaneous condition with set size 2.

  • [F, QO, 4, Sj, Zj, -] denotes a foil trial in the sequential object condition with set size 4, SOA Sj, and probability Zj of starting comparisons with the first object.

  • [T, QF, 1, Sj, -, L] denotes a target trial in the sequential feature condition with set size 1, SOA of Sj, and target features late.

A “p” before a bracket indicates that the equation is for probability correct. A “t” before the bracket indicates that the equation is for correct median response time.
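
For concreteness, the sextuplet can be encoded as a small record type. The sketch below is our illustration in Julia (the language used for all sketches in this appendix); the type and field names are ours and are not taken from the archived program.

struct Condition
    trial::Symbol                 # :T (target trial) or :F (foil trial)
    mode::Symbol                  # :M, :QO, or :QF
    setsize::Int                  # 1, 2, or 4
    soa::Union{Int,Nothing}       # 17, 33, or 50 ms; nothing when not applicable
    z::Union{Float64,Nothing}     # onset probability Zj; sequential object condition only
    order::Union{Symbol,Nothing}  # :E (target early), :L (target late), or nothing
end

cond = Condition(:T, :QO, 2, 33, 0.6, :E)  # e.g., [T, QO, 2, S2, Z2, E] with an illustrative Z2 = 0.6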

The probabilities and times for correct and error responses for the first comparison in any condition are given by the model of Harding et al. (2021), adapted to fit the median response times (rather than the full distributions of response times) and accuracy for the conditions with a single object presented. pcT1 and pcF1 give that model’s probability correct for targets and foils, respectively; similarly, peT1 and peF1 give the probability of error for targets and foils (each being one minus the corresponding probability correct). For that model, the median response times for targets for correct and error responses are denoted tcT1 and teT1. For foils, these are denoted tcF1 and teF1. These values are those predicted using the parameters given in Appendix 1.

The parameters that are estimated for the present modeling are as follows:

  1. Probabilities and times for subsequent comparisons (these are the same for all comparisons after the first): pcT, pcF, peT, and peF and tcT, tcF, teT, and teF (when fitting the model, we set the correct and error times for subsequent comparisons to be equal, but the equations include them separately)

  2. For the sequential object conditions, the probabilities that the first presented item will be chosen for the first comparison are denoted Zj and are specified to increase monotonically with SOA. When fitting the model, the Z probabilities were set to zero for participant A.

The predictions are straightforward, although tedious: A correct response occurs at the end of a certain path of comparisons. The probability of such a path is given by the product of the probabilities of the path segments. The response time for such a path is the sum of the times for each segment. Every response time includes a so-called “base time” or “residual time” representing times not modeled, such as the motor time to press a key. This base time is included in the predictions for the first comparison (from our fit of the Harding et al., 2021 model). To obtain the accuracy predictions, one sums the probabilities of all paths to a correct response. To obtain the median response time predictions, one takes the probability of a given path times the time for that path and then sums those products over all such paths. One might think that it would be more justified to use mean response times rather than medians in these equations. However, the means are distorted by long response times produced by factors that are not part of the model, such as mistakes, glitches, and lapses of attention. Thus, we believe that the use of medians provides a better approximation than the use of means.
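
The following sketch renders this bookkeeping in Julia (our illustration, not the archived program). A path is a vector of (probability, time) segments; accuracy sums path probabilities, and the response time prediction sums probability-weighted path times, exactly as just described. The numerical values are illustrative placeholders.

# Each segment of a path is a (probability, time in ms) pair; the base time is
# assumed to be folded into the first segment's time, as in the text.
path_prob(path) = prod(seg[1] for seg in path)
path_time(path) = sum(seg[2] for seg in path)

# Accuracy: sum of the probabilities of all paths ending in a correct response.
accuracy(paths) = sum(path_prob(p) for p in paths)

# Predicted time: probability-weighted sum of path times, as in Eq. (2) below.
predicted_time(paths) = sum(path_prob(p) * path_time(p) for p in paths)

# Example: a foil trial at set size 2 (simultaneous) has a single correct path,
# two correct foil comparisons in a row:
foil_paths = [[(0.97, 550.0), (0.96, 250.0)]]
accuracy(foil_paths)        # 0.97 * 0.96, i.e., pcF1 * pcF
predicted_time(foil_paths)  # (0.97 * 0.96) * (550.0 + 250.0)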

As an example, consider a sequential object target trial with set size 2 and the target presented first. The proportion of correct responses is computed as

$$p\left(T,QO,2,{S}_{j},{Z}_{j},E\right)= \left\{{Z}_{j}+\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{c}{T}_{1}+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\right\}+\left\{\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\right\},$$
(1)

where the Zj term is the probability that the target is compared first due to its onset; if not, the order is random, so there is a ½ chance that the target will be the first compared. If the target is first, it can be correctly compared, or it can mistakenly be identified as a foil, and the next comparison (of a foil) can then mistakenly be identified as a target. Finally, there is a ½ chance that the foil will be compared first; if so, a mistaken identification as a target will end the search correctly, but if it is correctly identified as a foil, then a correct identification of the target in the next comparison will also produce a correct response.
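
Transcribed directly, Eq. (1) is a one-line function of Zj. The numerical values below are placeholders for illustration only; the actual first-comparison values come from the re-fit Harding et al. (2021) model (Appendix 1), and the subsequent-comparison values are the fitted parameters described above.

pcT1, peT1 = 0.98, 0.02   # first comparison of a target: correct, error
pcF1, peF1 = 0.97, 0.03   # first comparison of a foil: correct, error
pcT, peT   = 0.97, 0.03   # subsequent target comparisons
pcF, peF   = 0.96, 0.04   # subsequent foil comparisons

# Eq. (1): probability correct for [T, QO, 2, Sj, Zj, E]
p_T_QO_2_E(Zj) =
    (Zj + 0.5 * (1 - Zj)) * (pcT1 + peT1 * peF) +
    (0.5 * (1 - Zj)) * (peF1 + pcF1 * pcT)

p_T_QO_2_E(0.6)   # evaluates the equation at, e.g., Zj = 0.6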

For the same condition, the prediction for the correct median response time is more complex and is given by

$$\begin{array}{c}t\left(T,QO,2,{S}_{j},{Z}_{j},E\right)= \left\{{Z}_{j}+\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{\left({p}_{c}{T}_{1}\right)\left[{t}_{c}{T}_{1}\right]+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{e}F\right]\right\}+\\ \left\{\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}+{S}_{j}\right]+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}T+{S}_{j}\right]\right\}\end{array}$$
(2)

The response time predictions are based on the probability of each path of comparisons leading to a given response, multiplied by the time for that path, and then summed over all paths leading to that response. In (2), the leftmost expression in curly brackets in the first line is the probability that the first comparison is of the target. The next expression gives first the probability of a correct first comparison (of the target) times the time for that correct comparison and then adds the probability of two successive errors (giving a correct response due to errors) times the time for two successive errors. The second line in (2) gives the predicted correct response time when search begins with the foil rather than with the target that was presented first; because the foil is presented Sj after the target, Sj is added to the times in this line. This case can produce a correct response in two ways: an error in the first comparison or two successive correct comparisons. Each of these possibilities has a probability multiplied by the time for that path, the two being summed within the curly brackets. Such reasoning is used to produce the predictions below for all the different conditions, but the reasoning behind each is omitted.

Set Size 1

$$p\left[T,M,1,-,-,-\right]= {p}_{c}{T}_{1}$$
$$t\left[T,M,1,-,-,-\right]= {t}_{c}{T}_{1}$$
$$p\left[F,M,1,-,-,-\right]= {p}_{c}{F}_{1}$$
$$t\left[F,M,1,-,-,-\right]= {t}_{c}{F}_{1}$$

Set Size 2

Simultaneous Presentation

$$p\left(T,M,2,-,-,-\right)= \frac{1}{2}\left\{{p}_{c}{T}_{1}+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\right\}+ \frac{1}{2}\left\{{p}_{e}{F}_{1}+ \left({p}_{c}{F}_{1}*{p}_{c}T\right)\right\}$$
$$\begin{aligned}t\left(T,M,2,-,-,-\right)&= \frac{1}{2}\left\{\left({p}_{c}{T}_{1}\right)\left[{t}_{c}{T}_{1}\right]+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{e}F\right]\right\}\\&+ \frac{1}{2}\left\{\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}\right]+ \left({p}_{c}{F}_{1}*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}T\right]\right\}\end{aligned}$$
$$p\left(F,M,2,-,-,-\right)= {p}_{c}{F}_{1}*{p}_{c}F$$
$$t\left(F,M,2,-,-,-\right)= {t}_{c}{F}_{1}+ {t}_{c}F$$

Sequential Object Presentation

$$p\left(T,QO,2,{S}_{j},{Z}_{j},E\right)= \left\{{Z}_{j}+\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{c}{T}_{1}+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\right\}+\left\{\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\right\}$$
$$\begin{aligned}t\left(T,QO,2,{S}_{j},{Z}_{j},E\right)= &\left\{{Z}_{j}+\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{\left({p}_{c}{T}_{1}\right)\left[{t}_{c}{T}_{1}\right]+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{e}F\right]\right\}+\\&\left\{\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}+{S}_{j}\right]+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}T+{S}_{j}\right]\right\}\end{aligned}$$
$$p\left(T,QO,2,{S}_{j},{Z}_{j},L\right)=\left\{{Z}_{j}+\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\right\}+ \left\{\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{c}{T}_{1}+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\right\}$$
$$\begin{aligned}t\left(T,QO,2,{S}_{j},{Z}_{j},L\right)&= \left\{{Z}_{j}+\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left\{\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}\right]+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}T\right]\right\}\\ &+ \left\{\frac{1}{2}\left(1-{Z}_{j}\right)\right\} \left\{\left({p}_{c}{T}_{1}\right)\left[{t}_{c}{T}_{1}+{S}_{j}\right]+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{e}F+{S}_{j}\right]\right\}\end{aligned}$$
$$p\left(F,QO,2,{S}_{j},-,-\right)= {p}_{c}{F}_{1}*{p}_{c}F$$
$$t\left(F,QO,2,{S}_{j},-,-\right)= \left\{{Z}_{j}+\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left[{t}_{c}{F}_{1}+ {t}_{c}F\right]+ \left\{\frac{1}{2}\left(1-{Z}_{j}\right)\right\}\left[{t}_{c}{F}_{1}+ {t}_{c}F+{S}_{j}\right]$$

Set Size 4

Simultaneous Presentation

$$\begin{aligned}p\left(T,M,4,-,-,-\right)&= \frac{1}{4}\left\{{p}_{c}{T}_{1}+\left({p}_{e}{T}_{1}*{p}_{e}F\right)+\left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{e}F\right)+\left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{c}F*{p}_{e}F\right)\right\}\\&+ \frac{1}{4}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{c}T\right)+\left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{c}F*{p}_{e}F\right)\right\}\\&+ \frac{1}{4}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}T\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}T*{p}_{e}F\right)\right\}\\&+ \frac{1}{4}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)\right\}\end{aligned}$$
$$\begin{aligned}t\left(T,M,4,-,-,-\right)&=\frac{1}{4}\left\{\begin{array}{c}\left({p}_{c}{T}_{1}\right)\left[{t}_{c}{T}_{1}\right]+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{e}F\right]+\left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{c}F+{t}_{e}F\right]+\\ \left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{c}F*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{c}F+{t}_{c}F+{t}_{e}F\right] \end{array}\right\}\\&+\frac{1}{4}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}\right]+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}T\right]+ \left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}T+{t}_{e}F\right]+\\ \left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{c}F*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}T+{t}_{c}F+{t}_{e}F\right]\end{array}\right\}\\&+\frac{1}{4}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}\right]+\left({p}_{c}{F}_{1}*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}F\right]+ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{c}T\right]+\\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}T*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{e}T+{t}_{e}F\right]\end{array}\right\}\\&+\frac{1}{4}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}\right]+\left({p}_{c}{F}_{1}*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}F\right]+ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{e}F\right]+\\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{c}F+{t}_{c}T\right]\end{array}\right\}\end{aligned}$$
$$p\left(F,M,4,-,-,-\right)= {p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}F$$
$$t\left(F,M,4,-,-,-\right)= {t}_{c}{F}_{1}+ {t}_{c}F + {t}_{c}F + {t}_{c}F$$
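
Note that the foil-trial equations share one pattern across set sizes: a chain of n correct foil comparisons. Continuing the illustrative Julia sketches (with pcF1 and pcF as defined after Eq. (1), and the times passed in as arguments):

# Foil trial, simultaneous presentation, set size n.
p_foil(n) = pcF1 * pcF^(n - 1)               # n = 4 reproduces pcF1 * pcF * pcF * pcF
t_foil(n, tcF1, tcF) = tcF1 + (n - 1) * tcF  # n = 4 reproduces tcF1 + 3 * tcF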

Sequential Object Presentation

$$\begin{aligned}p\left(T,QO,4,{S}_{j},{Z}_{j},E\right)&=\left\{{Z}_{j}+\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{c}{T}_{1}+\left({p}_{e}{T}_{1}*{p}_{e}F\right)+\left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{e}F\right)+\left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{c}F*{p}_{e}F\right)\right\}\\&+ \left\{\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{c}T\right)+\left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{c}F*{p}_{e}F\right)\right\}\\&+ \left\{\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}T\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}T*{p}_{e}F\right)\right\}\\&+ \left\{\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)\right\}\end{aligned}$$
$$\begin{aligned}t\left(T,QO,4,{S}_{j},{Z}_{j},E\right)&= \left\{{Z}_{j}+\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left\{\begin{array}{c}\left({p}_{c}{T}_{1}\right)\left[{t}_{c}{T}_{1}\right]+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{e}F\right]+ \\ \left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{c}F+{t}_{e}F\right]+ \\ \left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{c}F*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{c}F+{t}_{c}F+{t}_{e}F\right]\end{array}\right\}\\&+ \left\{\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}+{2S}_{j}\right]+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}T+{2S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}T+{t}_{e}F+{2S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{c}F*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}T+{t}_{c}F+{t}_{e}F+{2S}_{j}\right]\end{array}\right\}\\&+ \left\{\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}+{2S}_{j}\right]+\left({p}_{c}{F}_{1}*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}F+{2S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{c}T+{2S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}T*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{e}T+{t}_{e}F+{2S}_{j}\right]\end{array}\right\}\\&+ \left\{\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}+{2S}_{j}\right]+\left({p}_{c}{F}_{1}*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}F+{2S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{e}F+{2S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{c}F+{t}_{c}T+{2S}_{j}\right]\end{array}\right\}\end{aligned}$$
$$\begin{aligned}p\left(T,QO,4,{S}_{j},{Z}_{j},L\right) &= \left\{{Z}_{j}+\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\ast\{\\&\frac{1}{3}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{c}T\right)+\left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{c}F*{p}_{e}F\right)\right\}\\&+ \frac{1}{3}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}T\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}T*{p}_{e}F\right)\right\}\\&+ \frac{1}{3}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)\right\}\}\end{aligned}$$
$$\begin{aligned}+\left\{\frac{3}{4}\left(1-{Z}_{j}\right)\right\}*\{&\\&\frac{1}{3}\left\{{p}_{c}{T}_{1}+\left({p}_{e}{T}_{1}*{p}_{e}F\right)+\left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{e}F\right)+\left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{c}F*{p}_{e}F\right)\right\}\\&+ \frac{2}{9}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{c}T\right)+\left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{c}F*{p}_{e}F\right)\right\}\\&+ \frac{2}{9}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}T\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}T*{p}_{e}F\right)\right\}\\&+ \frac{2}{9}\left\{{p}_{e}{F}_{1}+\left({p}_{c}{F}_{1}*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)+\left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)\right\}\}\end{aligned}$$
$$\begin{aligned}t\left(T,QO,4,{S}_{j},{Z}_{j},L\right)=\left\{{Z}_{j}+\frac{1}{4}\left(1-{Z}_{j}\right)\right\}* \{&\\&\frac{1}{3}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}\right]+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}T\right]+ \left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}T+{t}_{e}F\right]+\\ \left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{c}F*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}T+{t}_{c}F+{t}_{e}F\right]\end{array}\right\}\\&+ \frac{1}{3}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}\right]+\left({p}_{c}{F}_{1}*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}F\right]+ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{c}T\right]+\\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}T*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{e}T+{t}_{e}F\right]\end{array}\right\}\\&+ \frac{1}{3}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}\right]+\left({p}_{c}{F}_{1}*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}F\right]+ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{e}F\right]+\\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{c}F+{t}_{c}T\right] \end{array}\right\}\}\end{aligned}$$
$$\begin{aligned}+\left\{\frac{3}{4}\left(1-{Z}_{j}\right)\right\}*\{&\\&\frac{1}{3}\left\{\begin{array}{c}\left({p}_{c}{T}_{1}\right)\left[{t}_{c}{T}_{1}+{3S}_{j}\right]+\left({p}_{e}{T}_{1}*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{e}F+{3S}_{j}\right]+\\ \left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{c}F+{t}_{e}F+{3S}_{j}\right]+ \\ \left({p}_{e}{T}_{1}*{p}_{c}F*{p}_{c}F*{p}_{e}F\right)\left[{t}_{e}{T}_{1}+{t}_{c}F+{t}_{c}F+{t}_{e}F+{3S}_{j}\right]\end{array}\right\}\\&+\frac{2}{9}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}+{1.5S}_{j}\right]+\left({p}_{c}{F}_{1}*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}T+{1.5S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}T+{t}_{e}F+{1.5S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{e}T*{p}_{c}F*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}T+{t}_{c}F+{t}_{e}F+{1.5S}_{j}\right]\end{array}\right\}\\&+ \frac{2}{9}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}+{1.5S}_{j}\right]+\left({p}_{c}{F}_{1}*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}F+{1.5S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{c}T+{1.5S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}T*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{e}T+{t}_{e}F+{1.5S}_{j}\right]\end{array}\right\}\\&+ \frac{2}{9}\left\{\begin{array}{c}\left({p}_{e}{F}_{1}\right)\left[{t}_{e}{F}_{1}+{1.5S}_{j}\right]+\left({p}_{c}{F}_{1}*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{e}F+{1.5S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{e}F\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{e}F+{1.5S}_{j}\right]+ \\ \left({p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}T\right)\left[{t}_{c}{F}_{1}+{t}_{c}F+{t}_{c}F+{t}_{c}T+{1.5S}_{j}\right] \end{array}\right\}\}\end{aligned}$$
$$p\left(F,QO,4,{S}_{j},-,-\right)= {p}_{c}{F}_{1}*{p}_{c}F*{p}_{c}F*{p}_{c}F$$
$$\begin{aligned}t\left(F,QO,4,{S}_{j},-,-\right)=&\left\{{Z}_{j}+\frac{1}{4}\left(1-{Z}_{j}\right)\right\}\left[{t}_{c}{F}_{1}+ {t}_{c}F+ {t}_{c}F+ {t}_{c}F\right]+\\&\left\{\frac{3}{4}\left(1-{Z}_{j}\right)\right\}\left[{t}_{c}{F}_{1}+ {t}_{c}F+ {t}_{c}F+ {t}_{c}F+2{S}_{j}\right]\end{aligned}$$

The equations for the sequential feature presentation conditions are identical to those for the simultaneous presentation conditions, except that the response times and proportions of correct responses for the first comparison are replaced with the values predicted by the Harding et al. (2021) model, which are provided in Appendix 1.
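
In the sketch language used above, this substitution amounts to a lookup: the sequential feature equations call the simultaneous ones with first-comparison values taken from a table indexed by trial type, SOA, and feature order. The table below is a hypothetical placeholder; the real values are those in Table 4 of Appendix 1.

# Placeholder first-comparison predictions keyed by (trial, SOA in ms, order);
# the numbers are illustrative, not the Appendix 1 entries.
harding_first = Dict(
    (:T, 17, :E) => (pc = 0.98, tc = 540.0, te = 575.0),
    (:F, 17, :none) => (pc = 0.97, tc = 560.0, te = 595.0),
)

fc = harding_first[(:T, 17, :E)]   # then use fc.pc, fc.tc, fc.te in place of pcT1, tcT1, teT1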

Appendix 3

Please see Tables 5 and 6 here.

Table 5 Observed median RT and average accuracies for each experimental condition
Table 6 Predicted median RT and average accuracies for the proposed “hybrid” model and the alternative “serial” model. Note that the serial model was not fitted to the sequential feature presentation condition

About this article

Cite this article

Mohamed, Z.R., Cousineau, D., Harding, S.M. et al. Dynamic Modeling of Visual Search. Comput Brain Behav 6, 601–625 (2023). https://doi.org/10.1007/s42113-023-00177-2
