
Attention, Perception, & Psychophysics, Volume 77, Issue 2, pp 659–680

Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies

  • Ami Eidels
  • James T. Townsend
  • Howard C. Hughes
  • Lacey A. Perry

Abstract

This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391–418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471–478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321–359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

Keywords

Response time, Accuracy, Parallel processing, Redundant targets, Interaction contrast, NO response probability contrast, Integration, Coactivation, OR task, AND task

How does the cognitive system combine information from separate sources? This question is central to basic human information processing and also possesses many potential applications, from clinical science to human factors and engineering. In the present study, we bring together two previously distinct approaches that can combine to provide strong converging evidence about some of the critical properties of human information processing. The approaches are applicable to the two primary measures of performance in psychological research, response accuracy and response times (hereafter, RTs), so when considered together they allow for strong inference regarding the mechanisms underlying cognitive performance.

With regard to the measure of RT, we employed Townsend and Nozawa’s (1995) systems factorial technology (hereafter, SFT) framework, and expanded it empirically as we will outline shortly. With regard to the measure of response accuracy, we built on the seminal efforts of Marilyn Shaw and colleagues (e.g., Mulligan & Shaw, 1980; Shaw, 1982). Her work, in her own terminology, was oriented toward the issue of perceptual integration versus separate processing of multiple inputs. A key ingredient in her approach was the NO response probability contrast (NRPC), which we will also define soon. The NRPC statistic permitted Shaw and colleagues to disconfirm several classes of models and provide support for one (Mulligan & Shaw, 1980; Shaw, 1982) for the stimuli that they considered.

However, several major questions remained unanswered. Because the experiments used to test their models were all accuracy-based, the models were never given a time-oriented, dynamic interpretation. For instance, Shaw’s response-accuracy measure cannot tell us whether processing is parallel or serial, or how well the system handles changes in processing workload over time. Thus, we sought to enlist both RT and accuracy measures in order to understand more completely the underlying properties of perceptual systems. An important advantage of the basic measures considered in this study (both the RT and accuracy measures) is that they are nonparametric, and thus robust across many specific parameterized models.

Another limitation of Shaw’s approach was that it was applicable only to tasks with an OR decision rule. Here we develop a new, appropriate contrast for AND tasks and derive the relevant model predictions. To illustrate the difference between OR and AND rules used to combine the information prior to a response, consider a display with two (or more) signals. In the OR case, participants may be asked to detect the presence of one signal or another or both. In the AND case, response is required only if signals are presented on both channels (e.g., channels A and B). Our study offers a complete investigation of human performance with OR and AND rules, in terms of both RT and accuracy, without making specific parametric assumptions.1

We begin by outlining several important properties of the human information-processing system, and the RT- and accuracy-based tools used to identify these properties. Our exposition of the experiments is then divided into an RT section (Study I) and an accuracy section (Study II) and, within each, into OR and AND experiments. We first present a new RT study (Exp. 1: OR) employing the double factorial design and the systems factorial methodology devised for it (Townsend & Nozawa, 1995; see the explication below). We further present a second new RT experiment with a different logical processing rule (Exp. 2: AND). In Study II, we report a third new experiment with the Shaw-type paradigm and analyses involving accuracy (Exp. 3: OR), and extend them by adding an experiment based on a different logical processing rule (Exp. 4: AND), analogous to the novel design in our RT paradigm (Exp. 2). Experiments 2 and 4 are, to the best of our knowledge, the first empirical (perceptual) tests of these factorial techniques in AND designs. Moreover, to analyze the AND accuracy data, we extended Shaw’s machinery and derived an appropriate accuracy measure for AND designs.

Following a discussion of the two sets of experiments, a unified theoretical framework is presented that permits placing the RT- and accuracy-based approaches within a common framework. To begin with, our general approach is outlined. Then we will be in a position to interpret Shaw’s models within that extended theory and methodology.

Basic properties of the human information-processing system

When presented with signals from multiple sources, say, from two different spatial locations, a number of aspects of information processing are basic for characterizing the perceptual or cognitive systems. First, within the issue of architecture, people may process both signals at the same time, that is, in parallel, or process one first and then the other, that is, serially. Second, the cognitive system may employ different stopping rules: It can complete the processing of both signals, also called the exhaustive stopping rule, or it can halt processing after the completion of only one signal, the minimum-time or first-terminating stopping rule. Third, workload capacity, denoted C(t), refers to processing efficiency as a function of workload (e.g., Townsend & Ashby, 1983; Townsend & Wenger, 2004b). Specifically, C(t) measures the relative cost versus benefit to performance when an additional channel or information source is added to the stimulus. Finally, independence (or the lack thereof) refers to possible interdependencies between different processing channels.

Systems factorial technology comprises a set of possible approaches to identifying the above system properties within a unified framework (Townsend, 1992; Townsend & Wenger, 2004a). This approach entails an interrelated taxonomy for elementary cognitive processes (Townsend, 1974; Townsend & Ashby, 1983), augmented by a mathematical theory and associated experimental methodology (Townsend & Nozawa, 1995; Townsend & Wenger, 2004a) to experimentally characterize the psychological system of interest. The systems factorial methodology employs RTs for assessing the different dimensions of processing.

The redundant-target task has proven useful in studying the above issues. In one version of such a task, participants may be presented with a target signal at one location, at another, or at both (there also exists a nontarget display, which, depending on the particular procedure, can be either blank or composed of nontarget items). Participants are instructed to respond affirmatively if they detect at least one target—that is, if a target appears in one location, or the other, or both (a disjunctive rule), hence the name OR task. The condition in which two targets appear is called a redundant-target (sometimes double-target) condition, since one target is sufficient for the participant to respond affirmatively.

In a different version, the same stimulus types as in the OR design may appear, but now the instructions are to respond YES if and only if both locations are occupied by targets—that is, if a target is in both one location and the other (a conjunctive rule), and therefore the name AND task. In each basic design, in principle, participants might extract information from the two spatial locations serially, in parallel, or in some hybrid fashion. The choice of stopping rule, however, should be dictated by the task demands, if participants are to conform to the instructions and perform accurately with most efficiency. Thus, in the OR design, whereas a participant might still process both targets on a redundant trial, due to choice or inability to do otherwise, such an option is not as efficient as stopping as soon as the first is completed. Conversely, in an AND task, the task imposes an exhaustive stopping rule.2

Besides the serial and parallel modes of processing, another important type of architecture is also parallel, but rather than each channel handling its own detection, the information or activation within each channel is added to that from the other channel in a subsequent pooled outlet. This final pooling channel possesses a detection criterion for deciding whether, at any point in time, there is sufficient support to report the presence of a signal from either or both input channels. In the redundancy literature, this type of system is typically referred to as a coactive system (e.g., Colonius & Townsend, 1997; Diederich & Colonius, 1991; Houpt & Townsend, 2011; Miller, 1982; Schwarz, 1994; Townsend & Nozawa, 1995). For a NO response to occur, the added activations must fail to meet the criterion. The logical notion of a stopping rule becomes vacuous in a coactive system, because the decision threshold only assesses activation on the single “final” channel.3 A coactive system with a single, common detection mechanism that aggregates activation from all channels before the decision is illustrated in Fig. 1b and can be compared with the separate-channels parallel system in Fig. 1a.
Fig. 1

A schematic illustration of parallel-independent (a) and coactive (b) architectures

Theoretical and empirical developments brought by the present investigation

The present investigation pursues the issues of architecture, stopping rule, capacity, and independence. It involves both the OR and the AND designs, and formulates an RT study as well as an accuracy study, the latter following the lines of Shaw and colleagues. Within the RT analyses, we can discern architecture, capacity, and stopping rule. Independence is indirectly and partially assessable within the RT paradigm (see the Workload capacity section below). In contrast, independence appears as a major component of the predictions in the accuracy analyses, and the stopping rule provides robust implications as well when the accuracy-based NRPC is used.

Architecture is less precisely assayed in the accuracy experiments. However, the combination of RT and accuracy together permit, under assumptions of parsimony, a unified account of both data sets.

Theoretical and empirical aspects are necessarily intertwined in this research, but segregation according to the major contributive features will aid the presentation. The present work offers a number of experimental contributions. Predominantly, the investigation employed a combination of RT and accuracy experimental paradigms in order to seek a unified interpretation, comparison, and linkage of RT and accuracy models.

We show that the linkage of RT with accuracy permits converging evidence about architecture, stopping rules, and performance efficiency that is not available with either measure alone.

Furthermore, to allow a direct comparison between performance on RT and accuracy tasks, it was important to use the same participants in all the conditions, which, of course, had not been done before. Finally, it was also desirable to employ a very simple and reasonably well understood basic paradigm and stimuli, to facilitate comparisons across the accuracy and RT designs. This was accomplished in such a way as to fulfill a need recognized by Shaw and colleagues, as detailed below. We used a simple dot-detection task close to that of Townsend and Nozawa (1995). That experiment recorded RTs in a redundant-target task with two dots, but used only the OR design, finding overwhelming evidence for parallel, minimum-time processing, with capacity varying from super to quite limited. The present RT study also contained an AND condition, which requires exhaustive (conjunctive) processing. We assess architecture and probe capacity with an alternative measure of capacity, appropriate for AND experiments. On that note, Shaw (1982) and Mulligan and Shaw (1980) had also focused only on OR decisions. Thus, our new AND accuracy experiment effectively extends both the theoretical and the empirical domain of the original articles.

The final experimental contribution of this article involves an adjustment to the procedure used by Mulligan and Shaw (1980), to potentially allow for integration of information. The data and analyses of Mulligan and Shaw supported independent-decision models, not integration as they defined it in their terminology. Their stimuli, however, were presented peripherally (40 deg off center). Miller (1982), Berryhill, Kveraga, Webb, and Hughes (2007), and others, using RTs and central presentations, have found evidence against race (separate-channels) models and in favor of coactive models. It is possible that information is integrated differently on and off the center of the perceptual field. Therefore, in Mulligan and Shaw’s words, “replication with stimuli at 0 deg azimuth . . . would be a worthwhile endeavor” (p. 476). Of course, no two dots can occupy the same place in space at 0 deg azimuth, but our stimuli were fairly close to it (±1 deg above and below a central fixation point).

On the theoretical side, we provide a unified approach and taxonomy for the Shaw models within our theory and models. Furthermore, we extend Shaw’s family of model predictions to AND paradigms. Finally, we prove that architecture and the associated temporal dynamics play virtually no role in the NRPC accuracy predictions. That is, different processing architectures (serial, parallel) predict the same NRPC pattern. However, both Shaw’s models and our models make differential predictions concerning the OR and AND stopping rules, independent of architecture. That is, the stopping rule is critical, but not the architecture. Our coactivation model and the Mulligan and Shaw (1980) weighted-integration model are excluded here because, as noted earlier, the concept of a stopping rule is inapplicable to them, so they make identical predictions for OR and AND designs.

In the upcoming section, we shall briefly recount two tests for assessing the stopping rule and architecture that are based on RT distributions: the mean interaction contrast and the survivor interaction contrast. We will further outline a third measure, the workload capacity coefficient, which assesses the processing capacity of the system as workload varies and, at the same time, indirectly informs us about architecture. We will then survey the methodology of Shaw and colleagues (Mulligan & Shaw, 1980; Shaw, 1982), in which response accuracy is the dependent variable. Then we present data from two studies, each involving two experiments, in which the same participants performed with high-accuracy (RT task) and parathreshold (accuracy task) stimuli. We now turn to a brief presentation of our theory-driven methodology.

Analysis of RT data: systems factorial technology

Systems factorial technology is a theory-driven experimental methodology that allows for a taxonomy of four critical characteristics of the cognitive system under study: architecture (serial vs. parallel), stopping rule (exhaustive vs. minimum time), workload capacity (limited, unlimited, or super), and channel independence. The first three are directly tested by our RT methodology. Independence, in contrast to the accuracy methodology, can only be assessed indirectly, through measures such as capacity (e.g., Townsend & Wenger, 2004b). Architecture and stopping rule are the primary characteristics targeted in this study, but we shall see that capacity and possibly channel dependencies may be implicated in the interpretations.

Systems factorial technology analysis is based on a factorial manipulation of two factors with two levels, and it utilizes two main statistics: the mean interaction contrast (MIC; Ashby & Townsend, 1980; Schweickert, 1978; Schweickert & Townsend, 1989; Sternberg, 1969) and the survivor interaction contrast (SIC; Townsend & Nozawa, 1995). The latter extension makes use of data at the distributional level rather than means, and therefore permits analysis at a more powerful and detailed level (Townsend, 1990; Townsend & Nozawa, 1988, 1995). Both statistics are independent of the underlying stochastic distribution. The only real assumption necessary to propel this methodology and to calculate the MIC and SIC is that of selective influence. The concept of selective influence was treated as being equivalent to statistical main effects at the level of means for many years, in the sense that, for instance, a higher level of salience of a stimulus will lead to a significantly faster mean RT. It is now acknowledged that selective influence must act at the level of ordering the RT distributions, not just means (Townsend & Ashby, 1983; Townsend & Schweickert, 1989; Townsend, 1990). Townsend, Dzhafarov, and their colleagues (Dzhafarov, 2003; Kujala & Dzhafarov, 2008) continue to investigate the underlying theory and underpinning conditions for selective influence.

Mean interaction contrast

The MIC statistic describes the interaction between the mean response times (MRTs) of two factors with two levels each, and can be presented as follows: MIC = (MRT_LL – MRT_LH) – (MRT_HL – MRT_HH) = MRT_LL – MRT_LH – MRT_HL + MRT_HH.

There are two subscript letters; the first denotes the level of the first factor (H = high, L = low), and the second indicates the level of the second factor. For the sake of concreteness, consider for example the visual target detection task that we used in Experiment 1. The two factors in this task may be the salience (contrast, intensity) levels of each of two bright dots displayed against a dark background. The first factor may then be the salience of a target presented at the top position, and the second factor may be the salience of a target presented at the bottom. Thus, the first and second subscript letters refer to the intensity level (H, L) of the top and bottom targets, respectively. Note that the MIC gives the difference between differences of mean RTs, which is literally the definition of an interaction. MIC = 0 indicates that the effect of one factor on processing latency is exactly the same, whether the level of the other factor is L or H. Conversely, if the two factors interact, then manipulating the salience of one factor would yield different effects, depending on the level of the other factor; hence, MIC ≠ 0. Underadditive interaction, or MIC < 0, is a typical prediction of parallel exhaustive processing, whereas overadditivity, or MIC > 0, is associated with parallel minimum-time processing or coactive models. Serial models, with either an exhaustive or a minimum-time stopping rule, predict additivity, or MIC = 0 (Townsend & Ashby, 1983; Townsend & Nozawa, 1995).
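As a concrete illustration, the double difference can be computed directly from the four factorial cell means. The mean RTs below are hypothetical values chosen only to display an overadditive pattern; they are not data from the reported experiments:

```python
def mean_interaction_contrast(mrt_ll, mrt_lh, mrt_hl, mrt_hh):
    """MIC = (MRT_LL - MRT_LH) - (MRT_HL - MRT_HH)."""
    return mrt_ll - mrt_lh - mrt_hl + mrt_hh

# Hypothetical mean RTs in ms; the first subscript is the salience level of
# the top target, the second that of the bottom target.
mic = mean_interaction_contrast(520.0, 470.0, 480.0, 460.0)
# (520 - 470) - (480 - 460) = 50 - 20 = 30 > 0: overadditive, the signature
# associated with parallel minimum-time (or coactive) processing. A serial
# model predicts additivity, e.g., (500 - 450) - (450 - 400) = 0.
```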

Survivor interaction contrast

The survivor interaction contrast function (SIC) is defined as SIC(t) = [S_LL(t) – S_LH(t)] – [S_HL(t) – S_HH(t)], where S(t) denotes the RT survivor function. In brief, to calculate the SIC, we divide the time scale into bins (say, of 10 ms each) and calculate the proportion of responses given within each time bin, to produce an approximation to the density function, f(t), and the cumulative probability function, F(t). That is, F(t) is equal to the probability that RT is less than or equal to t. The survivor function, S(t), is the complement of the cumulative probability function, S(t) = 1 – F(t), and tells us the probability that the process under study finishes later than time t. To produce the SIC, one calculates the difference between differences (hence, the interaction term, as the name suggests) of the survivor functions of the four corresponding factorial conditions, in the same way that the contrast is computed for the means, but does so for every bin of time.

Note that this statistic produces an entire function across the values of observed RTs. Furthermore, each architecture and stopping rule has a specific signature, with respect to the shape of the SIC function (Townsend & Nozawa, 1988, 1995). For example, the SIC function for a parallel minimum-time model is positive for all times t, whereas the SIC function of a coactive model starts negative and then crosses the abscissa and becomes positive.
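These signatures can be checked on synthetic data. The following Python sketch simulates a toy parallel minimum-time (race) model with exponential channel finishing times, mapping salience to channel rate; all rates, sample sizes, and the time grid are illustrative assumptions. For this model, the estimated SIC(t) is nonnegative throughout, and numerically integrating it over time approximates the MIC (Townsend & Nozawa, 1995):

```python
import numpy as np

def survivor(rts, grid):
    # Empirical survivor function: S(t) = P(RT > t)
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in grid])

def sic(rt_ll, rt_lh, rt_hl, rt_hh, grid):
    # SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)]
    return (survivor(rt_ll, grid) - survivor(rt_lh, grid)
            - survivor(rt_hl, grid) + survivor(rt_hh, grid))

rng = np.random.default_rng(1)

def race(rate_top, rate_bot, n=20000):
    # Toy parallel minimum-time model: RT is the faster of two independent
    # exponential channels; higher salience = higher rate (H = 2.0, L = 1.0)
    return np.minimum(rng.exponential(1 / rate_top, n),
                      rng.exponential(1 / rate_bot, n))

grid = np.linspace(0.0, 5.0, 500)
s = sic(race(1.0, 1.0), race(1.0, 2.0), race(2.0, 1.0), race(2.0, 2.0), grid)
# Riemann-sum approximation of the integral of SIC(t), which recovers the MIC
mic_approx = float((s[:-1] * np.diff(grid)).sum())
```

For these exponential channels the theoretical SIC is e^(-2t) − 2e^(-3t) + e^(-4t), which is nonnegative for all t, with integral 1/12, so the estimates should be positive up to sampling noise.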

Integrating the SIC function from zero to infinity is known to yield the MIC (Townsend & Nozawa, 1995), and in both models the MIC actually turns out to be positive. So it is the finer-grained SIC that allows for a decisive test between a coactive and a parallel minimum-time mode of processing. Although the MIC is not nearly as diagnostic as the SIC, it reinforces the SIC results and provides a means of statistically assessing any interactions associated with the SIC function. The MIC and SIC predictions of the parallel, serial, and coactive models are summarized in Table 1.
Table 1

Different processing models and their accuracy and response-time predictions

Mulligan and Shaw (1980) and our accuracy models’ predictions:

  Sharing – OR: NRPC > 0; log NRPC = 0. AND: NRPC < 0; log YRPC = 0.
  Mixture, all-or-none – OR: NRPC = 0. AND: ?
  Mixture, not all-or-none – OR: NRPC > 0. AND: ?
  Weighted integration – zNRPC = 0 (OR and AND alike).

Response-time models’ (e.g., Townsend & Nozawa, 1995) predictions:

  Parallel-independent – OR: MIC > 0; SIC(t) > 0. AND: MIC < 0; SIC(t) < 0.
  Serial – OR: MIC = 0; SIC(t) = 0. AND: MIC = 0; SIC(t) < 0 for small t, > 0 for large t.
  Coactivation – MIC > 0; SIC(t) < 0 for small t, > 0 for large t (OR and AND alike).

The accuracy contrasts are defined as:

  NRPC = P[NO|(Ø,Ø)] − P[NO|(T,Ø)] − P[NO|(Ø,B)] + P[NO|(T,B)]
  log NRPC = log P[NO|(Ø,Ø)] − log P[NO|(T,Ø)] − log P[NO|(Ø,B)] + log P[NO|(T,B)]
  log YRPC = log P[YES|(Ø,Ø)] − log P[YES|(T,Ø)] − log P[YES|(Ø,B)] + log P[YES|(T,B)]
  zNRPC = z(P[NO|(Ø,Ø)]) − z(P[NO|(T,Ø)]) − z(P[NO|(Ø,B)]) + z(P[NO|(T,B)])

where T and B denote targets at the top and bottom positions, Ø denotes a blank, and z denotes the z-score transformation.

Workload capacity

By workload capacity, we refer to the processing efficiency of the system as we increase the load of information by increasing the number of the to-be-processed targets. Townsend and Nozawa (1995) proposed a measure of performance under increases in workload that is based on hazard functions. The hazard function, h(t) = f(t)/S(t), captures the likelihood that a specific process (e.g., channel) will finish processing in the next instant, given that it has not yet done so. The larger the hazard function at any point in time, the higher the speed of processing. The integrated hazard function, H(t), is the integral of the hazard function from zero to t. The associated statistic is the capacity coefficient, which is computed as the ratio of the integrated hazard function from the double-target condition (i.e., two targets presented simultaneously, viewed here as “AB”) and the sum of the integrated hazard functions of the single-target conditions A and B: thus, C_OR(t) = H_AB(t)/[H_A(t) + H_B(t)]. The subscript OR indicates that this index is calculated for the OR task.

C_OR(t) is a measure benchmarked against a standard parallel process with stochastically independent channels. The benchmark model has unlimited capacity, in the sense that its channel speeds do not vary with the number of other channels that are operating. Any such model produces C_OR(t) = 1 for all times t ≥ 0. A standard serial model, in which processing of one target item must be completed before processing of the other commences and the two processes are independent, predicts, under certain conditions, C_OR(t) = .5.

It is important to note, however, that distinct architectures might produce the same capacity values. For example, a parallel model with inhibition across processing channels can produce C(t) values close to .5, which are characteristic of a serial system (Eidels, Houpt, Altieri, Pei, & Townsend, 2011). Such potential model mimicry emphasizes the importance of employing tests, such as the MIC and SIC, that assess architecture while avoiding conflation with capacity.

To sum up, C_OR(t) values of 1 imply that the system has unlimited capacity. C_OR(t) values below 1 define limited capacity, such that increasing the processing load (by increasing the number of targets on the display) takes a toll on the performance of one or both channels. Finally, if C_OR(t) > 1, then the system is said to have supercapacity: The processing efficiency of the individual channels actually increases as we increase the workload. Although capacity and independence are logically distinct, the capacity coefficient can be affected by dependencies. Thus, the prediction of a parallel model with positive cross-channel interactions is C_OR(t) > 1, as is the qualitative prediction of a coactive model (Eidels et al., 2011). Very strong inhibitory cross-channel interactions may lead to severely limited capacity, such that C_OR(t) < .5.
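For estimation, a convenient route uses the identity H(t) = −ln S(t) between the integrated hazard and the survivor function. The Python sketch below simulates the benchmark unlimited-capacity, independent parallel (race) model with exponential channel finishing times; the rates, sample size, and evaluation grid are illustrative assumptions. Its estimated C_OR(t) should hover around 1:

```python
import numpy as np

def cum_hazard(rts, grid):
    # Integrated hazard estimated via the identity H(t) = -ln S(t),
    # where S(t) = P(RT > t) is the empirical survivor function
    rts = np.asarray(rts)
    s = np.array([(rts > t).mean() for t in grid])
    return -np.log(np.clip(s, 1e-12, None))

def capacity_or(rt_ab, rt_a, rt_b, grid):
    # C_OR(t) = H_AB(t) / [H_A(t) + H_B(t)]
    denom = cum_hazard(rt_a, grid) + cum_hazard(rt_b, grid)
    return cum_hazard(rt_ab, grid) / np.where(denom > 0, denom, np.nan)

rng = np.random.default_rng(2)
n = 50000
a = rng.exponential(1.0, n)   # RTs, single target on channel A
b = rng.exponential(1.0, n)   # RTs, single target on channel B
# Double-target condition: an independent, unlimited-capacity race
ab = np.minimum(rng.exponential(1.0, n), rng.exponential(1.0, n))
grid = np.linspace(0.2, 3.0, 200)
c_or = capacity_or(ab, a, b, grid)   # should stay near 1 across the grid
```

Slowing the double-target channels in the simulation would drive the estimate below 1 (limited capacity), mirroring the interpretation above.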

Townsend and Wenger (2004b) developed a comparable capacity index for the AND task: C_AND(t) = [K_A(t) + K_B(t)]/K_AB(t). Here, K(t) is the integral of a different kind of “hazard” function, one that captures the likelihood of having just finished in the last instant, rather than the instant ahead, conditioned on the event that the process has been completed by around time t: k(t) = f(t)/F(t), and K(t) = ∫k(t′)dt′, integrated up to t. In C_AND(t), the numerator and denominator are arranged such that the interpretation is identical to that of the OR capacity index: C_AND(t) values above, at, or below 1 imply super, unlimited, or limited capacity, respectively.
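Estimation can proceed analogously: since d/dt ln F(t) = f(t)/F(t) = k(t), ln F(t) serves as a plug-in estimate of K(t) (up to the convention for the lower limit of integration). In the sketch below, with assumed exponential channels, independent parallel channels processed exhaustively (the slower channel determines the RT) yield C_AND(t) near 1:

```python
import numpy as np

def cum_reverse_hazard(rts, grid):
    # K(t) estimated as ln F(t); note d/dt ln F(t) = f(t)/F(t) = k(t)
    rts = np.asarray(rts)
    f = np.array([(rts <= t).mean() for t in grid])
    return np.log(np.clip(f, 1e-12, None))

def capacity_and(rt_ab, rt_a, rt_b, grid):
    # C_AND(t) = [K_A(t) + K_B(t)] / K_AB(t)
    denom = cum_reverse_hazard(rt_ab, grid)
    return ((cum_reverse_hazard(rt_a, grid) + cum_reverse_hazard(rt_b, grid))
            / np.where(denom != 0, denom, np.nan))

rng = np.random.default_rng(3)
n = 50000
a = rng.exponential(1.0, n)   # RTs, single target on channel A
b = rng.exponential(1.0, n)   # RTs, single target on channel B
# Exhaustive (AND) processing by independent parallel channels: both must
# finish, so the double-target RT is the maximum of the two channel times
ab = np.maximum(rng.exponential(1.0, n), rng.exponential(1.0, n))
grid = np.linspace(0.2, 3.0, 200)
c_and = capacity_and(ab, a, b, grid)   # should stay near 1 across the grid
```

The benchmark works because, for independent channels, F_AB(t) = F_A(t)·F_B(t), so ln F_AB = ln F_A + ln F_B and the ratio equals 1.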

Miller (1978, 1982) suggested an upper bound on the RT distribution when independent channels are involved in a race (i.e., in an OR task). He suggested that a violation of this bound, colloquially known as Miller’s race-model inequality, is evidence against race models and in favor of coactivation.4 Townsend and Eidels (2011) showed that the bound can, in fact, be viewed as a measure of workload capacity rather than architecture, and that it can be mapped onto the same space as C(t). Since Miller’s race-model inequality is a conservative test (a model can have supercapacity and still not violate the inequality), we instead used the more refined C(t) measure.
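As a sketch, the bound can be checked by comparing the empirical double-target CDF with the sum of the single-target CDFs; F_AB(t) ≤ F_A(t) + F_B(t) must hold for any race of independent channels. With simulated independent race data (the rates here are illustrative assumptions), the contrast never rises above zero beyond sampling noise:

```python
import numpy as np

def cdf(rts, grid):
    # Empirical cumulative distribution function: F(t) = P(RT <= t)
    rts = np.asarray(rts)
    return np.array([(rts <= t).mean() for t in grid])

def race_inequality_contrast(rt_ab, rt_a, rt_b, grid):
    # Miller's bound: F_AB(t) <= F_A(t) + F_B(t). Positive values of this
    # contrast indicate a violation, i.e., evidence against independent
    # race models (equivalently, of supercapacity).
    return cdf(rt_ab, grid) - (cdf(rt_a, grid) + cdf(rt_b, grid))

rng = np.random.default_rng(4)
n = 50000
a = rng.exponential(1.0, n)
b = rng.exponential(1.0, n)
ab = np.minimum(rng.exponential(1.0, n), rng.exponential(1.0, n))
viol = race_inequality_contrast(ab, a, b, np.linspace(0.0, 3.0, 200))
# For an independent race, F_AB = F_A + F_B - F_A*F_B, so the contrast is
# -F_A(t)*F_B(t) <= 0 everywhere (up to sampling noise)
```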

Analysis of response accuracy data: the NRPC and Mulligan and Shaw’s information-sampling models

NO response probability contrast

We next consider the application of a factorial method to a psychophysical experiment in which response accuracy is the dependent variable. As we noted earlier, the present development represents an extension of the work of Shaw and colleagues (e.g., Mulligan & Shaw, 1980; Shaw, 1982) within the confines of our systems-factorial-oriented approach. Mulligan and Shaw analyzed the probabilities of NO responses in each of four stimulus conditions using the following formula, which, as we related earlier, we call the NRPC. Let Ø represent a blank or null stimulus, A the presence of a target at the top of the display, and B the presence of a target at the bottom of the display. Then we define the NO response probability contrast as the double difference, NRPC = P[NO|(Ø,Ø)] – P[NO|(A,Ø)] – (P[NO|(Ø,B)] – P[NO|(A,B)]).

The first term, P[NO|(Ø,Ø)], represents the probability of a NO response given no signal. The term P[NO|(A,B)] represents the probability of a NO response given signals at both the top and bottom positions (i.e., a double-target display), and so on.
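Computationally, the NRPC is the same double difference used for the RT contrasts, applied to NO-response probabilities. The probabilities below are hypothetical values for illustration only:

```python
def nrpc(p_no):
    """NO response probability contrast (double difference).

    p_no maps stimulus conditions (top, bottom) to P(NO response);
    '0' denotes a blank (the null stimulus Ø).
    """
    return (p_no[('0', '0')] - p_no[('A', '0')]
            - p_no[('0', 'B')] + p_no[('A', 'B')])

# Hypothetical NO-response probabilities, for illustration only
p = {('0', '0'): 0.90, ('A', '0'): 0.30, ('0', 'B'): 0.30, ('A', 'B'): 0.10}
contrast = nrpc(p)   # 0.90 - 0.30 - 0.30 + 0.10 = 0.40 (overadditive)
```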

Mulligan and Shaw (1980) derived predictions for the probabilities of NO responses (as well as for their logarithmic and z-score transformations) for four types of models. The predictions of these models are summarized in Table 1. A description of these models follows shortly, but for the reader’s convenience, we also summarize their formulas in Table 2.
Table 2

Formal description of the four types of processing models that were studied by Mulligan and Shaw (1980)

  Independent-decision sharing model:
    P(“NO”) = P(X_A < β_A) × P(X_B < β_B)

  Independent-decision mixture, all-or-none model:
    P(“NO”) = a × P(X_A < β_A) + (1 − a) × P(X_B < β_B)

  Independent-decision mixture, not-all-or-none model:
    P(“NO”) = a × P(X_A < β_A) × P(X_B < β′_B) + (1 − a) × P(X_A < β′_A) × P(X_B < β_B)

  Weighted-integration model:
    P(“NO”) = P([w × X_A + (1 − w) × X_B] < β)

The equation for each model gives P(“NO”), the probability of responding “no signal.” For all models, X_A and X_B are the random variables that represent the numbers of counts on the two processing channels, A and B, and β_A and β_B are the respective decision criteria. The last model has only one criterion, β. See the text for clarification of the remaining notation.

We shall employ Mulligan and Shaw’s (1980) terminology to facilitate connections with their earlier articles, and subsequently place the models within our approach. It should be noted that on occasion the language in their articles may seem to suggest a wider interpretation of the models than was mathematically defined in their equations. We must confine our analyses to the published mathematical interpretations.

All four models assume that, much like the static theory of signal detectability (e.g., Green & Swets, 1966), information is compared with one or more criteria. The first three models postulate that, on each channel, the sampled information is compared against a criterion, with the comparisons being stochastically independent across channels. These three models are said to differ from one another in the way that attention is allocated to multiple processing channels (say, to visual and auditory modalities, or in our case, display positions). In the fourth model, information from the two channels is averaged prior to comparison, and this single, integrated value is then compared to a single criterion. Next, we examine each of the models and its NRPC predictions more closely.

Independent-decision sharing model

In this model (also dubbed the fixed-sharing model in Shaw’s 1982 article), the participant is viewed as sharing attention between channels on each trial, and the proportion of attention assigned to each channel is assumed to remain constant across trials. The formula for the probability of a NO response is P(NO) = P(X_A < β_A) · P(X_B < β_B). If each processing channel accumulates counts until a prescribed number is reached, then X_A and X_B are the random variables that represent the numbers of counts on processing channels A and B, and β_A and β_B are the respective decision criteria.

Mulligan and Shaw (1980) showed that this model predicts an overadditive NRPC (i.e., NRPC > 0) and additivity after logarithmic transformation, such that log(P[NO|(Ø, Ø)]) – log(P[NO|(A, Ø)]) – log(P[NO|(Ø, B)]) + log(P[NO|(A, B)]) = 0 (see also Table 1, “Sharing” model). Our independent, unlimited-capacity parallel (race) models, when adapted to accuracy designs, can make this type of prediction (e.g., Townsend & Ashby, 1983, Chap. 9). Now, within our approach “sharing” usually implies limited capacity, since it seems to suggest a fixed or bounded source of capacity in which H_AB(t) would be less than H_A(t) + H_B(t) (see the Workload capacity section above). However, since we confine our discussion to the mathematical expression of the model, as presented by Mulligan and Shaw, it should be viewed for all intents and purposes as having unlimited capacity.
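Both predictions follow directly from the product form of the model and can be checked numerically. In the sketch below, the per-channel probabilities are hypothetical values chosen for illustration, not estimates from data:

```python
import math

# Hypothetical probabilities that each channel stays below its criterion
# ("no detection"), on no-signal versus signal trials:
q_a0, q_a1 = 0.95, 0.20   # channel A (top)
q_b0, q_b1 = 0.90, 0.25   # channel B (bottom)

# Sharing model: P(NO) is the product of the two channel probabilities.
p_no_none   = q_a0 * q_b0   # (0,0) display
p_no_top    = q_a1 * q_b0   # (A,0) display
p_no_bottom = q_a0 * q_b1   # (0,B) display
p_no_both   = q_a1 * q_b1   # (A,B) display

nrpc = p_no_none - p_no_top - p_no_bottom + p_no_both
log_nrpc = (math.log(p_no_none) - math.log(p_no_top)
            - math.log(p_no_bottom) + math.log(p_no_both))

print(nrpc > 0)                # True: overadditive NRPC
print(abs(log_nrpc) < 1e-12)   # True: additivity after the log transform
```

The overadditivity is exact here because NRPC factors as (q_a0 − q_a1)(q_b0 − q_b1), a product of two positive differences.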

Independent-decision all-or-none probability mixture model

This model is a probability mixture: the overall probability of a NO response is a weighted average of the probabilities that the information in each of the two channels fails to reach its criterion.

The formula relating the individual channels to the overall likelihood of a NO response is P(NO) = α · P(X_A < β_A) + (1 – α) · P(X_B < β_B). Information on each trial is obtained from only one channel: from channel A with probability α, or from channel B with probability 1 – α. Overall performance is then a weighted mixture of the single-channel performance across trials. This model can be viewed as an attention-switching model in which, on any single trial, attention is fully allocated to one channel and not the other, but can switch between channels on subsequent trials. Of course, the unattended source of information, or channel, has no influence on the response probabilities.

The prediction of this version of the mixture model, also termed the all-or-none mixture model (Shaw, 1982), is NRPC = 0 (Table 1, “Mixture: All or none”). Within our taxonomy, this type of model would be classified as a serial model that, with some probability, selects one of the channels to process and stops immediately after completion (although, as we will show in the General discussion, all four models in Table 1 can be viewed as either serial or parallel). This type of behavior is most appropriate when responding YES on redundant-target trials—this is called first-terminating (or minimum-time) processing. Of course, a NO response requires rejection on both channels, and hence demands exhaustive processing to have a chance for optimal performance. We would expect accuracy with this kind of model to be suboptimal.5
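The exact cancellation behind the NRPC = 0 prediction is easy to verify numerically; the mixing weight and per-channel probabilities below are again hypothetical:

```python
# All-or-none mixture: on each trial only one channel is consulted,
# channel A with probability alpha, channel B otherwise. All values
# below are hypothetical.
alpha = 0.6
q_a0, q_a1 = 0.95, 0.20   # channel A misses: no-signal vs. signal trials
q_b0, q_b1 = 0.90, 0.25   # channel B misses

def p_no(sig_a, sig_b):
    qa = q_a1 if sig_a else q_a0
    qb = q_b1 if sig_b else q_b0
    return alpha * qa + (1 - alpha) * qb

nrpc = (p_no(False, False) - p_no(True, False)
        - p_no(False, True) + p_no(True, True))
print(abs(nrpc) < 1e-12)   # True: NRPC = 0 for any alpha and any q values
```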

Independent-decision not-all-or-none mixture model

In this version of a mixture model, attention is directed primarily to one source, but some information from the unattended source is used in the detection decision. Attention affects the criterion value, such that β is the criterion for the attended channel and β′ is the criterion for the less (but still) attended channel. Its formula is P(NO) = α · P(X_A < β_A) · P(X_B < β′_B) + (1 – α) · P(X_A < β′_A) · P(X_B < β_B). The prediction of this version is NRPC > 0 (Table 1, “Mixture: Not all or none”). This type of model would be called a compound-processing model in our approach (see Townsend & Ashby, 1983, Chap. 5), since this kind of prediction would follow from a probability mixture of parallel systems.

Weighted-integration model

In this model, evidence from the separate processing channels is combined as a weighted average prior to decision and then compared to a single criterion, β. Its formula is P(NO) = P([w · X_A + (1 – w) · X_B] < β). Note that this formula is compatible with a system with attentional weight w placed on channel A and 1 – w on channel B. This model is a relative of the coactive model discussed earlier, except that the convention for coactive models has become a simple addition of the information or activation from the separate channels (e.g., Colonius & Townsend, 1997). In addition, in the particular instantiation studied by Mulligan and Shaw (1980), the probability distributions of the internal random variables (evidence or activation in a channel) are assumed to be Gaussian. With an additional, and rather strong, assumption of equal variances of the signal and no-signal distributions, the weighted-integration model predicts additivity of the z, or inverse-normal, transformations of the probabilities of a NO response: z(P[NO|(Ø, Ø)]) – z(P[NO|(A, Ø)]) – z(P[NO|(Ø, B)]) + z(P[NO|(A, B)]) = 0 (Shaw, 1982, pp. 373–376).
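The z-additivity prediction can likewise be checked numerically. The weight, criterion, and channel means below are hypothetical, and the equal-variance assumption is built in:

```python
from statistics import NormalDist

nd = NormalDist()
w, beta = 0.6, 1.5            # hypothetical attention weight and criterion
mu = {False: 0.0, True: 2.0}  # channel mean without / with a signal
sigma = 1.0                   # equal variance in every condition

def p_no(sig_a, sig_b):
    # D = w*X_A + (1 - w)*X_B is Gaussian; respond NO when D < beta.
    mean_d = w * mu[sig_a] + (1 - w) * mu[sig_b]
    sd_d = sigma * (w ** 2 + (1 - w) ** 2) ** 0.5
    return nd.cdf((beta - mean_d) / sd_d)

z = nd.inv_cdf   # the inverse-normal (z-score) transform
z_nrpc = (z(p_no(False, False)) - z(p_no(True, False))
          - z(p_no(False, True)) + z(p_no(True, True)))
print(abs(z_nrpc) < 1e-9)   # True: z NRPC = 0 under equal variance
```

Additivity holds because z(P(NO)) is linear in the channel means whenever the variance of the integrated variable is the same in all four conditions.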

To recap, the independent-sharing model and the not-all-or-none mixture model both predict NRPC > 0. The former also predicts that the double difference of the log transforms of the probabilities of the NO responses will equal 0. The model with an all-or-none mixture predicts NRPC = 0. Finally, the weighted-integration model, a relative of the coactive model, predicts z NRPC = 0. These predictions are summarized, as we mentioned, on the left-hand side of Table 1.

We next present the two experiments of Study I, in which we employed RTs to directly assess architecture, stopping rule, and workload capacity. We then proceed to Study II (Exps. 3 and 4), in which the same individuals performed in accuracy tasks.

Study I: the OR and AND response-time experiments

Method

Participants

Ten Indiana University students (two graduates and eight undergraduates; seven females, three males) were paid to participate in the study. They had normal or corrected-to-normal vision. Their ages ranged between 22 and 30 years. The participants performed in eight experimental sessions of approximately an hour each.

Stimuli

There were nine possible stimulus displays: four types of double-target displays, four types of single-target displays, and one no-target display. In a double-target display, two dots, with a diameter of 0.2° each, were located on a vertical meridian, equally spaced above and below a fixation point at an elevation of ± 1°. We refer to these targets as the top (A) and bottom (B) signals, respectively. There were two levels of target luminance (67 cd/m2 and 0.067 cd/m2), chosen after pilot testing to ensure a robust effect on the RTs (to allow for testing the interaction contrasts). Each target could appear in high (H) or low (L) luminance, thus comprising a total of four possible combinations (HH, HL, LH, and LL). On a single-target display, a target appeared at the top or the bottom position, but not at both. The single dot could appear in high or low luminance. The target-absent display consisted of a blank black screen.

The stimuli were generated via Microsoft Painter by an IBM-compatible (Pentium 4) microcomputer and displayed binocularly on a super-VGA 15-in. color monitor with a 1,024 × 768 resolution using DMDX software (Forster & Forster, 2003). On a trial, a single-pixel fixation point (subtending .05° of visual angle at a viewing distance of 50 cm; luminance of 0.067 cd/m2) was presented at the center of the screen for 500 ms, followed by a blank black screen (500 ms), and then followed by the stimulus display. The stimulus appeared on the screen for 100 ms or until a response was given and was then replaced by a blank screen. Participants were instructed to respond as quickly as possible. The response sampling began with the onset of the stimulus display and continued for 4,000 ms. The intertrial interval was 1,000 ms. Participants were asked to respond affirmatively by pressing the right mouse key with their right index finger, and to respond NO by pressing the left mouse key with their left index finger.

The probabilities of presenting both targets, the top target alone, the bottom target alone, or no target at all were equal to .25.6 The probabilities associated with each target luminance were .5.

Procedure

The participants were tested individually in a completely dark room, after 10 min of darkness adaptation. Each participant performed in both the OR and the AND experiments. In the OR experiment, participants were asked to respond affirmatively if they detected the presence of at least one target (i.e., two targets, single target on top, or single target at the bottom), and to respond NO otherwise. In the AND experiment, participants were asked to respond affirmatively if and only if they detected the presence of two targets, and to respond NO otherwise. The order of the experiments was counterbalanced between participants. Each experiment consisted of four sessions, each about an hour long. The sessions were held on consecutive days (excluding weekend days). Feedback on response accuracy was given at the end of each session. Each session started with a practice block of 100 trials, followed by five experimental blocks of 160 trials each (with 2-min breaks between blocks). The order of the trials was randomized within a block. Overall, 3,200 trials were collected for each participant in each experiment (OR, AND), a number large enough to permit tests at the distributional level.

Results and discussion

One participant failed to reach the accuracy criterion (90 %), and her data were therefore excluded from the analysis. Accuracy for the nine remaining participants was high in both the OR and the AND experiments. The overall error rate, across tasks and participants, was 3.6 %, and no RT–error trade-off was observed. Analyses of the RT data were restricted to correct responses in both experiments. Responses above 900 ms or below 160 ms were omitted from the analysis (based on pilot testing to approximate criteria of ± 2.5 SDs away from the mean). Because the primary interest in Experiments 1 and 2 was in the patterns of RTs, we do not refer to accuracy in this section. Finally, because different individuals might employ different processing architectures (e.g., serial, parallel) or have different capacity limitations, we analyzed and report separately the results of individual participants. Group results (means) are presented at the bottoms of Tables 3, 4, 5 and 6 below, so as to provide an overview, but they were not subjected to separate inferential analysis.
Table 3

Mean response times (in milliseconds) in Experiment 1 (OR task)

Participant | Double Target | Single Target Top | Single Target Bottom | No Target | HH | HL | LH | LL | MIC | F
BJ    | 300 | 305 | 328 | 339 | 288 | 293 | 296 | 322 | 21 | 6.2*
RS    | 297 | 334 | 334 | 518 | 270 | 282 | 282 | 356 | 62 | 54.8***
JS    | 396 | 421 | 439 | 530 | 365 | 384 | 401 | 435 | 15 | 1.2
MB    | 286 | 311 | 321 | 484 | 266 | 270 | 277 | 331 | 51 | 28.6***
RM    | 356 | 371 | 394 | 507 | 336 | 344 | 352 | 394 | 34 | 9.0**
LB    | 425 | 479 | 494 | 652 | 371 | 404 | 411 | 515 | 71 | 33.8***
JG    | 240 | 263 | 275 | 433 | 215 | 222 | 226 | 295 | 63 | 147.0***
WY    | 296 | 344 | 333 | 492 | 259 | 280 | 274 | 370 | 75 | 62.8***
AW    | 230 | 247 | 262 | 462 | 216 | 217 | 211 | 274 | 62 | 112.0***
Means | 314 | 342 | 353 | 491 | 287 | 300 | 303 | 366 | 50 |

The HH, HL, LH, and LL columns give mean RTs for the subset of double-target trials, by the luminance (high, low) of the top and bottom targets. * p < .05, ** p < .01, *** p < .001

Table 4

Mean response times (in milliseconds) in Experiment 2 (AND task)

Participant | Double Target | Single Target Top | Single Target Bottom | No Target | HH | HL | LH | LL | MIC | F
BJ    | 353 | 355 | 317 | 328 | 326 | 359 | 371 | 358 | –46 | 37.1***
RS    | 467 | 442 | 430 | 451 | 416 | 470 | 489 | 500 | –43 | 10.1**
JS    | 492 | 438 | 417 | 348 | 431 | 523 | 507 | 515 | –83 | 46.3***
MB    | 398 | 379 | 365 | 408 | 358 | 413 | 413 | 417 | –51 | 14.9***
RM    | 505 | 408 | 392 | 432 | 467 | 515 | 520 | 523 | –44 | 13.6***
LB    | 556 | 598 | 604 | 644 | 481 | 594 | 574 | 580 | –107 | 82.1***
JG    | 396 | 350 | 329 | 333 | 342 | 423 | 414 | 416 | –78 | 54.6***
WY    | 496 | 439 | 431 | 390 | 426 | 532 | 535 | 503 | –138 | 129.3***
AW    | 423 | 405 | 399 | 395 | 355 | 473 | 449 | 435 | –131 | 153.8***
Means | 454 | 424 | 409 | 414 | 400 | 478 | 475 | 472 | –80 |

The HH, HL, LH, and LL columns give mean RTs for the subset of double-target trials, by the luminance (high, low) of the top and bottom targets. ** p < .01, *** p < .001

Table 5

Probabilities of NO responses in Experiment 3 (OR task) for four factorial conditions (no target, single target on top and at the bottom, and targets at both the top and bottom positions) and the pertinent NO response probability contrast (NRPC) values

Participant | P[NO|(T,B)] | P[NO|(T,Ø)] | P[NO|(Ø,B)] | P[NO|(Ø,Ø)] | NRPC | Log NRPC | z NRPC
BJ    | .01 | .17 | .03 | .93 | .75** | 0.81 | 2.05**
RS    | .03 | .31 | .07 | .97 | .63** | 0.37 | 2.09**
JS    | .01 | .13 | .03 | .97 | .81** | 0.22 | 2.31**
MB    | .01 | .44 | .03 | .91 | .46** | –0.18 | 1.11**
RM    | .04 | .35 | .04 | .70 | .35** | 0.68 | 0.90**
LB    | .04 | .20 | .04 | .91 | .71** | 1.49** | 2.16**
JG    | .01 | .33 | .04 | .92 | .57** | –0.21 | 1.34**
WY    | .02 | .07 | .06 | .97 | .86** | 1.55** | 2.86**
AW    | .00 | .11 | .04 | .95 | .81** | –1.35 | 1.62*
Means | .02 | .23 | .04 | .91 | .66 | |

An NRPC value significantly different from zero is evidence against the all-or-none mixture model (and NRPC > 0 is a prediction of the sharing model). A log transformation of NRPC that is different from zero is evidence against the sharing model (in fact, all independent models). A z-score transformation of NRPC that is different from zero is evidence against the integration model. * p < .05, ** p < .01

Table 6

Probabilities of NO responses in Experiment 4 (AND task) for four factorial conditions (no target, single target on top and at the bottom, and targets at both the top and bottom positions), and the pertinent NO response probability contrast (NRPC) values

Participant | P[NO|(T,B)] | P[NO|(T,Ø)] | P[NO|(Ø,B)] | P[NO|(Ø,Ø)] | NRPC | Log YRPC | z NRPC
BJ    | .24 | .98 | .97 | 1.00 | –.72** | 0.71 | –1.75**
RS    | .42 | .97 | .98 | 1.00 | –.53** | 1.58 | –1.56**
JS    | .47 | .98 | .97 | 1.00 | –.48** | 0.05 | –0.99
MB    | .30 | 1.00 | .96 | 1.00 | –.66** | 1.38 | –1.81**
RM    | .55 | .94 | .84 | .95 | –.28** | 0.75 | –0.73**
LB    | .23 | .98 | .79 | 1.00 | –.54** | –1.72 | –0.49
JG    | .47 | 1.00 | .94 | 1.00 | –.48** | 2.27 | –1.67*
WY    | .48 | .96 | .94 | .98 | –.45** | 1.68** | –1.39**
AW    | .23 | .97 | .94 | 1.00 | –.68** | 0.75 | –1.58**
Means | .38 | .98 | .93 | .99 | –.53 | |

An NRPC value significantly different from zero is evidence against the all-or-none mixture model (and NRPC < 0 is a prediction of the sharing model). A log transformation of the YES response contrast (log YRPC) that is different from zero is evidence against the sharing model. A z-score transformation of NRPC that is different from zero is evidence against the integration model. * p < .05, ** p < .01

Experiment 1 (OR)

The mean RTs for the individual participants are presented in Table 3. For data pooled across participants (as well as for each of the individual participants), mean RTs were fastest on the double-target trials (314 ms), next fastest on the single-target trials (342 and 353 ms for the top and bottom single-target trials, respectively), and slowest on the target-absent trials (491 ms). This order was found to hold also at the survivor-function level, which implies a stronger form of stochastic dominance (cf. Townsend, 1990). These results are compatible with those of Townsend and Nozawa (1995).

We performed a 2 × 2 analysis of variance (ANOVA) on the RT data of individual participants, with the Presence Versus Absence of the Top Target as one factor, and the Presence Versus Absence of the Bottom Target as a second factor. The ANOVA revealed significant main effects at p < .01 for both factors and for all participants. That is, participants were faster to respond when a target was presented at the top position (328 ms when averaged across all participants) than for trials with no target on top (422 ms). Similarly, they were faster to respond when a target was presented at the bottom position (333.5 ms across all participants) than for trials with only a blank at the bottom (416.5 ms). For eight participants (all except BJ), we also observed significant Target Top × Target Bottom interactions, at p < .001, likely driven by the very slow responses on the no-target trials.

Comparing performance on double-target versus single-target trials results in the capacity index, C OR(t). C OR(t) plots for our individual participants are presented in Fig. 2. Most of the C OR(t) coefficient values for each participant lie above .5 and below 1, suggesting moderately limited capacity throughout the processing interval. Note that values of C OR(t) above .5, given approximately equal performance on the two signals, also indicate a so-called race benefit, meaning that performance is better than for either target alone. The same pattern was observed for all individual observers. These results are qualitatively the same as those from the Townsend and Nozawa (1988) experimental condition in which double-target stimuli were presented as two dichoptic dots in corresponding retinal locations in the two eyes. It is also worth noting that reasonable base-time components of RT (all the contributions to RTs not involving the processes under inspection) will not lead to substantial distortion of the capacity statistics. However, minor decreases in C OR(t) could be related to that variable (Townsend & Honey, 2007).
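For readers who wish to reproduce such analyses, C_OR(t) can be sketched from raw RT samples via estimated cumulative hazard functions, H(t) = −log S(t), with C_OR(t) = H_AB(t) / [H_A(t) + H_B(t)]. The RT samples and the time point below are toy values, not data from the experiment:

```python
import math

def cum_hazard(rts, t):
    """Estimated cumulative hazard H(t) = -log S(t); requires S(t) > 0."""
    survivors = sum(rt > t for rt in rts)
    return -math.log(survivors / len(rts))

def c_or(rt_double, rt_top, rt_bottom, t):
    """C_OR(t) = H_AB(t) / (H_A(t) + H_B(t)); 1 marks unlimited capacity."""
    return cum_hazard(rt_double, t) / (
        cum_hazard(rt_top, t) + cum_hazard(rt_bottom, t))

# Toy RT samples (ms); real use would plug in one participant's trials.
top    = [320, 340, 360, 380, 400, 420]
bottom = [330, 350, 370, 390, 410, 430]
double = [290, 300, 315, 330, 350, 370]
print(round(c_or(double, top, bottom, 355), 2))   # 2.21 for these toy numbers
```

In practice the estimate is computed over a grid of t values and smoothed, as in the bootstrapped functions plotted in Fig. 2.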
Fig. 2

Capacity coefficient values for individual observers in Experiment 1 (OR task). The thin dashed lines represent ± 1 standard error of the estimate of the capacity coefficient function (estimated by bootstrapping)

To better inform the reader about the (in)stability of the estimates of the capacity function, we plot in thin dashed lines the standard errors of estimation (estimated by bootstrapping; see Silverman, 1986, and Van Zandt, 2002). To overcome undesired effects of outliers, we estimated C OR(t) for the time range containing 99 % of the observations (separately for each individual). The estimations for this range were highly reliable, as is evident from the exceptionally tight error bounds.

Focusing on the subset of double-target trials, HH trials were processed faster, on average (287 ms), than were HL (300 ms) and LH (303 ms) trials. LL trials were the slowest (366 ms). Statistically significant main effects are necessary to draw architectural inferences from the interactions. A 2 × 2 ANOVA for top-target salience (high, low) by bottom-target salience (high, low) revealed significant main effects at p < .001 for both factors and for all participants: Responses were faster when the top target was highly salient (293.5 ms when averaged across all participants) than on trials in which the top target had low salience (334.5 ms). Similarly, responses were faster when the bottom target was highly salient (295 ms across all participants) than on trials in which the salience of the bottom target was low (333 ms). This information also supports the validity of the selective influence of the salience factor.

The most pertinent test for the purpose of model diagnosis is the interaction of the salience manipulations of the top and bottom targets, which constitutes the mean interaction contrast (MIC) and survivor interaction contrast (SIC) analyses. The analysis revealed significant interactions for all participants but one (JS), ruling out serial models as a viable explanation for the processing of the top and bottom targets. MIC values and the corresponding F values are presented in the two rightmost columns of Table 3. The MIC values were positive for all participants, supporting parallel processing with a minimum-time stopping rule. Applying the interaction contrast at the distributional level resulted in SIC functions that were positive for all participants, for all times t (except for JS, for some ts), further bolstering a parallel minimum-time mode of processing (Fig. 3).
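Both contrasts are simple double differences over the four double-target salience conditions and can be sketched in a few lines; the helper names are ours, and only BJ's tabled cell means are used as a check:

```python
def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    return sum(rt > t for rt in rts) / len(rts)

def mic(hh, hl, lh, ll):
    """Mean interaction contrast: (M_LL - M_LH) - (M_HL - M_HH)."""
    m = lambda rts: sum(rts) / len(rts)
    return (m(ll) - m(lh)) - (m(hl) - m(hh))

def sic(hh, hl, lh, ll, t):
    """Survivor interaction contrast at time t (same double difference)."""
    return ((survivor(ll, t) - survivor(lh, t))
            - (survivor(hl, t) - survivor(hh, t)))

# Feeding BJ's mean RTs from Table 3 in as one-point samples reproduces
# the tabled MIC:
print(mic([288], [293], [296], [322]))   # 21.0
```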
Fig. 3

Survivor interaction contrast functions for individual observers in Experiment 1 (OR task). The thin dashed lines represent ± 1 standard error of the estimate (estimated by bootstrapping). The scalings of the y-axes may vary slightly across individual plots

Experiment 2 (AND)

The mean RTs for individual participants are presented in Table 4. For data pooled across participants, the mean RT on the double-target condition was the slowest (454 ms). This was also true at the individual level for six out of the nine participants. The exact order of the remaining factorial conditions (single-target and no-target) varied across participants.

We performed the same ANOVA that we used for the OR data on the RT data from the AND experiment, with the factors Top Target (present, absent) by Bottom Target (present, absent). The ANOVA revealed significant main effects at p < .01 for both factors for seven out of the nine participants. These effects, interestingly, were opposite to those observed in the OR experiment: Participants were slower to respond when a target was presented at the top position (439 ms, averaged across all participants) than on trials with no target on top (411.5 ms). Concomitantly, they were slower to respond when a target was presented at the bottom position (431.5 ms across all participants) than on trials with only a blank at the bottom (419 ms). One participant (RS) exhibited a significant main effect for the presence versus absence of the top target [F(1, 1) = 13.85, p < .001], but not for the bottom target [F(1, 1) = 0.35, p = .55]. Another participant (MB) exhibited the opposite pattern [F(1, 1) = 0.23, p = .63, and F(1, 1) = 9.41, p < .01, for the top and bottom targets, respectively]. Eight participants (all except LB) exhibited significant interactions of Top-Target Presence × Bottom-Target Presence, at p < .05.

A word is in order concerning strategies for computing workload capacity in AND designs. Calculations of C AND(t) are complicated by the fact that, unlike in the OR design, single-target trials in the AND case require a NO response rather than a YES response. Since negative decisions are well known to be typically slower than affirmative ones for a number of reasons (see, e.g., Clark & Chase, 1972), capacity could be artificially computed to be higher than it would be if a homogeneous response (in these studies, YES) were employed on the single-target as well as the double-target trials. Although not a perfect solution to the problem, our stratagem was to transfer the single-target data (only) from the OR experiment, since these trials required, as did the double-target AND trials, a YES response. Of course, this technique assumed that the single-target RT distributions would be invariant from the OR to the AND blocks. However, the risk was lowered by the fact that the same participants engaged in both the OR and the AND experiments.

Capacity coefficients were therefore computed for each individual by combining the single-target data from the OR experiment with the double-target data from the same individual in the AND experiment. Individual capacity plots are presented in Fig. 4. Again, aided by the standard errors of estimation, C AND(t) was found to escape the zone of limited capacity for participants BJ, JS, and LB over some time intervals. Capacity was overwhelmingly limited for participants RS, MB, RM, JG, WY, and AW. Interestingly, in the case of AND paradigms, the presence of a base time may actually lead to an increased, overestimated C AND(t), as opposed to the underestimation in the OR case (Townsend & Eidels, 2011). However, we note again that base time was not expected to be a major factor in the estimation of C(t).
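For completeness, one common formulation of the AND capacity coefficient (Townsend & Wenger, 2004a) uses cumulative reverse hazard functions, K(t) = log F(t), with C_AND(t) = [K_A(t) + K_B(t)] / K_AB(t); as in the OR case, values below 1 indicate limited capacity. A toy sketch of the combination strategy described above, with made-up RT samples rather than our data:

```python
import math

def cum_rev_hazard(rts, t):
    """K(t) = log F(t), estimated from RT samples; requires F(t) > 0."""
    completed = sum(rt <= t for rt in rts)
    return math.log(completed / len(rts))

def c_and(rt_double, rt_top, rt_bottom, t):
    """C_AND(t) = (K_A(t) + K_B(t)) / K_AB(t); 1 marks unlimited capacity."""
    return ((cum_rev_hazard(rt_top, t) + cum_rev_hazard(rt_bottom, t))
            / cum_rev_hazard(rt_double, t))

# Made-up samples: single-target RTs as if taken from the OR blocks,
# double-target RTs as if from the AND blocks.
top    = [320, 340, 360, 380]
bottom = [330, 350, 370, 390]
double = [350, 390, 430, 470]
print(c_and(double, top, bottom, 375) < 1)   # True: limited capacity here
```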
Fig. 4

Capacity coefficient values for individual observers in Experiment 2 (AND task). The thin dashed lines represent ± 1 standard error of the estimate of the capacity coefficient function (estimated by bootstrapping)

Next, focusing on the subset of double-target trials, a close examination of Table 4 reveals that the order of mean RTs on the HL, LH, and LL conditions varied from one participant to another. However, mean RTs on HH trials were overwhelmingly faster than RTs on any of the other luminance conditions, thereby contributing to a negative mean interaction contrast. A 2 × 2 ANOVA for top-target salience (high, low) by bottom-target salience (high, low) revealed significant main effects at p < .01 for both factors, for all participants. As in the OR experiment, responses were faster when the top target was highly salient (439 ms when averaged across all participants) than on trials in which the top target had low salience (473.5 ms). Similarly, responses were faster when the bottom target was highly salient (437.5 ms across all participants) than on trials in which the salience of the bottom target was low (475 ms). Selective influence was again supported.

Recall that the critical test for assessing the models’ architecture was the interaction of the salience manipulations of the top and bottom targets, that is, the mean interaction contrast. MIC values were negative and significant for all participants (at least at p < .01; see the two rightmost columns of Table 4), supporting parallel processing with an exhaustive stopping rule. Applying the interaction contrast at the distributional level, using the SIC functions, resulted in functions that were negative for all times t, further supporting a parallel exhaustive mode of processing (Fig. 5).
Fig. 5

Survivor interaction contrast functions for individual observers in Experiment 2 (AND task). The thin dashed lines represent ± 1 standard error of the estimate (estimated by bootstrapping). The scalings of the y-axes may vary slightly across individual plots

The results of Study I provide valuable information about the architecture and stopping rule of the system when processing simple visual stimuli. The OR experiment, based on a variation of that of Townsend and Nozawa (1995), again showed parallel processing with a minimum-time stopping rule and moderately limited capacity. The new AND experiment also supported parallel processing, but now conforming to an exhaustive stopping rule (both positions must be completed when two targets are present).

Which of the Shaw models seems most readily extendable to our RT findings? Overall, stochastic parallelism and independence were well supported by our RT data, and these properties are best accounted for by what Mulligan and Shaw (1980) called the independent-decision sharing model. Capacity, however, usually failed to reach the unlimited-capacity level, especially in the OR design. In this context, it is interesting to observe a property of the weighted-integration model: though a cousin of our coactive model (which naturally predicts supercapacity), the weighted-integration model, when given a time-stochastic interpretation, would have predicted severely limited capacity [C OR(t) ≤ .5] in OR designs, even more limited than that in the observed data.

We now report Study II, in which we employed the same response assignments as in Study I and almost the same stimuli (except that the targets were now more difficult to detect), again in both OR and AND versions. We assessed Shaw’s model predictions against the NRPC accuracy data and explored which of our RT models could handle these results.

Study II: OR and AND accuracy experiments

The same participants from Study I performed in two experimental sessions (OR, AND) on consecutive days. The apparatus was the same as in Experiments 1 and 2, but the targets’ luminance was lower, in order to make detection more difficult and thereby produce a substantial proportion of errors. Although some errors are necessary in order to compute the NO response probability contrast, Mulligan and Shaw (1980) pointed out that the different models are most readily discriminated at high accuracy levels (85 %–95 % correct). On the basis of a calibration session (method of constant stimuli, 300 trials long; 20 trials from each of 15 evenly spaced luminance levels from 0.001 to 0.015 cd/m2, intermixed in a random order), we varied the luminance of the targets for each observer such that each individual performed at 85 %–95 % accuracy. The targets’ luminance for most participants was set to 0.005 cd/m2. For two participants (LB, WY), it was set to 0.012 cd/m2; for one participant (JS), it was set to 0.002 cd/m2; and for another (AW), it was set to 0.001 cd/m2.

There were only four possible stimulus displays (double target, single target on top, single target at the bottom, and no target; no H or L manipulations), each appearing with a probability of .25. As in Study I, in the OR experiment participants were asked to respond affirmatively if they detected the presence of at least one target, and to respond NO otherwise. In the AND experiment, participants were asked to respond affirmatively if and only if they detected the presence of two targets, and to respond NO otherwise. Each experiment started with a 100-trial practice block, followed by eight blocks of 100 experimental trials each (with 2-min breaks between blocks). Participants were instructed to respond as accurately as they could. Due to the task difficulty, auditory feedback was provided after each correct (high tone) and incorrect (low tone) response.

Results and discussion

Overview

The results of Study II, for each individual observer and averaged across all observers, are presented in Table 5 (Exp. 3: OR) and Table 6 (Exp. 4: AND). For each participant, we present the probability of a NO response in each of the four factorial conditions (double target, single target on top, single target at the bottom, and no target) and the overall NRPC. The reader may find it useful to compare the results with the predictions of the different models in Table 1.

In the OR experiment, NRPC was positive for each of the individual participants, as well as at the means level. In the AND experiment, NRPC was negative for each of the individual participants, as well as at the means level. We further broke down the analysis of each individual participant to eight blocks, and computed the NRPCs separately for each block. In the OR experiment, NRPCs were positive on 69 out of 72 blocks (96 %). In the AND experiment, NRPCs were negative on 66 out of 72 blocks (92 %). Appropriate tests for statistical analyses of the NRPC and its transformations are described in Shaw’s (1982) Appendix. The test for the z-score transformation is based on the work of Gourevitch and Galanter (1967).

Experiment 3 (OR)

A further examination of the OR results in Table 5 reveals that the NRPC values were significantly positive for all participants (p < .01). These results falsify Shaw’s (1982) all-or-none mixture model. At the same time, every participant’s data are in agreement with Shaw’s independent-decision sharing model (NRPC > 0). Furthermore, all participants except two exhibited log NRPC values that did not differ significantly from zero, and hence were in line with the prediction of the sharing model (log NRPC = 0). Finally, the z-score transformations of NRPC were significantly different from zero for all of the participants, falsifying the weighted-integration model. This feature will be discussed momentarily.

Experiment 4 (AND)

The NRPC values, presented in Table 6, were significantly negative for all participants, in accordance with the prediction of the independent-decision sharing model.

We further present in Table 6 the test for the logarithmic transformation of the YES response probability contrast, log YRPC = log P[YES | (Ø, Ø)] – log P[YES | (A, Ø)] – log P[YES | (Ø, B)] + log P[YES | (A, B)]. It is trivial to show that for the AND case, the independent-sharing model predicts log YRPC = 0, comparable to the log NRPC = 0 prediction of the same model in the OR case (see also Shaw, 1982, p. 378). Only one participant exhibited log YRPC ≠ 0, whereas the other eight exhibited values that did not differ significantly from zero.

Next, the z-score transformations of the NRPC were negative for all participants, and were significantly different from zero for seven of the nine participants, arguing against the weighted-integration model. For JS and LB, however, these test statistics were not significant. Is it possible that these participants integrated evidence from the two channels (corresponding to the two spatial locations), as the model suggests? Or are we facing a statistical power issue? Interestingly enough, in the RT task of Study I, JS and LB also exhibited supercapacity [C(t) > 1; see Fig. 4] in the AND, but not the OR, experiment. One explanation is that the AND task encourages integration (at least in some individuals), by virtue of calling attention to both targets.

In general, then, both the RT (Study I) and the NRPC accuracy-based (Study II) results are in agreement with parallel and independent processing of the two signals (except, perhaps, for two participants in the AND task). Furthermore, all of the analyses support an appropriate (OR/AND) stopping rule.

General discussion

Complementary and mutually supportive results were observed for the RT study and the accuracy study. First, we consider the OR and then the AND response-time experiments (1 and 2). Then we discuss the accuracy results from Experiments 3 and 4. Finally, the criticality of using RTs as well as accuracy is accentuated by a proof that Shaw's accuracy-based models can equally well be expressed as serial or parallel models.

The RT study

In the OR experiment of the RT study, participants exhibited a positive (overadditive) mean interaction contrast (MIC), from which we tentatively inferred a parallel or coactive processing architecture. If the separate-decisions assumption (i.e., noncoactive parallel processing) holds, then a minimum-time stopping rule would be indicated.

Next, consider the SIC (survivor interaction contrast) functions. Parallel coactive architectures, in which activations from the two channels are integrated (independently, without weighting or interactions in the original channels) into a final common pool, also predict a positive MIC. Recall that coactive models predict a small negative dip in the SIC functions before they go positive (Houpt & Townsend, 2011; Townsend & Nozawa, 1995), whereas ordinary parallel race models predict continuous positivity. All of our OR SIC curves were purely positive, disconfirming coactive process models. In addition, coactive models generally predict very high supercapacity in the C_OR(t) functions. Only extreme capacity limitations, as when there is massive lateral inhibition (violating the independence assumption), can overcome this tendency (e.g., Eidels et al., 2011; Townsend & Nozawa, 1995; Townsend & Wenger, 2004b). The moderately limited capacity found throughout therefore combines with the SIC functions to render standard coactive (and therefore "integrated," in the standard sense) processing unlikely.

Also, positively interactive first-terminating parallel models tend to produce modest early negative blips and supercapacity, like coactive models, but unlike our data (see Eidels et al., 2011). On the other hand, negatively interactive first-terminating parallel models can readily predict qualitatively the same SIC functions as independent parallel systems, but with reduced workload capacity (again, see Eidels et al., 2011). Hence, mild mutual inhibition in a first-terminating system is eminently compatible with our RT results.
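To make the two statistics concrete, the sketch below estimates the MIC and SIC(t) from raw RTs. The simulated data come from an independent parallel OR (minimum-time) race with exponential channels; the cell labels, channel rates, and time grid are illustrative assumptions of ours, not the paradigm's actual parameters.

```python
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t)."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def mic(rt):
    """Mean interaction contrast over the 2x2 salience conditions,
    keyed 'LL', 'LH', 'HL', 'HH' (L = low, H = high salience)."""
    m = {k: np.mean(v) for k, v in rt.items()}
    return m['LL'] - m['LH'] - m['HL'] + m['HH']

def sic(rt, t_grid):
    """Survivor interaction contrast:
    SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)]."""
    s = {k: survivor(v, t_grid) for k, v in rt.items()}
    return (s['LL'] - s['LH']) - (s['HL'] - s['HH'])

# Independent parallel OR race: overall RT is the minimum of two
# exponential channel finishing times (assumed rates: L = 1, H = 2).
rng = np.random.default_rng(1)
n = 200_000
rate = {'L': 1.0, 'H': 2.0}
rt = {a + b: np.minimum(rng.exponential(1 / rate[a], n),
                        rng.exponential(1 / rate[b], n))
      for a in 'LH' for b in 'LH'}
t_grid = np.linspace(0.05, 2.0, 40)
print(mic(rt))                 # positive, as a parallel OR race predicts
print(sic(rt, t_grid).min())   # SIC(t) stays (near) nonnegative
```

For this model the SIC has the closed form e^(−2t)·(1 − e^(−t))², which is nonnegative everywhere, matching the "continuous positivity" the text ascribes to ordinary parallel race models.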

The AND experiment delivered data in strong agreement with parallel processing, coupled with an exhaustive decisional stopping rule. The MIC data were negative, as were the individual SIC functions across time. The workload capacity functions were mostly below 1 for small to moderate RTs, but in five of the nine cases they increased to supercapacity at larger time values. We decisively ruled out coactivation because (1) pure coactive processing causes supercapacity for all t > 0, and (2) coactivation models predict MICs greater than zero and mostly positive SIC functions with modest leading negative blips, even with an AND design, contrary to the results. Of course, a hybrid system in which processing starts limited and evolves into a coactive system cannot be ruled out. Another, perhaps more likely, possibility is an interactive, separate-decisions parallel system with negative interactions to begin with, changing to positive interactions later on. The interactions cannot have been so great as to force deviations from the classical all-negative exhaustive parallel predictions for the SICs, which can occur with extreme interactions (Eidels et al., 2011).

The accuracy study

Inferences from the response probability contrast statistics are in striking agreement with the RT conclusions. The NRPCs from the accuracy OR experiment are overwhelmingly positive, as is predicted by the Mulligan and Shaw (1980) independent-sharing model, which can be viewed as an extension of our independent parallel model with a first-terminating stopping rule.

One might hypothesize that under parathreshold conditions, the system adopts some form of integration across channels (such as coactivation or weighted integration) to improve detection. However, the results from Study II converge with those from Study I to falsify the weighted-integration model as well as the coactive model. Thus, the operative mode of processing, whether the stimuli were easy to detect or difficult to perceive, was parallel and close to independent (except for a few interesting exceptions in the AND case). Capacity (from Study I) was either mildly limited, as in the OR design, or partly limited and partly super, as in the AND case.

Having summarized and interpreted our results, we now turn to a theoretical discussion of the static nature of Shaw's models. First, as noted earlier, these models lack temporal specifications. We therefore deemed it critical to explore whether the NRPC predictions would also hold for choice-RT models that were designed to account not only for the (perceptual) decision but also for its time course. Second, the lack of temporal specification in Shaw's models means that they are silent on architecture. We show below that although they may initially invite a parallel interpretation, all four models can equally well be viewed as serial processes, and are hence subject to "model mimicry": they can be interpreted as either parallel or serial models.

Testing NRPC predictions with popular choice-RT models

We explored examples of two popular classes of choice-RT models—counting models and random-walk models. We show, either analytically (for the former) or via computer simulations (for the latter), that they both predict NRPC(OR) > 0 and NRPC(AND) < 0, much like the static models studied by Mulligan and Shaw (1980).

Counting models

For readers who would like to see a rather general counting model based on discrete counting processes, we provide an Appendix in which we demonstrate, with a simple proof, that NRPC(OR) > 0 and NRPC(AND) < 0. A simplified description of a counting model, given two sources of information (say, two possible signal positions, top and bottom), is presented in Fig. 6a. There are two parallel and independent processing channels, and each accumulates evidence in favor of the presence of a signal in its respective spatial position. To account for NO responses, the model needs to be augmented to the form presented in Fig. 6b: for each source of information (top and bottom positions), there is also a separate channel for "no information." Each processing channel accumulates counts until a prescribed number is reached, and the winner of the race determines the outcome for that position. Poisson counting processes (e.g., Smith & Van Zandt, 2002; Townsend & Ashby, 1983) are natural exemplars of this general set of models.
Fig. 6

Schematics of a two-channel parallel model (a), a four-channel parallel model (b), and a random-walk model (c). Panel a shows a simplified accumulator model, in which streams of evidence from the top and bottom positions are accumulated in separate channels. Panel b illustrates an augmented version, in which for each position there also exists a separate “no-target” channel (accumulator). A NO response occurs if the “no-target” channel finishes processing before the “target” channel for that position. The thick solid line indicates that processing of the top and bottom positions is done separately, with separate OR decision gates. Panel c illustrates a parallel arrangement of two random-walk processes for the top and bottom targets
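A minimal Monte Carlo version of the four-channel race in Fig. 6b might look as follows. This is a sketch under assumed Poisson-count rates and an assumed criterion, not the authors' simulation code; it exploits the fact that, for a Poisson process with rate λ, the time to accumulate K counts can be drawn directly as a Gamma(K, 1/λ) variable.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
K = 5                                # counts needed to trigger a decision
rate_yes = {True: 2.0, False: 1.0}   # "target" channel: faster with a target shown
rate_no = 1.4                        # "no-target" channel rate (assumed constant)

def p_no(top, bottom, task):
    """P(NO response) for stimulus (top, bottom) under an OR or AND rule.
    Each position runs a race between a 'target' and a 'no-target' counter."""
    says_target = []
    for present in (top, bottom):
        t_yes = rng.gamma(K, 1 / rate_yes[present], n)  # time to K target counts
        t_no = rng.gamma(K, 1 / rate_no, n)             # time to K no-target counts
        says_target.append(t_yes < t_no)
    if task == 'OR':   # NO only if neither position reports a target
        return float(np.mean(~(says_target[0] | says_target[1])))
    else:              # AND: NO if either position fails to report a target
        return float(np.mean(~(says_target[0] & says_target[1])))

def nrpc(task):
    return (p_no(False, False, task) - p_no(True, False, task)
            - p_no(False, True, task) + p_no(True, True, task))

print(nrpc('OR'))    # > 0, consistent with the Appendix proof
print(nrpc('AND'))   # < 0
```

Because the two positions are independent, the OR contrast reduces to (m0 − m1)² and the AND contrast to −(y0 − y1)², where m and y are the per-position miss and hit probabilities, so the signs are guaranteed, not accidents of the assumed rates.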

Random-walk models

Ratcliff and Smith (2004), in a thorough review, divided sequential-sampling models into two major classes: those with an absolute criterion (i.e., the amount of evidence in favor of a particular response must reach a prescribed criterion value) and those with a relative criterion (i.e., the evidence for one response alternative must exceed that for the other by some criterion amount). The previously presented counting models form one type of absolute-criterion model. Random-walk models (e.g., Laming, 1968; Link & Heath, 1975), as well as diffusion models (à la Ratcliff, 1978), are members of the relative-criterion class. Processing in random-walk models terminates once the amount of accrued evidence reaches a specified bound.

In the random-walk framework we adopted for our purposes, the decision as to whether a target signal appears at the top position and/or the bottom position is the outcome of separate random-walk processes. A schematic depiction of such a model is presented in Fig. 6c. The overall architecture is somewhat like that of the counting model (Fig. 6b), except that the race architectures for the top and bottom positions are each replaced by a random-walk process. In each of the two random-walk processes (top, bottom), evidence is accumulated, in discrete time units, by making a step toward one bound ("YES, target present," or simply "target") with probability p, or toward the other bound ("no target") with probability 1 – p (see Fifić, Nosofsky, & Townsend, 2008, for a more detailed description of random-walk simulations). A decision for each process is made once it reaches one of the bounds, and a response in the overall system can be made after combining the outcomes in a logical AND/OR gate (depending on the stopping rule and the nature of the task).

Figure 7a (OR case) and b (AND case) show the NRPC results from Monte Carlo simulations of a random-walk model with two separate and parallel random-walk processes—one for the top and one for the bottom position. The NRPC is overwhelmingly positive for the OR case, and negative for the AND case, for performance that is better than chance (i.e., probability correct > .75). Thus, the simulations of the random-walk model reinforce our analytic results and extend them to commonly used classes of choice-RT models.
Fig. 7

Simulation results of a random-walk parallel model (i.e., two simultaneous random-walk processes—one for the top position, the other for the bottom position). The NO response probability contrast, NRPC, is plotted as a function of probability correct for the OR (a) and AND (b) cases
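A sketch of such a simulation follows; the step probabilities and bound are illustrative assumptions of ours. For speed, each walk's outcome is drawn from the closed-form gambler's-ruin absorption probability rather than being stepped explicitly, which gives the same terminal decisions as a step-by-step walk.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
B = 8                              # evidence bound (in steps) for each walk
p_step = {True: 0.6, False: 0.4}   # P(step toward "target") given target presence

def p_hit_target(p):
    """Gambler's-ruin probability that a walk from 0 hits +B before -B:
    1 / (1 + ((1 - p) / p) ** B)."""
    r = (1 - p) / p
    return 1 / (1 + r ** B)

def p_no(top, bottom, task):
    """P(NO) for stimulus (top, bottom): each position is an independent
    random walk, with outcomes combined by an OR or AND gate."""
    says = [rng.random(n) < p_hit_target(p_step[s]) for s in (top, bottom)]
    hit = says[0] | says[1] if task == 'OR' else says[0] & says[1]
    return float(np.mean(~hit))

def nrpc(task):
    return (p_no(False, False, task) - p_no(True, False, task)
            - p_no(False, True, task) + p_no(True, True, task))

print(nrpc('OR'))   # > 0, the pattern shown in Fig. 7a
print(nrpc('AND'))  # < 0, the pattern shown in Fig. 7b
```

Varying p_step toward .5 pushes accuracy toward chance, which is where Fig. 7 shows the contrasts shrinking toward zero.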

We intimated in the previous section that in the absence of temporal specification in Shaw’s models, some models can mimic others. The next section is brief, but the messages are important and surprising.

Predictions from Shaw and colleagues’ process models: model mimicking

Although Shaw and colleagues placed no restrictions on the architecture underlying their models, the models may initially seem to invite a parallel interpretation. We will see that all four can equally well be viewed as serial processes.

PROPOSITION: The Mulligan and Shaw (1980) models, due to their absence of temporal specification, can all be interpreted as either parallel or serial models.

Proof:

  A. The independent-decision sharing model. The formula for the likelihood of a NO response (conditional, of course, on the stimulus compound) is provided at the top of Table 2. (1) The parallel interpretation is naturally that each of two channels measures activation, and a NO occurs if and only if neither channel equals or exceeds its criterion, in a stochastically independent fashion. (2) The serial interpretation is that, independent of the order of processing (see, e.g., Townsend & Ashby, 1983), information is acquired on each position (item, etc.), and again, a NO occurs if and only if neither channel equals or exceeds its criterion, in a stochastically independent fashion.

  B. The independent-decision all-or-none mixture model (Table 2, second model from the top). (1) A possible parallel interpretation is that with probability α, the A channel is processed but the B channel is ignored, whereas with probability (1 – α), the B channel is processed while the A channel is disregarded. (2) The serial account states that with probability α, channel A is operated on first and the decision is made on the evidence from that channel, whereas with probability (1 – α), channel B is processed and the decision is determined by the result on that channel. Mathematically, Accounts 1 and 2 are identical. In both the serial and the parallel accounts, performance is expected to be markedly suboptimal, since the evidence from one channel is always ignored (see also note 3).

  C. The independent-decision not-all-or-none mixture model (Table 2, third model). Unlike Model B, this model demands exhaustive processing on each trial, whether serial or parallel. With parallel or serial processing, this equation is most naturally viewed as a compound model (Townsend & Ashby, 1983, Chap. 5), in which different systems, possibly with distinct parameters or even architectures, are applied from trial to trial. (1) The parallel interpretation is that with probability α, independent simultaneous processing yields both channels failing to reach their criteria. With probability (1 – α), the same type of system, but now with reversed criteria, is responsible for the failure of each channel to reach its respective criterion. (2) In the serial account, with probability α the A position is processed first and the B position second, with criteria β_A and β_B′, respectively; with probability (1 – α), the B position is processed first and the A position second, with the reversed criteria β_B and β_A′.

  D. The weighted-integration model. Here, rather than taking a combination of the probabilities or decisions on each position, based on X_A and X_B, a weighted combination of the actual information or activation random variables on each position forms a new random variable, wX_A + (1 – w)X_B, which is then compared with a single criterion. Hence, P(NO) = P([wX_A + (1 – w)X_B] < β). (1) The parallel rendition is that the actual outputs of each channel are weighted and then added, and the decision is made. (2) The serial interpretation says that each position is observed one at a time, and then the outputs are combined in this weighted fashion before the result is compared with the criterion β.
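The mimicry in Model B can even be checked numerically, because the parallel reading (attend one channel, ignore the other) and the serial reading (process one channel first and decide on it) reduce to the same mixture computation. Below is a toy sketch with assumed Gaussian activations, criterion β = 0, and α = .5, none of which come from the original models' parameterization.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(11)
n = 400_000
alpha = 0.5                      # P(the A channel is the one used on a trial)
beta = 0.0                       # shared decision criterion
mu = {True: 1.0, False: -1.0}    # activation mean given target present/absent

def p_no_mixture(a_target, b_target):
    """All-or-none mixture: only one channel's evidence is used per trial.
    The serial 'process A first, decide' and parallel 'attend A, ignore B'
    readings prescribe the very same computation, so one function covers both."""
    use_a = rng.random(n) < alpha
    x_a = rng.normal(mu[a_target], 1.0, n)
    x_b = rng.normal(mu[b_target], 1.0, n)
    evidence = np.where(use_a, x_a, x_b)
    return float(np.mean(evidence < beta))

# Analytic check: P(NO) = alpha*Phi(beta - mu_A) + (1 - alpha)*Phi(beta - mu_B)
phi = NormalDist().cdf
pred = alpha * phi(beta - mu[True]) + (1 - alpha) * phi(beta - mu[False])
print(p_no_mixture(True, False), pred)   # simulated vs. analytic, top target only
```

The single simulation function embodies both architectural readings, which is the Proposition's point: without a temporal observable, nothing distinguishes them.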

Conclusions

Overall, the present set of binocular experiments conjoins the accuracy strategy put forth by Shaw and colleagues (Mulligan & Shaw, 1980; Shaw, 1982) with our RT methodologies to bolster support for parallel processing, combined with appropriate decision rules, in both AND and OR settings. Although our experiments were quite disparate from those of Mulligan and Shaw, including different retinal locations than theirs (as they had called for), our OR accuracy findings were the same as theirs. Serial-processing models were decisively falsified by our RT methodology. The overall results also disconfirm standard integration or coactivation theories, but the supercapacity verdict for some participants in the AND response-time experiment calls for further investigation. To the best of our knowledge, no parameterized process model captures all of the present findings, so future efforts may focus on constructing such a system. Bundesen's theory (e.g., Bundesen, 1990, 1993; Bundesen & Habekost, 2008) can handle those results indicating independent, limited-capacity parallel processing. However, it fails to predict the shapes of the RT distributions that have been found empirically (Ratcliff & Smith, 2004).

On the theoretical side, we proved (in the model-mimicking Proposition above) that Mulligan and Shaw's (1980) models, owing to their lack of temporal specification, can be interpreted as either parallel or serial models. This mimicry problem for accuracy-based measures and models highlights the value of converging evidence from both RTs and accuracy. In conclusion, the conjoining of RTs with accuracy appears to augur a promising future for identifying architecture, independence, decisional stopping rules, and workload capacity.

Footnotes

  1. Models of speeded decisions, such as the diffusion model (Ratcliff, 1978) or the linear ballistic accumulator (Brown & Heathcote, 2008), successfully account for both choice latency and accuracy. However, they make very specific assumptions about the forms of the distributions and their parameters, which we are able to avoid here.

  2. An inherent duality between YES and NO responses logically interacts with the stopping rule. With an OR design, to correctly respond NO on no-target trials, participants must confirm that a signal is missing from both channels A and B; hence, there is virtually an AND stopping rule with regard to the NO decision. Conversely, in an AND design (i.e., to respond YES, the participant must make sure that there is a signal in each channel), a NO decision becomes, in effect, an OR trial, since if either channel delivers a "no-signal" decision, the overall decision can be NO, and processing can cease with the first NO decision to occur. Importantly, these response strategies are testable with our tools.

  3. Thus, a coactive system cannot, in a logical sense, perform certain versions of AND tasks, because it cannot make separate decisions on the different channels. Nonetheless, a coactive model could, in principle, simply increase its decision criterion so as to attempt to minimize mistakes when only one or no signal is present.

  4. A coactive model naturally predicts that responses would be faster on double-target displays than on single-target displays (the redundant-target effect). Raab (1962) noted that a stochastic independent-channels race model can also produce this effect via statistical considerations alone. Miller's inequality helps to decide between the two models.

  5. Consider the OR task, in which detection of at least one target signal is sufficient to elicit a YES response. If, on any given trial, attention in an all-or-none mixture model is fully allocated to one channel but not to the other, then at any time t there is just one process going on, much as in a serial model. On double-target trials, the system halts after processing the attended signal, no matter which channel attention is allocated to (both contain target signals). This case is identical to serial processing with a first-terminating stopping rule. Notice that both models predict very high error rates on single-target displays (50 %, if the probabilities of processing each channel are equal): A serial first-terminating model that processes the target-absent channel first will stop before processing the second, target-present channel, leading to an erroneous NO response. Similarly, an all-or-none mixture model that allocates attention only to the target-absent channel will overlook the target presented to the other channel, again leading to an erroneous NO response.

  6. This means that the overall probability of a YES response was .75 in the OR task, or .25 in the AND task, and a response bias might ensue. However, systems factorial technology is insensitive to such biases. First, it does not attempt to fit criterion parameters. More importantly, the different statistics [MIC, SIC(t), C(t)] are all based on trials with the same response (YES), so response bias makes no difference. The only exception might have been C_AND(t), but as we explain in the text, it is computed with single-target data from the OR case (YES data).

Acknowledgments

We thank Phillip Smith and two anonymous reviewers for helpful comments on an earlier version of this manuscript. The study was supported by an Australian Research Council Discovery grant, ARC-DP 120102907, to A. E. Please address correspondence to Ami.Eidels@newcastle.edu.au.

References

  1. Ashby, F. G., & Townsend, J. T. (1980). Decomposing the reaction time distribution: Pure insertion and selective influence revisited. Journal of Mathematical Psychology, 21, 93–123. doi: 10.1016/0022-2496(80)90001-2
  2. Berryhill, M., Kveraga, K., Webb, L., & Hughes, H. C. (2007). Multimodal access to verbal name codes. Perception & Psychophysics, 69, 628–640. doi: 10.3758/BF03193920
  3. Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice reaction time: Linear ballistic accumulation. Cognitive Psychology, 57, 153–178. doi: 10.1016/j.cogpsych.2007.12.002
  4. Bundesen, C. (1990). A theory of visual attention. Psychological Review, 97, 523–547. doi: 10.1037/0033-295X.97.4.523
  5. Bundesen, C. (1993). The relationship between independent race models and Luce’s choice axiom. Journal of Mathematical Psychology, 37, 446–471.
  6. Bundesen, C., & Habekost, T. (2008). Principles of visual attention: Linking mind and brain. Oxford, UK: Oxford University Press.
  7. Clark, H. H., & Chase, W. G. (1972). On the process of comparing sentences against pictures. Cognitive Psychology, 3, 472–517.
  8. Colonius, H., & Townsend, J. T. (1997). Activation-state representation of models for the redundant-signals-effect. In A. A. J. Marley (Ed.), Choice, decision, and measurement: Essays in honor of R. Duncan Luce (pp. 245–254). Mahwah, NJ: Erlbaum.
  9. Diederich, A., & Colonius, H. (1991). A further test of the superposition model for the redundant-signals effect in bimodal detection. Perception & Psychophysics, 50, 83–86.
  10. Dzhafarov, E. N. (2003). Selective influence through conditional independence. Psychometrika, 68, 7–26.
  11. Eidels, A., Houpt, J. W., Altieri, N., Pei, L., & Townsend, J. T. (2011). Nice guys finish fast and bad guys finish last: Facilitatory vs. inhibitory interaction in parallel systems. Journal of Mathematical Psychology, 55, 176–190.
  12. Fifić, M., Nosofsky, R. M., & Townsend, J. T. (2008). Information-processing architectures in multidimensional classification: A validation test for systems factorial technology. Journal of Experimental Psychology: Human Perception and Performance, 34, 356–375. doi: 10.1037/0096-1523.34.2.356
  13. Forster, K. I., & Forster, J. C. (2003). DMDX: A Windows display program with millisecond accuracy. Behavior Research Methods, Instruments, & Computers, 35, 116–124. doi: 10.3758/BF03195503
  14. Gourevitch, V., & Galanter, E. (1967). A significance test for one parameter isosensitivity functions. Psychometrika, 32, 25–33.
  15. Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York, NY: Wiley.
  16. Houpt, J. W., & Townsend, J. T. (2011). An extension of SIC predictions to the Wiener coactive model. Journal of Mathematical Psychology, 55, 267–270.
  17. Kujala, J. V., & Dzhafarov, E. N. (2008). Testing for selectivity in the dependence of random variables on external factors. Journal of Mathematical Psychology, 52, 128–144.
  18. Laming, D. R. J. (1968). Information theory of choice reaction time. New York, NY: Wiley.
  19. Link, S. W., & Heath, R. A. (1975). A sequential theory of psychological discrimination. Psychometrika, 40, 77–105.
  20. Miller, J. (1978). Multidimensional same-different judgments: Evidence against independent comparisons of dimensions. Journal of Experimental Psychology: Human Perception and Performance, 4, 411–422.
  21. Miller, J. (1982). Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology, 14, 247–279. doi: 10.1016/0010-0285(82)90010-X
  22. Mulligan, R. M., & Shaw, M. L. (1980). Multimodal signal detection: Independent decisions vs. integration. Perception & Psychophysics, 28, 471–478.
  23. Raab, D. (1962). Statistical facilitation of simple reaction time. Transactions of the New York Academy of Sciences, 43, 574–590.
  24. Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85, 59–108. doi: 10.1037/0033-295X.85.2.59
  25. Ratcliff, R., & Smith, P. L. (2004). A comparison of sequential sampling models for two-choice reaction time. Psychological Review, 111, 333–367. doi: 10.1037/0033-295X.111.2.333
  26. Schwarz, W. (1994). Diffusion, superposition, and the redundant targets effect. Journal of Mathematical Psychology, 38, 504–520. doi: 10.1006/jmps.1994.1036
  27. Schweickert, R. (1978). A critical path generalization of the additive factor method analysis of the Stroop task. Journal of Mathematical Psychology, 18, 105–139.
  28. Schweickert, R., & Townsend, J. T. (1989). A trichotomy method: Interactions of factors prolonging sequential and concurrent mental processes in stochastic PERT networks. Journal of Mathematical Psychology, 33, 328–347.
  29. Shaw, M. L. (1982). Attending to multiple sources of information: I. The integration of information in decision making. Cognitive Psychology, 14, 353–409.
  30. Silverman, B. W. (1986). Density estimation for statistics and data analysis. London, UK: Chapman & Hall.
  31. Smith, P. L., & Van Zandt, T. (2002). Time-dependent Poisson counter models of response latency in simple judgment. British Journal of Mathematical and Statistical Psychology, 53, 293–315.
  32. Sternberg, S. (1969). Memory scanning: Mental processes revealed by reaction-time experiments. American Scientist, 57, 421–457.
  33. Townsend, J. T. (1974). Issues and models concerning the processing of a finite number of inputs. In B. H. Kantowitz (Ed.), Human information processing: Tutorials in performance and cognition (pp. 133–168). Hillsdale, NJ: Erlbaum.
  34. Townsend, J. T. (1990). Truth and consequences of ordinal differences in statistical distributions: Toward a theory of hierarchical inference. Psychological Bulletin, 108, 551–567. doi: 10.1037/0033-2909.108.3.551
  35. Townsend, J. T. (1992). On the proper scale for reaction time. In H. Geissler, S. Link, & J. T. Townsend (Eds.), Cognition, information processing and psychophysics: Basic issues (pp. 105–120). Hillsdale, NJ: Erlbaum.
  36. Townsend, J. T., & Ashby, F. G. (1983). The stochastic modeling of elementary psychological processes. Cambridge, UK: Cambridge University Press.
  37. Townsend, J. T., & Eidels, A. (2011). Workload capacity spaces: A unified methodology for response time measures of efficiency as workload is varied. Psychonomic Bulletin & Review, 18, 659–681.
  38. Townsend, J. T., & Honey, C. J. (2007). Consequences of base time for redundant signals experiments. Journal of Mathematical Psychology, 51, 242–265.
  39. Townsend, J. T., & Nozawa, G. (1988). Strong evidence for parallel processing with simple dot stimuli. Paper presented at the Twenty-Ninth Annual Meeting of the Psychonomic Society, Chicago, IL.
  40. Townsend, J. T., & Nozawa, G. (1995). Spatio-temporal properties of elementary perception: An investigation of parallel, serial, and coactive theories. Journal of Mathematical Psychology, 39, 321–359. doi: 10.1006/jmps.1995.1033
  41. Townsend, J. T., & Schweickert, R. (1989). Toward the trichotomy method of reaction times: Laying the foundation of stochastic mental networks. Journal of Mathematical Psychology, 33, 309–327. doi: 10.1016/0022-2496(89)90012-6
  42. Townsend, J. T., & Wenger, M. J. (2004a). The serial–parallel dilemma: A case study in a linkage of theory and method. Psychonomic Bulletin & Review, 11, 391–418. doi: 10.3758/BF03196588
  43. Townsend, J. T., & Wenger, M. J. (2004b). A theory of interactive parallel processing: New capacity measures and predictions for a response time inequality series. Psychological Review, 111, 1003–1035. doi: 10.1037/0033-295X.111.4.1003
  44. Van Zandt, T. (2002). Analysis of response time distributions. In J. T. Wixted (Vol. Ed.) & H. Pashler (Series Ed.), Stevens’ handbook of experimental psychology (3rd ed.): Vol. 4. Methodology in experimental psychology (pp. 461–516). New York, NY: Wiley.

Copyright information

© The Psychonomic Society, Inc. 2014

Authors and Affiliations

  • Ami Eidels (1)
  • James T. Townsend (2)
  • Howard C. Hughes (3)
  • Lacey A. Perry (2)

  1. School of Psychology, University of Newcastle, Callaghan, Australia
  2. Indiana University, Bloomington, USA
  3. Dartmouth College, Hanover, USA
