Abstract
Systems factorial technology (SFT) comprises a set of powerful nonparametric models and measures, together with a theory-driven experimental methodology termed the double factorial paradigm (DFP), for assessing the cognitive information-processing mechanisms that support the processing of multiple sources of information in a given task (Townsend and Nozawa, Journal of Mathematical Psychology 39:321–360, 1995). We provide an overview of the model-based measures of SFT, together with a tutorial on designing a DFP experiment so as to take advantage of all the SFT measures in a single experiment. Illustrative examples highlight the breadth of applicability of these techniques across psychology. We further introduce and demonstrate sft, a new package for performing SFT analyses using R for statistical computing.
Notes
- 1.
In other areas of cognitive modeling, architecture is used to refer to fixed properties of the cognitive system. In some cases this may include the temporal organization of information processing, but the two concepts are distinct. Architecture in the sense of this article may vary with a participant’s strategy, especially in high-level cognitive tasks in which a participant has a fair amount of control over strategy (Fifić, Nosofsky, & Townsend, 2008; Yang, 2011; Yang, Chang, & Wu, 2012; Yang, Hsu, Huang, & Yeh, 2011b). Architecture in the other sense could refer to properties that we classify under other monikers, such as workload constraints on information-processing efficiency.
- 2.
In many models, the amount of information required to stop processing a given source (i.e., the threshold in an information accumulator model; cf. Brown & Heathcote, 2008; Link & Heath, 1975; Ratcliff & Smith, 2004) can vary. This also falls under the general category of stopping rule. However, the SFT approach does not include the more detailed analyses involved in identifying changes in the amount of information needed for each subprocess to finish.
- 3.
We do not wish to claim that participants frequently, or even ever, attend more to the task when there are multiple sources of information; we only wish to indicate that it is not an entirely unreasonable possibility.
- 4.
Each of the tools in isolation is relatively weak with respect to analyzing stochastic dependence, but they are powerful when used together (Eidels et al., 2011).
- 5.
The additional adjective “effective” simply means that the salience manipulation has an effect: Channel processing times should be faster when the input is of high salience. Here, we mean a particularly strong sense of faster: The response times for the fast condition should stochastically dominate the response times for the slow condition, so that S_H(t) ≤ S_L(t) for all t, with S_H(t) < S_L(t) for at least some t. (A base-R sketch of checking this ordering with empirical survivor functions appears after these notes.)
- 6.
We will cover the sft functions together with the relevant theory and definitions without detail regarding data formats; we address the formatting of data for use in sft in a later section.
- 7.
The calculation of the SIC is based on the R function ecdf. Both the ecdf function and the stepfun class are included in the stats package distributed with R (R Development Core Team, 2011). For details on these or any other function or class in R, we suggest the use of the help function. An illustrative re-implementation of the SIC estimate using ecdf appears after these notes.
- 8.
Some researchers have attempted to apply bootstrapping for hypothesis testing with the SIC, but there are problems with that approach. One can estimate pointwise confidence intervals and then check whether the confidence interval at each point includes zero. If one were to conclude on that basis that the function is significantly nonzero, the Type I error rate would be much higher than the nominal level without appropriate correction. With a large number of estimated points on the SIC, a correction that assumes the tests are independent (e.g., Bonferroni) would make it nearly impossible to find a significant value of the SIC. Determining the appropriate correction from the true dependencies among the points is possible, but it is more straightforward to simply treat the SIC as a function for hypothesis testing. Bootstrap tests of hypotheses about the function are possible, but asymptotic tests (such as the Houpt and Townsend, 2010b, test) are usually, if not always, more powerful. Given these issues, we have decided not to include bootstrap tests for the SIC and C(t) measures in either the package or this article.
- 9.
For details on the hazard function and its use in cognitive psychology, see Chechile (2003).
- 10.
We have not accounted for the additional time taken by nonperceptual, non-decision-related processes, such as motor movements, in this derivation. This additional time would complicate the derivation, but it has only a limited effect on the capacity coefficient predictions when the variance of the additional time contributes relatively little to the variance of the response time, which is reasonable for human data (Townsend & Honey, 2007). In particular, the extent to which the additional time changes capacity estimates scales with the variance of the base time, leading to underestimates of OR capacity (Townsend & Honey, 2007) and overestimates of AND capacity (Townsend & Eidels, 2011). (An illustrative computation of the OR capacity coefficient from estimated cumulative hazard functions appears after these notes.)
- 11.
For details on the reverse hazard function and its use in cognitive psychology, see Chechile (2011).
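As referenced in note 5, the following is a minimal base-R sketch of how the survivor-function ordering that defines an “effective” salience manipulation might be checked descriptively. The response-time vectors and object names are hypothetical, and this is not a function of the sft package.

```r
# Hypothetical correct RTs (ms) from high- and low-salience single-target trials;
# these vectors are illustrative only and are not part of the sft package.
rt.high <- c(352, 371, 390, 404, 418, 436, 455, 472)
rt.low  <- c(381, 409, 431, 450, 467, 492, 514, 538)

# Empirical survivor functions S(t) = 1 - F(t), with F estimated by ecdf (stats package)
S.high <- function(t) 1 - ecdf(rt.high)(t)
S.low  <- function(t) 1 - ecdf(rt.low)(t)

# Evaluate both estimates on a common grid spanning the observed RTs
tvec <- sort(unique(c(rt.high, rt.low)))

# Note 5's ordering: S_H(t) <= S_L(t) everywhere, with strict inequality somewhere
all(S.high(tvec) <= S.low(tvec)) & any(S.high(tvec) < S.low(tvec))
```

In practice, one would also want a statistical test of this ordering, rather than only a descriptive check of the empirical estimates.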
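Note 7 points out that the sft package bases its SIC calculation on ecdf. As an illustrative re-implementation (not the package’s own code), the sketch below estimates SIC(t) = [S_LL(t) − S_LH(t)] − [S_HL(t) − S_HH(t)] (Townsend & Nozawa, 1995) from four hypothetical vectors of correct double-target response times; the subscripts give the salience level (L or H) of the two channels.

```r
# Illustrative re-implementation of the SIC estimate using ecdf (stats package);
# rt.ll, rt.lh, rt.hl, rt.hh are hypothetical correct RTs from the four
# double-target salience conditions (first letter: channel 1; second: channel 2).
estimate.sic <- function(rt.ll, rt.lh, rt.hl, rt.hh) {
  # Empirical survivor function for one condition
  surv <- function(rt) { Fn <- ecdf(rt); function(t) 1 - Fn(t) }
  S.ll <- surv(rt.ll); S.lh <- surv(rt.lh)
  S.hl <- surv(rt.hl); S.hh <- surv(rt.hh)
  # SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)]  (Townsend & Nozawa, 1995)
  function(t) (S.ll(t) - S.lh(t)) - (S.hl(t) - S.hh(t))
}

# Hypothetical data and a look at the estimated SIC on a grid of times (ms)
set.seed(1)
rt.hh <- rnorm(100, 400, 50); rt.hl <- rnorm(100, 450, 50)
rt.lh <- rnorm(100, 450, 50); rt.ll <- rnorm(100, 500, 50)
sic.fn <- estimate.sic(rt.ll, rt.lh, rt.hl, rt.hh)
plot(300:700, sic.fn(300:700), type = "l", xlab = "Time (ms)", ylab = "SIC(t)")
```

The sft package adds the null-hypothesis tests of Houpt and Townsend (2010b) on top of such an estimate; the sketch stops at the descriptive function.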
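Notes 9 and 10 refer to the (cumulative) hazard function and to the capacity coefficient. The sketch below is a minimal illustration, not the sft package’s estimator: it forms a Nelson–Aalen-type estimate of each cumulative hazard function (cf. Aalen, Borgan, & Gjessing, 2008) and combines them into the OR capacity coefficient C_OR(t) = H_AB(t) / [H_A(t) + H_B(t)] of Townsend and Nozawa (1995). The data, object names, and the no-ties simplification are ours; see Houpt and Townsend (2012) for the statistical measures developed for capacity analysis.

```r
# Simple Nelson-Aalen-type estimate of the cumulative hazard function for
# uncensored RTs; for simplicity, this sketch assumes no tied response times.
cumhaz <- function(rt) {
  times <- sort(rt)
  n <- length(times)
  # At the i-th ordered RT one response occurs and (n - i + 1) remain "at risk"
  H <- cumsum(1 / (n - seq_along(times) + 1))
  stepfun(times, c(0, H))  # right-continuous step function; H(t) = 0 before the first RT
}

# OR capacity coefficient C_OR(t) = H_AB(t) / [H_A(t) + H_B(t)] (Townsend & Nozawa, 1995);
# rt.ab, rt.a, rt.b are hypothetical correct RTs from the redundant-target and the
# two single-target conditions. This is an illustration, not the sft estimator.
capacity.or.sketch <- function(rt.ab, rt.a, rt.b) {
  H.ab <- cumhaz(rt.ab); H.a <- cumhaz(rt.a); H.b <- cumhaz(rt.b)
  function(t) H.ab(t) / (H.a(t) + H.b(t))
}

# Hypothetical data and a quick look at the estimate; C(t) is undefined (NaN)
# before any single-target response has occurred.
set.seed(2)
rt.a  <- rnorm(200, 520, 60)
rt.b  <- rnorm(200, 540, 60)
rt.ab <- rnorm(200, 470, 60)
C.or <- capacity.or.sketch(rt.ab, rt.a, rt.b)
plot(400:700, C.or(400:700), type = "l", xlab = "Time (ms)", ylab = "C_OR(t)")
abline(h = 1, lty = 2)  # benchmark for unlimited-capacity, independent, parallel processing
```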
References
Aalen, O. O., Borgan, Ø., & Gjessing, H. K. (2008). Survival and event history analysis: A process point of view. New York: Springer.
Altieri, N., & Townsend, J. T. (2011). An assessment of behavioral dynamic information processing measures in audiovisual speech perception. Frontiers in Psychology, 2, 1–15.
Blaha, L. M. (2010). A dynamic Hebbian-style model of configural learning (Unpublished doctoral dissertation). Indiana University, Bloomington, IN.
Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57, 153–178.
Burns, D. M., Houpt, J. W., & Townsend, J. T. (2013). Functional principal components analysis of workload capacity functions. Behavior Research Methods, 1-10. doi:10.3758/s13428-013-0333-2
Burns, D. M., Pei, L., Houpt, J. W., & Townsend, J. T. (2009). Facial perception as a configural process. Poster presented at: Annual Meeting of the Cognitive Science Society.
Chechile, R. A. (2003). Mathematical tools for hazard function analysis. Journal of Mathematical Psychology, 47, 478–494.
Chechile, R. A. (2011). Properties of reverse hazard functions. Journal of Mathematical Psychology, 55, 203–222.
Donders, F. C. (1969). On the speed of mental processes. In W. G. Koster (Ed. & Trans.), Acta psychologica (pp. 412–431). Amsterdam: North-Holland Publishing Company.
Donnelly, N., Cornes, K., & Menneer, T. (2012). An examination of the processing capacity of features in the Thatcher illusion. Attention, Perception, & Psychophysics, 74, 1475–1487.
Dzhafarov, E. N. (2003). Selective influence through conditional independence. Psychometrika, 68, 7–26.
Dzhafarov, E. N., & Gluhovsky, I. (2006). Notes on selective influence, probabilistic causality, and probabilistic dimensionality. Journal of Mathematical Psychology, 50, 390–401.
Dzhafarov, E. N., Schweickert, R., & Sung, K. (2004). Mental architectures with selectively influenced but stochastically interdependent components. Journal of Mathematical Psychology, 48, 51–64.
Eidels, A., Donkin, C., Brown, S. D., & Heathcote, A. (2010). Converging measures of workload capacity. Psychonomic Bulletin & Review, 17, 763–771.
Eidels, A., Houpt, J. W., Pei, L., Altieri, N., & Townsend, J. T. (2011). Nice guys finish fast, bad guys finish last: Facilitatory vs. inhibitory interaction in parallel systems. Journal of Mathematical Psychology, 55, 176–190.
Eidels, A., & Townsend, J. T. (2009). Testing response time predictions of a large class of parallel models within OR and AND redundant signals paradigms. Presented at the 2009 Annual Meeting of the Society for Mathematical Psychology, Amsterdam, Netherlands.
Fifić, M., Little, D. R., & Nosofsky, R. M. (2010). Logical-rule models of classification response times: A synthesis of mental-architecture, random-walk, and decision-bound approaches. Psychological Review, 117, 309–348.
Fifić, M., Nosofsky, R. M., & Townsend, J. T. (2008). Information-processing architectures in multidimensional classification: A validation test of the systems factorial technology. Journal of Experimental Psychology: Human Perception and Performance, 34, 356–375.
Fifić, M., & Townsend, J. T. (2010). Information-processing alternatives to holistic perception: Identifying the mechanisms of secondary-level holism within a categorization paradigm. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 1290–1313.
Fifić, M., Townsend, J. T., & Eidels, A. (2008). Studying visual search using systems factorial methodology with target-distractor similarity as the factor. Perception & Psychophysics, 70, 583–603.
Garner, W. R. (1974). The processing of information and structure. New York: Wiley.
Garner, W. R., & Felfoldy, G. L. (1970). Integrality of stimulus dimensions in various types of information processing. Cognitive Psychology, 1, 225–241.
Houpt, J. W., & Townsend, J. T. (2010a). A new perspective on visual word processing efficiency. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd annual conference of the cognitive science society (pp. s1148–s1153). Austin, TX: Cognitive Science Society.
Houpt, J. W., & Townsend, J. T. (2010b). The statistical properties of the survivor interaction contrast. Journal of Mathematical Psychology, 54, 446–453.
Houpt, J. W., & Townsend, J. T. (2011). An extension of SIC predictions to the Wiener coactive model. Journal of Mathematical Psychology, 55, 267–270.
Houpt, J. W., & Townsend, J. T. (2012). Statistical measures for workload capacity analysis. Journal of Mathematical Psychology, 56, 341–355.
Ingvalson, E. M., & Wenger, M. J. (2005). A strong test of the dual-mode hypothesis. Perception & Psychophysics, 67, 14–35.
Johnson, S. A., Blaha, L. M., Houpt, J. W., & Townsend, J. T. (2010). Systems factorial technology provides new insights on global–local information processing in autism spectrum disorders. Journal of Mathematical Psychology, 54, 53–72.
Link, S. W., & Heath, R. A. (1975). A sequential theory of psychological discrimination. Psychometrika, 40, 77–105.
Miller, J. (1982). Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology, 14, 247–279.
Mordkoff, J. T., & Yantis, S. (1991). An interactive race model of divided attention. Journal of Experimental Psychology. Human Perception and Performance, 17, 520–538.
Neufeld, R. W., Townsend, J. T., & Jetté, J. (2007). Quantitative response time technology for measuring cognitive-processing capacity in clinical studies. In R. W. Neufeld (Ed.), Advances in clinical cognitive science: Formal modeling and assessment of processes and symptoms (pp. 207–238). Washington, DC: American Psychological Association.
Perry, L., Blaha, L. M., & Townsend, J. T. (2008). Reassessing the architecture of same-different face judgments. Journal of Vision, 8, 88.
R Development Core Team. (2011). R: A language and environment for statistical computing [Computer software manual]. Vienna, Austria. http://www.R-project.org
Ratcliff, R., & Smith, P. L. (2004). A comparison of sequential sampling models for two-choice reaction time. Psychological Review, 111, 333–367.
Reinach, S. G. (1960). A nonparametric analysis for a multiway classification with one element per cell. South African Journal of Agricultural Science, 8, 941–960.
Repperger, D. W., Havig, P. R., Reis, G. A., Farris, K. A., McIntire, J. P., & Townsend, J. T. (2009). Studies on hazard functions and human performance. The Ohio Journal of Science, 109.
Sawilowsky, S. S. (1990). Nonparametric tests of interaction in experimental design. Review of Educational Research, 60, 91–126.
Schwarz, W. (1989). A new model to explain the redundant-signals effect. Perception & Psychophysics, 46, 498–500.
Schwarz, W. (1994). Diffusion, superposition, and the redundant-targets effect. Journal of Mathematical Psychology, 38, 504–520.
Townsend, J. T. (1972). Some results concerning the identifiability of parallel and serial processes. British Journal of Mathematical and Statistical Psychology, 25, 168–199.
Townsend, J. T. (1974). Issues and models concerning the processing of a finite number of inputs. In B. H. Kantowitz (Ed.), Human information processing: Tutorials in performance and cognition (pp. 133–168). Hillsdale, NJ: Erlbaum Press.
Townsend, J. T., & Altieri, N. (2012). An accuracy-response time capacity assessment function that measures performance against standard parallel predictions. Psychological Review, 119, 500–516.
Townsend, J. T., & Ashby, F. G. (1983). The stochastic modeling of elementary psychological processes. Cambridge: Cambridge University Press.
Townsend, J. T., & Eidels, A. (2011). Workload capacity spaces: A unified methodology for response time measures of efficiency as workload is varied. Psychonomic Bulletin & Review, 18, 659–681.
Townsend, J. T., & Fifić, M. (2004). Parallel and serial processing and individual differences in high-speed scanning in human memory. Perception & Psychophysics, 66, 953–962.
Townsend, J. T., & Honey, C. J. (2007). Consequences of base time for redundant signals experiments. Journal of Mathematical Psychology, 51, 242–265.
Townsend, J. T., & Nozawa, G. (1995). Spatio-temporal properties of elementary perception: An investigation of parallel, serial and coactive theories. Journal of Mathematical Psychology, 39, 321–360.
Townsend, J. T., & Thomas, R. D. (1994). Stochastic dependencies in parallel and serial models: Effects on systems factorial interactions. Journal of Mathematical Psychology, 38, 1–24.
Townsend, J. T., & Wenger, M. J. (2004). A theory of interactive parallel processing: New capacity measures and predictions for a response time inequality series. Psychological Review, 111, 1003–1035.
Von Der Heide, R. J., Wenger, M. J., Gilmore, R. O., & Elbich, D. (2011). Developmental changes in encoding and the capacity to process face information. Journal of Vision, 11.
Yang, C.-T. (2011). Relative saliency in change signal affects perceptual comparison and decision processes in change detection. Journal of Experimental Psychology: Human Perception and Performance, 37, 1708–1728.
Yang, C.-T., Chang, T.-Y., & Wu, C.-J. (2012). Relative change probability affects the decision process of detecting multiple feature changes. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication.
Yang, H., Houpt, J. W., Khodadadi, A., & Townsend, J. T. (2011). Revealing the underlying mechanism of implicit race bias. Poster presented at: Midwestern Cognitive Science Meeting. East Lansing, MI.
Yang, C.-T., Hsu, Y.-F., Huang, H.-Y., & Yeh, Y.-Y. (2011b). Relative salience affects the process of detecting changes in orientation and luminance. Acta Psychologica, 138, 377–389.
Zehetleitner, M., Krummenacher, J., & Müller, H. J. (2009). The detection of feature singletons defined in two dimensions is based on salience summation, rather than on serial exhaustive architectures. Attention, Perception, & Psychophysics, 71, 1739–1759.
Acknowledgements
This work was supported by AFOSR grant 10RH07COR to the late D. W. Repperger and P. R. Havig. We would like to thank Chris Myers for his comments on an early version of the manuscript.
Additional information
Distribution A: Approved for public release; distribution unlimited. Approval given by WPAFB Public Affairs. Disposition Date: 19 November 2012, Document Number 88ABW-2012-6052
About this article
Cite this article
Houpt, J.W., Blaha, L.M., McIntire, J.P. et al. Systems factorial technology with R. Behav Res 46, 307–330 (2014). https://doi.org/10.3758/s13428-013-0377-3
Keywords
- Response Time Distribution
- Stimulus Rate
- Cumulative Hazard Function
- Capacity Coefficient
- Selective Influence