Behavior Research Methods, Volume 51, Issue 2, pp 961–985

Dynamic models of choice

  • Andrew Heathcote
  • Yi-Shin Lin
  • Angus Reynolds
  • Luke Strickland
  • Matthew Gretton
  • Dora Matzke


Parameter estimation in evidence-accumulation models of choice response times is demanding of both the data and the user. We outline how to fit evidence-accumulation models using the flexible, open-source, R-based Dynamic Models of Choice (DMC) software. DMC provides a hands-on introduction to the Bayesian implementation of two popular evidence-accumulation models: the diffusion decision model (DDM) and the linear ballistic accumulator (LBA). It enables individual and hierarchical estimation, as well as assessment of the quality of a model’s parameter estimates and descriptive accuracy. First, we introduce the basic concepts of Bayesian parameter estimation, guiding the reader through a simple DDM analysis. We then illustrate the challenges of fitting evidence-accumulation models using a set of LBA analyses. We emphasize best practices in modeling and discuss the importance of parameter- and model-recovery simulations, exploring the strengths and weaknesses of models in different experimental designs and parameter regions. We also demonstrate how DMC can be used to model complex cognitive processes, using as an example a race model of the stop-signal paradigm, which is used to measure inhibitory ability. We illustrate the flexibility of DMC by extending this model to account for mixtures of cognitive processes resulting from attention failures. We then guide the reader through the practical details of a Bayesian hierarchical analysis, from specifying priors to obtaining posterior distributions that encapsulate what has been learned from the data. Finally, we illustrate how the Bayesian approach leads to a quantitatively cumulative science, showing how to use posterior distributions to specify priors that can be used to inform the analysis of future experiments.
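The abstract's closing point, that posterior distributions from one experiment can serve as priors for the next, is the core of quantitatively cumulative Bayesian updating. The following is a minimal, hypothetical sketch of that idea (not DMC's actual implementation, which is R-based and uses MCMC sampling over full evidence-accumulation likelihoods): a conjugate normal–normal update of a single illustrative parameter, where the posterior from a first study becomes the prior for a second.

```python
import math

def normal_update(prior_mean, prior_sd, data, noise_sd):
    """Conjugate normal-normal update: combine a normal prior with
    normally distributed observations whose noise SD is known.
    Returns the posterior mean and posterior SD."""
    n = len(data)
    prior_prec = 1.0 / prior_sd**2          # precision of the prior
    data_prec = n / noise_sd**2             # precision contributed by the data
    post_prec = prior_prec + data_prec      # precisions add under conjugacy
    post_mean = (prior_prec * prior_mean +
                 data_prec * (sum(data) / n)) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Study 1: a vague prior on a hypothetical "drift rate" parameter
m1, s1 = normal_update(prior_mean=0.0, prior_sd=10.0,
                       data=[2.1, 1.8, 2.4], noise_sd=1.0)

# Study 2: the study-1 posterior is reused as the prior,
# so evidence accumulates across experiments
m2, s2 = normal_update(prior_mean=m1, prior_sd=s1,
                       data=[2.0, 2.2], noise_sd=1.0)
```

Each update shrinks the posterior SD, so uncertainty decreases monotonically as studies accumulate; DMC applies the same logic with sampled (rather than closed-form) posteriors.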


Response time · Bayesian estimation · Diffusion decision model · Linear ballistic accumulator · Stop-signal paradigm



Copyright information

© Psychonomic Society, Inc. 2018

Authors and Affiliations

  • Andrew Heathcote (1)
  • Yi-Shin Lin (1)
  • Angus Reynolds (1)
  • Luke Strickland (1)
  • Matthew Gretton (1)
  • Dora Matzke (2)

  1. Division of Psychology, University of Tasmania, Hobart, Australia
  2. Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
