Cognitive, Affective, & Behavioral Neuroscience, Volume 13, Issue 3, pp 598–614

Could millisecond timing errors in commonly used equipment be a cause of replication failure in some neuroscience studies?


Abstract

Neuroscience is a rapidly expanding field in which complex studies and equipment setups are the norm. Often these push the boundaries of what technology can offer, and increasingly they make use of a wide range of stimulus materials and interconnected equipment (e.g., magnetic resonance imaging, electroencephalography, magnetoencephalography, eye trackers, biofeedback, etc.). The software that binds the various constituent parts together allows ever more elaborate investigations to be carried out with apparent ease. However, research over the last decade has suggested a growing, yet underacknowledged, problem with obtaining millisecond-accurate timing in some computer-based studies. Crucially, timing inaccuracies can affect not just response time measurements, but also stimulus presentation and the synchronization between equipment. This is not a new problem, but rather one that researchers may have assumed had been solved with the advent of faster computers, state-of-the-art equipment, and more advanced software. In this article, we highlight the potential sources of timing error, their causes, and their likely impact on replication. Unfortunately, in many applications, inaccurate timing is not easily resolved by using ever-faster computers, newer equipment, or post hoc statistical manipulation. To ensure consistency across the field, we advocate that researchers self-validate the timing accuracy of their own equipment whilst running the actual paradigm in situ.
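
The final recommendation above, self-validating timing in situ, can be made concrete with a first-pass software probe. Below is a minimal sketch in Python; it is our illustration only, not the article's method or the authors' Black Box ToolKit, and the function names are hypothetical. It estimates the host clock's effective resolution and how far a nominal 1 ms wait overshoots under operating-system scheduling:

```python
# timing_probe.py -- an illustrative sketch, not a published toolkit.
# It can only characterize the software clock itself; it cannot see display,
# sound card, or input-device latencies, which require external hardware.

import statistics
import time


def timer_resolution(samples: int = 10_000) -> float:
    """Smallest nonzero gap observable between successive clock readings."""
    gaps = []
    for _ in range(samples):
        t0 = time.perf_counter()
        t1 = time.perf_counter()
        while t1 == t0:  # spin until the clock visibly advances
            t1 = time.perf_counter()
        gaps.append(t1 - t0)
    return min(gaps)


def sleep_overshoot(requested_ms: float = 1.0, trials: int = 200):
    """Mean and worst overshoot of time.sleep() for a nominal 1 ms wait."""
    overshoots_ms = []
    for _ in range(trials):
        t0 = time.perf_counter()
        time.sleep(requested_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        overshoots_ms.append(elapsed_ms - requested_ms)
    return statistics.mean(overshoots_ms), max(overshoots_ms)


if __name__ == "__main__":
    res_us = timer_resolution() * 1e6
    mean_ms, worst_ms = sleep_overshoot()
    print(f"clock resolution:     ~{res_us:.2f} us")
    print(f"1 ms sleep overshoot: mean {mean_ms:.3f} ms, worst {worst_ms:.3f} ms")
```

Even clean numbers from a probe like this say nothing about display refresh, audio buffering, or USB keyboard polling, all of which are invisible to software; this is why the article argues that genuine self-validation requires externally measuring the paradigm as it actually runs.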

Keywords

Replication · Millisecond timing accuracy · Millisecond timing error · Experiment generators · Equipment error


Copyright information

© Psychonomic Society, Inc. 2013

Authors and Affiliations

  1. Black Box ToolKit Ltd, Sheffield, UK
  2. Department of Psychology, University of York, York, UK
