Attention, Perception, & Psychophysics, Volume 79, Issue 7, pp 2064–2072

Headphone screening to facilitate web-based auditory experiments

  • Kevin J. P. Woods
  • Max H. Siegel
  • James Traer
  • Josh H. McDermott

Abstract

Psychophysical experiments conducted remotely over the internet permit data collection from large numbers of participants but sacrifice control over sound presentation, and are therefore not widely employed in hearing research. To help standardize online sound presentation, we introduce a brief psychophysical test for determining whether online experiment participants are wearing headphones. Listeners judge which of three pure tones is quietest, with one of the tones presented 180° out of phase across the stereo channels. This task is intended to be easy over headphones but difficult over loudspeakers, where the antiphase channels sum in the air and partially cancel, making that tone seem quieter than it is. We validated the test in the lab with listeners known to be wearing headphones or listening over loudspeakers. The screening test was effective and efficient, discriminating between the two modes of listening with a small number of trials. When run online, it yielded a bimodal distribution of scores, suggesting that some participants performed the task over loudspeakers despite instructions to use headphones. The ability to detect and screen out these participants mitigates concerns over sound quality in online experiments, a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing.
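
The stimulus construction is simple enough to sketch. Below is a minimal Python example of one screening trial, assuming illustrative parameters (a 200-Hz tone, 1-s duration, 6-dB attenuation of the target tone, 0.5-s gaps) that stand in for the published values; the authors' actual task code is available from the link in the notes below.

# Minimal sketch of an antiphase-tone screening trial. Parameter values
# (frequency, duration, attenuation, gap length) are illustrative
# assumptions, not taken from the paper.
import numpy as np
from scipy.io import wavfile

FS = 44100          # sample rate (Hz)
FREQ = 200.0        # tone frequency (Hz) -- assumption for illustration
DUR = 1.0           # tone duration (s)
QUIET_DB = -6.0     # attenuation of the target ("quietest") tone

def tone(amp_db=0.0, antiphase=False, fs=FS, freq=FREQ, dur=DUR):
    """Return a stereo pure tone; if antiphase, invert the right channel.

    Over headphones the antiphase tone sounds as loud as a normal tone,
    but over loudspeakers the two channels partially cancel in the air,
    making it sound quieter than it really is.
    """
    t = np.arange(int(fs * dur)) / fs
    x = 10 ** (amp_db / 20.0) * np.sin(2 * np.pi * freq * t)
    left = x
    right = -x if antiphase else x
    return np.stack([left, right], axis=1)

def trial(target_pos):
    """Three tones separated by silence. The tone at target_pos is the
    true quietest (attenuated, in phase); the other two are full level,
    with one of them antiphase so that loudspeaker listeners tend to
    mistake it for the quiet one."""
    gap = np.zeros((int(FS * 0.5), 2))
    positions = [0, 1, 2]
    antiphase_pos = np.random.choice([p for p in positions if p != target_pos])
    segments = []
    for p in positions:
        if p == target_pos:
            segments.append(tone(amp_db=QUIET_DB))
        else:
            segments.append(tone(antiphase=(p == antiphase_pos)))
        if p < 2:
            segments.append(gap)
    return np.concatenate(segments)

if __name__ == "__main__":
    stim = trial(target_pos=1)  # correct answer: second tone
    wavfile.write("screening_trial.wav", FS,
                  (stim * 0.5 * 32767).astype(np.int16))

Listeners with headphones hear the attenuated tone as clearly quietest; listeners with loudspeakers instead hear the antiphase tone as quietest, which is what allows a few trials to separate the two groups.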

Keywords

Psychometrics/testing · Stimulus control · Audition

Acknowledgments

This work was supported by an NSF CAREER award and NIH grant 1R01DC014739-01A1 to J.H.M. The authors thank Malinda McPherson, Alex Kell, and Erica Shook for sharing data from Mechanical Turk experiments, Dorit Kliemann for help recruiting subjects for in-lab validation experiments, and Ray Gonzalez and Kelsey R. Allen for organizing code for distribution.

Code implementing the headphone screening task can be downloaded from the McDermott lab website (http://mcdermottlab.mit.edu/downloads.html).

Supplementary material

ESM 1: 13414_2017_1361_MOESM1_ESM.pdf (PDF, 5.94 MB)

Copyright information

© The Psychonomic Society, Inc. 2017

Authors and Affiliations

  • Kevin J. P. Woods (1, 2)
  • Max H. Siegel (1)
  • James Traer (1)
  • Josh H. McDermott (1, 2)

  1. Department of Brain and Cognitive Sciences, MIT, Cambridge, USA
  2. Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, USA
