Psychological Research, Volume 78, Issue 3, pp 349–360

Bottom-up influences of voice continuity in focusing selective auditory attention

  • Scott Bressler
  • Salwa Masud
  • Hari Bharadwaj
  • Barbara Shinn-Cunningham
Original Article

DOI: 10.1007/s00426-014-0555-7

Cite this article as:
Bressler, S., Masud, S., Bharadwaj, H. et al. Psychological Research (2014) 78: 349. doi:10.1007/s00426-014-0555-7

Abstract

Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings.

Supplementary material

426_2014_555_MOESM1_ESM.wav (147 kb)
Supplementary material 1 (WAV 147 kb)
426_2014_555_MOESM2_ESM.wav (274 kb)
Supplementary material 2 (WAV 273 kb)
426_2014_555_MOESM3_ESM.wav (157 kb)
Supplementary material 3 (WAV 157 kb)
426_2014_555_MOESM4_ESM.wav (268 kb)
Supplementary material 4 (WAV 268 kb)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Scott Bressler (1)
  • Salwa Masud (1, 2)
  • Hari Bharadwaj (1, 2)
  • Barbara Shinn-Cunningham (1, 2)

  1. Center for Computational Neuroscience and Neural Technology, Boston University, Boston, USA
  2. Department of Biomedical Engineering, Boston University, Boston, USA