Attention, Perception, & Psychophysics, Volume 81, Issue 4, pp 1108–1118

Working-memory disruption by task-irrelevant talkers depends on degree of talker familiarity

  • Jens Kreitewolf
  • Malte Wöstmann
  • Sarah Tune
  • Michael Plöchl
  • Jonas Obleser
Part of the special issue: Perceptual/Cognitive Constraints on the Structure of Speech Communication: In Honor of Randy Diehl


Abstract

When listening, familiarity with an attended talker’s voice improves speech comprehension. Here, we instead investigated the effect of familiarity with a distracting talker. In an irrelevant-speech task, we assessed listeners’ working memory for the serial order of spoken digits while a task-irrelevant sentence was produced by either a familiar or an unfamiliar talker (with rare omissions of the task-irrelevant sentence). We tested two groups of listeners using the same experimental procedure. The first group comprised undergraduate psychology students (N = 66) who had attended an introductory statistics course. Critically, each student had been taught by one of two course instructors, whose voices served as the familiar and unfamiliar task-irrelevant talkers. The second group comprised family members and friends (N = 20) who had known one of the two talkers for more than 10 years. Students, but not family members and friends, made more errors when the task-irrelevant talker was familiar rather than unfamiliar. Interestingly, the effect of talker familiarity was not modulated by the presence of task-irrelevant speech: Students experienced stronger working-memory disruption by a familiar talker, irrespective of whether they heard a task-irrelevant sentence during memory retention or merely expected it. Whereas previous work has shown that familiarity with an attended talker benefits speech comprehension, our findings indicate that familiarity with an ignored talker disrupts working memory for target speech. The absence of this effect in family members and friends suggests that the degree of familiarity modulates the memory disruption.


Keywords: Talker familiarity · Working memory · Irrelevant-speech task · Attention · Distraction


Author note

Research was funded by the University of Lübeck. Photographs appear by courtesy of Leo Waschke. We thank all of the students, family members, and friends who participated in this experiment, as well as two anonymous reviewers for their valuable comments on an earlier version of the manuscript.



Copyright information

© The Psychonomic Society, Inc. 2019

Authors and Affiliations

  1. Department of Psychology, University of Lübeck, Lübeck, Germany
