
Attention, Perception, & Psychophysics, Volume 81, Issue 2, pp 571–589

Not just a function of function words: Distal speech rate influences perception of prosodically weak syllables

  • Melissa M. Baese-Berk
  • Laura C. Dilley
  • Molly J. Henry
  • Louis Vinke
  • Elina Banzina

Abstract

Listeners resolve ambiguities in speech perception using multiple sources of information, including non-local, or distal, speech rate (i.e., the speech rate of the material surrounding a particular region). The ability to resolve ambiguities is particularly important for perceiving casual, everyday speech, which is often produced with phonetically reduced forms. Here, we examine whether the distal speech rate effect is specific to a single lexical class (function words) and/or to particular lexical or phonological contexts. In Experiment 1, we examined whether distal speech rate influenced perception of phonologically similar content words differing in number of syllables (e.g., form/forum). In Experiment 2, we used both transcription and word-monitoring tasks to examine whether distal speech rate influenced perception of a reduced vowel, causing lexical reorganization (e.g., cease vs. see us). Distal speech rate influenced perception of lexical content in both experiments. These results demonstrate that distal rate influences perception of a lexical class other than function words and that it affects perception in a variety of phonological and lexical contexts. They support the view that distal speech rate is a pervasive source of information with far-reaching consequences for perception of lexical content and for word segmentation.
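The core manipulation summarized above is a change in the speech rate of the distal (surrounding) context while the proximal target region is left acoustically unchanged. The abstract does not describe the stimulus-preparation pipeline, so the following is only a minimal illustrative sketch of such a manipulation, not the authors' procedure; the file name, target-region boundaries, rate factor, and the use of the librosa/soundfile libraries are all assumptions introduced here for illustration.

```python
# Illustrative sketch of a distal-rate manipulation: time-stretch the context
# surrounding a target region while leaving the target itself untouched.
# File name, boundary times, and rate factor are hypothetical.
import librosa
import numpy as np
import soundfile as sf

AUDIO_IN = "sentence.wav"                      # hypothetical recording
TARGET_START_S, TARGET_END_S = 1.20, 1.45      # hypothetical target-region boundaries (s)
CONTEXT_RATE = 0.7                             # <1 slows the distal context; 1.0 leaves it as is

y, sr = librosa.load(AUDIO_IN, sr=None)        # keep the original sampling rate
i0, i1 = int(TARGET_START_S * sr), int(TARGET_END_S * sr)

pre, target, post = y[:i0], y[i0:i1], y[i1:]

# Stretch only the distal material (phase vocoder keeps pitch roughly constant).
pre_slow = librosa.effects.time_stretch(pre, rate=CONTEXT_RATE)
post_slow = librosa.effects.time_stretch(post, rate=CONTEXT_RATE)

out = np.concatenate([pre_slow, target, post_slow])
sf.write("sentence_slow_context.wav", out, sr)
```

Whatever tool is used, the essential property is that the target region is physically identical across conditions, so any change in what listeners report hearing can be attributed to the distal material.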

Keywords

Speech perception · Spoken word recognition · Word perception

Notes

Acknowledgements

This work was partially supported by an NSF Faculty Early Career Development (CAREER) Award and NSF grant BCS 1431063 to Laura C. Dilley and by a University of Oregon Faculty Research Award to Melissa M. Baese-Berk.


Copyright information

© The Psychonomic Society, Inc. 2018

Authors and Affiliations

  • Melissa M. Baese-Berk (1)
  • Laura C. Dilley (2, corresponding author)
  • Molly J. Henry (3)
  • Louis Vinke (4)
  • Elina Banzina (5)

  1. Department of Linguistics, 1290 University of Oregon, Eugene, USA
  2. Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, USA
  3. Department of Psychology, Brain and Mind Institute, University of Western Ontario, London, Canada
  4. Center for Systems Neuroscience, Boston University, Boston, USA
  5. Department of Linguistics, Stockholm School of Economics in Riga, Riga, Latvia
