Electrophysiological evidence for a self-processing advantage during audiovisual speech integration
Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. One issue these studies leave unanswered is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through better processing and integration of the sensory inputs with our own sensorimotor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that had previously been recorded from the participant or from a speaker the participant had never met. Audiovisual interactions were estimated by comparing the N1 and P2 auditory evoked potentials in the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of the P2 auditory evoked potential in the AV compared to the A + V condition. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of one's own speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of the visual stimuli. These results provide evidence for temporal facilitation of the integration of auditory and visual speech signals when the visual input involves our own speech gestures.
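For readers who want to see how such an additive-model contrast can be computed in practice, the sketch below illustrates the AV versus A + V comparison of N1 and P2 peaks using the open-source MNE-Python library. This is a minimal illustration, not the authors' analysis pipeline: the file names, channel type, and peak-search windows are assumptions chosen for the example.

```python
import mne

# Read subject-level averaged evoked responses for each condition
# (file names are hypothetical placeholders, not from the study).
ev_a = mne.read_evokeds("sub01_A-ave.fif", condition=0)    # auditory alone
ev_v = mne.read_evokeds("sub01_V-ave.fif", condition=0)    # visual alone
ev_av = mne.read_evokeds("sub01_AV-ave.fif", condition=0)  # audiovisual

# Additive model: the "no interaction" prediction is the sum of the
# unimodal responses, A + V (equal unit weights).
ev_sum = mne.combine_evoked([ev_a, ev_v], weights=[1, 1])

# Locate the N1 (most negative peak, ~70-150 ms) and P2 (most positive
# peak, ~150-250 ms) in each waveform; the window bounds are illustrative.
for name, ev in [("AV", ev_av), ("A+V", ev_sum)]:
    _, n1_lat, n1_amp = ev.get_peak(ch_type="eeg", tmin=0.07, tmax=0.15,
                                    mode="neg", return_amplitude=True)
    _, p2_lat, p2_amp = ev.get_peak(ch_type="eeg", tmin=0.15, tmax=0.25,
                                    mode="pos", return_amplitude=True)
    print(f"{name}: N1 {n1_amp * 1e6:.2f} uV at {n1_lat * 1e3:.0f} ms, "
          f"P2 {p2_amp * 1e6:.2f} uV at {p2_lat * 1e3:.0f} ms")
```

Under this scheme, a smaller P2 amplitude for AV than for A + V would correspond to the amplitude suppression reported above, and an earlier N1 peak latency for AV would correspond to the temporal facilitation effect.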
Keywords: Self recognition · Speech perception · Audiovisual integration · EEG
This study was supported by research funds from the European Research Council (FP7/2007-2013 Grant Agreement No. 339152).