Experimental Brain Research, Volume 167, Issue 1, pp 66–75

Automatic audiovisual integration in speech perception

Research article

Abstract

Two experiments aimed to determine whether features of both the visual and the acoustic input are always merged into the perceived representation of speech, and whether this audiovisual integration is based on cross-modal binding functions or on imitation. In a McGurk paradigm, observers were required to repeat aloud a string of phonemes uttered by an actor (the acoustic presentation) whose mouth, in contrast, mimicked the pronunciation of a different string (the visual presentation). In a control experiment, participants read the same strings of printed letters aloud; this condition made it possible to analyze the voice pattern and the lip kinematics while controlling for imitation. In the control experiment and in the congruent audiovisual presentation, i.e. when the actor's mouth gestures matched the string of phonemes being uttered, the voice spectrum and the lip kinematics varied according to the pronounced string of phonemes. In the McGurk paradigm, the participants were unaware of the incongruence between the visual and acoustic stimuli. The acoustic analysis of the participants' spoken responses revealed three distinct patterns: fusion of the two stimuli (the McGurk effect), repetition of the acoustically presented string of phonemes, and, less frequently, repetition of the string of phonemes corresponding to the mouth gestures mimicked by the actor. However, in the latter two response types the second formant (F2) of the participants' voice spectra always differed from the value recorded in the congruent audiovisual presentation: it approached the F2 of the string of phonemes presented in the other, apparently ignored, modality. The lip kinematics of participants repeating the acoustically presented string of phonemes were influenced by observation of the lip movements mimicked by the actor, but only when a labial consonant was pronounced. The data are discussed in favor of the hypothesis that features of both the visual and acoustic inputs always contribute to the representation of a string of phonemes, and that cross-modal integration occurs by extracting the mouth articulation features specific to the pronunciation of that string of phonemes.
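
The key acoustic measure in these results is the second formant (F2) of the participants' voice spectra. As a minimal sketch of how such a measure can be obtained, assuming the praat-parselmouth Python bindings for Praat, the example below averages F2 over a hand-labelled vowel interval; the file names, interval boundaries, and formant ceiling are hypothetical illustration values, not the authors' actual analysis settings.

```python
import parselmouth  # praat-parselmouth: Python interface to Praat

def mean_f2(wav_path, t_start, t_end, n_points=20, ceiling_hz=5000.0):
    """Average second-formant value (Hz) over a hand-labelled vowel
    interval [t_start, t_end], given in seconds."""
    sound = parselmouth.Sound(wav_path)
    # Burg-method formant tracking; a ~5000 Hz ceiling is a common
    # default for adult male voices (~5500 Hz for female voices).
    formant = sound.to_formant_burg(maximum_formant=ceiling_hz)
    step = (t_end - t_start) / (n_points - 1)
    values = [formant.get_value_at_time(2, t_start + i * step)
              for i in range(n_points)]
    voiced = [v for v in values if v == v]  # drop NaN (unvoiced) frames
    return sum(voiced) / len(voiced) if voiced else float("nan")

# Hypothetical usage: compare a response from the incongruent (McGurk)
# condition against the congruent audiovisual baseline.
# f2_incongruent = mean_f2("mcgurk_response.wav", 0.12, 0.28)
# f2_congruent = mean_f2("congruent_response.wav", 0.10, 0.26)
```

Comparing such F2 estimates between the congruent baseline and the incongruent condition corresponds, in spirit, to the comparison reported above: even when participants repeated the acoustically presented string, their F2 shifted toward that of the visually presented one.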

Keywords

McGurk effect · Audiovisual integration · Voice spectrum analysis · Lip kinematics · Imitation

Acknowledgements

We wish to thank Paola Santunione and Andrea Candiani for their help in carrying out the experiments, and Dr. Cinzia Di Dio for her comments on the manuscript. The work was supported by a grant from MIUR (Ministero dell’Istruzione, dell’Università e della Ricerca) to M.G.

Copyright information

© Springer-Verlag 2005

Authors and Affiliations

  1. Dipartimento di Neuroscienze, Università di Parma, Parma, Italy
