Abstract
Experiments show that human perceptual responses to audiovisual speech signals factorize into two independent components, one controlled by the optic signal and one controlled by the acoustic signal (Massaro, 1987). From a Bayesian point of view, this result indicates that, at some level, the perceptual system treats acoustic and optic speech signals as if they were conditionally independent processes. This raises the question of whether conditional independence is an optimal assumption or whether the perceptual system uses it for reasons other than minimization of error rates. In this paper we present results suggesting that the opto-acoustic signals are indeed conditionally independent and that therefore the factorization of optic and acoustic influences observed in humans is optimal. Finally, based on a previous analysis by Movellan and McClelland (1995) we show that the implicit assumption of conditional independence can be implemented in nervous systems by using physically separable audio and visual channels that talk to each other via top-down feedback connections.
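The factorization the abstract describes corresponds to the Bayesian fusion rule P(c | A, V) ∝ P(A | c) · P(V | c) · P(c), where the acoustic and optic likelihoods are combined multiplicatively because the two signals are assumed conditionally independent given the speech category. The following sketch illustrates that combination rule; the category names and likelihood values are hypothetical, chosen only to show the structure.

```python
# Sketch of audio-visual fusion under the conditional-independence
# assumption: P(c | A, V) is proportional to P(A | c) * P(V | c) * P(c).
# All numbers below are hypothetical illustration values.

def fuse(p_audio, p_visual, prior):
    """Posterior over categories given per-channel likelihoods.

    p_audio, p_visual, prior: dicts mapping category -> probability.
    """
    # Multiplicative combination follows from conditional independence
    # of the acoustic and optic signals given the category.
    unnorm = {c: p_audio[c] * p_visual[c] * prior[c] for c in prior}
    z = sum(unnorm.values())
    return {c: u / z for c, u in unnorm.items()}

# Hypothetical likelihoods for two speech categories:
audio = {"ba": 0.8, "da": 0.2}   # acoustic channel favours /ba/
visual = {"ba": 0.3, "da": 0.7}  # optic channel favours /da/
prior = {"ba": 0.5, "da": 0.5}   # uniform prior

posterior = fuse(audio, visual, prior)
```

Because each channel enters only through its own likelihood term, the influence of the optic and acoustic signals factorizes, mirroring the experimental result the chapter discusses.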
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
Cite this chapter
Movellan, J.R., Chadderdon, G. (1996). Channel Separability in the Audio-Visual Integration of Speech: A Bayesian Approach. In: Stork, D.G., Hennecke, M.E. (eds) Speechreading by Humans and Machines. NATO ASI Series, vol 150. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-13015-5_36
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-08252-8
Online ISBN: 978-3-662-13015-5