Abstract
In this paper, we analyze different approaches to audio–visual speech recognition (AVSR). We focus mainly on testing different modality fusion techniques rather than on other parts of the AVSR pipeline (e.g., feature extraction methods). Three audio–visual modality integration methods were considered, namely the GMM-CHMM, DNN-HMM, and end-to-end approaches, identified as the most promising and most commonly found in the scientific literature. The testing was performed on two different datasets: the GRID corpus for the English language and the HAVRUS corpus for Russian. The obtained results once again confirm the superiority of neural network approaches over the others when enough data are available to train NN models effectively, as demonstrated by our experiments on the GRID dataset. On the more compact HAVRUS database, the best recognition results were achieved by the traditional GMM-CHMM approach. This paper presents our vision of the current state of the audio–visual speech recognition field and possible directions for further research.
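As a rough illustration of the modality fusion idea compared in this work, the sketch below shows a generic decision-level combination of audio and visual recognizer scores with a stream weight. This is a minimal, hypothetical example for intuition only; it is not the GMM-CHMM, DNN-HMM, or end-to-end pipeline evaluated in the paper, and the function and parameter names are illustrative.

```python
import numpy as np

def fuse_log_likelihoods(audio_ll: np.ndarray,
                         visual_ll: np.ndarray,
                         stream_weight: float = 0.7) -> int:
    """Combine per-class log-likelihoods from independent audio and visual
    recognizers with a weight in [0, 1] and return the best class index.
    Purely illustrative decision-level fusion, not the authors' method."""
    fused = stream_weight * audio_ll + (1.0 - stream_weight) * visual_ll
    return int(np.argmax(fused))

# Toy usage: three candidate words; audio favors word 0, video favors word 1.
audio_scores = np.array([-10.0, -12.5, -15.0])   # log P(audio obs. | word)
visual_scores = np.array([-20.0, -13.0, -18.0])  # log P(video obs. | word)
print(fuse_log_likelihoods(audio_scores, visual_scores))  # -> 0
```

In practice, the stream weight would be tuned (or estimated dynamically) to reflect the relative reliability of the acoustic and visual channels, e.g., lowering the audio weight under acoustic noise.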
Acknowledgements
This research is supported by the RFBR projects No. 19-29-09081 (Section 3) and No. 18-37-00306 (Sections 2 and 4), as well as state research No. 0073-2019-0005.
Copyright information
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Ivanko, D., Ryumin, D., Karpov, A. (2021). An Experimental Analysis of Different Approaches to Audio–Visual Speech Recognition and Lip-Reading. In: Ronzhin, A., Shishlakov, V. (eds) Proceedings of 15th International Conference on Electromechanics and Robotics "Zavalishin's Readings". Smart Innovation, Systems and Technologies, vol 187. Springer, Singapore. https://doi.org/10.1007/978-981-15-5580-0_16
DOI: https://doi.org/10.1007/978-981-15-5580-0_16
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-5579-4
Online ISBN: 978-981-15-5580-0
eBook Packages: Intelligent Technologies and Robotics (R0)