Abstract
In Computer-Assisted Language Learning systems, pronunciation scoring consists of assigning a score that grades the overall pronunciation quality of the speech uttered by a student. In this work, a log-likelihood ratio computed with respect to two automatic speech recognition (ASR) models was used as the score: one model represents native pronunciation, while the other captures non-native pronunciation. Different approaches to obtaining each model and different amounts of training data were analyzed. The best results were obtained by training an ASR system on a separate large corpus without pronunciation-quality annotations and then adapting it sequentially to the native and non-native data. Nevertheless, when models are trained directly on the native and non-native data, pronunciation scoring performance is similar. This is a surprising result considering that the word error rates of these models are significantly worse, indicating that ASR performance is not a good predictor of pronunciation scoring performance on this system.
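The scoring rule described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the total log-likelihoods of an utterance under the native and non-native ASR models have already been computed, and the function name and numeric values are hypothetical.

```python
def pronunciation_score(log_lik_native, log_lik_nonnative, num_frames):
    """Log-likelihood ratio score, normalized by utterance length.

    Higher values indicate the utterance is better explained by the
    native model than by the non-native model, i.e., more native-like
    pronunciation.
    """
    return (log_lik_native - log_lik_nonnative) / num_frames

# Toy example: made-up log-likelihood totals for a 200-frame utterance.
score = pronunciation_score(-1450.0, -1520.0, 200)
print(score)  # 0.35
```

A threshold on this score (tuned on held-out annotated data) would then separate acceptable from poor pronunciations.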
Acknowledgments
Work partially supported by ANPCYT PICT 2014-1713.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Landini, F., Ferrer, L., Franco, H. (2017). Adaptation Approaches for Pronunciation Scoring with Sparse Training Data. In: Karpov, A., Potapova, R., Mporas, I. (eds) Speech and Computer. SPECOM 2017. Lecture Notes in Computer Science, vol 10458. Springer, Cham. https://doi.org/10.1007/978-3-319-66429-3_8
Print ISBN: 978-3-319-66428-6
Online ISBN: 978-3-319-66429-3