Abstract
We present a freely available speech corpus for the Uzbek language and report preliminary automatic speech recognition (ASR) results using both the deep neural network hidden Markov model (DNN-HMM) and end-to-end (E2E) architectures. The Uzbek speech corpus (USC) comprises 958 different speakers with a total of 105 h of transcribed audio recordings. To the best of our knowledge, this is the first open-source Uzbek speech corpus dedicated to the ASR task. To ensure high quality, the USC has been manually checked by native speakers. We first describe the design and development procedures of the USC, and then describe the conducted ASR experiments in detail. The experimental results are promising and demonstrate the applicability of the USC to ASR: word error rates of 18.1% and 17.4% were achieved on the validation and test sets, respectively. To enable experiment reproducibility, we share the USC dataset, pre-trained models, and training recipes in our GitHub repository (https://github.com/IS2AI/Uzbek_ASR).
Notes
- 1.
- 2.
- 3. We trained several N-gram LMs with different orders and smoothing techniques and picked the one that obtained the best perplexity score on the validation set (a sketch of this selection procedure is given after this list).
- 4. Note that the Uzbek Latin alphabet contains 29 letters; however, some of them are represented using digraphs (e.g., ng, sh, ch, o’, and g’), which we broke down into smaller units, obtaining 25 letters. The 26th letter, ‘w’, comes from international words (a character-level sketch of this unit derivation also follows the list).
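The following is a minimal sketch of the LM selection procedure described in note 3: train N-gram LMs over a grid of orders and smoothing techniques and keep the one with the lowest validation perplexity. The paper does not name the LM toolkit (SRILM or KenLM would be typical in a Kaldi pipeline), so NLTK is used here purely for illustration; `train_sents` and `valid_sents` are hypothetical lists of tokenized Uzbek sentences.

```python
# Sketch only: grid-search N-gram order and smoothing, select by validation perplexity.
from nltk.lm import KneserNeyInterpolated, Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import ngrams


def train_lm(order, lm_class, train_sents):
    # Build padded n-gram training data plus vocabulary, then fit the LM.
    train_data, vocab = padded_everygram_pipeline(order, train_sents)
    lm = lm_class(order)
    lm.fit(train_data, vocab)
    return lm


def validation_perplexity(lm, order, valid_sents):
    # Perplexity on the held-out sentences, computed over padded n-grams.
    eval_ngrams = [
        ng
        for sent in valid_sents
        for ng in ngrams(pad_both_ends(sent, n=order), order)
    ]
    return lm.perplexity(eval_ngrams)


def select_best_lm(train_sents, valid_sents,
                   orders=(2, 3, 4),
                   smoothings=(KneserNeyInterpolated, Laplace)):
    # Keep the (perplexity, order, smoothing, model) tuple with the lowest perplexity.
    best = None
    for order in orders:
        for lm_class in smoothings:
            lm = train_lm(order, lm_class, train_sents)
            ppl = validation_perplexity(lm, order, valid_sents)
            if best is None or ppl < best[0]:
                best = (ppl, order, lm_class.__name__, lm)
    return best
```

The particular orders and smoothing classes above are placeholders for whatever grid was actually searched; only the selection criterion (best validation perplexity) is taken from the note.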
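Below is a minimal sketch of the grapheme-unit derivation described in note 4, assuming the digraphs are broken up simply by treating each transcript as a plain character sequence; how the apostrophe in o’ and g’ is handled is not specified in the note, so here it is kept as its own symbol. `transcripts` and the example lines are hypothetical.

```python
# Sketch only: collect the single-character output units after breaking digraphs.
def grapheme_inventory(transcripts):
    units = set()
    for text in transcripts:
        # Character-level split: a digraph such as "sh" or "o’" contributes
        # its constituent symbols ("s", "h", "o", "’") rather than one unit.
        units.update(ch for ch in text.lower() if not ch.isspace())
    return sorted(units)


transcripts = [
    "o’zbek tili",   # toy example; real data comes from the USC transcriptions
    "watt",          # international word contributing ‘w’
]
print(grapheme_inventory(transcripts))
```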
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Musaev, M., Mussakhojayeva, S., Khujayorov, I., Khassanov, Y., Ochilov, M., Atakan Varol, H. (2021). USC: An Open-Source Uzbek Speech Corpus and Initial Speech Recognition Experiments. In: Karpov, A., Potapova, R. (eds.) Speech and Computer. SPECOM 2021. Lecture Notes in Computer Science, vol. 12997. Springer, Cham. https://doi.org/10.1007/978-3-030-87802-3_40