Language Identification Based on the Variations in Intonation Using Multi-classifier Systems
In this article we draw on the characteristics of tonal languages and machine learning methodologies to uncover the patterns within them. Instead of analyzing absolute pitch or frequency, we analyze how one tone transitions to another in speech. Features, namely zero crossing count, short-time energy, minimum formant frequency, and maximum formant frequency, are extracted from the tonal transitions over segments of audio signals. We have developed a multi-classifier system using four classifiers, namely the maximum likelihood estimate (MLE), minimum distance classifier (MDC), k-nearest neighbor (kNN) classifier, and fuzzy kNN classifier, to automatically identify tonal languages from audio signals. Each individual classifier is first trained on known data represented by the extracted features; the trained classifiers are then used for language identification, and their outputs are combined to generate the final decision. Experiments are conducted on three tonal languages: Chinese, Thai, and Vietnamese. The results show that the developed multi-classifier model produces promising output, that the extracted features outperform the commonly used raw frequency value as a feature, and that the ensemble of classifiers performs better than any individual classifier.
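The combination scheme described above can be illustrated with a minimal sketch. The following code is not the paper's implementation; it assumes toy four-dimensional feature vectors (zero crossing count, short-time energy, minimum and maximum formant frequency) have already been extracted per audio segment, and it combines a kNN classifier and a minimum distance classifier by simple majority vote. The training data and function names are illustrative only.

```python
# Illustrative majority-vote multi-classifier sketch (not the paper's code).
# Each sample is a toy feature vector:
# [zero crossing count, short-time energy, min formant (Hz), max formant (Hz)].
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, labels, x, k=3):
    # k-nearest-neighbor vote over the training set
    nearest = sorted(range(len(train)), key=lambda i: euclidean(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def mdc_predict(train, labels, x):
    # minimum distance classifier: assign to the nearest class mean
    means = {}
    for lab in set(labels):
        pts = [train[i] for i in range(len(train)) if labels[i] == lab]
        means[lab] = [sum(col) / len(pts) for col in zip(*pts)]
    return min(means, key=lambda lab: euclidean(means[lab], x))

def ensemble_predict(train, labels, x):
    # combine individual classifier decisions by majority vote
    votes = [knn_predict(train, labels, x),
             knn_predict(train, labels, x, k=1),
             mdc_predict(train, labels, x)]
    return Counter(votes).most_common(1)[0][0]

# Toy training data: two samples each for two hypothetical languages.
train = [[10, 0.50, 300, 2500], [12, 0.60, 310, 2600],
         [30, 0.20, 500, 3200], [28, 0.25, 520, 3100]]
labels = ["Thai", "Thai", "Vietnamese", "Vietnamese"]

print(ensemble_predict(train, labels, [11, 0.55, 305, 2550]))  # → Thai
```

The same voting structure extends directly to four classifiers; with an even number of voters a tie-breaking rule (for example, preferring the classifier with the highest training accuracy) would be needed.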
Keywords: Tonal language · Language identification · Classification · Multi-classifier
An earlier version of this work was presented at the Intel International Science and Engineering Fair (Intel ISEF), held in Los Angeles, USA, in May 2017, where it won a Grand Award. The author would like to thank her school teacher, Dr. Partha Pratim Roy, for advising her throughout the course of this work. Thanks are due to the Intel Initiative for Research and Innovation in Science (IRIS) Scientific Review Committee and her mentors for their valuable comments. The author also thanks Rahul Roy and Ajoy Mondal, her parents' students, for helping her conduct the experiments.