Abstract
The purpose of this work is to investigate the recognition-accuracy improvements achieved by adding optimization stages for tuning the parameters of an Active Learning (AL) classifier. Since large amounts of data are typically available in Speaker Recognition (SR) tasks, the AL concept, which places human annotators inside its learning loop to uncover hidden insights in unlabeled data, seems highly suitable, without demanding much expertise from the human participants. Six datasets containing utterances from 8 and 16 speakers under different recording setups are described by audiovisual features and evaluated through the time-efficient Uncertainty Sampling (UncS) query strategy. Both Support Vector Machine (SVM) and Random Forest (RF) algorithms were tuned on a small subset of the initial training data and then applied iteratively to mine the most suitable instances from a corresponding pool of unlabeled instances. Useful conclusions are drawn concerning the values of the selected parameters, allowing future optimization attempts to be confined to narrower regions, while remarkable improvement rates were obtained with an ideal annotator.
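The pool-based loop described above can be sketched as follows. This is a minimal illustration assuming scikit-learn, with a synthetic 8-class dataset standing in for the paper's audiovisual speaker features; the classifier settings, query budget, and labeled-seed size are illustrative choices, not the authors' exact configuration.

```python
# Sketch of pool-based active learning with uncertainty sampling:
# train on a small labeled seed, then repeatedly query the pool
# instance the model is least confident about.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an 8-speaker feature set.
X, y = make_classification(n_samples=600, n_features=20, n_classes=8,
                           n_informative=12, random_state=0)
X_lab, X_pool, y_lab, y_pool = train_test_split(
    X, y, train_size=40, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
for _ in range(20):  # query budget
    clf.fit(X_lab, y_lab)
    proba = clf.predict_proba(X_pool)
    # Uncertainty sampling: lowest maximum posterior probability.
    idx = int(np.argmin(proba.max(axis=1)))
    # An ideal annotator supplies the true label of the queried instance.
    X_lab = np.vstack([X_lab, X_pool[idx]])
    y_lab = np.append(y_lab, y_pool[idx])
    X_pool = np.delete(X_pool, idx, axis=0)
    y_pool = np.delete(y_pool, idx)
```

The same loop applies unchanged to an SVM by swapping in a probability-calibrated `SVC`; only the hyperparameters being tuned differ.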
Acknowledgement
Stamatis Karlos and Konstantinos Kaleris gratefully acknowledge the support of their work by both the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI).
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Karlos, S., Kaleris, K., Fazakis, N., Kanas, V.G., Kotsiantis, S. (2018). Optimized Active Learning Strategy for Audiovisual Speaker Recognition. In: Karpov, A., Jokisch, O., Potapova, R. (eds.) Speech and Computer. SPECOM 2018. Lecture Notes in Computer Science, vol. 11096. Springer, Cham. https://doi.org/10.1007/978-3-319-99579-3_30
Print ISBN: 978-3-319-99578-6
Online ISBN: 978-3-319-99579-3