Optimized Active Learning Strategy for Audiovisual Speaker Recognition

  • Stamatis Karlos
  • Konstantinos Kaleris
  • Nikos Fazakis
  • Vasileios G. Kanas
  • Sotiris Kotsiantis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11096)


The purpose of this work is to investigate the recognition accuracy gains obtained by adding an optimization stage that tunes the parameters of an Active Learning (AL) classifier. Since large amounts of unlabeled data are typically available in Speaker Recognition (SR) tasks, the AL concept, which places a human annotator inside the learning loop to uncover hidden insights in unlabeled data, is particularly suitable, without demanding much expertise from the human participant. Six datasets, containing utterances of 8 and 16 speakers recorded under different setups, are described by audiovisual features and evaluated through the time-efficient Uncertainty Sampling query strategy (UncS). Both Support Vector Machines (SVMs) and Random Forest (RF) algorithms were tuned over a small subset of the initial training data and then applied iteratively to mine the most informative instances from a corresponding pool of unlabeled instances. Useful conclusions are drawn concerning the values of the selected parameters, allowing future optimization attempts to search more restricted regions, while remarkable improvement rates were obtained using an ideal annotator.
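The pipeline the abstract describes — tune a classifier on a small labelled seed set, then repeatedly query the least confident instances from an unlabelled pool — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: scikit-learn's `GridSearchCV` stands in for the Hyperopt TPE optimizer used in the paper, synthetic data stands in for the audiovisual speaker features, and the "ideal annotator" is simulated by revealing the held-out ground-truth labels of queried instances.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for audiovisual features of 8 speakers.
X, y = make_classification(n_samples=1200, n_features=20, n_informative=15,
                           n_classes=8, random_state=0)
X_seed, X_pool, y_seed, y_pool = train_test_split(
    X, y, train_size=100, stratify=y, random_state=0)

# Step 1: tune the learner on the small labelled seed set
# (grid search here stands in for Hyperopt's TPE optimizer).
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"n_estimators": [50, 100], "max_depth": [None, 10]},
                      cv=3).fit(X_seed, y_seed)
model = search.best_estimator_

# Step 2: pool-based AL with Uncertainty Sampling (least confident first).
X_lab, y_lab = X_seed.copy(), y_seed.copy()
pool_idx = np.arange(len(X_pool))
for _ in range(10):                                  # 10 query rounds
    proba = model.predict_proba(X_pool[pool_idx])
    # Pick the 5 instances whose top class probability is lowest.
    query = pool_idx[np.argsort(proba.max(axis=1))[:5]]
    X_lab = np.vstack([X_lab, X_pool[query]])        # "ideal annotator":
    y_lab = np.concatenate([y_lab, y_pool[query]])   # true labels revealed
    pool_idx = np.setdiff1d(pool_idx, query)
    model.fit(X_lab, y_lab)                          # retrain on enlarged set
```

In practice the paper uses the libact package for the AL loop; the manual loop above only makes the mechanics of the UncS query strategy explicit.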


Keywords: Active Learning · Optimized learner · Speaker Recognition · Audiovisual features · Support Vector Machines · Hyperopt package tool



Stamatis Karlos and Konstantinos Kaleris gratefully acknowledge the support of their work by both the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI).



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Mathematics, University of Patras, Rio, Achaia, Greece
  2. Department of Electrical and Computer Engineering, University of Patras, Rio, Achaia, Greece
