
Optimized Active Learning Strategy for Audiovisual Speaker Recognition

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11096)

Abstract

This work investigates the improvement in recognition accuracy obtained by adding optimization stages for tuning the parameters of an Active Learning (AL) classifier. Since large amounts of unlabeled data are typically available in Speaker Recognition (SR) tasks, the AL concept, which places a human annotator inside the learning loop to uncover information hidden in unlabeled data, is particularly suitable and does not demand much expertise from the human participant. Six datasets containing the utterances of 8 and 16 speakers under different recording setups are described by audiovisual features and evaluated through the time-efficient Uncertainty Sampling query strategy (UncS). Support Vector Machines (SVMs) and Random Forest (RF) were selected as base learners, tuned over a small subset of the initial training data, and then applied iteratively to mine the most informative instances from a corresponding pool of unlabeled instances. Useful conclusions are drawn concerning the values of the selected parameters, allowing future optimization attempts to be confined to more restricted regions, while remarkable improvement rates were obtained using an ideal annotator.
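
As a rough illustration of the pool-based Active Learning setting described above, the sketch below runs a least-confidence Uncertainty Sampling query loop with a Random Forest learner on synthetic data. It assumes scikit-learn and NumPy; the audiovisual feature extraction, the parameter-optimization stage, and the actual speaker datasets of the paper are not reproduced here, and every name and value in the snippet is an illustrative assumption rather than the authors' configuration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for audiovisual speaker features (8 hypothetical speakers).
    X, y = make_classification(n_samples=2000, n_features=40, n_informative=20,
                               n_classes=8, n_clusters_per_class=1, random_state=0)

    # Small labeled seed set; the remainder acts as the unlabeled pool
    # (pool labels are kept only to simulate an ideal annotator).
    X_lab, X_pool, y_lab, y_pool = train_test_split(
        X, y, train_size=0.05, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)

    for _ in range(50):                        # query budget
        clf.fit(X_lab, y_lab)
        proba = clf.predict_proba(X_pool)
        # Least-confidence uncertainty: query the pool instance whose most
        # probable class has the lowest predicted probability.
        idx = int(np.argmin(proba.max(axis=1)))
        X_lab = np.vstack([X_lab, X_pool[idx]])    # ideal annotator supplies the true label
        y_lab = np.append(y_lab, y_pool[idx])
        X_pool = np.delete(X_pool, idx, axis=0)
        y_pool = np.delete(y_pool, idx)

    print("Labeled set size after querying:", len(y_lab))

In the paper's setup the learner's hyperparameters would first be tuned on the small seed set before the querying loop starts; the same loop applies unchanged to an SVM with probability estimates enabled.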



Acknowledgement

Stamatis Karlos and Konstantinos Kaleris gratefully acknowledge the support of their work by both the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI).

Author information

Correspondence to Stamatis Karlos.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Karlos, S., Kaleris, K., Fazakis, N., Kanas, V.G., Kotsiantis, S. (2018). Optimized Active Learning Strategy for Audiovisual Speaker Recognition. In: Karpov, A., Jokisch, O., Potapova, R. (eds.) Speech and Computer. SPECOM 2018. Lecture Notes in Computer Science (LNAI), vol. 11096. Springer, Cham. https://doi.org/10.1007/978-3-319-99579-3_30


  • DOI: https://doi.org/10.1007/978-3-319-99579-3_30


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99578-6

  • Online ISBN: 978-3-319-99579-3

  • eBook Packages: Computer Science; Computer Science (R0)
