Speech Stuttering Detection and Removal Using Deep Neural Networks

  • Conference paper
  • In: Proceedings of the 11th International Conference on Computer Engineering and Networks

Abstract

More than 70 million people worldwide suffer from stuttering, which can undermine their confidence in public speaking. Many turn to therapy, but therapy is often only a temporary solution: once the sessions end, the problem may return. This work applies state-of-the-art machine learning algorithms, which have improved considerably in recent years, to the detection and removal of stuttering from speech. We use the UCLASS archive, which provides stuttered-speech recordings in .wav format with time-aligned transcriptions. We evaluated several algorithms and optimized our model through hyperparameter tuning to maximize its accuracy. The resulting algorithm was tested on randomly selected speech samples from the same dataset, ranging from light to heavy stuttering, and a significant reduction in the Word Error Rate (WER) was observed for most of the test cases.
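As a rough illustration of the evaluation pipeline described above, the sketch below loads a recording, extracts MFCC features, and scores a transcription against a reference using the standard edit-distance Word Error Rate. It is a minimal, hypothetical example rather than the authors' implementation: the use of librosa follows reference 12, but the file name, sampling rate, feature settings, and transcripts are placeholder assumptions.

    # Hypothetical sketch, not the paper's implementation.
    # Assumes librosa (ref. 12) for audio loading and MFCC extraction;
    # the file path, sampling rate, and transcripts below are placeholders.
    import librosa
    import numpy as np

    def extract_mfcc(wav_path, n_mfcc=13):
        # Load a UCLASS-style .wav file and return frame-level MFCC features.
        signal, sr = librosa.load(wav_path, sr=16000)
        return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)

    def word_error_rate(reference, hypothesis):
        # Standard WER: word-level edit distance divided by reference length.
        ref, hyp = reference.split(), hypothesis.split()
        d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
        d[:, 0] = np.arange(len(ref) + 1)
        d[0, :] = np.arange(len(hyp) + 1)
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i, j] = min(d[i - 1, j] + 1,         # deletion
                              d[i, j - 1] + 1,         # insertion
                              d[i - 1, j - 1] + cost)  # substitution
        return d[len(ref), len(hyp)] / max(len(ref), 1)

    # Example with placeholder data: compare WER before and after disfluency removal.
    # features = extract_mfcc("uclass_sample.wav")
    # print(word_error_rate("the cat sat on the mat", "the c- cat sat on on the mat"))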


References

  1. Chee, L.S., Ai, O.C., Yaacob, S.: Overview of automatic stuttering recognition system. In: Proceedings of the International Conference on Man-Machine Systems, pp. 1–6, Batu Ferringhi, Penang, Malaysia, October (2009)

  2. Hinton, G., et al.: Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Sig. Process. Mag. 29(6), 82–97 (2012)

  3. Kesarkar, M.P.: Feature extraction for speech recognition. Electronic Systems, EE Department, IIT Bombay (2003)

  4. Szczurowska, I., Kuniszyk-Jóźkowiak, W., Smołka, E.: The application of Kohonen and multilayer perceptron networks in the speech non-fluency analysis. Arch. Acoust. 31(4(S)), 205–210 (2014)

  5. Fabbri-Destro, M., Rizzolatti, G.: Mirror neurons and mirror systems in monkeys and humans. Physiology 23(3), 171–179 (2008)

  6. Sanju, H.K., Choudhury, M., Kumar, V.: Effect of stuttering intervention on depression, stress and anxiety among individuals with stuttering: case study. J. Speech Pathol. Ther. 3(1), 132 (2018)

  7. Howell, P., Davis, S., Bartrip, J.: The University College London Archive of Stuttered Speech (UCLASS). J. Speech Lang. Hear. Res. 52, 556–569 (2009)

  8. MacWhinney, B.: The CHILDES Project Part 1: The CHAT Transcription Format (2009)

  9. Howell, P., Davis, S., Bartrip, J., Wormald, L.: Effectiveness of frequency shifted feedback at reducing disfluency for linguistically easy, and difficult, sections of speech (original audio recordings included). Stammering Res.: On-line J. Publ. Br. Stammering Assoc. 1(3), 309 (2004)

  10. Zhang, Z.: Improved Adam optimizer for deep neural networks. In: 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS), pp. 1–2. IEEE (2018)

  11. Gliozzo, A., et al.: Building Cognitive Applications with IBM Watson Services: Volume 1 Getting Started. IBM Redbooks, Armonk (2017)

  12. McFee, B., et al.: librosa: audio and music signal analysis in Python. In: Proceedings of the 14th Python in Science Conference, vol. 8 (2015)

  13. Schröder, M.: Expressive speech synthesis: past, present, and possible futures. In: Tao, J., Tan, T. (eds.) Affective Information Processing, pp. 111–126. Springer, London (2009). https://doi.org/10.1007/978-1-84800-306-4_7

Acknowledgement

The authors express their sincere gratitude to the Vellore Institute of Technology, Vellore, for the encouragement, support, and resources provided to complete this project. We also thank the University College London Archive of Stuttered Speech (UCLASS) and the volunteers who contributed the data, as well as the Russian Foundation for Basic Research (project 19-57-45008-IND_a) for supporting the Russian researcher and the Department of Science and Technology (DST) (INTRUSRFBR382) for supporting the Indian researcher.

Author information

Corresponding author

Correspondence to Ruban Nersisson.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Rajput, S., Nersisson, R., Raj, A.N.J., Mary Mekala, A., Frolova, O., Lyakso, E. (2022). Speech Stuttering Detection and Removal Using Deep Neural Networks. In: Liu, Q., Liu, X., Chen, B., Zhang, Y., Peng, J. (eds) Proceedings of the 11th International Conference on Computer Engineering and Networks. Lecture Notes in Electrical Engineering, vol 808. Springer, Singapore. https://doi.org/10.1007/978-981-16-6554-7_50
