Abstract
This paper presents the performance of a speech recognition system evaluated on children with normal hearing and children with hearing impairment. Although the nasal and oral cavities of hearing-impaired children are intact, they cannot produce intelligible sounds because they cannot hear: the ability to understand language and to produce speech is coordinated by the brain, so a person with damage to the ear, or to the brain from an accident, stroke, or birth defect, may have difficulty producing speech. Based on the degree of hearing loss, such persons are classified as profoundly deaf or hard of hearing. Early detection of deafness enables the hearing impaired to learn to produce sounds through speech therapy; when deafness is detected late, it is difficult to make their speech intelligible. It is therefore necessary to develop a system for recognizing their speech, especially in the native language. In this paper, a system is developed for the Tamil language using Mel-frequency cepstral coefficient (MFCC) feature extraction at the front end and the Hidden Markov Model Toolkit (HTK) at the back end. The system is evaluated by comparing the speech of normal speakers with that of the hearing impaired: recognition accuracy is 92.4% for hearing-impaired speech and 98.4% for normal speech. Although unfamiliar listeners find hearing-impaired speech difficult to understand, this system can be used by others to recognize it.
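The front end described above computes MFCC features from the speech waveform. As an illustrative sketch only (not the authors' implementation, which uses HTK), the standard MFCC pipeline of pre-emphasis, framing, windowing, power spectrum, mel filterbank, log compression, and DCT could be written in Python/NumPy roughly as follows; the 16 kHz sampling rate, 25 ms frames with 10 ms hop, 26 filters, and 13 coefficients are common defaults assumed here, not values taken from the paper.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
         n_filters=26, n_ceps=13):
    # Pre-emphasis boosts the high frequencies attenuated in voiced speech
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Slice into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emphasized[idx] * np.hamming(frame_len)
    # Per-frame power spectrum
    power = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft
    # Triangular filterbank with centers equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filterbank energies, then DCT to decorrelate -> cepstral coefficients
    feats = np.log(power @ fbank.T + 1e-10)
    return dct(feats, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example: one second of synthetic 440 Hz audio at 16 kHz
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
coeffs = mfcc(sig)   # one 13-dimensional feature vector per 10 ms frame
```

These per-frame vectors are what the back-end HMMs model as observation sequences during training and recognition.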
Open Access: This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0) which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Author information
Authors and Affiliations
Corresponding author
Additional information
*Department of ECE, Trichy Engineering College, Trichy, Tamil Nadu, India.
**Former HOD, Department of ECE, College of Engineering, Anna University, Chennai, Tamil Nadu, India.
Department of ECE, Saranathan College of Engineering, Trichy, Tamil Nadu, India.
lakshmi.jeya67@yahoo.com, profvkmurthi@yahoo.co.in, revathidhanabal@rediffmail.com
* Corresponding author.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Jeyalakshmi, C., Krishnamurthi, V. & Revathi, A. Development of speech recognition system in native language for hearing impaired. J Engin Res 2, 6 (2014). https://doi.org/10.7603/s40632-014-0006-z