Cognitively Inspired Feature Extraction and Speech Recognition for Automated Hearing Loss Testing

Abstract

Hearing loss, a partial or total inability to hear, is one of the most commonly reported disabilities. A hearing test can be carried out by an audiologist to assess a patient’s auditory system, but the procedure requires an appointment, which can result in delays and practitioner fees. In addition, there are often challenges associated with the unavailability of equipment and qualified practitioners, particularly in remote areas. This paper presents a novel approach that automatically identifies hearing impairment using cognitively inspired feature extraction and speech recognition. The proposed system uses an adaptive filter bank with weighted Mel-frequency cepstral coefficients (MFCCs) for feature extraction. The adaptive filter bank is inspired by the principle of spectrum sensing in cognitive radio, which is aware of its environment and adapts to statistical variations in the input stimuli by learning from that environment. Comparative performance evaluation demonstrates that the automated hearing test achieves results comparable to the clinical ground truth established by expert audiologists’ tests: the overall absolute error of the proposed model is less than 4.9 dB for the pure tone audiometry test and 4.4 dB for the speech audiometry test, and the overall classification accuracy is 96.67% with a hidden Markov model (HMM). The proposed method potentially offers a second opinion to audiologists, and serves as a cost-effective pre-screening test to predict hearing loss at an early stage. In future work, the authors intend to explore advanced deep learning and optimization approaches to further enhance the automated testing prototype on imperfect datasets with real-world background noise.
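The paper's adaptive filter bank and MFCC weighting scheme are not reproduced here; as a rough illustration of the standard MFCC stages that the approach builds on (framing, windowing, power spectrum, triangular mel filterbank, log compression, DCT decorrelation), the following is a minimal NumPy sketch. All parameter values (frame length, hop, filter counts) are illustrative defaults, not the authors' settings, and the synthetic tone stands in for a recorded audiometry response.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale between 0 Hz and sr/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):           # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):          # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr, frame_len=400, hop=160, n_fft=512, n_filters=26, n_ceps=13):
    # Frame the signal, window each frame, take the power spectrum,
    # apply the mel filterbank, log-compress, then decorrelate with a DCT-II.
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    fb = mel_filterbank(n_filters, n_fft, sr)
    # DCT-II basis (unnormalized), computed once.
    k = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * k + 1) / (2 * n_filters)))
    feats = []
    for t in range(n_frames):
        frame = signal[t * hop: t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
        log_mel = np.log(np.maximum(fb @ power, 1e-10))  # floor avoids log(0)
        feats.append(basis @ log_mel)
    return np.array(feats)

# Demo on a synthetic 1 kHz tone (hypothetical stand-in for a recorded response).
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
F = mfcc(tone, sr)
print(F.shape)  # prints (98, 13): one 13-coefficient vector per 10 ms frame
```

In a full recognizer such as the one evaluated in the paper, these per-frame coefficient vectors (often augmented with delta features) would form the observation sequence scored by an HMM for each response word.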


Figures 1–6 appear in the published article.


Acknowledgments

The authors would like to acknowledge the support of Prof. Hidayat Ullah from Khyber Teaching Hospital (ENT department) in Pakistan, for his kind support with pure tone and speech audiometry. The authors would like to acknowledge audiologists, Kamran Mulk from Rehman Medical Institute and Muhammad Saeed from Khyber Teaching Hospital for highlighting pure tone and speech audiometry-related issues. The authors would like to acknowledge the support of Dr. Muhammad Arsalan Khan from Khyber Teaching Hospital for inviting participants (patients). Lastly, the authors would like to gratefully acknowledge the support of deepCI and Taibah Valley (Taibah University, Madinah, Saudi Arabia).

Funding

This research was supported by deepCI and Taibah Valley (Taibah University, Saudi Arabia).

Author information


Corresponding author

Correspondence to Ahsan Adeel.

Ethics declarations

This manuscript has not been published in whole or in part elsewhere, nor is it currently under consideration for publication in another journal. All authors have been personally and actively involved in substantive work leading to the manuscript and hold themselves jointly and individually responsible for its content.

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Nisar, S., Tariq, M., Adeel, A. et al. Cognitively Inspired Feature Extraction and Speech Recognition for Automated Hearing Loss Testing. Cogn Comput 11, 489–502 (2019). https://doi.org/10.1007/s12559-018-9607-4

Keywords

  • Hearing loss
  • Speech recognition
  • Machine learning
  • Automation
  • Cognitive radio