Assistive Listening System Using a Human-Like Auditory Processing Algorithm

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 298)

Abstract

Enhancing the quality of hearing perception in noisy environments plays a significant role in improving the quality of life of elderly and hearing-impaired people. Accordingly, this study presents a human-like auditory processing (HAP) algorithm to enhance speech signals in low signal-to-noise ratio (SNR) and non-stationary noise environments. The proposed algorithm comprises two modules, the Cochlear Wavelet Transform (CWT) and AM-FM Demodulation (AFD), which mimic the human peripheral auditory processing system and the human cortical auditory processing system, respectively. The performance of the proposed HAP algorithm is evaluated in accordance with the ITU Perceptual Evaluation of Speech Quality (PESQ) standard. The results show that the proposed algorithm improves speech quality by 16.9% on average. In other words, the algorithm has significant potential for assistive listening in noisy environments.
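The AM-FM demodulation stage can be illustrated with a minimal sketch, not taken from the paper: a single cochlear-filter channel is split into a slowly varying amplitude envelope (AM) and a unit-magnitude carrier (FM) via the analytic signal, and the envelope is low-pass filtered to retain speech-rate modulations. The function name, the 64 Hz modulation cutoff, and the toy test signal are all illustrative assumptions, not parameters from the HAP algorithm.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def am_fm_demodulate(channel, fs, lp_cutoff=64.0):
    """Split one cochlear-channel signal into an AM envelope and an FM carrier
    using the analytic signal (Hilbert transform). Illustrative sketch only."""
    analytic = hilbert(channel)
    envelope = np.abs(analytic)                        # AM component
    carrier = analytic / np.maximum(envelope, 1e-12)   # unit-magnitude FM carrier
    # Low-pass the envelope to keep speech-rate modulations (assumed < ~64 Hz)
    sos = butter(4, lp_cutoff / (fs / 2), output="sos")
    smoothed = sosfiltfilt(sos, envelope)
    return smoothed, carrier

fs = 16000
t = np.arange(fs) / fs
# Toy channel: 1 kHz carrier amplitude-modulated at 8 Hz
x = (1.0 + 0.5 * np.sin(2 * np.pi * 8 * t)) * np.cos(2 * np.pi * 1000 * t)
env, carrier = am_fm_demodulate(x, fs)
# Resynthesizing with the filtered envelope approximates modulation filtering
y = np.real(env * carrier)
```

In a full system of this kind, the demodulation would be applied per channel of the cochlear filterbank, the envelopes filtered in the modulation domain, and the channels recombined.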

Keywords

Auditory processing, wavelet transform, coherent demodulation, spectro-temporal modulation filter, non-stationary signal, cortical representation, assistive listening, speech enhancement



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan
  2. Department of Digital Multimedia Design, Tajen University, Pingtung County, Taiwan
