
A new Genetic Algorithm based fusion scheme in a monaural CASA system to improve speech performance

  • S. Shoba
  • R. Rajavel
Original Research

Abstract

This research work proposes a new Genetic Algorithm (GA) based fusion scheme that effectively fuses the Time–Frequency (T–F) binary masks of voiced and unvoiced speech. Perceptual cues such as the correlogram, cross-correlogram and pitch are commonly used to obtain the T–F binary mask of voiced speech, while recent work uses speech onset and offset analysis to segment unvoiced speech from the noisy speech mixture. Most systems that represent unvoiced speech through onset and offset analysis simply combine the unvoiced segments with the voiced segments to obtain a single T–F binary mask. Instead of combining segments in this way, the proposed system fuses the separate voiced and unvoiced T–F binary masks using a GA. Moreover, a new method is proposed to derive a T–F binary mask from the segments of unvoiced speech. The proposed GA based fusion scheme is evaluated in terms of both speech quality and intelligibility. Experimental results show that, compared with the noisy speech mixture, the proposed system improves speech quality by increasing the SNR by an average of 10.74 dB and reducing the noise residue by an average of 26.15%; compared with conventional speech segregation systems, it improves intelligibility by increasing CSII, NCM and STOI by average values of 0.22, 0.20 and 0.17, respectively.
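Since the abstract does not specify the GA's chromosome encoding or fitness function, the following Python sketch is only a minimal illustration of the general idea: a GA searches for fusion parameters (here, hypothetical per-mask weights `w_v`, `w_u` and a decision threshold) that combine the voiced and unvoiced T–F binary masks so as to maximize a toy SNR objective. All names, the encoding and the GA settings are assumptions for illustration, not the authors' actual scheme.

```python
import numpy as np

# Hypothetical sketch of GA-based mask fusion; the chromosome encoding,
# fitness function and GA settings are illustrative assumptions only.
rng = np.random.default_rng(0)

def fuse(mask_v, mask_u, w):
    """Fuse two T-F binary masks via a weighted vote (assumed encoding)."""
    w_v, w_u, th = w
    return (w_v * mask_v + w_u * mask_u) > th

def fitness(w, mask_v, mask_u, mixture_tf, clean_tf):
    """Toy objective: SNR of the masked mixture against the clean target."""
    est = mixture_tf * fuse(mask_v, mask_u, w)
    noise = est - clean_tf
    return 10 * np.log10(np.sum(clean_tf ** 2) / (np.sum(noise ** 2) + 1e-12))

def ga_fuse(mask_v, mask_u, mixture_tf, clean_tf,
            pop_size=30, generations=50, sigma=0.1):
    """Simple elitist GA over chromosomes (w_v, w_u, threshold) in [0, 1]^3."""
    pop = rng.uniform(0.0, 1.0, size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(w, mask_v, mask_u, mixture_tf, clean_tf)
                           for w in pop])
        # Selection: keep the fitter half of the population.
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: average random parent pairs; mutation: Gaussian noise.
        pairs = elite[rng.integers(0, len(elite), size=(pop_size - len(elite), 2))]
        kids = pairs.mean(axis=1) + sigma * rng.standard_normal((pop_size - len(elite), 3))
        pop = np.vstack([elite, np.clip(kids, 0.0, 1.0)])
    scores = np.array([fitness(w, mask_v, mask_u, mixture_tf, clean_tf)
                       for w in pop])
    return pop[np.argmax(scores)]  # best fusion parameters found
```

Given T–F representations of shape (channels, frames), `fuse(mask_v, mask_u, ga_fuse(...))` would then yield the final fused binary mask; the paper's actual fitness measure and encoding may well differ from this toy SNR objective.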

Keywords

Speech segregation · GA fusion scheme · T–F binary mask · Voiced mask · Unvoiced mask



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. SSN College of Engineering, Chennai, India
