Statistical feature evaluation for classification of stressed speech
Variations in speech production due to stress have an adverse effect on the performance of speech and speaker recognition algorithms. In this work, different speech features, such as Sinusoidal Frequency Features (SFF), Sinusoidal Amplitude Features (SAF), Cepstral Coefficients (CC) and Mel Frequency Cepstral Coefficients (MFCC), are evaluated to assess their relative effectiveness in representing stressed speech. Different statistical feature evaluation techniques, such as probability density characteristics, the F-ratio test, the Kolmogorov-Smirnov test (KS test) and a Vector Quantization (VQ) classifier, are used to assess the performance of the speech features. Four stress conditions are tested: Neutral, Compassionate, Anger and Happy. The stressed speech database used in this work consists of 600 stressed speech files recorded from 30 speakers. With the VQ classifier, SAF yields the highest recognition accuracy, followed by SFF, MFCC and CC. For the SFF, MFCC and CC features, the ranking by classification accuracy matches the ranking by F-ratio magnitude. SFF and MFCC show consistent relative performance across all three tests: F-ratio, KS test and VQ classifier.
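Two of the evaluation measures named above can be stated compactly. The F-ratio of a feature dimension is its between-class variance divided by its pooled within-class variance, and the KS statistic is the maximum distance between two empirical CDFs. The sketch below illustrates both on synthetic stand-in data (random draws standing in for one feature dimension under two stress classes); the function names and the toy distributions are illustrative assumptions, not the paper's implementation.

```python
import bisect
import random
import statistics

def f_ratio(class_features):
    # Between-class variance of the class means divided by the
    # average within-class variance (higher => better separation).
    class_means = [statistics.fmean(c) for c in class_features]
    grand_mean = statistics.fmean(x for c in class_features for x in c)
    between = statistics.fmean((m - grand_mean) ** 2 for m in class_means)
    within = statistics.fmean(statistics.pvariance(c) for c in class_features)
    return between / within

def ks_statistic(a, b):
    # Maximum absolute distance between the two empirical CDFs.
    a, b = sorted(a), sorted(b)
    d = 0.0
    for v in a + b:
        cdf_a = bisect.bisect_right(a, v) / len(a)
        cdf_b = bisect.bisect_right(b, v) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Synthetic stand-ins for one feature dimension under two stress classes.
random.seed(0)
neutral = [random.gauss(0.0, 1.0) for _ in range(500)]
anger = [random.gauss(1.5, 1.0) for _ in range(500)]

print(f_ratio([neutral, anger]))
print(ks_statistic(neutral, anger))
```

A feature whose F-ratio and KS statistic are both large for every class pair discriminates the stress conditions well, which is the sense in which the paper ranks SFF, SAF, CC and MFCC.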
Keywords: Feature evaluation · Probability density · Kolmogorov-Smirnov test