Emotion Recognition Using Support Vector Machine and Deep Neural Network
Emotion recognition from voice has recently attracted considerable interest in the field of human-machine communication. In this paper, we propose an emotion recognition system that combines three subsystems. The first and second subsystems use support vector machines (SVM) and deep neural networks (DNN), respectively, to classify the features directly. In the third subsystem, we use a DNN to extract segment-level features from raw data and show that they are effective for speech emotion recognition. The extracted segment-level features are emotion state probability distributions. We then construct utterance-level features from these segment-level probability distributions. Finally, the utterance-level features are fed into an SVM to identify the emotion of each utterance. The experimental results show that all three subsystems outperform the hidden Markov model (HMM) baseline, and the combined system achieves the best F-score.
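The third subsystem's aggregation step can be illustrated with a minimal sketch: given segment-level emotion state posteriors from the DNN, build a fixed-length utterance-level feature vector from per-class statistics. The particular statistics used here (max, min, mean, and the fraction of segments above a threshold) follow common practice for this kind of pipeline and are an assumption, not the paper's confirmed recipe; `theta` is likewise a hypothetical parameter.

```python
import numpy as np

def utterance_features(seg_probs, theta=0.2):
    """Aggregate segment-level emotion posteriors into an
    utterance-level feature vector.

    seg_probs : array-like of shape (n_segments, n_classes),
        each row a probability distribution over emotion states.
    theta : float, threshold for the "active segment" fraction
        (an illustrative choice, not from the paper).
    """
    seg_probs = np.asarray(seg_probs, dtype=float)
    return np.concatenate([
        seg_probs.max(axis=0),             # strongest posterior per class
        seg_probs.min(axis=0),             # weakest posterior per class
        seg_probs.mean(axis=0),            # average posterior per class
        (seg_probs > theta).mean(axis=0),  # fraction of segments above theta
    ])
```

The resulting vector has a fixed length of 4 × n_classes regardless of utterance duration, which is what allows a standard SVM to operate on variable-length utterances.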
Keywords: Emotion recognition · Deep neural networks · Support vector machine
This work was supported by the China NSFC projects (No. 61603252 and No. U1736202), the Shanghai Sailing Program No. 16YF1405300, and the Tencent-Shanghai Jiao Tong University joint project. Experiments have been carried out on the PI supercomputer at Shanghai Jiao Tong University. We would like to thank Heinrich Dinkel for his insightful comments on this paper.