Recognition of Emotions of Speech and Mood of Music: A Review

  • Gaurav Agarwal
  • Vikas Maheshkar
  • Sushila Maheshkar
  • Sachi Gupta
Conference paper
Part of the Lecture Notes on Data Engineering and Communications Technologies book series (LNDECT, volume 18)

Abstract

Several emotion recognition frameworks have been developed by various researchers for recognizing human emotions in spoken utterances. This paper reviews the recognition of emotions in speech and of mood in music in light of past research, and also surveys the different feature-extraction techniques and classifiers used for emotion recognition. The database for a speech emotion or music mood recognition framework consists of speech and music samples, and the features derived from these samples include linear prediction cepstral coefficients (LPCC), energy, pitch and Mel-frequency cepstral coefficients (MFCC). Various distinct wavelet structures can be used to extract the feature vectors. Classifiers are then used to discriminate emotions such as anger, happiness, sadness, surprise, fear and the neutral state. The extracted features are one of the basic parameters for analysing a classifier's performance. Results obtained from the execution, and the limitations, of speech emotion and music mood recognition frameworks under various techniques are discussed here. Preprocessing, feature extraction and recognition are the three basic steps in most models used for speech recognition. For speech emotion and music mood recognition systems, researchers typically adopt one of three distinct methodologies: the knowledge-based method, the acoustic-phonetic method or the pattern recognition method. Techniques such as principal component analysis (PCA) and MFCCs for the recognition of emotions in speech are also described. Parameters such as entropy, zero crossing rate, spectral centroid and spectral roll-off are also discussed for feature extraction and the recognition of emotion and mood in speech and music, respectively.
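The spectral parameters named above (zero crossing rate, spectral centroid, spectral roll-off) have standard definitions that can be computed directly from a signal frame. The following is a minimal sketch, assuming a synthetic 440 Hz test tone rather than any dataset used in the reviewed work; the function names are illustrative, not taken from the paper.

```python
import numpy as np

def zero_crossing_rate(frame):
    # Fraction of adjacent sample pairs whose signs differ.
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def spectral_centroid(frame, sr):
    # Magnitude-weighted mean frequency of the spectrum.
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

def spectral_rolloff(frame, sr, pct=0.85):
    # Frequency below which `pct` of the spectral magnitude lies.
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    cum = np.cumsum(mag)
    idx = np.searchsorted(cum, pct * cum[-1])
    return freqs[idx]

sr = 16000                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of samples
tone = np.sin(2 * np.pi * 440 * t)  # pure 440 Hz tone

print(zero_crossing_rate(tone))     # ~ 2 * 440 / sr
print(spectral_centroid(tone, sr))  # ~ 440 Hz
```

For a pure tone, all three features collapse onto the tone's frequency (or, for the zero crossing rate, twice the frequency normalized by the sample rate); for real speech or music frames they spread out and become discriminative features for a classifier.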

Keywords

Speech emotion · Music mood · Classifier · Feature extraction and selection

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Gaurav Agarwal (1)
  • Vikas Maheshkar (2)
  • Sushila Maheshkar (1)
  • Sachi Gupta (3)
  1. Department of Computer Science and Engineering, Indian Institute of Technology (ISM), Dhanbad, India
  2. Division of Information Technology, Netaji Subhas Institute of Technology, Dwarka, India
  3. Department of Computer Science and Engineering, Raj Kumar Goel Institute of Technology, Ghaziabad, India