
Recognition of Emotion Intensity Basing on Neutral Speech Model

  • Conference paper
Man-Machine Interactions 3

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 242))

Abstract

Research in emotional speech recognition generally focuses on the analysis of a set of primary emotions. However, it is clear that spontaneous speech, which is more intricate than acted-out utterances, carries information about emotional complexity and degree of intensity. This paper draws on the theory of Robert Plutchik, who proposed the existence of eight primary emotions; all other emotional states are derivatives, occurring as combinations, mixtures, or compounds of the primary emotions. A Polish spontaneous-speech database containing manually created confidence labels was used as the training and testing set. Classification results for four primary emotions (anger, fear, joy, sadness) and their intensities are presented. The level of intensity is determined based on the similarity of a particular emotion to neutral speech. The studies were conducted using prosodic features and perceptual coefficients. The results show that the proposed measure is effective in recognizing the intensity of the predicted emotion.
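The paper itself provides no code, but the core idea of the abstract (grading intensity by how far an utterance's features lie from a model of neutral speech) can be sketched roughly as follows. This is a minimal illustration under assumed choices, not the authors' method: the single-Gaussian neutral model, the Mahalanobis distance, the feature dimensionality, and the threshold values are all assumptions introduced here for demonstration.

```python
import numpy as np

def fit_neutral_model(neutral_features):
    """Fit a single Gaussian (mean + inverse covariance) to neutral-speech
    feature vectors (rows = utterances, columns = prosodic/perceptual features).
    A single Gaussian is an assumption made for this sketch."""
    mean = neutral_features.mean(axis=0)
    cov = np.cov(neutral_features, rowvar=False)
    # Small ridge term keeps the covariance invertible.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mean, cov_inv

def distance_to_neutral(x, mean, cov_inv):
    """Mahalanobis distance of one utterance's feature vector from neutral."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def intensity_level(dist, thresholds=(1.5, 3.0)):
    """Map distance from the neutral model to a coarse intensity level.
    Threshold values are illustrative, not taken from the paper."""
    if dist < thresholds[0]:
        return "low"
    if dist < thresholds[1]:
        return "medium"
    return "high"

# Synthetic demo: neutral utterances cluster near the origin.
rng = np.random.default_rng(0)
neutral = rng.normal(0.0, 1.0, size=(200, 4))
mean, cov_inv = fit_neutral_model(neutral)

mild = distance_to_neutral(np.array([0.5, 0.5, 0.5, 0.5]), mean, cov_inv)
strong = distance_to_neutral(np.array([4.0, 4.0, 4.0, 4.0]), mean, cov_inv)
print(intensity_level(mild), intensity_level(strong))
```

In a real pipeline the feature vectors would be the prosodic features and perceptual coefficients mentioned in the abstract, extracted per utterance, and the thresholds would be tuned on labeled data rather than fixed by hand.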




References

  1. Attabi, Y., Dumouchel, P.: Emotion recognition from speech: WOC-NN and class-interaction. In: Proceedings of the 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA 2012), pp. 126–131 (2012)

  2. Bojanić, M., Crnojević, V., Delić, V.D.: Application of neural networks in emotional speech recognition. In: Proceedings of the 11th Symposium on Neural Network Applications in Electrical Engineering (NEUREL 2012), pp. 223–226 (2012)

  3. Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W.F., Weiss, B.: A database of German emotional speech. In: Proceedings of the 9th European Conference on Speech Communication and Technology (INTERSPEECH 2005), pp. 1517–1520 (2005)

  4. Christina, I.J., Milton, A.: Analysis of all-pole model to recognize emotion from speech signal. In: Proceedings of the International Conference on Computing, Electronics and Electrical Technologies (ICCEET 2012), pp. 723–728 (2012)

  5. Deng, J., Han, W., Schuller, B.: Confidence measures for speech emotion recognition: A start. In: Proceedings of the Speech Communication Symposium, 10th ITG Symposium, pp. 1–4 (2012)

  6. Fewzee, P., Karray, F.: Dimensionality reduction for emotional speech recognition. In: Proceedings of the ASE/IEEE International Conference on Social Computing (SocialCom) and ASE/IEEE International Conference on Privacy, Security, Risk and Trust (PASSAT), pp. 532–537. IEEE Computer Society (2012)

  7. Garay, N., Cearreta, I., López, J.M., Fajardo, I.: Assistive technology and affective mediation. Assistive Technol. 2(1), 55–83 (2006)

  8. Gunes, H., Piccardi, M.: Bi-modal emotion recognition from expressive face and body gestures. Journal of Network and Computer Applications 30(4), 1334–1345 (2007)

  9. Han, W., Zhang, Z., Deng, J., Wöllmer, M., Weninger, F., Schuller, B.: Towards distributed recognition of emotion from speech. In: Proceedings of the 5th International Symposium on Communications, Control and Signal Processing (ISCCSP 2012), pp. 1–4 (2012)

  10. Han, Z., Lung, S., Wang, J.: A study on speech emotion recognition based on CCBC and neural network. In: Proceedings of the International Conference on Computer Science and Electronics Engineering (ICCSEE 2012), vol. 2, pp. 144–147 (2012)

  11. Ivanov, A.V., Riccardi, G.: Kolmogorov-Smirnov test for feature selection in emotion recognition from speech. In: Proceedings of the 38th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012), pp. 5125–5128. IEEE Computer Society (2012)

  12. Metallinou, A., Katsamanis, A., Narayanan, S.: A hierarchical framework for modeling multimodality and emotional evolution in affective dialogs. In: Proceedings of the 38th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012), pp. 2401–2404. IEEE Computer Society (2012)

  13. Mower, E., Metallinou, A., Lee, C.C., Kazemzadeh, A., Busso, C., Lee, S., Narayanan, S.: Interpreting ambiguous emotional expressions. In: Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), pp. 1–8 (2009)

  14. Ntalampiras, S., Fakotakis, N.: Modeling the temporal evolution of acoustic parameters for speech emotion recognition. IEEE Transactions on Affective Computing 3(1), 116–125 (2012)

  15. Plutchik, R.: The nature of emotions. American Scientist 89(4), 344 (2001)

  16. Ślot, K.: Rozpoznawanie biometryczne. Nowe metody ilościowej reprezentacji obiektów [Biometric Recognition: New Methods of Quantitative Object Representation]. Wydawnictwa Komunikacji i Łączności (2010) (in Polish)

  17. Vasuki, P., Aravindan, C.: Improving emotion recognition from speech using sensor fusion techniques. In: Proceedings of the IEEE Region 10 Conference (TENCON 2012), pp. 1–6 (2012)

  18. Yang, N., Muraleedharan, R., Kohl, J., Demirkoly, I., Heinzelman, W., Sturge-Apple, M.: Speech-based emotion classification using multiclass SVM with hybrid kernel and thresholding fusion. In: Proceedings of the 4th IEEE Workshop on Spoken Language Technology (SLT 2012), pp. 455–460 (2012)

  19. Yun, S., Yoo, C.D.: Loss-scaled large-margin Gaussian mixture models for speech emotion classification. IEEE Transactions on Audio, Speech and Language Processing 20(2), 585–598 (2012)

  20. Zbancioc, M.D., Feraru, S.M.: Emotion recognition of the SROL Romanian database using fuzzy kNN algorithm. In: Proceedings of the 10th International Symposium on Electronics and Telecommunications (ISETC 2012), pp. 347–350 (2012)


Author information

Correspondence to Dorota Kamińska.


Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Kamińska, D., Sapiński, T., Pelikant, A. (2014). Recognition of Emotion Intensity Basing on Neutral Speech Model. In: Gruca, D., Czachórski, T., Kozielski, S. (eds) Man-Machine Interactions 3. Advances in Intelligent Systems and Computing, vol 242. Springer, Cham. https://doi.org/10.1007/978-3-319-02309-0_49


  • DOI: https://doi.org/10.1007/978-3-319-02309-0_49

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-02308-3

  • Online ISBN: 978-3-319-02309-0

  • eBook Packages: Engineering (R0)
