Estimation of Emotion Type and Intensity in Japanese Tweets Using Multi-task Deep Learning

  • Kazuki Sato
  • Tomonobu Ozaki
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 927)

Abstract

Accurate estimation of emotions in SNS posts plays an essential role in a wide variety of real-world applications, such as intelligent dialogue systems and review analysis for recommendation. In this paper, we focus on developing accurate models for estimating the types of emotions and their intensities in Japanese tweets by using multi-task deep learning. More concretely, three deep learning models for estimating emotion intensities are extended so that they can predict emotion types and their intensities simultaneously. The effectiveness of the developed models is confirmed through experiments on a database of Japanese tweets annotated, via best-worst scaling, with intensity scores for four types of emotions.
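The abstract does not detail the underlying architectures, so the following is only a rough illustration of the multi-task setup it describes: a shared text encoder feeding two output heads, one classifying the emotion type and one regressing its intensity. The class name MultiTaskEmotionNet, the BiLSTM encoder, and all hyperparameters below are illustrative assumptions in PyTorch, not the models evaluated in the paper.

# Hypothetical multi-task sketch: shared BiLSTM encoder, two task heads.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskEmotionNet(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_emotions=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Task 1: classify which of the four emotion types the tweet expresses.
        self.type_head = nn.Linear(2 * hidden_dim, num_emotions)
        # Task 2: regress a single intensity score for the tweet.
        self.intensity_head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        emb = self.embed(token_ids)               # (batch, seq_len, emb_dim)
        _, (h, _) = self.encoder(emb)             # h: (2, batch, hidden_dim)
        feat = torch.cat([h[-2], h[-1]], dim=-1)  # shared representation
        return self.type_head(feat), self.intensity_head(feat).squeeze(-1)


def multitask_loss(type_logits, intensity_pred, type_labels, intensity_labels,
                   intensity_weight=1.0):
    # Joint objective: cross-entropy for the type, MSE for the intensity.
    return (F.cross_entropy(type_logits, type_labels)
            + intensity_weight * F.mse_loss(intensity_pred, intensity_labels))


if __name__ == "__main__":
    model = MultiTaskEmotionNet(vocab_size=5000)
    tokens = torch.randint(1, 5000, (8, 20))      # 8 dummy tweets, 20 tokens each
    type_logits, intensity = model(tokens)
    loss = multitask_loss(type_logits, intensity,
                          torch.randint(0, 4, (8,)), torch.rand(8))
    loss.backward()
    print(type_logits.shape, intensity.shape, float(loss))

Training both heads against a single shared encoder is what makes the setup multi-task; the relative weighting of the two losses (intensity_weight above) is one of the assumed knobs such a model would expose.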

Notes

Acknowledgements

This work was partially supported by JSPS KAKENHI Grant Number 17K00315.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Graduate School of Integrated Basic Sciences, Nihon University, Tokyo, Japan
  2. College of Humanities and Sciences, Nihon University, Tokyo, Japan
