A Deep Neural Architecture for Sentence-Level Sentiment Classification in Twitter Social Networking

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 781)

Abstract

This paper introduces a novel deep learning framework that incorporates a lexicon-based approach for sentence-level prediction of sentiment label distributions. We propose to first apply semantic rules and then use a Deep Convolutional Neural Network (DeepCNN) to build character-level embeddings that enrich the word-level embeddings. A Bidirectional Long Short-Term Memory network (Bi-LSTM) then produces a sentence-wide feature representation from these word-level embeddings. We evaluate our approach on three Twitter sentiment classification datasets. Experimental results show that our model improves the classification accuracy of sentence-level sentiment analysis on Twitter.
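To make the described pipeline concrete, the following is a minimal sketch, assuming PyTorch. The dimensions, vocabulary sizes, filter widths, and layer counts are illustrative rather than taken from the paper, the semantic-rule preprocessing is assumed to happen upstream on the raw tweets, and a single convolution layer stands in for the paper's deeper character-level stack.

```python
# Illustrative sketch only (assumed PyTorch); hyperparameters are hypothetical,
# not the paper's. Char-CNN features are concatenated with word embeddings and
# fed to a Bi-LSTM whose final states are used for sentiment classification.
import torch
import torch.nn as nn


class CharCNN(nn.Module):
    """Character-level convolution that yields one fixed-size vector per word."""

    def __init__(self, n_chars, char_dim=30, n_filters=50, kernel_size=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size, padding=1)

    def forward(self, char_ids):                            # (batch, seq_len, word_len)
        b, s, w = char_ids.shape
        x = self.char_emb(char_ids.view(b * s, w))          # (b*s, word_len, char_dim)
        x = self.conv(x.transpose(1, 2))                     # (b*s, n_filters, word_len)
        x = torch.max(x, dim=2).values                       # max-over-time pooling
        return x.view(b, s, -1)                              # (batch, seq_len, n_filters)


class CharCNNBiLSTM(nn.Module):
    """Char-CNN features + word embeddings -> Bi-LSTM -> sentiment logits."""

    def __init__(self, n_words, n_chars, n_classes=3,
                 word_dim=300, char_filters=50, hidden=150):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim, padding_idx=0)
        self.char_cnn = CharCNN(n_chars, n_filters=char_filters)
        self.bilstm = nn.LSTM(word_dim + char_filters, hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, word_ids, char_ids):
        words = self.word_emb(word_ids)                      # (batch, seq_len, word_dim)
        chars = self.char_cnn(char_ids)                      # (batch, seq_len, char_filters)
        x = torch.cat([words, chars], dim=-1)                # enriched word-level embeddings
        _, (h, _) = self.bilstm(x)                           # h: (2, batch, hidden)
        sent = torch.cat([h[0], h[1]], dim=-1)               # final forward + backward states
        return self.classifier(sent)                         # sentence-level sentiment logits
```

In this reading, the character-level CNN supplies sub-word information (useful for the misspellings and elongations common in tweets) that is concatenated with pretrained word vectors before the Bi-LSTM builds the sentence representation.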

Keywords

Twitter · Sentiment classification · Deep learning

Notes

Acknowledgment

This work was supported by JSPS KAKENHI Grant Number JP15K16048.


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Japan Advanced Institute of Science and Technology, Ishikawa, Japan
