Sentiment Classification of Short Texts

Movie Review Case Study
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10868)

Abstract

Over the past few years, sentiment analysis has been at the heart of social media research, driven by the huge volume of opinionated data available on the web and by its pervasive real-life and commercial applications. Sentiment classification of short texts such as movie reviews is challenging because the lack of contextual information often leads to surprising and unexpected results. Historically, this problem has been addressed with machine learning algorithms that rely on rule-based approaches or manually defined sparse features. In recent years, deep neural networks have gained considerable attention in sentiment analysis due to their ability to capture subtle semantic information from the input. These methods build dense, continuous feature vectors, which are difficult to model with conventional representations such as bag-of-words. In this paper, we conduct experiments comparing several machine learning algorithms, Support Vector Machine, Naïve Bayes, and Random Forest, against a deep learning model: a Convolutional Neural Network (CNN) trained on top of various pre-trained word vectors for movie review classification. We validate the above models on the IMDB movie review dataset; experimental results demonstrate that the task of sentiment analysis benefits more from the CNN than from the classical machine learning techniques.
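
For illustration, the sketch below is a minimal reconstruction of this kind of comparison, not the authors' exact pipeline. It assumes Python lists texts and labels holding the review strings and their 0/1 sentiment tags, plus scikit-learn and Keras/TensorFlow; the bag-of-words baselines (SVM, Naïve Bayes, Random Forest) are trained first, followed by a Kim-style CNN over word embeddings, where the commented weights argument marks the point at which pre-trained word2vec vectors would be plugged in.

  # Minimal sketch, assuming `texts` (list of review strings) and `labels`
  # (list of 0/1 sentiment tags) are already loaded from the IMDB data.
  import numpy as np
  from sklearn.model_selection import train_test_split
  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.svm import LinearSVC
  from sklearn.naive_bayes import MultinomialNB
  from sklearn.ensemble import RandomForestClassifier

  X_train, X_test, y_train, y_test = train_test_split(
      texts, labels, test_size=0.2, random_state=0)

  # Classical learners on sparse bag-of-words counts.
  vec = CountVectorizer(max_features=20000)
  Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)
  for clf in (LinearSVC(), MultinomialNB(), RandomForestClassifier(n_estimators=100)):
      clf.fit(Xtr, y_train)
      print(type(clf).__name__, clf.score(Xte, y_test))

  # CNN over word embeddings (Kim-2014 style): embed, convolve, max-pool, classify.
  from tensorflow.keras import layers, models
  from tensorflow.keras.preprocessing.text import Tokenizer
  from tensorflow.keras.preprocessing.sequence import pad_sequences

  tok = Tokenizer(num_words=20000)
  tok.fit_on_texts(X_train)
  seq_tr = pad_sequences(tok.texts_to_sequences(X_train), maxlen=400)
  seq_te = pad_sequences(tok.texts_to_sequences(X_test), maxlen=400)

  model = models.Sequential([
      # Pass weights=[embedding_matrix] here to initialise from pre-trained
      # word2vec vectors (embedding_matrix is a hypothetical lookup table
      # built from the downloaded vectors).
      layers.Embedding(input_dim=20000, output_dim=300),
      layers.Conv1D(filters=100, kernel_size=5, activation="relu"),
      layers.GlobalMaxPooling1D(),
      layers.Dropout(0.5),
      layers.Dense(1, activation="sigmoid"),
  ])
  model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
  model.fit(seq_tr, np.asarray(y_train), epochs=3, batch_size=64,
            validation_data=(seq_te, np.asarray(y_test)))

When building on pre-trained vectors, the usual design choice is whether the embedding layer is kept frozen or fine-tuned during training; either variant fits the sketch above by setting the layer's trainable flag.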

Keywords

Sentiment analysis · Machine learning · Convolutional Neural Network


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

University of Guelph, Guelph, Canada