Literature Review

  • Arindam Chaudhuri
Chapter
Part of the SpringerBriefs in Computer Science book series (BRIEFSCOMPUTER)

Abstract

Sentiment analysis has been a popular research topic in social media data processing (Dashtipour et al. in Cogn Comput 8(4):757–771, 2016, [1]). The majority of sentiment analysis research targets the English language, but attention to the multilingual aspect is gradually increasing.

References

  1. Dashtipour, K., Poria, S., Hussain, A., Cambria, E., Hawalah, A.Y., Gelbukh, A., Zhou, Q.: Multilingual sentiment analysis: state of the art and independent comparison of techniques. Cogn. Comput. 8(4), 757–771 (2016)
  2. Cambria, E.: Affective computing and sentiment analysis. IEEE Intell. Syst. 31(2), 102–107 (2016)
  3. Peng, H., Ma, Y., Li, Y., Cambria, E.: Learning multi-grained aspect target sequence for Chinese sentiment analysis. Knowl. Based Syst. 148, 167–176 (2018)
  4. Bandhakavi, A., Wiratunga, N., Massie, S., Deepak, P.: Lexicon generation for emotion analysis of text. IEEE Intell. Syst. 32(1), 102–108 (2017)
  5. Dragoni, M., Poria, S., Cambria, E.: OntoSenticNet: a common sense ontology for sentiment analysis. IEEE Intell. Syst. 33(3), 77–85 (2018)
  6. Cambria, E., Poria, S., Hazarika, D., Kwok, K.: SenticNet 5: discovering conceptual primitives for sentiment analysis by means of context embeddings. In: Proceedings of the 32nd Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, pp. 1795–1802 (2018)
  7. Oneto, L., Bisio, F., Cambria, E., Anguita, D.: Statistical learning theory and ELM for big social data analysis. IEEE Comput. Intell. Mag. 11(3), 45–55 (2016)
  8. Hussain, A., Cambria, E.: Semi-supervised learning for big social data analysis. Neurocomputing 275(C), 1662–1673 (2018)
  9. Li, Y., Pan, Q., Yang, T., Wang, S., Tang, J., Cambria, E.: Learning word representations for sentiment analysis. Cogn. Comput. 9(6), 843–851 (2017)
  10. Young, T., Hazarika, D., Poria, S., Cambria, E.: Recent trends in deep learning based natural language processing. arXiv:1708.02709 (2017)
  11. Li, Y., Pan, Q., Wang, S., Yang, T., Cambria, E.: A generative model for category text generation. Inf. Sci. 450, 301–315 (2018)
  12. Cambria, E., Poria, S., Gelbukh, A., Thelwall, M.: Sentiment analysis is a big suitcase. IEEE Intell. Syst. 32(6), 74–80 (2017)
  13. Xia, Y., Cambria, E., Hussain, A., Zhao, H.: Word polarity disambiguation using Bayesian model and opinion-level features. Cogn. Comput. 7(3), 369–380 (2015)
  14. Chaturvedi, I., Ragusa, E., Gastaldo, P., Zunino, R., Cambria, E.: Bayesian network based extreme learning machine for subjectivity detection. J. Franklin Inst. 355(4), 1780–1797 (2018)
  15. Majumder, N., Poria, S., Gelbukh, A., Cambria, E.: Deep learning-based document modeling for personality detection from text. IEEE Intell. Syst. 32(2), 74–79 (2017)
  16. Satapathy, R., Guerreiro, C., Chaturvedi, I., Cambria, E.: Phonetic-based microtext normalization for Twitter sentiment analysis. In: Proceedings of the International Conference on Data Management, pp. 407–413 (2017)
  17. Rajagopal, D., Cambria, E., Olsher, D., Kwok, K.: A graph-based approach to common sense concept extraction and semantic similarity detection. In: Proceedings of the World Wide Web Conference, pp. 565–570 (2013)
  18. Zhong, X., Sun, A., Cambria, E.: Time expression analysis and recognition using syntactic token types and general heuristic rules. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 420–429 (2017)
  19. Ma, Y., Peng, H., Cambria, E.: Targeted aspect-based sentiment analysis via embedding common sense knowledge into an attentive LSTM. In: Proceedings of the 32nd Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, pp. 5876–5883 (2018)
  20. Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., Kuksa, P.: Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12, 2493–2537 (2011)
  21. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv:1301.3781 (2013)
  22. Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Proceedings of the Advances in Neural Information Processing Systems (2013)
  23. Mikolov, T., Yih, W.T., Zweig, G.: Linguistic regularities in continuous space word representations. In: Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 746–751 (2013)
  24. Kim, Y.: Convolutional neural networks for sentence classification. arXiv:1408.5882 (2014)
  25. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Proceedings of the Advances in Neural Information Processing Systems (2012)
  26. You, Q., Luo, J., Jin, H., Yang, J.: Joint visual textual sentiment analysis with deep neural networks. In: Proceedings of the 23rd ACM Conference on Multimedia, pp. 1071–1074 (2015)
  27. Borth, D., Ji, R., Chen, T., Breuel, T., Chang, S.F.: Large-scale visual sentiment ontology and detectors using adjective noun pairs. In: Proceedings of the 21st ACM International Conference on Multimedia, pp. 223–232 (2013)
  28. Chen, T., Borth, D., Darrell, T., Chang, S.F.: DeepSentiBank: visual sentiment concept classification with deep convolutional neural networks. arXiv:1410.8586 (2014)
  29. Xu, C., Cetintas, S., Lee, K.C., Li, L.J.: Visual sentiment prediction with deep convolutional neural networks. arXiv:1411.5731 (2014)
  30. Wang, M., Cao, D., Li, L., Li, S., Ji, R.: Microblog sentiment analysis based on cross-media bag-of-words model. In: Proceedings of the International Conference on Internet Multimedia Computing and Service, pp. 76–80 (2014)
  31. Cao, D., Ji, R., Lin, D., Li, S.: Visual sentiment topic model-based microblog image sentiment analysis. Multimedia Tools Appl. 75(15), 8955–8968 (2016)
  32. Cao, D., Ji, R., Lin, D., Li, S.: A cross-media public sentiment analysis system for microblog. Multimedia Syst. 22(4), 479–486 (2016)
  33. Yu, Y., Lin, H., Yu, Q., Meng, J., Zhao, Z., Li, Y., Zuo, L.: Modality classification for medical images using multiple deep convolutional neural networks. J. Comput. Inf. Syst. 11(15), 5403–5413 (2015)
  34. Wan, L., Zeiler, M., Zhang, S., Cun, Y.L., Fergus, R.: Regularization of neural networks using DropConnect. In: Proceedings of the 30th International Conference on Machine Learning, PMLR, vol. 28, issue 3, pp. 1058–1066 (2013)
  35. Chaudhuri, A., Ghosh, S.K.: Sentiment analysis of customer reviews using robust hierarchical bidirectional recurrent neural networks. In: Silhavy, R. et al. (eds.) Artificial Intelligence Perspectives in Intelligent Systems. Advances in Intelligent Systems and Computing, vol. 464, pp. 249–261. Springer (2016)
  36. Le, Q., Mikolov, T.: Distributed representations of sentences and documents. In: Proceedings of the 31st International Conference on Machine Learning, PMLR, vol. 32, issue 2, pp. 1188–1196 (2014)
  37. Siersdorfer, S., Minack, E., Deng, F., Hare, J.: Analysing and predicting sentiment of images on the social web. In: Proceedings of the 18th ACM International Conference on Multimedia, pp. 715–718 (2010)
  38. Tumasjan, A., Sprenger, T.O., Sandner, P.G., Welpe, I.M.: Predicting elections with Twitter: what 140 characters reveal about political sentiment. In: Proceedings of the Association for the Advancement of Artificial Intelligence Conference on Weblogs and Social Media, vol. 10, pp. 178–185 (2010)
  39. Yuan, J., Mcdonough, S., You, Q., Luo, J.: Sentribute: image sentiment analysis from a mid-level perspective. In: Proceedings of the 2nd ACM International Workshop on Issues of Sentiment Discovery and Opinion Mining, Article 10 (2013)
  40. You, Q., Cao, L., Jin, H., Luo, J.: Robust visual textual sentiment analysis: when attention meets tree structured recurrent neural networks. In: Proceedings of the 24th ACM Conference on Multimedia, pp. 1008–1017 (2016)
  41. You, Q., Luo, J., Jin, H., Yang, J.: Cross modality consistent regression for joint visual textual sentiment analysis of social media. In: Proceedings of the 9th International Conference on Web Search and Data Mining, pp. 13–22 (2016)
  42. De Silva, L.C., Miyasato, T., Nakatsu, R.: Facial emotion recognition using multi-modal information. In: Proceedings of the IEEE International Conference on Information, Communications and Signal Processing, vol. 1, pp. 397–401 (1997)
  43. Chen, L.S., Huang, T.S., Miyasato, T., Nakatsu, R.: Multimodal human emotion/expression recognition. In: Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 366–371 (1998)
  44. Ellis, J.G., Jou, B., Chang, S.F.: Why we watch the news: a dataset for exploring sentiment in broadcast video news. In: Proceedings of the ACM International Conference on Multimodal Interaction, pp. 104–111 (2014)
  45. Kessous, L., Castellano, G., Caridakis, G.: Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis. J. Multimodal User Interfaces 3(1–2), 33–48 (2010)
  46. Schuller, B.: Recognizing affect from linguistic information in 3D continuous space. IEEE Trans. Affect. Comput. 2(4), 192–205 (2011)
  47. Rozgic, V., Ananthakrishnan, S., Saleem, S., Kumar, R., Prasad, R.: Ensemble of SVM trees for multimodal emotion recognition. In: Proceedings of the IEEE Signal and Information Processing Association Annual Summit and Conference, pp. 1–4 (2012)
  48. Metallinou, A., Lee, S., Narayanan, S.: Audio-visual emotion recognition using Gaussian mixture models for face and voice. In: Proceedings of the IEEE 10th International Symposium on Multimedia, pp. 250–257 (2008)
  49. Poria, S., Chaturvedi, I., Cambria, E., Hussain, A.: Convolutional MKL based multimodal emotion recognition and sentiment analysis. In: Proceedings of the 16th IEEE International Conference on Data Mining, vol. 1, pp. 439–448 (2016)
  50. Wöllmer, M., Weninger, F., Knaup, T., Schuller, B., Sun, C., Sagae, K., Morency, L.P.: YouTube movie reviews: sentiment analysis in an audio-visual context. IEEE Intell. Syst. 28(3), 46–53 (2013)
  51. Wu, C.H., Liang, W.B.: Emotion recognition of affective speech based on multiple classifiers using acoustic-prosodic information and semantic labels. IEEE Trans. Affect. Comput. 2(1), 10–21 (2011)
  52. Zadeh, A., Chen, M., Poria, S., Cambria, E., Morency, L.P.: Tensor fusion network for multimodal sentiment analysis. In: Proceedings of Empirical Methods in Natural Language Processing, pp. 1114–1125 (2017)
  53. Poria, S., Peng, H., Hussain, A., Howard, N., Cambria, E.: Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis. Neurocomputing 261, 217–230 (2017)
  54. Eyben, F., Wöllmer, M., Graves, A., Schuller, B., Douglas-Cowie, E., Cowie, R.: On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues. J. Multimodal User Interfaces 3(1–2), 7–19 (2010)

Copyright information

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Samsung R & D Institute Delhi, Noida, India
