
Extracting Opinion Targets Using Attention-Based Neural Model

  • Original Research
  • Published in: SN Computer Science

Abstract

Opinion-target expression extraction is a core subtask of aspect-based sentiment analysis, which aims to identify the aspects discussed in a text together with their opinion targets and to classify the sentiment expressed toward each as positive, negative, or neutral. This paper proposes a deep learning model for the opinion-target expression extraction task. The proposed model combines a bidirectional long short-term memory (BiLSTM) network as an encoder, an LSTM decoder with an attention mechanism, and a conditional random field (CRF) output layer. The model operates at the sentence level and is designed to extract opinion targets for the Arabic language. Its performance is evaluated on the SemEval-2016 annotated dataset for the hotels domain. Experimental results show that the proposed model outperforms the baseline and prior work, achieving an F1 measure of 72.83%.
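The CRF layer described above performs sequence labeling: each token receives a BIO tag (B = beginning of an opinion target, I = inside one, O = outside), and the CRF selects the globally best tag sequence rather than labeling each token independently. As a hedged illustration only (not the authors' implementation; the emission and transition scores below are made up, and in the real model the emissions would come from the attention-based decoder), a minimal Viterbi decoder over BIO tags in pure Python:

```python
# Minimal Viterbi decoder over BIO tags, illustrating the CRF
# inference step used in sequence labeling for opinion-target
# extraction. All scores here are illustrative, not learned.

TAGS = ["O", "B", "I"]

# Transition scores: O -> I is forbidden, since an Inside tag
# must follow a Begin or another Inside tag.
TRANSITIONS = {
    ("O", "O"): 0.0, ("O", "B"): 0.0, ("O", "I"): float("-inf"),
    ("B", "O"): 0.0, ("B", "B"): 0.0, ("B", "I"): 0.5,
    ("I", "O"): 0.0, ("I", "B"): 0.0, ("I", "I"): 0.5,
}

def viterbi(emissions):
    """emissions: one dict per token, mapping tag -> score.
    Returns the highest-scoring valid BIO tag sequence."""
    # scores[t] = best score of any path ending in tag t so far
    scores = {t: emissions[0][t] for t in TAGS}
    backptr = []
    for em in emissions[1:]:
        new_scores, ptrs = {}, {}
        for cur in TAGS:
            best_prev = max(TAGS, key=lambda p: scores[p] + TRANSITIONS[(p, cur)])
            new_scores[cur] = scores[best_prev] + TRANSITIONS[(best_prev, cur)] + em[cur]
            ptrs[cur] = best_prev
        scores = new_scores
        backptr.append(ptrs)
    # Trace the best path backwards from the best final tag.
    last = max(TAGS, key=lambda t: scores[t])
    path = [last]
    for ptrs in reversed(backptr):
        path.append(ptrs[path[-1]])
    return list(reversed(path))

# Hypothetical token scores for "the room service was slow",
# where "room service" is the opinion target.
emissions = [
    {"O": 2.0, "B": 0.1, "I": 0.0},   # the
    {"O": 0.2, "B": 1.5, "I": 0.3},   # room
    {"O": 0.3, "B": 0.4, "I": 1.8},   # service
    {"O": 2.0, "B": 0.1, "I": 0.2},   # was
    {"O": 2.0, "B": 0.1, "I": 0.1},   # slow
]
print(viterbi(emissions))  # ['O', 'B', 'I', 'O', 'O']
```

Because the transition table scores whole tag pairs, the decoder can never emit an I tag without a preceding B or I, which is exactly the structural constraint a CRF layer adds on top of per-token classifiers.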



Funding

No funding was received.

Author information

Corresponding author

Correspondence to Saja Al-Dabet.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.



About this article


Cite this article

Al-Dabet, S., Tedmori, S. & Al-Smadi, M. Extracting Opinion Targets Using Attention-Based Neural Model. SN COMPUT. SCI. 1, 242 (2020). https://doi.org/10.1007/s42979-020-00270-4
