Applying Self-attention for Stance Classification

  • Margarita Bugueño
  • Marcelo Mendoza
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11896)

Abstract

Stance classification is the task of automatically identifying a user’s position on a specific topic. Classifying stance helps us understand how people react to a piece of target information, which is of interest in areas such as advertising campaigns, brand analytics, and fake news detection, among others. The rise of social media has shifted the focus of this task to stance classification in online social networks. A number of methods have been designed for this purpose, showing that the problem is hard and challenging. In this work, we explore how to use self-attention models for stance classification. Instead of using attention mechanisms to learn directly from the text, we use self-attention to combine the outputs of different baselines. For a given post, we use the transformer architecture to encode each baseline’s output, exploiting relationships between baselines and posts. The transformer then learns how to combine the outputs of these methods, reaching a consistently better classification than the ones provided by the baselines. We conclude that self-attention models are helpful for learning from baselines’ outputs in a stance classification task.
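The sketch below illustrates the kind of combination the abstract describes, assuming PyTorch: a transformer encoder is applied over the per-post outputs of several baseline stance classifiers, and the pooled representation is mapped to a stance label. The class names, dimensions, mean pooling, and the exact encoding of baseline outputs are illustrative assumptions, not the paper’s configuration.

```python
# A minimal sketch (PyTorch assumed): self-attention over baseline outputs.
import torch
import torch.nn as nn

class BaselineFusion(nn.Module):
    """Encodes one vector per baseline classifier with a transformer encoder
    and pools the result into a single stance prediction for the post.
    Names and dimensions here are illustrative, not the paper's setup."""

    def __init__(self, feat_dim, num_classes, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.project = nn.Linear(feat_dim, d_model)          # per-baseline input projection
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_classes)    # pooled vector -> stance logits

    def forward(self, baseline_outputs):
        # baseline_outputs: (batch, num_baselines, feat_dim), e.g. each baseline's
        # class probabilities for the post, optionally concatenated with post features.
        h = self.encoder(self.project(baseline_outputs))     # baselines attend to each other
        return self.classifier(h.mean(dim=1))                # simple mean pooling over baselines

# Toy usage: 8 posts, 4 baselines, each emitting 4 class probabilities, 4 stance classes.
model = BaselineFusion(feat_dim=4, num_classes=4)
print(model(torch.rand(8, 4, 4)).shape)                      # torch.Size([8, 4])
```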

Keywords

Stance classification · Self-attention models · Social networks

Notes

Acknowledgements

Mr. Mendoza and Ms. Bugueño acknowledge funding from the Millennium Institute for Foundational Research on Data. Mr. Mendoza was partially funded by the project BASAL FB0821.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Instituto Milenio Fundamentos de los Datos, Departamento de Informática, Universidad Técnica Federico Santa María, Santiago, Chile
