
Stance prediction with a relevance attribute to political issues in comparing the opinions of citizens and city councilors


Abstract

This study focuses on a method for differentiating between the stances of citizens and city councilors on political issues (i.e., in favor or against) and attempts to compare the arguments of both sides. We created a dataset by annotating citizen tweets and city council minutes with labels for four attributes: stance, usefulness, regional dependence, and relevance. We then fine-tuned a pretrained large language model on this dataset to assign the attribute labels to a large quantity of unlabeled data automatically. We introduced multitask learning to train each attribute jointly with relevance, so that the model identifies stance clues by focusing on the sentences that are relevant to the political issues. Our prediction models are based on T5, a large language model suitable for multitask learning. We compared the results from our system with those from systems using BERT or RoBERTa. Our experimental results showed that multitask learning improved the macro-F1 scores for stance by 1.8% for citizen tweets and 1.7% for city council minutes. Using the fine-tuned model to analyze real opinion gaps, we found that although the vaccination program was evaluated positively by city councilors in Fukuoka city, it was not rated very highly by citizens.
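As a concrete illustration of the text-to-text multitask setup described above, the following minimal sketch fine-tunes the Japanese T5 checkpoint listed in footnote 10 with PyTorch and HuggingFace Transformers (footnotes 9 and 13). It is not the authors' released code: the task prefixes, label strings, and example sentences are illustrative assumptions, and in the multitask setting examples for all four attributes would be mixed into one training stream.

    import torch
    from transformers import T5Tokenizer, T5ForConditionalGeneration

    MODEL_NAME = "sonoisa/t5-base-japanese"  # pretrained Japanese T5 (footnote 10)
    tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
    model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

    # Hypothetical training triples: a task prefix marks the attribute, and the
    # model learns to generate the label string for that attribute.
    examples = [
        ("stance: ", "ワクチン接種の予約が取りやすくなった。", "favor"),
        ("relevance: ", "今日は天気が良い。", "irrelevant"),
    ]

    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    model.train()
    for prefix, text, label in examples:
        enc = tokenizer(prefix + text, return_tensors="pt", truncation=True)
        target = tokenizer(label, return_tensors="pt", truncation=True)
        # T5ForConditionalGeneration returns a cross-entropy loss when labels are given.
        loss = model(**enc, labels=target.input_ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Inference: generate the label string for a new sentence.
    model.eval()
    query = tokenizer("stance: 保育所が足りない。", return_tensors="pt")
    pred = model.generate(**query, max_new_tokens=8)
    print(tokenizer.decode(pred[0], skip_special_tokens=True))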


Notes

  1. We define a “political issue” as an urgent issue or policy that divides opinion in local politics.

  2. https://x.com.

  3. In this study, political issues are expressed as “target sentences” in order to clarify the criteria for judging stances.

  4. Note that we did not use this label in the experiments described in Sect. 4.3 and later because such sentences accounted for less than 1% of the dataset.

  5. https://spacy.io.

  6. http://www.city.fukuoka.fukuoka.dbsr.jp/index.php/

  7. https://ssp.kaigiroku.net/tenant/cityosaka/SpTop.html

  8. http://giji.city.yokohama.lg.jp/tenant/yokohama/pg/index.html.

  9. https://pytorch.org/.

  10. https://huggingface.co/sonoisa/t5-base-japanese.

  11. https://huggingface.co/ku-nlp/roberta-base-japanese-char-wwm.

  12. https://github.com/cl-tohoku/bert-japanese.

  13. https://huggingface.co/docs/transformers.

  14. Note that all these data should be predicted as “N/A.”

References

  1. Asahi Shimbun Digital (2018) Visualizing the issue of children on waiting list project. https://www.asahi.com/special/taikijido/, Accessed 20 September 2022. (in Japanese)

  2. Augenstein I, Rocktäschel T, Vlachos A, et al (2016) Stance Detection with Bidirectional Conditional Encoding. In: Su J, Duh K, Carreras X (eds) Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, vol 1. Association for Computational Linguistics, Austin, Texas, pp 876–885, https://doi.org/10.18653/v1/D16-1084

  3. Baly R, Mohtarami M, Glass J, et al (2018) Integrating Stance Detection and Fact Checking in a Unified Corpus. In: Walker M, Ji H, Stent A (eds) Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol 2. Association for Computational Linguistics, New Orleans, Louisiana, pp 21–27, https://doi.org/10.18653/v1/N18-2004

  4. Caruana, R.: Multitask Learning. Mach. Learn. 28(1), 41–75 (1997). https://doi.org/10.1023/A:1007379606734


  5. Devlin J, Chang MW, Lee K, et al (2019) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: Burstein J, Doran C, Solorio T (eds) Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol 1. Association for Computational Linguistics, Minneapolis, Minnesota, pp 4171–4186, https://doi.org/10.18653/v1/N19-1423

  6. Fleiss, J.L.: Measuring Nominal Scale Agreement among Many Raters. Psychol. Bull. 76(5), 378–382 (1971). https://doi.org/10.1037/h0031619


  7. Hanselowski A, PVS A, Schiller B, et al (2018) A Retrospective Analysis of the Fake News Challenge Stance-Detection Task. In: Bender EM, Derczynski L, Isabelle P (eds) Proceedings of the 27th International Conference on Computational Linguistics, vol 1. Association for Computational Linguistics, Santa Fe, New Mexico, USA, pp 1859–1874, https://aclanthology.org/C18-1158/

  8. Ishida, T., Seki, Y., Kashino, W., et al.: Extracting citizen feedback from social media by appraisal opinion type viewpoint. J. Nat. Lang. Process. 29(2), 416–442 (2022). https://doi.org/10.5715/jnlp.29.416


  9. Kimura Y, Shibuki H (2009) Annotation of common categories for matching between minutes of municipal assemblies and inhabitants blog (in Japanese). Proceedings of the 23rd Annual Conference of the Japanese Society for Artificial Intelligence JSAI2009(0):3F2NFC310. https://doi.org/10.11517/pjsai.jsai2009.0_3f2nfc310

  10. Kimura Y, Shibuki H, Ototake H, et al (2019) Overview of the NTCIR-14 QA Lab-PoliInfo Task. Proceedings of the 14th NTCIR Conference pp 121–140. https://research.nii.ac.jp/ntcir/workshop/OnlineProceedings14/pdf/ntcir/01-NTCIR14-OV-QALAB-KimuraY.pdf

  11. Kimura Y, Shibuki H, Ototake H, et al (2020) Overview of the NTCIR-15 QA Lab-PoliInfo-2 Task. Proceedings of the 15th NTCIR Conference pp 101–112. https://research.nii.ac.jp/ntcir/workshop/OnlineProceedings15/pdf/ntcir/01-NTCIR15-OV-QALAB-KimuraY.pdf

  12. Kolhatkar V, Taboada M (2017) Constructive language in news comments. In: Waseem Z, Chung WHK, Hovy D, et al (eds) Proceedings of the First Workshop on Abusive Language Online, vol 1. Association for Computational Linguistics, Vancouver, BC, Canada, pp 11–17, https://doi.org/10.18653/v1/W17-3002

  13. Kyodo News (2021) Yokohama Withdraws Bid to Host Casino Resort due to Local Concerns. https://english.kyodonews.net/news/2021/09/8b903ebe4a1e-yokohama-withdraws-bid-to-host-casino-resort-due-to-local-concerns.html, Accessed 13 July 2022

  14. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics 33(1), 159–174 (1977). https://doi.org/10.2307/2529310


  15. Liu Y, Ott M, Goyal N, et al (2019) RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692. https://doi.org/10.48550/arXiv.1907.11692

  16. Loshchilov I, Hutter F (2019) Decoupled Weight Decay Regularization. In: 7th International Conference on Learning Representations (ICLR 2019). OpenReview.net, https://openreview.net/forum?id=Bkg6RiCqY7

  17. Mohammad S, Kiritchenko S, Sobhani P, et al (2016) SemEval-2016 Task 6: Detecting Stance in Tweets. In: Bethard S, Carpuat M, Cer D, et al (eds) Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), vol 1. Association for Computational Linguistics, San Diego, California, pp 31–41, https://doi.org/10.18653/v1/S16-1003

  18. Prime Minister’s Official Residence (2021) A Collection of “Good” Local Government Innovations for Vaccination. https://www.kantei.go.jp/jp/headline/kansensho/jirei.html, Accessed 14 July 2023. (in Japanese)

  19. Raffel, C., Shazeer, N., Roberts, A., et al.: Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21(1), 5485–5551 (2020)

  20. Roy, A., Fafalios, P., Ekbal, A., et al.: Exploiting stance hierarchies for cost-sensitive stance detection of web documents. J. Intell. Inf. Syst. 58, 1–19 (2022). https://doi.org/10.1007/s10844-021-00642-z


  21. Senoo, K., Seki, Y., Kashino, W., et al.: Visualization of the Gap Between the Stances of Citizens and City Councilors on Political Issues. In: Tseng, Y.H., Katsurai, M., Nguyen, H.N. (eds.) From Born-Physical to Born-Virtual: Augmenting Intelligence in Digital Libraries, pp. 73–89. Springer, Cham (2022)


  22. Stefanov P, Darwish K, Atanasov A, et al (2020) Predicting the Topical Stance and Political Leaning of Media Using Tweets. In: Jurafsky D, Chai J, Schluter N, et al (eds) Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, vol 1. Association for Computational Linguistics, Online, pp 527–537, https://doi.org/10.18653/v1/2020.acl-main.50

  23. Vaswani A, Shazeer N, Parmar N, et al (2017) Attention is All You Need. CoRR abs/1706.03762. https://doi.org/10.48550/arXiv.1706.03762

  24. Xu C, Paris C, Nepal S, et al (2018) Cross-Target Stance Classification with Self-Attention Networks. In: Gurevych I, Miyao Y (eds) Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, vol 2. Association for Computational Linguistics, Melbourne, Australia, pp 778–783, https://doi.org/10.18653/v1/P18-2123

  25. Zhang Q, Liang S, Lipani A, et al (2019) From Stances’ Imbalance to Their Hierarchical Representation and Detection. In: Liu L, White R (eds) The World Wide Web Conference, WWW ’19, vol 1. Association for Computing Machinery, New York, NY, USA, pp 2323–2332, https://doi.org/10.1145/3308558.3313724


Acknowledgements

This work was partially supported by a Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Challenging Exploratory Research (#22K19822), a Grant-in-Aid for Scientific Research (B) (#23H03686), and a Grant-in-Aid for Research Activity Start-up (#22K21303).

Author information


Corresponding authors

Correspondence to Ko Senoo or Yohei Seki.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A

A.1 Dataset

Table 8 shows Fleiss’ \(\kappa \) coefficients for each team used for the annotation work described in Sect. 4.1.3.
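For reference, the agreement values in Table 8 follow the standard definition of Fleiss’ \(\kappa \) [6]: with \(N\) annotated sentences, \(n\) annotators per sentence, \(k\) label categories, and \(n_{ij}\) the number of annotators who assigned sentence \(i\) to category \(j\), \(\kappa = (\bar{P} - \bar{P}_e)/(1 - \bar{P}_e)\), where \(\bar{P} = \frac{1}{N}\sum _{i=1}^{N}\frac{1}{n(n-1)}\bigl (\sum _{j=1}^{k} n_{ij}^2 - n\bigr )\) is the mean observed agreement and \(\bar{P}_e = \sum _{j=1}^{k}\bigl (\frac{1}{Nn}\sum _{i=1}^{N} n_{ij}\bigr )^2\) is the agreement expected by chance.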

A.2 Training

Table 9 shows the optimal combinations for \(\alpha _k\), as described in Sects. 4.3 and 4.4.
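The definition of \(\alpha _k\) is given in Sects. 4.3 and 4.4 and is not reproduced here; a common formulation for weighting per-attribute losses in multitask learning, and the reading assumed in this summary, is a weighted sum \(\mathcal {L} = \sum _{k} \alpha _k \mathcal {L}_k\), where \(\mathcal {L}_k\) is the loss for attribute \(k\) (stance, usefulness, regional dependence, or relevance) and \(\alpha _k\) controls its contribution to the joint objective.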

A.3 Comparison of citizens’ and city councilors’ opinions

For the political issue “children on a waiting list,” Fig. 10 shows the results for Fukuoka, Osaka, and Yokohama.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Senoo, K., Seki, Y., Kashino, W. et al. Stance prediction with a relevance attribute to political issues in comparing the opinions of citizens and city councilors. Int J Digit Libr 25, 75–91 (2024). https://doi.org/10.1007/s00799-024-00396-3
