
Overview of Touché 2021: Argument Retrieval

  • Conference paper
  • Experimental IR Meets Multilinguality, Multimodality, and Interaction (CLEF 2021)

Abstract

This paper is a condensed report on the second year of the Touché shared task on argument retrieval held at CLEF 2021. With the goal of providing a collaborative platform for researchers, we organized two tasks: (1) supporting individuals in finding arguments on controversial topics of social importance and (2) supporting individuals with arguments in personal everyday comparison situations.

Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CLEF 2021, 21–24 September 2021, Bucharest, Romania.

Notes

  1. The name of the lab is inspired by the usage of the term ‘touché’ as an exclamation “used to admit that someone has made a good point against you in an argument or discussion.” [https://dictionary.cambridge.org/dictionary/english/touche].

  2. https://touche.webis.de/.

  3. https://www.research.ibm.com/artificial-intelligence/project-debater/.

  4. http://commoncrawl.org.

  5. The expected format of submissions was also described at https://touche.webis.de.

  6. https://webis.de/data.html#args-me-corpus.

  7. https://www.args.me/api-en.html.

  8. https://lemurproject.org/clueweb12/.

  9. https://demo.webis.de/targer-api/apidocs/.

  10. https://www.chatnoir.eu/doc/ (a minimal usage sketch of this API follows the list).
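
The APIs listed above were the participants' main entry points to the task data. As an illustration only, here is a minimal sketch, assuming the publicly documented ChatNoir search endpoint (note 10) and a personal API key; the key placeholder and the helper function name are hypothetical, and parameter names should be checked against the current API documentation.

    # Minimal sketch (not taken from the paper): query the ChatNoir API (note 10)
    # for candidate ClueWeb12 documents (note 8), e.g. for a comparison topic.
    import requests

    CHATNOIR_ENDPOINT = "https://www.chatnoir.eu/api/v1/_search"
    API_KEY = "<your-chatnoir-api-key>"  # placeholder; request a real key from chatnoir.eu

    def search_clueweb12(query, size=10):
        """Return the top `size` ChatNoir results for `query` on the ClueWeb12 index."""
        payload = {
            "apikey": API_KEY,
            "query": query,
            "index": ["cw12"],  # ClueWeb12 index identifier per the API docs
            "size": size,
        }
        response = requests.post(CHATNOIR_ENDPOINT, json=payload, timeout=30)
        response.raise_for_status()
        return response.json().get("results", [])

    if __name__ == "__main__":
        for hit in search_clueweb12("Which is better, a laptop or a desktop?"):
            print(hit.get("trec_id"), "-", hit.get("title"))

Participant systems would typically re-rank such candidates (e.g., by argumentativeness or argument quality) before producing their final runs.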

Acknowledgments

We are very grateful to the CLEF 2021 organizers and the Touché participants, who allowed this lab to happen. We also want to thank Jan Heinrich Reimer for setting up Doccano, Christopher Akiki for providing the baseline DirichletLM implementation, our volunteer annotators who helped to create the relevance and argument quality assessments, and our reviewers for their valuable feedback on the participants’ notebooks.

This work was partially supported by the DFG through the project “ACQuA: Answering Comparative Questions with Arguments” (grants BI 1544/7-1 and HA 5851/2-1) as part of the priority program “RATIO: Robust Argumentation Machines” (SPP 1999).

Author information

Corresponding author

Correspondence to Alexander Bondarenko.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Bondarenko, A. et al. (2021). Overview of Touché 2021: Argument Retrieval. In: Candan, K.S., et al. (eds.) Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2021. Lecture Notes in Computer Science, vol. 12880. Springer, Cham. https://doi.org/10.1007/978-3-030-85251-1_28

  • DOI: https://doi.org/10.1007/978-3-030-85251-1_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85250-4

  • Online ISBN: 978-3-030-85251-1

  • eBook Packages: Computer Science, Computer Science (R0)
