Construction of Vietnamese Argument Annotated Dataset for Why-Question Answering Method

  • Chinh Trong Nguyen
  • Dang Tuan Nguyen
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 168)

Abstract

In this paper, a method for building a Vietnamese Argument Annotated Dataset (VAAD) is presented. The dataset contains argumentative data that can be used to answer why-questions; it is therefore important to identify the characteristics of answers to why-questions in order to develop a why-question answering method based on causal relations between texts. In addition, the dataset can be used to generate a test set for evaluating the answering method. To build the dataset, a four-step process is proposed after a study of the relevant problems. To briefly evaluate the method, an experiment is conducted that shows its applicability in practice.

Keywords

Discourse analysis · Why-question answering · Vietnamese Argument Annotated Dataset

Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2016

Authors and Affiliations

  1. Faculty of Computer Science, University of Information Technology, VNU-HCM, Ho Chi Minh City, Vietnam