
Using Variant Directional Dis(similarity) Measures for the Task of Textual Entailment

  • Anand Gupta
  • Manpreet Kaur
  • Disha Garg
  • Karuna Saini
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 799)

Abstract

Textual entailment (TE) is the task of determining the degree of semantic inference between a pair of text fragments, and it underlies many natural language processing applications. In the literature, a single-document summarization framework has exploited TE to establish the degree of connectedness between pairs of sentences. Despite the noteworthy performance of that method, the extensive resource requirements and slow speed of the TE tool make it impractical for generating summaries in real-time scenarios. This has motivated the authors to propose the use of existing directional dis(similarity) (distance and similarity) measures in place of the TE system. The present paper aims to find a suitable directional measure that can successfully replace the TE system and decrease the overall runtime of the summarization method. To this end, state-of-the-art directional dis(similarity) measures are implemented in the same summarization framework, and a comparative analysis of their performance is presented. Experiments are conducted on the DUC 2002 dataset, and the results are evaluated with the ROUGE tool to identify the most suitable directional measure for textual entailment.
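To make the notion of a directional measure concrete, the following is a minimal sketch, not the authors' implementation and not any specific measure evaluated in the paper, of a simple token-overlap score that is asymmetric in its two arguments: the fraction of hypothesis tokens covered by the text. The function names and example sentences are illustrative assumptions.

```python
# Sketch of a directional token-overlap score (illustrative only):
# score(T, H) = |tokens(T) ∩ tokens(H)| / |tokens(H)|.
# Swapping T and H changes the denominator, so the score is directional,
# loosely mirroring inference from the text T to the hypothesis H.

import re


def tokenize(text):
    """Lowercase a sentence and return its set of word tokens."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def directional_overlap(text, hypothesis):
    """Fraction of hypothesis tokens that also appear in the text."""
    t, h = tokenize(text), tokenize(hypothesis)
    if not h:
        return 0.0
    return len(t & h) / len(h)


if __name__ == "__main__":
    t = "The summarization framework uses textual entailment between sentence pairs."
    h = "The framework uses entailment."
    print(directional_overlap(t, h))  # high: h's tokens are fully covered by t
    print(directional_overlap(h, t))  # lower: t contains tokens not present in h
```

Because the score changes when the arguments are swapped, such a measure can stand in for the direction of an entailment decision far more cheaply than running a full TE system on every sentence pair.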


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Anand Gupta (1)
  • Manpreet Kaur (1)
  • Disha Garg (2)
  • Karuna Saini (2)
  1. Department of Computer Science, NSIT, New Delhi, India
  2. Department of Information Technology, NSIT, New Delhi, India
