
Abductive Reasoning with a Large Knowledge Base for Discourse Processing

Chapter in the Text, Speech and Language Technology book series (TLTB, volume 47)


This chapter presents a discourse processing framework based on weighted abduction. We elaborate on ideas described in Hobbs et al. (1993) and implement the abductive inference procedure in a system called Mini-TACITUS. Particular attention is paid to constructing a large and reliable knowledge base for supporting inferences. For this purpose we exploit such lexical-semantic resources as WordNet and FrameNet. English Slot Grammar is used to parse text and produce logical forms. We test the proposed procedure and the resulting knowledge base on the recognizing textual entailment task using the data sets from the RTE-2 challenge for evaluation. In addition, we provide an evaluation of the semantic role labeling produced by the system taking the Frame-Annotated Corpus for Textual Entailment as a gold standard.
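The core cost mechanism of weighted abduction can be sketched in a few lines. The fragment below is our own illustration, not the authors' code: Mini-TACITUS additionally handles unification, factoring, and search-space pruning. In the cost model of Hobbs et al. (1993), backchaining on a goal literal carrying cost c through an axiom whose premises have weights w_i replaces the goal by premises with costs w_i * c; a literal may instead simply be assumed at full cost, literals provable from the knowledge base cost nothing, and the best interpretation is the cheapest one.

```python
def interpret(goal, cost, axioms, facts, depth=3):
    """Return the cheapest cost of explaining `goal` carrying `cost`.

    `axioms` is a list of (head, [(weight, premise), ...]) pairs, read as
    "w1*P1 & w2*P2 -> head"; `facts` is a set of known literals.
    """
    if goal in facts:          # provable from the knowledge base: free
        return 0.0
    best = cost                # option 1: assume the goal as-is
    if depth == 0:
        return best
    for head, weighted_body in axioms:
        if head == goal:       # option 2: backchain on a matching axiom
            total = sum(interpret(p, w * cost, axioms, facts, depth - 1)
                        for w, p in weighted_body)
            best = min(best, total)
    return best

# Toy knowledge base: 1.2*cat(x) -> animal(x), i.e. assuming "cat"
# costs 20% more than assuming "animal" directly.
axioms = [("animal", [(1.2, "cat")])]

# With no facts, the cheapest explanation of animal (cost 10) is to assume it:
# backchaining would cost 12, so assumption wins.
print(interpret("animal", 10.0, axioms, facts=set()))    # 10.0

# If "cat" is already known, backchaining makes the explanation free.
print(interpret("animal", 10.0, axioms, facts={"cat"}))  # 0.0
```

Note that the multiplicative cost propagation is what makes specific explanations competitive: once any premise of an axiom is supported by the knowledge base, the whole explanation becomes cheaper than a flat assumption.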


  • Logical Form
  • Word Sense
  • Good Interpretation
  • Discourse Processing
  • Semantic Role Label

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Michael C. McCord is an independent researcher.

  • DOI: 10.1007/978-94-007-7284-7_7


  1.

  2. In the rest of this chapter we omit quantification.

  3. The actual values of the default costs of the input propositions do not matter, because the interpretation costs are calculated using a multiplicative function. The only heuristic we use here is to set the costs of all input propositions to be equal (every proposition costs 10 in the discussed example). This heuristic needs further investigation.

  4. The anaphoric he in the logical form is already linked to its antecedent John.

  5.

  6.

  7. The computation was done on a High Performance Cluster (320 2.4 GHz nodes, CentOS 5.0) of the Center for Industrial Mathematics (Bremen, Germany).

  8. "Number of axioms" stands for the average number of axioms applied per sentence.

  9. To better understand which parts of our KB are useful for computing entailment, and for which types of entailment, we plan in the future to use the detailed annotation of the RTE-2 dataset produced by Garoufi (2007), which describes the source of each entailment. We would like to thank one of the reviewers of our IWCS 2011 paper, on which this chapter is based, for this idea.

  10. FATE was annotated with FrameNet 1.3 labels, while we have been using version 1.5 for extracting axioms. However, in the new FN version the number of frames and roles increases, and the General Release Notes R1.5 do not mention any removed frames. We therefore suppose that most of the frames and roles used for the FATE annotation are still present in FN 1.5.

  11. We do not compare filler matching, because the FATE syntactic annotation follows different standards from those of the ESG parser, which makes aligning fillers non-trivial.

  12. There exists one more probabilistic system labeling text with FrameNet frames and roles, called SEMAFOR (Das et al. 2010). We do not compare our results with those of SEMAFOR, because it has not yet been evaluated against the FATE corpus.

  13. The discourse processing pipeline, including the ILP-based abductive reasoner, is available at


References

  • Bar-Haim, R., Dagan, I., Dolan, B., Ferro, L., Giampiccolo, D., Magnini, B., & Szpektor, I. (2006). The second PASCAL recognising textual entailment challenge. In Proc. of the second PASCAL challenges workshop on recognising textual entailment.

  • Burchardt, A., & Pennacchiotti, M. (2008). FATE: A FrameNet-annotated corpus for textual entailment. In Proc. of LREC’08, Marrakech, Morocco.

  • Burchardt, A., Erk, K., & Frank, A. (2005). A WordNet detour to FrameNet. In Sprachtechnologie, mobile Kommunikation und linguistische Ressourcen (Vol. 8).

  • Burchardt, A., Pennacchiotti, M., Thater, S., & Pinkal, M. (2009). Assessing the impact of frame semantics on textual entailment. Natural Language Engineering, 15(4), 527–550.

  • Clark, P., Harrison, P., Thompson, J., Murray, W., Hobbs, J., & Fellbaum, C. (2007). On the role of lexical and world knowledge in RTE3. In Proc. of the ACL-PASCAL workshop on textual entailment and paraphrasing (pp. 54–59).

  • Dagan, I., Dolan, B., Magnini, B., & Roth, D. (2010). Recognizing textual entailment: Rational, evaluation and approaches – erratum. Natural Language Engineering, 16(1), 105.

  • Das, D., Schneider, N., Chen, D., & Smith, N. A. (2010). SEMAFOR 1.0: A probabilistic frame-semantic parser (Technical Report CMU-LTI-10-001). Carnegie Mellon University, Pittsburgh, Pennsylvania.

  • Davidson, D. (1967). The logical form of action sentences. In N. Rescher (Ed.), The logic of decision and action (pp. 81–120). Pittsburgh: University of Pittsburgh Press.

  • Erk, K., & Pado, S. (2006). Shalmaneser – a flexible toolbox for semantic role assignment. In Proc. of LREC’06, Genoa, Italy.

  • Fellbaum, C. (Ed.) (1998). WordNet: An electronic lexical database (1st ed.). Cambridge: MIT Press.

  • Garoufi, K. (2007). Towards a better understanding of applied textual entailment: Annotation and evaluation of the RTE-2 dataset. Master’s thesis, Saarland University.

  • Hobbs, J. R. (1985). Ontological promiscuity. In Proc. of the 23rd annual meeting of the Association for Computational Linguistics, Chicago, Illinois (pp. 61–69).

  • Hobbs, J. R., Stickel, M., Appelt, D., & Martin, P. (1993). Interpretation as abduction. Artificial Intelligence, 63, 69–142.

  • Inoue, N., & Inui, K. (2011). ILP-based reasoning for weighted abduction. In Proc. of AAAI workshop on plan, activity and intent recognition.

  • Inoue, N., Ovchinnikova, E., Inui, K., & Hobbs, J. R. (2012). Coreference resolution with ILP-based weighted abduction. In Proc. of the 24th international conference on computational linguistics (pp. 1291–1308).

  • McCord, M. C. (1990). Slot grammar: A system for simpler construction of practical natural language grammars. In Natural language and logic: International scientific symposium (Lecture notes in computer science, pp. 118–145). Berlin: Springer.

  • McCord, M. C. (2010). Using slot grammar (Technical Report RC 23978 Revised). IBM T. J. Watson Research Center.

  • McCord, M. C., Murdock, J. W., & Boguraev, B. K. (2012). Deep parsing in Watson. IBM Journal of Research and Development, 56(3/4), 3:1–3:15.

  • Mulkar, R., Hobbs, J. R., & Hovy, E. (2007). Learning from reading syntactically complex biology texts. In Proc. of the 8th international symposium on logical formalizations of commonsense reasoning, Palo Alto, USA.

  • Mulkar-Mehta, R. (2007). Mini-TACITUS.

  • Ovchinnikova, E. (2012). Integration of world knowledge for natural language understanding. Amsterdam: Atlantis Press.

  • Ovchinnikova, E., Vieu, L., Oltramari, A., Borgo, S., & Alexandrov, T. (2010). Data-driven and ontological analysis of FrameNet for natural language reasoning. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proc. of LREC’10. Valletta, Malta: European Language Resources Association (ELRA).

  • Peñas, A., & Ovchinnikova, E. (2012). Unsupervised acquisition of axioms to paraphrase noun compounds and genitives. In Proc. of the international conference on intelligent text processing and computational linguistics (LNCS, pp. 388–401), New Delhi, India. Berlin: Springer.

  • Ruppenhofer, J., Ellsworth, M., Petruck, M., Johnson, C., & Scheffczyk, J. (2006). FrameNet II: Extended theory and practice. Berkeley: International Computer Science Institute.

  • Shen, D., & Lapata, M. (2007). Using semantic roles to improve question answering. In Proc. of EMNLP-CoNLL (pp. 12–21).

  • Stickel, M. E. (1988). A Prolog technology theorem prover: Implementation by an extended Prolog compiler. Journal of Automated Reasoning, 4(4), 353–380.


Author information

Corresponding author: Ekaterina Ovchinnikova.


Copyright information

© 2014 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Ovchinnikova, E., Montazeri, N., Alexandrov, T., Hobbs, J.R., McCord, M.C., Mulkar-Mehta, R. (2014). Abductive Reasoning with a Large Knowledge Base for Discourse Processing. In: Bunt, H., Bos, J., Pulman, S. (eds) Computing Meaning. Text, Speech and Language Technology, vol 47. Springer, Dordrecht.



  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-007-7283-0

  • Online ISBN: 978-94-007-7284-7

  • eBook Packages: Computer Science (R0)