
Computational Linguistics and Intelligent Text Processing

Lecture Notes in Computer Science, Volume 7816, pp. 545-558

Discriminative Learning of First-Order Weighted Abduction from Partial Discourse Explanations

  • Kazeto Yamamoto (Tohoku University)
  • Naoya Inoue (Tohoku University)
  • Yotaro Watanabe (Tohoku University)
  • Naoaki Okazaki (Tohoku University)
  • Kentaro Inui (Tohoku University)


Abstract

Abduction is inference to the best explanation. It has long been studied in a wide range of contexts and is widely used to model artificial intelligence systems, such as diagnostic and plan recognition systems. Recent advances in automatic world knowledge acquisition and inference techniques make it feasible to apply abduction with large knowledge bases to real-life problems. However, little attention has been paid to automatically learning score functions, which rank candidate explanations in order of their plausibility. In this paper, we propose a novel approach for learning the score function of first-order logic-based weighted abduction [1] in a supervised manner. Because manually annotating abductive explanations (i.e., the sets of literals that explain observations) is often time-consuming, we propose a framework that learns the score function from partially annotated explanations (i.e., subsets of those literals). More specifically, we assume that abduction is applied to a specific task in which a subset of the best explanation is associated with output labels and the remaining literals are treated as hidden variables. We then formulate the learning problem as a task of discriminative structured learning with hidden variables. Our experiments on a plan recognition dataset show that our framework successfully reduces the loss at each iteration.
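
To illustrate the kind of learning scheme the abstract describes, the following is a minimal sketch of discriminative structured learning with hidden variables from partial explanations, written as a latent structured perceptron over a linear score function. It is not the authors' exact algorithm: candidate explanations are assumed to be pre-enumerated, and the names `Candidate`, `best`, and `train` are hypothetical.

```python
# Sketch only: latent structured perceptron for learning a linear
# score function over candidate abductive explanations, given only
# a partial annotation (a subset of the gold explanation's literals).

from dataclasses import dataclass
from collections import defaultdict


@dataclass(frozen=True)
class Candidate:
    literals: frozenset   # literals making up this candidate explanation
    features: tuple       # ((feature_name, value), ...) pairs


def score(weights, cand):
    """Linear score of a candidate explanation under current weights."""
    return sum(weights[f] * v for f, v in cand.features)


def best(weights, candidates, must_contain=frozenset()):
    """Highest-scoring candidate, optionally restricted to candidates
    consistent with the partially annotated literals."""
    pool = [c for c in candidates if must_contain <= c.literals]
    return max(pool, key=lambda c: score(weights, c)) if pool else None


def train(data, epochs=10, lr=1.0):
    """data: list of (candidates, gold_literals) pairs, where
    gold_literals is only a subset of the true best explanation;
    the remaining literals are treated as hidden variables."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for candidates, gold_literals in data:
            # Hidden-variable completion: best candidate consistent
            # with the partial annotation.
            h_gold = best(weights, candidates, gold_literals)
            # Current model's best (unconstrained) explanation.
            h_pred = best(weights, candidates)
            if h_gold is None or h_pred is None:
                continue
            if h_pred.literals != h_gold.literals:
                # Update toward the gold-consistent explanation and
                # away from the currently predicted one.
                for f, v in h_gold.features:
                    weights[f] += lr * v
                for f, v in h_pred.features:
                    weights[f] -= lr * v
    return weights
```

In this sketch the loss decreases as the model's best explanation comes to agree with the best annotation-consistent explanation; in practice, inference over first-order weighted abduction would replace the explicit enumeration of candidates.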