Abstract
We investigate the integration of induction and abduction in the context of logic programming. Our integration proceeds by learning theories for abductive logic programming (ALP) within the framework of inductive logic programming (ILP). Both ILP and ALP are important research areas in logic programming and AI. ILP provides theoretical frameworks and practical algorithms for the inductive learning of relational descriptions in the form of logic programs (Muggleton, 1992; Lavrač and Džeroski, 1994; De Raedt, 1996). ALP, on the other hand, is usually considered an extension of logic programming that handles abduction, so that incomplete information can be represented and processed easily (Kakas et al., 1992). Learning abductive programs has also been proposed as an extension of previous work on ILP (Dimopoulos and Kakas, 1996b; Kakas and Riguzzi, 1997). The important question here is “how do we learn abductive theories?”
Notes
While we adopted the answer set semantics for LELP, other semantics for ELPs may be applicable to our learning framework with minor modifications. For example, Lamma et al. use a well-founded semantics for learning ELPs, and their output hypotheses take a slightly different form from ours (Lamma et al., 1998).
In conventional machine learning methods, a search bias and a noise-handling mechanism are usually implemented to prevent the induced hypotheses from overfitting the given examples. See (Lavrač and Džeroski, 1994, Chapter 8) for an overview of mechanisms for handling imperfect data in ILP. These conventional approaches to noise handling can also be applied to the determination and implementation of GenRules in learning positive or negative rules, e.g., (Srinivasan et al., 1992), in conjunction with our solutions. Since both positive and negative concepts are learned in our proposals, the use of parallel default rules and nondeterministic rules further minimizes the number of incorrectly classified training examples.
We can also consider other criteria for learning hierarchical default cancellation rules. For example, we could even produce nondeterministic rules at lower levels of the hierarchy.
The LELP2 algorithm in this chapter has been revised from the previous version in (Inoue and Kudoh, 1997). The previous algorithm produced rules deriving counter-examples by Counter at every level of the hierarchy, whereas such rules are now added only once, at the top level (Step 7 or 8), and only when parallel default rules are not learned. Hence, for Example 14.3, the resulting Rules no longer include the rule -flies(D) :- ab2(D), which is not necessary. This redundancy in the previous version was pointed out in (Lamma et al., 1998).
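To make the ELP notation concrete, the following is a schematic sketch (not verbatim from the chapter) of hierarchical default rules for the standard flying-birds setting, where `-` denotes explicit (classical) negation and `not` denotes negation as failure; the abnormality predicates `ab1` and `ab2` follow the convention used in Example 14.3:

```prolog
% Top-level default: birds fly unless shown abnormal.
flies(D) :- bird(D), not ab1(D).

% Exception one level down: penguins do not fly unless abnormal.
-flies(D) :- penguin(D), not ab2(D).

% Cancellation rule: penguins are abnormal with respect to the
% bird-flies default, so the exception overrides it.
ab1(D) :- penguin(D).

% Illustrative facts.
bird(tweety).
bird(opus).
penguin(opus).
```

Under the answer set semantics, this program derives flies(tweety) and -flies(opus); the revised LELP2 no longer needs the additional rule -flies(D) :- ab2(D) because counter-examples are already blocked by the cancellation rule.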
To avoid inductive leaps, some researchers propose a weak form of induction by applying the CWA to BG ∪ E through Clark’s completion, e.g., (De Raedt and Lavrač, 1993). However, as explained earlier, the CWA is not appropriate when learning ELPs.
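As a standard illustration (not taken from the chapter) of why completion-based CWA is too strong for ELPs, consider completing the single clause flies(x) ← bird(x), not ab(x). Clark’s completion yields

$$\forall x\,\bigl(\mathit{flies}(x) \leftrightarrow \mathit{bird}(x) \land \neg\,\mathit{ab}(x)\bigr),$$

which forces ¬flies(x) for every x that is not provably a non-abnormal bird. In learning ELPs one instead wants to leave flies undetermined for such x, or to derive -flies(x) only through explicitly learned negative rules.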
Copyright information
© 2000 Springer Science+Business Media Dordrecht
Cite this chapter
Inoue, K., Haneda, H. (2000). Learning Abductive and Nonmonotonic Logic Programs. In: Flach, P.A., Kakas, A.C. (eds) Abduction and Induction. Applied Logic Series, vol 18. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-0606-3_14
Print ISBN: 978-90-481-5433-3
Online ISBN: 978-94-017-0606-3