Reducing Examples in Relational Learning with Bounded-Treewidth Hypotheses
Cite this paper as: Kuželka O., Szabóová A., Železný F. (2013) Reducing Examples in Relational Learning with Bounded-Treewidth Hypotheses. In: Appice A., Ceci M., Loglisci C., Manco G., Masciari E., Ras Z.W. (eds) New Frontiers in Mining Complex Patterns. NFMCP 2012. Lecture Notes in Computer Science, vol 7765. Springer, Berlin, Heidelberg
Feature selection methods often improve the performance of attribute-value learning. We explore whether relational learning can benefit analogously: whether examples in the form of clauses can be reduced in size to speed up learning without affecting the learned hypothesis. To this end, we introduce the notion of safe reduction: a safely reduced example cannot be distinguished from the original example under the given hypothesis language bias. Next, we consider the particular, rather permissive bias of bounded-treewidth clauses. We show that under this hypothesis bias, examples of arbitrary treewidth can be reduced efficiently. The bounded-treewidth bias can be replaced by other assumptions, such as acyclicity, with similar benefits. We evaluate our approach on four data sets with the popular system Aleph and the state-of-the-art relational learner nFOIL. On all four data sets, reduction makes learning with nFOIL faster, yielding an order-of-magnitude speed-up on one of them, and makes learning with Aleph more accurate.
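As background for the kind of example reduction the abstract describes, the sketch below illustrates classical θ-subsumption-based clause reduction (Plotkin's reduction), in which a literal may be dropped whenever the clause still θ-subsumes the remainder, so the reduced clause stays logically equivalent. This is a minimal illustration of the general idea only, not the paper's safe-reduction algorithm for bounded-treewidth bias; the representation of literals as `(predicate, args)` tuples with uppercase-initial variable names is an assumption made for this sketch.

```python
def is_var(term):
    # Convention assumed here: variables start with an uppercase letter.
    return term[0].isupper()

def subsumes(c, d):
    """True if clause c theta-subsumes clause d, i.e. some substitution
    theta maps every literal of c onto a literal of d (function-free case)."""
    def extend(lits, theta):
        if not lits:
            return True
        (pred, args), rest = lits[0], lits[1:]
        for dpred, dargs in d:
            if dpred != pred or len(dargs) != len(args):
                continue
            new, ok = dict(theta), True
            for a, b in zip(args, dargs):
                if is_var(a):
                    if new.get(a, b) != b:  # conflicting binding
                        ok = False
                        break
                    new[a] = b
                elif a != b:                # constants must match exactly
                    ok = False
                    break
            if ok and extend(rest, new):
                return True
        return False
    return extend(list(c), {})

def reduce_clause(c):
    """Greedily drop literals while the clause theta-subsumes the remainder;
    the result is equivalent to the input under theta-subsumption."""
    c = list(c)
    changed = True
    while changed:
        changed = False
        for lit in list(c):
            rest = [l for l in c if l != lit]
            if subsumes(c, rest):
                c = rest
                changed = True
                break
    return c
```

For example, the clause `e(X,Y), e(Y,Z), e(X,V)` reduces to `e(X,Y), e(Y,Z)`, since the substitution `{V -> Y}` maps the former into the latter. Deciding such subsumption is NP-complete in general, which is precisely why the paper's restriction to structural assumptions like bounded treewidth matters for efficient reduction.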