Machine Learning, Volume 16, Issue 3, pp 203–225

Complexity-Based Induction

  • Darrell Conklin
  • Ian H. Witten

DOI: 10.1023/A:1022641209111

Cite this article as:
Conklin, D. & Witten, I.H. Machine Learning (1994) 16: 203. doi:10.1023/A:1022641209111

Abstract

A central problem in inductive logic programming is theory evaluation. Without a preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. It strives to extract maximal redundancy from examples, transforming structure into randomness. A major strength of the method is its application to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model-theoretic and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories.

Keywords: inductive logic programming · data compression · minimum description length principle · model complexity · learning from positive-only examples · theory preference criterion
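
The preference criterion described in the abstract follows the minimum description length (MDL) principle: among theories that explain the examples, prefer the one minimizing the combined code length L(theory) + L(examples | theory), so that a good theory literally compresses its examples. The Python sketch below illustrates that comparison only; the bit costs (8 bits per clause, 4 bits per literal, 32 bits per unexplained example) and the candidate theories are invented for illustration and are not the paper's coding scheme.

```python
import math

def description_length(theory):
    """Toy MDL score: L(theory) + L(examples | theory).
    All bit costs are illustrative assumptions."""
    # L(theory): pay for each clause and each literal in the hypothesis
    # (a 16-symbol vocabulary is assumed, so 4 bits per literal).
    l_theory = theory["clauses"] * 8.0 + theory["literals"] * math.log2(16)
    # L(examples | theory): explained examples cost nothing here;
    # unexplained ones are transmitted verbatim at 32 bits each.
    l_data = theory["unexplained"] * 32.0
    return l_theory + l_data

# Three hypothetical theories for the same set of 50 positive examples.
candidates = [
    {"name": "overfit", "clauses": 20, "literals": 60, "unexplained": 0},
    {"name": "compact", "clauses": 2,  "literals": 6,  "unexplained": 1},
    {"name": "trivial", "clauses": 0,  "literals": 0,  "unexplained": 50},
]

best = min(candidates, key=description_length)
print(best["name"], description_length(best))  # -> compact 72.0
```

Because the score charges for unexplained examples rather than requiring negative examples to rule theories out, this style of comparison applies in the positive-only setting the abstract highlights.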

Copyright information

© Kluwer Academic Publishers 1994

Authors and Affiliations

  • Darrell Conklin
    1. Department of Computing and Information Science, Queen's University, Kingston, Canada
  • Ian H. Witten
    2. Department of Computer Science, University of Waikato, Hamilton, New Zealand
