Guest editors' introduction: special issue on Inductive Logic Programming (ILP 2012)
This issue contains selected papers from ILP 2012, the 22nd International Conference on Inductive Logic Programming, held in Dubrovnik, Croatia, on September 17–19, 2012. Out of the 47 original submissions to the conference, the authors of 8 selected peer-reviewed papers were invited to submit revised and extended versions for this special issue. After an additional round of peer review applying the journal's criteria, 5 of these papers were finally accepted.
The selected papers demonstrate the vitality of current ILP research as applied in challenging applications (here, object recognition), as augmenting its logical foundations (meta-interpretive learning, interpretation dynamics), and as combined with extra-logical machine-learning techniques (neural networks, Bayes nets).
In particular, the paper “Plane-based Object Categorization using Relational Learning” by R. Farid and C. Sammut, which won the best student-paper prize kindly sponsored by this journal, explores the hypothesis that relational descriptions of classes of 3D objects can be learned for robust object categorization in a real robotic application. Labelled sets of planes extracted from 3D point clouds gathered during the RoboCup Rescue Robot competition serve as examples for an ILP system. The results show that ILP can be successfully applied to recognize objects encountered by a robot, in particular in an urban search and rescue environment.
The article “Meta-interpretive Learning: Application to Grammatical Inference” by S.H. Muggleton, D. Lin, N. Pahlavi and A. Tamaddoni-Nezhad develops a framework in which predicate invention and recursive generalizations are implemented using abduction with respect to a meta-interpreter. The approach is based on a previously unexplored case of inverse entailment for grammatical inference of regular languages. Meta-interpretive learning is implemented with two different declarative representations: Prolog and Answer Set Programming.
In the paper “Learning from Interpretation Transition” by K. Inoue, T. Ribeiro, and C. Sakama, a novel framework for learning normal logic programs from transitions of interpretations is proposed. The learning framework can be applied repeatedly to identify Boolean networks from their basins of attraction. Two algorithms have been implemented for this learning task and compared on examples from the biological literature.
The contribution “Fast Relational Learning using Bottom Clause Propositionalization with Artificial Neural Networks” by M.V.M. França, G. Zaverucha and A. d’Avila Garcez introduces a method for relational learning based on a novel propositionalization technique called bottom clause propositionalization, and integrates it with a well-known neural-symbolic system. In the neural-network setting, the new propositionalization method achieves significant improvements in accuracy over the existing system RSD.
The paper “Modelling Relational Statistics With Bayes Nets” by O. Schulte, H. Khosravi, A.E. Kirkpatrick, T. Gao, and Y. Zhu proposes a novel random-selection semantics for parametrized Bayes nets, a first-order logic extension of Bayes nets, which supports class-level queries such as “what is the percentage of friendship pairs where both friends are women?” The computation of empirical frequencies is made tractable in the presence of negated relations by means of the inverse Möbius transform.
We are grateful to the authors, the reviewers, and to the publisher Springer for giving us the opportunity to compile this reflection of current research in inductive logic programming.