We are pleased to present this special issue of the Machine Learning journal on Inductive Logic Programming, covering ILP 2017 and 2018.

Inductive Logic Programming (ILP) is a field at the intersection of Machine Learning and Logic Programming, based on logic as a uniform representation language for expressing examples, background knowledge and hypotheses.

Thanks to the expressiveness of first-order logic, ILP has provided an excellent means for knowledge representation and learning in fields such as graph mining, multi-relational data mining and statistical relational learning, as well as in other logic-based, non-propositional knowledge representation frameworks.

This special issue followed the 27th International Conference on Inductive Logic Programming, held in Orléans, France (September 4–6, 2017), and preceded the 28th edition, held in Ferrara, Italy (September 2–4, 2018). For ILP 2018, two different tracks were organized, defining six kinds of submissions:

  1. a Journal Track,

  2. a Conference Track, allowing five types of submissions (long papers (regular or up-and-coming), short papers, works in progress, and papers already published or accepted).

The Journal Track welcomed original papers to be published in this special issue, gathering both extended versions of papers presented at ILP 2017 and new submissions for 2018. In the open call for papers, we solicited submissions on theoretical aspects of ILP, learning in different representations, development of algorithms and systems, and applications of ILP. We received eight submissions and selected four of them for inclusion in this special issue. The articles have been peer-reviewed according to the journal’s high standards.

A few trends in ILP 2017–2018 can be identified. Work continues on the foundations of ILP and its extensions, in particular Statistical Relational Learning, and on high-performance computing for dealing with the computational costs of ILP. At the same time, new trends have appeared, exploiting knowledge graphs, exploring connections with Deep Learning, and studying meta inductive learning. Let us also mention several applications of ILP, to robotics, breast cancer, vision, and diagnostic systems.

The accepted contributions for this special issue are three extended versions of papers presented at the ILP 2017 Conference and one new submission. The accepted articles are briefly summarized below.

In Learning efficient logic programs, Andrew Cropper and Stephen H. Muggleton introduce an ILP system that aims at learning minimal time-complexity programs, including non-deterministic ones. To achieve this, the authors introduce a cost function that measures the size of the SLD-tree searched when a program is given a goal. Using iterative descent, the system iteratively learns low-cost logic programs from a relatively small number of examples, each time further restricting the hypothesis space. Supported by experiments on programming puzzles, robot strategies, and real-world string transformation problems, the approach is the first in the ILP field guaranteed to learn minimal-cost logic programs.

Semi-Supervised Online Structure Learning for Composite Event Recognition, by Evangelos Michelioudakis, Alexander Artikis, and Georgios Paliouras, proposes a new approach to structure learning designed to work on streams, for complex events defined by sets of ground atoms. It takes unlabeled data into account, labeling such data through an adaptation of graph-cut minimization and an improved distance measure between relational objects. A caching mechanism is proposed to store labeled examples. The approach targets real-world applications and is evaluated on two real datasets, one on video surveillance and the other on maritime traffic monitoring.

Lifted Discriminative Learning of Probabilistic Logic Programs, by Arnaud Nguembang Fadja and Fabrizio Riguzzi, deals with Statistical Relational Learning, a domain that has received much attention in the last decade. The authors study parameter and structure learning in a restricted case of Logic Programs with Annotated Disjunctions (LPADs): a single predicate is uncertain (the predicate to learn), while all the other predicates are defined by certain rules and facts. In this framework, they consider discriminative learning. Two methods are studied for parameter learning, namely an EM-based algorithm and a gradient-descent one, while structure learning is achieved through beam search.

In the last paper, Pascal Welke, Tamás Horváth, and Stefan Wrobel investigate Probabilistic and Exact Frequent Subtree Mining in Graphs Beyond Forests. This work illustrates the connections between ILP and graph mining. Generating features from relational data, known as propositionalization, is one way to address relational problems, and this contribution focuses on the efficiency and completeness of such feature extraction.

We would like to thank all the people who keep the ILP community alive, in particular the authors whose works are presented in this volume and the reviewers. We also express special thanks to Peter Flach, the editor-in-chief of the Machine Learning journal, and to Dragos Margineantu, the editor for special issues of the journal. We firmly believe that ILP still has a lot to bring to Machine Learning.