Hierarchical Meta-Rules for Scalable Meta-Learning

  • Quan Sun
  • Bernhard Pfahringer
Conference paper

DOI: 10.1007/978-3-319-13560-1_31

Part of the Lecture Notes in Computer Science book series (LNCS, volume 8862)
Cite this paper as:
Sun Q., Pfahringer B. (2014) Hierarchical Meta-Rules for Scalable Meta-Learning. In: Pham DN., Park SB. (eds) PRICAI 2014: Trends in Artificial Intelligence. PRICAI 2014. Lecture Notes in Computer Science, vol 8862. Springer, Cham

Abstract

The Pairwise Meta-Rules (PMR) method proposed in [18] has been shown to improve the predictive performance of several meta-learning algorithms for the algorithm ranking problem. Given m target objects (e.g., algorithms), the training complexity of the PMR method with respect to m is quadratic: \(\binom{m}{2} = m \times (m - 1) / 2\). This is usually not a problem when m is moderate, such as when ranking 20 different learning algorithms. However, for problems with a much larger m, such as the meta-learning-based parameter ranking problem, where m can be 100+, the PMR method is less efficient. In this paper, we propose a novel method named Hierarchical Meta-Rules (HMR), which is based on the theory of orthogonal contrasts. The proposed HMR method has linear training complexity with respect to m, providing a way of dealing with a large number of objects that the PMR method cannot handle efficiently. Our experimental results demonstrate the benefit of the new method in the context of meta-learning.
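As a rough illustration of the complexity gap stated in the abstract, the Python sketch below counts how many binary meta-rule models each scheme would train. It is not the authors' implementation: the balanced binary partitioning and all function names are assumptions made only to show that a pairwise scheme needs one model per unordered pair of objects, i.e. \(\binom{m}{2}\), whereas a hierarchical contrast scheme over m objects yields only m - 1 contrasts.

```python
from itertools import combinations


def pairwise_rule_count(m: int) -> int:
    """Number of binary meta-rules in a pairwise scheme: one per unordered pair, C(m, 2)."""
    return sum(1 for _ in combinations(range(m), 2))


def hierarchical_contrasts(objects):
    """Recursively split the object set into two halves; each split is treated as
    one contrast (hypothetical balanced partitioning -- the paper's actual contrast
    construction may differ). A set of m objects yields m - 1 contrasts."""
    if len(objects) < 2:
        return []
    mid = len(objects) // 2
    left, right = objects[:mid], objects[mid:]
    return [(left, right)] + hierarchical_contrasts(left) + hierarchical_contrasts(right)


if __name__ == "__main__":
    for m in (20, 100, 500):
        algos = list(range(m))
        print(m, pairwise_rule_count(m), len(hierarchical_contrasts(algos)))
    # m = 20  -> 190 pairwise rules vs 19 contrasts
    # m = 100 -> 4950 pairwise rules vs 99 contrasts
    # m = 500 -> 124750 pairwise rules vs 499 contrasts
```

The counts mirror the quadratic-versus-linear growth discussed in the abstract: for m around 20 the pairwise cost is still manageable, but at m = 100+ it grows roughly 50-fold larger than the hierarchical count.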


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Quan Sun (1)
  • Bernhard Pfahringer (1)
  1. Department of Computer Science, The University of Waikato, Hamilton, New Zealand