Evaluating the Replicability of Significance Tests for Comparing Learning Algorithms

  • Remco R. Bouckaert
  • Eibe Frank
Conference paper

DOI: 10.1007/978-3-540-24775-3_3

Part of the Lecture Notes in Computer Science book series (LNCS, volume 3056)
Cite this paper as:
Bouckaert R.R., Frank E. (2004) Evaluating the Replicability of Significance Tests for Comparing Learning Algorithms. In: Dai H., Srikant R., Zhang C. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2004. Lecture Notes in Computer Science, vol 3056. Springer, Berlin, Heidelberg

Abstract

Empirical research in learning algorithms for classification tasks generally requires the use of significance tests. The quality of a test is typically judged on Type I error (how often the test indicates a difference when it should not) and Type II error (how often it indicates no difference when it should). In this paper we argue that the replicability of a test is also of importance. We say that a test has low replicability if its outcome strongly depends on the particular random partitioning of the data that is used to perform it. We present empirical measures of replicability and use them to compare the performance of several popular tests in a realistic setting involving standard learning algorithms and benchmark datasets. Based on our results we give recommendations on which test to use.
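The abstract does not spell out the empirical measures, but the underlying idea can be illustrated with a small sketch. The code below is an assumption-laden illustration, not the measures defined in the paper: it assumes scikit-learn style classifiers, uses a simple k-fold cross-validated paired t-test as the significance test, and quantifies replicability as how often the test's decision agrees with the majority decision across repeated random partitionings. The function names (`paired_cv_test`, `replicability`) and parameter choices are hypothetical.

```python
# Hypothetical sketch of an empirical replicability measure for a
# significance test comparing two learning algorithms.
# Assumptions (not from the paper): scikit-learn style estimators,
# a k-fold CV paired t-test as the test under study, and replicability
# defined as agreement with the majority decision over repeated splits.
import numpy as np
from scipy import stats


def paired_cv_test(learner_a, learner_b, X, y, k, alpha, rng):
    """One run of the test: k-fold CV on a single random partitioning,
    paired t-test on per-fold accuracy differences.
    Returns True if the null hypothesis (equal accuracy) is rejected."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    diffs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        acc_a = learner_a().fit(X[train], y[train]).score(X[test], y[test])
        acc_b = learner_b().fit(X[train], y[train]).score(X[test], y[test])
        diffs.append(acc_a - acc_b)
    _, p_value = stats.ttest_1samp(diffs, 0.0)
    return p_value < alpha


def replicability(learner_a, learner_b, X, y, k=10, repeats=10,
                  alpha=0.05, seed=0):
    """Repeat the test under `repeats` different random partitionings and
    return the fraction of runs whose decision matches the majority decision.
    A value of 1.0 means every partitioning led to the same conclusion."""
    rng = np.random.default_rng(seed)
    decisions = [paired_cv_test(learner_a, learner_b, X, y, k, alpha, rng)
                 for _ in range(repeats)]
    majority = sum(decisions) >= len(decisions) / 2
    return sum(d == majority for d in decisions) / len(decisions)
```

For example, `replicability(lambda: DecisionTreeClassifier(), lambda: GaussianNB(), X, y)` would report how stable the decision of this particular test is on one dataset; a test whose outcome flips between partitionings scores well below 1.0. Other definitions of replicability are possible, and the paper's own measures may differ from this sketch.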


Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Remco R. Bouckaert (1, 2)
  • Eibe Frank (2)
  1. Xtal Mountain Information Technology, Auckland, New Zealand
  2. Computer Science Department, University of Waikato, Hamilton, New Zealand
