Data Mining and Knowledge Discovery, Volume 1, Issue 3, pp 317–328

On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach


  • Steven L. Salzberg
    • Department of Computer Science, Johns Hopkins University

DOI: 10.1023/A:1009752403260

Cite this article as:
Salzberg, S.L. Data Mining and Knowledge Discovery (1997) 1: 317. doi:10.1023/A:1009752403260


An important component of many data mining projects is finding a good classification algorithm, a process that requires very careful thought about experimental design. If not done carefully, comparative studies of classification and other types of algorithms can easily result in statistically invalid conclusions. This is especially true when one is using data mining techniques to analyze very large databases, which inevitably contain some statistically unlikely data. This paper describes several phenomena that, if ignored, can invalidate an experimental comparison. These phenomena and the conclusions that follow apply not only to classification, but to computational experiments in almost any aspect of data mining. The paper also discusses why comparative analysis is more important in evaluating some types of algorithms than others, and provides some suggestions about how to avoid the pitfalls suffered by many experimental studies.

Keywords: classification · comparative studies · statistical methods

Copyright information

© Kluwer Academic Publishers 1997