Data Mining and Knowledge Discovery, Volume 1, Issue 3, pp 317–328

On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach

  • Steven L. Salzberg

Abstract

An important component of many data mining projects is finding a good classification algorithm, a process that requires very careful thought about experimental design. If not done carefully, comparative studies of classification and other types of algorithms can easily result in statistically invalid conclusions. This is especially true when one is using data mining techniques to analyze very large databases, which inevitably contain some statistically unlikely data. This paper describes several phenomena that can, if ignored, invalidate an experimental comparison. These phenomena and the conclusions that follow apply not only to classification, but to computational experiments in almost any aspect of data mining. The paper also discusses why comparative analysis is more important in evaluating some types of algorithms than it is for others, and provides some suggestions about how to avoid the pitfalls suffered by many experimental studies.

Keywords: classification, comparative studies, statistical methods
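
To make the central pitfall concrete, the sketch below simulates comparing many equally accurate classifiers against a baseline on a single shared test set. It is not taken from the paper, and all parameter values (test-set size, number of competing algorithms, their shared accuracy) are hypothetical. Without a multiple-comparison adjustment such as a Bonferroni correction, some of the identical algorithms will appear "significantly" better purely by chance, which is the kind of statistically invalid conclusion the paper cautions against.

```python
# Minimal sketch (not from the paper) of the multiple-comparison pitfall:
# many equally accurate "new" algorithms are compared to a baseline on one
# test set, and without an adjusted significance level some of them look
# better purely by chance. All numbers below are hypothetical.
import math
import random

random.seed(0)

N_TEST = 1000        # size of the shared test set
N_ALGORITHMS = 50    # number of competing algorithms compared to the baseline
P_CORRECT = 0.75     # true accuracy of every algorithm, baseline included
ALPHA = 0.05         # nominal per-comparison significance level

def observed_accuracy(n, p):
    """Accuracy of a classifier that is correct on each test case with probability p."""
    return sum(random.random() < p for _ in range(n)) / n

baseline = observed_accuracy(N_TEST, P_CORRECT)

# Rough two-proportion z-test: how much higher than the baseline an observed
# accuracy must be before the difference looks "significant" at the 0.05 level.
std_err = math.sqrt(2 * P_CORRECT * (1 - P_CORRECT) / N_TEST)
threshold = 1.96 * std_err

spurious_wins = sum(
    observed_accuracy(N_TEST, P_CORRECT) - baseline > threshold
    for _ in range(N_ALGORITHMS)
)
print(f"{spurious_wins} of {N_ALGORITHMS} identical algorithms appear significantly better")
print(f"A Bonferroni-corrected level would be {ALPHA / N_ALGORITHMS:.4f} instead of {ALPHA}")
```

With these settings the chance that at least one of the fifty identical algorithms clears the uncorrected threshold is high, even though no algorithm is actually better than the baseline.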

Copyright information

© Kluwer Academic Publishers 1997

Authors and Affiliations

  • Steven L. Salzberg
    Department of Computer Science, Johns Hopkins University, Baltimore, USA
