Estimating the Predictive Accuracy of a Classifier

  • Hilan Bensusan
  • Alexandros Kalousis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2167)

Abstract

This paper investigates the use of meta-learning to estimate the predictive accuracy of a classifier. We present a scenario where meta-learning is seen as a regression task and consider its potential in connection with three strategies of dataset characterization. We show that it is possible to estimate classifier performance with a high degree of confidence and gain knowledge about the classifier through the regression models generated. We exploit the results of the models to predict the ranking of the inducers. We also show that the best strategy for performance estimation is not necessarily the best one for ranking generation.
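The meta-learning setup described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the meta-feature values, the linear form of the meta-regressor, and the three inducer names are all hypothetical. Each dataset is described by meta-features; a per-classifier regression model maps those meta-features to an estimated accuracy, and sorting the estimates yields a ranking of the inducers.

```python
# Sketch of meta-learning as a regression task (all numbers hypothetical).
# A meta-regressor per classifier maps dataset meta-features to estimated
# accuracy; per-classifier estimates are then sorted to rank the inducers.

def predict_accuracy(weights, bias, meta_features):
    """Linear meta-regression: estimated accuracy from dataset meta-features."""
    return bias + sum(w * f for w, f in zip(weights, meta_features))

def rank_inducers(models, meta_features):
    """Rank classifiers by their predicted accuracy on one dataset."""
    estimates = {name: predict_accuracy(w, b, meta_features)
                 for name, (w, b) in models.items()}
    return sorted(estimates, key=estimates.get, reverse=True)

# Hypothetical fitted meta-models: (weights, bias) per inducer.
models = {
    "c4.5":   ([0.02, -0.01], 0.80),
    "ripper": ([0.01,  0.00], 0.78),
    "ltree":  ([0.03, -0.02], 0.75),
}

# Hypothetical meta-features of a new dataset
# (e.g. class entropy, mean attribute correlation).
dataset_meta = [1.5, 2.0]
print(rank_inducers(models, dataset_meta))  # → ['c4.5', 'ripper', 'ltree']
```

Note that, as the abstract stresses, a characterization strategy that gives the best accuracy estimates need not give the best rankings: ranking only depends on the relative order of the estimates, not on their absolute error.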



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Hilan Bensusan (1)
  • Alexandros Kalousis (2)
  1. Department of Computer Science, University of Bristol, Bristol, England
  2. CSD, University of Geneva, Geneva 4, Switzerland
