Fusion of Meta-knowledge and Meta-data for Case-Based Model Selection

  • Melanie Hilario
  • Alexandros Kalousis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2168)

Abstract

Meta-learning for model selection, as reported in the symbolic machine learning community, can be described as follows. First, it is cast as a purely data-driven predictive task. Second, it typically relies on a mapping of dataset characteristics to some measure of generalization performance (e.g., error). Third, it tends to ignore the role of algorithm parameters by relying mostly on default settings. This paper describes a case-based system for model selection which combines knowledge and data in selecting a (set of) algorithm(s) to recommend for a given task. The knowledge consists mainly of the similarity measures used to retrieve records of past learning experiences as well as profiles of learning algorithms incorporated into the conceptual meta-model. In addition to the usual dataset characteristics and error rates, the case base includes objects describing the evaluation strategy and the learner parameters used. These have two major roles: they ensure valid and meaningful comparisons between independently reported findings, and they facilitate replication of past experiments. Finally, the case-based meta-learner can be used not only as a predictive tool but also as an exploratory tool for gaining further insight into previously tested algorithms and datasets.
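
The case-based architecture the abstract outlines, cases that pair dataset characteristics with the algorithm used, its parameter settings, the evaluation strategy, and the resulting error, lends itself to a compact illustration. The Python fragment below is a minimal sketch, not the authors' implementation: the Case record, the feature names, and the simple inverse-distance similarity are all assumptions standing in for the system's knowledge-based similarity measures and richer case structure.

```python
from dataclasses import dataclass
from math import sqrt

# Illustrative case record (hypothetical fields): each past experiment
# bundles dataset characteristics with the algorithm, the parameter
# settings and evaluation strategy actually used, and the observed error.
@dataclass
class Case:
    dataset_features: dict   # e.g. {"n_instances": 1000.0, "n_classes": 2.0}
    algorithm: str           # e.g. "C4.5" or "RBF network"
    parameters: dict         # learner parameters used in the experiment
    evaluation: dict         # e.g. {"method": "10-fold CV", "seed": 1}
    error_rate: float        # measured generalization error

def similarity(query: dict, feats: dict) -> float:
    """Toy inverse-Euclidean similarity over shared numeric meta-features.
    The real system encodes knowledge-based similarity measures instead."""
    shared = query.keys() & feats.keys()
    dist = sqrt(sum((query[k] - feats[k]) ** 2 for k in shared))
    return 1.0 / (1.0 + dist)

def recommend(query: dict, case_base: list, k: int = 3) -> list:
    """Retrieve the k cases most similar to the query dataset,
    then rank the retrieved cases by their recorded error."""
    nearest = sorted(case_base,
                     key=lambda c: similarity(query, c.dataset_features),
                     reverse=True)[:k]
    return sorted(nearest, key=lambda c: c.error_rate)
```

Recording the evaluation strategy and parameters alongside the error, as the abstract stresses, is what makes the final ranking step meaningful: error rates measured under different protocols would not be directly comparable.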

Keywords

Model Selection · Learning Algorithm · Modelling Tool · Evaluation Strategy · Radial Basis Function Network

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Melanie Hilario (1)
  • Alexandros Kalousis (1)

  1. CUI, University of Geneva, Geneva 4
