Machine Learning, Volume 54, Issue 3, pp 187–193

Introduction to the Special Issue on Meta-Learning

  • Christophe Giraud-Carrier
  • Ricardo Vilalta
  • Pavel Brazdil

Abstract

Recent advances in meta-learning are providing the foundations to construct meta-learning assistants and task-adaptive learners. The goal of this special issue is to foster an interest in meta-learning by compiling representative work in the field. The contributions to this special issue provide strong insights into the construction of future meta-learning tools. In this introduction we present a common frame of reference to address work in meta-learning through the concept of meta-knowledge. We show how meta-learning can be simply defined as the process of exploiting knowledge about learning that enables us to understand and improve the performance of learning algorithms.
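The idea of exploiting knowledge about learning can be made concrete with a minimal sketch of one common form of meta-learning, algorithm selection: describe each dataset by a few meta-features, record which learner performed best on it, and recommend a learner for a new dataset by nearest neighbour in meta-feature space. The meta-features, learner names, and tiny knowledge base below are hypothetical illustrations, not the method of any paper in this issue.

```python
import math

def meta_features(n_rows, n_features, class_counts):
    """Describe a dataset by (log) size, (log) dimensionality, and class entropy."""
    total = sum(class_counts)
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in class_counts if c > 0)
    return (math.log10(n_rows), math.log10(n_features), entropy)

# Hypothetical meta-knowledge base: meta-features of past datasets paired
# with the learner that performed best on each in earlier experiments.
META_KB = [
    (meta_features(100, 5, [50, 50]), "decision tree"),
    (meta_features(100000, 20, [90000, 10000]), "naive bayes"),
    (meta_features(5000, 300, [2500, 2500]), "linear SVM"),
]

def recommend(n_rows, n_features, class_counts):
    """Recommend the learner that excelled on the most similar past dataset."""
    query = meta_features(n_rows, n_features, class_counts)
    def dist(entry):
        feats, _ = entry
        return sum((a - b) ** 2 for a, b in zip(feats, query))
    _, learner = min(META_KB, key=dist)
    return learner

print(recommend(200, 8, [120, 80]))  # → decision tree
```

A small, fairly balanced dataset lands nearest the first knowledge-base entry, so its best learner is recommended; real meta-learning systems replace the toy meta-features and nearest-neighbour rule with richer dataset characterisations and learned meta-models.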

Keywords: meta-learning, meta-knowledge, inductive bias, dynamic bias selection


Copyright information

© Kluwer Academic Publishers 2004

Authors and Affiliations

  • Christophe Giraud-Carrier (1)
  • Ricardo Vilalta (2)
  • Pavel Brazdil (3)
  1. ELCA Informatique SA, Lausanne, Switzerland
  2. Department of Computer Science, University of Houston, Houston, USA
  3. LIACC / Faculty of Economics, University of Porto, Porto, Portugal
