Towards Automatic Generation of Metafeatures

  • Fábio Pinto
  • Carlos Soares
  • João Mendes-Moreira
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9651)

Abstract

The selection of metafeatures for metalearning (MtL) is often an ad hoc process. Frequently, there is no clear motivation for choosing one metafeature over another, which may cause a loss of valuable information for a given problem (e.g., using class entropy but not attribute entropy). We present a framework to systematically generate metafeatures in the context of MtL. This framework decomposes a metafeature into three components: meta-function, object and post-processing. The automatic generation of metafeatures is triggered by the selection of a meta-function, which is then used to systematically generate metafeatures from all possible combinations of object and post-processing alternatives. We ran experiments on the problem of algorithm selection for classification datasets. Results show that the sets of systematic metafeatures generated by our framework are more informative than the non-systematic ones and than the set regarded as state-of-the-art.
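The three-component decomposition described above can be sketched in code. In this hypothetical sketch, entropy serves as the meta-function, the class vector and the attribute columns serve as objects, and simple aggregations (mean, max, min) serve as post-processing; these particular choices are illustrative assumptions, not the paper's exact catalogue of components.

```python
import math
from collections import Counter
from statistics import mean

def entropy(values):
    # Shannon entropy of a discrete value sequence (the meta-function).
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Object extractors: each maps a dataset (attribute rows X, class vector y)
# to one or more value sequences the meta-function is applied to.
# These two objects are illustrative, not an exhaustive set.
OBJECTS = {
    "class": lambda X, y: [y],
    "attributes": lambda X, y: [list(col) for col in zip(*X)],
}

# Post-processing alternatives: aggregate per-object results into one scalar.
POST = {"mean": mean, "max": max, "min": min}

def generate_metafeatures(X, y, meta_function, fname):
    """Combine one meta-function with every object and post-processing
    alternative, yielding a named set of systematic metafeatures."""
    feats = {}
    for obj_name, extract in OBJECTS.items():
        results = [meta_function(seq) for seq in extract(X, y)]
        for post_name, aggregate in POST.items():
            feats[f"{fname}.{obj_name}.{post_name}"] = aggregate(results)
    return feats

# Example: a tiny 4-example, 2-attribute classification dataset.
X = [[0, 1], [1, 1], [0, 0], [1, 0]]
y = [0, 0, 1, 1]
feats = generate_metafeatures(X, y, entropy, "entropy")
# Produces 6 metafeatures, e.g. "entropy.attributes.mean".
```

Selecting a single meta-function thus yields the full cross-product of object and post-processing alternatives automatically, which is the systematic generation the framework advocates.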

Keywords

Metalearning · Systematic metafeatures · Algorithm selection · Classification

Acknowledgments

This research has received funding from the ECSEL Joint Undertaking, under the Framework Programme for Research and Innovation Horizon 2020 (2014–2020), grant agreement no. 662189-MANTIS-2014-1.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Fábio Pinto (1)
  • Carlos Soares (1)
  • João Mendes-Moreira (1)
  1. INESC TEC/Faculdade de Engenharia, Universidade do Porto, Porto, Portugal