Algorithm Selection via Meta-Learning and Active Meta-Learning

  • Nirav Bhatt
  • Amit Thakkar
  • Nikita Bhatt
  • Purvi Prajapati
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 141)

Abstract

Finding the most suitable classifier for a dataset is possible either through cross-validation, which is computationally expensive, or through expert advice, which is not always available. Meta-Learning can automate this process by generating Meta-Examples, each combining the performance results of classification algorithms on an input dataset with that dataset's Meta-Features. However, the growing number of datasets and the complexity of the underlying algorithms make even the Meta-Learning process expensive. Active Meta-Learning addresses this by selectively generating Meta-Examples while maintaining the performance of the recommended classification algorithms. The work proposed here ranks classifiers using the SRR and ARR ranking methods and compares Meta-Learning with Active Meta-Learning. An evaluation methodology based on the ideal ranking is presented, showing that the proposed method yields significantly better rankings with fewer Meta-Examples. The executed experiments reveal a considerable improvement in Meta-Learning performance, supporting non-expert users in the selection of classification algorithms.
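
The abstract does not define SRR and ARR; the sketch below follows the standard formulations from the ranking literature (Soares and Brazdil's zoomed-ranking line of work), in which SRR compares algorithms i and j on dataset d by the ratio of their success rates, SRR(d, i, j) = SR_i^d / SR_j^d, and ARR additionally penalizes slow algorithms, ARR(d, i, j) = (SR_i^d / SR_j^d) / (1 + AccD * log(T_i^d / T_j^d)), with AccD controlling how much accuracy the user trades for speed. Each algorithm is ranked by its mean ratio over all rivals and datasets. The function names, the AccD default, and the toy numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def srr_ranking(success_rates):
    """Rank algorithms by mean Success Rate Ratio (SRR).

    success_rates: shape (n_datasets, n_algorithms), the accuracy of
    each algorithm on each dataset. Returns indices, best first.
    """
    sr = np.asarray(success_rates, dtype=float)
    # ratios[d, i, j] = SR_i^d / SR_j^d
    ratios = sr[:, :, None] / sr[:, None, :]
    # The diagonal (i == j) terms equal 1 for every algorithm, so
    # including them shifts all means equally and keeps the ordering.
    mean_ratio = ratios.mean(axis=(0, 2))
    return np.argsort(-mean_ratio)

def arr_ranking(success_rates, times, acc_d=0.1):
    """Rank algorithms by mean Adjusted Ratio of Ratios (ARR).

    times: execution time per algorithm per dataset, same shape as
    success_rates. acc_d is an assumed default, not the paper's value.
    """
    sr = np.asarray(success_rates, dtype=float)
    t = np.asarray(times, dtype=float)
    sr_ratio = sr[:, :, None] / sr[:, None, :]
    # Slower algorithms (T_i > T_j) get a larger denominator.
    time_penalty = 1.0 + acc_d * np.log(t[:, :, None] / t[:, None, :])
    mean_arr = (sr_ratio / time_penalty).mean(axis=(0, 2))
    return np.argsort(-mean_arr)

# Toy example: 3 datasets x 3 classifiers (illustrative numbers only).
acc = [[0.90, 0.85, 0.80], [0.70, 0.75, 0.65], [0.88, 0.80, 0.83]]
sec = [[1.0, 0.2, 0.5], [2.0, 0.4, 0.9], [1.5, 0.3, 0.7]]
print("SRR order:", srr_ranking(acc))
print("ARR order:", arr_ranking(acc, sec))
```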

Keywords

Meta-learning · Active meta-learning · SRR (Success Rate Ratio) · ARR (Adjusted Ratio of Ratios)
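
The "evaluation methodology based on ideal ranking" can be made concrete with Spearman's rank correlation, the usual way a recommended ranking is scored against the ideal ranking obtained by actually cross-validating every classifier on the target dataset. A minimal sketch, assuming SciPy is available; the rankings are invented for illustration:

```python
from scipy.stats import spearmanr

# Ideal ranking: classifier positions obtained by running and
# cross-validating every candidate classifier on the target dataset.
ideal = [1, 2, 3, 4, 5]
# Recommended ranking produced by the meta-learner for the same classifiers.
recommended = [2, 1, 3, 5, 4]

# rho = 1 means the recommendation reproduces the ideal ranking exactly;
# rho near 0 means it is no better than a random ranking.
rho, _ = spearmanr(ideal, recommended)
print(f"Spearman correlation with the ideal ranking: {rho:.2f}")
```

Averaging this coefficient over all held-out datasets gives a single score on which Meta-Learning (built from all Meta-Examples) and Active Meta-Learning (built from a reduced, actively selected subset) can be compared.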

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Nirav Bhatt (1)
  • Amit Thakkar (1)
  • Nikita Bhatt (2)
  • Purvi Prajapati (1)
  1. Department of Information Technology, CSPIT, CHARUSAT, Petlad, India
  2. U and P U. Patel Department of Computer Engineering, CSPIT, CHARUSAT, Petlad, India
