Knowledge and Information Systems, Volume 42, Issue 1, pp 147–180

An automatic extraction method of the domains of competence for learning classifiers using data complexity measures

  • Julián Luengo
  • Francisco Herrera
Regular Paper


The constant appearance of new algorithms and problems in data mining makes it impossible to know in advance whether a model will perform well or poorly until it is applied, which can be costly. It would be useful to have a procedure that indicates, prior to applying the learning algorithm and without comparing it against other methods, whether the outcome will be good or bad, using only the information available in the data. In this work, we present an automatic extraction method that determines the domains of competence of a classifier using a set of data complexity measures proposed for the classification task. These domains encode the characteristics of the problems for which the classifier is or is not well suited, relating geometrical properties of the data that may cause difficulty to the final accuracy obtained by the classifier. To this end, the proposal applies 12 data complexity metrics over a large benchmark of datasets in order to analyze the behavior patterns of the method, obtaining intervals of the data complexity measures associated with good or bad performance. Three classical yet distinct algorithms are used as representative classifiers to analyze the proposal: C4.5, SVM and k-NN. From these intervals, two simple rules are obtained for each classifier, describing its good and bad behavior and allowing the user to characterize the quality of a method's response from a dataset's complexity. These rules have been validated on fresh problems, showing that they are both general and accurate. Thus, it can be established whether the classifier will perform well or poorly prior to its application.
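The idea can be illustrated with one of the Ho–Basu complexity measures the paper builds on: the maximum Fisher's discriminant ratio (F1), which quantifies how well the best single feature separates two classes. The sketch below is not the paper's implementation; the interval threshold in `good_behavior_expected` is a purely hypothetical stand-in for the kind of rule the method extracts.

```python
import numpy as np

def fisher_ratio_f1(X, y):
    """Maximum Fisher's discriminant ratio (F1) for a two-class dataset.

    For each feature: (mu1 - mu2)^2 / (var1 + var2); F1 is the maximum
    over all features. A higher F1 means at least one feature separates
    the classes well, i.e., an "easier" problem.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    c1, c2 = np.unique(y)[:2]
    a, b = X[y == c1], X[y == c2]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0)
    # Features with zero pooled variance contribute a ratio of 0.
    ratios = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return float(ratios.max())

def good_behavior_expected(f1_value, threshold=0.6):
    """Hypothetical interval rule of the kind the method extracts,
    e.g. 'if F1 >= threshold then good behavior is expected'.
    The threshold here is illustrative, not from the paper."""
    return f1_value >= threshold

# Toy example: two well-separated Gaussian classes in 2D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
f1 = fisher_ratio_f1(X, y)
print(f1, good_behavior_expected(f1))
```

In the paper, 12 such measures are computed per dataset, and the extracted rules combine intervals over several of them rather than a single threshold.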


Keywords: Classification · Data complexity · Domains of competence · C4.5 · Support vector machines · K-nearest neighbor



Supported by the Research Projects TIN2011-28488 and P10-TIC-06858.



Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  1. Dept. of Civil Engineering, University of Burgos, Burgos, Spain
  2. Dept. of Computer Science and Artificial Intelligence, CITIC-University of Granada, Granada, Spain
  3. Faculty of Computing and Information Technology—North Jeddah, King Abdulaziz University, Jeddah, Saudi Arabia
