
Proposal for a Unified Methodology for Evaluating Supervised and Non-supervised Classification Algorithms

  • Salvador Godoy-Calderón
  • J. Fco. Martínez-Trinidad
  • Manuel Lazo Cortés
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4225)

Abstract

There is presently no unified methodology for evaluating both supervised and non-supervised classification algorithms. Supervised problems are evaluated through Quality Functions, which require a previously known solution to the problem, while non-supervised problems are evaluated through various Structural Indexes, which do not judge the classification algorithm by the same pattern-similarity criteria embedded in that algorithm. In both cases, much useful information remains hidden or is ignored by the evaluation method, such as the quality of the supervision sample or the structural change the classification algorithm produces on the sample. This paper proposes a unified methodology for evaluating classification problems of both kinds. It makes comparative evaluations possible and gives the evaluator more information about the quality of the initial sample, when one exists, and about the change produced by the classification algorithm.
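To make the contrast concrete, here is a minimal sketch (not the authors' methodology) of the two evaluation styles the abstract describes: a Quality Function that requires a previously known solution, and a Structural Index that scores only the structure of the resulting partition. All function names and the toy data are illustrative assumptions; the structural index follows the spirit of the well-known Xie-Beni compactness/separation ratio.

```python
# Minimal sketch contrasting the two evaluation styles described above.
# Names and data are illustrative assumptions, not the paper's method.
import numpy as np

def quality_function(y_true, y_pred):
    """Supervised-style evaluation: requires a previously known solution
    (ground-truth labels) and scores agreement with it (here: accuracy)."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

def structural_index(X, labels):
    """Unsupervised-style evaluation: no ground truth; scores only the
    structure of the partition via a compactness/separation ratio in the
    spirit of the Xie-Beni index (lower is better)."""
    X, labels = np.asarray(X, dtype=float), np.asarray(labels)
    centroids = {k: X[labels == k].mean(axis=0) for k in np.unique(labels)}
    # Compactness: mean squared distance of patterns to their own centroid.
    compactness = sum(((X[labels == k] - c) ** 2).sum()
                      for k, c in centroids.items()) / len(X)
    # Separation: smallest squared distance between any two centroids.
    cs = np.array(list(centroids.values()))
    separation = min(np.sum((a - b) ** 2)
                     for i, a in enumerate(cs) for b in cs[i + 1:])
    return compactness / separation

# Toy data: two well-separated 2-D clusters.
X = [[0, 0], [0, 1], [1, 0], [9, 9], [9, 10], [10, 9]]
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1]               # hypothetical classifier output

print(quality_function(y_true, y_pred))   # needs the known solution
print(structural_index(X, y_pred))        # needs only the data + partition
```

The asymmetry is visible in the signatures: the quality function cannot be computed without ground-truth labels, while the structural index sees only the data and the resulting partition, never the similarity criteria the classifier itself used.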

Keywords

Classification Problem · Classification Algorithm · Structural Index · Quality Function · Final Covering


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Salvador Godoy-Calderón (1)
  • J. Fco. Martínez-Trinidad (2)
  • Manuel Lazo Cortés (3)

  1. Center for Computing Research, IPN, Mexico
  2. Computer Science Department, INAOE, Puebla, Mexico
  3. Pattern Recognition Group, ICIMAF, Havana, Cuba
