Abstract
There is presently no unified methodology for evaluating both supervised and non-supervised classification algorithms. Supervised problems are evaluated through Quality Functions that require a previously known solution to the problem, while non-supervised problems are evaluated through various Structural Indexes that do not judge the classification algorithm by the same pattern-similarity criteria embedded in the algorithm itself. In both cases, much useful information remains hidden or is ignored by the evaluation method, such as the quality of the supervision sample or the structural change that the classification algorithm induces on the sample. This paper proposes a unified methodology for evaluating classification problems of both kinds; it makes comparative evaluations possible and gives the evaluator more information about the quality of the initial sample, when one exists, and about the change produced by the classification algorithm.
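The contrast the abstract draws can be made concrete with a minimal sketch. The two functions below are illustrative stand-ins, not the paper's methodology: a supervised Quality Function (plain accuracy, which needs the known solution) versus a simple unsupervised Structural Index (mean within-cluster distance over mean between-cluster distance, which judges only the structure of the partition). All names and data are hypothetical.

```python
def accuracy(true_labels, predicted_labels):
    """Supervised Quality Function: requires a previously known solution."""
    matches = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return matches / len(true_labels)

def separation_index(points, clusters):
    """Unsupervised Structural Index: mean within-cluster distance
    divided by mean between-cluster distance (lower is better).
    Note it never consults a ground-truth labeling."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    within, between = [], []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            (within if clusters[i] == clusters[j] else between).append(d)
    return (sum(within) / len(within)) / (sum(between) / len(between))

# Supervised case: evaluation compares against the known solution.
acc = accuracy([0, 0, 1, 1], [0, 1, 1, 1])  # 3 of 4 correct -> 0.75

# Unsupervised case: evaluation sees only the partition's geometry.
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
idx = separation_index(pts, [0, 0, 1, 1])   # compact, well-separated -> well below 1
```

Note the asymmetry the paper targets: `accuracy` cannot run without a supervision sample, while `separation_index` applies its own similarity criterion regardless of the one the clustering algorithm actually used.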
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Godoy-Calderón, S., Martínez-Trinidad, J.F., Cortés, M.L. (2006). Proposal for a Unified Methodology for Evaluating Supervised and Non-supervised Classification Algorithms. In: Martínez-Trinidad, J.F., Carrasco Ochoa, J.A., Kittler, J. (eds) Progress in Pattern Recognition, Image Analysis and Applications. CIARP 2006. Lecture Notes in Computer Science, vol 4225. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11892755_70
DOI: https://doi.org/10.1007/11892755_70
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-46556-0
Online ISBN: 978-3-540-46557-7