Abstract
We derive upper and lower limits on the majority vote accuracy with respect to individual accuracy p, the number of classifiers in the pool (L), and the pairwise dependence between classifiers, measured by Yule's Q statistic. Independence between individual classifiers is typically viewed as an asset in classifier fusion. We show that the majority vote with dependent classifiers can potentially offer a dramatic improvement both over independent classifiers and over an individual classifier with accuracy p. A functional relationship between the limits and the pairwise dependence Q is derived. Two patterns of the joint distribution for classifier outputs (correct/incorrect) are identified to derive the limits: the pattern of success and the pattern of failure. The results support the intuition that negative pairwise dependence is beneficial, although not straightforwardly related to accuracy. The pattern of success shows that, for the highest improvement over p, all pairs of classifiers in the pool should have the same negative dependence.
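The two quantities the abstract builds on have standard textbook definitions: the majority vote accuracy of L independent classifiers of equal accuracy p (a binomial tail sum), and Yule's Q for a pair of classifiers computed from their 2x2 table of joint correct/incorrect counts. A minimal sketch of these baseline definitions (not the paper's dependence-aware bounds) follows; the function names are illustrative, not from the paper:

```python
from math import comb

def majority_vote_accuracy(p: float, L: int) -> float:
    """Accuracy of a majority vote over L independent classifiers,
    each with individual accuracy p (L assumed odd)."""
    m = L // 2 + 1  # minimum number of correct votes for a majority
    return sum(comb(L, k) * p**k * (1 - p)**(L - k) for k in range(m, L + 1))

def yule_q(n11: int, n10: int, n01: int, n00: int) -> float:
    """Yule's Q for a pair of classifiers, from the 2x2 table of counts:
    n11 = both correct, n10/n01 = exactly one correct, n00 = both wrong.
    Q ranges from -1 (maximal negative dependence) to +1."""
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

# With p = 0.6 and L = 9 independent voters, the majority vote
# already improves noticeably on the individual accuracy.
print(round(majority_vote_accuracy(0.6, 9), 4))  # → 0.7334
```

This independent-classifier value is the reference point the paper improves on: the derived limits show how far above (pattern of success) or below (pattern of failure) this baseline the majority vote can move once pairwise dependence Q is allowed to vary.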
Correspondence and offprint requests to: L. I. Kuncheva, School of Informatics, University of Wales, Bangor LL57 1UT, Gwynedd, UK. Email: l.i.kuncheva@bangor.ac.uk
Cite this article
Kuncheva, L., Whitaker, C., Shipp, C. et al. Limits on the majority vote accuracy in classifier fusion. Pattern Anal Appl 6, 22–31 (2003). https://doi.org/10.1007/s10044-002-0173-7