Statistical comparison of classifiers through Bayesian hierarchical modelling
The accuracy of two competing classifiers is usually compared through null hypothesis significance tests. Yet such tests suffer from important shortcomings, which can be overcome by switching to Bayesian hypothesis testing. We propose a Bayesian hierarchical model that jointly analyzes the cross-validation results obtained by two classifiers on multiple data sets. By pooling the results from all data sets and applying shrinkage to the estimates, the model estimates the difference between the classifiers on each individual data set more accurately than the traditional approach of averaging the cross-validation results independently on each data set. The model eventually returns the posterior probability that the accuracies of the two classifiers are practically equivalent or significantly different.
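To make the idea concrete, the sketch below is a deliberately simplified stand-in for the paper's hierarchical model (which is not reproduced here): an empirical-Bayes normal-normal hierarchy that shrinks per-data-set estimates of the accuracy difference toward a common mean and reads off posterior probabilities against a region of practical equivalence (ROPE). All data, the 1% ROPE width, and the moment-based estimation are hypothetical choices for illustration only.

```python
# Minimal sketch, not the authors' exact model: hierarchical shrinkage
# of per-data-set accuracy differences plus a ROPE-based decision.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical cross-validation summaries for 10 data sets:
# x[i] = mean accuracy difference (classifier A - classifier B),
# s[i] = standard error of that mean on data set i.
x = rng.normal(0.01, 0.02, size=10)
s = np.full(10, 0.015)

# Hierarchy: delta_i ~ N(delta0, tau^2), x_i | delta_i ~ N(delta_i, s_i^2).
# Crude method-of-moments estimates of the population-level parameters.
delta0 = x.mean()
tau2 = max(x.var(ddof=1) - (s ** 2).mean(), 1e-6)

# Posterior of each delta_i is normal; its mean is a precision-weighted
# average of the data-set estimate and the population mean (shrinkage).
post_prec = 1.0 / s ** 2 + 1.0 / tau2
post_mean = (x / s ** 2 + delta0 / tau2) / post_prec
post_sd = np.sqrt(1.0 / post_prec)

# ROPE: differences smaller than 1% accuracy count as practically equivalent.
rope = 0.01
p_equiv = norm.cdf(rope, post_mean, post_sd) - norm.cdf(-rope, post_mean, post_sd)
p_a_better = 1.0 - norm.cdf(rope, post_mean, post_sd)
p_b_better = norm.cdf(-rope, post_mean, post_sd)

for i in range(len(x)):
    print(f"data set {i}: shrunken diff {post_mean[i]:+.3f}, "
          f"P(equivalent)={p_equiv[i]:.2f}, P(A better)={p_a_better[i]:.2f}, "
          f"P(B better)={p_b_better[i]:.2f}")
```

Because every data set borrows strength from the others through the shared mean and variance, noisy per-data-set estimates are pulled toward the overall difference, which is the shrinkage effect described in the abstract.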
The research in this paper has been partially supported by the Swiss NSF grants no. IZKSZ2_162188 and no. 200021_146606.