Evaluating Predictive Uncertainty Challenge
This Chapter presents the PASCAL Evaluating Predictive Uncertainty Challenge, introduces the contributed Chapters by the participants who obtained outstanding results, and provides a discussion with some lessons to be learnt. The Challenge was set up to evaluate the ability of Machine Learning algorithms to provide good “probabilistic predictions”, rather than just the usual “point predictions” with no measure of uncertainty, in regression and classification problems. Participants competed on a number of regression and classification tasks, and were evaluated both by traditional losses that take only point predictions into account and by losses we proposed that evaluate the quality of the probabilistic predictions.
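To make the distinction between the two kinds of losses concrete, the following is a minimal sketch contrasting a traditional point-prediction loss (mean squared error) with a probabilistic loss (an average negative log predictive density, assuming Gaussian predictive distributions). The function names and data are illustrative, not the Challenge's actual evaluation code:

```python
import math

def mean_squared_error(y_true, y_pred):
    # Traditional point-prediction loss: ignores any uncertainty estimate.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def neg_log_predictive_density(y_true, means, variances):
    # Probabilistic loss for Gaussian predictive distributions:
    # penalises inaccurate means and badly calibrated variances.
    total = 0.0
    for t, m, v in zip(y_true, means, variances):
        total += 0.5 * (math.log(2 * math.pi * v) + (t - m) ** 2 / v)
    return total / len(y_true)

# Two predictors with identical means (hence identical squared error)
# can receive different probabilistic scores, depending on how well
# their predictive variances match the actual errors.
y  = [1.0, 2.0, 3.0]
mu = [1.1, 1.9, 3.2]
print(mean_squared_error(y, mu))
print(neg_log_predictive_density(y, mu, [0.05, 0.05, 0.05]))
print(neg_log_predictive_density(y, mu, [1.0, 1.0, 1.0]))
```

Under the squared error the two variance choices are indistinguishable; under the log loss, predictive distributions that are sharp yet consistent with the observed errors score better than needlessly wide ones.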