Cross-conformal predictors

  • Vladimir Vovk

Abstract

Inductive conformal predictors were designed to overcome the computational inefficiency that conformal predictors exhibit for many underlying prediction algorithms. Although computationally efficient, inductive conformal predictors sacrifice different parts of the training set at different stages of prediction, which harms their informational efficiency. This paper introduces the method of cross-conformal prediction, a hybrid of inductive conformal prediction and cross-validation, and studies its validity and informational efficiency empirically. The computational efficiency of cross-conformal predictors is comparable to that of inductive conformal predictors, and they produce valid predictions in our empirical studies.
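The hybrid described in the abstract can be sketched as follows: split the training set into K folds, score each fold with a model trained on the remaining folds, and pool all scores into one p-value per candidate label. This is a minimal illustration, not the paper's experimental setup; the 1-NN distance-ratio nonconformity score and the helper names `nn_score` and `cross_conformal_pvalue` are illustrative assumptions.

```python
import numpy as np

def nn_score(X_tr, y_tr, x, y):
    """Assumed nonconformity measure: distance from x to its nearest training
    example with label y, divided by the distance to the nearest example with
    any other label.  Large values mean (x, y) looks strange."""
    same = np.linalg.norm(X_tr[y_tr == y] - x, axis=1)
    diff = np.linalg.norm(X_tr[y_tr != y] - x, axis=1)
    return same.min() / (diff.min() + 1e-12)

def cross_conformal_pvalue(X, y, x_new, y_cand, K=5, rng=None):
    """Cross-conformal p-value for the candidate label y_cand at x_new.

    The training set is split into K folds; each fold's examples are scored
    by the model fitted on the other K-1 folds (here, the raw 1-NN score),
    and the same fold model scores (x_new, y_cand).  The counts are pooled:
        p = (1 + #{i : alpha_i >= alpha_fold(x_new, y_cand)}) / (n + 1).
    """
    rng = np.random.default_rng(rng)
    n = len(y)
    folds = np.array_split(rng.permutation(n), K)
    count = 0
    for fold in folds:
        mask = np.ones(n, dtype=bool)
        mask[fold] = False                 # train on the other folds
        X_tr, y_tr = X[mask], y[mask]
        a_new = nn_score(X_tr, y_tr, x_new, y_cand)
        for i in fold:                     # score this fold's examples
            count += nn_score(X_tr, y_tr, X[i], y[i]) >= a_new
    return (count + 1) / (n + 1)
```

A prediction set at significance level ε is then the set of labels whose p-value exceeds ε; the point of the construction is that, unlike an inductive conformal predictor, no part of the training set is permanently reserved for calibration.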

Keywords

Conformal predictors · Cross-validation · Inductive conformal predictors · Tolerance regions

Mathematics Subject Classifications (2010)

68T05 · 68Q32 · 62G15

Supplementary material

10472_2013_9368_MOESM1_ESM.zip (ZIP 65.6 kb)
10472_2013_9368_MOESM2_ESM.zip (ZIP 36.6 kb)

Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Computer Learning Research Centre, Department of Computer Science, Royal Holloway, University of London, Surrey, UK