The Bias Variance Trade-Off in Bootstrapped Error Correcting Output Code Ensembles

  • Raymond S. Smith
  • Terry Windeatt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5519)


By performing experiments on publicly available multi-class datasets we examine the effect of bootstrapping on the bias/variance behaviour of error-correcting output code ensembles. We present evidence to show that the general trend is for bootstrapping to reduce variance but to slightly increase bias error. This generally leads to an improvement in the lowest attainable ensemble error; however, this is not always the case, and bootstrapping appears to be most useful on datasets where the non-bootstrapped ensemble classifier is prone to overfitting.
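To make the setup concrete, the following is a minimal sketch of a bootstrapped error-correcting output code (ECOC) ensemble. It is illustrative only, not the authors' experimental pipeline: the paper uses SVM and neural-network base classifiers on UCI datasets, whereas this sketch uses synthetic Gaussian data and a simple nearest-mean linear rule per bit so that it stays self-contained. Each column of the code matrix defines a binary relabelling of the classes; each base learner is trained on a bootstrap resample of the relabelled data; decoding assigns the class whose codeword is nearest in Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: three well-separated 2-D Gaussian classes.
n_per, d = 60, 2
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([rng.normal(m, 1.0, size=(n_per, d)) for m in means])
y = np.repeat(np.arange(3), n_per)

n_classes, n_bits = 3, 15
# Random {-1,+1} code matrix; row c is the codeword for class c.
# Redraw any constant column, since it carries no class information.
code = np.empty((n_classes, n_bits), dtype=int)
for j in range(n_bits):
    col = rng.integers(0, 2, size=n_classes) * 2 - 1
    while abs(col.sum()) == n_classes:
        col = rng.integers(0, 2, size=n_classes) * 2 - 1
    code[:, j] = col

def fit_nearest_mean(Xb, t):
    """Fit a linear rule separating the two class means (a stand-in
    for the paper's SVM/MLP base classifiers)."""
    mu_pos, mu_neg = Xb[t == 1].mean(0), Xb[t == -1].mean(0)
    w = mu_pos - mu_neg
    b = -0.5 * (mu_pos + mu_neg) @ w
    return w, b

models = []
n = len(X)
for j in range(n_bits):
    t = code[y, j]                    # binary relabelling for bit j
    idx = rng.integers(0, n, size=n)  # bootstrap resample, drawn with replacement
    models.append(fit_nearest_mean(X[idx], t[idx]))

def predict(Xq):
    # Each base learner votes one bit; decode by minimum Hamming distance.
    bits = np.stack([np.sign(Xq @ w + b) for w, b in models], axis=1)
    dists = (bits[:, None, :] != code[None, :, :]).sum(-1)
    return dists.argmin(1)

acc = (predict(X) == y).mean()
```

In this framing, bootstrapping perturbs each base learner's training set, which is what drives the variance reduction the abstract reports, at the cost of each learner seeing only about 63% of the distinct training points (the source of the slight bias increase).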


Keywords: Support Vector Machine · Training Strength · Relative Percentage Change · Bootstrap Error · Ensemble Error

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Raymond S. Smith (1)
  • Terry Windeatt (1)
  1. Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
