Machine Learning, Volume 40, Issue 3, pp 229–242

Randomizing Outputs to Increase Prediction Accuracy

  • Leo Breiman

DOI: 10.1023/A:1007682208299

Cite this article as:
Breiman, L. Machine Learning (2000) 40: 229. doi:10.1023/A:1007682208299


Abstract

Bagging and boosting reduce error by changing both the inputs and outputs to form perturbed training sets, growing predictors on these perturbed training sets and combining them. An interesting question is whether comparable performance can be obtained by perturbing the outputs alone. Two methods of randomizing outputs are experimented with: one is called output smearing and the other output flipping. Both are shown to consistently do better than bagging.
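The abstract names the two output-randomization schemes but does not spell them out. The sketch below illustrates one plausible reading: output smearing adds Gaussian noise (scaled to the spread of the targets) to each member's training outputs, and output flipping randomly flips binary class labels; the inputs are never touched. The base learner here is a one-split regression stump, and all function names and parameter choices (`n_members`, `noise_scale`, `flip_prob`) are ours, not Breiman's.

```python
import numpy as np

def fit_stump(x, y):
    """Fit a one-split regression stump on 1-D inputs: choose the
    threshold minimizing squared error, predict each side's mean."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = None
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, (xs[i - 1] + xs[i]) / 2, left.mean(), right.mean())
    _, thr, lo, hi = best
    return thr, lo, hi

def predict_stump(stump, x):
    thr, lo, hi = stump
    return np.where(x <= thr, lo, hi)

def output_smearing(x, y, n_members=50, noise_scale=1.0, seed=0):
    """Output smearing (our sketch): every member sees the same inputs,
    but its targets are y plus Gaussian noise with standard deviation
    noise_scale * std(y). Members are combined by averaging."""
    rng = np.random.default_rng(seed)
    sigma = noise_scale * y.std()
    return [fit_stump(x, y + rng.normal(0.0, sigma, size=y.shape))
            for _ in range(n_members)]

def predict_ensemble(stumps, x):
    # Average the member predictions, as in bagging.
    return np.mean([predict_stump(s, x) for s in stumps], axis=0)

def output_flipping(y, flip_prob, seed=0):
    """Output flipping (our sketch) for binary 0/1 labels: each label is
    independently flipped with probability flip_prob. An ensemble would
    grow one member per flipped copy and combine them by vote."""
    rng = np.random.default_rng(seed)
    flips = rng.random(y.shape) < flip_prob
    return np.where(flips, 1.0 - y, y)
```

Usage, on a step-function target: fit `output_smearing(x, y)` and average with `predict_ensemble`; because each member's split and leaf means are perturbed differently, the averaged prediction is smoother than any single stump, which is the variance-reduction effect the paper compares against bagging.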

Keywords: ensemble, randomization, output variability

Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • Leo Breiman
  1. Statistics Department, University of California, Berkeley, USA
