Improving on Bagging with Input Smearing
Bagging is an ensemble learning method that has proved to be a useful tool in the arsenal of machine learning practitioners. Commonly applied in conjunction with decision tree learners to build an ensemble of decision trees, it often reduces prediction error compared to using a single tree. A single tree is built from a training set of size N. Bagging is based on the idea that, ideally, we would like to eliminate the variance due to a particular training set by combining trees built from all possible training sets of size N. However, in practice, only one training set is available, and bagging simulates this platonic method by sampling with replacement from the original training data to form new training sets. In this paper we pursue the idea of sampling from a kernel density estimator of the underlying distribution to form new training sets, in addition to sampling from the data itself. This can be viewed as “smearing out” the resampled training data to generate new datasets, and the amount of “smear” is controlled by a parameter. We show that the resulting method, called “input smearing”, can lead to improved results when compared to bagging. We present results for both classification and regression problems.
Keywords: Ensemble Member, Base Learner, Relative Bias, Minority Class, Ensemble Generation
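To make the idea concrete, the following is a minimal sketch of input smearing for regression, assuming Gaussian smearing whose per-attribute width is a fraction of each attribute's standard deviation. The class and parameter names (InputSmearingRegressor, smear, n_estimators) and the use of scikit-learn decision trees are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of input smearing for regression (not the authors' code).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class InputSmearingRegressor:
    def __init__(self, n_estimators=10, smear=0.2, random_state=None):
        self.n_estimators = n_estimators   # number of ensemble members
        self.smear = smear                 # noise scale relative to attribute std dev (assumed)
        self.rng = np.random.default_rng(random_state)

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
        n, d = X.shape
        scale = self.smear * X.std(axis=0)  # per-attribute smearing width
        self.trees_ = []
        for _ in range(self.n_estimators):
            # Bootstrap sample, as in bagging ...
            idx = self.rng.integers(0, n, size=n)
            Xb, yb = X[idx], y[idx]
            # ... then "smear" the resampled inputs with Gaussian noise,
            # i.e. draw from a kernel density estimate centred on the data.
            Xb = Xb + self.rng.normal(0.0, scale, size=Xb.shape)
            self.trees_.append(DecisionTreeRegressor().fit(Xb, yb))
        return self

    def predict(self, X):
        # Average the predictions of the ensemble members.
        return np.mean([t.predict(np.asarray(X, dtype=float)) for t in self.trees_], axis=0)
```

Setting smear to zero recovers plain bagging in this sketch; the classification variant would combine members by voting or by averaging class probability estimates instead of averaging numeric predictions.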