Combining regularized neural networks
In this paper we show that the performance improvement achievable by averaging depends critically on the degree of regularization used in training the individual neural networks. We compare four averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. Bagging and variance-based bagging emerge as the overall best combining methods across a wide range of degrees of regularization.
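The four combining approaches can be sketched as follows. This is a minimal illustration, not the authors' implementation: `train_fn` is a hypothetical function that fits one network and returns a prediction function, and the per-model variance estimates are assumed to be supplied (e.g. from bootstrap replicates, as in reference 7).

```python
import numpy as np

def simple_average(preds):
    # preds: array of shape (n_models, n_points); uniform average of members
    return preds.mean(axis=0)

def variance_weighted(preds, variances):
    # weight each member inversely by its estimated prediction variance,
    # normalizing the weights per test point
    w = 1.0 / variances
    w = w / w.sum(axis=0)
    return (w * preds).sum(axis=0)

def bagging_predict(train_fn, X, y, X_test, n_models=10, rng=None):
    # bagging: train each member on a bootstrap replicate of the training
    # set, then average the members' predictions
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        model = train_fn(X[idx], y[idx])            # fit on the replicate
        preds.append(model(X_test))
    return np.mean(np.stack(preds), axis=0)
```

Variance-based bagging would combine the last two ideas: train the members on bootstrap replicates as in `bagging_predict`, but combine their predictions with `variance_weighted` instead of a uniform average.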