Building Ensembles of Neural Networks with Class-Switching

  • Gonzalo Martínez-Muñoz
  • Aitor Sánchez-Martínez
  • Daniel Hernández-Lobato
  • Alberto Suárez
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)

Abstract

This article investigates the properties of ensembles of neural networks, in which each network in the ensemble is constructed using a perturbed version of the training data. The perturbation consists in switching the class labels of a subset of training examples selected at random. Experiments on several UCI and synthetic datasets show that these class-switching ensembles can obtain improvements in classification performance over both individual networks and bagging ensembles.
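As a minimal sketch of the idea, the code below builds such an ensemble: each network is trained on a copy of the data in which a randomly chosen fraction of the class labels has been switched to a different class, and the networks are combined by majority vote. The scikit-learn MLPClassifier base learner, the switching rate p_switch, and the helper names are assumptions made for this illustration, not the paper's exact configuration.

    # Class-switching ensemble sketch (illustrative, not the authors' code).
    # Assumes integer-encoded class labels and scikit-learn's MLPClassifier.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def switch_labels(y, p_switch, rng):
        """Return a copy of y in which a fraction p_switch of the examples,
        chosen at random, receive a different class label picked uniformly."""
        y_new = y.copy()
        classes = np.unique(y)
        flip = rng.random(len(y)) < p_switch
        for i in np.where(flip)[0]:
            others = classes[classes != y_new[i]]
            y_new[i] = rng.choice(others)
        return y_new

    def fit_class_switching_ensemble(X, y, n_nets=10, p_switch=0.3, seed=0):
        """Train n_nets networks, each on labels perturbed independently."""
        rng = np.random.default_rng(seed)
        nets = []
        for _ in range(n_nets):
            y_pert = switch_labels(y, p_switch, rng)  # perturbed training labels
            net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                random_state=int(rng.integers(1 << 31)))
            nets.append(net.fit(X, y_pert))
        return nets

    def predict_majority(nets, X):
        """Combine the individual networks by unweighted majority vote."""
        votes = np.stack([net.predict(X) for net in nets])  # (n_nets, n_samples)
        return np.apply_along_axis(
            lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)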

Keywords

Neural Network, Decision Tree, Class Label, Synthetic Dataset, Hidden Unit

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Gonzalo Martínez-Muñoz
  • Aitor Sánchez-Martínez
  • Daniel Hernández-Lobato
  • Alberto Suárez

  1. Universidad Autónoma de Madrid, Madrid, Spain