Network Game and Boosting

  • Shijun Wang
  • Changshui Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3720)


We propose an ensemble learning method called Network Boosting, which combines weak learners based on a random graph (network). A theoretical analysis grounded in game theory shows that the algorithm learns the target hypothesis asymptotically. Comparisons on several datasets from the UCI machine learning repository and on synthetic data are promising: through the cooperation of classifiers in the classifier network, Network Boosting is considerably more resistant to noisy data than AdaBoost.
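The abstract's central idea — weak learners sitting on the nodes of a random graph and cooperating with their neighbors — can be sketched as follows. This is a speculative illustration rather than the paper's algorithm: the Erdős–Rényi graph, the decision-stump weak learners, and the multiplicative weight update for examples misclassified by a node or its neighbors are all assumptions made for the sketch.

```python
import random

# Speculative sketch of Network Boosting (the update rule and graph model are
# assumptions, not taken from the paper): each node of an Erdos-Renyi random
# graph hosts a weak learner (a decision stump here). In each round a node
# raises the weights of examples misclassified by itself or its neighbors,
# and the ensemble predicts by majority vote over all nodes.

def train_stump(X, y, w):
    """Weighted decision stump: pick the best (feature, threshold, sign)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(x[f] for x in X)):
            for s in (1, -1):
                err = sum(wi for x, yi, wi in zip(X, y, w)
                          if (s if x[f] >= t else -s) != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, s)
    _, f, t, s = best
    return lambda x: s if x[f] >= t else -s

def network_boosting(X, y, n_nodes=8, p_edge=0.3, rounds=5, seed=0):
    rng = random.Random(seed)
    # Erdos-Renyi random graph over the classifier nodes
    nbrs = {i: set() for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < p_edge:
                nbrs[i].add(j)
                nbrs[j].add(i)
    weights = [[1.0] * len(X) for _ in range(n_nodes)]
    learners = [None] * n_nodes
    for _ in range(rounds):
        learners = [train_stump(X, y, weights[i]) for i in range(n_nodes)]
        for i in range(n_nodes):
            group = [i] + list(nbrs[i])
            for k, (x, yi) in enumerate(zip(X, y)):
                misses = sum(1 for j in group if learners[j](x) != yi)
                weights[i][k] *= 2.0 ** misses  # assumed multiplicative update
            s = sum(weights[i])
            weights[i] = [wk / s for wk in weights[i]]  # renormalize
    def predict(x):
        vote = sum(h(x) for h in learners)  # unweighted majority vote
        return 1 if vote >= 0 else -1
    return predict

# Toy linearly separable data: label = sign of the first feature
X = [[x1, x2] for x1 in (-2, -1, 1, 2) for x2 in (-1, 1)]
y = [1 if x[0] > 0 else -1 for x in X]
clf = network_boosting(X, y)
print(sum(clf(x) == yi for x, yi in zip(X, y)))  # → 8 (all correct)
```

Because each node reweights examples that its whole neighborhood got wrong, noisy examples that no learner can fit dominate no single node's distribution as sharply as in AdaBoost's single-chain reweighting, which is one plausible reading of the robustness the abstract claims.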


Keywords: Random Graph · Weak Learner · Average Error Rate · Connection Probability · Network Game


References

  1. Breiman, L.: Bagging predictors. Machine Learning 24(2), 123–140 (1996)
  2. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. In: Vitányi, P.M.B. (ed.) EuroCOLT 1995. LNCS, vol. 904. Springer, Heidelberg (1995)
  3. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: Proceedings of the Thirteenth International Conference on Machine Learning (1996)
  4. Schapire, R.E.: A brief introduction to boosting. In: Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (1999)
  5. Dietterich, T.G.: An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning 40, 139–158 (2000)
  6. Krieger, A., Long, C., Wyner, A.: Boosting noisy data. In: Proceedings of the Eighteenth International Conference on Machine Learning (2001)
  7. Rätsch, G., Onoda, T., Müller, K.R.: Soft margins for AdaBoost. Machine Learning 42(3), 287–320 (2001)
  8. Oza, N.C.: AveBoost2: Boosting for noisy data. In: Roli, F., Kittler, J., Windeatt, T. (eds.) MCS 2004. LNCS, vol. 3077, pp. 31–40. Springer, Heidelberg (2004)
  9. Fan, W., Stolfo, S.J., Zhang, J.: The application of AdaBoost for distributed, scalable and on-line learning. In: SIGKDD (1999)
  10. Lazarevic, A., Obradovic, Z.: The distributed boosting algorithm. In: SIGKDD (2001)
  11. Lazarevic, A., Obradovic, Z.: Boosting algorithm for parallel and distributed learning. Distributed and Parallel Databases 11, 203–229 (2002)
  12. Merz, C.J., Murphy, P.M.: UCI repository of machine learning databases (1996)
  13. Shijun, W., Changshui, Z.: Weighted competition scale-free network. Phys. Rev. E 70, 066127 (2004)
  14. Shijun, W., Changshui, Z.: Microscopic model of financial markets based on belief propagation. Physica A 354C, 496 (2005)
  15. Albert, R., Barabási, A.: Statistical mechanics of complex networks. Reviews of Modern Physics 74(1), 47–97 (2002)
  16. Albert, R., Jeong, H., Barabási, A.: Error and attack tolerance of complex networks. Nature 406, 378–382 (2000)
  17. Bollobás, B.: Random Graphs. Academic Press, London (1985)
  18. Freund, Y., Schapire, R.E.: Game theory, on-line prediction and boosting. In: Proceedings of the Thirteenth International Conference on Machine Learning (1996)
  19. Fudenberg, D., Tirole, J.: Game Theory. MIT Press, Cambridge (1991)
  20. Freund, Y., Schapire, R.E.: Adaptive game playing using multiplicative weights. Games and Economic Behavior 29, 79–103 (1999)
  21. Breiman, L.: Prediction games and arcing algorithms. Neural Computation 11, 1493–1517 (1999)
  22. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools with Java Implementations. Morgan Kaufmann, San Francisco (2000)
  23. Quinlan, J.R.: Bagging, boosting, and C4.5. In: Proceedings of the Thirteenth National Conference on Artificial Intelligence, pp. 725–730. AAAI Press/MIT Press (1996)
  24. Bauer, E., Kohavi, R.: An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning 36(1-2), 105–139 (2000)
  25. Melville, P., Mooney, R.: Constructing diverse classifier ensembles using artificial training examples. In: Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, pp. 505–510 (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Shijun Wang (1)
  • Changshui Zhang (1)

  1. State Key Laboratory of Intelligent Technology and Systems, Department of Automation, Tsinghua University, Beijing, China
