- Zhi-Hua Zhou
Committee-based learning; Multiple classifier system; Classifier combination
Ensemble is a machine learning paradigm where multiple learners are trained to solve the same problem. In contrast to ordinary machine learning approaches, which try to learn one hypothesis from the training data, ensemble methods try to construct a set of hypotheses and combine them.
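The combination step can take many forms; a common one for classification is plurality (majority) voting over the base hypotheses. Below is a minimal sketch of this idea, where the three threshold rules are hypothetical stand-ins for trained base learners:

```python
from collections import Counter

def majority_vote(hypotheses, x):
    """Combine the predictions of several hypotheses by plurality voting."""
    votes = Counter(h(x) for h in hypotheses)
    return votes.most_common(1)[0][0]

# Three deliberately simple, hypothetical base learners: each is a
# different threshold rule on a 1-D input, so each errs on a
# different part of the input space.
h1 = lambda x: 1 if x > 0.3 else 0
h2 = lambda x: 1 if x > 0.5 else 0
h3 = lambda x: 1 if x > 0.7 else 0

# The ensemble follows the majority: for x = 0.6 the votes are 1, 1, 0.
print(majority_vote([h1, h2, h3], 0.6))  # → 1
```

The sketch illustrates why diversity among base learners matters: voting only helps when the individual hypotheses do not all make the same mistakes.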
It is difficult to trace the starting point of the history of ensemble methods, since the basic idea of deploying multiple models has been in use for a long time. However, it is clear that the hot wave of research on ensemble methods since the 1990s owes much to two works. The first is applied research conducted by Hansen and Salamon at the end of the 1980s, where they found that predictions made by the combination of a set of neural networks are often more accurate than predictions made by the best single neural network. The second is theoretical research by Schapire, where it was proved that weak learners can be boosted into strong learners, laying the foundation of Boosting.
- Bauer E. and Kohavi R. An empirical comparison of voting classification algorithms: Bagging, Boosting, and variants. Mach. Learn., 36(1–2):105–139, 1999.
- Breiman L. Bagging predictors. Mach. Learn., 24(2):123–140, 1996.
- Breiman L. Random forests. Mach. Learn., 45(1):5–32, 2001.
- Dietterich T.G. Machine learning research: Four current directions. AI Mag., 18(4):97–136, 1997.
- Hansen L.K. and Salamon P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell., 12(10):993–1001, 1990.
- Ho T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell., 20(8):832–844, 1998.
- Krogh A. and Vedelsby J. Neural network ensembles, cross validation, and active learning. In Advances in Neural Information Processing Systems 7, G. Tesauro, D.S. Touretzky, and T.K. Leen (eds.). MIT Press, Cambridge, MA, 1995, pp. 231–238.
- Kuncheva L.I. and Whitaker C.J. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach. Learn., 51(2):181–207, 2003.
- Opitz D. and Maclin R. Popular ensemble methods: An empirical study. J. Artif. Intell. Res., 11:169–198, 1999.
- Schapire R.E. The Boosting approach to machine learning: An overview. In Nonlinear Estimation and Classification, D.D. Denison, M.H. Hansen, C. Holmes, B. Mallick, and B. Yu (eds.). Springer, Berlin, 2003.
- Strehl A. and Ghosh J. Cluster ensembles – a knowledge reuse framework for combining multiple partitionings. J. Mach. Learn. Res., 3:583–617, 2002.
- Ting K.M. and Witten I.H. Issues in stacked generalization. J. Artif. Intell. Res., 10:271–289, 1999.
- Wolpert D.H. Stacked generalization. Neural Netw., 5(2):241–260, 1992.
- Zhou Z.-H., Jiang Y., and Chen S.-F. Extracting symbolic rules from trained neural network ensembles. AI Commun., 16(1):3–15, 2003.
- Zhou Z.-H., Wu J., and Tang W. Ensembling neural networks: Many could be better than all. Artif. Intell., 137(1–2):239–263, 2002.
Encyclopedia of Database Systems, Springer US, pp. 988–991.