AI 2002: Advances in Artificial Intelligence

Volume 2557 of the series Lecture Notes in Computer Science pp 511-522


Solving Regression Problems Using Competitive Ensemble Models

  • Yakov Frayman, School of Information Technology, Deakin University
  • Bernard F. Rolfe, School of Information Technology, Deakin University
  • Geoffrey I. Webb, School of Information Technology, Deakin University



The use of ensemble models in many problem domains has increased significantly in the last few years. Ensemble modeling, in particular boosting, has shown great promise in improving the predictive performance of a model. Combining the ensemble members is normally done in a co-operative fashion, where each of the ensemble members performs the same task and their predictions are aggregated to obtain the improved performance. However, it is also possible to combine the ensemble members in a competitive fashion, where the prediction of the most relevant ensemble member is selected for a particular input. This option has previously been somewhat overlooked. The aim of this article is to investigate and compare the competitive and co-operative approaches to combining the models in the ensemble. A comparison is made between a competitive ensemble model and MARS with bagging, a mixture of experts, a hierarchical mixture of experts, and a neural network ensemble over several public domain regression problems that have a high degree of nonlinearity and noise. The empirical results show a substantial advantage of competitive learning over co-operative learning for all the regression problems investigated. The requirements for creating efficient ensembles and the available guidelines are also discussed.
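To make the distinction concrete, the sketch below contrasts the two combination schemes on a synthetic regression task. It is an illustrative assumption, not the authors' method: the co-operative ensemble averages bootstrap-trained members (bagging-style), while the competitive ensemble partitions the input space with a k-means gate (a hypothetical choice) and lets only the member responsible for an input's region produce the prediction.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic nonlinear, noisy regression data (hypothetical, for illustration only).
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.3 * rng.normal(size=len(X))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_members = 5

# Co-operative combination: each member is trained on a bootstrap sample and
# all member predictions are averaged for every input.
coop_members = []
for _ in range(n_members):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    coop_members.append(DecisionTreeRegressor(max_depth=6).fit(X_tr[idx], y_tr[idx]))
coop_pred = np.mean([m.predict(X_te) for m in coop_members], axis=0)

# Competitive combination: the input space is partitioned into regions, one
# member is trained per region, and at prediction time only the member that
# "owns" the input's region responds.
gate = KMeans(n_clusters=n_members, n_init=10, random_state=0).fit(X_tr)
comp_members = []
for k in range(n_members):
    mask = gate.labels_ == k
    comp_members.append(DecisionTreeRegressor(max_depth=6).fit(X_tr[mask], y_tr[mask]))

regions = gate.predict(X_te)
comp_pred = np.array(
    [comp_members[r].predict(x.reshape(1, -1))[0] for r, x in zip(regions, X_te)]
)

mse = lambda p: float(np.mean((p - y_te) ** 2))
print(f"co-operative (averaged) MSE: {mse(coop_pred):.4f}")
print(f"competitive (selected)  MSE: {mse(comp_pred):.4f}")
```

The design choice being illustrated is the selection step: in the competitive scheme each prediction comes from exactly one specialized member, whereas the co-operative scheme always blends all members regardless of where the input lies.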