Leverages Based Neural Networks Fusion

  • Antanas Verikas
  • Marija Bacauskiene
  • Adas Gelzinis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3316)

Abstract

To improve estimation results, the outputs of multiple neural networks can be aggregated into a committee output. In this paper, we study the usefulness of leverages-based information for creating accurate neural network committees. Based on the approximate leave-one-out error and the suggested diversity test built on the generalization error, accurate and diverse networks are selected and fused into a committee using data-dependent aggregation weights. Four data-dependent aggregation schemes – based on the local variance, covariance, the Choquet integral, and the generalized Choquet integral – are investigated. The effectiveness of the approaches is tested on one artificial and three real-world data sets.
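
The two ingredients named in the abstract, a leverages-based approximate leave-one-out error for selecting committee members and data-dependent weights for fusing them, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: polynomial least-squares models stand in for trained networks (for linear-in-parameters models the hat-matrix leverages are exact; for a network, Z would be the Jacobian of the output with respect to the weights), and simple inverse-variance weights stand in for the four aggregation schemes studied in the paper. All names here (fit_and_loo, etc.) are hypothetical.

```python
# Minimal sketch: leverages-based approximate LOO error for member
# selection, then data-dependent weighted fusion of the selected members.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise.
X = np.linspace(-3, 3, 60).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(len(X))

def fit_and_loo(Z, y):
    """Fit a linear-in-parameters model and return its predictions and the
    leverages-based approximate leave-one-out residuals e_i / (1 - h_ii),
    where h_ii = diag(Z (Z^T Z)^{-1} Z^T)."""
    theta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    pred = Z @ theta
    H = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T
    h = np.clip(np.diag(H), 0.0, 1.0 - 1e-12)
    loo = (y - pred) / (1.0 - h)
    return pred, loo

# Candidate committee members: polynomials of different degrees, used here
# as stand-ins for networks of different complexity.
candidates = {d: np.hstack([X**k for k in range(d + 1)]) for d in (1, 3, 5, 9)}
preds, loo_rmse = {}, {}
for d, Z in candidates.items():
    preds[d], loo = fit_and_loo(Z, y)
    loo_rmse[d] = np.sqrt(np.mean(loo**2))

# Select the members with the lowest approximate LOO error ...
selected = sorted(loo_rmse, key=loo_rmse.get)[:2]

# ... and fuse them with data-dependent (per-sample) weights. Purely for
# illustration, the local variance is taken from each member's residuals on
# the known targets; in practice it would be estimated without them.
stacked = np.stack([preds[d] for d in selected])       # (members, samples)
local_var = (stacked - y) ** 2 + 1e-9
w = (1.0 / local_var) / (1.0 / local_var).sum(axis=0)  # normalised per sample
committee = (w * stacked).sum(axis=0)

for d in selected:
    print(f"degree {d}: approx. LOO RMSE = {loo_rmse[d]:.4f}")
print(f"committee RMSE = {np.sqrt(np.mean((committee - y) ** 2)):.4f}")
```

The same skeleton accommodates the other aggregation schemes from the paper: the per-sample weights w would instead come from local covariance estimates or from a (generalized) Choquet integral over the member outputs.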


Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Antanas Verikas (1, 2)
  • Marija Bacauskiene (2)
  • Adas Gelzinis (2)

  1. Intelligent Systems Laboratory, Halmstad University, Halmstad, Sweden
  2. Department of Applied Electronics, Kaunas University of Technology, Kaunas, Lithuania
