
Part of the book series: Studies in Computational Intelligence (SCI, volume 358)

Abstract

The cornerstone of successful data mining is choosing a suitable modelling algorithm for the given data. Recent results show that the best performance can be achieved by an efficient combination of models or classifiers.

The increasing popularity of combination (ensembling, blending) of diverse models has been significantly influenced by its success in various data mining competitions [8,38].
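The combination schemes the abstract refers to are easy to illustrate. Below is a minimal sketch in Python, assuming a toy regression setting with synthetic data: it shows uniform blending (averaging) of base-model predictions and a simple linear stacking meta-model in the spirit of stacked generalization [67]. The function names and data are illustrative assumptions, not the chapter's actual algorithm.

```python
# Two basic combination schemes from the ensemble literature:
# uniform blending (averaging) and linear stacking (cf. Wolpert [67]).
# The target and "base model" outputs below are synthetic placeholders.
import numpy as np

def blend_average(preds):
    """Uniformly average base-model predictions; preds has shape (n_models, n_samples)."""
    return preds.mean(axis=0)

def stack_linear(preds_train, y_train, preds_test):
    """Fit least-squares stacking weights on base-model outputs, apply them to held-out outputs."""
    # Solve min_w ||preds_train.T @ w - y_train||^2 for the meta-model weights.
    w, *_ = np.linalg.lstsq(preds_train.T, y_train, rcond=None)
    return preds_test.T @ w

rng = np.random.default_rng(0)
y = rng.normal(size=100)                          # toy regression target
preds = y + rng.normal(scale=0.5, size=(3, 100))  # three noisy "base models"

blended = blend_average(preds)
stacked = stack_linear(preds[:, :80], y[:80], preds[:, 80:])
```

In practice the stacking weights would be fit on out-of-fold predictions rather than on the training fit itself, to avoid information leakage [67, 68].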


References

  1. UCI Machine Learning Repository (September 2006), http://www.ics.uci.edu/~mlearn/MLSummary.html

  2. The FAKE GAME environment for the automatic knowledge extraction (November 2008), http://www.sourceforge.net/projects/fakegame

  3. Abdel-Aal, R.: Improving electric load forecasts using network committees. Electric Power Systems Research 74, 83–94 (2005)

  4. Alpaydin, E., Kaynak, C.: Cascading classifiers. Kybernetika 34, 369–374 (1998)

  5. Analoui, M., Bidgoli, B.M., Rezvani, M.H.: Hierarchical classifier combination and its application in networks intrusion detection. In: International Conference on Data Mining Workshops, pp. 533–538 (2007)

  6. Bakker, B., Heskes, T.: Clustering ensembles of neural network models. Neural Networks 16(2), 261–269 (2003)

  7. Bao, X., Bergman, L., Thompson, R.: Stacking recommendation engines with additional meta-features. In: RecSys 2009: Proceedings of the Third ACM Conference on Recommender Systems, pp. 109–116. ACM, New York (2009)

  8. Bennett, J., Lanning, S.: The Netflix Prize. In: KDD Cup and Workshop in conjunction with KDD (2007)

  9. Brazdil, P., Giraud-Carrier, C., Soares, C., Vilalta, R.: Metalearning: Applications to Data Mining. Cognitive Technologies. Springer, Heidelberg (2009)

  10. Breiman, L.: Bagging predictors. Machine Learning 24(2), 123–140 (1996)

  11. Brown, G.: Diversity in Neural Network Ensembles. PhD thesis, The University of Birmingham, School of Computer Science, Birmingham, United Kingdom (January 2004)

  12. Brown, G., Yao, X.: On the effectiveness of negative correlation learning. In: Proceedings of the First UK Workshop on Computational Intelligence, pp. 57–62 (2001)

  13. Chandra, A., Yao, X.: Ensemble learning using multi-objective evolutionary algorithms. Journal of Mathematical Modelling and Algorithms 5(4), 417–445 (2006)

  14. Costa, E.P., Lorena, A.C., Carvalho, A.C., Freitas, A.A.: Top-down hierarchical ensembles of classifiers for predicting G-protein-coupled receptor functions. In: Bazzan, A.L.C., Craven, M., Martins, N.F. (eds.) BSB 2008. LNCS (LNBI), vol. 5167, pp. 35–46. Springer, Heidelberg (2008)

  15. Dietterich, T.G.: An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning 40(2), 139–157 (2000)

  16. Donoho, D.L.: De-noising by soft-thresholding. IEEE Transactions on Information Theory 41(3), 613–662 (1995)

  17. Drchal, J., Šnorek, M.: Diversity visualization in evolutionary algorithms. In: Štefen, J. (ed.) Proceedings of the 41st Spring International Conference MOSIS 2007, Modelling and Simulation of Systems, pp. 77–84. MARQ, Ostrava (2007)

  18. Durham, G.B., Gallant, A.R.: Numerical techniques for maximum likelihood estimation of continuous-time diffusion processes. Journal of Business and Economic Statistics 20, 297–338 (2001)

  19. Eastwood, M., Gabrys, B.: The dynamics of negative correlation learning. Journal of VLSI Signal Processing Systems 49(2), 251–263 (2007)

  20. Fahlman, S.E., Lebiere, C.: The cascade-correlation learning architecture. Technical Report CMU-CS-90-100, Carnegie Mellon University, Pittsburgh, USA (1991)

  21. Ferri, C., Flach, P., Hernández-Orallo, J.: Delegating classifiers. In: ICML 2004: Proceedings of the Twenty-First International Conference on Machine Learning, p. 37. ACM Press, New York (2004)

  22. Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. In: Proceedings of the Second European Conference on Computational Learning Theory, pp. 23–37. Springer, Heidelberg (1995)

  23. Friedman, J.H.: Greedy function approximation: A gradient boosting machine. Annals of Statistics 29, 1189–1232 (2001)

  24. Gama, J., Brazdil, P.: Cascade generalization. Machine Learning 41(3), 315–343 (2000)

  25. Gelbukh, A., Reyes-Garcia, C.A. (eds.): MICAI 2006. LNCS (LNAI), vol. 4293. Springer, Heidelberg (2006)

  26. Granitto, P., Verdes, P., Ceccatto, H.: Neural network ensembles: Evaluation of aggregation algorithms. Artificial Intelligence 163, 139–162 (2005)

  27. Hansen, L.K., Salamon, P.: Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(10), 993–1001 (1990)

  28. Islam, M.M., Yao, X., Murase, K.: A constructive algorithm for training cooperative neural network ensembles. IEEE Transactions on Neural Networks 14(4) (July 2003)

  29. Islam, M.M., Yao, X., Nirjon, S.M.S., Islam, M.A., Murase, K.: Bagging and boosting negatively correlated neural networks (2008)

  30. Ivakhnenko, A.G.: Polynomial theory of complex systems. IEEE Transactions on Systems, Man, and Cybernetics SMC-1(1), 364–378 (1971)

  31. Jacobs, R.A.: Bias/variance analyses of mixtures-of-experts architectures. Neural Computation 9(2), 369–383 (1997)

  32. Kaynak, C., Alpaydin, E.: Multistage cascading of multiple classifiers: One man's noise is another man's data. In: Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), pp. 455–462. Morgan Kaufmann, San Francisco (2000)

  33. Kohavi, R.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proceedings of the International Joint Conference on Artificial Intelligence (1995)

  34. Kordík, P.: Fully Automated Knowledge Extraction Using Group of Adaptive Models Evolution. PhD thesis, Czech Technical University in Prague, Faculty of Electrical Engineering, Prague, Czech Republic (September 2006)

  35. Kordík, P.: Hybrid Self-Organizing Modeling Systems. In: Onwubolu, G.C. (ed.) Studies in Computational Intelligence, vol. 211, p. 290. Springer, Heidelberg (2009)

  36. Kordík, P., Koutník, J., Drchal, J., Kovářík, O., Čepek, M., Šnorek, M.: Meta-learning approach to neural network optimization. Neural Networks 23(4), 568–582 (2010)

  37. Kordík, P., Křemen, V., Lhotská, L.: The GAME algorithm applied to complex fractionated atrial electrograms data set. In: Proceedings of the 18th International Conference on Artificial Neural Networks (ICANN 2008), vol. 2, pp. 859–868. Springer, Heidelberg (2008)

  38. Koren, Y.: Collaborative filtering with temporal dynamics. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2009), pp. 447–456. ACM, New York (2009)

  39. Kuncheva, L.I.: Combining Pattern Classifiers: Methods and Algorithms. John Wiley and Sons, New York (2004)

  40. Kuncheva, L., Whitaker, C.: Ten measures of diversity in classifier ensembles: Limits for two classifiers. In: Proceedings of the IEE Workshop on Intelligent Sensor Processing, pp. 1–10 (2001)

  41. Kuncheva, L.I., Whitaker, C.J.: Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning 51, 181–207 (2003)

  42. Kůrková, V.: Kolmogorov's theorem is relevant. Neural Computation 3, 617–622 (1991)

  43. Liu, Y., Yao, X.: Ensemble learning via negative correlation. Neural Networks 12, 1399–1404 (1999)

  44. Mahfoud, S.W.: Niching methods for genetic algorithms. Technical Report 95001, Illinois Genetic Algorithms Laboratory (IlliGAL), University of Illinois at Urbana-Champaign (May 1995)

  45. Mandischer, M.: A comparison of evolution strategies and backpropagation for neural network training. Neurocomputing 42, 87–117 (2002)

  46. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics 11(2), 431–441 (1963)

  47. Melville, P., Mooney, R.J.: Constructing diverse classifier ensembles using artificial training examples. In: Gottlob, G., Walsh, T. (eds.) IJCAI, pp. 505–512. Morgan Kaufmann, San Francisco (2003)

  48. Mengshoel, O.J., Goldberg, D.E.: Probabilistic crowding: Deterministic crowding with probabilistic replacement. In: Banzhaf, W., Daida, J., Eiben, A.E., Garzon, M.H., Honavar, V., Jakiela, M., Smith, R.E. (eds.) Proceedings of the Genetic and Evolutionary Computation Conference, vol. 1, pp. 409–416. Morgan Kaufmann, San Francisco (1999)

  49. Müller, J.A., Lemke, F.: Self-Organising Data Mining. Berlin (2000), ISBN 3-89811-861-4

  50. Nabney, I.T.: Efficient training of RBF networks for classification. International Journal of Neural Systems 14(3), 201–208 (2004)

  51. Oh, S.K., Pedrycz, W.: The design of self-organizing polynomial neural networks. Information Sciences 141, 237–258 (2002)

  52. Oh, S.-K., Pedrycz, W., Park, B.-J.: Polynomial neural networks architecture: Analysis and design. Computers and Electrical Engineering 29, 703–725 (2003)

  53. Pejznoch, J.: Niching Evolutionary Algorithms in GAME. PhD thesis, Czech Technical University in Prague, Faculty of Electrical Engineering, Prague, Czech Republic (May 2010)

  54. Pétrowski, A.: A clearing procedure as a niching method for genetic algorithms. In: International Conference on Evolutionary Computation, pp. 798–803 (1996)

  55. Pilný, A., Kordík, P., Šnorek, M.: Feature ranking derived from data mining process. In: Kůrková, V., Neruda, R., Koutník, J. (eds.) ICANN 2008, Part II. LNCS, vol. 5164, pp. 889–898. Springer, Heidelberg (2008)

  56. Ritchie, M.D., White, B.C., Parker, J.S., Hahn, L.W., Moore, J.H.: Optimization of neural network architecture using genetic programming improves detection and modeling of gene-gene interactions in studies of human diseases. BMC Bioinformatics 4(1) (July 2003)

  57. Rokach, L.: Ensemble methods for classifiers. In: Maimon, O., Rokach, L. (eds.) The Data Mining and Knowledge Discovery Handbook, pp. 957–980 (2005)

  58. Schapire, R.E.: The strength of weak learnability. Machine Learning 5(2), 197–227 (1990)

  59. Sexton, R.S., Gupta, J.: Comparative evaluation of genetic algorithm and backpropagation for training neural networks. Information Sciences 129, 45–59 (2000)

  60. Stanley, K.O.: Efficient Evolution of Neural Networks through Complexification. PhD thesis (2004)

  61. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10(2), 99–127 (2002)

  62. Sung, Y.H., Kim, T.-K., Kee, S.C.: Hierarchical combination of face/non-face classifiers based on Gabor wavelet and support vector machines (2009)

  63. Töscher, A., Jahrer, M.: The BigChaos solution to the Netflix Grand Prize. Technical report, commendo research & consulting (2009)

  64. Černý, J.: Methods for combining models and classifiers. PhD thesis, Czech Technical University in Prague, Faculty of Electrical Engineering, Prague, Czech Republic (May 2010)

  65. Wang, S., Tang, K., Yao, X.: Diversity exploration and negative correlation learning on imbalanced data sets. In: Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN 2009), pp. 1796–1803. IEEE Press, Piscataway (2009)

  66. Webb, G.I., Zheng, Z.: Multi-strategy ensemble learning: Reducing error by combining ensemble learning techniques. IEEE Transactions on Knowledge and Data Engineering 16 (2004)

  67. Wolpert, D.H.: Stacked generalization. Neural Networks 5, 241–259 (1992)

  68. Wolpert, D.H., Macready, W.G.: Combining stacking with bagging to improve a learning algorithm. Technical report, Santa Fe Institute (1996)

  69. Zhou, Z.-H., Wu, J., Tang, W.: Ensembling neural networks: Many could be better than all. Artificial Intelligence 137, 239–263 (2002)



Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Kordík, P., Černý, J. (2011). Self-organization of Supervised Models. In: Jankowski, N., Duch, W., Grąbczewski, K. (eds) Meta-Learning in Computational Intelligence. Studies in Computational Intelligence, vol 358. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20980-2_6


  • DOI: https://doi.org/10.1007/978-3-642-20980-2_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-20979-6

  • Online ISBN: 978-3-642-20980-2

  • eBook Packages: Engineering (R0)
