Diversity and Locality in Multi-Component, Multi-Layer Predictive Systems: A Mutual Information Based Approach

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10604)

Abstract

This paper discusses the effect of locality and diversity among the base models of a Multi-Component, Multi-Layer Predictive System (MCMLPS). A new ensemble method is introduced in which data instances are assigned to local regions using a conditional mutual information based measure of the similarity of their features, and the outputs of the base models are weighted by the same similarity metric. The proposed architecture has been tested on a number of data sets and its performance compared with four benchmark algorithms. In addition, the effect of varying three parameters of the proposed architecture has been examined.
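The abstract describes the architecture only at a high level: instances are mapped to local regions through a mutual information based similarity of their features, and the base models' outputs are combined with weights derived from that similarity. The following minimal Python sketch illustrates this idea under stated assumptions; it is not the authors' implementation. It ranks features by plain mutual information (scikit-learn's mutual_info_classif, used here as a stand-in for the paper's conditional mutual information criterion), forms local regions by k-means clustering in the reduced feature space, trains one base classifier per region, and weights each classifier's prediction by the instance's inverse distance to the region centre. All names and parameters (fit_local_ensemble, predict_local_ensemble, n_regions, n_features) are illustrative.

```python
# Minimal sketch of locality plus similarity-weighted combination.
# NOT the authors' MCMLPS implementation: plain mutual information and
# k-means are used as stand-ins for the paper's conditional mutual
# information based region assignment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier


def fit_local_ensemble(X, y, n_regions=3, n_features=5):
    # Rank features by mutual information with the target and keep the top ones.
    mi = mutual_info_classif(X, y, random_state=0)
    top = np.argsort(mi)[::-1][:n_features]

    # Define local regions by clustering in the reduced feature space.
    regions = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X[:, top])
    labels = regions.labels_

    # Train one base model per local region on that region's instances.
    models = [
        DecisionTreeClassifier(random_state=0).fit(X[labels == r], y[labels == r])
        for r in range(n_regions)
    ]
    return regions, models, top


def predict_local_ensemble(X, regions, models, top, classes):
    # Weight each base model by the instance's similarity to the model's
    # region centre (inverse distance in the reduced feature space).
    dist = regions.transform(X[:, top])            # (n_samples, n_regions)
    weights = 1.0 / (dist + 1e-9)
    weights /= weights.sum(axis=1, keepdims=True)

    votes = np.zeros((X.shape[0], len(classes)))
    for r, model in enumerate(models):
        proba = model.predict_proba(X)
        for j, c in enumerate(model.classes_):     # align with the global class list
            votes[:, classes.index(c)] += weights[:, r] * proba[:, j]
    return np.asarray(classes)[votes.argmax(axis=1)]


# Example usage on a UCI-style data set (X: numeric feature matrix, y: labels):
#   regions, models, top = fit_local_ensemble(X_train, y_train)
#   y_pred = predict_local_ensemble(X_test, regions, models, top,
#                                   classes=sorted(set(y_train)))
```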



Author information

Correspondence to Bassma Al-Jubouri.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Al-Jubouri, B., Gabrys, B. (2017). Diversity and Locality in Multi-Component, Multi-Layer Predictive Systems: A Mutual Information Based Approach. In: Cong, G., Peng, WC., Zhang, W., Li, C., Sun, A. (eds.) Advanced Data Mining and Applications. ADMA 2017. Lecture Notes in Computer Science (LNAI), vol. 10604. Springer, Cham. https://doi.org/10.1007/978-3-319-69179-4_22


  • DOI: https://doi.org/10.1007/978-3-319-69179-4_22

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-69178-7

  • Online ISBN: 978-3-319-69179-4

  • eBook Packages: Computer Science, Computer Science (R0)
