
Metaclassifiers

  • Massimo Buscema
  • Stefano Terzi

Abstract

This chapter describes methods by which data can be classified. Many methods purport to classify data, and each performs the classification in a different manner, typically with differing results. This variation in outcome arises because the mathematics underlying each method views the data from a different perspective, producing classification assignments that can be, and usually are, different. A metaclassifier is a method in which the outputs of these individual classifiers are fed as input to an ANN that forms the final classification from the differing views and perspectives of the individual ANNs. In short, the perspectives of the individual ANNs are brought together to produce a single, superior classification that takes into account the various algorithms and the views of the data each produces. The MetaNet is developed in detail and shown to outperform other metaclassifier ANNs.
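
To make the idea concrete, the sketch below shows a generic metaclassifier of this kind in Python, using scikit-learn's StackingClassifier: several base classifiers each produce class-probability estimates, and a small neural network learns the final classification from those estimates. This illustrates the general stacking idea only, not Buscema's MetaNet algorithm itself; the Iris dataset and the particular base models are illustrative assumptions.

    # Minimal sketch of a metaclassifier: base classifiers "view" the data
    # through different mathematics; a meta-level ANN combines their outputs.
    # Generic stacking for illustration -- NOT the MetaNet algorithm itself.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)  # illustrative dataset (assumption)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Base classifiers: each assigns classifications in a different manner.
    base = [
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("logit", LogisticRegression(max_iter=1000)),
    ]

    # Meta-level ANN: trained on the base classifiers' predicted class
    # probabilities, combining their differing perspectives into one output.
    meta = StackingClassifier(
        estimators=base,
        final_estimator=MLPClassifier(hidden_layer_sizes=(10,),
                                      max_iter=2000, random_state=0),
        stack_method="predict_proba",
    )
    meta.fit(X_train, y_train)
    print("metaclassifier accuracy:", meta.score(X_test, y_test))

Passing the base models' predicted probabilities rather than hard labels to the meta-level network preserves each classifier's graded view of the data, which is precisely the information a metaclassifier exploits.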

References

  1. Bridle, J. S. (1989). Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman-Soulié & J. Hérault (Eds.), Neuro-computing: Algorithms, architectures. New York: Springer.
  2. Buscema, M. (1998). MetaNet: The theory of independent judges. Substance Use and Misuse, 33(2), 439–461.
  3. Buscema, M., Terzi, S., & Tastle, W. (2010). A new meta-classifier. In NAFIPS 2010, 12–14 July, Toronto, Canada.
  4. Dietterich, T. (2002). Ensemble learning. In M. A. Arbib (Ed.), The handbook of brain theory and neural networks (2nd ed.). Cambridge, MA: The MIT Press.
  5. Kuncheva, L. I. (2004). Combining pattern classifiers: Methods and algorithms. Hoboken: Wiley.

Bibliography

  1. Asuncion, A., & Newman, D. J. (2007). UCI machine learning repository. http://www.ics.uci.edu/~mlearn/MLRepository.html. Irvine: University of California, School of Information and Computer Science.
  2. Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1993). Classification and regression trees. Boca Raton: Chapman & Hall.
  3. Duda, R. O., Hart, P. E., & Stork, D. G. (2001). Pattern classification. New York: Wiley.
  4. Hastie, T., Tibshirani, R., & Friedman, J. The elements of statistical learning: Data mining, inference and prediction (Springer series in statistics). New York: Springer.
  5. Huang, Y. S., & Suen, C. Y. (1995). A method for combining multiple experts for the recognition of unconstrained handwritten numerals. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17, 90–93.
  6. Kuncheva, L. I. (2000). Clustering-and-selection model for classifier combination. In Proceedings of knowledge-based intelligent engineering systems and allied technologies (pp. 185–188). Brighton, UK.
  7. Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Mateo: Morgan Kaufmann.
  8. Rogova, G. (1994). Combining the results of several neural network classifiers. Neural Networks, 7, 777–781.
  9. Witten, I. H., & Frank, E. (2005). Data mining: Practical machine learning tools and techniques. Amsterdam: Elsevier.
  10. Woods, K., Kegelmeyer, W. P., & Bowyer, K. (1997). Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 405–410.

Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Semeion Research Center of Sciences of Communication, Rome, Italy
