Hierarchical Classifier

  • Igor T. Podolak
  • Sławomir Biel
  • Marcin Bobrowski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3911)

Abstract

Artificial Intelligence (AI) methods are used to build classifiers that differ in both accuracy and the degree to which their solutions can be explained. This paper presents a way of building a hierarchical classifier composed of several artificial neural networks (ANNs) organised in a tree-like fashion. This construction partitions the original problem into several sub-problems, each of which can be solved with a simpler ANN that can be trained more quickly than a single large ANN. As the extracted sub-problems become independent of one another, their solutions can be realised in parallel. It is observed that incorrect classifications are not random and can therefore be used to find clusters that define the sub-problems.
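The key observation above is that a root classifier's errors concentrate on particular groups of mutually confused classes, and these groups define the sub-problems handed to child networks. As a minimal illustration only (not the authors' algorithm), the sketch below groups classes from a confusion matrix whenever their mutual confusion rate exceeds a threshold; the function name and the threshold heuristic are assumptions for illustration.

```python
import numpy as np

def confusion_clusters(conf, threshold=0.1):
    """Group classes that the root classifier frequently confuses.

    conf: square confusion matrix, conf[i, j] = number of examples of
    true class i predicted as class j. Classes linked by a confusion
    rate of at least `threshold` (fraction of a class's examples) end
    up in one cluster, i.e. one candidate sub-problem.
    """
    n = conf.shape[0]
    rates = conf / conf.sum(axis=1, keepdims=True)   # row-normalise to rates
    sim = np.maximum(rates, rates.T)                  # symmetric confusion
    # Union-find over classes: merge i and j when confusion is high.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy confusion matrix: classes 0/1 confuse each other, as do 2/3.
conf = np.array([[80, 15,  3,  2],
                 [12, 85,  2,  1],
                 [ 1,  2, 70, 27],
                 [ 0,  3, 25, 72]])
print(confusion_clusters(conf))  # prints [[0, 1], [2, 3]]
```

Each resulting cluster would then be handled by its own child ANN in the tree; since the clusters share no classes, those child networks could be trained independently and in parallel, as the abstract notes.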

Keywords

Confusion Matrix, Electric Power Consumption, Satisfying Accuracy, Committee Machine, Practical Machine Learning Tool



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Igor T. Podolak (1)
  • Sławomir Biel (1)
  • Marcin Bobrowski (1)
  1. Institute of Computer Science, Jagiellonian University, Kraków, Poland
