
A New Notion of Weakness in Classification Theory

  • Igor T. Podolak
  • Adam Roman
Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 57)

Summary

The notion of a weak classifier, one that is “a little better” than a random one, was first introduced for 2-class problems [1]. Extensions to K-class problems are known; all are based on the relative activations for the correct and incorrect classes and do not take into account the final choice of the answer. A new understanding and definition is proposed here, one that takes into account only the final classification choice that must be made. It is shown that for a K-class classifier to be called “weak”, it needs to achieve a risk value lower than 1/K. This approach considers only the probability of choosing the final answer, not the actual activations.
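A minimal sketch of the stated criterion in Python. The helper names are ours, and reading the 1/K threshold as an empirical 0–1 risk is our assumption; the paper's formal definition may differ:

    import numpy as np

    def empirical_risk(y_true, y_pred):
        """Empirical 0-1 risk: the fraction of misclassified examples."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.mean(y_true != y_pred))

    def is_weak(y_true, y_pred, K):
        """Illustrative weakness test (names and threshold reading are ours):
        under the criterion summarised above, a K-class classifier counts as
        'weak' when its risk falls below 1/K, regardless of how the per-class
        activations behaved before the final answer was chosen."""
        return empirical_risk(y_true, y_pred) < 1.0 / K

    # Example: a 3-class problem where 2 of 10 final answers are wrong.
    # Risk = 0.2 < 1/3, so this classifier would qualify as weak.
    y_true = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
    y_pred = [0, 1, 2, 0, 1, 2, 0, 2, 1, 0]
    print(is_weak(y_true, y_pred, K=3))  # True

Note that the test above inspects only the chosen labels, which is the point of the proposed definition: no per-class activation values enter the comparison.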

References

  1. Kearns, M., Valiant, L.: Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the ACM 41(1), 67–95 (1994)
  2. Bax, E.: Validation of Voting Committees. Neural Computation 10(4), 975–986 (1998)
  3. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer, New York (2001)
  4. Tresp, V.: Committee Machines. In: Hu, Y.H., Hwang, J.-N. (eds.) Handbook for Neural Network Signal Processing. CRC Press, Boca Raton (2001)
  5. Kittler, J., Hojjatoleslami, A., Windeatt, T.: Strategies for combining classifiers employing shared and distinct pattern representations. Pattern Recognition Letters 18, 1373–1377 (1997)
  6. Schapire, R.E.: The strength of weak learnability. Machine Learning 5, 197–227 (1990)
  7. Eibl, G., Pfeiffer, K.-P.: Multiclass boosting for weak classifiers. Journal of Machine Learning Research 6, 189–210 (2005)
  8. Freund, Y., Schapire, R.E.: A decision theoretic generalization of online learning and an application to boosting. Journal of Computer and System Sciences 55, 119–139 (1997)
  9. Podolak, I.T.: Hierarchical Classifier with Overlapping Class Groups. Expert Systems with Applications 34(1), 673–682 (2008)
  10. Podolak, I.T., Biel, S.: Hierarchical classifier. In: Wyrzykowski, R., Dongarra, J., Meyer, N., Waśniewski, J. (eds.) PPAM 2005. LNCS, vol. 3911, pp. 591–598. Springer, Heidelberg (2006)
  11. Podolak, I.T.: Hierarchical rules for a hierarchical classifier. In: Beliczynski, B., Dzielinski, A., Iwanowski, M., Ribeiro, B. (eds.) ICANNGA 2007. LNCS, vol. 4431, pp. 749–757. Springer, Heidelberg (2007)
  12. Podolak, I.T., Roman, A.: Improving the accuracy of a hierarchical classifier (in preparation)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Igor T. Podolak (1)
  • Adam Roman (1)

  1. Institute of Computer Science, Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
