Recognizing reliability of discovered knowledge

  • Petr Berka
Poster Session 6
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1263)

Abstract

When using discovered knowledge for decision making (e.g. classification in the case of machine learning), the question of reliability becomes very important. In contrast to the global view of an algorithm (evaluating its overall accuracy on some test data) and to multistrategy learning (voting among several classifiers), we propose a “local” evaluation of each example using a single classifier. The basic idea is to learn to recognize the correct decisions made by the classifier. This is done by creating a new class attribute, “match”, and running the learning algorithm again on the same input attributes. We call this second step verification. First preliminary experimental results of this method, obtained with C4.5 and CN4, are reported. They show that: (1) if the classification accuracy is very high, it makes no sense to perform the verification step, since it will produce only the majority rule; (2) in multiple-class and/or noisy domains, the verification accuracy can be significantly higher than the classification accuracy.
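
To make the two steps concrete, the following is a minimal sketch of the verification idea in Python; it is an illustration, not the authors' implementation. scikit-learn's DecisionTreeClassifier stands in for C4.5/CN4, and the iris data set, the 70/30 split, and the depth limit on the first tree are assumptions chosen only so that the example runs end to end.

    # Sketch of the verification step: learn a second classifier that
    # predicts whether the first classifier's decision is correct.
    # DecisionTreeClassifier is a stand-in for C4.5/CN4 (assumption);
    # the data set and split are illustrative only.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Step 1 (classification): learn the original decision problem.
    # The tree is kept deliberately shallow so that it makes some
    # mistakes and the "match" attribute has both values.
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

    # Step 2 (verification): create the new class attribute "match"
    # (1 if the classifier's decision was correct, 0 otherwise) and
    # learn it from the same input attributes.
    match_train = (clf.predict(X_train) == y_train).astype(int)
    verifier = DecisionTreeClassifier(random_state=0).fit(X_train, match_train)

    # Compare classification accuracy with verification accuracy.
    match_test = (clf.predict(X_test) == y_test).astype(int)
    print("classification accuracy:", clf.score(X_test, y_test))
    print("verification accuracy:  ", verifier.score(X_test, match_test))

Note that if the first classifier is very accurate, “match” is almost always 1 and the verifier degenerates to the majority rule, which is exactly observation (1) above.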

References

  1. Berka, P., Ivánek, J.: Automated Knowledge Acquisition for PROSPECTOR-like Expert Systems. In: Bergadano, De Raedt (eds.), Proc. ECML'94, Springer, 1994, 339–342.
  2. Brazdil, P., Gama, J., Henery, B.: Characterizing the Applicability of Classification Algorithms Using Meta-Level Learning. In: Bergadano, De Raedt (eds.), Proc. ECML'94, Springer, 1994, 83–102.
  3. Bruha, I., Kočková, S.: A Covering Learning Algorithm for Cost-Sensitive and Noisy Environments. In: Proc. of the ECML'93 Workshop on Learning Robots, 1993.
  4. Clark, P., Niblett, T.: The CN2 Induction Algorithm. Machine Learning 3 (1989), 261–283.
  5. Domingos, P., Pazzani, M.: Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier. In: Saitta (ed.), Proc. ICML'96, Morgan Kaufmann, 1996, 105–112.
  6. Kodratoff, Y., Sleeman, D., Uszynski, M., Causse, K., Craw, S.: Building a Machine Learning Toolbox. In: Steels, Lepape (eds.), Enhancing the Knowledge-Engineering Process — Contributions from Esprit. Elsevier, 1992, 81–108.
  7. Kohavi, R., Wolpert, D.H.: Bias plus Variance Decomposition for Zero-One Loss Functions. In: Saitta (ed.), Proc. ICML'96, Morgan Kaufmann, 1996, 275–283.
  8. Murphy, P.M., Aha, D.W.: UCI Repository of Machine Learning Databases. University of California, Irvine, Dept. of Information and Computer Science.
  9. Piatetsky-Shapiro, G., Matheus, C., Smyth, P.: KDD-93: Progress and Challenges in Knowledge Discovery in Databases, 1993.
  10. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
  11. Taylor, C., Michie, D., Spiegelhalter, D.: Machine Learning, Neural and Statistical Classification. Ellis Horwood, 1994.
  12. Tsumoto, S., Tanaka, H.: Automated Selection of Rule Induction Methods Based on Recursive Iteration of Resampling Methods and Multiple Statistical Testing. In: Proc. KDD-95, 1995, 312–317.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Petr Berka
  1. Laboratory of Intelligent Systems, Prague University of Economics, Prague
