Information-Based Evaluation Criterion for Classifier's Performance
Published: January 1991

  • Igor Kononenko¹
  • Ivan Bratko²

Machine Learning volume 6, pages 67–80 (1991)

Abstract

In the past few years, many systems for learning decision rules from examples have been developed. Because different systems allow different types of answers when classifying new instances, it is difficult to evaluate a system's classification power fairly against other classification systems or against human experts. Classification accuracy is usually used as the measure of classification performance, but it is known to have several defects: a fair evaluation criterion should exclude the influence of the class prior probabilities, which may enable a completely uninformed classifier to trivially achieve high classification accuracy. In this paper a method for evaluating the information score of a classifier's answers is proposed. It excludes the influence of prior probabilities, deals with various types of imperfect or probabilistic answers, and can also be used to compare performance across different domains.
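The full text is not included in this excerpt, but the idea the abstract describes can be illustrated concretely. The sketch below follows the commonly cited form of the Kononenko–Bratko information score, in which a correct answer for the true class is credited with the reduction in surprise relative to the prior, and a misleading answer is penalized by the symmetric quantity for the complementary event. The function name and exact formulation here are reconstructed from secondary accounts of the paper, not quoted from it.

```python
import math

def information_score(prior: float, posterior: float) -> float:
    """Per-instance information score for the true class of a test case.

    prior:     prior probability P(c) of the instance's true class c
    posterior: probability P'(c) the classifier assigned to that class

    A useful answer (posterior >= prior) is credited with the drop in
    surprise, -log2 P(c) + log2 P'(c); a misleading answer
    (posterior < prior) is penalized symmetrically via the
    complementary probabilities.
    """
    if posterior >= prior:
        return -math.log2(prior) + math.log2(posterior)
    return -(-math.log2(1.0 - prior) + math.log2(1.0 - posterior))
```

Under this score, an uninformed classifier that simply returns the prior earns 0 per instance no matter how skewed the class distribution is, which is exactly the property the abstract asks of a fair criterion; a confident correct answer on a rare class earns more bits than the same answer on a common one.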



Author information

Authors and Affiliations

  1. Faculty of Electrical and Computer Engineering, Trzaska 25, Ljubljana, Yugoslavia

    Igor Kononenko

  2. Faculty of Electrical and Computer Engineering, Jozef Stefan Institute, Jamova 39, Ljubljana, Yugoslavia

    Ivan Bratko

Cite this article

Kononenko, I., Bratko, I. Information-Based Evaluation Criterion for Classifier's Performance. Machine Learning 6, 67–80 (1991). https://doi.org/10.1023/A:1022642017308


Keywords

  • Classifier
  • Evaluation criteria
  • Machine learning
  • Information theory