Machine Learning, Volume 29, Issue 2–3, pp 103–130

On the Optimality of the Simple Bayesian Classifier under Zero-One Loss

  • Pedro Domingos
  • Michael Pazzani

Abstract

The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containing clear attribute dependences suggest that the answer to this question may be positive. This article shows that, although the Bayesian classifier's probability estimates are only optimal under quadratic loss if the independence assumption holds, the classifier itself can be optimal under zero-one loss (misclassification rate) even when this assumption is violated by a wide margin. The region of quadratic-loss optimality of the Bayesian classifier is in fact a second-order infinitesimal fraction of the region of zero-one optimality. This implies that the Bayesian classifier has a much greater range of applicability than previously thought. For example, in this article it is shown to be optimal for learning conjunctions and disjunctions, even though they violate the independence assumption. Further, studies in artificial domains show that it will often outperform more powerful classifiers for common training set sizes and numbers of attributes, even if its bias is a priori much less appropriate to the domain. This article's results also imply that detecting attribute dependence is not necessarily the best way to extend the Bayesian classifier, and this is also verified empirically.

Keywords: simple Bayesian classifier · naive Bayesian classifier · zero-one loss · optimal classification · induction with attribute dependences
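
To make the abstract's central claim concrete, here is a minimal sketch, not taken from the article, of a simple (naive) Bayesian classifier making zero-one-loss decisions; the toy conjunction dataset and the smoothing constant alpha are illustrative assumptions. The target concept y = x1 AND x2 violates the independence-given-the-class assumption, yet the classifier labels every example correctly, because under zero-one loss only the ranking of the class posteriors matters, not their numeric accuracy.

    import numpy as np

    def train_naive_bayes(X, y, alpha=0.5):
        """Estimate class priors and per-attribute conditionals from
        binary attributes. The smoothing constant alpha is an assumption
        made for this sketch, not a value specified by the article."""
        classes = np.unique(y)
        priors, cond = {}, {}
        for c in classes:
            Xc = X[y == c]
            priors[c] = len(Xc) / len(X)
            # Smoothed estimate of P(attribute_j = 1 | class = c)
            cond[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
        return classes, priors, cond

    def predict(x, classes, priors, cond):
        """Zero-one-loss decision: return the class with the highest
        (possibly poorly estimated) posterior. Only the argmax matters,
        so the probability estimates can be far off while the
        classification is still correct."""
        best_c, best_score = None, -np.inf
        for c in classes:
            p = cond[c]
            # log P(c) + sum_j log P(x_j | c), assuming independence
            score = np.log(priors[c]) + np.sum(
                x * np.log(p) + (1 - x) * np.log(1 - p))
            if score > best_score:
                best_c, best_score = c, score
        return best_c

    # Toy example: the conjunction y = x1 AND x2, whose attributes are
    # dependent given the class, is still classified perfectly.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = X[:, 0] & X[:, 1]
    classes, priors, cond = train_naive_bayes(X, y)
    print([int(predict(x, classes, priors, cond)) for x in X])  # [0, 0, 0, 1]

The posterior estimates here are distorted by the false independence assumption (they would fare poorly under quadratic loss), but their ordering is correct on every example, which is precisely the gap between probability-estimation optimality and classification optimality that the article analyzes.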

Copyright information

© Kluwer Academic Publishers 1997

Authors and Affiliations

  • Pedro Domingos (1)
  • Michael Pazzani (1)

  1. Department of Information and Computer Science, University of California, Irvine
