Adjusted probability naive Bayesian induction

  • Geoffrey I. Webb
  • Michael J. Pazzani
Scientific Track
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1502)

Abstract

Naive Bayesian classifiers utilise a simple mathematical model for induction. While it is known that the assumptions on which this model is based are frequently violated, the predictive accuracy obtained in discriminative classification tasks is surprisingly competitive in comparison to more complex induction techniques. Adjusted probability naive Bayesian induction adds a simple extension to the naive Bayesian classifier. A numeric weight is inferred for each class. During discriminative classification, the naive Bayesian probability of a class is multiplied by its weight to obtain an adjusted value. The use of this adjusted value in place of the naive Bayesian probability is shown to significantly improve predictive accuracy.
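The classification step described above can be sketched directly: compute the usual naive Bayesian score for each class, multiply it by that class's weight, and predict the class with the largest adjusted value. The Python sketch below is a minimal illustration only, not the authors' implementation; the class name AdjustedNaiveBayes, the Laplace-smoothed probability estimates, and the assumption that the per-class weights are supplied externally (the paper infers them from training data) are choices made here for concreteness.

from collections import defaultdict

class AdjustedNaiveBayes:
    """Naive Bayes over categorical attributes with per-class adjustment weights (sketch)."""

    def __init__(self, class_weights=None):
        # class_weights maps class label -> numeric weight; a weight of 1.0
        # for every class recovers the ordinary naive Bayesian classifier.
        self.class_weights = class_weights or {}

    def fit(self, X, y):
        # Estimate P(c) and P(attribute_j = v | c) from counts with Laplace smoothing.
        self.classes = sorted(set(y))
        self.class_counts = {c: y.count(c) for c in self.classes}
        n = len(y)
        self.priors = {c: (self.class_counts[c] + 1) / (n + len(self.classes))
                       for c in self.classes}
        self.value_counts = defaultdict(int)   # (class, attribute index, value) -> count
        self.n_values = [len(set(row[j] for row in X)) for j in range(len(X[0]))]
        for xi, c in zip(X, y):
            for j, v in enumerate(xi):
                self.value_counts[(c, j, v)] += 1
        return self

    def adjusted_score(self, x, c):
        # Naive Bayesian score P(c) * prod_j P(x_j | c), then multiplied by
        # the class's adjustment weight.
        score = self.priors[c]
        for j, v in enumerate(x):
            score *= (self.value_counts[(c, j, v)] + 1) / (self.class_counts[c] + self.n_values[j])
        return self.class_weights.get(c, 1.0) * score

    def predict(self, x):
        # Classify by the largest adjusted value rather than the raw naive Bayesian probability.
        return max(self.classes, key=lambda c: self.adjusted_score(x, c))

With, say, class_weights={"yes": 1.4, "no": 1.0}, a borderline instance whose raw naive Bayesian scores slightly favour "no" can be tipped to "yes"; choosing the weights so that such adjustments improve accuracy is the induction step the paper studies, and setting every weight to 1.0 reduces the classifier to standard naive Bayes.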

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Geoffrey I. Webb (1)
  • Michael J. Pazzani (2)
  1. School of Computing and Mathematics, Deakin University, Geelong, Australia
  2. Department of Information and Computer Science, University of California, Irvine, Irvine, USA
