
Adjusted probability Naive Bayesian induction

  • Scientific Track
  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 1502))

Abstract

Naive Bayesian classifiers utilise a simple mathematical model for induction. While it is known that the assumptions on which this model is based are frequently violated, the predictive accuracy obtained in discriminate classification tasks is surprisingly competitive in comparison to more complex induction techniques. Adjusted probability naive Bayesian induction adds a simple extension to the naive Bayesian classifier. A numeric weight is inferred for each class. During discriminate classification, the naive Bayesian probability of a class is multiplied by its weight to obtain an adjusted value. The use of this adjusted value in place of the naive Bayesian probability is shown to significantly improve predictive accuracy.
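The adjustment itself is easy to sketch. The Python fragment below is a minimal illustration under assumed inputs: discrete attributes, conditional probability tables and class priors already estimated, and a per-class weight vector already inferred. The paper's procedure for inferring those weights is not reproduced here, and all numbers in the toy example are made up.

    import numpy as np

    def naive_bayes_scores(x, class_priors, cond_probs):
        """Unnormalised naive Bayes score P(c) * prod_i P(x_i | c) for each class."""
        scores = []
        for c, prior in enumerate(class_priors):
            score = prior
            for i, value in enumerate(x):
                score *= cond_probs[c][i][value]
            scores.append(score)
        return np.array(scores)

    def adjusted_nb_predict(x, class_priors, cond_probs, class_weights):
        """Multiply each class's naive Bayes score by its class weight,
        then predict the class with the largest adjusted value."""
        adjusted = naive_bayes_scores(x, class_priors, cond_probs) * class_weights
        return int(np.argmax(adjusted))

    # Toy usage: two classes, two binary attributes, hypothetical numbers.
    priors = [0.5, 0.5]
    cond = [  # cond[c][i][v] = P(attribute i takes value v | class c)
        [[0.8, 0.2], [0.6, 0.4]],   # class 0
        [[0.3, 0.7], [0.5, 0.5]],   # class 1
    ]
    weights = np.array([1.0, 1.5])  # illustrative per-class adjustment weights
    print(adjusted_nb_predict([1, 0], priors, cond, weights))

With uniform weights this reduces to the ordinary naive Bayesian classifier; the per-class weights only shift the decision boundary between classes.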




Editor information

Grigoris Antoniou, John Slaney


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Webb, G.I., Pazzani, M.J. (1998). Adjusted probability Naive Bayesian induction. In: Antoniou, G., Slaney, J. (eds) Advanced Topics in Artificial Intelligence. AI 1998. Lecture Notes in Computer Science, vol 1502. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095060


  • DOI: https://doi.org/10.1007/BFb0095060


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65138-3

  • Online ISBN: 978-3-540-49561-1

  • eBook Packages: Springer Book Archive
