Encyclopedia of Machine Learning

2010 Edition
Editors: Claude Sammut, Geoffrey I. Webb

Semi-Naive Bayesian Learning

  • Fei Zheng
  • Geoffrey I. Webb
Reference work entry
DOI: https://doi.org/10.1007/978-0-387-30164-8_748

Definition

Semi-naive Bayesian learning refers to a field of Supervised Classification that seeks to enhance the classification and conditional probability estimation accuracy of naive Bayes by relaxing its attribute independence assumption.

Motivation and Background

The assumption underlying naive Bayes is that attributes are independent of each other, given the class. This is an unrealistic assumption for many applications. Violations of this assumption can render naive Bayes' classification suboptimal. There have been many attempts to improve the classification accuracy and probability estimation of naive Bayes by relaxing the attribute independence assumption while at the same time retaining much of its simplicity and efficiency.
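For concreteness, under this assumption naive Bayes selects the class that maximizes the product of the class prior and the per-attribute conditional probabilities. In standard notation, with attribute values x_1, …, x_n and class y:

    \hat{y} \;=\; \operatorname*{arg\,max}_{y}\; P(y) \prod_{i=1}^{n} P(x_i \mid y)

Semi-naive Bayesian methods weaken the factored term \prod_{i} P(x_i \mid y), for example by conditioning some attributes on others or by merging correlated attributes.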

Taxonomy of Semi-Naive Bayesian Techniques

Semi-naive Bayesian methods can be roughly subdivided into five high-level strategies for relaxing the independence assumption.
  • The first strategy forms an attribute subset by deleting attributes, so as to remove harmful inter-attribute dependencies, and then applies conventional naive Bayes to the selected subset (a minimal code sketch of this strategy appears below).
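As a minimal sketch of this first strategy, in the spirit of selective naive Bayes, greedy backward attribute elimination wrapped around a categorical naive Bayes classifier might look like the following. This is only an illustration: the function names and toy data are assumptions, and a practical implementation would score candidate subsets with cross-validation rather than resubstitution accuracy.

import math
from collections import Counter, defaultdict

def nb_fit(X, y, attrs):
    # Class priors and value counts for P(x_a | y), restricted to `attrs`.
    priors = Counter(y)
    cond = {a: defaultdict(Counter) for a in attrs}
    values = {a: set() for a in attrs}
    for xi, yi in zip(X, y):
        for a in attrs:
            cond[a][yi][xi[a]] += 1
            values[a].add(xi[a])
    return priors, cond, values

def nb_predict(model, attrs, x):
    priors, cond, values = model
    total = sum(priors.values())
    def log_posterior(c):
        # log P(c) plus Laplace-smoothed log P(x_a | c) for each kept attribute
        s = math.log(priors[c] / total)
        for a in attrs:
            s += math.log((cond[a][c][x[a]] + 1) / (priors[c] + len(values[a])))
        return s
    return max(priors, key=log_posterior)

def resub_accuracy(attrs, X, y):
    # Resubstitution accuracy for brevity; use cross-validation in practice.
    model = nb_fit(X, y, attrs)
    return sum(nb_predict(model, attrs, xi) == yi for xi, yi in zip(X, y)) / len(y)

def backward_elimination(X, y):
    # Greedily delete any attribute whose removal does not hurt accuracy.
    attrs = list(range(len(X[0])))
    best = resub_accuracy(attrs, X, y)
    improved = True
    while improved and len(attrs) > 1:
        improved = False
        for a in list(attrs):
            trial = [b for b in attrs if b != a]
            acc = resub_accuracy(trial, X, y)
            if acc >= best:
                attrs, best, improved = trial, acc, True
                break
    return attrs

# Toy data: attribute 0 determines the class; attribute 1 is noise.
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "yes"]
print(backward_elimination(X, y))  # -> [0]; the redundant attribute is deleted

On the toy data, the second attribute carries no class information, so the wrapper removes it and naive Bayes is applied to the single remaining attribute.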

Copyright information

© Springer Science+Business Media, LLC 2011
