Semi-Naive Bayesian Learning
Motivation and Background
The assumption underlying naive Bayes is that attributes are independent of each other, given the class. This is an unrealistic assumption for many applications. Violations of this assumption can render naive Bayes’ classification suboptimal. There have been many attempts to improve the classification accuracy and probability estimation of naive Bayes by relaxing the attribute independence assumption while at the same time retaining much of its simplicity and efficiency.
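To make the independence assumption concrete, here is a minimal illustrative sketch (not part of the original entry) of a categorical naive Bayes classifier: each class is scored as the prior P(y) multiplied by the per-attribute conditional probabilities P(x_i | y), exactly the factorization the assumption licenses. The dataset, function names, and smoothing constant are all hypothetical choices for illustration.

```python
from collections import Counter

def train(examples):
    """examples: list of (attribute_tuple, class_label) pairs."""
    class_counts = Counter(label for _, label in examples)
    # attr_counts[(i, value, label)] = how often attribute i takes `value` in class `label`
    attr_counts = Counter()
    for attrs, label in examples:
        for i, v in enumerate(attrs):
            attr_counts[(i, v, label)] += 1
    return class_counts, attr_counts, len(examples)

def predict(model, attrs):
    class_counts, attr_counts, n = model
    best, best_score = None, -1.0
    for label, cc in class_counts.items():
        score = cc / n  # class prior P(y)
        for i, v in enumerate(attrs):
            # naive factorization: multiply P(x_i | y) for each attribute,
            # with add-one smoothing (fixed denominator keeps the sketch simple)
            score *= (attr_counts[(i, v, label)] + 1) / (cc + 2)
        if score > best_score:
            best, best_score = label, score
    return best
```

Because the score is a product of per-attribute terms, a strongly correlated (e.g., duplicated) attribute is effectively counted twice, which is exactly how violated independence can bias the classification.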
Taxonomy of Semi-Naive Bayesian Techniques
The first strategy forms an attribute subset by deleting attributes, removing those whose inter-dependencies with other attributes harm classification, so that naive Bayes is applied only to the remaining, more nearly independent attributes.