Abstract
The simple Bayes algorithm rests on the assumption that every feature is independent of all other features, given the class. Because this independence assumption is almost always violated in practice, the crude independence model has largely been rejected in favor of more complicated alternatives, at least by researchers familiar with the theoretical issues. In this study, we attempted to increase the prediction accuracy of the simple Bayes model. Since combining classifiers has been proposed as a promising direction for improving the performance of individual classifiers, we made use of AdaBoost, with the difference that in each boosting iteration we applied a discretization method and removed redundant features using a filter feature selection method. Finally, we performed a large-scale comparison, on 26 standard benchmark datasets, against other attempts to improve the accuracy of the simple Bayes algorithm as well as against other state-of-the-art algorithms and ensembles, and we obtained better accuracy in most cases while also requiring less training time.
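The following is a minimal sketch of the scheme the abstract describes: a two-class AdaBoost loop whose base learner is naive Bayes, where each boosting round first discretizes the features and then removes redundant ones with a filter. The concrete choices below (quantile binning via KBinsDiscretizer, mutual-information ranking via SelectKBest, CategoricalNB) are plausible stand-ins under stated assumptions, not necessarily the exact discretization and filter methods used in the paper.

```python
# Sketch of boosted naive Bayes with per-round discretization and filter
# feature selection, as outlined in the abstract. Quantile binning and
# mutual-information ranking are ASSUMED stand-ins for the paper's actual
# discretization and filter methods; labels are assumed to be {0, 1}.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer


def fit_boosted_nb(X, y, n_rounds=10, n_bins=5, k_features=8):
    n = len(y)
    w = np.full(n, 1.0 / n)          # AdaBoost instance weights
    rounds = []                      # (alpha, discretizer, selector, model)
    for _ in range(n_rounds):
        # Re-discretize the continuous features in every round.
        disc = KBinsDiscretizer(n_bins=n_bins, encode="ordinal",
                                strategy="quantile")
        Xd = disc.fit_transform(X)
        # Filter feature selection: keep the k highest-scoring features.
        sel = SelectKBest(mutual_info_classif, k=min(k_features, X.shape[1]))
        Xs = sel.fit_transform(Xd, y)
        # Weighted naive Bayes base learner on the reduced, discrete data.
        nb = CategoricalNB(min_categories=n_bins)
        nb.fit(Xs, y, sample_weight=w)
        miss = nb.predict(Xs) != y
        err = float(np.clip(w @ miss, 1e-10, None))
        if err >= 0.5:               # base learner no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        # Up-weight misclassified instances, then renormalize.
        w *= np.where(miss, np.exp(alpha), np.exp(-alpha))
        w /= w.sum()
        rounds.append((alpha, disc, sel, nb))
    return rounds


def predict_boosted_nb(rounds, X):
    score = np.zeros(len(X))
    for alpha, disc, sel, nb in rounds:
        h = nb.predict(sel.transform(disc.transform(X)))
        score += alpha * np.where(h == 1, 1.0, -1.0)  # vote in {-1, +1}
    return (score > 0).astype(int)
```

As a usage illustration on a binary benchmark, `rounds = fit_boosted_nb(X, y)` followed by `predict_boosted_nb(rounds, X_test)` yields the weighted-vote prediction; extending the sketch to multiclass problems would require a SAMME-style vote rather than the signed binary score used here.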
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Kotsiantis, S.B., Pintelas, P.E. (2004). Increasing the Classification Accuracy of Simple Bayesian Classifier. In: Bussler, C., Fensel, D. (eds) Artificial Intelligence: Methodology, Systems, and Applications. AIMSA 2004. Lecture Notes in Computer Science, vol 3192. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30106-6_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-22959-9
Online ISBN: 978-3-540-30106-6