A Linear-Bayes Classifier

  • João Gama
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1952)

Abstract

Naive Bayes is a well-known and well-studied algorithm in both statistics and machine learning. Despite its limited expressive power, it performs surprisingly well in a wide variety of domains, including many where there are clear dependencies between attributes. In this paper we address its main perceived limitation: its inability to deal with attribute dependencies. We present Linear Bayes, which uses a multivariate normal distribution for the continuous attributes to compute the required probabilities, so that the interdependencies between the continuous attributes are taken into account. In the empirical evaluation, we compare Linear Bayes against a naive Bayes that discretizes continuous attributes, a naive Bayes that assumes a univariate Gaussian for each continuous attribute, and a standard linear discriminant function. We show that Linear Bayes is a plausible algorithm that competes quite well against other well-established techniques.
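
To make the method concrete, the sketch below illustrates the continuous part of such a classifier: one multivariate Gaussian per class with a pooled covariance matrix, so the resulting decision boundaries are linear. This is a minimal illustration under stated assumptions, not the paper's implementation: the abstract does not say whether the covariance is pooled or estimated per class, the class name LinearBayesSketch is hypothetical, and the nominal attributes (which would contribute standard naive-Bayes frequency estimates) are omitted.

    # Illustrative sketch of the continuous part of a Linear-Bayes-style
    # classifier: per-class mean, pooled covariance (an assumption; the
    # abstract does not specify the covariance estimate).
    import numpy as np

    class LinearBayesSketch:  # hypothetical name, not from the paper
        def fit(self, X, y):
            # X: (n_samples, n_features) continuous attributes; y: class labels.
            X, y = np.asarray(X, dtype=float), np.asarray(y)
            self.classes_ = np.unique(y)
            self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
            self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
            # Pooled covariance: per-class scatter matrices summed, then
            # normalised by (n_samples - n_classes).
            n, d = X.shape
            scatter = np.zeros((d, d))
            for c, mu in zip(self.classes_, self.means_):
                diff = X[y == c] - mu
                scatter += diff.T @ diff
            self.cov_inv_ = np.linalg.inv(scatter / (n - len(self.classes_)))
            return self

        def predict(self, X):
            # Score per class: log prior minus half the Mahalanobis distance
            # to the class mean; the shared log-determinant term cancels.
            X = np.asarray(X, dtype=float)
            scores = []
            for prior, mu in zip(self.priors_, self.means_):
                diff = X - mu
                mahal = np.einsum('ij,jk,ik->i', diff, self.cov_inv_, diff)
                scores.append(np.log(prior) - 0.5 * mahal)
            return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]

Because the covariance is shared across classes, the quadratic term of the Gaussian log-density is identical for every class and cancels when classes are compared, which is what makes the decision boundaries linear; a per-class covariance would instead yield quadratic boundaries.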


Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • João Gama
  1. LIACC, FEP, University of Porto, Porto, Portugal