Machine Learning, Volume 29, Issue 2, pp 131–163

Bayesian Network Classifiers

  • Nir Friedman
  • Dan Geiger
  • Moises Goldszmidt

DOI: 10.1023/A:1007465528199

Cite this article as:
Friedman, N., Geiger, D. & Goldszmidt, M. Machine Learning (1997) 29: 131. doi:10.1023/A:1007465528199

Abstract

Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
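To make the naive Bayes baseline discussed in the abstract concrete, here is a minimal sketch of a discrete naive Bayes classifier with Laplace smoothing. This is an illustrative implementation, not the authors' code; all names and the toy data are invented for the example. TAN would additionally condition each feature on one other feature chosen via a maximum-weight spanning tree over conditional mutual information, which is omitted here.

```python
from collections import Counter, defaultdict
import math

def train_naive_bayes(X, y, alpha=1.0):
    """Train a naive Bayes classifier over discrete features.

    X: list of feature tuples, y: list of class labels.
    alpha: Laplace smoothing pseudo-count.
    Returns a predict function mapping a feature tuple to a class.
    """
    class_counts = Counter(y)
    n = len(y)
    n_features = len(X[0])
    # counts[c][i][v] = number of class-c examples with value v for feature i
    counts = {c: [defaultdict(int) for _ in range(n_features)]
              for c in class_counts}
    values = [set() for _ in range(n_features)]  # observed domain per feature
    for xi, c in zip(X, y):
        for i, v in enumerate(xi):
            counts[c][i][v] += 1
            values[i].add(v)

    def predict(x):
        best, best_lp = None, -math.inf
        for c, nc in class_counts.items():
            # log P(c) + sum_i log P(x_i | c), the naive independence assumption
            lp = math.log(nc / n)
            for i, v in enumerate(x):
                lp += math.log((counts[c][i][v] + alpha) /
                               (nc + alpha * len(values[i])))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    return predict

# Toy data (invented): two binary features, two classes
X = [(1, 1), (1, 0), (0, 1), (0, 0)]
y = ["a", "a", "b", "b"]
clf = train_naive_bayes(X, y)
```

Note that training is a single counting pass over the data, with no search over structures, which is the computational simplicity the abstract attributes to naive Bayes and which TAN preserves.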

Keywords: Bayesian networks, classification

Copyright information

© Kluwer Academic Publishers 1997

Authors and Affiliations

  • Nir Friedman (1)
  • Dan Geiger (2)
  • Moises Goldszmidt (3)

  1. Computer Science Division, University of California, Berkeley
  2. Computer Science Department, Technion, Haifa, Israel
  3. SRI International, Menlo Park