
Abstract

We propose the recursive autonomy identification (RAI) algorithm for constraint-based Bayesian network structure learning. The RAI algorithm learns the structure by sequentially applying conditional independence (CI) tests, directing edges, and decomposing the structure into autonomous sub-structures. This sequence of operations is repeated recursively for each autonomous sub-structure while the order of the CI tests is increased. Unlike other constraint-based algorithms, which first d-separate the structure and only then direct the resulting undirected graph, the RAI algorithm interleaves the two processes from the outset and throughout the procedure. As a result, learning a structure with the RAI algorithm requires fewer high-order CI tests, which reduces complexity and run-time while increasing structural and prediction accuracy, as demonstrated in extensive experimentation.
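
The abstract outlines the core recursion: thin the current structure with CI tests of a given order, direct edges, decompose the result into autonomous sub-structures, and repeat on each sub-structure with the test order increased. The Python sketch below illustrates only that recursive skeleton under stated assumptions; it is not the authors' implementation. The names rai, g_start, g_ex and ci_independent are illustrative, the independence oracle in the usage example is hand-written, and the edge-direction and decomposition steps are only indicated in comments.

from itertools import combinations

def rai(order, g_start, g_ex, ci_independent):
    """Schematic recursive scheme inspired by the abstract (not the authors' code).

    order          -- current CI-test order (size of the conditioning set)
    g_start        -- dict: node -> set of neighbours (current undirected skeleton)
    g_ex           -- set of nodes treated as exogenous causes of g_start
    ci_independent -- callable (x, y, cond_set) -> bool deciding conditional independence
    """
    # Exit condition: no node has enough neighbours to support a test of this order.
    if all(len(nbrs) <= order for nbrs in g_start.values()):
        return g_start

    # 1. Thinning: remove an edge X-Y if X and Y are judged independent given some
    #    conditioning set of size `order` drawn from X's neighbours and the exogenous nodes.
    for x in list(g_start):
        for y in list(g_start[x]):
            candidates = (g_start[x] | g_ex) - {y}
            if any(ci_independent(x, y, set(s))
                   for s in combinations(candidates, order)):
                g_start[x].discard(y)
                g_start[y].discard(x)

    # 2. Edge direction and 3. decomposition into autonomous ancestor/descendant
    #    sub-structures are omitted in this sketch; only the core recursion with an
    #    increased CI-test order is shown.
    return rai(order + 1, g_start, g_ex, ci_independent)

# Toy usage with a hand-written independence oracle (purely illustrative):
skeleton = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
oracle = lambda x, y, z: {x, y} == {"A", "B"} and "C" in z  # encodes A independent of B given C
print(rai(order=0, g_start=skeleton, g_ex=set(), ci_independent=oracle))

Passing the CI decision as a callable keeps the recursion agnostic to how independence is judged, whether by thresholded conditional mutual information, a statistical test, or an oracle.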

Keywords

Bayesian Network · Recursive Call · Minimum Description Length · Conditional Mutual Information · Bayesian Network Structure


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Raanan Yehezkel (1)
  • Boaz Lerner (1)
  1. Pattern Analysis and Machine Learning Lab, Department of Electrical & Computer Engineering, Ben-Gurion University, Beer-Sheva, Israel
