Learning Hybrid Bayesian Networks by MML

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4304)

Abstract

We use a Markov Chain Monte Carlo (MCMC) MML algorithm to learn hybrid Bayesian networks from observational data. Hybrid networks represent local structure using conditional probability tables (CPTs), logit models, decision trees, or hybrid models, i.e., combinations of the three. We compare this method with alternative local-structure learning algorithms based on the MDL and BDe metrics, presenting results for both real and artificial data sets. Hybrid models compare favourably with other local-structure learners: they permit simple representations when data are limited and richer representations when data are plentiful.
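To illustrate the trade-off the abstract describes, the following is a minimal sketch, not the paper's implementation, of a two-part MML comparison between a full CPT and a one-split decision tree as the local structure for a binary child with three binary parents. The synthetic data, the noise level (0.85), and the rough Wallace-Patrick-style tree structure cost are assumptions made for this example.

```python
# Illustrative sketch only: two-part MML cost of a full CPT versus a
# single-split decision tree for one binary child with three binary parents.
import random
from collections import defaultdict
from math import lgamma, log

LOG2 = log(2.0)

def adaptive_code_bits(counts):
    """Bits to transmit a sequence with the given outcome counts under an
    adaptive multinomial code with a uniform prior:
    -log2( prod_i c_i! * (k-1)! / (n+k-1)! )."""
    n, k = sum(counts), len(counts)
    ln_p = sum(lgamma(c + 1) for c in counts) + lgamma(k) - lgamma(n + k)
    return -ln_p / LOG2

# Synthetic data: the child copies parent x0 with probability 0.85 and is
# independent of x1 and x2, so a single split on x0 captures all structure.
random.seed(1)
data = []
for _ in range(300):
    x = tuple(random.randint(0, 1) for _ in range(3))
    y = x[0] if random.random() < 0.85 else 1 - x[0]
    data.append((x, y))

def partition_bits(key):
    """Data cost: sum of adaptive-code lengths over the cells induced by key."""
    cells = defaultdict(list)
    for x, y in data:
        cells[key(x)].append(y)
    return sum(adaptive_code_bits([ys.count(0), ys.count(1)])
               for ys in cells.values())

cpt_bits = partition_bits(lambda x: x)      # 8 cells, one per parent config
leaf_bits = partition_bits(lambda x: x[0])  # 2 leaves under a split on x0
# Rough tree structure cost: 1 bit for "split", log2(3) bits to name the
# split attribute, 1 bit per leaf. (The CPT's own structure cost is a shared
# constant and is omitted from both totals.)
tree_bits = 1 + log(3) / LOG2 + 2 + leaf_bits

print(f"full CPT:      {cpt_bits:.1f} bits")
print(f"decision tree: {tree_bits:.1f} bits")
```

With only a few hundred cases, the pooled leaves of the tree avoid paying the per-cell parameter cost of the eight CPT entries, so the tree's total message is shorter; as the sample grows, genuinely distinct cells would eventually justify the richer CPT representation.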





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

O’Donnell, R.T., Allison, L., Korb, K.B. (2006). Learning Hybrid Bayesian Networks by MML. In: Sattar, A., Kang, Bh. (eds) AI 2006: Advances in Artificial Intelligence. AI 2006. Lecture Notes in Computer Science, vol 4304. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11941439_23

  • DOI: https://doi.org/10.1007/11941439_23

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-49787-5

  • Online ISBN: 978-3-540-49788-2
