
Machine Learning, 71:265

Improving the structure MCMC sampler for Bayesian networks by introducing a new edge reversal move

  • Marco Grzegorczyk
  • Dirk Husmeier

Abstract

Applications of Bayesian networks in systems biology are computationally demanding due to the large number of model parameters. Conventional MCMC schemes based on proposal moves in structure space tend to be too slow in mixing and convergence, and have recently been superseded by proposal moves in the space of node orders. A disadvantage of the latter approach is the intrinsic inability to specify the prior probability on network structures explicitly. The relative paucity of different experimental conditions in contemporary systems biology implies a strong influence of the prior probability on the posterior probability and, hence, the outcome of inference. Consequently, the paradigm of performing MCMC proposal moves in order rather than structure space is not entirely satisfactory. In the present article, we propose a new and more extensive edge reversal move in the original structure space, and we show that this significantly improves the convergence of the classical structure MCMC scheme.
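The classical structure MCMC scheme that the article improves on can be sketched as follows. This is a hypothetical minimal illustration, not the paper's method: the function names (`propose`, `structure_mcmc`), the placeholder `score` function, and the symmetric-proposal simplification are all assumptions of the sketch, and the paper's contribution, a more extensive edge reversal move, is not shown — only the conventional single-edge moves whose slow mixing motivates it.

```python
import math
import random

# Toy structure MCMC over DAGs (illustration only, not the paper's REV move).
# A graph is a set of directed edges over nodes 0..n-1; `score` stands in
# for the log marginal likelihood plus the log graph prior.

def is_acyclic(n, edges):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    indeg = {v: 0 for v in range(n)}
    succ = {v: [] for v in range(n)}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    visited = 0
    while queue:
        u = queue.pop()
        visited += 1
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return visited == n

def propose(n, edges, rng):
    """Conventional single-edge move: add, delete, or reverse one edge."""
    new = set(edges)
    move = rng.choice(["add", "delete", "reverse"])
    if move == "add":
        u, v = rng.sample(range(n), 2)  # two distinct nodes
        new.add((u, v))
    elif new:  # delete and reverse need at least one edge
        u, v = rng.choice(sorted(new))
        new.discard((u, v))
        if move == "reverse":
            new.add((v, u))
    return new

def structure_mcmc(n, score, steps, seed=0):
    """Metropolis sampler; treats the proposal as symmetric for brevity
    (a full implementation must include the Hastings correction for the
    differing neighbourhood sizes of the current and proposed graphs)."""
    rng = random.Random(seed)
    current = set()  # start from the empty graph
    for _ in range(steps):
        candidate = propose(n, current, rng)
        if not is_acyclic(n, candidate):
            continue  # cyclic proposals are rejected outright
        if math.log(rng.random()) < score(candidate) - score(current):
            current = candidate
    return current
```

The weakness of this scheme, as the abstract notes, is that each accepted move changes at most one edge, so the chain mixes slowly through structure space; the proposed extended edge reversal move addresses this while keeping the explicit structure prior that order-space samplers lose.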

Keywords

Bayesian networks · Structure learning · MCMC sampling


Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  1. Centre for Systems Biology at Edinburgh (CSBE), Edinburgh, UK
  2. Biomathematics and Statistics Scotland (BioSS), Edinburgh, UK
