Computational Statistics, Volume 32, Issue 2, pp 691–716

Graph_sampler: a simple tool for fully Bayesian analyses of DAG-models

  • Sagnik Datta
  • Ghislaine Gayraud
  • Eric Leclerc
  • Frederic Y. Bois
Original Paper


Bayesian networks (BNs) are graphical models widely used to draw statistical inferences about directed acyclic graphs (DAGs). We present Graph_sampler, a fast, free software written in C for structural inference on BNs. Graph_sampler uses a fully Bayesian approach that combines the marginal likelihood of the data with prior information about the network structure. The software handles both continuous and discrete data, with a different model formulated for each data type. It also provides a wide variety of structure priors that can encode either global or local properties of the graph structure; depending on the prior selected, a wide range of hyperparameter values can be specified, making the prior either informative or uninformative. We propose a new and much faster jumping kernel strategy for the Metropolis–Hastings algorithm. The C source code distributed is compact, fast, and uses little memory and disk storage. We assess the performance of Graph_sampler through several analyses based on simulated data sets and on synthetic as well as real networks.
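To illustrate the kind of sampler the abstract describes, the following is a minimal, hypothetical sketch of Metropolis–Hastings sampling over DAG structures: single-edge toggle proposals, an acyclicity check, and acceptance based on a score combining likelihood and structure prior. The `log_score` function here is a toy sparsity penalty standing in for the data-dependent marginal likelihood and structure priors used by Graph_sampler; it is not the software's actual implementation or jumping kernel.

```python
import math
import random

def is_dag(n, edges):
    # Kahn's algorithm: the graph is acyclic iff every node
    # can be removed in topological order.
    indeg = [0] * n
    for (_, v) in edges:
        indeg[v] += 1
    queue = [u for u in range(n) if indeg[u] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for (a, b) in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == n

def log_score(edges):
    # Toy stand-in for log marginal likelihood + log structure prior:
    # a simple sparsity penalty on the number of edges.
    return -0.5 * len(edges)

def mh_structure_sampler(n, n_iter=2000, seed=0):
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    edges = set()                       # start from the empty DAG
    cur = log_score(edges)
    samples = []
    for _ in range(n_iter):
        (i, j) = rng.choice(pairs)      # propose toggling one directed edge
        prop = set(edges)
        if (i, j) in prop:
            prop.remove((i, j))
        else:
            prop.add((i, j))
        if is_dag(n, prop):             # reject moves that create a cycle
            new = log_score(prop)
            if math.log(rng.random()) < new - cur:  # MH acceptance (symmetric proposal)
                edges, cur = prop, new
        samples.append(frozenset(edges))
    return samples
```

Because the toggle proposal is symmetric, the acceptance ratio reduces to the score difference; a real implementation would also use richer moves (e.g. edge reversal) and an incremental acyclicity test for speed.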


Keywords: Bayesian networks · Structure learning · Posterior distribution · MCMC · Metropolis–Hastings algorithm



S. Datta is funded by a Ph.D. studentship from the French Ministry of Research. The research leading to these results has received funding from the Innovative Medicines Initiative Joint Undertaking, under Grant Agreement No. 115439 (StemBANCC), resources of which are composed of a financial contribution from the European Union Seventh Framework Programme (FP7/2007-2013) and in-kind contributions from EFPIA companies. This publication reflects only the authors' views, and neither the IMI JU, nor EFPIA, nor the European Commission is liable for any use that may be made of the information contained therein.



Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  • Sagnik Datta (1)
  • Ghislaine Gayraud (2)
  • Eric Leclerc (3)
  • Frederic Y. Bois (4)
  1. BMBI, Sorbonne Universités, Université de Technologie de Compiègne, Compiègne cedex, France
  2. LMAC, Sorbonne Universités, Université de Technologie de Compiègne, Compiègne cedex, France
  3. LIMMS/CNRS-IIS (UMI 2820), Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
  4. INERIS, DRC/VIVA/METO, Parc ALATA, Verneuil en Halatte, France
