
Statistics and Computing, Volume 27, Issue 2, pp 301–318

Efficient designs for Bayesian networks with sub-tree bounds

  • Marie Lilleborge
  • Jo Eidsvik
Article

Abstract

We present upper and lower bounds for information measures, and use these to find the optimal design of experiments for Bayesian networks. The bounds are inspired by properties of the junction tree algorithm, which is commonly used for calculating conditional probabilities in graphical models such as Bayesian networks. We demonstrate methods for iteratively improving the upper and lower bounds until they are sufficiently tight. We illustrate properties of the algorithm through tutorial examples, both for the case where we want to guarantee optimality and for the case where the goal is an approximate solution with a quality guarantee. We further use the bounds to accelerate established algorithms for constructing useful designs. An example with petroleum fields in the North Sea is studied, where the design problem concerns exploration drilling campaigns. All of our examples consider binary random variables, but the theory also applies to other discrete or continuous distributions.
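The pruning idea behind the abstract, maintaining an interval of upper and lower bounds on the information value of each candidate design and discarding candidates whose upper bound cannot beat the best value found so far, can be sketched in a few lines. The sketch below is illustrative only: it uses the generic information-theoretic bound I(X_S; X_rest) ≤ min(H(X_S), H(X_rest)) on an invented three-variable binary chain network, not the paper's junction-tree sub-tree bounds, and all distributions, variable names, and the one-step "refine by exact evaluation" rule are assumptions made for the example.

```python
import itertools
import math

def entropy(dist):
    """Shannon entropy (in bits) of a dict mapping outcomes to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    """Marginalize the full joint onto the variables whose indices are in `idx`."""
    out = {}
    for x, p in joint.items():
        key = tuple(x[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_information(joint, design, rest):
    """I(X_design ; X_rest) = H(design) + H(rest) - H(design, rest)."""
    return (entropy(marginal(joint, design))
            + entropy(marginal(joint, rest))
            - entropy(marginal(joint, design + rest)))

# Toy chain network X0 -> X1 -> X2 over binary variables (illustrative numbers).
p0 = {0: 0.4, 1: 0.6}                              # P(X0)
p1 = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}    # P(X1 | X0)
p2 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.25, 1: 0.75}}  # P(X2 | X1)
joint = {(a, b, c): p0[a] * p1[a][b] * p2[b][c]
         for a, b, c in itertools.product([0, 1], repeat=3)}

variables = (0, 1, 2)
k = 1  # design size: observe one variable

# For each candidate design S, hold [lower, upper] bounds on I(X_S ; X_rest).
# Initially lower = 0 and upper = min(H(S), H(rest)); "refining" a candidate
# here simply means computing its exact value, which collapses the interval.
candidates = {}
for S in itertools.combinations(variables, k):
    rest = tuple(v for v in variables if v not in S)
    upper = min(entropy(marginal(joint, S)), entropy(marginal(joint, rest)))
    candidates[S] = [0.0, upper]

best_design, best_value = None, -1.0
while candidates:
    # Refine the candidate with the largest upper bound first.
    S = max(candidates, key=lambda s: candidates[s][1])
    rest = tuple(v for v in variables if v not in S)
    value = mutual_information(joint, S, rest)
    del candidates[S]
    if value > best_value:
        best_design, best_value = S, value
    # Prune every candidate whose upper bound cannot beat the incumbent.
    candidates = {s: b for s, b in candidates.items() if b[1] > best_value}

print(best_design, round(best_value, 4))
```

Since the upper bound always dominates the true value, a pruned candidate can never beat the incumbent, so the loop returns the exact optimum while possibly skipping some exact evaluations; the paper's contribution is, roughly, supplying much tighter bounds so that far more candidates are pruned.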

Keywords

Information measure · Bayesian networks · Upper and lower bounds · Almond tree · Junction tree algorithm · Petroleum exploration · Subset selection · Design of experiment


Acknowledgments

This work is funded by Statistics for Innovation, \((\text {sfi})^2\), one of the Norwegian Centres for Research-based Innovation.


Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Norwegian Computing Center, Oslo, Norway
  2. Department of Mathematical Sciences, NTNU, Trondheim, Norway
