PRM inference using Jaffray & Faÿ’s Local Conditioning

Abstract

Probabilistic Relational Models (PRMs) are a framework for compactly representing probability distributions. They result from the combination of Bayesian Networks (BNs), object-oriented languages, and relational models, and are specifically designed for efficient construction, maintenance, and exploitation in very large-scale problems, where BNs are known to perform poorly. Indeed, in large-scale problems, BNs often result from the combination of patterns (small BN fragments) repeated many times. PRMs exploit this feature by defining each pattern only once (as a PRM class) and using it through multiple instances, as prescribed by the object-oriented paradigm. This design keeps construction and maintenance costs low. In addition, by exploiting the structure of classes, the state-of-the-art PRM inference algorithm, Structured Variable Elimination (SVE), significantly outperforms classical BN inference algorithms such as Variable Elimination (VE) and Local Conditioning (LC). SVE is essentially an extension of VE that exploits classes to avoid redundant computations. In this article, we show that SVE can be further enhanced using LC. Although LC is often thought of as being outperformed by VE-like algorithms in BNs, we argue that it should play an important role for PRMs, because its features are particularly well suited to exploiting PRM classes. Relying on Faÿ and Jaffray's work, we show how LC can be used in conjunction with VE and derive an extension of SVE that outperforms it on large-scale problems. Numerical experiments highlight the practical efficiency of our algorithm.
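
To make the class/instance idea concrete, here is a minimal sketch in plain Python of the intuition behind SVE's reuse of class-level computations. All names, the toy factors, and the numbers are our own illustration (not the paper's code, which is not shown on this page); binary variable domains are assumed throughout. The point is only this: factors internal to a class are eliminated once, and the resulting factor over the class's interface is shared by every instance, whereas plain VE would redo that elimination for each instance of the fully instantiated BN.

```python
from itertools import product

def multiply(f, g):
    # Pointwise product of two factors; a factor is a (vars, table) pair,
    # where table maps assignments (tuples of 0/1, binary domains assumed)
    # to real numbers.
    fv, ft = f
    gv, gt = g
    out_vars = fv + [v for v in gv if v not in fv]
    table = {}
    for assign in product([0, 1], repeat=len(out_vars)):
        env = dict(zip(out_vars, assign))
        table[assign] = (ft[tuple(env[v] for v in fv)]
                         * gt[tuple(env[v] for v in gv)])
    return out_vars, table

def sum_out(f, var):
    # Marginalize `var` out of factor `f`.
    fv, ft = f
    i = fv.index(var)
    out_vars = fv[:i] + fv[i + 1:]
    table = {}
    for assign, p in ft.items():
        key = assign[:i] + assign[i + 1:]
        table[key] = table.get(key, 0.0) + p
    return out_vars, table

# A toy "class" with one internal variable H and one interface variable I,
# described by P(H) and P(I | H).  (The numbers are arbitrary.)
p_h = (['H'], {(0,): 0.7, (1,): 0.3})
p_i_given_h = (['I', 'H'], {(0, 0): 0.9, (1, 0): 0.1,
                            (0, 1): 0.2, (1, 1): 0.8})

# Class-level work, done ONCE: eliminate the internal variable H,
# leaving a factor over the interface variable I alone.
class_factor = sum_out(multiply(p_h, p_i_given_h), 'H')

# Instance-level work: every instance of the class reuses that factor,
# whereas plain VE would redo the elimination for each instance.
instances = {name: class_factor for name in ('x1', 'x2', 'x3')}
print(class_factor)  # P(I=0)=0.69, P(I=1)=0.31 (up to float rounding)
```

With n instances of the class, the internal elimination runs once instead of n times; this class-level reuse is the source of SVE's speedup that the article further improves by combining it with Local Conditioning.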

References

  1. Allen, D., & Darwiche, A. (2003). New advances in inference by recursive conditioning. In Proceedings of UAI (pp. 2–10).

  2. Arnborg, S., Corneil, D., & Proskurowski, A. (1987). Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods, 8(2), 277–284.

  3. Ben Naceur, O., & Gonzales, C. (2004). Une unification des algorithmes d'inférence de Pearl et de Jensen [A unification of Pearl's and Jensen's inference algorithms]. Revue d'Intelligence Artificielle, 18(2), 229–260.

  4. Cowell, R., Dawid, A., Lauritzen, S., & Spiegelhalter, D. (2007). Probabilistic networks and expert systems: Exact computational methods for Bayesian networks. Information Science and Statistics. New York: Springer.

  5. Cozman, F. (2000). Credal networks. Artificial Intelligence, 120, 199–233.

  6. Dechter, R. (1999). Bucket elimination: A unifying framework for reasoning. Artificial Intelligence, 113, 41–85.

  7. Diez, F. (1996). Local conditioning in Bayesian networks. Artificial Intelligence, 87, 1–20.

  8. Faÿ, A., & Jaffray, J.-Y. (2000). A justification of local conditioning in Bayesian networks. International Journal of Approximate Reasoning, 24(1), 59–81.

  9. Getoor, L., Friedman, N., Koller, D., Pfeffer, A., & Taskar, B. (2007). Probabilistic relational models. In L. Getoor & B. Taskar (Eds.), Introduction to statistical relational learning (Chap. 5). Cambridge: MIT Press.

  10. Getoor, L., & Taskar, B. (2007). Introduction to statistical relational learning. Cambridge: MIT Press.

  11. Gonzales, C., Mellouli, K., & Mourali, O. (2007). On directed and undirected propagation algorithms for Bayesian networks. In Proceedings of ECSQARU, Lecture Notes in Artificial Intelligence (Vol. 4724, pp. 598–610). Springer.

  12. Heckerman, D. (1996). A tutorial on learning with Bayesian networks. Technical Report MSR-TR-95-06, Microsoft Research, Advanced Technology Division.

  13. Jaeger, M. (1997). Relational Bayesian networks. In Proceedings of UAI (pp. 266–273).

  14. Jensen, F. (1996). An introduction to Bayesian networks. London: Taylor and Francis.

  15. Jensen, F., Lauritzen, S., & Olesen, K. (1990). Bayesian updating in causal probabilistic networks by local computations. Computational Statistics Quarterly, 4, 269–282.

  16. Kjærulff, U. (1990). Triangulation of graphs: Algorithms giving small total state space. Technical Report R-90-09, Department of Mathematics and Computer Science, Aalborg University.

  17. Kjærulff, U., & Madsen, A. (2008). Bayesian networks and influence diagrams: A guide to construction and analysis. Information Science and Statistics. New York: Springer.

  18. Laskey, K. (2008). MEBN: A language for first-order Bayesian knowledge bases. Artificial Intelligence, 172(2–3), 140–178.

  19. Lauritzen, S. (1992). Propagation of probabilities, means and variances in mixed graphical association models. Journal of the American Statistical Association, 87, 1098–1108.

  20. Madsen, A., & Jensen, F. (1999). Lazy propagation: A junction tree inference algorithm based on lazy evaluation. Artificial Intelligence, 113(1–2), 203–245.

  21. Mahoney, S., & Laskey, K. (1996). Network engineering for complex belief networks. In Proceedings of UAI.

  22. Naïm, P., Wuillemin, P.-H., Leray, P., & Pourret, O. (2007). Réseaux Bayésiens [Bayesian networks] (3rd ed.). Eyrolles.

  23. Nilsson, D. (1998). An efficient algorithm for finding the M most probable configurations in probabilistic expert systems. Statistics and Computing, 8(2), 159–173.

  24. Park, J., & Darwiche, A. (2003). Solving MAP exactly using systematic search. In Proceedings of UAI (pp. 459–468).

  25. Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Francisco: Morgan Kaufmann.

  26. Peot, M., & Shachter, R. (1991). Fusion and propagation with multiple observations in belief networks. Artificial Intelligence, 48, 299–318.

  27. Pfeffer, A. (2000). Probabilistic reasoning for complex systems. PhD thesis, Stanford University.

  28. Pfeffer, A., Koller, D., Milch, B., & Takusagawa, K. (1999). SPOOK: A system for probabilistic object-oriented knowledge representation. In Proceedings of UAI (pp. 541–550).

  29. Rose, D., Tarjan, R., & Lueker, G. (1976). Algorithmic aspects of vertex elimination on graphs. SIAM Journal on Computing, 5, 266–283.

  30. Shachter, R., Andersen, S., & Szolovits, P. (1994). Global conditioning for probabilistic inference in belief networks. In Proceedings of UAI.

  31. Shafer, G. (1996). Probabilistic expert systems. Philadelphia: SIAM.

  32. Shenoy, P. (1997). Binary join trees for computing marginals in the Shenoy-Shafer architecture. International Journal of Approximate Reasoning, 17(1), 1–25.

  33. Shoikhet, K., & Geiger, D. (1997). Finding optimal triangulations via minimal vertex separators. In Proceedings of AAAI.

  34. Sun, X., Druzdzel, M., & Yuan, C. (2007). Dynamic weighted A* search-based MAP algorithm for Bayesian networks. In Proceedings of IJCAI (pp. 2385–2390).

  35. van den Eijkhof, F., & Bodlaender, H. (2002). Safe reduction rules for weighted treewidth. In Proceedings of WG, LNCS (Vol. 2573, pp. 176–185). Springer.

  36. Yuan, C., & Hansen, E. (2009). Efficient computation of jointree bounds for systematic MAP search. In Proceedings of IJCAI (pp. 1962–1969).

  37. Zhang, N., & Poole, D. (1994). A simple approach to Bayesian network computation. In Proceedings of the 10th Canadian Conference on Artificial Intelligence (pp. 16–22).

Author information

Correspondence to Christophe Gonzales.

Cite this article

Gonzales, C., & Wuillemin, P.-H. PRM inference using Jaffray & Faÿ's Local Conditioning. Theory and Decision 71, 33–62 (2011). https://doi.org/10.1007/s11238-010-9219-2

Keywords

  • Bayesian networks
  • Probabilistic relational models
  • Lifted inference
  • Structured Variable Elimination
  • Local Conditioning