
Probabilistic Graphical Models and Markov Networks

  • Roberto Santana
  • Siddhartha Shakya
Part of the Adaptation, Learning, and Optimization book series (ALO, volume 14)

Abstract

This chapter introduces probabilistic graphical models and explains their use for modelling probabilistic relationships between variables in the context of optimisation with EDAs. We focus on Markov network models and review different algorithms used to learn and sample Markov networks. Other probabilistic graphical models are also reviewed and their differences from Markov networks are analysed.
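For orientation (this is standard background, not text reproduced from the chapter): a Markov network over variables x = (x_1, ..., x_n) factorises the joint distribution as a normalised product of non-negative potential functions, one per maximal clique of the undirected graph:

```latex
p(\mathbf{x}) \;=\; \frac{1}{Z} \prod_{c \in \mathcal{C}} \phi_c(\mathbf{x}_c),
\qquad
Z \;=\; \sum_{\mathbf{x}} \prod_{c \in \mathcal{C}} \phi_c(\mathbf{x}_c),
```

where \mathcal{C} is the set of maximal cliques, \phi_c is the potential over the variables \mathbf{x}_c in clique c, and Z is the partition function (normalising constant).

As a minimal sketch of one sampling approach reviewed for Markov networks, Gibbs sampling, the following code draws a configuration from a pairwise binary Markov network by single-site updates. The coupling matrix W and bias vector b are hypothetical parameters chosen purely for illustration, not values from the chapter.

```python
import numpy as np

def gibbs_sample(W, b, n_sweeps=500, rng=None):
    """Draw one configuration x in {0,1}^n from the pairwise Markov network
    p(x) proportional to exp(0.5 * x^T W x + b^T x), with W symmetric and
    zero on the diagonal, using single-site Gibbs updates."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(b)
    x = rng.integers(0, 2, size=n)                 # random initial state
    for _ in range(n_sweeps):
        for i in range(n):
            # The full conditional of x_i depends only on its neighbours in W.
            field = W[i] @ x + b[i]
            p1 = 1.0 / (1.0 + np.exp(-field))      # p(x_i = 1 | rest)
            x[i] = int(rng.random() < p1)
    return x

# Hypothetical 3-variable chain: positive couplings favour agreeing values.
W = np.array([[0.0, 1.5, 0.0],
              [1.5, 0.0, 1.5],
              [0.0, 1.5, 0.0]])
b = np.zeros(3)
print(gibbs_sample(W, b))
```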

Keywords

Bayesian Network, Maximal Clique, Variable Node, Factor Node, Factor Graph

Copyright information

© Springer Berlin Heidelberg 2012

Authors and Affiliations

  1. Intelligent Systems Group, Faculty of Informatics, University of the Basque Country (UPV/EHU), San Sebastián, Spain
  2. Business Modelling and Operational Transformation Practice, BT Innovate & Design, Ipswich, UK
