
Causal Graphical Models with Latent Variables: Learning and Inference

  • Philippe Leray
  • Stijn Meganck
  • Sam Maes
  • Bernard Manderick
Part of the Studies in Computational Intelligence book series (SCI, volume 156)

Abstract

This chapter discusses causal graphical models for discrete variables that can handle latent variables without modeling them explicitly in quantitative terms. In the uncertainty-in-artificial-intelligence community, several paradigms exist for such problem domains; two of them are semi-Markovian causal models and maximal ancestral graphs. Applying these techniques to a problem domain typically consists of several steps: structure learning from observational and experimental data, parameter learning, probabilistic inference, and quantitative causal inference.
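The last of these steps, quantitative causal inference, can be made concrete with a small example. The sketch below is an illustration of the general idea, not code from the chapter: on a toy discrete model X → Z → Y with a latent confounder U of X and Y, Pearl's front-door adjustment recovers the interventional distribution P(y | do(x)) from purely observational quantities, without ever modeling U quantitatively. All variable names and probability tables are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the chapter): front-door adjustment
# on a discrete model with a latent confounder.
import itertools

# Ground truth, used only to generate the observational distribution:
# binary U (latent confounder of X and Y), with edges U -> X, U -> Y, X -> Z, Z -> Y.
P_U = {0: 0.6, 1: 0.4}
P_X_given_U = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
P_Z_given_X = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
P_Y_given_ZU = {(0, 0): {0: 0.7, 1: 0.3}, (0, 1): {0: 0.4, 1: 0.6},
                (1, 0): {0: 0.2, 1: 0.8}, (1, 1): {0: 0.1, 1: 0.9}}

# Observational joint P(x, z, y) with U marginalised out -- all a learner sees,
# since U is never measured.
P_obs = {}
for x, z, y in itertools.product((0, 1), repeat=3):
    P_obs[(x, z, y)] = sum(P_U[u] * P_X_given_U[u][x] * P_Z_given_X[x][z]
                           * P_Y_given_ZU[(z, u)][y] for u in (0, 1))

def p(**fixed):
    """Marginal of the observed variables, computed from P_obs."""
    names = ("x", "z", "y")
    return sum(pr for vals, pr in P_obs.items()
               if all(vals[names.index(k)] == v for k, v in fixed.items()))

def front_door(y, do_x):
    """P(y | do(x)) from observational quantities only (front-door formula)."""
    return sum((p(x=do_x, z=z) / p(x=do_x))
               * sum((p(x=xp, z=z, y=y) / p(x=xp, z=z)) * p(x=xp) for xp in (0, 1))
               for z in (0, 1))

def ground_truth(y, do_x):
    """P(y | do(x)) computed directly in the ground-truth model, for comparison."""
    return sum(P_U[u] * P_Z_given_X[do_x][z] * P_Y_given_ZU[(z, u)][y]
               for u in (0, 1) for z in (0, 1))

for do_x in (0, 1):
    print(f"P(Y=1 | do(X={do_x})): front-door {front_door(1, do_x):.4f}, "
          f"truth {ground_truth(1, do_x):.4f}")
```

In this graph the front-door criterion holds (Z intercepts the only directed path from X to Y, and X blocks the back-door path from Z to Y), so the two printed values coincide; the same kind of identification question, posed for general graphs with latent variables, is what the chapter's causal-inference step addresses.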

Keywords

Latent Variable, Bayesian Network, Causal Inference, Joint Probability Distribution

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Philippe Leray (1)
  • Stijn Meganck (2)
  • Sam Maes (3)
  • Bernard Manderick (2)

  1. LINA Computer Science Lab (UMR 6241), Knowledge and Decision Team, Université de Nantes, France
  2. Computational Modeling Lab, Belgium
  3. LITIS Computer Science, Information Processing and Systems Lab (EA 4108), INSA Rouen, France
