Learning belief network structure from data under causal insufficiency
Hidden variables are a well-known source of disturbance when recovering belief networks from data restricted to measurable variables. Hence, models assuming the existence of hidden variables are under development. This paper presents a new algorithm exploiting the results of the known CI algorithm of Spirtes, Glymour and Scheines [4]. The CI algorithm produces a partial causal structure from data, indicating common unmeasured causes for some variables. We claim that there exist belief network models which (1) have connections identical with those of the CI output, (2) have edge orientations identical with those of CI, (3) have no latent variables other than those indicated by CI, and (4) at the same time fit the data. We present a non-deterministic algorithm generating the whole family of such belief networks.
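The idea of generating a family of belief networks from a partially oriented structure can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: it assumes the CI output is summarized as three edge sets (fully oriented arcs, bidirected edges standing for a latent common cause, and edges of undetermined direction), and it exhaustively tries both directions for each undetermined edge, keeping only acyclic candidates. The hidden-node names `H0`, `H1`, … are introduced here for illustration.

```python
from itertools import product

def is_acyclic(nodes, arcs):
    """Check acyclicity of a directed graph via Kahn's topological sort."""
    indeg = {n: 0 for n in nodes}
    for _, b in arcs:
        indeg[b] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for a, b in arcs:
            if a == n:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

def enumerate_dags(nodes, oriented, bidirected, unoriented):
    """Yield every DAG over `nodes` (plus fresh hidden nodes) that keeps
    the oriented arcs, models each bidirected edge a<->b with a hidden
    common cause h->a, h->b, and tries both directions for each
    unoriented edge, discarding cyclic candidates."""
    hidden_nodes, hidden_arcs = [], []
    for i, (a, b) in enumerate(bidirected):
        h = f"H{i}"  # fresh latent variable for this bidirected edge
        hidden_nodes.append(h)
        hidden_arcs += [(h, a), (h, b)]
    for choice in product([0, 1], repeat=len(unoriented)):
        arcs = list(oriented) + hidden_arcs
        arcs += [(a, b) if c == 0 else (b, a)
                 for (a, b), c in zip(unoriented, choice)]
        if is_acyclic(nodes + hidden_nodes, arcs):
            yield sorted(arcs)
```

For example, with measured variables A, B, C, an oriented arc A→C, a latent common cause behind B↔C, and an undirected edge A—B, the sketch yields two candidate networks (A→B and B→A), each containing the latent node H0 with arcs to B and C. The actual algorithm in the paper is non-deterministic rather than exhaustive, and additionally requires each candidate to fit the data.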
- 1. Cooper, G.F., Herskovits, E.: A Bayesian method for the induction of probabilistic networks from data, Machine Learning 9 (1992), 309–347.
- 2. Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Mateo, CA, 1988.
- 3. Pearl, J., Verma, T.: A theory of inferred causation, in: Principles of Knowledge Representation and Reasoning, Proc. of the Second International Conference, Cambridge, Massachusetts, April 22–25, 1991, Allen, J., Fikes, R., Sandewall, E., Eds., Morgan Kaufmann, San Mateo, CA, 441–452.
- 4. Spirtes, P., Glymour, C., Scheines, R.: Causation, Prediction and Search, Lecture Notes in Statistics 81, Springer-Verlag, 1993.
- 5. Verma, T., Pearl, J.: Equivalence and synthesis of causal models, Proc. 6th Conference on Uncertainty in AI, 220–227, 1990.