EM Algorithm for Symmetric Causal Independence Models

  • Rasa Jurgelenaite
  • Tom Heskes
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4212)


Causal independence modelling is a well-known method both for reducing the size of probability tables and for explaining the underlying mechanisms in Bayesian networks. In this paper, we present an EM algorithm for learning the parameters of causal independence models based on a symmetric Boolean interaction function. The algorithm makes it possible to assess the practical usefulness of symmetric causal independence models, which has not been done previously. We evaluate the classification performance of symmetric causal independence models learned with the presented EM algorithm. The results show that these models are competitive with noisy OR and noisy AND models as well as other state-of-the-art classifiers.
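To make the model class concrete: in a symmetric causal independence model, the effect depends on the hidden cause variables only through how many of them fire, so the effect probability can be computed from the Poisson binomial distribution of that count. The sketch below is our own illustration under that reading of the abstract, not the authors' implementation; all names are ours. Noisy OR and noisy AND arise as the special cases where the symmetric interaction function is "at least one" and "all", respectively.

```python
import numpy as np

def count_distribution(p):
    """Poisson binomial distribution of the number of firing hidden causes.

    p[i] is the probability that hidden cause variable H_i fires.
    Returns q with q[k] = P(sum_i H_i = k), computed by the standard
    dynamic-programming recursion over the causes.
    """
    q = np.array([1.0])
    for pi in p:
        new = np.zeros(len(q) + 1)
        new[:-1] += q * (1.0 - pi)  # case H_i = 0: count unchanged
        new[1:]  += q * pi          # case H_i = 1: count increases by one
        q = new
    return q

def effect_probability(p, f):
    """P(effect) for a symmetric interaction function f.

    f(k) is True iff the effect occurs when exactly k hidden causes fire;
    symmetry means f depends only on the count k, not on which causes fire.
    """
    q = count_distribution(p)
    return sum(q[k] for k in range(len(q)) if f(k))

# Noisy OR: the effect occurs if at least one hidden cause fires.
print(effect_probability([0.5, 0.5], lambda k: k >= 1))  # 0.75 = 1 - 0.5 * 0.5
# Noisy AND: the effect occurs only if all hidden causes fire.
print(effect_probability([0.5, 0.5], lambda k: k >= 2))  # 0.25
```

The dynamic-programming recursion keeps the computation polynomial in the number of causes, avoiding the exponential sum over all hidden-cause configurations.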


Keywords: Conditional Probability, Bayesian Network, Boolean Function, Hidden Variable, Interaction Function



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Rasa Jurgelenaite
  • Tom Heskes
  1. Institute for Computing and Information Sciences, Radboud University Nijmegen, Nijmegen, The Netherlands
