
A Parallel Algorithm for Learning Bayesian Networks

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4426)

Abstract

Computing the expected statistics is the main bottleneck in learning Bayesian networks in large-scale problem domains. This paper presents PL-SEM, a parallel algorithm for learning Bayesian networks based on an existing structural EM algorithm (SEM). Since the expected statistics are computed in the parametric-learning part of SEM, PL-SEM exploits a parallel EM algorithm that parallelizes both the E-step and the M-step. In the E-step, PL-SEM computes the expected statistics of each sample in parallel; in the M-step, using the conditional independencies encoded by the Bayesian network and the expected statistics from the E-step, it exploits the decomposability of the complete-data likelihood to estimate each local likelihood function in parallel. PL-SEM thus computes the expected statistics efficiently and greatly reduces the time complexity of learning Bayesian networks.
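The abstract describes the two parallel phases only at a high level. The following is a minimal, hypothetical sketch of that idea in Python, not the authors' implementation: the network (a single arc X -> Y with binary variables, where X may be missing), the function names (expected_counts, m_step, parallel_em), and the use of concurrent.futures are all assumptions made for illustration. The E-step accumulates expected sufficient statistics over data shards in parallel; the M-step then estimates each local conditional distribution independently from the pooled counts, mirroring the decomposition of the complete-data likelihood.

    # Hypothetical sketch of a parallel E-step / M-step, NOT the paper's PL-SEM code.
    from concurrent.futures import ProcessPoolExecutor
    from collections import Counter

    def expected_counts(shard, theta):
        """E-step on one data shard: expected sufficient statistics for X -> Y."""
        px, py_given_x = theta                      # P(X=1) and P(Y=1 | X=x)
        counts = Counter()                          # keys: ('x', x) and ('y', x, y)
        for x, y in shard:
            if x is None:                           # X hidden: posterior P(X | y)
                p1 = px * (py_given_x[1] if y else 1.0 - py_given_x[1])
                p0 = (1.0 - px) * (py_given_x[0] if y else 1.0 - py_given_x[0])
                w1 = p1 / (p1 + p0)
                weights = {0: 1.0 - w1, 1: w1}
            else:                                   # X observed: hard count
                weights = {x: 1.0}
            for xv, w in weights.items():           # fractional (expected) counts
                counts[('x', xv)] += w
                counts[('y', xv, y)] += w
        return counts

    def m_step(counts):
        """M-step: each local CPT is estimated independently from pooled counts."""
        n1, n0 = counts[('x', 1)], counts[('x', 0)]
        px = n1 / (n1 + n0)
        py_given_x = {xv: counts[('y', xv, 1)] / (counts[('x', xv)] or 1.0)
                      for xv in (0, 1)}
        return px, py_given_x

    def parallel_em(data, theta, n_shards=4, iters=20):
        """Run EM, computing the expected statistics of the shards in parallel."""
        shards = [data[i::n_shards] for i in range(n_shards)]
        with ProcessPoolExecutor(max_workers=n_shards) as pool:
            for _ in range(iters):
                partials = pool.map(expected_counts, shards, [theta] * n_shards)
                total = Counter()
                for c in partials:
                    total.update(c)                 # pool the expected statistics
                theta = m_step(total)
        return theta

    if __name__ == "__main__":
        # Toy data: pairs (x, y); None marks a missing value of X.
        data = [(1, 1), (None, 1), (0, 0), (None, 0), (1, 1), (0, 0)]
        print(parallel_em(data, theta=(0.5, {0: 0.3, 1: 0.7})))

In a real network the same pattern applies per family: each worker returns expected counts for every (node, parent-configuration) pair on its shard, and each local CPT is re-estimated from the pooled counts without reference to the others.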


Editor information

Zhi-Hua Zhou, Hang Li, Qiang Yang


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Yu, K., Wang, H., Wu, X. (2007). A Parallel Algorithm for Learning Bayesian Networks. In: Zhou, Z.-H., Li, H., Yang, Q. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2007. Lecture Notes in Computer Science, vol 4426. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-71701-0_119

  • DOI: https://doi.org/10.1007/978-3-540-71701-0_119

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-71700-3

  • Online ISBN: 978-3-540-71701-0

  • eBook Packages: Computer Science (R0)
