Update Rules for Parameter Estimation in Continuous Time Bayesian Network

  • Dongyu Shi
  • Jinyuan You
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4099)

Abstract

The continuous time Bayesian network (CTBN) is a kind of dynamic graphical model developed in recent years, which describes structured stochastic processes with finitely many states that evolve over continuous time. The parameters of each variable in the model represent a finite-state, continuous-time Markov process whose transition model is a function of its parents. This paper presents an algorithm for updating the parameters of an existing CTBN model with a set of data samples. It provides a unified framework covering both online parameter estimation and batch parameter updating, where a pre-accumulated set of samples is used. We analyze the algorithm under different conditions and demonstrate its performance in experiments.
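As a rough illustration of the quantities such an update rule manipulates (a sketch based on the CTBN learning literature cited below, not the authors' specific algorithm; the function names and the blending step size `eta` are illustrative assumptions):

```python
import numpy as np

# For a CTBN variable under a fixed parent instantiation u, the local model is a
# finite-state continuous-time Markov process with intensity matrix Q_u. Its
# maximum-likelihood estimate uses two sufficient statistics:
#   M[x, x'] - number of observed transitions x -> x' while the parents were in u
#   T[x]     - total time spent in state x while the parents were in u
# giving q[x, x'] = M[x, x'] / T[x] for x != x'.

def mle_intensity(M, T):
    """Maximum-likelihood intensity matrix from transition counts and dwell times."""
    n = M.shape[0]
    Q = np.zeros((n, n))
    for x in range(n):
        if T[x] > 0:
            Q[x] = M[x] / T[x]
        Q[x, x] = -Q[x].sum()  # diagonal: negative total rate of leaving x (M[x, x] = 0)
    return Q

def update_statistics(M_old, T_old, M_new, T_new, eta=0.1):
    """Blend accumulated statistics with statistics from new samples.

    eta = 1 discards history (pure re-estimation); a small eta yields a
    slowly adapting online estimator; choosing eta from the relative sample
    sizes recovers exact batch accumulation.
    """
    M = (1.0 - eta) * M_old + eta * M_new
    T = (1.0 - eta) * T_old + eta * T_new
    return M, T

# Toy example: a 2-state variable under one parent instantiation.
M_old = np.array([[0.0, 4.0], [2.0, 0.0]])   # observed transition counts
T_old = np.array([8.0, 4.0])                 # total dwell time per state
Q = mle_intensity(M_old, T_old)              # q01 = 4/8 = 0.5, q10 = 2/4 = 0.5
```

Updating the statistics rather than the intensities directly keeps the estimate a proper intensity matrix after every step, whether samples arrive one at a time or as a pre-accumulated batch.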

Keywords

Markov Process · Bayesian Network · Multinomial Distribution · Update Rule · Structure · State Space


References

  1. Aoki, M.: State Space Modeling of Time Series. Springer, Heidelberg (1987)
  2. Bauer, E., Koller, D., Singer, Y.: Update rules for parameter estimation in Bayesian networks. In: Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence (UAI 1997), pp. 3–13 (1997)
  3. Blossfeld, H.-P., Rohwer, G.: Techniques of Event History Modeling. Lawrence Erlbaum Associates, Mahwah (1995)
  4. Cohen, I., Bronstein, A.: Online Learning of Bayesian Networks. HP Labs Tech. Report HPL-2001-55TR1 (2001)
  5. Dean, T., Kanazawa, K.: A model for reasoning about persistence and causation. Computational Intelligence 5, 142–150 (1989)
  6. Helmbold, D.P., Schapire, R.E., Singer, Y., Warmuth, M.K.: A comparison of new and old algorithms for a mixture estimation problem. Machine Learning 27, 97–119 (1997)
  7. Jordan, M.I., Weiss, Y.: Graphical models: probabilistic inference. In: Arbib, M. (ed.) The Handbook of Brain Theory and Neural Networks, 2nd edn. MIT Press, Cambridge (2002)
  8. Lando, D.: On Cox processes and credit risky securities. Review of Derivatives Research 2, 99–120 (1998)
  9. Nodelman, U., Shelton, C.R., Koller, D.: Continuous time Bayesian networks. In: Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (UAI 2002), pp. 378–387 (2002)
  10. Nodelman, U., Shelton, C.R., Koller, D.: Learning continuous time Bayesian networks. In: Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence (UAI 2003), pp. 451–458 (2003)
  11. Nodelman, U., Shelton, C.R., Koller, D.: Expectation maximization and complex duration distributions for continuous time Bayesian networks. In: Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI 2005) (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Dongyu Shi (1)
  • Jinyuan You (1)

  1. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, P.R. China
