Online Bayesian Inference for the Parameters of PRISM Programs

  • James Cussens
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7207)

Abstract

This paper presents a method for approximating posterior distributions over the parameters of a given PRISM program. A sequential approach is taken where the distribution is updated one datapoint at a time. This makes it applicable to online learning situations where data arrives over time. The method is applicable whenever the prior is a mixture of products of Dirichlet distributions. In this case the true posterior will be a mixture of very many such products. An approximation is effected by merging products of Dirichlet distributions. An analysis of the quality of the approximation is presented. Due to the heavy computational burden of this approach, the method has been implemented in the Mercury logic programming language. Initial results using a hidden Markov model are presented.
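To make the abstract's procedure concrete, the following Python sketch (hypothetical code, not the paper's Mercury implementation) illustrates the two moving parts for a single PRISM switch with a Dirichlet prior: the exact sequential update, in which every explanation of a new datapoint contributes a count vector over switch outcomes and so multiplies the number of mixture components, and a merge step that bounds the mixture size. The merge here greedily collapses the pair of Dirichlets closest in symmetrised KL divergence, using the KL formula from Penny (2001) and a moment-matching heuristic in the spirit of Cowell (1998); this is an illustrative choice, not necessarily the paper's exact rule. The paper's posterior is a mixture of products of Dirichlets, one factor per switch; the sketch keeps a single factor for brevity.

```python
# Sketch: online updating of a mixture-of-Dirichlets posterior for one
# PRISM switch, with merging to bound the mixture size. Illustrative
# only; the paper's implementation is in Mercury and handles products
# of Dirichlets (one factor per switch).
import numpy as np
from scipy.special import gammaln, digamma

def log_marginal(alpha, counts):
    """Log marginal likelihood of an explanation's outcome counts under
    Dirichlet(alpha): log B(alpha + counts) - log B(alpha)."""
    return (gammaln(alpha.sum()) - gammaln(alpha.sum() + counts.sum())
            + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

def kl_dirichlet(a, b):
    """KL(Dir(a) || Dir(b)); see Penny (2001) for the derivation."""
    a0 = a.sum()
    return (gammaln(a0) - gammaln(b.sum())
            - np.sum(gammaln(a) - gammaln(b))
            + np.sum((a - b) * (digamma(a) - digamma(a0))))

def update(mixture, explanations):
    """One online Bayes step. Each explanation of the datapoint carries
    a count vector, so every (component, explanation) pair spawns a new
    component: the exact mixture grows multiplicatively."""
    new = [(w + log_marginal(alpha, c), alpha + c)
           for w, alpha in mixture for c in explanations]
    logz = np.logaddexp.reduce([w for w, _ in new])
    return [(w - logz, a) for w, a in new]

def merge_pair(w1, a1, w2, a2):
    """Collapse two weighted Dirichlets into one by matching the
    weighted mean; taking the weighted average of the two precisions
    (alpha sums) is a heuristic assumption, not an exact rule."""
    p1, p2 = np.exp(w1), np.exp(w2)
    mean = (p1 * a1 / a1.sum() + p2 * a2 / a2.sum()) / (p1 + p2)
    prec = (p1 * a1.sum() + p2 * a2.sum()) / (p1 + p2)
    return np.logaddexp(w1, w2), mean * prec

def reduce_mixture(mixture, k):
    """Greedily merge the pair of components whose Dirichlets are
    closest in symmetrised KL until at most k components remain."""
    mix = list(mixture)
    while len(mix) > k:
        i, j = min(((i, j) for i in range(len(mix))
                    for j in range(i + 1, len(mix))),
                   key=lambda ij: kl_dirichlet(mix[ij[0]][1], mix[ij[1]][1])
                                + kl_dirichlet(mix[ij[1]][1], mix[ij[0]][1]))
        merged = merge_pair(*mix[i], *mix[j])
        mix = [c for t, c in enumerate(mix) if t not in (i, j)] + [merged]
    return mix

# Toy run: a 3-valued switch, two possible explanations per datapoint.
mixture = [(0.0, np.ones(3))]            # (log weight, Dirichlet alphas)
for _ in range(5):
    explanations = [np.array([1, 0, 0]), np.array([0, 1, 1])]
    mixture = reduce_mixture(update(mixture, explanations), k=4)
print([(np.exp(w), a) for w, a in mixture])
```

Without the reduction step the mixture would have 2^5 = 32 components after five datapoints; capping it at k trades exactness for a bounded per-step cost, which is the trade-off whose approximation quality the paper analyses.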

Keywords

Posterior distribution, Logic program, Latent Dirichlet allocation, Dirichlet distribution, Ground instance

References

  1. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993–1022 (2003)
  2. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley (1991)
  3. Cowell, R.G., Dawid, A.P., Sebastiani, P.: A comparison of sequential learning methods for incomplete data. In: Bernardo, J.M., Berger, J., Dawid, A.P., Smith, A.F.M. (eds.) Bayesian Statistics, vol. 5, pp. 533–541. Clarendon Press, Oxford (1995)
  4. Cowell, R.G.: Mixture reduction via predictive scores. Statistics and Computing 8, 97–103 (1998)
  5. Cowell, R.G., Dawid, A.P., Lauritzen, S.L., Spiegelhalter, D.J.: Probabilistic Networks and Expert Systems. Springer, New York (1999)
  6. Penny, W.D.: KL-divergences of Normal, Gamma, Dirichlet and Wishart densities. Technical report, University College London (2001)
  7. R Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2011). ISBN 3-900051-07-0
  8. Sato, T., Kameya, Y.: Parameter learning of logic programs for symbolic-statistical modeling. Journal of Artificial Intelligence Research 15, 391–454 (2001)
  9. Somogyi, Z., Henderson, F., Conway, T.: The execution algorithm of Mercury: an efficient purely declarative logic programming language. Journal of Logic Programming 29(1–3), 17–64 (1996)
  10. West, M.: Modelling with mixtures. In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (eds.) Bayesian Statistics, vol. 4, pp. 503–524. Clarendon Press, Oxford (1992)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • James Cussens¹
  1. Dept of Computer Science & York Centre for Complex Systems Analysis, University of York, Deramore Lane, York, UK