Learning Mixtures of Polynomials of Conditional Densities from Data

  • Pedro L. López-Cruz
  • Thomas D. Nielsen
  • Concha Bielza
  • Pedro Larrañaga
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8109)

Abstract

Mixtures of polynomials (MoPs) are a non-parametric density estimation technique for hybrid Bayesian networks with continuous and discrete variables. We propose two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and of the marginal density of the conditioning variables, but they differ in how the MoP approximation of the quotient of the two densities is found. We illustrate the methods using data sampled from a simple Gaussian Bayesian network, and we study and compare their performance with that of an existing approach for learning mixtures of truncated basis functions from data.
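The quotient idea described in the abstract can be illustrated with a minimal sketch: fit a bivariate polynomial to an empirical estimate of the joint density by least squares, obtain the marginal of the conditioning variable by integrating that polynomial, and evaluate the conditional density as their ratio. This is an illustrative simplification, not the authors' actual learning procedure (which builds proper MoP approximations); the grid resolution, polynomial degree, and histogram-based density estimate are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample from a simple Gaussian Bayesian network: X ~ N(0,1), Y | X ~ N(0.5*X, 1)
x = rng.normal(0.0, 1.0, 5000)
y = 0.5 * x + rng.normal(0.0, 1.0, 5000)

# Empirical joint density on a grid via a normalized 2D histogram
H, xe, ye = np.histogram2d(x, y, bins=20, density=True)
xc = 0.5 * (xe[:-1] + xe[1:])   # bin centers in x
yc = 0.5 * (ye[:-1] + ye[1:])   # bin centers in y
XX, YY = np.meshgrid(xc, yc, indexing="ij")

# Least-squares fit of a bivariate polynomial (degree 4 in each variable)
deg = 4
terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1)]
A = np.column_stack([XX.ravel() ** i * YY.ravel() ** j for i, j in terms])
coef, *_ = np.linalg.lstsq(A, H.ravel(), rcond=None)

def joint_poly(px, py):
    """Polynomial approximation of the joint density f(x, y)."""
    return sum(c * px ** i * py ** j for c, (i, j) in zip(coef, terms))

def marginal_poly(px):
    """Marginal f(x), obtained by numerically integrating the joint over y."""
    return np.trapz(joint_poly(px, yc), yc)

def cond_density(py, px):
    """Conditional f(y | x) as the quotient of the two approximations."""
    return joint_poly(px, py) / marginal_poly(px)

# The conditional at x = 0 should integrate to 1 over the y-grid by construction
mass = np.trapz(cond_density(yc, 0.0), yc)
print(round(mass, 2))  # → 1.0
```

In the paper the quotient itself is re-approximated by a MoP (the two proposed methods differ precisely in how that is done), whereas this sketch simply evaluates the ratio pointwise.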

Keywords

Hybrid Bayesian networks · Conditional density estimation · Mixtures of polynomials



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Pedro L. López-Cruz (1)
  • Thomas D. Nielsen (2)
  • Concha Bielza (1)
  • Pedro Larrañaga (1)
  1. Department of Artificial Intelligence, Universidad Politécnica de Madrid, Spain
  2. Department of Computer Science, Aalborg University, Denmark
