A Graphical Meta-Model for Reasoning about Bayesian Network Structure

  • Luis M. de Campos
  • José A. Gámez
  • J. Miguel Puerta
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 146)

Abstract

When the amount of available data is small relative to the problem size, many Bayesian networks can account for the data almost equally well. In such cases, model averaging offers a framework that allows better predictions than model selection. Selective model averaging directly uses a subset of these networks, carrying a high probability mass, to reason about the probability of the structural features (arcs) that can appear in the network model of the problem domain. In this paper, instead of using this subset of networks to reason about the domain network structure, we propose to use the probability distribution over the structural features induced by these networks to learn a new Bayesian network whose variables are the structural features that can appear in the problem domain network. This network can be considered a higher-level model, or meta-model, because it can be seen as an approximation of the whole space of Bayesian networks defined over the problem domain variables. This meta-model (itself a Bayesian network) can be used to reason about the probability of structural features, as selective model averaging does, but also as a decision-support tool for an expert in the problem domain.
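The selective model averaging step described above can be sketched as follows: each high-scoring network is weighted by its normalized posterior score, and the probability of each structural feature (arc) is the total weight of the networks containing it. This is a minimal illustrative sketch, not the chapter's implementation; the function name, the DAG representation (a set of parent-child pairs), and the toy scores are all assumptions made for the example.

```python
import math
from collections import defaultdict

def arc_probabilities(dags, log_scores):
    """Selective model averaging over a small set of high-scoring DAGs.

    Each DAG is given as a set of (parent, child) arc pairs; log_scores
    are their (unnormalized) log posterior scores. Returns, for every
    arc seen, the total posterior weight of the DAGs containing it.
    """
    # Normalize the log scores into posterior weights (log-sum-exp trick).
    m = max(log_scores)
    weights = [math.exp(s - m) for s in log_scores]
    z = sum(weights)
    weights = [w / z for w in weights]

    # Accumulate the weight of every DAG in which each arc appears.
    probs = defaultdict(float)
    for dag, w in zip(dags, weights):
        for arc in dag:
            probs[arc] += w
    return dict(probs)

# Toy example: three DAGs over variables A, B, C with made-up scores.
dags = [{("A", "B"), ("B", "C")},
        {("A", "B"), ("A", "C")},
        {("B", "A"), ("B", "C")}]
log_scores = [-10.0, -10.5, -11.0]
p = arc_probabilities(dags, log_scores)
```

The resulting distribution over arcs is exactly the kind of structural-feature information that, in the chapter's proposal, serves as input for learning the meta-model rather than being reported directly.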

Keywords

  • Bayesian Network
  • Directed Acyclic Graph
  • Problem Domain
  • Variable Neighbourhood Search
  • Markov Chain Monte Carlo Method



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Luis M. de Campos (1)
  • José A. Gámez (2)
  • J. Miguel Puerta (2)
  1. Dpto. de Ciencias de la Computación e I.A., ETSII, Universidad de Granada, Granada, Spain
  2. Dpto. de Informática, EPSA, Universidad de Castilla-La Mancha, Albacete, Spain
