A Direct Measure for the Efficacy of Bayesian Network Structures Learned from Data

  • Gary F. Holness
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4571)

Abstract

Current metrics for evaluating the performance of Bayesian network structure learning include order statistics of the data likelihood of learned structures, the average data likelihood, and the average convergence time. In this work, we define a new metric that directly measures a structure learning algorithm’s ability to correctly model causal associations among variables in a data set. By treating membership in a Markov blanket as a retrieval problem, we use ROC analysis to compute a structure learning algorithm’s efficacy in capturing causal associations at varying strengths. Because our metric moves beyond error rate and data likelihood to measure stability, it gives a better characterization of structure learning performance. Because the structure learning problem is NP-hard, practical algorithms are either heuristic or approximate. For this reason, an understanding of a structure learning algorithm’s stability and boundary-value conditions is necessary. We contribute to the state of the art in the data mining community with a new tool for understanding the behavior of structure learning techniques.
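
To illustrate the retrieval view described in the abstract, the sketch below (a minimal illustration, not the authors' implementation; the function names and the toy three-node network are hypothetical) treats "X belongs to the Markov blanket of T" in a learned DAG as a retrieved item, scores it against the ground-truth network, and reports the true and false positive rates that a single ROC point summarizes. Sweeping the strength of the generating associations, as the paper does, would trace out the full curve.

from itertools import product

def markov_blanket(node, parents):
    # Markov blanket of `node` in a DAG given as {child: set of parent names}:
    # its parents, its children, and its children's other parents (spouses).
    children = {c for c, ps in parents.items() if node in ps}
    spouses = {p for c in children for p in parents[c]} - {node}
    return parents.get(node, set()) | children | spouses

def blanket_retrieval_rates(true_dag, learned_dag):
    # Treat "X is in the Markov blanket of T" as a retrieval decision and
    # return (true positive rate, false positive rate) over all ordered pairs.
    nodes = set(true_dag) | set(learned_dag)
    tp = fp = fn = tn = 0
    for t, x in product(nodes, repeat=2):
        if t == x:
            continue
        relevant = x in markov_blanket(t, true_dag)      # ground truth
        retrieved = x in markov_blanket(t, learned_dag)  # learned structure
        tp += relevant and retrieved
        fp += retrieved and not relevant
        fn += relevant and not retrieved
        tn += not relevant and not retrieved
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

# Toy example: true structure A -> B <- C versus a learned structure
# that misses the edge C -> B.
truth   = {"A": set(), "C": set(), "B": {"A", "C"}}
learned = {"A": set(), "C": set(), "B": {"A"}}
print(blanket_retrieval_rates(truth, learned))   # (0.33..., 0.0)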

Keywords

Ground Truth · Receiver Operating Characteristic Curve · Bayesian Network · True Positive Rate · Structure Learning

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Gary F. Holness
    1. Quantum Leap Innovations, 3 Innovation Way, Newark, DE 19711
