Most Relevant Explanation: computational complexity and approximation methods



Most Relevant Explanation (MRE) is the problem of finding a partial instantiation of a set of target variables that maximizes the generalized Bayes factor (GBF) as the explanation for given evidence in a Bayesian network. MRE has a huge solution space and is extremely difficult to solve in large Bayesian networks. In this paper, we first prove that MRE is NP-hard. We then define a subproblem of MRE, called MRE\(_k\), that finds the most relevant k-ary explanation, and prove that the decision problem of MRE\(_k\) is \(NP^{\it PP}\)-complete. Since MRE finds the best solution over MRE\(_k\) for all k, and we can also show that MRE is in \(NP^{\it PP}\), we conjecture that the decision problem of MRE is \(NP^{\it PP}\)-complete as well. Furthermore, we show that MRE remains in \(NP^{\it PP}\) even if the number of target variables is restricted to within a log factor of the number of all unobserved variables. These complexity results prompt us to develop a suite of approximation algorithms for solving MRE. One algorithm finds an MRE solution by integrating reversible-jump MCMC and simulated annealing to simulate a non-homogeneous Markov chain that eventually concentrates its mass on the mode of a distribution of the GBF scores of all solutions. The other algorithms are instances of local search methods, including forward search, backward search, and tabu search. We tested these algorithms on a set of benchmark diagnostic Bayesian networks. Our empirical results show that these methods efficiently found optimal MRE solutions for most of the test cases in our experiments.
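The GBF objective and the forward local search mentioned above can be sketched as follows. This is a minimal illustration only: the toy three-variable network, its parameters, and all function names are assumptions made for the example, not artifacts of the paper; the paper's algorithms operate on much larger networks with exact or approximate inference in place of brute-force enumeration.

```python
from itertools import product

# Toy binary network with factorization P(A)P(B)P(E|A,B);
# the targets are A and B, the evidence variable is E.
p_a = {0: 0.6, 1: 0.4}                    # P(A)
p_b = {0: 0.7, 1: 0.3}                    # P(B)
p_e1 = {(0, 0): 0.1, (0, 1): 0.5,
        (1, 0): 0.6, (1, 1): 0.9}         # P(E=1 | A, B)

def joint(a, b, e):
    """Full joint P(A=a, B=b, E=e) under the factorization above."""
    pe = p_e1[(a, b)]
    return p_a[a] * p_b[b] * (pe if e == 1 else 1.0 - pe)

def prob(assign):
    """Marginal probability of a partial assignment, by brute-force enumeration."""
    total = 0.0
    for a, b, e in product((0, 1), repeat=3):
        full = {'A': a, 'B': b, 'E': e}
        if all(full[v] == val for v, val in assign.items()):
            total += joint(a, b, e)
    return total

def gbf(h, e):
    """Generalized Bayes factor GBF(h; e) = P(e|h) / P(e|not-h), where not-h
    ranges over the other instantiations of h's variables."""
    p_h, p_e = prob(h), prob(e)
    p_he = prob({**h, **e})
    return (p_he / p_h) / ((p_e - p_he) / (1.0 - p_h))

def forward_search(targets, e):
    """Greedy forward search: repeatedly extend the partial instantiation
    with the variable/value pair that most improves the GBF; stop when no
    single extension improves the score."""
    h, best = {}, float('-inf')
    while True:
        cands = [{**h, v: val}
                 for v in targets if v not in h for val in (0, 1)]
        if not cands:
            return h, best
        top = max(cands, key=lambda c: gbf(c, e))
        score = gbf(top, e)
        if score <= best:
            return h, best
        h, best = top, score

evidence = {'E': 1}
solution, score = forward_search(['A', 'B'], evidence)
```

Note that the search can return a strict subset of the targets: here the single-variable explanation A=1 scores higher than any two-variable extension, which illustrates why MRE ranges over partial instantiations rather than full ones.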


Most Relevant Explanation · Computational complexity · \(NP^{\it PP}\)-complete · Reversible jump MCMC · Local search

Mathematics Subject Classification (2010)






Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  1. Department of Computer Science and Engineering, Mississippi State University, Mississippi State, USA
  2. Department of Computer Science, Rutgers University, Piscataway, USA
