Finding Dissimilar Explanations in Bayesian Networks: Complexity Results

Conference paper in Artificial Intelligence (BNAIC 2018), part of the book series Communications in Computer and Information Science (CCIS, volume 1021).

Abstract

Finding the most probable explanation for observed variables in a Bayesian network is a notoriously intractable problem, particularly if there are hidden variables in the network. In this paper we examine the complexity of a related problem, namely, finding a set of sufficiently dissimilar, yet all plausible, explanations. Applications of this problem arise, for example, in search query results (one does not want ten results that all link to the same website) and in decision support systems. We show that the problem of finding a ‘good enough’ explanation that differs in structure from the best explanation is at least as hard as finding the best explanation itself.
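
As a rough illustration of the problem described above, the following Python sketch computes the most probable explanation in a tiny Bayesian network and then looks for a ‘good enough’ alternative that differs from it in at least a given number of variables. The network, probabilities, and threshold parameters are invented for illustration and are not taken from the paper, and brute-force enumeration is of course not a tractable algorithm for networks of realistic size.

    from itertools import product

    # Hypothetical toy network A -> B, A -> C over binary variables; C is observed,
    # (A, B) is the explanation set. All numbers are made up for illustration.
    P_A = {0: 0.6, 1: 0.4}
    P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
    P_C_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}

    def joint(a, b, c):
        # Joint probability P(A=a, B=b, C=c) by the chain rule of the network.
        return P_A[a] * P_B_given_A[a][b] * P_C_given_A[a][c]

    def ranked_explanations(c_obs):
        # All assignments to (A, B), ranked by (unnormalised) probability given C = c_obs.
        return sorted(((joint(a, b, c_obs), (a, b))
                       for a, b in product((0, 1), repeat=2)), reverse=True)

    def dissimilar_alternative(c_obs, min_hamming=1, ratio=0.1):
        # Best explanation, plus the most probable alternative that differs from it
        # in at least `min_hamming` variables and is at least `ratio` times as probable.
        ranked = ranked_explanations(c_obs)
        p_best, best = ranked[0]
        for p, expl in ranked[1:]:
            hamming = sum(x != y for x, y in zip(best, expl))
            if hamming >= min_hamming and p >= ratio * p_best:
                return best, expl
        return best, None  # no sufficiently dissimilar, sufficiently probable alternative

    print(dissimilar_alternative(c_obs=1, min_hamming=2, ratio=0.1))
    # -> ((1, 1), (0, 0)): the best explanation and an alternative differing in both variables.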


Notes

  1. An example of such a situation would be when \(m = 5\) and the solutions with binary encodings 000, 011, 101, 110, and 111 are in \(optsol_{\mathcal{B}}^{1 \dots m}\), with \(optsol_{\mathcal{B}} = 000\).
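
As a quick sanity check of this example (assuming structural difference between solutions is measured as the Hamming distance between their binary encodings, which is an assumption on our part), the following sketch verifies that none of the other solutions in the top-\(m\) set is within Hamming distance 1 of \(optsol_{\mathcal{B}} = 000\):

    # Hamming distances from the best solution 000 to the other top-m solutions.
    best = "000"
    others = ["011", "101", "110", "111"]
    distances = {s: sum(a != b for a, b in zip(best, s)) for s in others}
    print(distances)                      # {'011': 2, '101': 2, '110': 2, '111': 3}
    assert all(d >= 2 for d in distances.values())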


Author information

Correspondence to Johan Kwisthout.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Kwisthout, J. (2019). Finding Dissimilar Explanations in Bayesian Networks: Complexity Results. In: Atzmueller, M., Duivesteijn, W. (eds) Artificial Intelligence. BNAIC 2018. Communications in Computer and Information Science, vol 1021. Springer, Cham. https://doi.org/10.1007/978-3-030-31978-6_6

  • DOI: https://doi.org/10.1007/978-3-030-31978-6_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-31977-9

  • Online ISBN: 978-3-030-31978-6
