Parent Assignment Is Hard for the MDL, AIC, and NML Costs
Several hardness results are presented for the parent assignment problem: given m observations of n attributes x_1, ..., x_n, find the best parents for x_n, that is, a subset of the preceding attributes that minimizes a fixed cost function. This attribute (feature) selection task plays an important role, e.g., in structure learning in Bayesian networks, yet little is known about its computational complexity. In this paper we prove that, under the commonly adopted full-multinomial likelihood model, the MDL, BIC, or AIC cost cannot be approximated in polynomial time to a ratio less than 2 unless there exists a polynomial-time algorithm for determining whether a directed graph with n nodes has a dominating set of size log n, a LOGSNP-complete problem for which no polynomial-time algorithm is known; as we also show, it is unlikely that these penalized maximum-likelihood costs can be approximated to within any constant ratio. For the NML (normalized maximum likelihood) cost we prove an NP-completeness result. These results both justify the application of existing methods and motivate research on heuristic and super-polynomial-time algorithms.
Keywords: Polynomial Time, Bayesian Network, Directed Graph, Positive Instance, Parent Assignment
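To make the problem statement concrete, the following is a minimal sketch of parent assignment by exhaustive search, scoring each candidate parent set with a standard BIC cost (penalized negative maximized log-likelihood) under a multinomial model. The function names and the data layout are illustrative, not taken from the paper; the exponential-time enumeration over all subsets of the preceding attributes is exactly what the hardness results suggest cannot, in general, be replaced by an efficient approximation.

```python
from itertools import combinations
from collections import Counter
from math import log

def bic_cost(data, child, parents):
    """BIC cost of `child` given the parent set `parents`.

    data    : list of tuples of discrete attribute values (one tuple per observation)
    child   : column index of the attribute whose parents we seek
    parents : tuple of column indices of candidate parent attributes
    """
    m = len(data)
    # Empirical counts of (parent configuration, child value) and of parent configurations.
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    # Negative maximized log-likelihood under the full-multinomial model.
    neg_ll = -sum(c * log(c / marg[s]) for (s, x), c in joint.items())
    r = len({row[child] for row in data})  # arity of the child attribute
    q = max(len(marg), 1)                  # number of observed parent configurations
    # BIC penalty: (log m)/2 per free parameter, q*(r-1) parameters.
    return neg_ll + 0.5 * log(m) * q * (r - 1)

def best_parents(data, child):
    """Exhaustive parent assignment: try every subset of the preceding attributes."""
    candidates = range(child)
    return min((subset
                for k in range(len(candidates) + 1)
                for subset in combinations(candidates, k)),
               key=lambda s: bic_cost(data, child, s))
```

For example, on data where x_3 copies x_1 exactly, the search selects the single parent (0,): the likelihood term vanishes and the penalty for one binary parent is smaller than the fit loss of the empty set or the extra penalty of a larger set. The search visits all 2^(n-1) subsets, so it is practical only for small n.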