
Local Search Methods for Learning Bayesian Networks Using a Modified Neighborhood in the Space of DAGs

  • Conference paper
Part of the Lecture Notes in Computer Science book series (LNAI, volume 2527)

Abstract

The dominant approach for learning Bayesian networks from data is based on the use of a scoring metric, which evaluates the fitness of any given candidate network to the data, and a search procedure, which explores the space of possible solutions. The most efficient methods used in this context are (Iterated) Local Search algorithms. These methods use a predefined neighborhood structure that defines the feasible elementary modifications (local changes) that can be applied to a given solution in order to obtain another, potentially better solution. If the search space is the set of directed acyclic graphs (DAGs), the usual choices for local changes are arc addition, arc deletion and arc reversal. In this paper we propose a new definition of neighborhood in the DAG space, which uses a modified operator for arc reversal. The motivation for this new operator is the observation that local search algorithms experience problems when some arcs are wrongly oriented. We exemplify the general usefulness of our proposal by means of a set of experiments with different metrics and different local search methods, including Hill-Climbing and Greedy Randomized Adaptive Search Procedure (GRASP), as well as using several domain problems.
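The classic neighborhood the abstract refers to (arc addition, arc deletion and arc reversal over DAGs, driven by a scoring metric) can be sketched as a plain hill-climbing loop. The following Python sketch is illustrative only and is not the paper's implementation: `score` is a placeholder for any scoring metric (e.g. BDeu or MDL) evaluated against the data, and acyclicity is checked with a simple topological-sort test.

```python
def is_acyclic(nodes, arcs):
    """Check acyclicity via Kahn's algorithm (topological sort)."""
    indeg = {n: 0 for n in nodes}
    for (_, v) in arcs:
        indeg[v] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for (u, v) in arcs:
            if u == n:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    return seen == len(nodes)  # all nodes sorted => no cycle

def neighbors(nodes, arcs):
    """Classic neighborhood: arc deletion, arc reversal and arc addition,
    keeping only candidates that remain acyclic."""
    arcs = set(arcs)
    for (u, v) in arcs:
        yield arcs - {(u, v)}                 # deletion (always acyclic)
        rev = (arcs - {(u, v)}) | {(v, u)}
        if is_acyclic(nodes, rev):
            yield rev                          # reversal
    for u in nodes:
        for v in nodes:
            if u != v and (u, v) not in arcs and (v, u) not in arcs:
                cand = arcs | {(u, v)}
                if is_acyclic(nodes, cand):
                    yield cand                 # addition

def hill_climb(nodes, score, start=frozenset()):
    """Greedy local search: move to the best-scoring neighbor until no
    neighbor improves the current network. `score` is any metric over
    arc sets; here it is an arbitrary user-supplied callable."""
    current = set(start)
    while True:
        best, best_score = None, score(current)
        for cand in neighbors(nodes, current):
            s = score(cand)
            if s > best_score:
                best, best_score = set(cand), s
        if best is None:
            return current  # local optimum reached
        current = best
```

With a toy score that rewards closeness to a known target structure, the loop recovers the target from the empty graph; the paper's point is precisely that with real data-driven metrics, plain reversal can leave such a loop stuck when arcs are wrongly oriented.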

Keywords

  • Local Search
  • Bayesian Network
  • Neighborhood Structure
  • Local Search Algorithm
  • Bayesian Belief Network





Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

de Campos, L.M., Fernández-Luna, J.M., Puerta, J.M. (2002). Local Search Methods for Learning Bayesian Networks Using a Modified Neighborhood in the Space of DAGs. In: Garijo, F.J., Riquelme, J.C., Toro, M. (eds) Advances in Artificial Intelligence — IBERAMIA 2002. Lecture Notes in Computer Science, vol 2527. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36131-6_19


  • DOI: https://doi.org/10.1007/3-540-36131-6_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-00131-7

  • Online ISBN: 978-3-540-36131-2

  • eBook Packages: Springer Book Archive
