
Loopy Substructural Local Search for the Bayesian Optimization Algorithm

  • Claudio F. Lima
  • Martin Pelikan
  • Fernando G. Lobo
  • David E. Goldberg
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5752)

Abstract

This paper presents a local search method for the Bayesian optimization algorithm (BOA) based on the concepts of substructural neighborhoods and loopy belief propagation. The probabilistic model of BOA, which automatically identifies important problem substructures, is used to define the topology of the neighborhoods explored in local search. Loopy belief propagation in graphical models is then employed to find the most suitable configuration of conflicting substructures. The results show that performing loopy substructural local search (SLS) in BOA can dramatically reduce the number of generations necessary to converge to optimal solutions, thus providing substantial speedups.
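
To make the message-passing step concrete, the following is a minimal, self-contained sketch of max-product (loopy) belief propagation on a toy factor graph, in the spirit of the configuration step described above. The factor graph, its tables, and the helper max_product are hypothetical illustrations, not code from the paper; in loopy SLS the factors would instead be derived from the Bayesian network learned by BOA, whose cycles are what make the propagation "loopy" (the toy graph below is a chain, where max-product is exact).

import itertools
import numpy as np

# Toy factor graph: three binary variables with two overlapping
# "substructures" f0(x0, x1) and f1(x1, x2). The tables are made up;
# in loopy SLS they would come from BOA's learned Bayesian network.
factors = [
    {"scope": (0, 1), "table": np.array([[2.0, 1.0], [1.0, 4.0]])},
    {"scope": (1, 2), "table": np.array([[3.0, 1.0], [1.0, 2.0]])},
]
n_vars = 3

def max_product(factors, n_vars, iters=10):
    # One message per edge in each direction (factor->variable and
    # variable->factor); each is a length-2 vector, initialized uniform.
    f2v = {(i, v): np.ones(2) for i, f in enumerate(factors) for v in f["scope"]}
    v2f = {(v, i): np.ones(2) for i, f in enumerate(factors) for v in f["scope"]}
    for _ in range(iters):
        # Variable -> factor: product of messages from the *other* factors.
        for (v, i) in v2f:
            msg = np.ones(2)
            for (j, u), m in f2v.items():
                if u == v and j != i:
                    msg = msg * m
            v2f[(v, i)] = msg / msg.sum()          # normalize for stability
        # Factor -> variable: maximize over the other variables in the scope.
        for (i, v) in f2v:
            scope = factors[i]["scope"]
            out = np.zeros(2)
            for assign in itertools.product((0, 1), repeat=len(scope)):
                val = factors[i]["table"][assign]
                for u, a in zip(scope, assign):
                    if u != v:
                        val = val * v2f[(u, i)][a]
                xv = assign[scope.index(v)]
                out[xv] = max(out[xv], val)
            f2v[(i, v)] = out / out.sum()
    # Max-marginal beliefs: product of incoming factor messages per variable;
    # the per-variable argmax is the proposed joint configuration.
    beliefs = np.ones((n_vars, 2))
    for (i, v), m in f2v.items():
        beliefs[v] *= m
    return beliefs.argmax(axis=1)

print(max_product(factors, n_vars))  # -> [1 1 1], the best joint configuration

On this chain the procedure recovers the exact maximizer of f0(x0, x1) * f1(x1, x2); on a graph with cycles the same updates are run anyway and typically converge to a good, though not guaranteed optimal, configuration of the conflicting substructures.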

Keywords

Local Search, Bayesian Network, Problem Size, Factor Node, Local Search Method



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Claudio F. Lima (1)
  • Martin Pelikan (2)
  • Fernando G. Lobo (1)
  • David E. Goldberg (3)

  1. University of Algarve, Portugal
  2. University of Missouri at St. Louis, USA
  3. University of Illinois at Urbana-Champaign, USA
