Linkage Learning Accuracy in the Bayesian Optimization Algorithm

  • Claudio F. Lima
  • Martin Pelikan
  • David E. Goldberg
  • Fernando G. Lobo
  • Kumara Sastry
  • Mark Hauschild
Part of the Studies in Computational Intelligence book series (SCI, volume 157)

Summary

The Bayesian optimization algorithm (BOA) uses Bayesian networks to learn linkages between the decision variables of an optimization problem. This chapter studies the influence of different selection and replacement methods on the accuracy of linkage learning in BOA. Results on concatenated m-k deceptive trap functions show that model accuracy depends to a large extent on the choice of selection method and to a lesser extent on the replacement strategy used. Specifically, linkage learning in BOA is shown to be more accurate with truncation selection than with tournament selection. The choice of replacement strategy matters when tournament selection is used, but is irrelevant under truncation selection. If overall performance is the main concern, however, tournament selection combined with restricted tournament replacement should be preferred.
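
To make the experimental setup concrete, the sketch below gives a minimal Python implementation of a concatenated k-bit deceptive trap function together with the two selection schemes compared in the chapter. It is an illustration, not the authors' code; the function names and the parameter defaults (k = 5, tournament size s = 4, truncation threshold tau = 0.5) are assumptions chosen for readability.

```python
import random

def trap_k(block, k=5):
    """k-bit fully deceptive trap: the optimum is all ones, but the
    fitness gradient below the optimum points toward all zeros."""
    u = sum(block)  # number of ones in the block
    return k if u == k else k - 1 - u

def concatenated_traps(x, k=5):
    """Additively decomposable fitness: an m*k-bit string scored as
    the sum of m independent k-bit traps (the 'm-k trap' function)."""
    return sum(trap_k(x[i:i + k], k) for i in range(0, len(x), k))

def tournament_selection(pop, fitness, s=4):
    """s-wise tournament: pick the best of s random individuals,
    repeated until the mating pool has the size of the population."""
    return [max(random.sample(pop, s), key=fitness) for _ in pop]

def truncation_selection(pop, fitness, tau=0.5):
    """Truncation: keep the best tau fraction of the population and
    replicate it to restore the original population size."""
    ranked = sorted(pop, key=fitness, reverse=True)
    top = ranked[:max(1, int(tau * len(pop)))]
    return [top[i % len(top)] for i in range(len(pop))]
```

A correct linkage model for this function groups exactly the k bits of each trap partition, which is what makes the m-k trap a natural benchmark for measuring linkage learning accuracy.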

Additionally, the procedure BOA uses to learn Bayesian networks is investigated to explain the difference in model quality observed between tournament and truncation selection. It is shown that if the metric that scores candidate networks is modified to account for the nature of tournament selection, the linkage learning accuracy obtained with tournament selection improves dramatically.
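
For readers unfamiliar with how candidate networks are scored, the sketch below shows a minimal BIC-style scoring metric for a Bayesian network over binary variables. BOA implementations typically use Bayesian-Dirichlet or BIC-type metrics; this is a generic stand-in under that assumption, and the selection-aware correction studied in the chapter (which adjusts the score for the mating-pool distribution induced by tournament selection) is not reproduced here.

```python
import math
from collections import Counter

def bic_score(data, parents):
    """BIC score of a Bayesian network structure over binary variables.

    data    : list of bit vectors (one sample per row)
    parents : dict mapping each variable index to a tuple of parent indices
    Higher is better: the log-likelihood term rewards dependencies that
    fit the selected population, while the (log N)/2 penalty per free
    parameter discourages overly complex (spurious) linkage.
    """
    n = len(data)
    score = 0.0
    for x, pa in parents.items():
        # Joint counts of (parent configuration, child value) and
        # marginal counts of the parent configuration alone.
        joint = Counter((tuple(row[p] for p in pa), row[x]) for row in data)
        marg = Counter(tuple(row[p] for p in pa) for row in data)
        # Maximized log-likelihood of the child given its parents.
        for (cfg, _val), n_xy in joint.items():
            score += n_xy * math.log(n_xy / marg[cfg])
        # One free parameter per parent configuration (binary child).
        score -= 0.5 * math.log(n) * (2 ** len(pa))
    return score
```

Greedy structure learning in BOA repeatedly applies the edge addition that most improves such a score; if the score misjudges which dependencies are real under a given selection scheme, the learned linkage degrades accordingly.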

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Claudio F. Lima (1)
  • Martin Pelikan (2)
  • David E. Goldberg (3)
  • Fernando G. Lobo (1)
  • Kumara Sastry (3)
  • Mark Hauschild (2)

  1. University of Algarve, Portugal
  2. University of Missouri at St. Louis, USA
  3. University of Illinois at Urbana-Champaign, USA
