Summary
The Bayesian optimization algorithm (BOA) uses Bayesian networks to learn linkages between the decision variables of an optimization problem. This chapter studies the influence of different selection and replacement methods on the accuracy of linkage learning in BOA. Results on concatenated m-k deceptive trap functions show that model accuracy depends to a large extent on the choice of selection method and to a lesser extent on the replacement strategy. Specifically, linkage learning in BOA is shown to be more accurate with truncation selection than with tournament selection. The choice of replacement strategy matters when tournament selection is used but is irrelevant under truncation selection. If performance is the main concern, however, tournament selection combined with restricted tournament replacement should be preferred.
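The concatenated m-k deceptive trap function used as the test problem can be sketched in a few lines. This is the standard fully deceptive trap of Deb and Goldberg: each k-bit block is scored by its number of ones, with the global optimum at all ones and a deceptive slope toward all zeros; the function and parameter names here are illustrative, not taken from the chapter.

```python
def trap(u, k=5):
    """Fully deceptive k-bit trap: global maximum at u == k,
    deceptive slope pulling search toward u == 0."""
    return k if u == k else k - 1 - u

def concatenated_traps(bits, k=5):
    """Sum of m independent k-bit traps over a bitstring of length m*k."""
    assert len(bits) % k == 0
    return sum(trap(sum(bits[i:i + k]), k) for i in range(0, len(bits), k))

# Global optimum is the all-ones string; all-zeros blocks are local attractors.
m, k = 4, 5
print(concatenated_traps([1] * (m * k), k))  # 20 (m * k)
print(concatenated_traps([0] * (m * k), k))  # 16 (m * (k - 1))
```

Because the bits within each block interact deceptively, a model builder must capture each k-bit linkage group to mix building blocks effectively, which is why this family of functions is a common benchmark for linkage learning.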
Additionally, the learning procedure of Bayesian networks in BOA is investigated to clarify the difference observed between tournament and truncation selection in terms of model quality. It is shown that if the metric that scores candidate networks is changed to take into account the nature of tournament selection, the linkage learning accuracy with tournament selection improves dramatically.
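The two selection schemes compared above can be sketched as follows; the function names and the τ = 0.5 truncation threshold are illustrative assumptions, not values from the chapter. The key contrast is that truncation keeps each selected individual once, whereas tournament selection samples with replacement, so strong individuals appear many times in the mating pool and skew its distribution, which is one plausible source of the model-quality difference reported here.

```python
import random

def truncation_selection(pop, fitness, tau=0.5):
    """Keep the top tau fraction of the population; no duplicates introduced."""
    ranked = sorted(pop, key=fitness, reverse=True)
    return ranked[: max(1, int(tau * len(pop)))]

def tournament_selection(pop, fitness, s=2, n=None):
    """n independent s-wise tournaments with replacement; good individuals
    can be selected repeatedly, duplicating them in the mating pool."""
    n = n or len(pop)
    return [max(random.sample(pop, s), key=fitness) for _ in range(n)]
```

With integer "individuals" and identity fitness, `truncation_selection(list(range(10)), lambda x: x)` returns the five best values exactly once each, while `tournament_selection` typically returns a multiset with repeats.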
© 2008 Springer-Verlag Berlin Heidelberg
Cite this chapter
Lima, C.F., Pelikan, M., Goldberg, D.E., Lobo, F.G., Sastry, K., Hauschild, M. (2008). Linkage Learning Accuracy in the Bayesian Optimization Algorithm. In: Chen, Yp., Lim, MH. (eds) Linkage in Evolutionary Computation. Studies in Computational Intelligence, vol 157. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85068-7_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-85067-0
Online ISBN: 978-3-540-85068-7