Model Complexity vs. Performance in the Bayesian Optimization Algorithm

  • Elon S. Correa
  • Jonathan L. Shapiro
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4193)


The Bayesian Optimization Algorithm (BOA) uses a Bayesian network to estimate the probability distribution of promising solutions to a given optimization problem. This distribution is then used to generate new candidate solutions. The objective is to improve the population of candidate solutions by learning from good solutions and sampling from the learned model. A Bayesian network (BN) is a graphical representation of a probability distribution over a set of variables of a given problem domain. The number of network topologies that a BN can represent depends on a parameter called the maximum allowed indegree. We show that the value of the maximum allowed indegree given to the Bayesian network used by the BOA strongly affects the performance of this algorithm. Furthermore, there is a limited set of values for this parameter for which the performance of the BOA is maximized.
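To give a feel for how the maximum allowed indegree controls model complexity, the sketch below counts the candidate parent sets that the BOA's structure-learning step must consider for a single node. This is an illustrative calculation, not code from the paper; the function name `parent_set_count` is ours, and the counting assumes each node may take any subset of the other variables as parents, up to the indegree bound.

```python
from math import comb

def parent_set_count(n_vars: int, max_indegree: int) -> int:
    """Number of candidate parent sets for one node in a Bayesian network
    over n_vars variables when that node may have at most max_indegree
    parents, drawn from the remaining n_vars - 1 variables."""
    return sum(comb(n_vars - 1, k) for k in range(max_indegree + 1))

# The structure search space grows quickly with the indegree bound.
# For a 10-variable problem:
for k in (0, 1, 2, 5):
    print(k, parent_set_count(10, k))
# k=0 gives 1 (no parents: a fully factorized model);
# larger k admits richer dependency structures at higher learning cost.
```

This trade-off is the core of the paper's question: too small an indegree and the BN cannot capture the problem's variable interactions; too large and structure learning becomes expensive and prone to fitting spurious correlations.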


Keywords: Bayesian Network · Candidate Solution · Partial Solution · Conjunctive Normal Form · Spurious Correlation





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Elon S. Correa (1)
  • Jonathan L. Shapiro (2)
  1. Computing Laboratory, University of Kent, Canterbury, Kent, United Kingdom
  2. School of Computer Science, University of Manchester, Manchester, United Kingdom
