Journal of Computer Science and Technology, Volume 28, Issue 4, pp 720–731

Parameter Control of Genetic Algorithms by Learning and Simulation of Bayesian Networks — A Case Study for the Optimal Ordering of Tables

  • Concha Bielza
  • Juan A. Fernández del Pozo
  • Pedro Larrañaga
Regular Paper


Parameter setting for evolutionary algorithms remains an important issue in evolutionary computation. There are two main approaches: parameter tuning and parameter control. In this paper, we introduce self-adaptive parameter control of a genetic algorithm based on Bayesian network learning and simulation. The nodes of the Bayesian network are the genetic algorithm parameters to be controlled, and its structure captures probabilistic conditional (in)dependence relationships between them. These relationships are learned from the best individuals, i.e., the best configurations of the genetic algorithm, where each individual is evaluated by running the genetic algorithm with the corresponding parameter configuration. Since these runs are time-consuming, each genetic algorithm uses a small population and is stopped before convergence; this keeps evaluation cheap while still preserving promising individuals. Experiments on an optimal search problem for simultaneous row and column orderings yield the same optima as state-of-the-art methods but with a sharp reduction in computation time. Moreover, our approach can cope with high-dimensional problems that were previously unsolved.


Keywords: genetic algorithm; estimation of distribution algorithm; parameter control; parameter setting; Bayesian network
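The control loop described in the abstract — sample parameter configurations, score each by a short, early-stopped genetic algorithm run, learn a probabilistic model from the best configurations, and resample from that model — can be sketched as below. This is a simplified illustration, not the paper's implementation: the learned model is reduced to independent per-parameter marginals (a UMDA-style approximation) rather than a full Bayesian network, and the parameter names, domains, and toy fitness function are assumptions made for the sketch.

```python
import random
from collections import Counter

# Hypothetical discrete domains for the GA parameters being controlled
# (illustrative values, not the paper's actual experimental grid).
PARAM_DOMAINS = {
    "pop_size":       [20, 50, 100],
    "crossover_rate": [0.6, 0.8, 0.9],
    "mutation_rate":  [0.01, 0.05, 0.1],
}

def run_small_ga(config, rng):
    """Stand-in for one short, early-stopped GA run with this configuration.
    Returns a noisy toy fitness that prefers high crossover, low mutation."""
    return config["crossover_rate"] - config["mutation_rate"] + rng.random() * 0.1

def sample_config(marginals, rng):
    """Sample one parameter configuration from the independent marginals."""
    return {p: rng.choices(vals, weights=[marginals[p][v] for v in vals])[0]
            for p, vals in PARAM_DOMAINS.items()}

def fit_marginals(selected):
    """Re-estimate the marginals from the selected (best) configurations,
    with Laplace smoothing so no value's probability collapses to zero."""
    marginals = {}
    for p, vals in PARAM_DOMAINS.items():
        counts = Counter(cfg[p] for cfg in selected)
        total = len(selected) + len(vals)
        marginals[p] = {v: (counts[v] + 1) / total for v in vals}
    return marginals

def control_parameters(generations=10, pop=20, top=5, seed=0):
    """Outer loop: evolve a distribution over GA parameter configurations."""
    rng = random.Random(seed)
    # Start from uniform marginals over each parameter's domain.
    marginals = {p: {v: 1 / len(vals) for v in vals}
                 for p, vals in PARAM_DOMAINS.items()}
    for _ in range(generations):
        configs = [sample_config(marginals, rng) for _ in range(pop)]
        ranked = sorted(configs, key=lambda c: run_small_ga(c, rng), reverse=True)
        marginals = fit_marginals(ranked[:top])   # learn from the best individuals
    # Return the modal (most probable) configuration under the final model.
    return {p: max(m, key=m.get) for p, m in marginals.items()}

best = control_parameters()
```

Replacing `fit_marginals` with Bayesian network structure and parameter learning, and `sample_config` with probabilistic logic sampling from the learned network, recovers the shape of the method the abstract describes.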



Supplementary material

11390_2013_1370_MOESM1_ESM.doc (DOC, 38.0 KB)



Copyright information

© Springer Science+Business Media New York & Science Press, China 2013

Authors and Affiliations

  • Concha Bielza (1)
  • Juan A. Fernández del Pozo (1)
  • Pedro Larrañaga (1)

  1. Computational Intelligence Group, Department of Artificial Intelligence, Technical University of Madrid, Madrid, Spain
