Large-Scale Global Optimization Using Cooperative Coevolution with Variable Interaction Learning

  • Wenxiang Chen
  • Thomas Weise
  • Zhenyu Yang
  • Ke Tang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6239)

Abstract

In recent years, Cooperative Coevolution (CC) has been proposed as a promising framework for tackling high-dimensional optimization problems. The main idea of CC-based algorithms is to discover which decision variables, i.e., dimensions, of the search space interact. Non-interacting variables can be optimized as separate problems of lower dimensionality, whereas interacting variables must be grouped together and optimized jointly. Early research in this area started with simple attempts such as one-dimension-based and splitting-in-half methods. Later, more efficient algorithms with new grouping strategies, such as DECC-G and MLCC, were proposed. However, those grouping strategies still cannot adapt sufficiently to different group sizes. In this paper, we propose a new CC framework named Cooperative Coevolution with Variable Interaction Learning (CCVIL), which initially considers all variables as independent and puts each of them into a separate group. Iteratively, it discovers their relations and merges the groups accordingly. The efficiency of the newly proposed framework is evaluated on a set of large-scale optimization benchmarks.
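
The learning step described above can be illustrated with a minimal Python sketch: start with every variable in its own group, test pairs of variables for interaction by perturbing them at sampled points, and merge the groups of variables found to interact. The function names (`interacts`, `learn_groups`), the perturbation-based test, and the toy objective below are illustrative assumptions, not the exact procedure used by CCVIL.

```python
import random

def interacts(f, x, i, j, delta=1.0, eps=1e-10):
    """Illustrative perturbation test (an assumption, not CCVIL's exact rule):
    i and j are treated as interacting if the change in f caused by
    perturbing x[i] depends on whether x[j] has also been perturbed."""
    base = list(x)
    xi = list(x);  xi[i] += delta
    xj = list(x);  xj[j] += delta
    xij = list(x); xij[i] += delta; xij[j] += delta
    d1 = f(xi) - f(base)   # effect of perturbing x_i alone
    d2 = f(xij) - f(xj)    # effect of perturbing x_i after x_j was perturbed
    return abs(d1 - d2) > eps

def learn_groups(f, dim, samples=10, delta=1.0, seed=0):
    """Start with every variable in its own group (all assumed independent)
    and merge two groups whenever an interaction between their members is
    detected at a randomly sampled point."""
    rng = random.Random(seed)
    groups = [{i} for i in range(dim)]
    for _ in range(samples):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        for i in range(dim):
            for j in range(i + 1, dim):
                gi = next(g for g in groups if i in g)
                gj = next(g for g in groups if j in g)
                if gi is not gj and interacts(f, x, i, j, delta):
                    gi |= gj           # merge the two groups
                    groups.remove(gj)
    return groups

if __name__ == "__main__":
    def f(x):
        # x0 and x1 interact through the product term; x2 and x3 are separable.
        return (x[0] * x[1]) ** 2 + x[2] ** 2 + x[3] ** 2
    print(learn_groups(f, dim=4))      # expected: [{0, 1}, {2}, {3}]
```

In a full CC framework, each resulting group would then be optimized by a separate subcomponent, with the remaining variables held fixed at their current best values.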

Keywords

Variable Interaction Learning · Large-Scale Optimization · Numerical Optimization · Incremental Group Strategy · Cooperative Coevolution


References

  1. Sarker, R., Mohammadian, M., Yao, X.: Evolutionary Optimization. Kluwer Academic Publishers, Norwell (2002)
  2. van den Bergh, F., Engelbrecht, A.: A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation 8(3), 225–239 (2004)
  3. Husbands, P., Mill, F.: Simulated co-evolution as the mechanism for emergent planning and scheduling. In: 4th Intl. Conf. on Genetic Algorithms, pp. 264–270. Morgan Kaufmann, San Francisco (1991)
  4. Potter, M.A., De Jong, K.A.: Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evolutionary Computation 8(1), 1–29 (2000)
  5. Yong, C.H., Miikkulainen, R.: Cooperative coevolution of multi-agent systems. Technical Report AI01-287, University of Texas at Austin, Austin, TX, USA (2001)
  6. Yang, Z., Tang, K., Yao, X.: Large scale evolutionary optimization using cooperative coevolution. Information Sciences 178(15), 2985–2999 (2008)
  7. Tang, K., Li, X., Suganthan, P.N., Yang, Z., Weise, T.: Benchmark functions for the CEC'2010 special session and competition on large scale global optimization. Technical Report, Nature Inspired Computation and Applications Laboratory (NICAL), USTC, Hefei, Anhui, China (2009), http://nical.ustc.edu.cn/cec10ss.php
  8. Weicker, K., Weicker, N.: On the improvement of coevolutionary optimizers by learning variable interdependencies. In: IEEE Congress on Evolutionary Computation, pp. 1627–1632. IEEE Press, Los Alamitos (1999)
  9. Potter, M.A., De Jong, K.A.: A cooperative coevolutionary approach to function optimization. In: 3rd Conf. on Parallel Problem Solving from Nature, vol. 2, pp. 249–257 (1994)
  10. Aickelin, U.: A pyramidal evolutionary algorithm with different inter-agent partnering strategies for scheduling problems. In: GECCO Late-Breaking Papers, pp. 1–8
  11. Yang, Z., Tang, K., Yao, X.: Multilevel cooperative coevolution for large scale optimization. In: IEEE Congress on Evolutionary Computation, pp. 1663–1670. IEEE Press, Los Alamitos (2008)
  12. Price, K., Storn, R., Lampinen, J.A.: Differential Evolution: A Practical Approach to Global Optimization. Springer, Heidelberg (2005)
  13. Chakraborty, U.K. (ed.): Advances in Differential Evolution. Springer, Berlin (2008)
  14. Zhang, J., Sanderson, A.C.: JADE: Adaptive differential evolution with optional external archive. IEEE Transactions on Evolutionary Computation 13(5), 945–958 (2009)
  15. Auger, A., Hansen, N.: A restart CMA evolution strategy with increasing population size. In: IEEE Congress on Evolutionary Computation, vol. 2. IEEE Press, Los Alamitos (2005)
  16. Streeter, M.J.: Upper bounds on the time and space complexity of optimizing additively separable functions. In: GECCO, pp. 186–197. Springer, Heidelberg (2004)
  17. Hansen, N., Kern, S.: Evaluating the CMA evolution strategy on multimodal test functions. In: Parallel Problem Solving from Nature – PPSN VIII, pp. 282–291. Springer, Heidelberg (2004)
  18. Mann, H.B., Whitney, D.R.: On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics 18(1), 50–60 (1947)
  19. Hansen, N., Müller, S.D., Koumoutsakos, P.: Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation 11(1), 1–18 (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Wenxiang Chen (1)
  • Thomas Weise (1)
  • Zhenyu Yang (1)
  • Ke Tang (1)
  1. Nature Inspired Computation and Applications Laboratory, School of Computer Science and Technology, University of Science and Technology of China
