Using Hybrid Dependency Identification with a Memetic Algorithm for Large Scale Optimization Problems

  • Eman Sayed
  • Daryl Essam
  • Ruhul Sarker
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7673)


Decomposing a large scale problem into smaller subproblems is one approach to overcoming the performance deterioration that evolutionary algorithms (EAs) typically suffer as dimensionality grows. For a decomposition approach to perform well, interdependent variables need to be grouped into the same subproblem. In this paper, the Hybrid Dependency Identification with Memetic Algorithm (HDIMA) model is proposed for large scale optimization problems. The Dependency Identification (DI) technique identifies the variables that should be grouped together to form the subproblems, and these subproblems are then evolved using a Memetic Algorithm (MA). Before the end of the evolution process, the subproblems are aggregated and optimized as one complete large scale problem. A newly designed test suite of problems is used to evaluate the performance of HDIMA over different dimensions. The evaluation shows that HDIMA is competitive with other models in the literature, consuming fewer computational resources while achieving better performance.
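The pipeline described above (decompose via dependency identification, evolve each subproblem with a memetic algorithm, then aggregate and refine the complete problem) can be outlined in code. The Python sketch below is illustrative only and is not the authors' implementation: identify_groups is a naive consecutive-chunking stand-in for the paper's Dependency Identification technique, the Rosenbrock function stands in for the paper's benchmark suite, and all function names and parameter values are hypothetical.

    import random

    random.seed(1)

    DIM = 20          # full problem dimensionality (small here; the paper targets much larger)
    GROUP_SIZE = 5    # subproblem size produced by the (placeholder) grouping step

    def rosenbrock(x):
        """Non-separable test function standing in for a large scale benchmark."""
        return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
                   for i in range(len(x) - 1))

    def identify_groups(dim, size):
        """Placeholder for the paper's Dependency Identification (DI) step.
        Here we simply chunk consecutive indices; the real DI groups
        interdependent variables together."""
        idx = list(range(dim))
        return [idx[i:i + size] for i in range(0, dim, size)]

    def local_search(x, fitness, step=0.05, iters=20):
        """Simple random-perturbation hill climber used as the memetic refinement."""
        best, best_f = x[:], fitness(x)
        for _ in range(iters):
            cand = [xi + random.uniform(-step, step) for xi in best]
            f = fitness(cand)
            if f < best_f:
                best, best_f = cand, f
        return best

    def evolve_subproblem(context, group, generations=30, pop_size=20):
        """Evolve only the variables in `group`, keeping the rest of the
        context vector fixed (cooperative-coevolution style evaluation)."""
        def sub_fitness(sub):
            full = context[:]
            for j, g in enumerate(group):
                full[g] = sub[j]
            return rosenbrock(full)

        pop = [[random.uniform(-5, 5) for _ in group] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=sub_fitness)              # minimization
            parents = pop[:pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                # blend crossover plus Gaussian mutation
                children.append([(ai + bi) / 2 + random.gauss(0, 0.1)
                                 for ai, bi in zip(a, b)])
            pop = parents + children
        best = min(pop, key=sub_fitness)
        best = local_search(best, sub_fitness)     # memetic (local search) step
        for j, g in enumerate(group):
            context[g] = best[j]
        return context

    # 1) Decompose via the (placeholder) dependency identification.
    groups = identify_groups(DIM, GROUP_SIZE)
    # 2) Evolve each subproblem with the memetic algorithm.
    solution = [random.uniform(-5, 5) for _ in range(DIM)]
    for group in groups:
        solution = evolve_subproblem(solution, group)
    # 3) Aggregate and refine the complete large scale problem.
    solution = local_search(solution, rosenbrock, iters=200)
    print("final fitness:", rosenbrock(solution))

The structural point the sketch preserves is that each subproblem is evaluated in the context of the full solution vector, and that a final whole-problem optimization phase follows the decomposed phase, as in HDIMA.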


Keywords: Large Scale Problems Optimization · Evolutionary Algorithms · Memetic Algorithms · Problem Decomposition · Dependency Identification





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Eman Sayed (1)
  • Daryl Essam (1)
  • Ruhul Sarker (1)

  1. School of Engineering and Information Technology, UNSW at the Australian Defence Force Academy, Canberra, Australia
