A diverse human learning optimization algorithm
Human Learning Optimization (HLO) is a simple yet efficient meta-heuristic algorithm in which three learning operators, i.e. the random learning operator, the individual learning operator, and the social learning operator, are developed to search efficiently for the optimal solution by imitating the learning mechanisms of human beings. However, HLO assumes that all individuals possess the same learning ability, which does not hold in a real human population: IQ scores, one of the most important indices of human learning ability, follow a Gaussian distribution and increase with the development of society and technology. Inspired by this fact, this paper proposes a Diverse Human Learning Optimization algorithm (DHLO), which introduces a Gaussian distribution and a dynamic adjusting strategy. By adopting a set of Gaussian-distributed parameter values instead of a constant, DHLO diversifies the learning abilities of its individuals and thereby strengthens the robustness of the algorithm. In addition, by cooperating with the dynamic updating operation, DHLO can adjust toward better parameter values and consequently enhances its global search ability. Finally, DHLO is applied to the CEC05 benchmark functions as well as knapsack problems, and its performance is compared with standard HLO and eight other meta-heuristics, i.e. Binary Differential Evolution, the Simplified Binary Artificial Fish Swarm Algorithm, Adaptive Binary Harmony Search, the Binary Gravitational Search Algorithm, the Binary Bat Algorithm, the Binary Artificial Bee Colony, Bi-Velocity Discrete Particle Swarm Optimization, and Modified Binary Particle Swarm Optimization. The experimental results show that the presented DHLO outperforms the other algorithms in terms of search accuracy and scalability.
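The three HLO operators and DHLO's diversified parameters can be illustrated with a minimal sketch. This is a hedged reconstruction from the abstract only: the function names (`hlo_update`, `diverse_pi`), the probability parameters `pr` and `pi`, and the Gaussian mean and standard deviation are all assumptions for illustration, not the paper's exact operators or settings.

```python
import random

def hlo_update(individual_best, social_best, pr, pi):
    """Sketch of a binary HLO bit update (assumed operator form).

    For each bit, one of three operators fires:
      random learning     (prob. pr):       take a random bit;
      individual learning (prob. pi - pr):  copy from the individual's best;
      social learning     (otherwise):      copy from the population's best.
    """
    return [
        random.randint(0, 1) if (r := random.random()) < pr
        else individual_best[d] if r < pi
        else social_best[d]
        for d in range(len(individual_best))
    ]

def diverse_pi(pop_size, mean=0.85, std=0.05, lo=0.0, hi=1.0):
    """DHLO's core idea in miniature: instead of one constant pi shared by
    all individuals, draw a Gaussian-distributed pi per individual (clipped
    to [lo, hi]) so that learning abilities differ across the population.
    The mean/std values here are illustrative, not the paper's."""
    return [min(hi, max(lo, random.gauss(mean, std))) for _ in range(pop_size)]
```

Under this sketch, DHLO's dynamic adjusting strategy would periodically re-centre the Gaussian on the parameter values of the currently best-performing individuals, which is how the abstract's "dynamic updating operation" steers the population toward better settings.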
Keywords: Human learning optimization · Gaussian distribution · Meta-heuristic · Global optimization · Computational experiments
This work is supported by the National Natural Science Foundation of China (Grant Nos. 61304031, 61374044, and 61304143), the Innovation Program of the Shanghai Municipal Education Commission (14YZ007), the Key Project of the Science and Technology Commission of Shanghai Municipality (Grant Nos. 14JC1402200 and 14DZ1206302), the Key Project of the Shanghai Municipal Commission of Economy and Informatization (ZB-ZBYZ-02-14-0825), Air Force and DTRA grants (P. Pardalos, J. Pi), and a Paul and Heidi Brown Preeminent Professorship in Industrial and Systems Engineering, University of Florida.