Social group optimization (SGO): a new population evolutionary optimization technique

Social group optimization (SGO), a new population-based optimization technique, is proposed in this paper. It is inspired by the social behavior of humans toward solving complex problems. The concept and the mathematical formulation of the SGO algorithm are explained in this paper with a flowchart. To judge the effectiveness of SGO, extensive experiments have been conducted on a number of unconstrained benchmark functions as well as standard numerical benchmark functions taken from the IEEE Congress on Evolutionary Computation 2005 competition. Performance comparisons are made with state-of-the-art optimization techniques such as GA, PSO, DE, ABC and its variants, and the recently developed TLBO. The experimental outcomes show that the proposed social group optimization outperforms all the investigated optimization techniques in computational cost and also provides optimal solutions for most of the functions considered in our work. The proposed technique is also found to be very simple and straightforward to implement. It is believed that SGO will supplement the group of effective and efficient population-based optimization techniques and give researchers wide scope to choose it in their respective applications.


Introduction
Population-based optimization algorithms inspired by nature commonly locate near-optimal solutions to optimization problems. Every population-based algorithm shares the characteristic of seeking the global solution of the problem: a population begins with initial solutions and gradually moves toward a better region of the search space based on fitness information. Over the last few decades, a number of successful population-based algorithms have emerged for solving complex optimization problems. Some of the well-known population-based optimization techniques are cited below; readers can refer to the respective papers for details. Genetic algorithms (GAs) [1], the most popular ones, are based on genetic science and natural-selection operators. Differential evolution (DE) [2] is based on a concept similar to GA but, unlike GA, it offers all solutions an equal chance of being selected as parents irrespective of their fitness, and it has recently become very well known to optimization researchers. Other examples include bacteria foraging (BF) [3], based on the social foraging behavior of Escherichia coli; shuffled frog leaping (SFL) [4], inspired by natural memetics and combining local search with global information exchange; simulated annealing (SA) [5], based on the steel annealing process; and ant colony optimization (ACO) [6], motivated by the behavior of real ant colonies. Particle swarm optimization (PSO) [7], a technique based on swarm behavior such as fish schooling and bird flocking in nature, has been widely researched and applied to various fields of engineering.
The artificial bee colony (ABC) [8] algorithm, based on the intelligent foraging behavior of a honeybee swarm, the gravitational search algorithm (GSA) [9], based on the law of gravity and notions of mass interactions, and cuckoo search [10], inspired by the obligate brood parasitism of some cuckoo species that lay their eggs in the nests of host birds of other species, are gaining popularity among users as well. Biogeography-based optimization (BBO) [11], based on the migration strategies of animals and other species; the intelligent water drops (IWD) [12] algorithm, inspired by observing natural water drops that flow in rivers and find nearly optimal paths to their destination; the firefly algorithm (FA) [13], inspired by the flashing behavior of fireflies in nature; the honey bee mating optimization (HBMO) [14,15] algorithm, inspired by the marriage process of real honey bees; and the bat algorithm (BA) [16], inspired by the echolocation behavior of bats, are a few more population-based techniques in this category. The harmony search (HS) [17] algorithm, inspired by the improvising process of composing a piece of music, big bang-big crunch (BB-BC) optimization [18], based on one of the theories of the evolution of the universe, and black hole (BH) [19] optimization, inspired by the black hole phenomenon, have also been applied successfully to various problems in engineering. Recently, teaching-learning-based optimization (TLBO) [20], based on the influence of a teacher on the output of learners in a class, is being extensively studied by researchers to solve a variety of optimization problems in engineering applications.
Although all these algorithms are capable of solving optimization problems, issues such as finding optimal solutions, providing fast convergence at low computational effort, choosing and controlling algorithm parameters, algorithm stability and robustness, consistency of the solutions provided, and adaptability to a wide variety of applications have been subjects of extensive research in the optimization community. To address these issues, researchers have developed many variants of the above-mentioned algorithms, and hybridizations of several algorithms have also been attempted.
In an attempt to address some of these challenges, namely computational effort, optimal solutions and consistency in providing them, this paper proposes a new optimization technique named social group optimization (SGO), based on human social behavior in learning and solving complex problems.
In this work, we have conducted an extensive study to investigate the performance of our proposed SGO algorithm on many simple benchmark functions as well as on benchmark functions from the CEC 2005 competition. Many advanced versions of state-of-the-art algorithms such as PSO, DE and ABC, and their variants, are simulated to compare their performance with SGO. The performance of SGO is also compared with the recently developed TLBO algorithm. Convergence characteristics of SGO are presented in plots, and results are reported in tables with the mean and standard deviation values for each algorithm on each function over several simulation runs. To assess the significance of the proposed algorithm, we have conducted Wilcoxon's rank-sum statistical tests.
The remainder of the paper is organized as follows: "Social group optimization (SGO)" gives a comprehensive description of the SGO algorithm; "Implementation of SGO for optimization" discusses its implementation; "Experimental results" presents and discusses the experiments; and "Conclusion" concludes the paper with directions for further research.

Social group optimization (SGO)
There are many behavioral traits, such as honesty, dishonesty, caring, compassion, courage, fear, justness, fairness, tolerance and respectfulness, lying dormant in human beings, which need to be harnessed and channeled in the appropriate direction to enable them to solve complex tasks in life. Few individuals possess the required level of all these traits to be capable of solving complex problems effectively and efficiently on their own. Very often, however, complex problems can be solved through the influence of traits passed from one person to another, or from one group to other groups in society. It has been observed that human beings are great imitators and followers in solving any task, and group problem-solving has proven more effective than individual capability at exploiting and exploring the different traits of each individual in the group. Based upon this concept, a new optimization technique is proposed, named social group optimization (SGO).
In SGO, each person (a candidate solution) is endowed with some knowledge, giving him or her a level of capacity for solving a problem. SGO is a population-based algorithm similar to the other algorithms discussed in the previous section: the population is considered a group of persons (candidate solutions), and the knowledge each person acquires, and thereby his or her capacity for solving a problem, corresponds to 'fitness'. The best person is the best solution. The best person tries to propagate knowledge among all persons, which will, in turn, improve the knowledge level of all members of the group.
The procedure of SGO is divided into two parts: the 'improving phase' and the 'acquiring phase'. In the improving phase, the knowledge level of each person in the group is enhanced under the influence of the best person in the group, where the best person is the one having the highest level of knowledge and capacity to solve the problem. In the acquiring phase, each person enhances his/her knowledge through mutual interaction with another person in the group and with the best person in the group at that point in time. The basic mathematical interpretation of this concept is presented below.
Let X j , j = 1, 2, . . ., N be the persons of the social group, i.e., the social group contains N persons. Each person X j is defined by X j = (x j1 , x j2 , . . ., x jD ), where D is the number of traits assigned to a person (the dimension of a person), and f j , j = 1, 2, . . ., N are the corresponding fitness values.

Improving phase
The best person (gbest) in each social group tries to propagate knowledge among all persons, which will, in turn, help others to improve their knowledge in the group.
Hence, for solving a minimization problem at generation g,

gbest g = min{ f i , i = 1, 2, . . ., N }.    (1)

In the improving phase, each person gets knowledge (here, knowledge refers to a change of traits under the influence of the best person's traits) from the group's best (gbest) person. The update of each person is computed as follows:

Xnew i,j = c * Xold i,j + r * (gbest j − Xold i,j)    (2)

where r is a random number, r ∼ U(0, 1), and c is known as the self-introspection parameter, with 0 < c < 1. Xnew is accepted if it gives a better fitness than Xold.
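As an illustration, the improving-phase update of Eq. (2) with greedy acceptance can be sketched in a few lines of NumPy (a minimal sketch; the function and variable names are our own, not the authors' reference implementation):

```python
import numpy as np

def improving_phase(X, fitness, f, c=0.2):
    """One improving phase: each person moves toward gbest, Eq. (2).

    X       : (N, D) array, the social group (N persons, D traits)
    fitness : (N,) array of current fitness values (minimization)
    f       : objective function mapping a D-vector to a scalar
    c       : self-introspection parameter, 0 < c < 1
    """
    N, D = X.shape
    gbest = X[np.argmin(fitness)].copy()      # snapshot of the best person, Eq. (1)
    for i in range(N):
        r = np.random.rand(D)                 # r ~ U(0, 1), drawn per trait
        x_new = c * X[i] + r * (gbest - X[i])
        f_new = f(x_new)
        if f_new < fitness[i]:                # accept Xnew only if fitness improves
            X[i], fitness[i] = x_new, f_new
    return X, fitness
```

Because of the greedy acceptance step, no person's fitness can get worse during this phase.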

Acquiring phase
In the acquiring phase, a person of the social group interacts with the best person (gbest) of the group and also interacts randomly with other persons of the group for acquiring knowledge. A person acquires new knowledge only if the other person has more knowledge than him or her. The most knowledgeable person (the person holding 'gbest') has the greatest influence on the others, who learn from him/her.
The acquiring phase is expressed as given below (the X i 's are the updated values at the end of the improving phase):

For i = 1 : N
    Randomly select a person X r , where r ≠ i
    If f(X i ) < f(X r )
        For j = 1 : D
            Xnew i,j = Xold i,j + r 1 * (X i,j − X r,j ) + r 2 * (gbest j − X i,j )
        End for
    Else
        For j = 1 : D
            Xnew i,j = Xold i,j + r 1 * (X r,j − X i,j ) + r 2 * (gbest j − X i,j )
        End for
    End If
    Accept Xnew if it gives a better fitness function value.
End for    (4)

where r 1 and r 2 are two independent random numbers, r 1 ∼ U(0, 1) and r 2 ∼ U(0, 1), which give the algorithm its stochastic nature, as shown in Eq. (4). For further clarity and ease of implementation, the entire process is presented in an easy-to-understand flowchart (Fig. 1).
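The acquiring-phase loop of Eq. (4) can likewise be sketched in NumPy (an illustrative sketch; the function and variable names are ours, not the authors' reference code):

```python
import numpy as np

def acquiring_phase(X, fitness, f):
    """One acquiring phase, Eq. (4): each person interacts with a random
    partner and with gbest; the learning direction depends on which of the
    pair is fitter (minimization)."""
    N, D = X.shape
    gbest = X[np.argmin(fitness)].copy()      # snapshot of the best person
    for i in range(N):
        r = np.random.choice([k for k in range(N) if k != i])  # partner, r != i
        r1, r2 = np.random.rand(D), np.random.rand(D)
        if fitness[i] < fitness[r]:           # person i is the fitter of the pair
            x_new = X[i] + r1 * (X[i] - X[r]) + r2 * (gbest - X[i])
        else:                                 # person i learns from the fitter partner
            x_new = X[i] + r1 * (X[r] - X[i]) + r2 * (gbest - X[i])
        f_new = f(x_new)
        if f_new < fitness[i]:                # greedy acceptance
            X[i], fitness[i] = x_new, f_new
    return X, fitness
```

As in the improving phase, the greedy acceptance guarantees that a person's fitness never deteriorates within the phase.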

Implementation of SGO for optimization
The step-wise procedure for the implementation of SGO is given in this section.
Step 1: Definition of the problem and initialization of parameters
Initialize the population size (N), the number of generations (g), the number of design variables (D), and the limits of the design variables (U L , L L ). Define the optimization problem as: Minimize f (X), where f (X) is the objective function and X = (x 1 , x 2 , . . ., x D ) is the vector of design variables with L L ≤ x i ≤ U L .
Step 2: Initialize the population
A random population is generated based on the number of features (design variables) and the population size chosen by the user. For SGO, the population size indicates the number of persons and the features indicate the number of traits of a person. Calculate the fitness f (X) of each member of the population.
Step 3: Improving phase
Determine gbest g using Eq. (1); this is the best solution of the current iteration. In the improving phase, each person gets knowledge from the group's best, i.e., gbest:

For i = 1 : N
    For j = 1 : D
        Xnew i,j = c * Xold i,j + r * (gbest j − Xold i,j )
    End for
End for

Here c is the self-introspection factor, whose value can be chosen empirically for a given problem; we have set it to 0.2 in this work after a thorough study of our investigated problems. r is a random number, r ∼ U(0, 1). Accept Xnew if it gives a better function value.
Step 4: Acquiring phase
As explained above, in the acquiring phase a person of the social group interacts with the best person (gbest) of the group and also interacts randomly with other persons of the group for acquiring knowledge. The mathematical expression is defined in "Acquiring phase".

Step 5: Termination criterion
Stop the simulation if the maximum generation number is reached; otherwise, repeat Steps 3-4.
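Putting Steps 1-5 together, a minimal end-to-end sketch of the procedure might look as follows (illustrative only; all names, the sphere test function in the usage example and the parameter defaults other than c = 0.2 are our own choices, not the authors' code):

```python
import numpy as np

def sgo(f, D, LL, UL, N=20, generations=50, c=0.2, seed=0):
    """Minimal SGO driver: random initialization (Steps 1-2), then the
    improving and acquiring phases with greedy acceptance each generation
    (Steps 3-4), until the generation budget is exhausted (Step 5)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(LL, UL, (N, D))                   # Step 2: random population
    fit = np.array([f(x) for x in X])
    for _ in range(generations):
        gbest = X[np.argmin(fit)].copy()              # Eq. (1)
        for i in range(N):                            # Step 3: improving phase
            x_new = c * X[i] + rng.random(D) * (gbest - X[i])
            f_new = f(x_new)
            if f_new < fit[i]:
                X[i], fit[i] = x_new, f_new
        gbest = X[np.argmin(fit)].copy()
        for i in range(N):                            # Step 4: acquiring phase
            r = int(rng.choice([k for k in range(N) if k != i]))
            r1, r2 = rng.random(D), rng.random(D)
            d = X[i] - X[r] if fit[i] < fit[r] else X[r] - X[i]
            x_new = X[i] + r1 * d + r2 * (gbest - X[i])
            f_new = f(x_new)
            if f_new < fit[i]:
                X[i], fit[i] = x_new, f_new
    b = int(np.argmin(fit))
    return X[b].copy(), float(fit[b])

# Usage example on the sphere function (minimum 0 at the origin):
best_x, best_f = sgo(lambda x: float(np.dot(x, x)), D=5, LL=-10.0, UL=10.0)
```

Note that greedy acceptance in both phases makes the best-so-far fitness monotonically non-increasing across generations.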

Experimental results
In this paper, the performance of SGO is compared with many classical population-based optimization techniques as well as their advanced variants, using some basic benchmark functions and 25 test functions proposed in the CEC2005 special session on real-parameter optimization. A description of some basic benchmark functions is given in Appendix, others are referred from their respective papers, and a detailed description of the 25 CEC2005 test functions can be found in [21]. To have statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level was conducted on the experimental results, and the last three rows of each respective table summarize the outcomes. For comparing the speed of the algorithms, the first requirement is a fair time measure. The number of iterations or generations cannot be used, since the algorithms perform different amounts of work in their inner loops and have different population sizes. Hence, we use the number of fitness function evaluations (FEs) as a measure of computation time instead of generations or iterations. Since the algorithms are stochastic, the results of two successive runs usually do not match; hence, we performed multiple independent runs (with different seeds of the random number generator) of each algorithm.
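As an illustration of this comparison protocol, the rank-sum decision between two algorithms' run results can be reproduced with a short self-contained script (the data below are synthetic, not the paper's results; the names are ours, and the normal-approximation formula assumes untied, continuous fitness values):

```python
import math
import random

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (no tie correction; fine for continuous-valued fitness results)."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, "a") for v in a] + [(v, "b") for v in b])
    # W = sum of the (1-based) ranks held by sample a in the pooled ordering
    w = sum(rank for rank, (v, tag) in enumerate(combined, start=1) if tag == "a")
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

random.seed(1)
# Final best-fitness values over 30 independent runs of two hypothetical algorithms
algo_a = [random.gauss(1e-6, 1e-7) for _ in range(30)]  # e.g. an SGO-like method
algo_b = [random.gauss(1e-2, 1e-3) for _ in range(30)]  # e.g. a weaker competitor

z, p = rank_sum_test(algo_a, algo_b)
if p >= 0.05:
    verdict = "~"   # no significant difference at the 0.05 level
else:
    verdict = "-" if sum(algo_a) < sum(algo_b) else "+"
print(verdict)      # prints "-": the competitor is significantly worse than algo_a
```

The "-", "+" and "~" verdicts mirror the convention used in the result tables of this section.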
Finally, we would like to point out that all the experiment codes are implemented in MATLAB 7, and the experiments are conducted on a Pentium 4 desktop with 1 GB memory in a Windows XP (2002) environment.

Experiment 1: SGO vs. GA, PSO, DE, ABC and TLBO
In this section, for a fair comparison of the performances of the algorithms, the results for the GA, PSO, DE and ABC algorithms are taken directly from [22], whereas the simulations for TLBO and our proposed SGO algorithm have been carried out by us. The common parameter, population size, is set to 20 for both TLBO and SGO. The maximum number of function evaluations is set to 2000 for TLBO and 1000 for SGO. The other algorithm-specific parameters are given below: TLBO settings For TLBO, there is no algorithm-specific constant to set.
SGO settings For SGO, there is only one constant, the self-introspection factor c. Its value is empirically set to 0.2 for better results.
The 25 benchmark functions considered for the simulations include many different kinds of problems: unimodal, multimodal, regular, irregular, separable, non-separable and multidimensional. All problems are divided into four categories, US, MS, UN and MN, and the range, formulation, characteristics and dimensions of these problems are described in Appendix.
Each of the experiments for TLBO and SGO is repeated 30 times (the same number of experiments as conducted in [22], to keep the comparison fair) with different random seeds, and the best mean values produced by the algorithms have been recorded. The comparison criteria are the mean solution and the standard deviation over the independent runs. The mean solution describes the average ability of the algorithm to find the global solution, and the standard deviation describes the variation of the solutions around the mean. To make the comparison clear, values below 10^-12 are assumed to be 0. Also, to have statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of Table 1 summarize the results. Table 1 presents the comparison of the fitness values of GA, PSO, DE and ABC taken from [22] and of TLBO and SGO computed by us. In Table 1, "NA" indicates that the experiment was not conducted for that particular function. The best optimal values are shown in bold face.
From Table 1, it is clear that SGO provides better optimal results in as many as 12 functions compared to GA, 5 functions compared to PSO, 4 functions compared to DE, 3 functions compared to ABC and 2 functions compared to TLBO. It can be concluded that SGO is very competitive with ABC and especially with TLBO. Moreover, SGO is faster than the other algorithms: it takes 1/500th of the function evaluations of GA, PSO, DE and ABC for all functions except Rosenbrock, for which it takes 1/50th of the total function evaluations. It also computes in almost half the total function evaluations of TLBO, except for Rosenbrock, for which SGO takes 1/5th the number of function evaluations of TLBO. From these findings, we may conclude that our proposed algorithm not only performs better than many state-of-the-art algorithms such as GA, PSO and DE, but is also very competitive with ABC and TLBO in providing optimal solutions. Importantly, SGO takes less computational effort than all the other algorithms investigated in this section.

Experiment 2: SGO vs. HS, IBA, ABC and TLBO
In this experiment, five different benchmark problems from Karaboga and Akay [23] are considered, and a comparison is carried out between SGO, the harmony search algorithm (HS), the improved bee algorithm (IBA), artificial bee colony optimization (ABC) and teaching-learning-based optimization (TLBO) [24]. To compare the results, the mean solution and the standard deviation over different independent runs are taken. In our simulation, TLBO runs for a maximum of 2000 FEs with a population size of 10 for all functions except Rosenbrock, whereas HS, IBA and ABC run for 50,000 FEs with a population size of 50. For the Rosenbrock function, TLBO takes 50,000 FEs with a population size of 50. For our algorithm, SGO, the maximum number of FEs is set to 1000 with a population size of 10 for all functions except Rosenbrock, for which it is set to 50,000 FEs with a population size of 50. The results are gathered over independent runs in each case, and the mean and standard deviation are calculated over the runs. A description of the functions is given in Appendix.
In this simulation, different dimensions (D) of the benchmark functions are chosen for study, with values ranging from as small as 5 up to 1000. The results for dimensions 5, 10, 30, 50 and 100 are taken directly from [24] for all investigated algorithms and put in Table 2. For the SGO algorithm, we have computed results for all dimensions and additionally generated results for two large-scale dimensions, 500 and 1000, to investigate the performance of SGO on large-dimensional problems. The maximum number of FEs for SGO is set to 1/50th of the maximum FEs taken for HS, IBA and ABC for all functions except Rosenbrock, for which it is 1/5th; it is exactly half that of TLBO for all functions [24,25]. It may be emphasized here that this reduced maximum number of FEs for SGO is deliberately chosen to investigate its effectiveness and efficiency against the other algorithms. Table 2 shows the results for this experiment. In the table, "NA" indicates that the experiment was not conducted for that particular function, and the best optimal values are shown in bold face. To have statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of the table summarize the results. It can be seen from Table 2 that SGO has outperformed all the algorithms for all the functions in almost all dimensions. This experiment shows that SGO is effective in finding the optimum solution as the dimension increases. However, the performance of the other algorithms in higher dimensions has not been ascertained in this work; our preliminary literature study reveals that they do not perform well in higher dimensions.

Experiment 3: SGO vs. OEA, HPSO-TVAC, CLPSO, APSO, OLPSO-L and OLPSO-G

To have statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of the table summarize the results. According to Wilcoxon's rank-sum test, SGO performs better than OEA in seven test functions and comparably in one test function out of eight; better than HPSO-TVAC in eight test functions and equivalently in one test function out of nine; and better than APSO in eight test functions and equivalently in one test function out of nine. Compared to OLPSO-L, our proposed SGO is better in two test functions, and compared to OLPSO-G it is better in four test functions and equivalent in one test function out of five. Hence, it can be claimed that even though the maximum number of fitness evaluations for SGO is less than that of the other algorithms, SGO is still either better than or equivalent to the other algorithms for each benchmark function according to Wilcoxon's rank-sum test.

Experiment 4: SGO vs. JADE, jDE, SaDE, CoDE and EPSDE
The experiments in this section compare the SGO algorithm with SaDE [30], jDE [31], JADE [32], CoDE [33] and EPSDE [34] on nine benchmark functions which are listed in Appendix. The results of JADE, jDE and SaDE are taken directly from [32] and put in Table 4. For CoDE and EPSDE, we have generated results using the codes available on Q. Zhang's homepage: http://dces.essex.ac.uk/staff/qzhang/. For CoDE, EPSDE and SGO, we have considered a population size of 20. The maximum number of fitness evaluations differs for each function and is noted in brackets in each cell of Table 4. Fitness values are shown in Table 4 as means and standard deviations. In the table, "NA" indicates that the experiment was not conducted for that particular function. The best optimal values are shown in bold face.
To have statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of the table summarize the results. According to Wilcoxon's rank-sum test, the performance of SGO in reaching the optimal value is better than that of all other algorithms except EPSDE, against which SGO performs better in five test functions and equivalently in three test functions out of eight. So, it is interesting to note that even though the maximum number of fitness evaluations for SGO is less than that of the other algorithms, SGO is still better than or equivalent to all the variants of the DE algorithm in this experiment according to Wilcoxon's rank-sum test.

Experiment 5: SGO vs. CABC, GABC, RABC and IABC
In this section, we compare SGO with CABC [35], GABC [36], RABC [37] and IABC [38] on eight benchmark functions. The parameters of the algorithms are identical to [36]. The maximum number of fitness evaluations differs for each function and is noted in brackets in each cell of Table 5. The results are summarized in Table 5; fitness values are reported as means and standard deviations, and the best optimal values are shown in bold face. To have statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of the table summarize the results. It can be observed from Table 5 that SGO performs better than all the compared algorithms. So, we can say that even though the maximum number of fitness evaluations for SGO is less than that of the other algorithms, SGO is still better than all the variants of the ABC algorithm in this experiment.

Experiment 6: SGO vs. TLBO
In this experiment, we compare only the TLBO and SGO algorithms. As TLBO is relatively new compared to the other algorithms investigated in our work, we have devoted a special section to comparing our approach with it. Our main objective is to see how SGO performs against TLBO in terms of optimal solutions and computational cost. The common parameters, population size and maximum number of fitness function evaluations, are set to 20 and 1,000, respectively, for both TLBO and SGO. We used 25 benchmark problems to test the performance of TLBO and our proposed SGO algorithm. The initial range, formulation, characteristics and dimensions of these problems are listed in Appendix. Each simulation is run 30 times with different random seeds and is terminated on attaining the maximum number of evaluations or on obtaining the global minimum value. The mean values and standard deviations of the fitness values produced by the algorithms have been recorded in Table 6, and the mean values and standard deviations of the number of fitness evaluations have been recorded in Table 7. The best optimal values are shown in bold face. To have statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of the table summarize the results.
It is observed that SGO performs better in 23 test functions and equivalently in 2 test functions. From Tables 6 and 7, it is clear that except for the step and six-hump camel-back functions, SGO has shown better results than TLBO in all cases, and the maximum number of fitness evaluations for the easom, bohachevsky1, bohachevsky2, bohachevsky3, rastrigin, noncontinuous rastrigin, multimod and weierstrass functions is less than that of the TLBO algorithm, with all these functions reaching the optimal solution. For both the step and six-hump camel-back functions, TLBO and SGO perform equivalently and give the optimal result; however, in both cases SGO reaches the optimum value with a smaller number of fitness evaluations than TLBO. So, we can say that SGO is better than the TLBO algorithm in all cases in this experiment. The convergence characteristics of both algorithms are shown in the graphs below (Fig. 2).

Experiment 7: SGO vs. SAABC, GABC, IABC, ABC/Best1, GPSO, DBMPSO, TCPSO and VABC
In this section, we compare SGO with both ABC and PSO variants such as GABC (gbest-guided artificial bee colony algorithm) [36], IABC [38], ABC/Best1 [39], SAABC (simulated annealing-based artificial bee colony) [40], VABC (velocity-based artificial bee colony algorithm) [41], CPSO (chaotic particle swarm optimization) [42], DBMPSO (particle swarm optimization with double-bottom chaotic maps) [43] and TCPSO (two-swarm cooperative particle swarm optimization) [41]. The maximum number of fitness evaluations is taken as 40,000, the population size is 40, and the parameters of the algorithms are identical to [41]. The results of SAABC, GABC, IABC, ABC/Best1, VABC, CPSO, DBMPSO and TCPSO are taken from [41] for comparison with SGO. The comparison results are shown in Tables 8, 9 and 10 in terms of the means and standard deviations (Std) of the solutions over 30 independent runs. Tables 8 and 9 show the results for 60 and 100 dimensions, respectively, on the multidimensional functions, and Table 10 reports the results on the fixed-dimensional functions. The best optimal values are shown in bold face.
As seen from the results in Tables 8 and 9, SGO found the global optimal values for all the functions except F 6 , F 7 , F 9 , F 12 and F 14 . In the fixed-dimensional case, according to Wilcoxon's rank-sum test, SGO performs better than SAABC in four test functions and equivalently in three out of eight, and better than GABC in five test functions and equivalently in two out of eight. It is also found to be better than IABC, ABC/Best1, VABC, CPSO, DBMPSO and TCPSO in seven, five, three, two, three and seven test functions, respectively, out of eight, and correspondingly equivalent in one, two, four, five, four and zero test functions out of eight.
So, it is interesting to note that the performance of SGO is better than that of the other algorithms according to Wilcoxon's rank-sum test. In this section, SGO is also compared with GPSO (global PSO) [45], LPSO (local PSO) [46], FIPS (fully informed particle swarm) [47], SPSO (standard particle swarm optimization) [48], CLPSO [28], OLPSO [29] and SLPSOA (scatter learning particle swarm optimization algorithm) [49] on 14 benchmark functions described in [49]. The maximum number of fitness evaluations is taken as 200,000, the population size is 40, and the parameters of the algorithms are identical to [49]. The comparison results are shown in Table 11 in terms of the means and standard deviations (Std) of the solutions over 25 independent runs. The results of GPSO, LPSO, FIPS, SPSO, CLPSO, OLPSO and SLPSOA are taken from [49]. As seen from the Table 11 results, SGO found the global optimal solution for the functions sphere, schwefel 2.22, rastrigin, griewank, rotated rastrigin and rotated griewank. For the test functions noise, Ackley, generalized penalized, generalized penalized1 and rotated Ackley, the objective values obtained by SGO are extremely close to the global optima. The best optimal values are shown in bold face.
To have statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of the table summarize the results. According to Wilcoxon's rank-sum test, SGO performs better than GPSO, LPSO, FIPS, SPSO, CLPSO, OLPSO and SLPSOA in 13, 14, 13, 12, 12, 6 and 7 test functions, respectively, out of 14. The SGO algorithm is equivalent to OLPSO in four test functions and to SLPSOA in two test functions. So, it is interesting to note from this experiment that SGO is better than the other algorithms according to Wilcoxon's rank-sum test.

Experiment 8: SGO vs. FIPS-PSO, CPSO-H, DMSPSO-LS, CLPSO, APSO, SSG-PSO, SSG-PSO-DFP, SSG-PSO-BFGS, SSG-PSO-NM and SSG-PSO-PS
To comprehensively compare the performance of SGO with that of SSG-PSO (superior-solutions-guided PSO with the individual-level-based mutation operator) and its several variants with different local search techniques, 21 benchmark test functions of different types were used, including unimodal, multimodal, misscaled and rotated functions. Detailed information on the test functions is given in [50]. The maximum number of fitness evaluations is taken as 300,000, the population size is 40, and the parameters of the algorithms are identical to [50]. The comparison results are shown in Table 12 in terms of means and standard deviations (Std) of the solutions over 30 independent runs. The results of all the PSO variants and of SSG-PSO with the different local search techniques are taken from [50]. In all the comparison tables, Wilcoxon's rank-sum test at a 0.05 significance level is performed between SGO and each competing algorithm; "−", "+" and "≈" denote that the performance of the corresponding algorithm is worse than, better than and similar to that of SGO, respectively.
"−", "+", and "≈" denote that the performance of the corresponding algorithm is worse than, better than, and similar to that of SGO, respectively  Wilcoxon's rank-sum test at a 0.05 significance level is performed between SGO and each GPSO, LPSO, FIPS, SPSO, CLPSO, OLPSO and SLPSOA. "−", "+", and "≈" denote that the performance of the corresponding algorithm is worse than, better than, and similar to that of SGO, respectively  Wilcoxon's rank-sum test at a 0.05 significance level is performed between SGO and each FIPS-PSO, CPSO-H, CLPSO,APSO, DMSPSO-LS, SSG-PSO, SSG-PSO-DFP, SSG-PSO-BFGS,SSG-PSO-NM, and SSG-PSO-PS. "−", "+", and "≈" denote that the performance of the corresponding algorithm is worse than, better than, and similar to that of SGO, respectively As seen from   20,21,17,14,20,14,13,13,13 and 12 test functions, respectively, and equivalent with 1, 0, 2, 6, 1, 6, 6, 6, 6 and 6 test functions, respectively, out of 21 test functions. So, it is interesting to tell according to this experiment that SGO is better than other algorithms according to Wilcoxon's rank-sum test.

Experiment 9: SGO vs. JADE, jDE, SaDE, EPSDE, CoDE, MPEDE, CLPSO, CMA-ES, GL-25 and TLBO
To study the performance of the proposed SGO, the 25 test functions proposed in the CEC 2005 special session on real-parameter optimization were used. A detailed description of these test functions can be found in [21]. The number of decision variables (the dimension of the function) was set to 30 for all test functions. For each algorithm and each test function, 25 independent runs were conducted with 300,000 function evaluations (FEs) as the termination criterion.
SGO was compared with six DE variants, i.e., JADE [32], jDE [31], SaDE [30], EPSDE [34], CoDE [33] and MPEDE [51], and four other approaches, i.e., CLPSO [28], CMA-ES [52], GL-25 [53] and TLBO [24]. In our experiments, the parameter settings of these methods were the same as in their original papers. The number of FEs for all these methods was 300,000, and each method was run 25 times on each test function. For the proposed SGO algorithm, we have considered a population size of 100 with the same budget of 300,000 FEs as the other methods. The best optimal values are shown in bold face.
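The run protocol above (independent runs with a fixed evaluation budget, reporting mean and standard deviation of the best value found) can be sketched as follows. The `algorithm(objective, dim, bounds, max_fes, rng)` interface and the pure-random-search stand-in are hypothetical illustrations, not any of the compared methods:

```python
import random
import statistics

def run_trials(algorithm, objective, dim, bounds, max_fes=300_000, runs=25, seed=0):
    """Run `algorithm` independently `runs` times with a fixed FE budget and
    report the mean and population standard deviation of the best values,
    mirroring the protocol used in these experiments."""
    results = []
    for r in range(runs):
        rng = random.Random(seed + r)   # fresh seed -> independent run
        results.append(algorithm(objective, dim, bounds, max_fes, rng))
    return statistics.mean(results), statistics.pstdev(results)

# A trivial stand-in optimizer: pure random search within the bounds.
def random_search(objective, dim, bounds, max_fes, rng):
    lo, hi = bounds
    best = float("inf")
    for _ in range(max_fes):            # one objective call per iteration
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, objective(x))
    return best
```

Any of the compared algorithms could be dropped in behind the same interface, which is what makes the mean ± Std columns of the tables directly comparable.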
To draw statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of Tables 13 and 14 summarize the outcome.

In the final experiment, we have considered six composite test functions and eight algorithms: particle swarm optimizer (PSO) [7], cooperative PSO (CPSO) [54], comprehensive learning PSO (CLPSO) [28], evolution strategy with covariance matrix adaptation (CMA-ES) [55], the G3 model with PCX crossover (G3-PCX) [56], differential evolution (DE) [2], teaching-learning-based optimization (TLBO) [24] and the proposed SGO. Detailed descriptions of these functions are given in [57] and [25], and of the algorithms in their respective papers. Parameter settings for the composite functions are as in [25]. Table 15 shows the results obtained by the eight algorithms on the six composite functions. For each test function, each algorithm is run 20 times, and the maximum number of fitness evaluations is 50,000 for all algorithms. For the proposed algorithm, we have considered a population size of 100. The mean values of the results are recorded in Table 15, and the best optimal values are shown in bold face. To draw statistically sound conclusions, Wilcoxon's rank-sum test at a 0.05 significance level has been conducted on the experimental results, and the last three rows of Table 15 summarize them. According to the test, SGO performs better than PSO in all six test functions, better than CPSO in all six, and better than CLPSO in three of the six.
SGO is better than CMA-ES in all six functions, better than G3-PCX in five of the six, better than DE in three test functions and equivalent in one, and better than TLBO in three of the six test functions. So, from this experiment too, SGO is found to be better than or competitive with the other algorithms according to Wilcoxon's rank-sum test.

Conclusion
This paper proposes a new efficient optimization algorithm inspired by the social behavior of humans toward solving a complex problem. When a problem/task is taken up by a single person, it may prove too difficult or may even remain unsolved, but when the same problem is taken up by a group of persons, it becomes easier and the unsolvable may become solvable. In a social group, people are influenced by the characteristics (i.e., traits) of the successful person, and eventually they change/modify their own traits accordingly and become capable of solving/addressing complex problems/situations. This concept has motivated us to propose a new optimization algorithm known as social group optimization (SGO). The concept and the mathematical formulation of the SGO algorithm are explained in this paper with a flowchart. To judge the effectiveness of SGO, extensive experiments have been conducted on a number of different unconstrained benchmark functions as well as on 25 standard numerical benchmark functions taken from the IEEE Congress on Evolutionary Computation 2005 competition. Performance comparisons are made with state-of-the-art optimization techniques such as GA, PSO, DE, ABC and its variants, and the recently developed TLBO; different variants of these popular evolutionary optimization techniques are also taken into consideration. The experimental results show that the proposed social group optimization outperforms all the investigated optimization techniques in computational cost and also provides optimal solutions for most of the considered functions. A notable advantage of this algorithm is that it is easier to understand and to implement than the other algorithms and their variants. How SGO performs on multi-objective optimization problems remains to be seen in future work.
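As a rough illustration of the idea summarized above, one generation of SGO (an improving phase in which each person moves toward the traits of the best person, followed by an acquiring phase of interaction with a random other person) might be sketched as below. This is a simplified sketch, not the paper's reference implementation; the self-introspection parameter c = 0.2 and the greedy acceptance of improved candidates follow the formulation given earlier in the paper, while details such as when the global best is refreshed are simplifications here.

```python
import random

def sgo_step(pop, fitness, c=0.2, rng=random):
    """One SGO generation (minimization assumed).  `pop` is a list of
    real-valued vectors; `c` is the self-introspection parameter."""
    fit = [fitness(x) for x in pop]
    gbest = pop[min(range(len(pop)), key=lambda i: fit[i])]

    # Improving phase: each person is drawn toward the best person's traits.
    for i, x in enumerate(pop):
        cand = [c * xj + rng.random() * (gj - xj) for xj, gj in zip(x, gbest)]
        if fitness(cand) < fit[i]:            # greedy selection
            pop[i], fit[i] = cand, fitness(cand)

    # Acquiring phase: interact with a random other person, learning from
    # the better of the two while still being drawn toward the global best.
    for i, x in enumerate(pop):
        r = rng.randrange(len(pop))
        while r == i:
            r = rng.randrange(len(pop))
        sign = 1.0 if fit[r] < fit[i] else -1.0
        cand = [xj + sign * rng.random() * (oj - xj)
                + rng.random() * (gj - xj)
                for xj, oj, gj in zip(x, pop[r], gbest)]
        if fitness(cand) < fit[i]:            # greedy selection
            pop[i], fit[i] = cand, fitness(cand)
    return pop
```

Because every move is accepted only when it improves the person's fitness, the best fitness in the population can never get worse from one generation to the next, which matches the algorithm's simplicity claimed above: two phases, one parameter, and greedy selection.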
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix: Benchmark functions
All problems are divided into four categories, namely US (unimodal separable), MS (multimodal separable), UN (unimodal non-separable) and MN (multimodal non-separable); their ranges, formulations, characteristics and dimensions are listed in Table 16.
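For illustration, one commonly used benchmark function from each category can be written as follows. The specific functions chosen here are examples of the four categories, not necessarily the exact entries of Table 16 (Rosenbrock, for instance, is sometimes classified differently depending on dimensionality):

```python
import math

def sphere(x):
    """Unimodal, separable (US): sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal, separable (MS): many regularly spaced local minima."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def schwefel_1_2(x):
    """Unimodal, non-separable (UN): sum of squared partial sums couples
    the variables, so it cannot be optimized dimension by dimension."""
    total, partial = 0.0, 0.0
    for v in x:
        partial += v
        total += partial * partial
    return total

def rosenbrock(x):
    """Non-separable banana-shaped valley; multimodal in high dimensions."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))
```

All four attain their global minimum of 0 at a known point (the origin for the first three, the all-ones vector for Rosenbrock), which is what makes the "global optimal value found" claims in the experiments directly checkable.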