Multi-population cooperative teaching–learning-based optimization for nonlinear equation systems

Solving nonlinear equation systems (NESs) requires locating different roots in one run. To deal with NESs effectively, a multi-population cooperative teaching–learning-based optimization, named MCTLBO, is presented. The innovations of MCTLBO are as follows: (i) two niching techniques (crowding and an improved speciation) are integrated into the algorithm to enhance population diversity; (ii) an adaptive selection scheme is proposed to select the learning rules in the teaching phase; (iii) new learning rules based on experience learning are developed to promote search efficiency in the teaching and learning phases. MCTLBO was tested on 30 classical problems, and the experimental results show that MCTLBO has better root-finding performance than competing algorithms. In addition, MCTLBO achieves competitive results on eighteen new test problems.


Introduction
Many real-world problems, such as those found in robotics [1], automatic control [2,3] and fuzzy systems [4], can be reduced to nonlinear equation systems (NESs). In practical application scenarios, NESs contain several different optimal solutions. Solving NESs requires finding different roots in one run, which can offer decision makers various options under different environments [5]. However, this is a challenging task.
The conventional approach for NESs is to use numerical methods, such as the Newton method [6] and homotopy continuation [7]. Although numerical methods have several advantages, they also have notable disadvantages: for instance, they are sensitive to the objective function and can only find one root at a time. Therefore, researchers have sought new ideas to replace numerical methods for dealing with NESs.
Evolutionary algorithms (EAs) [8], which use natural evolution principles, have attracted considerable attention from researchers, since they are insensitive to the objective function and the initial guess. Competitive results have been obtained in many optimization problems, such as energy system design [9,10], image registration [11], many-objective optimization [12], multi-modal optimization [13], and power systems [14,15]. Several EA-based methods have been proposed to locate the roots of NESs. Gong et al. proposed RADE, which combines a repulsive technique, the crowding method, and parameter adaptation to solve NESs [16]. However, RADE uses a fixed repulsion radius, which limits its search efficiency. Hence, Liao et al. designed a dynamic repulsion-based EA (DREA) with four different dynamic variations of the repulsion radius [17]. He et al. employed a fuzzy neighborhood to improve population diversity and proposed an orientation-based mutation to generate trial vectors [18]. A memetic niching-based EA was designed in [19] to find different roots. Liao et al. [20] applied a decomposition technique to effectively divide the population and proposed subpopulation mutation strategies to deal with NESs. Wu et al. [21] presented a k-means clustering-based DE (KSDE) for multiple-root location. In Wang et al. [22], a two-archive technique was designed to exploit inferior and elite individuals to guide the evolution, combined with niching methods and differential evolution to solve NESs. The authors of [23] proposed AGSDE to determine the roots of NESs; in AGSDE, archives save useful historical individuals and use that information to guide the evolution. Other researchers have adopted multi-objective techniques to deal with NESs. In MONES, Song et al. [24] transformed NESs into a bi-objective optimization problem and applied NSGA-II to solve it, which obtained promising results. To compensate for the curse of dimensionality, Gong et al. [25] designed A-WeB. Gao et al. developed a new multi-objective EA (MOPEA [26]) and a two-phase EA (TPEA [27]) to find different roots. Moreover, Ji et al. extended the existing work and proposed a dynamic tri-objective differential evolution (SaTriMODE), which showed competitive results [28].
Rao et al. [29] designed teaching-learning-based optimization (TLBO) to deal with optimization problems. TLBO simulates the traditional classroom teaching process, which differs from the principles of several other EAs. Because TLBO is easy to implement, efficient, and requires no algorithm-specific parameter setting, it is widely used in different optimization problems, such as solar parameter identification [30], engineering design [31], and medical disease diagnosis [32]. Although TLBO variants achieve good performance on many optimization problems, they still have the following disadvantages: (i) most TLBO variants only seek a single optimum, and it is difficult for them to find multiple optima in a single run; (ii) there is little research on how to use cooperation among populations when solving multi-root problems. Moreover, there are still few cases of using TLBO to solve NESs.
Recently, multi-population cooperative techniques have been widely used to solve optimization problems [33,34]. Motivated by the above-mentioned deficiencies, this paper proposes a multi-population cooperative teaching-learning-based optimization, named MCTLBO, to solve NESs. To the best of our knowledge, this is the first attempt to use TLBO to solve NESs. The results show that the proposed MCTLBO obtains a promising success rate and root ratio.
The main novelties of this paper are shown below:
• Two niching techniques (i.e., crowding and an improved speciation) are integrated into TLBO to guide the algorithm toward more promising regions, enhancing population diversity.
• A fitness-ranking-based adaptive selection scheme is proposed to select the learning rules in the teaching phase.

NES formulation
Many practical problems can be transformed into nonlinear equation systems. The mathematical expression of an NES is generally as follows:

F(x) = [f_1(x), f_2(x), ..., f_m(x)]^T = 0,  (1)

where m is the number of equations and x = [x_1, ..., x_D] ∈ Ω denotes the decision vector, where D and Ω are the dimension and the decision space, respectively. In general, Ω = [L(1), U(1)] × ... × [L(D), U(D)], where L(j) and U(j) represent the lower and upper boundaries of x in the j-th dimension, respectively. Eq. (1) usually has multiple roots that are equally important. They can provide decision-makers with multiple optimal options under different scenarios. Therefore, it makes sense to obtain more than one root at a time.
Before an EA can be applied to solve an NES, the NES must be transformed into an optimization problem. We transform the NES into a single-objective problem by minimizing the sum of squared residuals:

min f(x) = sum_{i=1}^{m} f_i^2(x),  (2)

so that x* is a root of Eq. (1) if and only if f(x*) = 0.
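This transformation is easy to sketch in code. The snippet below is a minimal illustration of turning a system of residual functions into a scalar objective whose global minima coincide with the roots; the toy two-equation system and function names are ours, not from the paper.

```python
import numpy as np

# Scalar objective for an NES: sum of squared residuals, as in Eq. (2).
# A point x is a root of the system iff nes_objective(equations, x) == 0.
def nes_objective(equations, x):
    """Sum of squared residuals of the equation system at point x."""
    return sum(f(x) ** 2 for f in equations)

# Toy 2-equation system: x0^2 + x1^2 - 1 = 0 and x0 - x1 = 0,
# whose roots are (+-sqrt(0.5), +-sqrt(0.5)).
eqs = [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,
       lambda x: x[0] - x[1]]

root = np.array([np.sqrt(0.5), np.sqrt(0.5)])
print(nes_objective(eqs, root))  # ~0 at a true root
```

Any minimizer driving this objective to zero has located a root, which is what lets a generic EA attack the NES.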

TLBO
In TLBO, individuals constantly search for optimal solutions through the teaching and learning phases. The specific teaching-learning process is described as follows.

Teaching phase
In general, n students make up a class. Each learner x_i is regarded as a potential solution, and the learner with the minimum fitness acts as the teacher. TLBO aims to improve the average grade of the whole class after the teaching phase. Teachers impart their experience or knowledge to learners, and learners improve their performance by absorbing useful knowledge. The update rule of the teaching phase is:

x_i,new = x_i + r · (x_teacher − T_F · x_mean),  (3)

where x_i,new is the new individual produced after learning from the teacher; r ∈ [0, 1] is a random number; T_F ∈ {1, 2} is the teaching factor; and x_mean is the mean individual of the class, expressed as:

x_mean = (1/n) · sum_{i=1}^{n} x_i.  (4)
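The teaching phase can be sketched as a vectorized update. This is a hedged illustration of the canonical TLBO rule on a stand-in sphere function; the population size, bounds, and fitness function are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Canonical TLBO teaching phase: every learner moves toward the teacher
# (best individual) and away from a TF-scaled class mean, per Eq. (3).
def teaching_phase(pop, fitness):
    teacher = pop[np.argmin(fitness)]            # best learner acts as teacher
    x_mean = pop.mean(axis=0)                    # mean individual, Eq. (4)
    r = rng.random(pop.shape)                    # r in [0, 1], per component
    tf = rng.integers(1, 3, size=(len(pop), 1))  # teaching factor TF in {1, 2}
    return pop + r * (teacher - tf * x_mean)

pop = rng.uniform(-5, 5, size=(10, 2))
fit = (pop ** 2).sum(axis=1)                     # sphere function as a stand-in
new_pop = teaching_phase(pop, fit)
print(new_pop.shape)  # (10, 2)
```

After this step, each offspring is compared against its parent and the better one is retained, as usual in TLBO.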

Learning phase
In the learning phase, a student can learn from a classmate who is better than them to improve their knowledge level. The rule can be stated as follows:

x_i,new = x_i + r · (x_i − x_j), if f(x_i) < f(x_j);
x_i,new = x_i + r · (x_j − x_i), otherwise,  (5)

where x_j (j ≠ i) is a randomly selected classmate and r ∈ [0, 1].
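A minimal sketch of this learner interaction, assuming the canonical rule in Eq. (5); the random peer selection and test population are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Canonical TLBO learning phase: learner i interacts with a random
# classmate j and moves toward the better of the two, per Eq. (5).
def learning_phase(pop, fitness):
    new = pop.copy()
    n, d = pop.shape
    for i in range(n):
        j = int(rng.choice([k for k in range(n) if k != i]))
        r = rng.random(d)
        if fitness[i] < fitness[j]:   # x_i is better: move away from x_j
            new[i] = pop[i] + r * (pop[i] - pop[j])
        else:                         # x_j is better: move toward x_j
            new[i] = pop[i] + r * (pop[j] - pop[i])
    return new

pop = rng.uniform(-5, 5, size=(6, 2))
fit = (pop ** 2).sum(axis=1)
print(learning_phase(pop, fit).shape)  # (6, 2)
```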

DE
DE [35] contains four operations: initialization, mutation, crossover, and selection.
Initialization: The initialization operator randomly generates distinct individuals in the decision space:

x_i(j) = L(j) + rand · (U(j) − L(j)),  (6)

where i = 1, ..., NP, and NP represents the population size; j ∈ [1, ..., D], where D is the dimension of x_i; L(j) and U(j) represent the lower and upper limits in the j-th dimension, respectively.
Mutation: The mutation operator is used to generate the mutant vector. "DE/rand/1" is the classical mutation strategy and is expressed as follows:

v_i = x_r1 + F · (x_r2 − x_r3),  (7)

where F is the scaling factor that controls the magnitude of the difference vector (x_r2 − x_r3); r1, r2, r3 are three distinct random integers between 1 and NP, with r1 ≠ r2 ≠ r3 ≠ i.
Crossover: The crossover operator is utilized to produce the trial vector. In this paper, binomial crossover is used:

u_i(j) = v_i(j), if rand_j ≤ CR or j = j_d; otherwise x_i(j),  (8)

where CR is the crossover rate, which controls how many components are inherited from the mutant vector, and j_d ∈ [1, D] ensures that at least one component of the trial vector is inherited from the mutant vector.
Selection: The selection operator ensures that the superior individual enters the next generation:

x_i = u_i, if f(u_i) ≤ f(x_i); otherwise x_i.  (9)

Two niching techniques

Algorithm 1 Process of the crowding technique
1: for i = 1 : NP do
2: Generate the trial vector u_i
3: Evaluate the fitness of u_i
4: Select the most similar individual x_s to u_i from P
5: Compare the fitness between x_s and u_i
6: Choose the better one for the next evolution
7: end for
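The DE operators recalled in this section (Eqs. (6)-(9)) can be sketched as follows. This is a hedged illustration, not the paper's implementation; the F and CR values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# "DE/rand/1" mutation (Eq. (7)) followed by binomial crossover (Eq. (8)):
# the trial vector inherits each component from the mutant with probability
# CR, and component j_d is always inherited from the mutant.
def de_rand_1_bin(pop, i, F=0.5, CR=0.9):
    n, d = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i],
                            size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    jd = rng.integers(d)               # forced crossover dimension j_d
    mask = rng.random(d) < CR
    mask[jd] = True                    # at least one mutant component survives
    return np.where(mask, mutant, pop[i])

pop = rng.uniform(-1, 1, size=(8, 3))
trial = de_rand_1_bin(pop, 0)
print(trial.shape)  # (3,)
```

Selection (Eq. (9)) is then a simple greedy comparison between `trial` and `pop[i]`.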

Motivations
Building on the promising performance of TLBO, we employ it to deal with NESs. However, using TLBO directly to solve NESs has several drawbacks: (i) TLBO can only find one optimum and cannot find different roots simultaneously; thus, population diversity must be enhanced during the run; (ii) in the teaching and learning phases of TLBO, the original learning rules limit the performance of the algorithm, which reduces search efficiency. Motivated by these observations, we develop MCTLBO, in which dual niching techniques, fitness-ranking-based adaptive strategy selection, and new learning rules are presented to solve NESs; these are introduced in the following subsections.

Algorithm 2 Process of NSDE
1: Sort the population P according to the fitness in ascending order
2: for i = 1 : n do
3: Select the n most similar individuals to x_i from P to form OP_i
4: Delete x_i from P
5: for j = 1 to |OP_i| do
6: Generate the trial vector u_i within OP_i
7: Select the most similar individual x_s to u_i from OP_i
8: Compare the fitness between x_s and u_i
9: Choose the better one for the next evolution
10: end for
11: end for
12: Incorporate all sub-populations OP_i to form P

Niching-based TLBO
As described in Sect. Motivations, TLBO must maintain population diversity during the run. The niching technique helps find different promising areas by modifying the search characteristics of the algorithm. Hence, the crowding and speciation techniques are used in MCTLBO so that the learners can fully explore the search space.

Crowding in teaching phase
The crowding technique forms a subpopulation in the vicinity of each individual. This makes it easy for the algorithm to search every small region and enables learners to use neighborhood information. Algorithm 1 describes the detailed process of the technique. Notably, to improve the search performance of the learning rules in the teaching phase, this paper improves the technique; the details are elaborated in Sect. Learning in teaching phase.
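The crowding replacement of Algorithm 1 can be sketched compactly. This is a minimal illustration of the idea, with names and test data of our choosing: each trial vector competes only with its nearest neighbor in the population, which preserves niches around different roots.

```python
import numpy as np

# Crowding replacement: each trial vector u_i competes with the most
# similar (nearest in Euclidean distance) member x_s of the population,
# and the better of the pair survives (Algorithm 1, lines 4-6).
def crowding_select(pop, fitness, trials, trial_fitness):
    pop, fitness = pop.copy(), fitness.copy()
    for u, fu in zip(trials, trial_fitness):
        s = int(np.argmin(np.linalg.norm(pop - u, axis=1)))  # nearest x_s
        if fu < fitness[s]:                                  # keep the better
            pop[s], fitness[s] = u, fu
    return pop, fitness

pop = np.array([[0.0], [10.0]])
fit = np.array([5.0, 5.0])
new_pop, new_fit = crowding_select(pop, fit,
                                   np.array([[0.5]]), np.array([1.0]))
print(new_pop[0], new_fit[0])  # [0.5] 1.0 — only the nearest member changed
```

Because a trial near one niche can never displace an individual in another niche, multiple basins of attraction stay populated simultaneously.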

Speciation in learning phase
The speciation technique partitions the population into multiple sub-populations, and the algorithm searches the region of each sub-population. However, the original speciation technique may have a defect in determining seeds. In this section, we improve it; the specific process is shown in Algorithm 3, where A is an external archive that saves the found roots, size(A) is the size of A, dis is a distance threshold given in advance, and NFE is the number of fitness evaluations.

Algorithm 3 Process of the improved speciation technique
1: Sort the population P according to the fitness in ascending order
2: for i = 1 : n do
3: if size(A) ≠ 0 then
4: Calculate the minimum Euclidean distance d_i between x_i and the roots in A
5: if d_i < dis then
6: Randomly re-initialize x_i
7: Evaluate the fitness of x_i
8: NFE = NFE + 1
9: end if
10: end if
11: end for
12: Assign different individuals to each seed and form the subpopulations OP_i
The main difference from the original speciation technique lies in the added condition when determining seeds. In lines 3-10, if the algorithm has not yet found any root, the current individual can be considered a seed. Otherwise, the distance between x_i and the roots in A is checked; if it is too small, x_i is re-initialized and its fitness is re-evaluated. The reasons for re-initialization are as follows: (1) if a seed is close to an already-found root, the algorithm has a high probability of re-finding the same root in the subsequent search, which wastes computing resources; (2) other individuals get the opportunity to serve as seeds, so the algorithm can explore other promising areas and increase the probability of finding a new root.
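The seed-screening step of Algorithm 3 can be sketched as follows. This is a hedged illustration: the function name, bounds, and threshold value are assumptions, but the logic mirrors the text, re-initializing any seed candidate that lies within `dis` of an archived root.

```python
import numpy as np

rng = np.random.default_rng(4)

# Seed screening (Algorithm 3, lines 3-10): a candidate seed that lies
# within `dis` of an already-archived root is randomly re-initialized so
# the search is not wasted on a root that has been found.
def screen_seed(x, archive, dis, low, high):
    if len(archive) > 0:
        d = min(np.linalg.norm(x - a) for a in archive)
        if d < dis:                              # too close to a known root
            return rng.uniform(low, high, size=x.shape), True
    return x, False                              # kept as a seed

archive = [np.zeros(2)]
x, reinit = screen_seed(np.array([0.01, 0.0]), archive,
                        dis=0.1, low=-1.0, high=1.0)
print(reinit)  # True: the candidate sat next to an archived root
```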

Multi-population cooperation mechanism
As described in Sect. Speciation in learning phase, the population is divided into subpopulations. In this section, a multi-population cooperation mechanism is proposed to further improve diversity. The detailed process is as follows:
Step 1: Following the method in the previous section, the subpopulations are arranged topologically so that each subpopulation can only transmit information to a specific subpopulation and, similarly, can only receive information from certain subpopulations.
Step 2: Individual information is transferred by means of intergenerational communication. In other words, if iter mod S = 0, the multi-population cooperation mechanism is triggered, where iter is the current iteration number and S is the interval in generations.
Step 3: The worst individual of each subpopulation is replaced by an individual randomly selected from the adjacent (migrating) subpopulation. Information exchange among subpopulations is thus realized through migration.
Figure 1 shows the process of multi-population cooperation. The graph on the left shows four subpopulations, with the red individuals representing the worst individual in each subpopulation. In the figure on the right, the blue individuals represent migrating individuals acquired from adjacent subpopulations. Cooperation among subpopulations is realized by transferring individuals, which improves the diversity of the algorithm.
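Steps 1-3 can be sketched as a ring migration. Note this is an illustrative reading of the text and Fig. 1: we assume the worst individual of each subpopulation is the one replaced, and that the migrant comes from the preceding subpopulation in the ring; the exact neighbor ordering is not stated in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ring-topology migration: every S iterations, each subpopulation's worst
# individual (red in Fig. 1) is overwritten by a randomly chosen individual
# copied from its ring neighbor (the blue migrants).
def migrate(subpops, subfits):
    k = len(subpops)
    migrants = [subpops[(i - 1) % k][rng.integers(len(subpops[(i - 1) % k]))].copy()
                for i in range(k)]               # copy first, then overwrite
    for i in range(k):
        w = int(np.argmax(subfits[i]))           # worst individual of subpop i
        subpops[i][w] = migrants[i]
    return subpops

subpops = [rng.uniform(-1, 1, size=(5, 2)) for _ in range(4)]
subfits = [rng.random(5) for _ in range(4)]
migrate(subpops, subfits)
print(len(subpops), subpops[0].shape)  # 4 (5, 2)
```

Copying all migrants before any replacement keeps the exchange order-independent within one migration event.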

Learning in teaching phase
In the proposed method, the learner adopts a fitness-ranking-based adaptive selection scheme to choose learning rules in the teaching phase. First, a probability p_i ∈ [0, 1] is generated for each learner. If rand < p_i, an improved TLBO learning rule is applied by the learner; otherwise, "DE/rand/1" is applied. Algorithm 4 shows the learning rule in the teaching phase. In line 5, p_i is calculated by Eq. (10), where num is the neighborhood size and R(i) is given by Eq. (11), in which RP(i) is the ranking of the fitness of x_i in OP_i. In line 7, if rand < p_i, an experience-learning-based learning rule is used to generate offspring:

x_i,new = x_i + r · (x_teacher,i − T_F · x_mean,i) + r · (x_r1 − x_r2);  (12)

otherwise, in line 9, "DE/rand/1" is adopted:

x_i,new = x_r1 + F · (x_r2 − x_r3),  (13)

where x_teacher,i is the best individual in OP_i; x_mean,i is the mean vector of OP_i; and x_r1, x_r2 (and x_r3) are randomly selected from OP_i. The reasons for adopting the improved learning strategies are as follows:
• On the basis of Eq. (3), Eq. (12) adds a vector disturbance (x_r1 − x_r2). A large p_i means the individual has a high fitness ranking in the current sub-population; such learners can absorb the experience of the teacher and learn other information at the same time.
• If p_i is small, Eq. (13) can balance population diversity and convergence.
Algorithm 4 Learning strategy in the teaching phase
1: for i = 1 : NP do
2: Find the num individuals in the population that are closest to x_i and form a cluster OP_i
3: Sort OP_i according to the fitness in ascending order
4: Determine the ranking of the fitness of x_i in OP_i
5: Adopt Eq. (10) to calculate the probability p_i
6: if rand < p_i then
7: Employ Eq. (12) to generate offspring
8: else
9: Adopt Eq. (13) to generate offspring
10: end if
11: end for
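Algorithm 4's control flow can be sketched as follows. The paper's exact Eqs. (10)-(11) for p_i are not reproduced in this excerpt, so a simple rank-proportional placeholder is used here, purely as an assumption: better-ranked learners receive a larger p_i and therefore use the teaching-style rule more often, while worse-ranked learners fall back to "DE/rand/1". The generation rules themselves are stubbed out as labels.

```python
import numpy as np

rng = np.random.default_rng(6)

# Fitness-ranking-based adaptive rule selection (Algorithm 4, lines 5-10).
# `rank` is 1 (best) .. num (worst) within the niche OP_i.
def choose_rule(rank, num):
    p_i = (num - rank + 1) / num        # PLACEHOLDER for the paper's Eq. (10)
    return "teaching" if rng.random() < p_i else "DE/rand/1"

# The best-ranked learner (rank = 1) always draws the teaching-style rule
# under this placeholder, since p_i = 1.
counts = {"teaching": 0, "DE/rand/1": 0}
for _ in range(1000):
    counts[choose_rule(rank=1, num=8)] += 1
print(counts)  # {'teaching': 1000, 'DE/rand/1': 0}
```

The key design point survives any reasonable choice of Eq. (10): rule selection is biased by local fitness rank rather than being fixed for the whole population.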

Learning in learner phase
Owing to the loss of diversity, the original learning-phase rule in Eq. (5) makes TLBO prone to stagnation. To remedy this, an experience-learning-based learning rule (Eq. (14)) is adopted in this paper. Compared with Eq. (5), Eq. (14) adds an x_mean term and an experience-learning term (x_r1 − x_r2). First, x_mean improves the exploration ability of learners. Second, experience learning increases the search disturbance.

Implementation of MCTLBO
By integrating the two improved niching techniques, the fitness-ranking-based adaptive selection scheme, and the experience-learning-based learning rules, we propose the MCTLBO framework. Algorithm 5 shows the steps of MCTLBO, where NFE is the number of fitness evaluations and Max_NFE is the maximum NFE allowed. In Algorithm 5, lines 4-6 apply the crowding technique and the enhanced learning rules in the teaching phase. Lines 7-10 employ the improved speciation technique and the experience-learning-based learning strategy in the learning phase. Lines 12-22 determine whether any individuals in the population satisfy the root condition; if so, they are re-initialized to ensure population diversity. Here, ε is a threshold that determines whether the accuracy of x_i meets the root condition.

Algorithm 5 Framework of MCTLBO
Require: Control parameters: NP, Max_NFE
1: Randomly initialize the population P and calculate their fitness
2: while NFE < Max_NFE do
3: Adopt Algorithm 4 to learn in the teaching phase
4: Employ the crowding technique (Algorithm 1) to select the most similar individual for each offspring
5: Update NFE
6: Compare the fitness between parents and offspring and retain the better individual
7: Apply Algorithm 3 to form multiple sub-populations
8: Use the multi-population cooperation mechanism to search
9: Employ Eq. (14) to learn in the learning phase
10: Update NFE
11: Compare the fitness between parents and offspring
12: for i = 1 : NP do
13: Determine whether the current individual x_i is a root
14: if f(x_i) ≤ ε then
15: Obtain the minimum Euclidean distance (min_dis) between x_i and all elements of A
16: if min_dis ≥ δ then
17: Save x_i into A
18: Re-initialize x_i and evaluate its fitness
19: end if
20: Update NFE
21: end if
22: end for
23: iter = iter + 1
24: end while
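The archive-update step in lines 12-22 can be sketched in isolation. This is a hedged illustration: an individual whose objective value falls below the accuracy threshold ε is treated as a root candidate, and it enters the archive A only if it is at least δ away from every stored root; the ε and δ values below are illustrative, not the paper's settings.

```python
import numpy as np

# Archive admission (Algorithm 5, lines 14-19): accept x as a new root only
# if it is accurate enough (f(x) <= epsilon) and not a duplicate of a root
# already in the archive (min distance >= delta).
def update_archive(x, fx, archive, epsilon=1e-6, delta=0.1):
    if fx > epsilon:
        return False                             # not accurate enough yet
    if all(np.linalg.norm(x - a) >= delta for a in archive):
        archive.append(x.copy())                 # a genuinely new root
        return True
    return False                                 # duplicate of a known root

A = []
print(update_archive(np.array([1.0, 0.0]), 1e-9, A))    # True: first root
print(update_archive(np.array([1.0, 1e-4]), 1e-9, A))   # False: too close
```

In the full algorithm, an individual that passes the accuracy test is also re-initialized afterwards so the population keeps hunting for the remaining roots.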

Complexity of MCTLBO
The complexity of MCTLBO is as follows:
• In the teaching phase, the complexity is O(iter · NP · log(NP)), where iter is the number of iterations and NP is the population size.
• In the learning phase, the complexity is O(iter · NP · log(NP)).
• In the multi-population cooperation, the complexity is O((iter/S) · NP).
• In the archive updating, the complexity is O(iter · NP · S_A), where S_A is the archive size of A.
Therefore, the overall complexity of MCTLBO is O(iter · NP · log(NP)).

Test suite and evaluation indicators
Thirty classical NESs [22,27,36], which have been widely used in many papers, are adopted to evaluate the performance of MCTLBO. We use two classical evaluation indicators, root ratio and success rate, to compare the performance of different algorithms, as described below.
Root Ratio (RR): RR measures the proportion of known roots found by the algorithm over multiple runs, reflecting its root-finding ability. It is calculated as:

RR = (sum_{i=1}^{N_run} N_rf,i) / (N_r · N_run),  (15)

where N_run represents the total number of runs, N_rf,i is the number of roots found in the i-th run, and N_r is the number of known roots.
Success Rate (SR): SR is the probability of successfully finding all roots of an NES over multiple runs. SR is defined as:

SR = N_success / N_run,  (16)

where N_success denotes the number of runs in which the algorithm finds all roots.
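Both indicators are straightforward to compute; the sketch below follows Eqs. (15)-(16) directly, with an invented four-run example (N_r = 4 known roots).

```python
# RR averages the fraction of the N_r known roots found per run (Eq. (15));
# SR is the fraction of runs that found all of them (Eq. (16)).
def root_ratio(roots_found_per_run, n_roots):
    return sum(roots_found_per_run) / (n_roots * len(roots_found_per_run))

def success_rate(roots_found_per_run, n_roots):
    n_success = sum(1 for k in roots_found_per_run if k == n_roots)
    return n_success / len(roots_found_per_run)

runs = [4, 4, 3, 4]                 # roots found in each of 4 runs, N_r = 4
print(root_ratio(runs, 4))          # 0.9375
print(success_rate(runs, 4))        # 0.75
```

RR rewards partial success (the third run still contributes 3/4), while SR only credits runs that find every root, which is why SR is always the stricter of the two.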
In addition, the Wilcoxon and Friedman tests are used to verify the statistical differences between the various methods. In the multi-problem Wilcoxon test, R+ represents the rank sum over problems where the algorithm performs better than its competitor, while R− is the opposite.

Compared algorithms
In this section, we select several advanced algorithms for comparison, classified as follows:
• Repulsion-based methods: RADE [16] and DREA [17].
The parameters of the comparison algorithms remain the same as in their original papers. The parameters of MCTLBO are set as follows:

Experimental results
The RR and SR obtained by MCTLBO and its competitors are shown in Tables 1 and 2. It is clear that MCTLBO obtained the highest RR and SR in solving the 30 NESs (RR = 0.99, SR = 0.97), followed by HNDE/2A, TPEA, and DDE/R. Moreover, MCTLBO can locate all roots in 26 out of 30 NESs, while HNDE/2A can successfully find all roots in 28 out of 30 NESs. Nevertheless, MCTLBO achieved better overall performance than HNDE/2A.
To further demonstrate the superiority of MCTLBO, the Friedman test and Wilcoxon test results are given in Tables 3 and 4. From Table 3, HNDE/2A achieves the best (smallest) ranking on RR (5.6167), and MCTLBO obtains the best ranking on SR (5.6667). From Table 4, MCTLBO is significantly better than RADE and DREA. Although there is no significant difference between MCTLBO and FNODE, HNDE/2A, DDE/R, KSDE, TPEA, and SaTriMODE, all R+ values are higher than R−, which also indicates the superiority of MCTLBO.

Convergence on RR
To further investigate the convergence of MCTLBO on RR, Fig. 2 shows the convergence curves of RR obtained by MCTLBO and several competing algorithms on F04, F06, F12 and F17. Concretely, MCTLBO finds all the roots of these functions. When solving F04, F06 and F12, our algorithm converges faster in the initial stage, slows down in the middle stage (where it is weaker than other algorithms), but rises again in the last stage and finds all the roots. When solving F17, the convergence of MCTLBO is weaker than that of DDE/R at the beginning, but after nearly 300 iterations, MCTLBO clearly obtains better convergence than the other algorithms. Hence, the proposed MCTLBO achieves competitive convergence performance on RR.

Analysis of different components of MCTLBO
Compared with the original TLBO, the search method and learning strategy of MCTLBO are modified in the teaching and learning phases. This section discusses the impact of the different components on performance. The MCTLBO variants are as follows:
• MCTLBO-1: only Eq. (12) is used in the teaching phase;
• MCTLBO-2: only Eq. (13) is used in the teaching phase;
• MCTLBO-3: the improved seed-determination method is removed from the learning phase;
• MCTLBO-4: Eq. (14) is replaced with Eq. (5) in the learning phase.
The detailed results obtained by the MCTLBO variants are shown in Table 5. In addition, the statistical results of the Friedman and Wilcoxon tests are given in Tables 6 and 7, respectively. The analysis is as follows:
• MCTLBO-1 and MCTLBO-2: In these two variants, only one of the learning rules is used. No matter which rule is used alone, both MCTLBO-1 and MCTLBO-2 performed worse than MCTLBO. In particular, the RR and SR values obtained on F12, F13 and F25 all decreased. This shows the effectiveness of the fitness-ranking-based adaptive selection scheme.
• MCTLBO-3: In MCTLBO-3, the seed-determination method is removed. From the results, MCTLBO-3 lost roots on more NESs, such as F04, F05, F12, F13, F23 and F25. The reason may be that the lack of seed screening reduces the accuracy of the subpopulation division and limits the algorithm's search efficiency.
• MCTLBO-4: MCTLBO-4 adopts the original learning rule of TLBO. The overall performance of MCTLBO-4 declines very significantly. The main reason is that the original learning rule easily falls into local optima; using it in multiple sub-populations makes it difficult for the algorithm to find multiple roots in a single run.

Discussions
In MCTLBO, two niching techniques (crowding and the improved speciation) are used to improve population diversity, but they involve two parameters (num and n). In MCTLBO, num = 8 and n = 20. In this section, the impact of these parameters on the algorithm is studied. Figure 3 shows the average RR and SR obtained with different parameter settings. The analysis is as follows:
• Except for MCTLBO-10, MCTLBO and MCTLBO-5 ~ MCTLBO-9 achieve similar performance on RR and SR. The reason is that MCTLBO-10 uses a larger num and a smaller n: a large num may limit the convergence speed of the algorithm, while a small n makes the search inadequate, resulting in lost roots.
• The statistical result of the Friedman test is given in Table 8. MCTLBO obtains the best RR and SR rankings, which shows that proper parameter settings have a great influence on the performance of MCTLBO. Therefore, a feasible range for the two parameters is num ∈ [5, 10] and n ∈ [10, 20].

Study on the new test set
We select 18 new NESs [37] to further verify MCTLBO's performance. In the new test set, the 18 equation systems have higher complexity, more roots, and scalability. In this experiment, 13 algorithms from Sect. Compared algorithms are selected for comparison with MCTLBO. It is worth noting that, because the source codes of TPEA and SaTriMODE are not available, these two algorithms are excluded from this experiment. All algorithms were run 30 times, and the algorithm parameters were kept consistent with the original papers. Tables 9 and 10 report the average RR and SR obtained by the different algorithms on the 18 new NESs. LSTP achieves the best result (RR = 0.54, SR = 0.19), followed by MCTLBO and HNDE/2A. In addition, Table 11 shows the rankings of the different algorithms obtained by the Friedman test: LSTP gets the highest ranking, followed by MCTLBO and HNDE/2A.
Concretely, our approach MCTLBO exhibits promising results on MNE1, MNE4-MNE6, and MNE8-MNE15. Although the overall performance of MCTLBO is lower than that of LSTP, it outperforms the other algorithms. It is worth noting that LSTP is specifically designed to solve these 18 problems and thus achieves the best performance; when tested on the 30 NESs in Sect. Experimental results, its performance was far inferior to MCTLBO. Therefore, in future work we need to adapt MCTLBO to the characteristics of these 18 test functions so that it obtains better results on the new test set.

Conclusion
To locate multiple roots of NESs, we have proposed a multi-population cooperative teaching-learning-based optimization. In our approach, two niching techniques, crowding and an improved speciation, are integrated into TLBO to enhance population diversity. Then, new learning rules and a fitness-ranking-based adaptive selection scheme are designed to promote MCTLBO's root-location efficiency. Experimental results show that MCTLBO achieved the best RR and SR when compared with several state-of-the-art algorithms.
In addition, MCTLBO exhibited promising performance on the new test set.
In future work, we intend to apply the proposed MCTLBO to several real-world problems in motor systems [41] and robot control [42]. We will also focus on modifying MCTLBO for the new test set. Furthermore, multi-task evolution [43,44] may be utilized to design new approaches, in which multiple NES problems are solved simultaneously in a single run.

Fig. 3 Average RR and SR obtained by MCTLBO with different parameters

• Learning rules based on experience learning are developed to promote the search efficiency of MCTLBO.
• The performance of MCTLBO is evaluated by solving NESs with different features.
The rest of the paper is organized as follows. Sect. Background introduces the NES objective transformation technique, TLBO, DE, and two niching-based DE variants. Sect. Our approach describes MCTLBO. Sects. Experimental studies and Study on the new test set demonstrate the experimental results. Finally, Sect. Conclusion summarizes this paper.

Table 1
Comparison of MCTLBO and other methods on R R

Table 2
Comparison of MCTLBO and other methods on S R

Table 3
Average ranks of MCTLBO and other comparison algorithms obtained by the Friedman test on R R and S R

Table 4
Results obtained by the Wilcoxon test for MCTLBO in terms of RR and SR. Bold values indicate that MCTLBO significantly outperforms the compared algorithm; italic values indicate no significant difference between MCTLBO and the compared algorithm when solving NESs

Table 5
Comparison of MCTLBO and its variants on R R

Table 6
Average ranks of MCTLBO variants obtained by the Friedman test on R R and S R

In summary, MCTLBO obtained the highest average RR and SR values. Moreover, MCTLBO has the best ranking in Table 6. Additionally, MCTLBO provides higher R+ than R− in all cases, which shows its better search efficiency. Thus, combining the two niching techniques, the fitness-ranking-based adaptive selection scheme, and the experience-learning-based learning rules enhances the algorithm's ability to locate the roots of NESs.

Table 8
Average ranks of MCTLBO variants obtained by the Friedman test on R R and S R

Table 7
Results obtained by the Wilcoxon test for MCTLBO in terms of RR and SR. Bold values indicate that MCTLBO significantly outperforms the compared algorithm; italic values indicate no significant difference between MCTLBO and the compared algorithm when solving NESs

Table 11
The average ranking of R R and S R obtained by different algorithms on the new test set