A multipopulation evolutionary framework with Steffensen’s method for dynamic multiobjective optimization problems

Dynamic multiobjective optimization problems (DMOPs) require evolutionary algorithms that can efficiently track moving Pareto-optimal fronts. This paper presents a dynamic multiobjective evolutionary framework (DMOEF-MS), which adopts a novel multipopulation structure and Steffensen's method to solve DMOPs. In DMOEF-MS, only one population deals with the original DMOP, while the others focus on single-objective problems generated by the weighted summation of the original DMOP. Steffensen's method is then used to control the evolving process in two ways: prediction and diversity maintenance. In particular, the prediction strategy is devised to predict the next promising positions of the individuals that handle the single-objective problems, and the diversity-maintenance strategy is used to increase population diversity before the environment changes and to reinitialize the multiple populations after the environment changes. This paper gives a comprehensive comparison of DMOEF-MS with several state-of-the-art DMOEAs on 14 DMOPs, and the experimental results demonstrate the effectiveness of the proposed algorithm.


Introduction
With multiple conflicting objectives, multiobjective optimization problems (MOPs) [1] have been successfully solved by various evolutionary algorithms (EAs), such as NSGA-II [2], SPEA2 [3], MOPSO [4], MOEA/D [5], ACO [6], and so forth. Since many real-world MOPs need to be optimized in dynamic environments [7][8][9][10][11], how to extend multiobjective evolutionary algorithms (MOEAs) to solve dynamic multiobjective optimization problems (DMOPs) has attracted more and more attention. For static MOEAs, the goal is to find accurate and well-distributed Pareto-optimal fronts (PFs). However, with time-varying optimization environments, a dynamic multiobjective evolutionary algorithm (DMOEA) is expected to find the ideal PF in the current environment and locate the new PF efficiently after the environment changes. Therefore, a promising DMOEA should take the two following issues into account.

- Convergence speed. In a dynamic optimization environment, the PF may change over time. Therefore, a DMOEA is required to converge rapidly before the environment changes.
- Population diversity. A DMOEA should be capable of locating the new PF after the environment changes. Poor population diversity may hinder a DMOEA from tracking the moving PF, especially after the environment changes.
Most of the existing DMOEAs are derived from MOEAs, which were originally designed to solve MOPs in static environments. In recent years, plenty of dynamic handling strategies have been inserted into classic MOEAs to solve DMOPs. According to their inherent behaviors, the existing techniques can be classified into different categories, such as diversity introduction/maintenance strategies [12][13][14][15], memory-based strategies [16][17][18][19][20], multipopulation-based strategies [21][22][23][24][25], and prediction-based strategies [18,[26][27][28][29]. Among these techniques, multipopulation-based and prediction-based strategies have received considerable attention. In particular, multipopulation-based strategies carry out parallel exploration of the search space to maintain population diversity, while prediction-based strategies accelerate convergence by predicting the next most likely positions of the individuals.
In this paper, a multipopulation framework that cooperates with Steffensen's method [30], namely DMOEF-MS, is introduced to solve DMOPs. In many existing multipopulation-based methods, the multiple populations cooperate to explore the search space and focus on the same DMOP. In this case, each population evolves with little diversity-related guiding information obtained from its optimization problem. This paper proposes a multipopulation framework in which one population deals with the original DMOP and the others handle single-objective problems generated by the weighted summation method. The single-objective problems, which differ from each other, take the form of nonlinear functions. Therefore, Steffensen's method, a classical and efficient algorithm for solving nonlinear equations, is inserted into the proposed multipopulation framework to improve the performance of DMOEF-MS. A detailed description of Steffensen's method is presented in Sect. 2.3. Steffensen's method is adopted to control the evolving process of DMOEF-MS in two ways: a prediction strategy and a diversity-maintenance strategy. In particular, the prediction strategy is devised to predict the new locations of the individuals that handle single-objective problems, and the diversity-maintenance strategy is used to increase the population diversity at fixed intervals of generations and to reinitialize the multiple populations after the environment changes. DMOEF-MS was compared with some state-of-the-art DMOEAs on 14 benchmark DMOPs, and the experimental results demonstrate the effectiveness of the proposed algorithm. The main contributions of this paper are listed as follows.
(1) A multipopulation framework is proposed. This framework contains M + 1 populations, where M is the number of objectives in a DMOP. Each of the first M populations handles a single-objective problem, which is obtained as a weighted sum of the original M objective functions. The last population (the (M + 1)th population) optimizes the original DMOP. The interaction of the populations is achieved through a common repository population (REP), which stores the Pareto-optimal solutions found by all M + 1 populations.
(2) A prediction strategy inspired by Steffensen's method is introduced to update the individuals in the first M populations.
(3) A diversity-maintenance strategy is used to increase the population diversity at fixed intervals of generations before the environment changes and to reinitialize the multiple populations after the environment changes.
The rest of this paper is organized as follows. Section 2 introduces the background knowledge of DMOPs and the basics of Steffensen's method. In Sect. 3, a detailed description of the proposed DMOEF-MS is given. Section 4 presents the comparative experiments and the obtained results. Section 5 gives the concluding remarks.

Background knowledge
A MOP can be stated as follows:

min F(x) = (f_1(x), f_2(x), ..., f_M(x)), subject to x ∈ Ω, (1)

where x = (x_1, x_2, ..., x_N) is the candidate solution, Ω is the search space, F : Ω → R^M, and R^M is the objective space. Since a MOP consists of M conflicting objective functions, one cannot find a solution that optimizes all M objective functions simultaneously. Suppose a, b ∈ R^M; a is said to dominate b if and only if a_i ≤ b_i for all i = 1, 2, ..., M and a ≠ b. x ∈ Ω is called a Pareto-optimal solution if there is no x' ∈ Ω such that F(x') dominates F(x). For multiobjective evolutionary algorithms, the purpose is to find the Pareto-optimal set (PS), which contains a number of Pareto-optimal solutions. The set of the objective vectors of the PS members is the Pareto-optimal front (PF).
The continuous DMOPs [27] adopted in this paper are defined as follows:

min F(x, t) = (f_1(x, t), f_2(x, t), ..., f_M(x, t)), subject to x ∈ Ω, (2)

where x = (x_1, x_2, ..., x_N) ∈ Ω is the candidate solution and N is the dimension of the search space. M is the number of objective functions, t is the index of the current environment, and F(x, t) is the objective vector of x in the tth environment. For DMOPs, the challenge is to design efficient dynamic handling strategies that balance population diversity and convergence speed throughout the whole dynamic optimization process. Besides, DMOEAs should also be able to detect environmental changes if the changes are not assumed to be known in advance. The following sections present some representative studies of change detection and dynamic handling strategies, as well as the basic knowledge of Steffensen's method.

Change detection
There are two widely used methods for change detection.

Reevaluation of dedicated detectors
If the reevaluated objective value of a detector differs from the original one, the environment is considered to have changed [12,31]. This method is easy to implement and is also robust if enough detectors are adopted. However, it requires additional function evaluations and may produce inaccurate detection results for a noisy optimization function.

Assessment of algorithms' behaviors
In this method, an environmental change is detected if there is a discrepancy between an algorithm's behaviors and the statistical information obtained from its evolving population [32]. This method does not require additional function evaluations. However, it may need problem-related parameters and may overreact when no environmental change occurs.

Dynamic handling strategies
Many efficient dynamic handling strategies have been proposed in recent years. Most existing techniques can be classified into the following four categories: diversity introduction/maintenance strategies, memory-based strategies, multipopulation-based strategies, and prediction-based strategies.

Diversity introduction/maintenance strategies
The following techniques are widely used to introduce or maintain population diversity. Deb et al. [12] proposed DNSGA-II, which replaces a portion of the population with mutated or randomly generated individuals. This technique is easy to implement, but the introduction of population diversity may come at the cost of losing useful information. To address this problem, several problem-related immigration strategies have been presented, including hybrid immigration [33], memory-based immigration [15], and elitism-based immigration [13]. Besides, local search is also an efficient way to introduce population diversity. Vavak et al. [34] proposed a variable local search strategy (VLS) to increase the population diversity. In VLS, the mutation rate is increased gradually as the algorithm runs. Ruan et al. [14] devised a diversity maintenance strategy (DMS) that increases the population diversity by generating new individuals in estimated regions, which are determined according to the previous information of the decision vectors.

Memory-based strategies
For memory-based strategies, the basic principle is to reuse past information to improve the performance of a DMOEA in the new environment after a change; memory-based strategies are thus suitable for DMOPs with periodically changing environments. Branke [16] proposed a memory scheme that adopts a memory archive to store the best individuals in the population. After the environment changes, the individuals in the memory archive are reused to initialize the population. Goh and Tan [17] devised a dynamic competitive-cooperation co-evolutionary algorithm (dCOEA), in which the memory archive is modified by replacing outdated solutions. Wang and Li [20] proposed two memory schemes to utilize the individuals stored in the memory archive. In the first scheme, the stored individuals are randomly selected as members of the initial population after the environment changes, while in the second scheme, the individuals are modified before being used. Recently, memory-based strategies have usually cooperated with other strategies to improve the performance of DMOEAs. For instance, Peng et al. [19] presented a novel prediction and memory strategy (PMS). In PMS, the stored individuals can be used more efficiently since they are reevaluated before being used. Liang et al. [18] proposed a hybrid of memory and prediction strategies (HMPS), which devised a memory-based technique to predict the new locations of the individuals.

Multipopulation-based strategies
The basic idea of multipopulation-based strategies is to conduct simultaneous exploration of the search space with multiple populations. With their strong parallel processing ability, multipopulation-based strategies have demonstrated their effectiveness for DMOPs with multiple peaks. Branke et al. [35] proposed a multipopulation-based algorithm to solve DMOPs. This algorithm uses several smaller populations to track the most promising peaks over time, while a larger population continuously searches for new peaks. Goh and Tan [17] proposed a competitive and cooperative mechanism for interaction among multiple subpopulations to handle DMOPs. In particular, each subpopulation competes to represent a particular subcomponent of the original MOP, and the winners cooperate to generate the eventual solutions. Shang et al. [21] presented a quantum immune clonal co-evolutionary algorithm (QICCA) to solve DMOPs. With multiple populations, QICCA uses the U-measure to control the competition between populations and a new cooperative operation to obtain better solutions. Liu et al. [22] proposed a modified coevolutionary multi-swarm particle swarm optimizer (CMPSODMO) to solve DMOPs with rapidly changing environments. In CMPSODMO, the number of swarms is determined by the number of objective functions, and an information-sharing strategy is adopted to realize the interaction among all swarms. Gong et al. [23] introduced a framework of dynamic interval multi-objective cooperative co-evolutionary optimization. In this framework, the decision variables are divided into two groups according to interval similarity, and two populations are generated to optimize the two variable groups cooperatively. In a parallel DMOEA [24], the decision variables are divided into several groups according to Spearman rank correlation analysis.
Then multiple subpopulations, which optimize the variable groups in parallel, cooperate to solve DMOPs with changing variables. Xu et al. [25] proposed a cooperative co-evolutionary strategy for solving DMOPs. In this strategy, the decision variables are divided into two subcomponents according to whether or not they interrelate with the environment. Then two subpopulations, which optimize the two subcomponents respectively, are adopted to explore the search space. It can be seen from the above description that how the multiple populations are constructed and how they interact has a crucial influence on the performance of multipopulation-based strategies.

Prediction-based strategies
The basic idea of prediction-based strategies is to reuse as much information as possible from past environments to speed up the search in the new environment. Therefore, prediction-based strategies often cooperate with memory-based strategies and are widely adopted to solve DMOPs with periodic and trending changes. Hatzakis and Wallace [26] proposed a forward-looking approach to solve DMOPs. The approach uses an autoregressive (AR) model to predict the new positions of the individuals according to a sequence of optimal solutions obtained in previous environments. Zhou et al. [27] presented a population prediction strategy (PPS), which divides the Pareto-optimal set (PS) into two parts: a center point and a manifold. In particular, the next center point is predicted by a univariate AR model from a series of stored center points, and the next manifold is estimated from the manifolds of the past two environments. Wu et al. [28] introduced a directed search strategy (DSS) to accelerate convergence. In DSS, new individuals are predicted using the moving directions derived from the PS in the previous two generations. Muruganantham et al. [36] proposed a new DMOEA that uses the Kalman filter (KF) to estimate the next locations of the solutions. Jiang and Yang [37] presented a steady-state and generational evolutionary algorithm (SGEA), which relocates the Pareto-optimal solutions according to information collected from the previous and current environments. Jiang et al. [38] proposed Tr-DMOEA, which combines population-based EAs with transfer learning to solve DMOPs. Li et al. [39] introduced a special points-based hybrid prediction strategy (SHPS) for solving DMOPs. In SHPS, the initial population in a new environment consists of two parts: the predicted special points and the individuals predicted by PPS. Rong et al. [40] adopted multiple prediction models to relocate the Pareto-optimal solutions once the environment changes. Liang et al. [18] proposed MOEA/D-HMPS, which adopts two prediction strategies to relocate the Pareto-optimal solutions according to whether the current environmental change is similar to historical changes. Rong et al. [29] proposed a multi-model prediction (MMP) method to tackle continuous DMOPs with more than one type of unknown PS change. The MMP method utilizes four prediction models to handle different types of PS changes.

Steffensen's method
As a classical problem in scientific computing, finding the root of a nonlinear equation f(a) = 0 has received wide attention over the years. Among the variety of root-searching methods, Newton's method [41] is probably the most famous one. The iterative process of Newton's method can be implemented according to Eq. (3), with a(0) as the initial estimate of the root:

a(g + 1) = a(g) − f(a(g)) / f'(a(g)), (3)

where a(g) is the current root estimate, a(g + 1) is the estimate after one iteration, and f'(a(g)) is the derivative of f at a(g). However, the fact that the derivative of a nonlinear function is sometimes difficult to obtain limits the application of Newton's method.
As a notable improvement of Newton's method, Steffensen's method [30] was proposed:

a(g + 1) = a(g) − f(a(g))² / ( f(a(g) + f(a(g))) − f(a(g)) ). (4)

It can be seen from Eq. (4) that the derivative f'(a(g)) is replaced by the finite-difference approximation ( f(a(g) + f(a(g))) − f(a(g)) ) / f(a(g)). In other words, Steffensen's method is derivative free. The theoretical analysis given by Conte and de Boor [42] demonstrates that both Newton's method and Steffensen's method are quadratically convergent.

Dynamic multipopulation evolutionary framework with Steffensen's method (DMOEF-MS)
This paper proposes a dynamic multipopulation evolutionary framework (DMOEF-MS), which integrates Steffensen's method into a novel multipopulation framework to solve DMOPs. In this section, the motivation and framework of the proposed algorithm are given, followed by a detailed description of the key components in DMOEF-MS.

Motivations of DMOEF-MS
As described in Sect. 2.2.3, the basic idea of multipopulation-based strategies is that multiple populations cooperate to explore the search space and compete to generate promising solutions. With their strong parallel processing ability, multipopulation-based strategies have demonstrated their effectiveness for DMOPs with multiple peaks. However, in many existing multipopulation-based algorithms, the multiple populations focus on the same optimization problem. In this case, two different populations may generate Pareto-optimal solutions that lie in the same or close areas of the PF, which wastes computation. In the multipopulation structure of DMOEF-MS, the multiple populations handle different optimization problems. Therefore, the adopted multipopulation structure can improve the ability of DMOEF-MS to maintain population diversity and obtain well-distributed PFs, since the multiple populations focus on different areas of the PF by handling different optimization problems. For DMOEAs with prediction-based strategies, the basic idea is to reuse as much information as possible from past environments to speed up the search in the new environment. Thus, prediction-based strategies often cooperate with memory-based strategies and are widely adopted to solve DMOPs with periodic and trending changes. However, the selection and utilization of memory information from past environments have a crucial influence on the performance of DMOEAs. Improper use of past information may directly weaken the performance of a DMOEA with prediction and memory strategies. In DMOEF-MS, Steffensen's prediction strategy is used to predict the next likely positions of the individuals that handle single-objective problems. The proposed prediction strategy is implemented according to the populations in the current environment.
On the one hand, Steffensen's prediction strategy can find more accurate solutions with the help of Steffensen's method, a classical and efficient root-searching method for solving nonlinear equations. On the other hand, without reusing information from past environments, DMOEF-MS does not need to consider the selection and utilization of past information. Moreover, Steffensen's diversity-maintenance strategy is proposed to increase the population diversity throughout the whole run of DMOEF-MS. In this strategy, a closeness metric is introduced to measure the closeness between a solution and the weight vectors, which are evenly distributed in the objective space. With the guidance of the closeness metric, Steffensen's diversity-maintenance strategy tries to reduce the waste of computation while increasing the population diversity in DMOEF-MS.
The detailed descriptions of the framework and key components of DMOEF-MS are given in Sect. 3.2.

Overall framework of DMOEF-MS
The overall framework of DMOEF-MS is illustrated in Fig. 1, and the detailed procedure of the proposed algorithm can be found in Algorithm 1. M is the dimension of the objective space, t is the index of the current environment, and g is the number of generations. DMOEF-MS starts with initial multiple populations and a repository population (REP), which stores the Pareto-optimal solutions found so far. Among the multiple populations, each of the first M populations handles a single-objective problem that is generated by the weighted summation method. The last population (the (M + 1)th population) optimizes the original DMOP. If no environmental change is detected, the next positions of the individuals in the first M populations are updated by Steffensen's prediction strategy (line 9). Since POP_{M+1} deals with the original DMOP, which can be regarded as a static MOP when no environmental change occurs, POP_{M+1} can be evolved by any existing MOEA. In this paper, the underlying MOEA is NSGA-II [2] (line 10), which has received significant attention due to its good optimization performance in solving MOPs.

Algorithm 1 Overall framework of DMOEF-MS
Input: M (dimension of the objective space), n_1 (size of the first M populations), n_2 (size of the last population)
Output: REP
 ...
 5: if the environmental change occurs then
 6:   Reinitialize the multiple populations according to Steffensen's diversity-maintenance strategy and REP(g);
 ...
 9: Obtain POP_i(g) (i = 1, 2, ..., M) by Steffensen's prediction strategy;
10: Obtain POP_{M+1}(g) according to NSGA-II;
11: Obtain REP(g) as the nondominated solutions found by all populations so far;
12: if mod(g, s) = 0 then
13:   Update REP(g) by Steffensen's diversity-maintenance strategy;
14: end if
15: end while
To increase the population diversity, Steffensen's diversity-maintenance strategy is adopted to update REP at fixed intervals of generations, which is determined by a preset parameter s (line 12). Moreover, Steffensen's diversity-maintenance strategy is also used to reinitialize the multiple populations in DMOEF-MS when the environment changes (line 6).
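The loop structure of Algorithm 1 can be sketched as follows. All strategy functions here are hypothetical stubs passed in as parameters (the real components are described in Sect. 3.3); only the nondominated filtering used to build REP (line 11) is implemented:

```python
def nondominated(points):
    """Keep objective vectors not dominated by any other (minimization)."""
    out = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            out.append(p)
    return out

def run_dmoef_ms(evaluate, change_detected, predict_step, nsga2_step,
                 reinit, diversity_update, pops, generations, s=5):
    """Structural sketch of Algorithm 1; pops holds the M + 1 populations."""
    rep = []
    for g in range(1, generations + 1):
        if change_detected(g):
            pops = reinit(pops, rep)                      # line 6
        pops[:-1] = [predict_step(p) for p in pops[:-1]]  # line 9
        pops[-1] = nsga2_step(pops[-1])                   # line 10
        rep = nondominated([evaluate(x) for p in pops for x in p])  # line 11
        if g % s == 0:
            rep = diversity_update(rep)                   # line 13
    return rep
```

With identity stubs and a trivial two-objective evaluation, the loop simply accumulates the nondominated objective vectors of all populations into REP; the value of the sketch is the control flow, not the stubs.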

Key components in DMOEF-MS
The following section gives a detailed description of the key components in DMOEF-MS, including the construction of single-objective problems, change detection, Steffensen's prediction strategy, Steffensen's diversity-maintenance strategy, and reinitialization strategy.

Construction of single-objective problems
For the ith (i = 1, 2, ..., M) population, its optimization function is obtained according to Eq. (5):

G_i(x, t) = Σ_{j=1}^{M} λ_j f_j(x, t), with Σ_{j=1}^{M} λ_j = 1, (5)

where M is the dimension of the objective space, f_j is the jth objective function of the original DMOP, and λ_j is the jth element of the randomly generated vector λ = [λ_1, λ_2, ..., λ_M]. By exchanging the values of λ_i and the largest member of λ, λ_i is set to be the largest element of λ for the ith population POP_i, as shown in Eq. (5). In other words, POP_i (i = 1, 2, ..., M) places emphasis on the ith objective function. Therefore, for a DMOP with M objective functions, POP_1, POP_2, ..., POP_M focus on different objective functions during their evolution. The first M populations and the last population POP_{M+1}, which handles the original DMOP, share the nondominated solutions found during their evolution in order to obtain well-distributed Pareto-optimal solutions.
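The construction can be sketched as follows. The names `build_weights` and `scalarize` are our own, and the base vector λ is drawn uniformly and normalized, which is one simple way to satisfy the paper's only stated requirements (random λ with elements summing to 1):

```python
import random

def build_weights(M, seed=None):
    """One weight vector per population: lambda is random with sum 1, and
    for population i the largest member is swapped into position i."""
    rng = random.Random(seed)
    raw = [rng.random() for _ in range(M)]
    lam = [v / sum(raw) for v in raw]   # normalize so the elements sum to 1
    weights = []
    for i in range(M):
        w = lam[:]                      # copy the base vector
        k = w.index(max(w))             # position of the largest member
        w[i], w[k] = w[k], w[i]         # swap it into position i
        weights.append(w)
    return weights

def scalarize(w, objectives, x, t):
    """Weighted sum G_i(x, t) = sum_j lambda_j * f_j(x, t), as in Eq. (5)."""
    return sum(l * f(x, t) for l, f in zip(w, objectives))
```

Each of the M vectors shares the same elements, permuted so that population i puts the largest weight on objective i.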

Change detection
Since there is no noise in the test DMOPs adopted in this paper, environmental changes are detected by reevaluating selected detectors. Specifically, several individuals are selected as detectors, and their objective values are recalculated at each generation. Note that detecting environmental changes in this way costs additional function evaluations. If the new objective values differ from the previous ones, an environmental change is considered to be detected.
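The detection step can be sketched as follows (a minimal sketch; `detect_change` and its bookkeeping are our own names, not the paper's):

```python
def detect_change(detectors, stored_values, evaluate, t):
    """Reevaluate detector individuals in the current environment t; any
    difference from the stored objective values signals a change."""
    changed = False
    for i, x in enumerate(detectors):
        new_val = evaluate(x, t)
        if new_val != stored_values[i]:
            changed = True
        stored_values[i] = new_val  # keep the stored values up to date
    return changed
```

Each call spends one extra function evaluation per detector, which is the cost noted above.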

Prediction strategy based on Steffensen's method
In DMOEF-MS, the first M populations deal with single-objective problems obtained according to Eq. (5). Suppose the single-objective problem for POP_i is G_i. Then, Steffensen's method can be utilized to update the individuals in POP_i. Suppose x(g) is an individual in POP_i at the gth generation and G_i(x(g)) is its objective value. A target value l is generated as

l = r · G_i(x(g)), (6)

where r is randomly generated in (0, 1); therefore, l is smaller than G_i(x(g)). The goal of the proposed prediction strategy is to decrease the value of G_i(x(g)), since the original DMOP is a minimization problem. In this case, G_i(x(g)) can be decreased by driving it toward l. Suppose x_j(g) is the jth dimension chosen to be predicted. The problem then becomes finding the root of the nonlinear equation G_i(x_j) − l = 0, with x_j(g) as its initial root, where G_i(x_j) denotes G_i evaluated at x(g) with its jth dimension replaced by x_j and the other dimensions held fixed. Therefore, the next position of x_j(g), namely x_j(g + 1), can be estimated by applying the above-mentioned Steffensen's method to this equation:

x_j(g + 1) = x_j(g) − h(x_j(g))² / ( h(x_j(g) + h(x_j(g))) − h(x_j(g)) ), with h(x_j) = G_i(x_j) − l, (7)

where x_j(g + 1) is the updated value after using the proposed prediction strategy and N is the dimension of the search space. In this case, Steffensen's prediction strategy costs at most 2 × N function evaluations for producing each offspring. The detailed procedure of Steffensen's prediction strategy is shown in Algorithm 2.
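A single update of this kind can be sketched as follows (our own sketch: `steffensen_predict` is a hypothetical name, the random factor r is passed in explicitly, and G is treated as a function of one dimension with the others held fixed):

```python
def steffensen_predict(G, x, j, r, t):
    """One Steffensen update on dimension j of x, driving G(x, t) toward
    the target l = r * G(x, t) as in Eqs. (6) and (7)."""
    l = r * G(x, t)                      # Eq. (6): target below current value

    def h(xj):                           # G as a function of dimension j only
        y = list(x)
        y[j] = xj
        return G(y, t) - l

    hx = h(x[j])
    denom = h(x[j] + hx) - hx
    if denom == 0:                       # guard against a flat step
        return list(x)
    y = list(x)
    y[j] = x[j] - hx * hx / denom        # Eq. (7): derivative-free update
    return y
```

For example, with G(x, t) = x_1² + x_2², x = (1, 1), and r = 0.5, one update of the first dimension moves the objective value from 2 toward the target 1.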

Diversity-maintenance strategy based on Steffensen's method
A DMOEA must converge fast in order to find the ideal Pareto-optimal solutions before an environmental change occurs. In this paper, Steffensen's prediction strategy is proposed to accelerate the convergence speed. However, fast convergence may also lead to a rapid loss of population diversity. As shown in Fig. 2, the obtained Pareto-optimal solutions may concentrate in a small area of the whole PF, which makes it difficult to track the ideal PF after the environment changes. This paper therefore devises a diversity-maintenance strategy, also based on Steffensen's method, to increase the population diversity. In Steffensen's diversity-maintenance strategy, the objective space is divided by a set of evenly distributed weight vectors. Suppose the dimension of the objective space is 2. As shown in Fig. 3, w_1, w_2, ..., w_u are u weight vectors, and the solid points represent the Pareto-optimal solutions found so far. Steffensen's diversity-maintenance strategy maintains population diversity by obtaining u well-distributed solutions that are as close as possible to the u weight vectors, respectively.

The most important step in Steffensen's diversity-maintenance strategy is to evaluate the closeness between a solution and a weight vector. To illustrate the closeness metric used in this paper, x and w_3 in Fig. 3 are taken as an example. Specifically, the closeness metric c between x and w_3 can be calculated according to Eq. (8). Let x* be the projection of x on w_3. As illustrated in Fig. 3, d_1 is the distance between x and x*, and d_2 is the distance between x* and the origin O. θ > 0 is a preset accuracy parameter and is set to 0.5 in this paper. The smaller the value of c, the closer x is to w_3. r is randomly generated in (0, 1); hence c_1 = r · c is smaller than c. In this case, a solution that is closer to w_3 can be obtained by finding the root of the nonlinear equation c − c_1 = 0, as shown in Eq. (9). As in Sect. 3.3.3, this nonlinear equation can be solved by Steffensen's method. In Eq. (9), x_j and x̃_j are the jth elements of x before and after using Steffensen's diversity-maintenance strategy. The detailed procedure of Steffensen's diversity-maintenance strategy is given in Algorithm 3 (only partially recoverable here):

Algorithm 3 Steffensen's diversity-maintenance strategy (fragment)
 ...
 4: Obtain c and c_1 by Eq. (8);
 5: Set j = 1, stop_flag = 0;
 6: while stop_flag = 0 do
 7:   Obtain x̃_j by Eq. (9);
 8:   if L(x) ≤ c_1 then
 9:     Replace the jth dimension of x with x̃_j;
10:   end if
11:   ...

Algorithm 4 Reinitialization strategy (fragment)
 ...
 5: if i ≤ M then
 6:   n = n_1;
 7: else
 8:   n = n_2;
 9: end if
10: while |POP_i| < n do
11:   Select a weight vector w from w_1, w_2, ..., w_u randomly;
12:   Obtain the individual a, which is the closest to w, from POP;
13:   Update ...

Reinitialization strategy
In this paper, when an environmental change occurs, the multiple populations are reinitialized by reusing the nondominated solutions stored in REP from the last environment. Steffensen's diversity-maintenance strategy is adopted in the reinitialization process to increase the population diversity after the environment changes. The detailed procedure is shown in Algorithm 4.

Test problems and parameter settings
According to whether the PFs or PSs change with the environment, DMOPs fall into four types. In this paper, the proposed DMOEF-MS is tested on 14 DMOPs, whose detailed information is given in Table 1.
The FDA test problems are frequently adopted in the performance assessment of DMOEAs, and the dMOP test problems are an extension of the FDA problems. The above two test suites consist of 8 problems, which pertain to different types of DMOPs and have linear linkages between the decision variables. For FDA3 and FDA5 in the FDA test suite, the density distribution of the Pareto-optimal solutions along the PF is time-varying, which makes it difficult for DMOEAs to find Pareto-optimal solutions that maintain a good distribution over time. For dMOP3 in the dMOP test suite, the variables that control the spread of the PF are time-changing, which makes it difficult for DMOEAs to maintain population diversity, especially when the environment changes. The F test suite is composed of ten problems, four of which (F1-F4) are chosen from the FDA and dMOP test suites. This paper utilizes the other test instances (F5-F10), which have nonlinear linkages between the decision variables. Unlike the test problems with smoothly changing environments, the environmental changes of F9 and F10 are sharp and irregular.
As described in Sect. 3.2, the underlying MOEA adopted in the proposed DMOEF-MS is NSGA-II. To evaluate the effectiveness of the proposed strategies, DMOEF-MS is compared with two dynamic NSGA-II algorithms, namely DNSGA-II-A and DNSGA-II-B [12]. Furthermore, for a comprehensive assessment, the proposed DMOEF-MS is also compared with four state-of-the-art DMOEAs: MOEA/D with a hybrid of memory and prediction strategies (MOEA/D-HMPS) [18], the quantum immune clonal coevolutionary algorithm based on competition and cooperation among multiple populations (QICCA) [21], the steady-state and generational evolutionary algorithm (SGEA) [37], and Kalman Filter prediction implemented in MOEA/D-DE (MOEA/D-KF) [36]. The parameter settings used in the test problems and the comparative algorithms are as follows. For all test problems, the index of environments t is calculated as

t = (1/n_t) · ⌊g / τ_t⌋, (10)

where g is the number of generations, and n_t and τ_t are the severity and frequency of the environmental changes, respectively. In this paper, n_t is set to 10 and τ_t is set to 300. The maximal number of function evaluations is set to 30,000 before the environmental change occurs, and the environment changes 40 times in each run. To obtain statistical results, each algorithm is run 20 times independently on each test problem. For all algorithms, the population size is set to 100 for 2-objective problems and 105 for 3-objective problems. Specifically, the population sizes of the multiple populations in DMOEF-MS are shown in Table 2. Following MOEA/D-KF and MOEA/D-HMPS, 10 individuals, which are randomly generated in the search space, are reevaluated to detect environmental changes in DMOEF-MS. The construction method of the weight vectors used in DMOEF-MS follows Das and Dennis [44], which provides an efficient way to produce a set of points that are evenly distributed in a multidimensional space.
In the proposed algorithm, the empirical setting of the number of weight vectors is shown in Table 2. Moreover, the parameter s in Algorithm 1 is set to 5, which means Steffensen's diversity-maintenance strategy is implemented at intervals of 5 generations. In DNSGA-II-A and DNSGA-II-B, the re-initialization parameter ζ is set to 20, and the hypermutation rate is 0.5.
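The Das and Dennis [44] construction referenced above can be sketched with a stars-and-bars enumeration of all weight vectors whose components are multiples of 1/h and sum to 1. The function name is illustrative; the actual numbers of weight vectors used by DMOEF-MS are those given in Table 2.

```python
from itertools import combinations

def das_dennis_weights(m, h):
    """All m-dimensional weight vectors with components in {0, 1/h, ..., 1}
    summing to 1 (the simplex-lattice design of Das and Dennis).
    Produces C(h + m - 1, m - 1) evenly distributed vectors."""
    weights = []
    # Place m-1 dividers among h + m - 1 slots (stars and bars).
    for dividers in combinations(range(h + m - 1), m - 1):
        prev, parts = -1, []
        for d in dividers:
            parts.append(d - prev - 1)  # stars between consecutive dividers
            prev = d
        parts.append(h + m - 2 - prev)  # stars after the last divider
        weights.append([p / h for p in parts])
    return weights
```

For example, m = 3 objectives with h = 13 divisions yields C(15, 2) = 105 vectors.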

Performance metrics
In this paper, the following metrics are adopted to assess the performance of algorithms in terms of convergence and diversity.

Modified inverted generational distance
Modified inverted generational distance (MIGD) [27] is derived from the inverted generational distance (IGD) [45,46], which can be calculated according to Eq. (11) and is widely used to evaluate the performance of MOEAs in static environments. In Eq. (11), the distance term is the Euclidean distance between a reference point in P*_t and its nearest solution in P_t. P*_t is a set of points that are evenly distributed along the ideal PF in the t-th environment, and |P*_t| is the number of members in P*_t. P_t is the set of Pareto-optimal solutions obtained by the algorithm under assessment. IGD assesses the performance of an algorithm in terms of both diversity and accuracy: if IGD = 0, every point in P*_t is matched by a solution in P_t, so the obtained Pareto-optimal solutions lie exactly on the ideal PF and do not miss any part of the whole PF. As shown in Eq. (12), MIGD averages the IGD values over all environments of a run, where |T| is the number of environmental changes in a single run.
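The two metrics can be sketched directly from these definitions: IGD averages, over the reference points sampled from the ideal PF, the distance to the nearest obtained solution, and MIGD averages IGD over the environments of a run. Function names are illustrative.

```python
import numpy as np

def igd(ref_front, obtained_front):
    """IGD in the sense of Eq. (11): mean, over reference points P*_t on the
    ideal PF, of the Euclidean distance to the nearest solution in P_t."""
    ref = np.asarray(ref_front, dtype=float)
    obt = np.asarray(obtained_front, dtype=float)
    # Pairwise distances between every reference point and every solution.
    dists = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

def migd(ref_fronts, obtained_fronts):
    """MIGD in the sense of Eq. (12): mean IGD over the |T| environments."""
    return float(np.mean([igd(r, p) for r, p in zip(ref_fronts, obtained_fronts)]))
```

IGD = 0 exactly when every reference point coincides with an obtained solution, matching the interpretation above.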

Modified hypervolume
Modified hypervolume (MHV) [18] is an extension of HV [47] and can be calculated as shown in Eq. (13). HV computes the area of the hypervolume constructed by the obtained P_t and a reference point z*. For minimization problems, z*_i is the maximum objective value for the i-th objective function in the t-th environment, and M is the dimension of the objective space. Supposing M = 2, the computation of HV is illustrated in Fig. 4.
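For the two-objective case illustrated in Fig. 4, HV reduces to summing rectangular strips between consecutive points of the front and the reference point. A minimal sketch for minimization, assuming `front` is a mutually nondominated set (function names are illustrative):

```python
def hypervolume_2d(front, ref):
    """HV for a 2-objective minimization problem: the area dominated by the
    nondominated set `front` and bounded by the reference point `ref`."""
    pts = sorted(front)            # ascending f1, hence descending f2
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # Rectangular strip between this point and the previous f2 level.
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

def mhv_2d(fronts, refs):
    """MHV in the sense of Eq. (13): mean HV over the environments of a run."""
    return sum(hypervolume_2d(f, r) for f, r in zip(fronts, refs)) / len(fronts)
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) covers three strips of areas 3, 2, and 1, giving HV = 6.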

Investigation of parameter s in DMOEF-MS
In DMOEF-MS, s controls the frequency of applying Steffensen's diversity-maintenance strategy. If s is too large, the strategy is applied too rarely to function adequately. However, if s is too small, too many function evaluations are spent on maintaining population diversity. To demonstrate the influence of s on the performance of DMOEF-MS, FDA1 and dMOP2 are taken as examples, with s set to 1, 3, 5, 7, and 9, respectively. Figure 5 shows the mean MIGD and MHV obtained by DMOEF-MS with s changing from 1 to 9 for FDA1 and dMOP2 over 20 independent runs. It can be seen from Fig. 5 that, as s decreases, MIGD and MHV tend to improve at first and then deteriorate severely. If s is too small, DMOEF-MS spends too many function evaluations on Steffensen's diversity-maintenance strategy, and the other strategies in DMOEF-MS cannot function adequately, since the maximum number of function evaluations per environment is fixed in this paper. If s is too large, DMOEF-MS performs poorly in maintaining population diversity. As Fig. 5 shows, DMOEF-MS with s = 5 performs best in terms of both MIGD and MHV for FDA1. For dMOP2, DMOEF-MS with s = 5 also has the best performance in terms of MHV. Therefore, s is set to 5 in this paper.

Performance comparison with DNSGA-II-A and DNSGA-II-B

Tables 3 and 4 present the statistical results of the three algorithms. For FDA3 and FDA5, the density distribution of the Pareto-optimal solutions along the PF changes over time. However, Steffensen's diversity-maintenance strategy adopted in DMOEF-MS uses a set of evenly distributed weight vectors to obtain the Pareto-optimal solutions. As a result, Steffensen's diversity-maintenance strategy may weaken the performance of DMOEF-MS on these problems, as shown in Fig. 6b. For F8, DMOEF-MS performs worse than DNSGA-II-B in terms of MIGD; however, the MHV value obtained by DMOEF-MS is the best among the three algorithms.

To better compare the performance of the k = 3 algorithms on all N = 14 test problems, the Friedman test and the pairwise post hoc Bonferroni-Dunn test [48] are conducted on the statistical results. Table 5 gives the average ranks achieved by the three algorithms on all test problems in terms of MIGD and MHV; the boldface numbers are the best rank values obtained by DNSGA-II-A, DNSGA-II-B, and DMOEF-MS. As shown in Table 5, for both MIGD and MHV, the Friedman statistic (F_F) values are greater than the critical value of the F-distribution with (k − 1) and (k − 1) × (N − 1) degrees of freedom at the 90% confidence level (F_0.1(2, 26) = 2.52). In this case, the Friedman test reports significant differences among the comparative algorithms. According to Table 5, the differences between the average ranks of DMOEF-MS and the other comparative algorithms are greater than 0.74, which means DMOEF-MS performs better than DNSGA-II-A and DNSGA-II-B at the 90% confidence level in terms of both MIGD and MHV. In other words, as shown in Fig. 6, DMOEF-MS generates solutions more efficiently and stably than the other two algorithms for most test problems.
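The average ranks and the F_F statistic reported in Table 5 follow the usual Friedman/Iman-Davenport procedure: rank the k algorithms on each of the N problems, average the ranks per algorithm, and convert the Friedman chi-square into an F-distributed statistic. The sketch below uses a purely illustrative results matrix (not the paper's data) and ignores ties for simplicity.

```python
import numpy as np

# Hypothetical MIGD results: rows = test problems (N), cols = algorithms (k).
# Smaller is better. These numbers are illustrative only.
results = np.array([
    [0.031, 0.027, 0.012],
    [0.045, 0.040, 0.020],
    [0.018, 0.022, 0.015],
    [0.060, 0.055, 0.030],
])
N, k = results.shape

# Rank algorithms on each problem (rank 1 = best, i.e., smallest MIGD).
ranks = results.argsort(axis=1).argsort(axis=1) + 1
avg_ranks = ranks.mean(axis=0)

# Friedman chi-square, then the Iman-Davenport F_F statistic, which is
# compared against F_alpha((k - 1), (k - 1) * (N - 1)).
chi2 = 12 * N / (k * (k + 1)) * (np.sum(avg_ranks**2) - k * (k + 1) ** 2 / 4)
f_f = (N - 1) * chi2 / (N * (k - 1) - chi2)
```

With k = 3 and N = 14 as in Table 5, the critical value is F_0.1(2, 26) = 2.52, and an average-rank gap larger than the Bonferroni-Dunn critical difference indicates a significant pairwise difference.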

Effects of the key operators in DMOEF-MS
The key operators adopted in DMOEF-MS are Steffensen's prediction strategy, Steffensen's diversity-maintenance strategy, and the reinitialization strategy. Since the multiple populations in DMOEF-MS are reinitialized based on Steffensen's diversity-maintenance strategy, this section studies the contributions of Steffensen's prediction and diversity-maintenance strategies. Two variants of DMOEF-MS are adopted to demonstrate the effectiveness of these two strategies: one is DMOEF-MS without Steffensen's prediction strategy, and the other is DMOEF-MS without Steffensen's diversity-maintenance strategy. In the latter variant, the multiple populations are reinitialized randomly in the search space when the environment changes. Two problems, namely FDA1 and dMOP2, are tested in this section. Figure 7 shows the mean MIGD values obtained by the three algorithms over 20 independent runs. It can be seen from Fig. 7 that the performance of DMOEF-MS without Steffensen's prediction strategy is significantly worse than that of DMOEF-MS on FDA1 and dMOP2. This may indicate that the proposed prediction strategy does help the algorithm find more promising solutions throughout the whole run. The performance of DMOEF-MS without Steffensen's diversity-maintenance strategy is also worse than that of DMOEF-MS on FDA1 and dMOP2. Therefore, Steffensen's diversity-maintenance strategy can also improve the performance of DMOEF-MS in terms of MIGD. As shown in Fig. 7, Steffensen's prediction strategy may play the greater role in improving the performance of DMOEF-MS.

Influence of the severity and frequency of environmental changes
To investigate the performance of DMOEF-MS on DMOPs with different change severity and frequency, FDA1 and dMOP2 are tested in this section. To study the influence of n_t (the severity of environmental change), τ_t (the frequency of environmental change) is fixed to 30, and n_t is set to 5, 10, and 20, respectively. In this section, the number of environmental changes in a single run is set to 120. The mean values and standard deviations of MIGD over 20 independent runs on FDA1 and dMOP2 are shown in Fig. 8. It can be observed from Fig. 8 that the performance of DMOEF-MS becomes better as n_t increases. Similarly, to investigate the influence of τ_t, n_t is fixed to 10 and τ_t is set to 10, 20, and 30, respectively. The mean values and standard deviations of MIGD over 20 independent runs on FDA1 and dMOP2 are shown in Fig. 9. As Fig. 9 shows, the performance of DMOEF-MS also becomes better as τ_t increases. On the one hand, the adopted multipopulation structure and Steffensen's diversity-maintenance strategy are effective for maintaining population diversity; on the other hand, the reinitialization strategy improves the ability of DMOEF-MS to adapt to new environments.

Performance comparison with other DMOEAs

Although Steffensen's prediction strategy adopted in DMOEF-MS can help the algorithm find accurate solutions efficiently, the experimental results in Tables 6 and 7 show that DMOEF-MS does not achieve the best performance for some test problems. For FDA3 and FDA5, the density distribution of the Pareto-optimal solutions along the PF changes over time, which limits the effectiveness of the evenly distributed weight vectors used in DMOEF-MS. In QICCA, the coevolutionary competitive and cooperative operators, which can obtain uniform and diverse solutions, help QICCA find good approximations of the ideal solutions. Therefore, DMOEF-MS performs relatively worse than QICCA on these problems.

Table 8 shows the average ranks of MOEA/D-HMPS, QICCA, SGEA, MOEA/D-KF, and DMOEF-MS by the Friedman test. The boldface numbers are the best results achieved by the five algorithms. It can be seen from Table 8 that the statistic value F_F is greater than F_0.1(5 − 1, (5 − 1) × (14 − 1)). Therefore, there are significant differences among the five algorithms at the 90% confidence level. As shown in Table 8, the proposed DMOEF-MS achieves the best and second-best average ranks in terms of MIGD and MHV, respectively.

Running time
This section demonstrates the running times of the algorithms on three two-objective problems (FDA2, FDA4, and dMOP2) and two three-objective problems (F5 and F8). This experiment is carried out on a personal computer (Intel(R) Core(TM) i7-8700K CPU @ 3.70 GHz, 32.0 G RAM) and each algorithm runs 20 times independently. Table 9 gives the running times of the algorithms.
It can be observed from Table 9 that MOEA/D-KF is the most time-consuming algorithm, since its prediction model is more complex than those of the others. MOEA/D-HMPS, SGEA, and DMOEF-MS take relatively less time to predict the next locations of the candidate solutions. As shown in Table 9, the times taken by MOEA/D-HMPS and DMOEF-MS are relatively close. SGEA takes more time than MOEA/D-HMPS and DMOEF-MS because its generational environmental selection method spends additional time on preserving good solutions for the next generation. QICCA takes more time than DMOEF-MS because the quantum updating operator and the competitive-cooperative operator take more time to improve the quality of the obtained solutions.

Concluding remarks
This paper proposes a dynamic multiobjective evolutionary framework (DMOEF-MS), which adopts a novel multipopulation structure and Steffensen's method to handle DMOPs. Among the multiple populations in DMOEF-MS, one population evolves by the classic NSGA-II, while the others evolve according to the proposed prediction strategy based on Steffensen's method. Moreover, Steffensen's diversity-maintenance strategy is introduced to increase population diversity at fixed intervals of generations before the environment changes and to reinitialize the multiple populations after the environment changes. DMOEF-MS has been compared with several state-of-the-art DMOEAs on widely used DMOPs. The experimental results demonstrate that DMOEF-MS can track the moving Pareto-optimal solutions efficiently on most of the test problems. Specifically, the multiple populations handle different optimization problems and cooperate to obtain well-distributed Pareto-optimal solutions. Steffensen's prediction strategy accelerates convergence by predicting the next positions of individuals efficiently. Meanwhile, Steffensen's diversity-maintenance strategy increases population diversity by using a set of evenly distributed weight vectors.
However, as Table 9 shows, DMOEF-MS takes more time than some of the comparative algorithms. The main reason is that the prediction strategy adopted in DMOEF-MS sometimes needs to consider all dimensions of an individual, as shown in Algorithm 2, so DMOEF-MS takes a relatively long time to predict the next positions of individuals. To improve the efficiency of DMOEF-MS, possible future directions are listed below.
1. A potential weakness of this paper is the lack of correlation analysis of the decision variables, which could be used to divide the decision variables into several groups. As discussed in Sect. 3.3.3, the proposed prediction strategy costs at most 2 × N function evaluations to produce each offspring. Combining Steffensen's prediction strategy with correlation analysis of the decision variables might be a way to improve the time efficiency of DMOEF-MS.
2. The main framework of DMOEF-MS optimizes multiple problems, including the constructed single-objective problems and the original DMOP, with multiple populations. In this case, the method of constructing the single-objective problems plays an important role in the performance of DMOEF-MS. Therefore, analyzing different methods of constructing single-objective problems is also a direction for future work.
3. The maintenance of population diversity plays a crucial role in DMOEAs. Therefore, investigating other efficient methods to update the repository population, such as DAA [49], is also a possible direction for future work.