A pairwise ranking estimation model for surrogate-assisted evolutionary algorithms

Surrogate-assisted evolutionary algorithms (SAEAs) have attracted considerable attention for reducing the computation time required by an EA on computationally expensive optimization problems. In such algorithms, a surrogate model estimates the solution evaluation with a low computing cost and is used to obtain promising solutions to which the accurate evaluation with an expensive computation cost is then applied. This study proposes a novel pairwise ranking surrogate model called the Extreme Learning-machine-based DirectRanker (ELDR). ELDR integrates two machine learning models: extreme learning machine (ELM) and DirectRanker (DR). ELM is a single-layer neural network capable of fast learning, whereas DR uses pairwise learning to rank using a neural network developed mainly for information retrieval. To investigate the effectiveness of the proposed surrogate model, this study first examined the estimation accuracy of ELDR. Subsequently, ELDR was incorporated into a state-of-the-art SAEA and compared with existing SAEAs on well-known real-valued optimization benchmark problems. The experimental results revealed that ELDR has a high estimation accuracy even on high-dimensional problems with a small amount of training data. In addition, the SAEA using ELDR exhibited a high search performance compared with other existing SAEAs, especially on high-dimensional problems.


Introduction
Evolutionary Algorithms (EAs) are population-based optimization methods that are applied to many real-world problems. However, because real-world applications often require expensive fitness evaluations owing to simulations or complex numerical calculations, the computational cost of EAs is typically high. To reduce the computing cost of EAs, surrogate-assisted EAs (SAEAs) have been studied [1,2] and applied to real-world applications such as aerospace engineering [3,4], vehicle design [5], and manufacturing process optimization [6]. An SAEA utilizes a surrogate model that estimates fitness in place of a computationally expensive fitness function and finds promising solutions for actual fitness evaluation. Because surrogate estimation is computationally cheaper than actual fitness evaluation, the execution time of an SAEA is reduced compared to that of a conventional EA.
Corresponding author: Tomohiro Harada (harada@tmu.ac.jp), Faculty of System Design, Tokyo Metropolitan University, Hino, Tokyo, Japan.
Three types of surrogate models are used in SAEAs [7]: (1) a regression model that directly estimates the fitness evaluation, (2) a classification model that estimates the relative acceptability compared to a reference value rather than a fitness evaluation, and (3) a ranking model that estimates the relative superiority of solutions compared with each other. This study proposes a novel ranking-based surrogate model that exploits the fact that general EAs can perform parent selection and survival selection based only on the superiority of solutions. Specifically, this paper proposes the Extreme Learning-machine-based DirectRanker (ELDR), which combines an extreme learning machine (ELM) [8,9], a type of single-layer neural network (NN) with fast learning capability, with DirectRanker (DR) [10], a pairwise ranking method based on an NN developed primarily for information retrieval.
To investigate the effectiveness of ELDR, its prediction accuracy was first analyzed by comparing it with other surrogate models used in previous research. Then, to confirm its capability in an SAEA, it was incorporated into a state-of-the-art SAEA, specifically surrogate-assisted hybrid optimization (SAHO) [11], and the resulting ELDR-SAHO was compared with recent SAEA methods, including SAHO.
The remainder of this paper is organized as follows: "Related Work" discusses related work on SAEAs. "Proposed Method" describes the proposed pairwise ranking surrogate model, ELDR, in detail. "Example Application of ELDR to SAEA: ELDR Surrogate-Assisted Hybrid Optimization" presents the details of the ELDR-assisted SAEA; in particular, this study employed SAHO, a state-of-the-art SAEA. "Preliminary Experiment: Accuracy of ELDR" analyzes the parameter sensitivity of ELDR and compares its estimation accuracy with that of conventional surrogate models used in existing SAEAs. "Numerical Experiments" outlines the numerical experiments conducted to investigate the effectiveness of ELDR-assisted SAHO and analyzes the results obtained. Finally, "Conclusion and Future Work" presents concluding remarks and outlines possible future work.

Related work
This section reviews surrogate models used in SAEAs in previous studies. This study focuses primarily on single-objective real-valued optimization problems but also refers to some studies on multi-objective optimization and discrete optimization, including genetic programming.
In previous studies, the mainstream approach has been SAEAs using regression models. In particular, most previous works [12][13][14][15] used the radial basis function (RBF) [16] as a surrogate model. Other SAEAs that use the Kriging model [17,18], Gaussian process regression [19], and the nearest neighbor method [20] have also been proposed. Pavelski et al. proposed ELMOEA/D [21], a surrogate-assisted multi-objective evolutionary algorithm that uses ELM as a regression surrogate model.
Recent studies have proposed classification-based SAEAs. Pan et al. [22] proposed a classification-based SAEA (CSEA) that learns the dominance relationship between candidate solutions and reference solutions using an artificial neural network (ANN) [23]. Sonoda et al. [24] used a support vector machine (SVM) [25] within the decomposition-based MOEA/D to classify whether an offspring solution is better than its parent solution for each aggregation function. Wei et al. [26] proposed a classifier-assisted level-based learning swarm optimizer (CA-LLSO) that uses a multiclass gradient boosting classifier (GBC) [27] to classify the swarm (population) into different levels for applying LLSO [28].
In contrast to the regression and classification models, few studies have reported ranking-based SAEAs. Ranking models are usually applied for pre-selection [7], for example, to estimate the population ranking in CMA-ES [29] in the works of Runarsson [30] and Loshchilov et al. [31]. Lu et al. [32] proposed Differential Evolution (DE) [33] with surrogate-assisted self-adaptation (DESSA), which uses ranking SVM (RankSVM) [34] to select the most promising trial vector. In recent years, Hao et al. [35] proposed a ranking model that uses a vector concatenating two solutions as input and demonstrated that the ranking model achieves higher estimation accuracy than regression and classification models with a small number of training samples. Another work by Hao et al. [36] proposed a similar ranking model based on a neural network that estimates the dominance relationship for solving multi-objective optimization problems. However, there is limited research on high-performance SAEAs using ranking models compared with SAEAs using regression and classification models.
Because the ranking model estimates the dominance relation between solutions rather than the objective function value, it can be applied to a broader range of problem domains where regression and classification models are not applicable. One example is constrained optimization, where parental or survival selection methods (e.g., feasibility rules [37] or the ε-constrained method [38]) can be defined without estimating all the constraint values. Another example is subjective human evaluation in interactive EAs [39], which generally relies on relative rather than quantitative assessment. Therefore, developing a useful ranking surrogate model is more effective than regression and classification models for applying SAEAs to such different problem domains.

Proposed method
This section first introduces ELM [8,9] and DR [10], which are the components of ELDR. Then, the operation of ELDR, which integrates these two techniques, is explained.

Extreme learning machine
ELM is a single-layer, fully connected NN; its topology is illustrated in Fig. 1. Given a d-dimensional input x, ELM with L hidden neurons calculates the output y as

y = Σ_{i=1}^{L} β_i h(w_i · x + b_i),   (1)

where h indicates an activation function, and w_i and b_i indicate the weight vector and bias value from the input to the i-th hidden neuron (1 ≤ i ≤ L). The value β_i represents the weight from the i-th hidden neuron to the output. The most notable feature of ELM is that it randomly assigns the hidden-layer weights W and biases b and does not learn them; the output weights β are the only parameters learned. For training data of N input-output pairs {(x_j, t_j)}_{j=1}^{N}, (1) can be expressed as Hβ = T, where H is the N × L hidden-layer output matrix with H_{ji} = h(w_i · x_j + b_i), β = (β_1, ..., β_L)^T, and T = (t_1, ..., t_N)^T. The output weights β are calculated by the following equation using a pseudo-inverse matrix operation with a regularization term [40]:

β = H^T (I/C + H H^T)^{-1} T,

when the number of training samples is not large. Conversely, when the number of training samples is large (N ≫ L), the following alternative formula is used:

β = (I/C + H^T H)^{-1} H^T T.   (2)

The output weights of ELM can thus be calculated quickly by a pseudo-inverse matrix operation without any iterative procedure. The only user parameters are the activation function h, the regularization coefficient C, and the number of hidden neurons L, which makes parameter tuning easy. Furthermore, the activation function need not be differentiable, which provides flexibility; the Sigmoid, Gaussian, and Multiquadric functions are generally employed in ELM. The advantages of ELM over conventional machine learning models such as NNs and SVMs reported in previous works [41,42] include the following: • The training cost is low because ELM does not require the iterative procedures of backpropagation.
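As a concrete illustration, the ELM training procedure above can be sketched in a few lines of Python with NumPy. This is a minimal sketch, not the authors' MATLAB implementation; the function names are illustrative, the Sigmoid activation is assumed, and both regularized pseudo-inverse forms are shown:

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def train_elm(X, T, L, C, rng=None):
    """Train a regularized ELM: hidden weights W and biases b are random
    and fixed; only the output weights beta are solved for."""
    rng = np.random.default_rng(rng)
    N, d = X.shape
    W = rng.uniform(-1.0, 1.0, size=(d, L))  # random input weights (never trained)
    b = rng.uniform(0.0, 1.0, size=L)        # random biases (never trained)
    H = sigmoid(X @ W + b)                   # hidden-layer output matrix, (N, L)
    if N <= L:
        # small-sample form: beta = H^T (I/C + H H^T)^{-1} T
        beta = H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, T)
    else:
        # large-sample form, Eq. (2): beta = (I/C + H^T H)^{-1} H^T T
        beta = np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta
```

Note that each branch only ever inverts the smaller of the two Gram matrices (N × N or L × L), which is what makes the closed-form training fast.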

DirectRanker
DR is an NN-based pairwise ranking method that implements a quasi-order ⪰ on a feature space F such that the induced ranking is unique. In particular, the quasi-order satisfies the following three conditions for all x, y, z ∈ F:
1. Reflexivity: x ⪰ x
2. Antisymmetry: x ⋡ y ⇒ y ⪰ x
3. Transitivity: x ⪰ y ∧ y ⪰ z ⇒ x ⪰ z
This quasi-order can be defined by a ranking function r : F × F → R as follows: x ⪰ y ⇔ r(x, y) ≥ 0.
The ranking function r(x, y) in DR is constructed from two subnetworks nn and an output layer. The subnetworks nn have the same structure and share their weights. The output layer calculates the output from the difference between the outputs of the two subnetworks, the output weights w, and the activation function τ as follows:

r(x, y) = τ(w^T (nn(x) − nn(y))),   (3)

where the activation function τ : R → R satisfies the conditions τ(−x) = −τ(x) and sgn(τ(x)) = sgn(x); the original work [10] used the hyperbolic tangent (tanh). The function expressed in (3) satisfies the three conditions that the quasi-order must satisfy; please refer to the original work [10] for details.

Extreme learning machine-based DirectRanker (ELDR)
Assuming that DR is applied to an SAEA as a ranking-based surrogate model, the high computational cost of training DR may become a problem, because DR optimizes the NN weights through iterative backpropagation.
To address this issue, this study proposes a novel pairwise ranking method, ELM-based DR (ELDR), which constructs the subnetworks of DR with an ELM, enabling quick training by the pseudo-inverse matrix operation without the iterative procedures of backpropagation. Figure 2 illustrates the topology of ELDR. ELDR takes two d-dimensional inputs, x and y, and calculates the output r from the subtraction of the two hidden layers. ELDR employs the random-weight single-layer network of (1) as the DR subnetwork and defines the ranking function r_ELDR as follows:

r_ELDR(x, y) = (h(Wx + b) − h(Wy + b)) β,

where W and b are the weight parameters randomly assigned to the shared ELM and β is the only parameter to be trained. Because ELDR uses two subnetworks with the same structure, weights, and activation functions, as in DR, the ranking function r_ELDR also satisfies the three conditions of the quasi-order.
Algorithm 1 shows the training procedure for ELDR. First, all feature vectors x_i are normalized to the range [−1, 1] using the minimum and maximum values in the dataset D. Then, the paired dataset D_pair is constructed from all combinations of two solutions in D, where each pair (x_j^(1), x_j^(2)) is labeled as

t_j = sgn(f(x_j^(2)) − f(x_j^(1))),

where sgn represents the sign function that returns 1, −1, or 0 according to the sign of the input value. Here, assuming minimization of f, t_j is +1 if x_j^(1) is better than x_j^(2), −1 if worse, and 0 if equal. For the constructed training dataset D_pair, ELDR is formulated as

(H^(1) − H^(2)) β = T, i.e., H β = T with H = H^(1) − H^(2),   (4)

where H^(k) is calculated from the k-th feature vector x_j^(k) (k = 1, 2) in the j-th input pair as H^(k)_{ji} = h(w_i · x_j^(k) + b_i). In this study, the weights W and biases b are randomly sampled from uniform distributions in the ranges [−1, 1] and [0, 1], respectively [43]. From (4), the output weights β can be calculated as

β = (I/C + H^T H)^{-1} H^T T,   (5)

with a pseudo-inverse matrix calculation, as in Eq. (2). When two solutions x_1 and x_2 are given for prediction, the output of ELDR is calculated as

t = sgn((h(W x_1 + b) − h(W x_2 + b)) β),

using the weights W and biases b assigned in the training phase and the output weights β obtained by Eq. (5). If t = +1, ELDR predicts that x_1 is better than x_2; otherwise, it predicts that x_2 is better.
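The training and prediction procedure described above can be sketched as follows. This is a minimal illustrative sketch (the Sigmoid activation and the helper names are assumptions, not from the paper):

```python
import numpy as np
from itertools import combinations

def train_eldr(X, f, L, C, rng=None):
    """Train ELDR on a dataset (X, f) for minimization: random shared ELM
    weights; output weights beta solved on pairwise hidden-layer differences."""
    rng = np.random.default_rng(rng)
    N, d = X.shape
    W = rng.uniform(-1.0, 1.0, size=(d, L))   # shared random weights
    b = rng.uniform(0.0, 1.0, size=L)          # shared random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # hidden outputs per solution
    i1, i2 = zip(*combinations(range(N), 2))   # all N(N-1)/2 pairs
    i1, i2 = np.array(i1), np.array(i2)
    Hd = H[i1] - H[i2]                         # H = H^(1) - H^(2)
    t = np.sign(f[i2] - f[i1])                 # +1 if x^(1) is better (smaller f)
    # N_p >> L, so use the large-sample form as in Eq. (2)
    beta = np.linalg.solve(np.eye(L) / C + Hd.T @ Hd, Hd.T @ t)
    return W, b, beta

def eldr_compare(x1, x2, W, b, beta):
    """Return +1 if x1 is predicted better than x2, else -1."""
    h = lambda z: 1.0 / (1.0 + np.exp(-z))
    r = (h(x1 @ W + b) - h(x2 @ W + b)) @ beta
    return 1 if r >= 0 else -1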

Computational complexity
This subsection discusses the computational complexity of training ELDR. Let the size of the dataset D be N and the size of the paired dataset D_pair be N_p = N(N − 1)/2, and let the number of hidden neurons be L. Hence, the size of the matrix H of the hidden layer is N_p × L, and the size of the matrix T representing the training labels is N_p × 1.
First, the time complexity of the hidden-layer calculation is O(NLd) for any activation function. Next, for the calculation of the output weights β, assuming N_p ≫ L in general and computing Eq. (2), the time complexity of H^T H is O(N_p L^2), that of the L × L matrix inversion is O(L^3), and that of H^T T is O(N_p L). The most computationally expensive terms are O(N_p L^2) and O(L^3); because N_p ≫ L, the overall time complexity of ELDR training finally becomes O(N_p L^2) = O(N^2 L^2). Thus, the time complexity of ELDR increases with the product of the square of the dataset size N and the square of the number of hidden neurons L.

Example application of ELDR to SAEA: ELDR surrogate-assisted hybrid optimization
To investigate the applicability of ELDR to SAEAs, this study incorporated ELDR into the state-of-the-art SAEA method SAHO [11] to produce ELDR-SAHO. The original SAHO utilizes RBF as its surrogate model and finds promising solutions by using Differential Evolution (DE) [33] and Teaching-Learning-Based Optimization (TLBO) [44,45] in combination. ELDR-SAHO estimates the dominance relationship between solutions using ELDR instead of predicting the fitness using RBF. This section briefly explains the search algorithms, DE and TLBO, used in SAHO and describes in detail the algorithm employed in ELDR-SAHO.

DE
SAHO uses the DE/best/1 strategy because of its high convergence capability. The DE/best/1 strategy generates a mutant vector v_i for a solution x_i as follows:

v_i = x_best + F (x_{r1} − x_{r2}),   (6)

where x_best represents the best-fitted individual in the current population, r_1 and r_2 (r_1 ≠ r_2 ≠ i) denote random indices, and F is the scaling parameter. Using the mutant vector v_i, a trial vector u_i for the solution x_i is calculated as

u_{i,j} = v_{i,j} if rand(0, 1) ≤ Cr or j = j_rand; otherwise u_{i,j} = x_{i,j},   (7)

where rand(0, 1) produces a uniform random value in the range [0, 1], Cr is the crossover rate, and j_rand denotes a random index in [1, D]. If the fitness value of the trial vector u_i is better than that of the current solution x_i, its position is updated.
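The mutation and crossover steps above can be sketched as follows. This is an illustrative Python sketch with assumed parameter defaults (F = 0.5, Cr = 0.9); in SAHO the trial vectors would be screened by the surrogate rather than the real fitness:

```python
import numpy as np

def de_best_1(pop, fit, F=0.5, Cr=0.9, rng=None):
    """One generation of DE/best/1 with binomial crossover (a sketch)."""
    rng = np.random.default_rng(rng)
    ps, D = pop.shape
    best = pop[np.argmin(fit)]                        # x_best (minimization)
    trials = np.empty_like(pop)
    for i in range(ps):
        r1, r2 = rng.choice([k for k in range(ps) if k != i],
                            size=2, replace=False)    # r1 != r2 != i
        v = best + F * (pop[r1] - pop[r2])            # mutant vector, Eq. (6)
        j_rand = rng.integers(D)
        mask = rng.random(D) <= Cr
        mask[j_rand] = True                           # at least one mutant gene
        trials[i] = np.where(mask, v, pop[i])         # binomial crossover, Eq. (7)
    return trials
```

A greedy replacement loop (keep the trial if it improves on its parent) around this function yields the basic DE search used inside SAHO's DE phase.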

TLBO
TLBO is a population-based search algorithm that simulates teacher instruction and mutual learning in education and consists of a teacher phase and a learner phase.
In the teacher phase, the best solution in the current population is selected as the teacher individual x_teacher, and the positions of the other solutions (students) are updated using the mean of the population x_mean as follows:

x_new,i = x_old,i + r_i (x_teacher − T_F · x_mean),   (8)

where r_i is a uniform random value in [0, 1], and T_F is a variable calculated as T_F = 1 + round(rand(0, 1)) (round rounds off the input value); that is, T_F takes 1 or 2 with equal probability. If the fitness value of an updated position x_new,i is better than that of the original position x_old,i, the position is updated. By contrast, the learner phase updates each solution in the population by interacting with a random solution. For a solution x_i, a random solution x_j (j ≠ i) is selected, and the following equation produces a new position:

x_new,i = x_old,i + r_i (x_i − x_j) if x_i is better than x_j; otherwise x_new,i = x_old,i + r_i (x_j − x_i),   (9)

where r_i denotes a uniform random value in [0, 1]. As in the teacher phase, the position of a solution x_old,i is updated if the fitness value of the updated position x_new,i is better. TLBO alternately repeats the teacher and learner phases until the termination condition is satisfied.
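Both phases can be sketched as below. This is an illustrative Python sketch; in ELDR-SAHO the acceptance comparisons would be made by the surrogate instead of the real fitness function `fn`:

```python
import numpy as np

def tlbo_generation(pop, fit, fn, rng=None):
    """One teacher phase + one learner phase of TLBO (sketch, minimization)."""
    rng = np.random.default_rng(rng)
    ps, D = pop.shape
    # --- teacher phase: move toward the teacher, away from T_F times the mean (8)
    teacher = pop[np.argmin(fit)]
    mean = pop.mean(axis=0)
    for i in range(ps):
        TF = 1 + rng.integers(2)                      # 1 or 2 with equal probability
        new = pop[i] + rng.random() * (teacher - TF * mean)
        fnew = fn(new)
        if fnew < fit[i]:                             # greedy acceptance
            pop[i], fit[i] = new, fnew
    # --- learner phase: learn from a random peer (9)
    for i in range(ps):
        j = rng.choice([k for k in range(ps) if k != i])
        if fit[i] < fit[j]:
            new = pop[i] + rng.random() * (pop[i] - pop[j])
        else:
            new = pop[i] + rng.random() * (pop[j] - pop[i])
        fnew = fn(new)
        if fnew < fit[i]:
            pop[i], fit[i] = new, fnew
    return pop, fit
```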

The ELDR-SAHO Algorithm
The procedure used by the ELDR-SAHO algorithm, in which the proposed ELDR is incorporated into SAHO, is shown in Algorithm 2.

Algorithm 2 ELDR-assisted SAHO (based on Algorithm 1 in [11])
Input: Population size ps, maximum number of fitness evaluations MaxFE, search generations K
Output: Optimal solution
1: Generate initial ps samples using LHS, evaluate them using the real fitness function, and store them in the dataset DB
2: RunDE = True
3: while the number of real fitness evaluations < MaxFE do
4:   Initialize the population with the ps top-ranking samples in DB; gen = 1
5:   while gen ≤ K do
6:     Select the n nearest neighbors of each solution in the population as the training dataset D ⊆ DB
7:     Build the ELDR surrogate model r_ELDR using Algorithm 1
8:     if RunDE then  ▷ Run DE
9:       Generate the trial population using (6) and (7)
10:      Replace a parent individual x_i with its trial vector if r_ELDR predicts the trial to be better
11:    else  ▷ Run TLBO
12:      Perform the teacher phase using (8) and
13:      replace individuals if r_ELDR predicts the new positions to be better
14:      Perform the learner phase using (9) and
15:      replace individuals if r_ELDR predicts the new positions to be better
16:    end if
17:    gen = gen + 1
18:  end while
19:  Use Algorithm 3 to select the individual x_eval for the real fitness evaluation
20:  Evaluate f(x_eval) and add (x_eval, f(x_eval)) to DB
21:  if f(x_eval) is worse than the former best then
22:    RunDE = ¬RunDE  ▷ Reverse RunDE
23:  end if
24: end while
25: return the best solution in DB

ELDR-SAHO first produces ps well-distributed initial samples using Latin hypercube sampling (LHS) [46] and evaluates them using the actual (computationally expensive) fitness function. All evaluated initial samples are stored in the dataset DB. For each iteration, the initial population is generated from the top ps solutions in DB, and K generations are evolved using DE or TLBO. The ELDR surrogate model is trained using a subset D ⊆ DB consisting of the n nearest neighbors in DB of each solution in the current population. Note that the design variables are normalized using the minimum and maximum values in the extracted dataset D.
The variable RunDE determines the algorithm used for each search: DE is performed if RunDE = True, and TLBO is performed if RunDE = False. After the K-generation search, a promising solution for the actual fitness evaluation is selected according to Algorithm 3: the best solution in the population after K generations is compared with the mean x_mean of the top-r sub-population (r ∈ [1, ps]), and the solution that the ELDR surrogate predicts to be better is selected as the promising solution. The selected promising solution is then evaluated using the computationally expensive fitness function. If the promising solution is better than the best solution found so far, the current search algorithm (DE or TLBO) continues to be used; otherwise, the search algorithm is switched.
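The selection rule of Algorithm 3 can be sketched as follows. The helper `surrogate_better` is an illustrative stand-in for a comparison via r_ELDR, and the population is assumed already sorted best-first by the surrogate:

```python
import numpy as np

def select_for_evaluation(pop, surrogate_better, r):
    """Sketch of the promising-solution selection (cf. Algorithm 3):
    compare the predicted-best solution with the mean of the top-r
    sub-population, and return whichever the surrogate prefers.
    surrogate_better(a, b) -> True if a is predicted better than b."""
    x_best = pop[0]                 # predicted-best individual
    x_mean = pop[:r].mean(axis=0)   # mean of the top-r sub-population
    return x_best if surrogate_better(x_best, x_mean) else x_mean
```

The mean of the top-r sub-population acts as an exploitation candidate near the current elite region; the surrogate decides which of the two is worth one expensive real evaluation.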
The essential procedure is the same as that of the original SAHO with the RBF surrogate model, differing only in Steps 7, 10, 13, and 15 of Algorithm 2 and Steps 1 and 5 of Algorithm 3, where the ELDR surrogate model is used to predict the dominance relationship.

Preliminary experiment: accuracy of ELDR
This section analyzes the estimation accuracy of the proposed ELDR surrogate model. The first experiment explores the parameter sensitivity of ELDR with respect to the activation function h, the number of hidden neurons L, and the regularization coefficient C. Then, "Comparison with other surrogate models" compares the estimation accuracy of ELDR with that of RBF and RankSVM, two surrogate models commonly used in previous SAEA research. Finally, the computation times of these surrogate models are compared in "Computation time".

Experimental settings
This study used the eight single-objective continuous optimization benchmarks shown in Table 1. These functions have been widely used to investigate the performance of SAEAs in recent research and have different fitness-landscape characteristics. The dimensions of the design variables were set to d = 20 and 100 to compare surrogate-model performance on small and large problems: RBF is known to have high estimation accuracy for problems of up to 20-30 dimensions, whereas 100 dimensions is considered large-scale in the SAEA domain.
The training datasets consisted of N_train = 2d and 5d (d is the dimension) randomly sampled data points. The prediction accuracy was assessed using Kendall's rank correlation coefficient on 20 independent pairs of training and test data, and the average rank correlation was compared. Kendall's rank correlation coefficient (hereafter, Kendall's τ) returns τ = +1 if the predicted and actual ranks agree completely and τ = −1 if they are completely reversed; a higher rank correlation indicates a better rank prediction. For the ranking methods (ELDR and RankSVM), the test data are sorted by the predicted ranking and compared with the actual ranks. For the RBF model, the test data are sorted by the predicted fitness values, and the resulting ranks are compared with the actual ranks (rather than comparing the predicted fitness values themselves). The experiments were executed on a computer with an Intel Xeon W-2295 3.00 GHz CPU and 64 GB RAM using MATLAB R2019a.
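For reference, the evaluation criterion above can be computed with a minimal implementation of Kendall's τ (a sketch that ignores ties, unlike the tie-corrected variants found in statistics libraries):

```python
def kendall_tau(a, b):
    """Kendall's rank correlation between two score lists:
    +1 when the orderings fully agree, -1 when fully reversed."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in a and b
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```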

Comparison of hyperparameters
This subsection reports the experiment conducted to evaluate the different hyperparameters of ELDR: the activation function h, the number of hidden neurons L, and the regularization coefficient C. For the activation function h, the experiment used three functions, Sigmoid (SIG), Gaussian (GAU), and Multiquadric (MQ), as introduced in "Extreme Learning Machine". The number of hidden neurons was set to L ∈ {d + 1, 2d + 1, 3d + 1, ..., 10d + 1} according to the dimension d of the design variable, and the regularization coefficient C was varied over several powers of two.

Because the difficulty of estimation varies for each problem, the scale of the estimation accuracy (i.e., Kendall's τ) also varies. To account for these differences and investigate the overall performance across all problems, this study uses the same index as in the literature [7,48], investigating the influence of the regularization coefficient C and the number of hidden neurons L for each activation function. Let f_k ∈ F denote a problem instance and A_{s_i} denote ELDR with parameter setting s_i ∈ S. The following equation calculates the performance PM(s_i) for each parameter setting s_i:

PM(s_i) = (1 / (|S| − 1)) Σ_{j ≠ i} P(A_{s_i} ⪰ A_{s_j}),   (10)

where P(A_{s_i} ⪰ A_{s_j}) represents the ratio by which A_{s_i} outperforms A_{s_j} and is calculated as

P(A_{s_i} ⪰ A_{s_j}) = (1 / |F|) Σ_{f_k ∈ F} P(q_{i,k} > q_{j,k} | f_k),

where P(q_{i,k} > q_{j,k} | f_k) represents the ratio by which the estimation accuracy of A_{s_i} exceeds that of A_{s_j} on a problem instance f_k, calculated as

P(q_{i,k} > q_{j,k} | f_k) = (1 / (N_i N_j)) Σ_{s=1}^{N_i} Σ_{t=1}^{N_j} I(τ_{i,k,s} > τ_{j,k,t}),

where τ_{i,k,t} represents Kendall's τ for the t-th dataset of problem instance f_k obtained by A_{s_i}, N_i and N_j represent the numbers of datasets evaluated for A_{s_i} and A_{s_j}, and I(·) is the indicator function.

Figure 3 shows the performance metric PM(s_i) calculated using Eq. (10) for each activation function. The horizontal axis shows the regularization coefficient, and the vertical axis shows the number of hidden neurons. The value in each cell indicates the PM(s_i) value for the corresponding parameter setting.
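The performance metric above can be computed as sketched below, given the Kendall's τ values of every setting on every problem and repeated dataset (the array layout is illustrative, not from the paper):

```python
import numpy as np

def performance_metric(tau):
    """Compute PM(s_i) for each parameter setting.
    tau: array of shape (S, K, T) -- settings x problem instances x
    repeated datasets of Kendall's tau values."""
    S, K, T = tau.shape
    PM = np.zeros(S)
    for i in range(S):
        for j in range(S):
            if i == j:
                continue
            # P(q_i > q_j | f_k): fraction of dataset pairs where i beats j,
            # computed per problem instance by broadcasting over (T, T) pairs
            p_k = (tau[i][:, :, None] > tau[j][:, None, :]).mean(axis=(1, 2))
            PM[i] += p_k.mean()   # average over problem instances
        PM[i] /= (S - 1)          # average over the other settings
    return PM
```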
The result in Fig. 3a indicates that when using the Sigmoid function, the number of hidden neurons significantly affects the estimation accuracy, with a larger number of hidden neurons yielding better performance, whereas the effect of the regularization coefficient is small. The combination of L = 10d + 1 and C = 2^{-2} achieves the highest estimation accuracy with the Sigmoid function. However, the differences in the performance metric among the parameter settings are small, which indicates that the optimal parameter setting varies depending on the characteristics of the target problem.
Next, Fig. 3b shows that when using the Gaussian function, both the number of hidden neurons and the regularization coefficient significantly affect the estimation accuracy of ELDR. Specifically, a larger number of hidden neurons tends to yield higher estimation accuracy, and the best performance is obtained for regularization coefficients around C = 2^{-3} to 2^0, especially when the number of neurons is large. Consequently, the combination of L = 10d + 1 and C = 2^{-3} achieves the highest estimation accuracy with the Gaussian function. The differences in the performance metric among the parameter settings are large for the Gaussian function, which indicates that the optimal parameter setting is largely independent of the problem and that the best setting of L = 10d + 1 and C = 2^{-3} yields stable performance.
Finally, focusing on the Multiquadric function, Fig. 3c indicates that small regularization coefficients tend to give stable performance regardless of the number of hidden neurons. In particular, a regularization coefficient of C = 2^{-5} provides the highest estimation accuracy with L ≥ 2d + 1. Although the performance difference between different values of L is slight, the combination of L = 2d + 1 and C = 2^{-5} achieves the highest estimation accuracy with the Multiquadric function. A large difference in the performance metric between parameter settings can be seen, and the choice of C = 2^{-5} appears optimal across problems; hence, the setting of L = 2d + 1 and C = 2^{-5} can be recommended regardless of the problem for the Multiquadric function.

Comparison with other surrogate models
This subsection compares ELDR with RBF and RankSVM. ELDR employed the hyperparameters chosen from the results in the previous subsection; in particular, this comparison used ELDR with the Sigmoid function (L = 10d + 1, C = 2^{-2}), the Gaussian function (L = 10d + 1, C = 2^{-3}), and the Multiquadric function (L = 2d + 1, C = 2^{-5}). RBF uses the cubic basis function (φ(x) = x^3), which has been most commonly used in previous works employing the RBF surrogate. The parameters of RankSVM were chosen from the literature [7].
Table 2 shows the estimation accuracies of ELDR, RBF, and RankSVM; the best and second-best results are highlighted in boldface and underlined, respectively. The bottom row summarizes the average rank over all problem instances and dimensions.
First, it can be seen that ELDR using the Sigmoid function has a significantly lower estimation accuracy than the other methods on the first five benchmarks. However, its accuracy improves on the CEC 2005 benchmarks and exceeds that of the Gaussian activation function on some problems, e.g., the 100-dimensional cases of F 10 and F 16 . Its average rank is nevertheless the lowest among the five surrogate models. This indicates that the Sigmoid function has low estimation accuracy and limited usefulness even when its parameters are optimally configured, although it can be an option for complex composition functions.
Second, ELDRs with the Gaussian and Multiquadric functions outperform the RBF and RankSVM models used in conventional SAEAs and achieve the best or second-best performance on most problems. In particular, ELDR with the Multiquadric function performed strongly, consistent with its stable parameter sensitivity observed above.

Computation time
To evaluate the computation time of the proposed ELDR, the training times of the three methods are compared. Note that because the influence of the regularization coefficient and activation function of ELDR on the computation time is extremely small, only results using the Gaussian activation function with C = 2^{-3} are reported here; it has been confirmed that similar results are obtained for the other activation functions and regularization coefficients. In addition, because the training time does not depend on the characteristics of the problem, the average over all problems is reported. Figure 4 shows the average training time for ELDR, RBF, and RankSVM; the blue, orange, and green lines represent ELDR, RBF, and RankSVM, respectively. In each panel, the horizontal axis represents the number of hidden neurons in ELDR, and the vertical axis represents the training time in seconds.
These figures show that the training time of ELDR increases nearly linearly with the number of hidden neurons L. This growth is slower than the quadratic growth in L implied by the O(N^2 L^2) complexity derived in "Computational complexity". This is because this paper implements ELDR in MATLAB, where the cost of computing H^T H, the most computationally expensive step, is lower than that of a general matrix product (the result is symmetric, so only half of it needs to be computed). This indicates that the practical computational cost of ELDR is smaller than the theoretical one, with the computation time increasing linearly in L in practice.
Next, the computation times of ELDR, RBF, and RankSVM are compared. First, ELDR has the shortest training time when the number of dimensions and the amount of training data are small (d = 20, N_train = 2d = 40). When the number of dimensions and the amount of training data are large, the computation time of ELDR for small L (L = d + 1) is similar to or slightly longer than that of RBF and RankSVM. On the other hand, as the number of hidden neurons L increases, the training time of ELDR becomes longer than that of the other surrogate models. In particular, in the case of d = 100 and N_train = 5d = 500, which has the largest number of dimensions and training samples, the computation time of ELDR with L = 10d + 1 is approximately 60 times longer than that of the other models.
These results indicate that ELDR can be trained in roughly the same time as existing surrogate models such as RBF and RankSVM for low-dimensional, small-sample training. For high-dimensional, large-sample training, ELDR requires a significantly longer training time than the existing models, especially when the number of hidden neurons L is increased. However, the training time of ELDR remains acceptable relative to the expensive solution evaluations targeted by SAEAs, which sometimes require several hours or days each.

Numerical experiments
Numerical experiments were conducted using well-known benchmark problems to investigate the effectiveness of the proposed method. The experiments used the eight single-objective continuous optimization benchmarks listed in Table 1. The dimensions of the problems were set to d = {10, 20, 30, 50, 100}.

Experimental settings
In the experiments, ELDR-SAHO, explained in "Example application of ELDR to SAEA: ELDR surrogate-assisted hybrid optimization", was used as the ELDR-assisted EA. It was compared with the conventional state-of-the-art SAEA methods GORS-SSLPSO [13], FSAPSO [15], SLPSO [49], CA-LLSO [26], and SAHO [11]. GORS-SSLPSO, FSAPSO, SLPSO, and SAHO use a regression model based on RBF, while CA-LLSO uses a classification model based on a gradient boosting classifier (GBC) [50]. Publicly available implementations were used for GORS-SSLPSO (https://github.com/yuhaibo2017/GORS-SSLPSO_code, accessed April 12, 2022), FSAPSO, and SLPSO.

From the preliminary experimental results in "Preliminary experiment: accuracy of ELDR", it was revealed that ELDR using either the Gaussian or Multiquadric function has high potential as a surrogate model; however, the optimal activation function varies depending on the problem. Based on this analysis, model selection through validation is introduced into ELDR-SAHO. Algorithm 4 shows the model selection procedure of ELDR-SAHO. The model selection compares two candidates: the Gaussian function with L = 10d + 1 and C = 2^{-3}, and the Multiquadric function with L = 2d + 1 and C = 2^{-5}. The model with the higher Kendall's τ on the validation set is then used as the surrogate in the search. The validation set for model selection comprises 20% of the data randomly selected from the training dataset. This procedure is used at line 7 of Algorithm 2.

Results
Table 3 shows the mean and standard deviation of the best fitness after the maximum number of fitness evaluations over 20 independent runs. For each benchmark, the best (smallest) value is highlighted in boldface, and the second-best value is underlined. The Mann-Whitney U test with a significance level of 5% was performed, with the results adjusted using the Bonferroni correction [51], to confirm the statistical difference between the proposed method and the other methods. A "+" mark indicates that the proposed method obtained significantly better results than the compared method, a "−" mark indicates significantly worse results, and "≈" indicates that no significant difference was found. The results are summarized at the bottom of the table.
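The statistical procedure described above can be sketched in a few lines with SciPy: one two-sided Mann-Whitney U test per competitor, with the p-values multiplied by the number of comparisons (Bonferroni). The function and variable names are illustrative; the arrays stand in for the 20 best-fitness values per method.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_with_bonferroni(proposed, others, alpha=0.05):
    """Assign +/-/≈ marks to each competitor after Bonferroni-corrected
    Mann-Whitney U tests against the proposed method (minimization)."""
    m = len(others)  # number of comparisons for the correction
    marks = {}
    for name, runs in others.items():
        p = mannwhitneyu(proposed, runs, alternative="two-sided").pvalue
        p_adj = min(p * m, 1.0)          # Bonferroni correction
        if p_adj >= alpha:
            marks[name] = "≈"            # no significant difference
        elif np.median(proposed) < np.median(runs):
            marks[name] = "+"            # proposed significantly better
        else:
            marks[name] = "-"            # proposed significantly worse
    return marks
```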
The experimental results show that ELDR-SAHO achieved the best mean fitness in 21 of 40 cases and the second-best in two cases. Moreover, the proposed method is significantly better than the other algorithms on many benchmarks. However, on the F10 and F16 problems, the proposed method was inferior to SAHO, which uses RBF as a surrogate. In addition, on the low-dimensional (d ≤ 30) Griewank, Rastrigin, and F19 problems, ELDR-SAHO does not achieve good results compared with the other methods. On the other hand, ELDR-SAHO significantly outperformed the other algorithms on the high-dimensional benchmarks, except for Rosenbrock (d = 100), F10, and F19.
Figure 5 shows the transitions of the median objective function values and the interquartile ranges [between the third quartile (Q3) and the first quartile (Q1)] when d = 100. The horizontal axis indicates the number of actual fitness evaluations, whereas the vertical axis shows the objective function value on a logarithmic scale. Note that for the CEC 2005 benchmarks, the difference between the optimum and the obtained value is plotted.
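A plot of this kind can be reproduced with a short Matplotlib sketch: compute Q1, median, and Q3 across runs at each evaluation count, draw the median line, and shade the interquartile band on a log-scaled vertical axis. The function and variable names are illustrative, not from the original code.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_convergence(histories, label):
    """Plot the median best-so-far objective value across runs with the
    interquartile range (Q1-Q3) shaded, on a log-scaled vertical axis.

    `histories` is a (runs x evaluations) array of best objective values.
    """
    evals = np.arange(1, histories.shape[1] + 1)
    q1, med, q3 = np.percentile(histories, [25, 50, 75], axis=0)
    plt.plot(evals, med, label=label)
    plt.fill_between(evals, q1, q3, alpha=0.3)  # interquartile band
    plt.yscale("log")
    plt.xlabel("Number of actual fitness evaluations")
    plt.ylabel("Objective function value")
    return med
```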
First, focusing on the F10 and F16 problems, where ELDR-SAHO was significantly inferior to the other methods on the 100-dimensional problem, the results are comparable to those of the other methods until the middle stage of the search (about 400 fitness evaluations). In the subsequent search, however, the proposed method stagnates, whereas the other methods continue to decrease their objective function values. This suggests that the proposed method becomes trapped in a local optimum from which it cannot escape on these benchmarks.
By contrast, the figures show that ELDR-SAHO rapidly improves the objective function value from the early stage of optimization and continues the search without stagnation on the Ellipsoid, Rosenbrock, Ackley, Griewank, Rastrigin, and F19 problems. This can be attributed to the ability of ELDR to estimate the dominance relationship more accurately than RBF and GBC with a smaller amount of training data. In particular, although the final objective function value of ELDR-SAHO was worse than that of SAHO on the 100-dimensional Rosenbrock function (Fig. 5b), ELDR-SAHO converges to its final result in fewer evaluations than the other algorithms. These results indicate that ELDR-SAHO has better search performance than conventional SAEAs using regression and classification surrogate models on many problems, especially high-dimensional ones. However, the results also suggest that ELDR-SAHO may fall into a local optimum on some problems. Note that although ELDR was applied to SAHO in this study, SAHO is not necessarily the most suitable host for ELDR because SAHO was designed as an SAEA using RBF. In particular, for the F10 and F16 problems, it may be possible to enhance the performance of ELDR by proposing an SAEA with a mechanism to prevent (or escape from) falling into a local optimum. In addition, because ELDR drives the search using only the superiority relation between solutions, it may also be applicable to interactive EAs, in which the fitness function is a human evaluation model. Therefore, ELDR has high potential as a new surrogate model for SAEAs.

Conclusion and future work
This paper proposed a novel ranking-based surrogate model for SAEAs, called ELDR, that combines ELM and DirectRanker. Further, it was incorporated into SAHO, a state-of-the-art SAEA. To investigate the estimation accuracy of ELDR, this study compared it with RBF and RankSVM, which are commonly used in SAEAs. The comparison results indicated that ELDR with an appropriate hyperparameter setting exhibited the highest rank estimation accuracy compared with RBF and RankSVM, especially when the amount of training data was small. To investigate the compatibility of ELDR with SAEAs, ELDR-SAHO was compared with existing SAEAs. The experimental results showed that ELDR-SAHO significantly outperforms existing SAEAs using regression and classification models, including RBF-based SAHO, on many problems, particularly high-dimensional (d ≥ 50) ones. Based on these results, it is concluded that ELDR is a promising surrogate model for SAEAs. However, because the combination of SAHO and ELDR may fall into a local optimum, future research should explore a different SAEA that can fully utilize the power of ELDR.
Because ELDR is an NN-based method, it has the advantage that, unlike RBF, it can be applied to discrete and mixed-variable optimization problems as well as real-valued optimization problems. In addition, a ranking-based surrogate model is useful for constrained and aesthetic optimization. Therefore, future research should examine the application of ELDR to these problem domains.
Input: activation function h, regularization coefficient C, number of hidden neurons L
Output: Trained ELDR model
1: D ← sort the original dataset D according to the objective function values f_i
2: D_pair ← ∅ {paired dataset generation}
3: …
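The paired-dataset generation step can be sketched as follows. The pairing scheme shown here is an assumption consistent with the listing above and with the neighbor size n mentioned in the experimental settings: after sorting by objective value, each solution is paired with its n successors in the sorted order, labeled +1 when the first member of the pair is better (smaller objective) and −1 for the symmetric pair.

```python
def make_pairs(X, f, n):
    """Sketch of paired-dataset generation for pairwise ranking training.

    Sort solutions by objective value (minimization), then pair each
    solution with its n nearest neighbors in the sorted order; the label
    is +1 when the first solution of the pair ranks ahead of the second.
    """
    order = sorted(range(len(f)), key=lambda i: f[i])  # best first
    pairs = []
    for rank, i in enumerate(order):
        for j in order[rank + 1: rank + 1 + n]:
            pairs.append((X[i], X[j], +1))  # x_i ranks ahead of x_j
            pairs.append((X[j], X[i], -1))  # symmetric pair
    return pairs
```

Restricting pairs to sorted-order neighbors keeps the paired dataset linear in n rather than quadratic in the population size.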

Algorithm 3 Selecting the individual for real fitness evaluation (based on Algorithm 3 in [11])
1: Sort individuals in the current population according to the ELDR surrogate model
2: Randomly generate an integer number r in the range [1, ps]
3: Compute the mean of the top subpopulation with the indexes [1, r] as x_mean = (1/r) Σ_{i=1}^{r} x_i
4: Select the best individual in the current population as x_best
5: if ELDR(x_best, x_mean) = +1 then
6: return x_best
7: else
8: return x_mean
9: end if
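The selection step above can be sketched in Python. The two callables are placeholders for the trained ELDR model: `eldr_rank` returns the population sorted best-first by the surrogate, and `eldr_compare(a, b)` returns +1 when `a` is estimated to be better than `b`.

```python
import numpy as np

def select_for_real_evaluation(population, eldr_rank, eldr_compare, rng=None):
    """Sketch of the Algorithm 3 selection: return either the
    surrogate-best individual or the mean of a random-sized top
    subpopulation, depending on which the surrogate ranks higher."""
    rng = np.random.default_rng(rng)
    ranked = eldr_rank(population)           # best-first ordering
    r = rng.integers(1, len(ranked) + 1)     # random top-subpopulation size
    x_mean = np.mean(ranked[:r], axis=0)     # mean of the top-r individuals
    x_best = ranked[0]
    if eldr_compare(x_best, x_mean) == +1:
        return x_best
    return x_mean
```

Comparing `x_best` against the top-subpopulation mean lets the algorithm spend a real evaluation on an exploratory candidate when the surrogate prefers it over the incumbent best.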

Fig. 3 Performance metric PM(s_i) calculated using Eq. (10) for each activation function

Algorithm 4
Model selection procedure in ELDR-SAHO
Input: Dataset D = {(x_1, f_1), …, (x_N, f_N)}
Output: Trained ELDR model
1: Randomly select 80% of D as D_train and the remaining 20% as D_valid
2: Train ELDR_GAU with the Gaussian function, L = 10d + 1, C = 2^−3, using D_train
3: Train ELDR_MQ with the Multiquadric function, L = 2d + 1, C = 2^−5, using D_train
4: Calculate Kendall's τ_GAU of ELDR_GAU and τ_MQ of ELDR_MQ using D_valid
5: if τ_GAU > τ_MQ then
6: return ELDR_GAU
7: else
8: return ELDR_MQ
9: end if

The implementation of SAHO was provided by Pan et al. [11], and ELDR-SAHO was implemented based on it. The experimental setup followed Pan et al. [11]. The population size was set to ps = 5 × d when d = {10, 20, 30} and to ps = 100 + d/10 when d = {50, 100}. The neighbor size n used when training ELDR (Step 6 in Algorithm 2) was set to n = d. The parameter values for DE were Cr = 0.9 and F = 0.5. The number of generations K for DE or TLBO on the surrogate model was set to K = 30. The maximum number of fitness evaluations was MaxFE = 11 × d when d = {10, 20, 30} and MaxFE = 1000 when d = {50, 100}. For the other algorithms, the parameter values recommended in their respective studies were used.
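The dimension-dependent settings stated in the experimental setup can be expressed as a small helper; the function name is illustrative.

```python
def saho_settings(d):
    """Population size ps and maximum number of actual fitness
    evaluations MaxFE as functions of the problem dimension d,
    following the experimental setup described in the text."""
    if d <= 30:          # d in {10, 20, 30}
        ps = 5 * d
        max_fe = 11 * d
    else:                # d in {50, 100}
        ps = 100 + d // 10
        max_fe = 1000
    return ps, max_fe
```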

Fig. 5 Transitions of the median objective function values and the interquartile ranges [between the third quartile (Q3) and the first quartile (Q1)] when the number of dimensions d = 100 (the vertical axis is on a logarithmic scale)

Table 1 Benchmark problems