
A two-stage surrogate-assisted meta-heuristic algorithm for high-dimensional expensive problems


Abstract

This study proposes a two-stage surrogate-assisted meta-heuristic algorithm, named SDAMA-SPS, to solve computationally expensive high-dimensional problems. In this algorithm, a surrogate-assisted monkey algorithm with dynamic adaptation (SDAMA) performs a global search for the best solution in the first stage, and a surrogate-based perturbation search (SPS) performs a more intensive local search for the final optimal solution. In the first stage, a global radial basis function (RBF) surrogate model is constructed from all historical solutions and is used to evaluate the positions of monkeys in the climb and watch-jump processes. This global RBF model is updated with the positions of the monkeys and their real objective function values after the second round of the climb process in each iteration. In the second stage, a local RBF surrogate model is built from a set of current best solutions, which helps to select the most promising sample and thereby further improve the solution found in the first stage. Experimental studies are conducted on eight benchmark optimization problems with dimensions varying from 30 to 100, and the numerical results show that the proposed algorithm outperforms five other state-of-the-art surrogate-assisted algorithms under a limited budget of function evaluations.


Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

  • Arora S, Singh S (2019) Butterfly optimization algorithm: a novel approach for global optimization. Soft Comput 23(3):715–734
  • Büche D, Schraudolph NN, Koumoutsakos P (2005) Accelerating evolutionary algorithms with Gaussian process fitness function models. IEEE Trans Syst Man Cybern Part C Appl Rev 35:183–194
  • Cheng R, Jin Y (2015) A social learning particle swarm optimization algorithm for scalable optimization. Inf Sci 291:43–60
  • Cressie N (1990) The origins of kriging. Math Geol 22:239–252
  • Deb A, Roy JS, Gupta B (2014) Performance comparison of differential evolution, particle swarm optimization and genetic algorithm in the design of circularly polarized microstrip antennas. IEEE Trans Antennas Propag 62(8):3920–3928
  • Díaz-Manríquez A, Toscano G, Coello Coello CA (2017) Comparison of metamodeling techniques in evolutionary algorithms. Soft Comput 21:5647–5663
  • Dong H, Dong Z (2020) Surrogate-assisted grey wolf optimization for high-dimensional, computationally expensive black-box problems. Swarm Evol Comput 57:100713
  • Dyn N, Levin D, Rippa S (1986) Numerical procedures for surface fitting of scattered data by radial functions. SIAM J Sci Stat Comput 7:639–659
  • Eason J, Cremaschi S (2014) Adaptive sequential sampling for surrogate model generation with artificial neural networks. Comput Chem Eng 68:220–232
  • Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the international symposium on micro machine and human science, pp 39–43
  • Emmerich MTM, Giannakoglou KC, Naujoks B (2006) Single- and multiobjective evolutionary optimization assisted by Gaussian random field metamodels. IEEE Trans Evol Comput 10:421–439
  • Gadekallu TR, Srivastava G, Liyanage M et al (2022) Hand gesture recognition based on a Harris hawks optimized convolution neural network. Comput Electr Eng 100:107836
  • Gerrard CE, McCall J, Coghill GM, Macleod C (2014) Exploring aspects of cell intelligence with artificial reaction networks. Soft Comput 18:1899–1912
  • Guo D, Jin Y, Ding J, Chai T (2019) Heterogeneous ensemble-based infill criterion for evolutionary multiobjective optimization of expensive problems. IEEE Trans Cybern 49:1012–1025
  • Gutmann HM (2001) A radial basis function method for global optimization. J Glob Optim 19:201–227
  • Haris M, Zubair S (2022) Mantaray modified multi-objective Harris hawk optimization algorithm expedites optimal load balancing in cloud computing. J King Saud Univ Comput Inf Sci 34:9696
  • Heidari AA, Mirjalili S, Faris H et al (2019) Harris hawks optimization: algorithm and applications. Future Gener Comput Syst 97:849–872
  • Jamil M, Yang XS (2013) A literature survey of benchmark functions for global optimisation problems. Int J Math Model Numer Optim 4:150–194
  • Ji B, Song X, Sciberras E et al (2015) Multiobjective design optimization of IGBT power modules considering power cycling and thermal cycling. IEEE Trans Power Electron 30(5):2493–2504
  • Jia H, Lang C, Oliva D et al (2019) Dynamic Harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sens 11(12):1421
  • Jin Y (2005) A comprehensive survey of fitness approximation in evolutionary computation. Soft Comput 9(1):3–12
  • Jin Y (2011) Surrogate-assisted evolutionary computation: recent advances and future challenges. Swarm Evol Comput 1:61–70
  • Jin R, Chen W, Simpson TW (2001) Comparative studies of metamodelling techniques under multiple modelling criteria. Struct Multidiscip Optim 23:1–13
  • Jones DR (2001) A taxonomy of global optimization methods based on response surfaces. J Glob Optim 21:345–383
  • Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. J Glob Optim 13:455–492
  • Kleijnen JPC (2005) An overview of the design and analysis of simulation experiments for sensitivity analysis. Eur J Oper Res 164:287–300
  • Lesh FH (1959) Multi-dimensional least-squares polynomial curve fitting. Commun ACM 2:29–30
  • Li F, Cai X, Gao L (2019) Ensemble of surrogates assisted particle swarm optimization of medium scale expensive problems. Appl Soft Comput 74:291–305
  • Li F, Cai X, Gao L, Shen W (2021) A surrogate-assisted multiswarm optimization algorithm for high-dimensional computationally expensive problems. IEEE Trans Cybern 51(3):1390–1402
  • Li F, Li Y, Cai X, Gao L (2022) A surrogate-assisted hybrid swarm optimization algorithm for high-dimensional computationally expensive problems. Swarm Evol Comput 72:101096. https://doi.org/10.1016/j.swevo.2022.101096
  • Lim D, Jin Y, Ong YS, Sendhoff B (2010) Generalizing surrogate-assisted evolutionary computation. IEEE Trans Evol Comput 14:329–355
  • Liu B, Zhang Q, Gielen GGE (2014) A Gaussian process surrogate model assisted evolutionary algorithm for medium scale expensive optimization problems. IEEE Trans Evol Comput 18:180–192
  • Liu Y, Liu J, Jin Y (2022) Surrogate-assisted multipopulation particle swarm optimizer for high-dimensional expensive optimization. IEEE Trans Syst Man Cybern Syst 52(7):4671–4684
  • Long W, Jiao J, Xu M et al (2022) Lens-imaging learning Harris hawks optimizer for global optimization and its application to feature selection. Expert Syst Appl 202:117255
  • Ma H, Shen S, Yu M et al (2019) Multi-population techniques in nature inspired optimization algorithms: a comprehensive survey. Swarm Evol Comput 44:365–387
  • Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
  • Mitchell M (1998) An introduction to genetic algorithms (Complex Adaptive Systems). MIT Press
  • Ong YS, Nair PB, Keane AJ (2003) Evolutionary optimization of computationally expensive problems via surrogate modeling. AIAA J 41(4):687–696
  • Qin AK, Suganthan PN (2005) Self-adaptive differential evolution algorithm for numerical optimization. In: Proceedings of the 2005 IEEE congress on evolutionary computation (CEC 2005)
  • Regis RG, Shoemaker CA (2007) A stochastic radial basis function method for the global optimization of expensive functions. INFORMS J Comput 19:497–509
  • Regis RG, Shoemaker CA (2013) Combining radial basis function surrogates and dynamic coordinate search in high-dimensional expensive black-box optimization. Eng Optim 45:529–555
  • Smola AJ, Schölkopf B (2004) A tutorial on support vector regression. Stat Comput 14:199–222
  • Song A, Chen WN, Gu T et al (2021) Distributed virtual network embedding system with historical archives and set-based particle swarm optimization. IEEE Trans Syst Man Cybern Syst 51(2):927–942
  • Spethmann P, Herstatt C, Thomke SH (2006) The impact of crash simulation on productivity and problem-solving in automotive R&D. Working paper
  • Storn R, Price K (1997) Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11:341–359
  • Suganthan PN, Hansen N, Liang JJ et al (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical report, Nanyang Technological University, Singapore, and KanGAL report 2005005, IIT Kanpur, India
  • Sun C, Jin Y, Zeng J, Yu Y (2015) A two-layer surrogate-assisted particle swarm optimization algorithm. Soft Comput 19(6):1461–1475
  • Sun C, Jin Y, Cheng R et al (2017) Surrogate-assisted cooperative swarm optimization of high-dimensional expensive problems. IEEE Trans Evol Comput 21:644–660
  • Takashina K, Ueda K, Ohtsuka T (2009) Investigation of accuracy improvement on crashworthiness simulation with pre-simulation of metal forming. In: Proceedings of the 7th European LS-DYNA conference, Salzburg
  • Tolson BA, Shoemaker CA (2007) Dynamically dimensioned search algorithm for computationally efficient watershed model calibration. Water Resour Res 43
  • Tripathy BK, Maddikunta PKR, Pham QV (2022) Harris hawk optimization: a survey on variants and applications. Comput Intell Neurosci 2022:1–20. https://doi.org/10.1155/2022/2218594
  • Tubishat M, Abushariah MAM, Idris N, Aljarah I (2019) Improved whale optimization algorithm for feature selection in Arabic sentiment analysis. Appl Intell 49(5):1688–1707
  • Tunay M (2021) A new design of metaheuristic search called improved monkey algorithm based on random perturbation for optimization problems. Sci Program 2021:1–14. https://doi.org/10.1155/2021/5557259
  • Wang GG, Shan S (2007) Review of metamodeling techniques in support of engineering design optimization. J Mech Des 129:370–380
  • Wang H, Jin Y, Doherty J (2017) Committee-based active learning for surrogate-assisted particle swarm optimization of expensive problems. IEEE Trans Cybern 47:2664–2677
  • Wang X, Wang GG, Song B et al (2019) A novel evolutionary sampling assisted optimization method for high-dimensional expensive problems. IEEE Trans Evol Comput 23:815–827
  • Wang J, Yu Y, Zeng Y, Luan W (2010) Discrete monkey algorithm and its application in transmission network expansion planning. In: IEEE PES general meeting, PES 2010
  • Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
  • Xiong G, Zhang J, Shi D, He Y (2018) Parameter extraction of solar photovoltaic models using an improved whale optimization algorithm. Energy Convers Manag 174:388–405
  • Yang Z, Qiu H, Gao L et al (2019) A surrogate-assisted particle swarm optimization algorithm based on efficient global optimization for expensive black-box problems. Eng Optim 51:549–566
  • Yi TH, Li HN, Zhang XD (2012) A modified monkey algorithm for optimal sensor placement in structural health monitoring. Smart Mater Struct 21(10):105033
  • Yoon Y, Kim YH (2013) An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks. IEEE Trans Cybern 43(5):1473–1483
  • Yu H, Tan Y, Zeng J et al (2018) Surrogate-assisted hierarchical particle swarm optimization. Inf Sci 454–455:59–72
  • Zhang HL, Chen MR, Li PS, Huang JJ (2022) An improved Harris hawks optimizer combined with extremal optimization. Int J Mach Learn Cybern 1–28
  • Zhao R, Tang W (2008) Monkey algorithm for global numerical optimization. J Uncertain Syst 2:165–176
  • Zhao D, Xue D (2010) A comparative study of metamodeling methods considering sample quality merits. Struct Multidiscip Optim 42:923–938
  • Zheng L (2013) An improved monkey algorithm with dynamic adaptation. Appl Math Comput 222:645–657
  • Zhou Z, Ong YS, Nair PB et al (2007) Combining global and local surrogate models to accelerate evolutionary optimization. IEEE Trans Syst Man Cybern Part C Appl Rev 37:66–76
  • Zhou Y, Chen X, Zhou G (2016) An improved monkey algorithm for a 0–1 knapsack problem. Appl Soft Comput 38:817–830


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 71871227), the Innovation Driven Plan of Central South University (Grant No. 2019CX018), and the Natural Science Foundation of Hunan Province (Grant No. 2021JJ30888).


Author information


Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by LZ, JS and YY. The first draft of the manuscript was written by LZ and JS, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Liang Zheng.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix A: Weighted scoring criterion

Step 1: Prediction score

(a) Use the RBF model \(s_n(\boldsymbol{x})\) (where \(n\) is the number of evaluated points) to estimate all candidate points (denoted by CPs). Additionally, compute \(s_n^{\max} = \max\{s_n(\boldsymbol{x}) : \boldsymbol{x} \in \text{CPs}\}\) and \(s_n^{\min} = \min\{s_n(\boldsymbol{x}) : \boldsymbol{x} \in \text{CPs}\}\).

(b) For each candidate point \(\boldsymbol{x} \in \text{CPs}\), compute its prediction score \(s_n^{\text{pred}}(\boldsymbol{x}) = (s_n(\boldsymbol{x}) - s_n^{\min}) / (s_n^{\max} - s_n^{\min})\) if \(s_n^{\min} \ne s_n^{\max}\), and \(s_n^{\text{pred}}(\boldsymbol{x}) = 1\) otherwise.
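Step 1 amounts to a min–max normalization of the surrogate's predictions over the candidate pool. The following is a minimal NumPy sketch; the names (`prediction_scores`, `rbf_predict`, `cps`) are our own illustrative choices rather than identifiers from the paper, and `rbf_predict` stands for any fitted RBF model (for instance a `scipy.interpolate.RBFInterpolator` fitted to the evaluated points).

```python
import numpy as np

def prediction_scores(rbf_predict, cps):
    """Min-max normalize RBF predictions over the candidate points (Step 1).

    rbf_predict: callable taking an (m, d) array of candidates and returning
                 an (m,) array of surrogate values s_n(x).
    cps:         (m, d) array of candidate points.
    """
    s = rbf_predict(cps)
    s_min, s_max = s.min(), s.max()
    if s_min == s_max:                    # degenerate case from the paper: score 1
        return np.ones_like(s)
    return (s - s_min) / (s_max - s_min)  # s_n^pred(x) in [0, 1]
```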

Step 2: Distance score

(a) Determine the minimum distance of each candidate point from the previously evaluated points (denoted by EPs). For each \(\boldsymbol{x} \in \text{CPs}\), compute \(\Delta_n(\boldsymbol{x}) = \min_{\boldsymbol{x}_i \in \text{EPs}} D(\boldsymbol{x}, \boldsymbol{x}_i)\). Additionally, compute \(\Delta_n^{\max} = \max\{\Delta_n(\boldsymbol{x}) : \boldsymbol{x} \in \text{CPs}\}\) and \(\Delta_n^{\min} = \min\{\Delta_n(\boldsymbol{x}) : \boldsymbol{x} \in \text{CPs}\}\).

(b) For each candidate point \(\boldsymbol{x} \in \text{CPs}\), compute its distance score \(s_n^{\text{dist}}(\boldsymbol{x}) = (\Delta_n^{\max} - \Delta_n(\boldsymbol{x})) / (\Delta_n^{\max} - \Delta_n^{\min})\) if \(\Delta_n^{\min} \ne \Delta_n^{\max}\), and \(s_n^{\text{dist}}(\boldsymbol{x}) = 1\) otherwise.
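Continuing the sketch above, Step 2 can be computed from a pairwise distance matrix. We assume here that \(D\) is the Euclidean distance, as is standard for RBF-based criteria of this kind.

```python
def distance_scores(cps, eps):
    """Score candidates by minimum distance to evaluated points (Step 2)."""
    # Pairwise Euclidean distances between candidates (m, d) and evaluated points (k, d)
    dists = np.linalg.norm(cps[:, None, :] - eps[None, :, :], axis=2)  # shape (m, k)
    delta = dists.min(axis=1)             # Delta_n(x): distance to nearest evaluated point
    d_min, d_max = delta.min(), delta.max()
    if d_min == d_max:
        return np.ones_like(delta)
    # Candidates far from evaluated points get LOW scores, so minimizing
    # the weighted score W_n rewards exploration.
    return (d_max - delta) / (d_max - d_min)
```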

Step 3: Determine the cyclical weights

Given a nondecreasing weight pattern

$$ \omega = \{\rho_1, \rho_2, \ldots, \rho_h\}, \qquad 0 \le \rho_1 \le \rho_2 \le \cdots \le \rho_h \le 1, $$

set

$$ \omega_n^{\text{pred}} = \begin{cases} \rho_{\operatorname{mod}(n,h)}, & \text{if } \operatorname{mod}(n,h) \ne 0 \\ \rho_h, & \text{otherwise} \end{cases} $$

and \(\omega_n^{\text{dist}} = 1 - \omega_n^{\text{pred}}\).
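A literal translation of the cyclical rule follows. The paper only requires a nondecreasing pattern in \([0, 1]\); the four-value pattern below is an illustrative assumption of ours, not a value taken from the paper.

```python
def cyclical_weights(n, rho=(0.3, 0.5, 0.8, 0.95)):
    """Return (w_pred, w_dist) for the n-th evaluated point (Step 3).

    rho is the cyclical pattern {rho_1, ..., rho_h}; the concrete values
    here are an assumption chosen for illustration.
    """
    h = len(rho)
    r = n % h                              # mod(n, h); rho is 1-indexed in the paper
    w_pred = rho[r - 1] if r != 0 else rho[h - 1]
    return w_pred, 1.0 - w_pred            # w_dist = 1 - w_pred
```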

Step 4: Compute the weighted score

For each candidate point \(\boldsymbol{x} \in \text{CPs}\), compute the weighted score \(W_n(\boldsymbol{x}) = \omega_n^{\text{pred}} s_n^{\text{pred}}(\boldsymbol{x}) + \omega_n^{\text{dist}} s_n^{\text{dist}}(\boldsymbol{x})\).

Step 5: Select the next evaluation point

Let \(\boldsymbol{x}_{n+1} \leftarrow \arg\min_{\boldsymbol{x} \in \text{CPs}} W_n(\boldsymbol{x})\).
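Steps 4 and 5 simply combine the two scores and take the minimizer. A minimal sketch tying the pieces together, using the helper functions defined above:

```python
def select_next_point(rbf_predict, cps, eps, n):
    """Pick the candidate minimizing the weighted score W_n (Steps 4-5)."""
    w_pred, w_dist = cyclical_weights(n)
    w_n = (w_pred * prediction_scores(rbf_predict, cps)
           + w_dist * distance_scores(cps, eps))
    return cps[np.argmin(w_n)]             # x_{n+1} = argmin_{x in CPs} W_n(x)
```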

The rationale for this weighted scoring criterion is briefly as follows. Initially, we prefer candidate points that are far from \(\boldsymbol{x}_{best}\) so as to focus on extensive exploration (i.e., global search); moreover, the initial RBF model, built from only a limited number of evaluated samples, is not accurate enough to rank the candidate points reliably. We therefore start with \(\omega_n^{\text{dist}} = 1\) and \(\omega_n^{\text{pred}} = 0\). As more samples are evaluated, the fitted RBF model becomes more accurate, and a higher weight can then be assigned to the prediction score to strengthen the local search. Thus \(\omega_n^{\text{dist}}\) is gradually decreased and \(\omega_n^{\text{pred}}\) gradually increased to balance global and local search. In addition, the weights are adjusted cyclically to reduce the danger of being trapped in local optima: when \(\omega_n^{\text{dist}}\) reaches 0, it is reset to 1. For more detailed explanations and analyses, please refer to Regis and Shoemaker (2007, 2013). A toy end-to-end run of the sketch above follows.
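For illustration only, the run below uses a 10-dimensional sphere objective and lets the true function stand in for the fitted RBF surrogate (which would of course defeat the purpose in a genuinely expensive setting).

```python
rng = np.random.default_rng(0)
eps = rng.uniform(-5, 5, size=(20, 10))      # previously evaluated points (EPs)
cps = rng.uniform(-5, 5, size=(500, 10))     # candidate pool (CPs)
surrogate = lambda X: np.sum(X**2, axis=1)   # stand-in for a fitted RBF model
x_next = select_next_point(surrogate, cps, eps, n=len(eps))
print(x_next.shape)                          # (10,): the next point to evaluate
```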


Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zheng, L., Shi, J. & Yang, Y. A two-stage surrogate-assisted meta-heuristic algorithm for high-dimensional expensive problems. Soft Comput 27, 6465–6486 (2023). https://doi.org/10.1007/s00500-023-07855-0

