Introduction

In recent years, optimization problems such as route planning1,2,3, resource allocation4,5,6, and other real-world problems have received increasing attention from scholars. The optimal solution of such a problem is a specific combination of values for multiple variables, and researchers use optimization algorithms to find this solution, or at least to approach it as closely as possible.

Optimization algorithms play a crucial role in practical applications. In manufacturing7,8,9,10, optimization algorithms can be employed to optimize production processes, reduce costs, and improve efficiency. In the field of transportation planning11,12, they can help optimize traffic flow, reduce congestion, and improve road usage. In healthcare13,14, these algorithms can be utilized to optimize the allocation of hospital resources and improve the efficiency of healthcare services. Another important application area is financial and investment management15,16. Optimization algorithms can help investors find the best asset allocation in their portfolios to minimize risk and achieve expected returns. Furthermore, optimization algorithms have been applied in the development of Deep Learning (DL) and Machine Learning (ML)17,18,19. In ML and DL, optimization algorithms can improve the performance of models by tuning hyperparameters.

There are various types of optimization algorithms. These include gradient descent algorithms20 based on mathematics, dynamic programming algorithms21 based on operations research, and optimization algorithms22,23,24 based on heuristics. Among these, meta-heuristic optimization algorithms have a unique advantage in solving more complex optimization problems. They do not rely on specific problem knowledge but instead explore and exploit the solution space to find the optimal or near-optimal solution. These algorithms are usually designed based on natural phenomena and group behavior. Dehghani25 introduced the Coati Optimization Algorithm (COA), which mimics coati behavior in nature. Al-Betar26 proposed a novel nature-inspired swarm-based optimization algorithm called the Elk Herd Optimizer (EHO), inspired by the breeding process of elk herds. Zhao27 proposed the Sea-Horse Optimizer (SHO), inspired by the behavior of seahorses, mainly mimicking their movement, predation, and breeding behaviors. Hashim28 mathematically modeled the foraging and reproduction behaviors of snakes to present the Snake Optimizer (SO). Abdollahzadeh29 introduced the African Vultures Optimization Algorithm (AVOA), inspired by the foraging and navigation behaviors of African vultures. Zhong30 designed the Beluga Whale Optimization (BWO) algorithm, inspired by the behaviors of beluga whales. MiarNaeimi31 proposed the Horse Herd Optimization Algorithm (HOA), which imitates the social behaviors of horses at different ages. Heidari32 introduced the Harris Hawks Optimization (HHO) algorithm, inspired by the cooperative behavior and chasing style of Harris hawks in nature.

To further improve the performance of existing optimization algorithms, researchers have introduced several enhancements. Based on Bare-Bone Particle Swarm Optimization (BBPSO), Zhou33 proposed an atomic retrospective learning BBPSO (ARBBPSO) algorithm by introducing the renewal strategy of motion around nuclei and the retrospective learning strategy. Additionally, mutation methods34,35 are employed to enhance the exploration capabilities of optimization algorithms. Abed-alguni36 introduced the island-based Cuckoo Search with Polynomial Mutation (iCSPM), which replaces the Lévy flight method in the original Cuckoo Search with the highly disruptive polynomial mutation method.

Although these meta-heuristic optimization algorithms can solve some optimization problems well, high-dimensional optimization problems remain a challenge. High-dimensional optimization problems usually involve a large number of parameters or variables, leading to a dramatic increase in the dimension of the solution space. In high-dimensional space, the growth in the width and depth of the solution space produces a larger number of locally optimal solutions, making it more difficult to escape from local optima. Additionally, high-dimensional optimization problems suffer from the “curse of dimensionality”37: as the dimension increases, valid data points become extremely sparse and far apart, which degrades the convergence speed and stability of an algorithm.

When solving high-dimensional optimization problems, although the Honey Badger Algorithm (HBA)38 and the Salp Swarm Algorithm (SSA)39 converge quickly in local search, they tend to fall into local optima due to insufficient global search capability. Due to their weak exploitation abilities, the Chimp Optimization Algorithm (ChOA)40 and the Whale Optimization Algorithm (WOA)41 show low convergence accuracy in high-dimensional optimization problems. The Artificial Gorilla Troops Optimizer (GTO)42 and the Dung Beetle Optimizer (DBO)43 struggle to escape local optima due to a lack of local search capability when dealing with high-dimensional problems.

To address these challenges, we propose a novel swarm-based Rhinopithecus Swarm Optimization (RSO), inspired by the social behaviors of rhinopithecus. In the RSO, we introduce three distinct search strategies: vertical migration, concerted search, and mimicry. Additionally, RSO divides the swarm into different categories, with each category allocated a corresponding search strategy and specific learning objectives. This approach enables individuals in the swarm to explore the solution space from multiple perspectives, thus increasing the coverage of the search space. Consequently, the proposed algorithm can effectively overcome high-dimensional optimization challenges.

The main contributions of this paper are as follows:

  1. We propose a novel Rhinopithecus Swarm Optimization (RSO) algorithm, which can solve optimization problems and complex engineering design problems well.

  2. Three different search strategies of RSO are devised based on the social behaviors of different groups in the swarm. These strategies aim to escape local optima by enhancing global and local search capabilities.

  3. RSO is evaluated against eight well-known optimization algorithms. The results verify its superior performance in solving optimization challenges and complex engineering design problems, especially in high-dimensional optimization problems.

The rest of this paper is organized as follows: section "Methods" introduces the details of RSO. Section "Experiments" shows the results and analysis of the simulation experiments. Section "Engineering design problems" introduces the engineering problems and the experimental results. Section "Conclusions and future works" summarizes this work and details some possible future directions.

Methods

Optimization problems become more challenging with increasing dimensionality. In high-dimensional optimization problems, the number of local optimal solutions increases with dimension, making it easier for traditional optimization algorithms to fall into local optima. Therefore, the challenge is to improve the efficiency of searching the large-scale solution space to overcome the curse of dimensionality. To better address these problems, we have developed a novel Rhinopithecus Swarm Optimization (RSO) algorithm inspired by the social behavior of rhinopithecus swarm, including vertical migration, concerted search, and mimicry. In this section, we discuss some details of rhinopithecus behavior and the RSO.

Behavior of rhinopithecus

Rhinopithecus is an agile and intelligent animal, excelling in leaping and climbing among the canopy and often foraging while traversing trees. Rhinopithecus exhibit strong adaptability, with their habitat spanning various regions from low to high altitudes. Due to the influence of external factors such as ambient temperature and food distribution, the rhinopithecus swarm frequently displays vertical migration tendencies. Within the swarm, individuals of various age structures play distinct roles during these migration events. Mature rhinopithecus are responsible for guiding the migration direction of the swarm. Adolescent rhinopithecus, being less robust than the mature individuals, typically learn survival skills by emulating mature rhinopithecus and generally seek habitats around them. Infant rhinopithecus lack the ability to search for survival resources and usually rely on older individuals to acquire these resources for them.

Rhinopithecus swarm optimization algorithm

Inspired by the social behavior of the rhinopithecus swarm, we have introduced the Rhinopithecus Swarm Optimization (RSO) algorithm. The ways the rhinopithecus swarm searches for survival positions provide new strategies for the optimization algorithm, enhancing its exploration and exploitation capabilities. In RSO, the search space represents the natural habitat of rhinopithecus. The survival position of an individual in the search space corresponds to a solution of the optimization algorithm. Individuals occupy superior or inferior survival positions based on their survival experience, with older individuals typically occupying better positions.

In RSO, based on their survival positions, we categorize the top 40% of individuals as mature rhinopithecus, those ranking between 40% and 70% as adolescent rhinopithecus, and the remaining individuals as infant rhinopithecus. The individual with the best survival position in the rhinopithecus swarm is referred to as the king rhinopithecus, who usually emerges from the mature individuals. The king rhinopithecus and the mature individuals jointly lead the migration of the groups. The flowchart and pseudo-code of RSO are shown in Fig. 1 and Algorithm 1.
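As an illustration, the following is a minimal NumPy sketch of this fitness-based grouping. The 40% and 70% cut-offs follow the description above; the function name partition_swarm, the use of index arrays, and the minimization convention are assumptions made only for this sketch.

```python
import numpy as np

def partition_swarm(fitness):
    """Split swarm indices into king, mature, adolescent and infant groups.

    fitness : (N,) array of objective values (minimization assumed).
    """
    order = np.argsort(fitness)            # best (smallest) objective first
    n = len(order)
    n_mature = int(0.4 * n)                # top 40% -> mature
    n_upper = int(0.7 * n)                 # boundary of the top 70%
    king = order[0]                        # best survival position: the king
    mature = order[:n_mature]
    adolescent = order[n_mature:n_upper]   # ranked between 40% and 70%
    infant = order[n_upper:]               # remaining individuals
    return king, mature, adolescent, infant
```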

Figure 1

The flowchart of RSO.

Algorithm 1

Pseudo-code of RSO

Vertical migration

The vertical migratory behavior of rhinopithecus plays a crucial role in the survival of the swarm. Typically, the rhinopithecus swarm periodically migrates between lower and higher altitudes depending on food distribution and climatic conditions. At low temperatures, rhinopithecus prefer to be active at lower altitudes, where the reduced altitude eases the physiological stress of the cold and food is relatively more abundant. At higher temperatures, the rhinopithecus swarm displays the opposite migratory trend, preferring higher-altitude areas where the food is of relatively higher quality.

Researchers have found that rhinopithecus possess a certain degree of spatial cognition. They may be able to memorize positions they have inhabited and incorporate past experience and foraging strategies to guide migration. Mature and king rhinopithecus usually have richer experience and spatial cognition of the migration process, playing a leadership role in the swarm migration. These individuals retain optimal positions for two temperature conditions, as shown in Eq. (1). During the next migration, they search for a new survival position by integrating previous survival experience. This approach helps the rhinopithecus swarm select better positions effectively.

$$\begin{aligned} KingR&= \begin{bmatrix} KingR, KingR_{1}, KingR_{2} \end{bmatrix} \\ MR&= \begin{bmatrix} MR, MR_{1}, MR_{2} \end{bmatrix} \end{aligned}$$
(1)

where KingR stands for the king rhinopithecus and MR stands for a mature rhinopithecus; the subscripts 1 and 2 index the two retained historical positions.

Each time the mature rhinopithecus search for a candidate position, they try to move towards the position of the king rhinopithecus. Their candidate solution is calculated by Eq. (2).

$$\begin{aligned} \begin{aligned} &\alpha = \frac{{KingR_{a} + MR_{b}}}{2} \\&\beta = |KingR_{a} - MR_{b}|\\&CandiMR = Gausi(\alpha ,\beta ) \end{aligned}&\quad \begin{aligned} a, b \in [0, 2] \end{aligned} \end{aligned}$$
(2)

where \(KingR_{a}\) denotes a stored location of the king rhinopithecus and \(MR_{b}\) denotes a stored location of the mature rhinopithecus. \(Gausi(\alpha ,\beta )\) is a random value drawn from a Gaussian distribution with expectation \(\alpha\) and variance \(\beta\).
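For concreteness, the update of Eq. (2) can be sketched as below. It is assumed here that the retained positions of Eq. (1) are kept in small memory arrays, that a and b are random integer indices into those memories, and that \(Gausi\) is realized with NumPy, passing the square root of \(\beta\) because NumPy expects a standard deviation rather than a variance.

```python
import numpy as np

rng = np.random.default_rng()

def gausi(mean, var):
    """Gaussian sample with the given expectation and variance (element-wise)."""
    return rng.normal(mean, np.sqrt(var))

def vertical_migration(king_mem, mature_mem):
    """Candidate position of a mature individual according to Eq. (2).

    king_mem   : (3, D) array - current and two retained king positions
    mature_mem : (3, D) array - current and two retained positions of this
                 mature individual
    """
    a = rng.integers(0, 3)                 # choose one stored king position
    b = rng.integers(0, 3)                 # choose one stored mature position
    alpha = (king_mem[a] + mature_mem[b]) / 2.0
    beta = np.abs(king_mem[a] - mature_mem[b])
    return gausi(alpha, beta)
```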

Concerted search

In the rhinopithecus swarm, adolescent rhinopithecus are in the growth stage. Although they have some ability to search for migratory positions, they lack the experience of mature and king rhinopithecus. Consequently, adolescent rhinopithecus often exhibit uncertainty in choosing search paths and migration positions. In such cases, they actively seek guidance from the mature rhinopithecus and the king rhinopithecus, relying on their environmental cognition and survival experience to make decisions. Typically, adolescent rhinopithecus communicate information about their historical positions to the king and mature rhinopithecus, and the king and mature individuals then use their own experience to guide the adolescents in making judgments. In RSO, each adolescent rhinopithecus communicates two historical positions to the king and mature rhinopithecus, as expressed in Eq. (3).

$$AR = \begin{bmatrix} AR_{1}, AR_{2} \end{bmatrix}$$
(3)

where AR stands for the adolescent rhinopithecus.

The king rhinopithecus and the mature individuals provide the adolescent individuals with their respective advice. The adolescent rhinopithecus weigh both suggestions equally and synthesize the opinions to generate a candidate solution, which is calculated using Eq. (4). By leveraging the survival experience of the older individuals, adolescent rhinopithecus can select better positions. This strategy allows RSO to explore the solution space more effectively, thereby improving its ability to escape from local optima.

$$\begin{aligned} \begin{aligned} \gamma &= \frac{{KingR_{a} + AR_{c}}}{2} \\ \epsilon &= \frac{{MR_{b} + AR_{c}}}{2} \\ \delta&= |KingR_{a} - AR_{c} |\\ \zeta&= |MR_{b} - AR_{c} |\\ CandiAR&= \frac{{Gausi(\gamma ,\delta ) + Gausi(\epsilon ,\zeta )}}{2} \end{aligned}&\begin{aligned} \quad a, b \in [0, 2] \\ \quad c \in [0, 1] \end{aligned} \end{aligned}$$
(4)

where \(KingR_{a}\) denotes a stored location of the king rhinopithecus, \(MR_{b}\) denotes a stored location of the mature rhinopithecus, and \(AR_{c}\) denotes a stored location of the adolescent rhinopithecus. \(Gausi(\gamma ,\delta )\) is a random value drawn from a Gaussian distribution with expectation \(\gamma\) and variance \(\delta\), and \(Gausi(\epsilon ,\zeta )\) is a random value drawn from a Gaussian distribution with expectation \(\epsilon\) and variance \(\zeta\).
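Under the same assumptions as the previous sketch (integer indices into the stored positions and a NumPy-based \(Gausi\) that converts the variance to a standard deviation), Eq. (4) could be implemented as follows; the gausi helper is repeated so that the snippet stays self-contained.

```python
import numpy as np

rng = np.random.default_rng()

def gausi(mean, var):
    """Gaussian sample with the given expectation and variance (element-wise)."""
    return rng.normal(mean, np.sqrt(var))

def concerted_search(king_mem, mature_mem, adolescent_mem):
    """Candidate position of an adolescent individual according to Eq. (4).

    king_mem, mature_mem : (3, D) arrays of stored positions
    adolescent_mem       : (2, D) array of the two communicated positions
    """
    a = rng.integers(0, 3)
    b = rng.integers(0, 3)
    c = rng.integers(0, 2)
    gamma = (king_mem[a] + adolescent_mem[c]) / 2.0
    epsilon = (mature_mem[b] + adolescent_mem[c]) / 2.0
    delta = np.abs(king_mem[a] - adolescent_mem[c])
    zeta = np.abs(mature_mem[b] - adolescent_mem[c])
    return (gausi(gamma, delta) + gausi(epsilon, zeta)) / 2.0
```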

Mimicry

Infant rhinopithecus are usually in the early stages of learning and development. They have not fully mastered the elements of their environment and survival skills, making it very difficult for them to find suitable positions for migration. During this vital developmental stage, infant individuals depend on other group members, especially adolescent and mature rhinopithecus, to lead them in their migration. In the searching process, infant rhinopithecus communicate information about their position to these older individuals in various ways. Adolescent and mature rhinopithecus understand the habits of the infant individuals based on this positional information. Thus, they can better integrate their own experience to guide the infant rhinopithecus during migration.

In RSO, after receiving guidance from mature and adolescent rhinopithecus, the infant individuals consider the suggestions from both groups and determine their candidate positions through comprehensive consideration, calculated using Eq. (5). This group-assisted approach helps infant rhinopithecus navigate the growth stage, allowing them to learn survival skills by imitating the search strategies of older individuals. This strategy enhances the exchange of information between different groups, which in turn improves the local search capability of RSO.

$$\begin{aligned} \begin{aligned} \eta &= \frac{{MR_{b} + IR}}{2} \\ \iota&= \frac{{AR_{c} + IR}}{2} \\ \theta&= |MR_{b} - IR|\\ \kappa&= |AR_{c} - IR |\\ CandiIR&= \frac{{Gausi(\eta ,\theta ) + Gausi(\iota ,\kappa )}}{2} \end{aligned}&\begin{aligned} \quad b \in [0, 2] \\ \quad c \in [0, 1] \end{aligned} \end{aligned}$$
(5)

where \(MR_{b}\) denotes a stored location of the mature rhinopithecus, \(AR_{c}\) denotes a stored location of the adolescent rhinopithecus, and IR stands for the position of the infant rhinopithecus. \(Gausi(\eta ,\theta )\) is a random value drawn from a Gaussian distribution with expectation \(\eta\) and variance \(\theta\), and \(Gausi(\iota ,\kappa )\) is a random value drawn from a Gaussian distribution with expectation \(\iota\) and variance \(\kappa\).
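A corresponding sketch of Eq. (5), under the same assumptions as the previous two snippets:

```python
import numpy as np

rng = np.random.default_rng()

def gausi(mean, var):
    """Gaussian sample with the given expectation and variance (element-wise)."""
    return rng.normal(mean, np.sqrt(var))

def mimicry(mature_mem, adolescent_mem, infant_pos):
    """Candidate position of an infant individual according to Eq. (5).

    mature_mem     : (3, D) array of stored mature positions
    adolescent_mem : (2, D) array of stored adolescent positions
    infant_pos     : (D,) current position of the infant individual
    """
    b = rng.integers(0, 3)
    c = rng.integers(0, 2)
    eta = (mature_mem[b] + infant_pos) / 2.0
    iota = (adolescent_mem[c] + infant_pos) / 2.0
    theta = np.abs(mature_mem[b] - infant_pos)
    kappa = np.abs(adolescent_mem[c] - infant_pos)
    return (gausi(eta, theta) + gausi(iota, kappa)) / 2.0
```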

Consent to participate

All authors participated in the manuscript and agreed to participate in it.

Experiments

The CEC2017 function set is widely used to measure the comprehensive performance of optimization algorithms. All functions in the test set have undergone rotation and displacement, increasing the difficulty for optimization algorithms to find the optimum. The test set contains 29 test functions, each available in four dimensions: 10, 30, 50, and 100. The difficulty of solving these problems increases with the dimension. Based on the specific properties of the functions in the search space, they can be categorized into Unimodal Functions \((F_1-F_2)\), Simple Multimodal Functions (\(F_3-F_9\)), Hybrid Functions (\(F_{10}-F_{19}\)), and Composite Functions (\(F_{20}-F_{29}\)).

Table 1 shows the details of the CEC2017 benchmark functions. Unimodal Functions have only one significant optimum in the search space, making the optimization target clear. Simple Multimodal Functions contain multiple local optima, increasing the probability of falling into a local optimum. Hybrid Functions combine different properties such as linear and nonlinear components and low and high dimensional combinations, making the search space more complex. Composite Functions comprise combinations of simple functions, including linear combinations, nonlinear combinations, combinatorial functions, and transformation functions.

Table 1 Details of the CEC2017 benchmark functions.

To verify the comprehensive ability of RSO to solve optimization problems, we performed tests in 30 and 100 dimensions using the CEC2017 benchmark. In the study, we used a series of well-known optimization algorithms as the control group, including DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO, and HHO. To ensure the reliability of the results, the control group algorithms used the same parameter settings as in their original papers. The proposed algorithm, RSO, is a parameter-free meta-heuristic algorithm.

The time complexity of RSO is \(O(m \cdot n)\), where \(m\) represents the population size and \(n\) represents the number of iterations. The proposed algorithm evaluates each individual only once per iteration without additional computations, resulting in a relatively low time complexity.

Mean Error (ME) was used to measure the performance of the algorithms. ME is defined as the mean error over multiple experiments, where the error is calculated as \(|Real_{optimal} - Theoretical_{optimal}|\). To minimize the impact of chance on the experimental results and performance analysis, we conducted 36 independent experiments for each test function. The errors of these experiments were averaged to obtain the ME. The results of the experiments are shown in Tables 2, 3, 4, 5, 6, 7, 8, 9 and 10. The Mean, Best, Median, Worst, Standard Deviation (Std), and \(p\)-value of the 36 runs were recorded.
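For clarity, ME can be computed from the independent runs as in the small sketch below; the name mean_error and the example values are placeholders, not the actual experimental data.

```python
import numpy as np

def mean_error(run_best, theoretical_optimum):
    """Mean Error (ME): average absolute error over independent runs."""
    return np.mean(np.abs(np.asarray(run_best, dtype=float) - theoretical_optimum))

# Example with three hypothetical runs of a function whose optimum is 100:
print(mean_error([100.2, 101.0, 100.5], 100.0))   # -> 0.5666...
```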

When the dimension was set to 30, out of 29 functions, RSO achieved 10 first rankings, 8 second rankings, 6 third rankings, 2 fourth rankings, 2 sixth rankings, and 1 eighth ranking. RSO obtained the highest number of first rankings among all algorithms, 3 more than ARBBPSO, which had the second-highest number. In each type of function in CEC2017, RSO outperformed nearly all the algorithms in the control group, especially for Simple Multimodal Functions, where RSO obtained 4 first rankings out of 7 test functions. However, the test results of RSO on Unimodal Functions were inferior to those of SSA and AVOA.

When the dimension was increased to 100, RSO still maintained outstanding performance, with the number of first rankings increasing to 19. SSA received first rankings in only 6 functions, and several algorithms did not achieve any first rankings at all. In all four types of CEC2017 functions, RSO showed significant advantages over the control group: in each category of test functions, the number of first rankings obtained by RSO was no less than that of any algorithm in the control group. Only on \(F_3\) of the Simple Multimodal Functions did RSO rank relatively low.

Table 2 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_1\) - \(F_{8}\) with 30 dimensions.
Table 3 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_{9}\) - \(F_{16}\) with 30 dimensions.
Table 4 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_{17}\) - \(F_{24}\) with 30 dimensions.
Table 5 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_{25}\) - \(F_{29}\) with 30 dimensions.
Table 6 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_1\) - \(F_{2}\) with 100 dimensions.
Table 7 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_3\) - \(F_{10}\) with 100 dimensions.
Table 8 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_{11}\) - \(F_{18}\) with 100 dimensions.
Table 9 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_{19}\) - \(F_{26}\) with 100 dimensions.
Table 10 Experimental results of the RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO for \(F_{27}\) - \(F_{29}\) with 100 dimensions.

To visualize the iteration process of the algorithm in 30 and 100 dimensions, we selected two test functions from each of the four function types to obtain the convergence curves and box plots, as shown in Figs. 2, 3, 4, 5, 6, 7, 8 and 9. Additionally, we applied a logarithmic transformation to the ME values used in the convergence figures and box plots to enhance the clarity of these figures.

From the convergence curves of these algorithms in 30 dimensions, it is notable that RSO has a significant advantage in local search ability in \(F_{2}\) and \(F_{22}\). This advantage is attributed to the mimicry strategy, which helps individuals update positions by communicating with individuals of other groups, significantly enhancing the local search capability. In \(F_{12}\) and \(F_{14}\), RSO continuously updated the best value throughout the iteration process, demonstrating its excellent global search capability. This is because the vertical migration strategy assists individuals in updating their positions based on the optimal positions of the swarm, thus strengthening global search capability. Due to the concerted search strategy, RSO individuals can consider the optimal positions of the swarm while also combining the individual positions of other groups when updating their positions. This enables RSO to exhibit an outstanding ability to escape local optima in \(F_{4}\). Although RSO was trapped in local optima in \(F_{8}\) and \(F_{25}\), the upper quartiles of RSO in the box plots were among the smallest, indicating that RSO achieves high accuracy on these two functions.

Regarding the iteration process in 100 dimensions, the three search strategies also make a significant difference. It is noteworthy that RSO performed better on the Unimodal Functions than it did in 30 dimensions. In particular, RSO shows the ability to escape local optima during the iteration process in \(F_2\). Although RSO converged early in \(F_{5}\), \(F_{6}\), and \(F_{21}\), the mean best values of RSO were consistently lower than those of the other algorithms. In \(F_{10}\), \(F_{16}\), and \(F_{20}\), RSO demonstrates the ability to escape local optima due to its strong global and local search capabilities. Additionally, it converged to the lowest mean best value, validating the high accuracy of RSO on these functions.

Figure 2

Convergence figures and box plots of RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO in CEC2017 F1 and F2 with 30 dimensions. The vertical axis of these figures represents the logarithmic transformation of the experimental results.

Figure 3

Convergence figures and box plots of RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO in CEC2017 F7 and F8 with 30 dimensions. The vertical axis of these figures represents the logarithmic transformation of the experimental results.

Figure 4

Convergence figures and box plots of RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO in CEC2017 F10 and F17 with 30 dimensions. The vertical axis of these figures represents the logarithmic transformation of the experimental results.

Figure 5

Convergence figures and box plots of RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO in CEC2017 F21 and F23 with 30 dimensions. The vertical axis of these figures represents the logarithmic transformation of the experimental results.

Figure 6

Convergence figures and box plots of RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO in CEC2017 F1 and F2 with 100 dimensions. The vertical axis of these figures represents the logarithmic transformation of the experimental results.

Figure 7

Convergence figures and box plots of RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO in CEC2017 F5 and F6 with 100 dimensions. The vertical axis of these figures represents the logarithmic transformation of the experimental results.

Figure 8

Convergence figures and box plots of RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO in CEC2017 F10 and F16 with 100 dimensions. The vertical axis of these figures represents the logarithmic transformation of the experimental results.

Figure 9

Convergence figures and box plots of RSO, DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO and HHO in CEC2017 F20 and F21 with 100 dimensions. The vertical axis of these figures represents the logarithmic transformation of the experimental results.

We further investigated the experimental results using the Wilcoxon signed-rank test and Friedman test. The test results are shown in Tables 11 and 12, where ’+’ denotes that RSO performs better than other algorithms, ’-’ denotes that RSO performs worse than other algorithms, and ’\(\approx\)’ denotes that RSO performs equally well as other algorithms. “Mean Rank” denotes the average ranking following 36 independent runs, and “Rank” denotes the overall final ranking.

Results in Table 11 indicate that RSO outperformed individual control-group algorithms in up to 29 of the 29 functions in 30 dimensions. Furthermore, RSO significantly outperformed the other algorithms, except ARBBPSO, in the Simple Multimodal, Hybrid, and Composite Functions. However, the performance of RSO on the Unimodal Functions was weaker than on the other types of functions, particularly underperforming SSA and AVOA.

Table 12 shows the performance analysis of RSO on CEC2017 with 100 dimensions. RSO outperformed each control-group algorithm in at least 20 out of the 29 functions. Specifically, compared to DBO, BWO, WOA, GTO, and HHO, RSO excelled in all of the Unimodal, Hybrid, and Composite Functions. In the Multimodal Functions, RSO performed significantly better than the control group in almost all functions, and was significantly worse than SSA, AVOA, and HHO in only one function.

In the experimental results for both 30 and 100 dimensions, the Mean Rank of RSO was first, surpassing the second-ranked algorithms by 7.69% and 42.85%, respectively. Although RSO performed well in both 30 and 100 dimensions, its performance advantage in 100 dimensions was significantly larger than in 30 dimensions. This indicates that RSO is more effective at solving high-dimensional optimization problems than low-dimensional ones.

Table 11 Results of the statistical tests in CEC2017 with 30 dimensions.
Table 12 Results of the statistical tests in CEC2017 with 100 dimensions.

Engineering design problems

In this section, we simulated RSO on several engineering design problems44, including the 10-Bar Truss Design (10-BTD), Gear Train Design (GTD), and Three-Bar Truss Design (3-BTD), to demonstrate its efficiency. These problems combine the satisfaction of constraints with the search for an optimal solution. The effectiveness of optimization algorithms is often significantly affected by the constraints and the limited feasible solution space. Therefore, an added mechanism called a Constraint Handling Technique (CHT) is required to address these challenges. This technique ensures the final solution satisfies the constraints by imposing penalties on solutions that violate them. To ensure the fairness of the experiment, we ran each algorithm independently 36 times on every problem, with a maximum of 500 iterations and a population size of 50.
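As an illustration of such a CHT, the sketch below applies a static penalty to inequality constraints of the form \(g_i({\bar{x}}) \le 0\); the helper name penalized_objective and the penalty weight are assumptions, and the original study may use a different penalty scheme.

```python
def penalized_objective(f, constraints, x, penalty=1.0e6):
    """Static-penalty handling of inequality constraints g_i(x) <= 0.

    f           : objective function to be minimized
    constraints : list of functions g_i; g_i(x) <= 0 means the constraint holds
    penalty     : weight applied to the accumulated constraint violation
    """
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + penalty * violation
```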

10-bar truss design

The 10-BTD is a significant engineering design problem with the objective of minimizing the weight of the truss structure while satisfying frequency constraints. The details of the problem are as follows:

Minimize:

$$f({\bar{x}}) = \sum \limits _{i=1}^{10}{{L_i(x_i)}\rho _i A_i}$$
(6)

Subject to:

$$\begin{aligned} g_1({\bar{x}})&=\frac{7}{\omega _1({\bar{x}})}-1\le 0, \\ g_2({\bar{x}})&=\frac{15}{\omega _2({\bar{x}})}-1\le 0, \\ g_3({\bar{x}})&=\frac{20}{\omega _3({\bar{x}})}-1\le 0. \\ \end{aligned}$$
(7)

With bounds:

$$\begin{aligned}&6.45\times 10^{-5}\le A_i \le 5 \times 10^{-3}, i=1,2,\ldots ,10. \\&{\bar{x}}=[A_1,A_2,\ldots ,A_{10}],\rho =2770.\\ \end{aligned}$$
(8)
Table 13 Results analysis of 10-BTD, Part 1.
Table 14 Results analysis of 10-BTD, Part 2.
Figure 10

Convergence curves and error bars of RSO and the control group algorithms on 10-BTD.

Tables 13 and 14 present the results of RSO and the other algorithms on 10-BTD. In Table 13, RSO obtains the best solution with \({\bar{x}}\) = (3.3828E-03, 1.4758E-03, 3.6725E-03, 1.4262E-03, 6.4502E-05, 4.5776E-04, 2.4450E-03, 2.3391E-03, 1.1955E-03, 1.2612E-03) and optimal value 5.2662E+02. Table 14 shows that RSO achieves superior values across the Best, Mean, Worst, and Std metrics compared to the other algorithms. To visualize the iteration process of these algorithms on 10-BTD, the convergence curves and error bars of RSO and the control group are shown in Fig. 10. Although RSO converged early in the iteration process, the distance between the upper and lower quartiles in the box plot was the smallest. This indicates that while RSO may have been trapped in a local optimum, it exhibits robust stability when solving the 10-BTD problem.

Gear train design

The GTD is a critical engineering problem focusing on the gear ratio minimization in a compound gear train arrangement. The details of the problem are as follows:

Minimize:

$$f({\bar{x}}) = (i_{trg}-i_{tot})^2 = \left( \frac{1}{6.931}-\frac{x_1 x_2}{x_3 x_4}\right) ^2$$
(9)

Subject to:

$$\begin{aligned} g_{1-4}({\bar{x}})&=12-x_i\le 0, \\ g_{5-8}({\bar{x}})&=x_i-60\le 0, \\ \end{aligned}$$
(10)

With bounds:

$$12 \le x_i \le 60, i=1,2,\ldots ,4.$$
(11)
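For reference, a minimal sketch of evaluating the GTD objective of Eq. (9) for one candidate solution is given below; rounding the decision variables to integer tooth counts is an assumed convention for this sketch.

```python
def gtd_objective(x):
    """Gear train design objective of Eq. (9): squared gear-ratio error."""
    x1, x2, x3, x4 = (round(v) for v in x)   # tooth counts treated as integers
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# The classical tooth counts (19, 16, 43, 49) give an error on the order of
# 1e-12, which appears as 0.0000E+00 at the precision reported in Table 15.
print(gtd_objective([19, 16, 43, 49]))
```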
Table 15 Results analysis of GTD, Part 1.
Table 16 Results analysis of GTD, Part 2.
Figure 11

Convergence curves and error bars of RSO and the control group algorithms on GTD.

The experimental results of RSO and the control group on GTD are shown in Tables 15 and 16. In Table 15, it is clear that RSO, GTO and HHO obtain the same optimal value 0.0000E+00 with three different sets of decision variables. Furthermore, Table 16 presents the statistical analysis of the experimental results of RSO and the control group; RSO, GTO and HHO again achieve the best values across the four metrics. To visualize the optimization process of these algorithms on GTD, the convergence curves and error bars of RSO and the control group are shown in Fig. 11.

Three-bar truss design

The 3-BTD is a classical problem from civil engineering. Its optimization objective is the weight minimization of the truss structure. The primary constraints are based on the stress limitations of each bar, ensuring the structure can safely withstand applied loads without exceeding material stress limits. The details of this problem are as follows:

Minimize:

$$\begin{aligned} f({\bar{x}}) = l (x_2+2\sqrt{2}x_1) \end{aligned}$$
(12)

Subject to:

$$\begin{aligned} & g_1({\bar{x}}) = \frac{x_2}{2 x_2 x_1 + \sqrt{2}x_1^2}P - \sigma \le 0, \\&g_2({\bar{x}}) = \frac{x_2+\sqrt{2}x_1}{2 x_2 x_1 + \sqrt{2}x_1^2}P - \sigma \le 0, \\&g_3({\bar{x}}) = \frac{1}{x_1 + \sqrt{2}x_2}P - \sigma \le 0, \\&l = 100, P = 2, \sigma =2. \end{aligned}$$
(13)

With bounds:

$$0 \le x_1, x_2 \le 1.$$
(14)
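A minimal sketch of a penalized 3-BTD evaluation built from Eqs. (12) and (13) is shown below; the static-penalty scheme mirrors the CHT sketch given earlier, the penalty weight is an assumption, and \(x_1 > 0\) is assumed to avoid division by zero.

```python
import math

L_BAR, P, SIGMA = 100.0, 2.0, 2.0            # constants l, P and sigma of Eq. (13)

def three_bar_truss(x, penalty=1.0e6):
    """Penalized 3-BTD objective from Eqs. (12) and (13)."""
    x1, x2 = x
    weight = L_BAR * (x2 + 2.0 * math.sqrt(2.0) * x1)
    denom = 2.0 * x2 * x1 + math.sqrt(2.0) * x1 ** 2
    g = [
        x2 / denom * P - SIGMA,
        (x2 + math.sqrt(2.0) * x1) / denom * P - SIGMA,
        1.0 / (x1 + math.sqrt(2.0) * x2) * P - SIGMA,
    ]
    return weight + penalty * sum(max(0.0, gi) for gi in g)

# The best solution reported in Table 17 evaluates to roughly 2.6390E+02:
print(three_bar_truss([7.8868e-01, 4.0825e-01]))
```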
Table 17 Results analysis of 3-BTD, Part 1.
Table 18 Results analysis of 3-BTD, Part 2.
Figure 12

Convergence curves and error bars of RSO and the control group algorithms on 3-BTD.

The optimal results of the experiments on 3-BTD for RSO and the control group are shown in Table 17. The results indicate that RSO obtains the best solution with \({\bar{x}}\) = (7.8868E-01, 4.0825E-01) and optimal value 2.638958E+02. In addition, Table 18 presents the statistical analysis of the experimental results. Although the differences in the Mean of RSO, GTO, DBO, ARBBPSO, SSA and HHO across the 36 independent experiments were small, RSO achieved the best values in the other three metrics. In particular, the lowest Std indicates that RSO exhibits good robustness in solving the 3-BTD problem.

To visualize the optimization process of these algorithms on 3-BTD, the convergence curves and error bars of RSO and the control group are shown in Fig. 12. RSO exhibits a significantly rapid downward trend in the early iterations, suggesting that RSO is able to approach better solutions quickly in the early stage of the search.

Conclusions and future works

In this paper, we design a novel meta-heuristic optimization algorithm called Rhinopithecus Swarm Optimization (RSO). The proposed algorithm draws inspiration from the social behaviors of rhinopithecus. In RSO, we categorize the population into mature, adolescent, and infant rhinopithecus, each performing one of three search methods: vertical migration, concerted search, and mimicry, respectively. These search methods enhance global and local search capabilities, decreasing the possibility of falling into local optima in high-dimensional space.

To validate the performance of RSO, we conducted benchmark tests using 29 test functions from CEC2017 and three classical engineering design problems. Furthermore, the Wilcoxon signed-rank and Friedman tests were used to analyze the experimental results. Eight well-known optimization algorithms were selected as the control group, including DBO, BWO, SSA, AVOA, WOA, ARBBPSO, GTO, and HHO. Both the experimental results and the statistical analysis reveal that RSO achieves better accuracy than the control group. However, the proposed algorithm still has some limitations that need to be addressed. In particular, RSO performed worse than SSA and AVOA on the Unimodal Functions in 30 dimensions. Additionally, compared to ARBBPSO, RSO shows only a modest advantage on the Multimodal, Hybrid, and Composite Functions. Thus, enhancing its optimization capability on low-dimensional problems is an important part of our ongoing work. We also aspire to develop new search strategies for RSO to tackle multi-objective problems in the future.