Abstract
In this paper, a hybrid algorithm called the rough sine cosine algorithm (RSCA) is introduced for solving engineering optimization problems by merging the sine cosine algorithm (SCA) with concepts from rough set theory (RST). RSCA combines the benefits of SCA and RST to focus the search on a promising region where the global solution can be found. Because the information available about optimization problems is often imprecise, efficient algorithms must identify the optimal solution roughly under such uncertainty. The fundamental motive for adding RST is to deal with the imprecision and roughness of the available information regarding the global optimum, especially for high-dimensional problems. The cut concept of RST targets the most interesting search region, so the optimization process can be sped up and the global optimum reached at a low computational cost. The proposed RSCA algorithm is tested on 23 benchmark functions and 3 design problems. RSCA's results are compared mainly to those of SCA, which serves as the first level of the proposed algorithm in this work, and to those of other algorithms in the literature. According to the comparisons, the RSCA provides very competitive performance against different algorithms.
1 Introduction
The desired output of any engineering system process is considered the process's objective function, and its optimal solution is required. However, engineers must weigh options when designing various systems, and having well-informed alternatives for the best performance is crucial. The major goal of the optimization field is to discover the best solution to a problem (objective function), or the best objective output, under a set of conditions (constraints).
Neumaier (2004) emphasized the advantages of global optimization over local optimization and the issues that make an exhaustive search of the feasible region unnecessary. Such optimization problems are solved using two types of algorithms: deterministic and stochastic. The fundamental difference between deterministic and stochastic algorithms, as indicated in Jain and Agogino (1993), is the randomness in generating the next location: the first contains no random elements, whereas the second does. In addition, Jain and Agogino (1993) proposed a multi-start strategy rather than a single-start technique. Instead of a single start, the minimization procedure was initiated from a range of uniformly distributed points in the feasible region, yielding a collection of local minima. Clustering was employed as a prediction method to keep the number of processed sample points to a bare minimum. Because the design and development of complex systems tend to involve conflicting objectives, there was a need for a technique accommodating all domains.
Koch et al. (1999) used a multidisciplinary design approach, investigating the relationship between input and output through constructed approximations based on efficient and simple statistical methodologies. Martins and Lambe (2013) emphasized the relevance of multidisciplinary design in enhancing the design phase and reducing design-cycle costs and time. They also divided multidisciplinary design architectures into two categories: monolithic and distributed. In the first type, an optimization problem is solved as a single problem. The second form divides a single problem into smaller ones, each with a small subset of the variables and criteria (constraints).
Because of the vast search space in optimization problems, Wang and Simpson (2004) addressed the demand to reduce the search space by utilizing a fuzzy clustering approach. This strategy can solve constrained and nonlinear optimization problems. The method proposed by Shan and Wang (2003) is based on the rough set concept as a search strategy for identifying interesting areas of the design space and the global optimum of the optimization problem. Their results showed that using a finite number of objective function evaluations to account for global and local optima was beneficial.
In our daily lives, we are always looking for the best minimum or maximum of some quantity of interest, and finding it is what defines optimization. Solving these day-to-day issues for optimal values depends on their complexity, so different algorithms can be considered to select the most efficient or applicable one for a given problem. A survey on optimization techniques (Khan et al. 2015) summarized various methods such as Simulated Annealing, Genetic Algorithms, the Ant Colony method, and Honey Bees algorithms. These algorithms began as simulations of nature and are classified as metaheuristic algorithms. For example, Holland (1975) was the first to introduce a Genetic Algorithm (GA), which relied on genes and selection, so its main components were crossover and mutation. The work of Holland and others was presented by Yang (2014), who focused on the features of GA.
The objective function in optimization problems can be unimodal or multimodal when the target is the minimum value of that function (Kanemitsu 1998). If every local minimum of the function equals the global minimum, it is classified as unimodal, and a descent method can be used to solve it. Traditional and modern techniques, such as Simulated Annealing and Genetic Algorithms, are used to find the global minimum of a multimodal function, which has more than one local minimum.
One of the most recent and effective population-based metaheuristic optimization techniques was SCA, which was introduced by Mirjalili (2016). Many optimization issues, including feature selection, image processing, robot path planning, scheduling, economic dispatch, radial distribution networks, and many others, are solved using this approach. It demonstrated its performance and high efficiency when compared to several well-regarded metaheuristics that are known to exist in the literature. It depends on the characteristics of the trigonometric sine and cosine functions.
2 Literature review
Many algorithms have been proposed to solve a wide range of optimization problems, but no one can compare the performance of algorithms based on a small scale of samples (problems). This observation can be clarified by the proposed theorem No Free Lunch (NFL) of Wolpert and Macready (1997), which demonstrated that the demand for new optimization algorithms is a fact due to a variety of problems. In other words, the NFL pushed for research into developing new techniques and improving existing ones for various issues. To understand the hypotheses, constraints, or even the impossibility of applying the NFL theorem, Adam et al. (2019) introduced the major research contributions that have been made to this field of study.
Some of these algorithms are based on the collective behavior of a group of species. Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995), the Salp Swarm Algorithm (SSA) (Mirjalili et al. 2017), and the Dragonfly Algorithm (DA) (Mirjalili 2015b) are popular algorithms in this category. The PSO method is inspired by flocks of birds that choose their flight direction and velocity. The PSO method is simpler to implement than other optimization strategies because of its mechanism and because only a few parameters must be defined. As a result, the PSO algorithm is widely used in various applications, including cloud resource scheduling and privacy protection. The SSA algorithm is based on the swarming behavior of salps, which frequently form a swarm known as a salp chain in deep waters. In the SSA swarm model, follower salps follow the leading salp, and the leading salp moves towards the food source. If the food source is replaced by the global optimum, the salp chain will naturally progress towards it. The Dragonfly Algorithm (DA) is inspired by the static and dynamic swarming behaviors of dragonflies. Modeling the social interaction of dragonflies in navigating, seeking food, and avoiding enemies, whether swarming dynamically or statically, is used to design the two crucial phases of optimization: exploration and exploitation. In the exploration phase, dragonflies form sub-swarms and travel over different locations, as in a static swarm. In the exploitation phase, on the other hand, dragonflies fly in larger swarms and in one direction, as in a dynamic swarm.
Some algorithms are inspired by natural behavior, such as the Ant Lion Optimizer (ALO) (Mirjalili 2015a) and the Moth-Flame Optimization Algorithm (MFO) (Mirjalili 2015c). As its name suggests, the ALO algorithm simulates the intelligent behavior of natural antlions when hunting ants. To replicate such interactions, ants wander around the search space, and antlions are permitted to hunt them and become fitter using traps. Mirjalili also provides a mathematical model for moths flying in a spiral around artificial lights (flames). The technique is based on the transverse-orientation navigation mechanism used by moths in nature. The MFO algorithm is a population-based algorithm motivated by the convergence of moths towards artificial light. Several operators were included in the algorithm to help it explore and exploit the search space.
On the other hand, the Gravitational Search Algorithm (GSA) (Rashedi et al. 2009) was proposed as an optimization technique based on the law of gravity and mass interactions. The GSA method treats agents as objects whose masses reflect their objective performance. The gravitational force attracts all these objects, causing a global movement of all objects toward those with heavier masses. As a result, masses cooperate through gravitational force in a direct form of communication.
Mirjalili (2016) utilized the sine and cosine functions to explore and exploit the search space to find the optimum solution in a population-based algorithm. This algorithm, called the sine cosine algorithm (SCA), can handle optimization problems using a primary mathematical function. The method generates numerous initial random candidate solutions and requires them to fluctuate outwards or towards the optimal solution. Several existing algorithms have been incorporated into the SCA algorithm to enhance its performance and speed up reaching the optimal solution. For example, the hybrid algorithm SCA-PSO (Nenavath et al. 2018) combines SCA's and PSO's exploration and exploitation capabilities to produce the best possible global solutions. SCA is also integrated with binary PSO in a hybrid approach (Kumar and Bharti 2019) called HBPSOSCA, used to choose a useful subset of attributes in real-life clustering problems. The integration of SCA and SSA was suggested by Neggaz et al. (2020) and Singh et al. (2019) to accelerate the search process and improve search capabilities, the global convergence rate, and convergence behavior. SCA is also combined with the ALO (Kumar et al. 2020) and GSA (Jiang et al. 2020) algorithms to overcome the limitations of each one. To resolve various optimization issues, Rizk-Allah (2017) suggested a multi-orthogonal sine cosine algorithm (MOSCA) based on parallel learning. The proposed approach employs multiple-orthogonal parallel learning to deliver two benefits: maintaining the diversity of the solutions and enhancing the exploratory search. With the SCA algorithm, researchers have tackled several real-life issues, including feature selection (Hafez et al. 2016), binarization of Arabic manuscript images (Elfattah et al. 2016), and electrical engineering problems (Attia et al. 2018; Rizk-Allah 2021).
RST extends classical set theory and helps work with uncertain and noisy data. It differs from fuzzy set theory because it is not predicated on past information. It uses the boundary region of a set rather than a member relation to convey the vagueness. When little information is available, RST is concerned with characterizing a set or collection of things that may be difficult to discern. RST describes two additional sets, one of which is an upper approximation of the original set and the other a lower approximation. The RST methodology is crucial because real-world problems are expected to contain ill-posed data. Since it was first introduced by Pawlak (1982), it has drawn the interest of scientists and researchers in a variety of domains, including attractive design space (Shan and Wang 2003), attribute reduction (Jia et al. 2016), the discovery of knowledge (Li et al. 2008), and rough-fuzzy clustering (Hu et al. 2017). There is a lot of work employing RST in the literature. For example, (Hassanien et al. 2018) merged the metaheuristic crow search algorithm (CSA) (Askarzadeh 2016) and RST to accelerate the optimum solution with a low run time cost. Bhukya and Manchala (2022) introduced a unique metaheuristic rough set-based feature selection with rule-based medical data classification to tackle data uncertainty in the healthcare sector.
3 Motivation and contribution
Researchers have mostly focused on two approaches in the literature: enhancing existing techniques and integrating several algorithms. These directions aim to increase diversity, delay premature convergence, and speed up convergence, yet many algorithms still suffer from premature convergence and weak diversity, especially when handling highly nonlinear problems. In particular, despite its simplicity and effectiveness, the SCA suffers from slow convergence towards the optimal solution. Using RST, optimization algorithms can gain additional flexibility and improved reliability in the exploration search; RST can overcome many algorithms' limitations by searching smaller regions.
This research aims to benefit from RST to speed up SCA in a hybrid RSCA. By speeding up the search, the RSCA improves convergence rather than letting the algorithm keep running with no improvement. The proposed RSCA has several features distinguishing it from other optimization techniques. First, the SCA is modified to benefit from the best solutions as a first optimization level. Second, the obtained solutions are enhanced by integrating SCA and RST and searching in new spaces. Third, a second SCA optimization level is entered to explore the new search regions and collect the best solution.
The proposed two-phase strategy combines SCA for finding the best solution with the rough-set-based cut notion. We execute SCA as a primary optimization level in the first phase, and the output best solution and its related destination become the second phase's input. In the second phase, we use rough-set-based principles to reduce and discretize the attributes (features), and then we identify the most appealing subspaces based on the destination output. This phase opens up new regions of the search space to enhance the exploration search. Also, as a result of the iterative process used to find an exact optimal solution, these regions get smaller. Then, at the second level of SCA, we improve the best solution and its position based on the output of the second phase.
Our contributions to this study are presented as follows:
- A novel improved algorithm called RSCA is introduced based on RST, integrating the advantages of both SCA and RST to improve SCA efficiency.
- RST is introduced to improve SCA's exploration by suggesting smaller search regions after several iterations of SCA. It also enhances exploitation by storing the best SCA first-level data for searching new neighborhood regions and feeding it back to the SCA second level.
- RST is introduced to target the prospective areas to improve the quality of the solution while avoiding local optima.
- The proposed RSCA is tested on 23 benchmark functions and three engineering optimization problems.
- The output of RSCA is compared with the SCA, ALO, SSA, DA, and MFO algorithms.
The arrangement of this work is organized as follows: Sect. 4 introduces the study’s related work. The basics of the sine cosine algorithm (SCA) and rough set theory (RST) are covered in Sect. 5. The suggested rough sine cosine algorithm (RSCA) is detailed in Sect. 6. The results are reported in Sect. 7, followed by some design problems in Sect. 8, and finally, conclusion and future work in Sect. 9.
4 Related work
The preliminaries of the optimization problem are presented in this section. In addition, the rough set theory approach as a tool to deal with uncertain data and its main definition will be illustrated.
4.1 Formulation of a problem
The formulation of a nonlinear programming problem was introduced by Rao (2009) as follows:

\(\min f_{obj}(X), \quad X=\{x_1,x_2,\dots ,x_n\}\)  (1)

Subject to the constraints

\(g_i(X) \le 0, \quad i=1,2,\dots ,m\)  (2)

\(h_j(X) = 0, \quad j=1,2,\dots ,p\)  (3)

\(lb_n \le x_n \le ub_n\)  (4)

where \(f_{obj}(X)\) is the minimization function of a vector with n variables \(X=\{x_1,x_2,\dots ,x_n\}\). The lower and upper bounds for each variable component of X are denoted by \(lb_n\) and \(ub_n\), respectively. There are p equality constraints \(h_j(X)\) and m inequality constraints \(g_i(X)\). Although there may be many local optima in the search space, only one is the global optimum. The constraints create gaps in the search space and, on rare occasions, divide it into multiple regions. According to the literature, infeasible regions are parts of the search space that violate the constraints.
Definition 1
(Global minimum) (Bazaraa et al. 2005) Consider the issue of minimizing \(f_{obj}(X)\) over \(R^n\), and let \(X^*\in R^n\). If \(f_{obj}(X^*)\le f_{obj}(X) \forall X \in R^n\), \(X^*\) is referred to as a global minimum.
The penalty function method (Rao 2009) transforms the previous constrained optimization problem into an unconstrained one as follows:

\(F(X)=f_{obj}(X)+R\sum _{i=1}^{m}G_i\left[ g_i(X)\right] +S\sum _{j=1}^{p}H_j\left[ h_j(X)\right] \)  (5)

where

\(G_i\left[ g_i(X)\right] =\left[ \max \left( 0,g_i(X)\right) \right] ^{2}\)  (6)

\(H_j\left[ h_j(X)\right] =\left[ h_j(X)\right] ^{2}\)  (7)

Equations 6 and 7 indicate violations of the inequality and equality constraints of the problem, where R and S are adaptive constants weighting the penalty-term contribution. In Eq. 5, the penalty term is added to \(f_{obj}\) only when a constraint is violated; otherwise, nothing is added.
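As a concrete illustration of this penalty transformation, the following Python sketch (the function names and the toy constants R and S are ours, not from the paper) builds an unconstrained objective from \(f_{obj}\), the inequality constraints \(g_i \le 0\), and the equality constraints \(h_j = 0\):

```python
def penalized(f_obj, g_list, h_list, R=1e3, S=1e3):
    """Return an unconstrained objective: f_obj plus quadratic penalties.

    g_list: inequality constraints, feasible when g_i(X) <= 0
    h_list: equality constraints, feasible when h_j(X) == 0
    R, S:   penalty weights (illustrative constants)
    """
    def F(X):
        # max(0, g(X))^2 is zero exactly when the inequality is satisfied
        ineq = sum(max(0.0, g(X)) ** 2 for g in g_list)
        # h(X)^2 is zero exactly when the equality holds
        eq = sum(h(X) ** 2 for h in h_list)
        return f_obj(X) + R * ineq + S * eq
    return F

# toy example: minimise x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
F = penalized(lambda x: x[0] ** 2, [lambda x: 1 - x[0]], [])
```

At a feasible point the penalty terms vanish and F equals \(f_{obj}\); at an infeasible point the quadratic terms dominate, steering any unconstrained optimizer back towards the feasible region.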
4.2 Rough set theory
Pawlak (1982) developed rough set theory (RST) to deal with uncertain information. In the early 1980s, its primary purpose was to create a rough representation of concepts based on the data collected, which concerned the data classification analysis. RST and its applications are interesting in many areas due to their mathematical rigor and ability to solve practical difficulties.
Definition 2
(Information system) (Komorowski et al. 1999; Kryszkiewicz 1998) IS stands for an information system, which is defined as \(IS=(O,A \cup D)\), where \(O \ne \varnothing \) is the set of objects, A is the set of conditional attributes (features), and D is the set of decision attributes. For any attribute \(x\in A\), \(x: O \rightarrow V_x\), where \(V_x\) is the domain of the attribute x.
An information system containing conditions and decision attributes can be called a decision system. The general structure of a decision (information) system is shown in Table 1.
Definition 3
(Indiscernibility relation) (Kryszkiewicz 1998) For each set of attributes \(B \subset A\), an indiscernibility relation IND(B) is defined as

\(IND(B)=\{(o_i,o_j)\in O \times O : \forall x \in B,\ x(o_i)=x(o_j)\}\)  (8)

This relation splits the objects O of the information system into equivalence classes denoted by O/IND(B).
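The partition O/IND(B) can be computed directly from a decision table by grouping objects that agree on every attribute in B. A minimal Python sketch (the table layout and function name are illustrative, not from the paper):

```python
from collections import defaultdict

def ind_classes(objects, B):
    """Partition objects into the equivalence classes of IND(B).

    objects: dict mapping object name -> dict of attribute values
    B: iterable of attribute names
    Two objects are indiscernible iff they agree on every attribute in B.
    """
    classes = defaultdict(list)
    for name, row in objects.items():
        key = tuple(row[x] for x in B)   # signature over the attributes in B
        classes[key].append(name)
    return [sorted(v) for v in classes.values()]

# toy decision table: o1 and o3 agree on both attributes
table = {
    "o1": {"x1": 0, "x2": 1},
    "o2": {"x1": 0, "x2": 0},
    "o3": {"x1": 0, "x2": 1},
}
```

With B = {x1} all three objects fall in one class; adding x2 splits off o2, illustrating how larger attribute sets refine the partition.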
5 Materials and methods
This section covers the fundamentals of the sine cosine algorithm (SCA) and the concepts of rough set theory (RST).
5.1 Sine cosine algorithm
A random set of solutions is used in population-based optimization approaches to begin the optimization process. Due to the stochastic nature of these approaches, there is no certainty that a solution will be found in a single run. With enough random solutions and iterations, the possibility of discovering the global optimum grows, but it is not guaranteed. Exploration and exploitation (Črepinšek et al. 2013) are the two things these population-based optimization approaches have in common. While searching for promising regions of the search space, an optimization algorithm abruptly perturbs its set of randomly generated solutions. In the exploitation phase, however, the solutions change gradually, and the random fluctuations are much smaller than in the exploration phase. The sine cosine algorithm (SCA) (Mirjalili 2016) applies these two phases through the following updating equations:

\(X_{i}^{t+1}=X_{i}^{t}+r_1 \times \sin (r_2) \times \left| r_3 P_{i}^{t}-X_{i}^{t}\right| , \quad r_4<0.5\)

\(X_{i}^{t+1}=X_{i}^{t}+r_1 \times \cos (r_2) \times \left| r_3 P_{i}^{t}-X_{i}^{t}\right| , \quad r_4\ge 0.5\)  (9)
\(X_{i}^t\) denotes the location of the current solution, \(P_{i}\) is the location of the target point, and i represents the dimension of the problem at iteration t. The numbers \(r_1,r_2,r_3,r_4\) are random and \(r_4 \in [0,1]\). In the following, Mirjalili (2016) summarized the role of these parameters in Eq. 9.
- \(r_1\): determines the next position region, either inside or outside the space between the solution and the destination.
- \(r_2\): specifies how far the movement should be directed towards or away from the destination.
- \(r_3\): assigns a random weight to the destination to stochastically emphasize or deemphasize the effect of the destination in determining the distance.
- \(r_4\): alternates between the sine \((r_4 < 0.5)\) and cosine \((r_4 \ge 0.5)\) components.
To establish a balance between exploration and exploitation, the range of the sine and cosine in Eq. 9 is changed using the following equation:

\(r_1=a-t\frac{a}{T}\)  (10)

where t represents the current iteration, T is the maximum number of iterations, and a is a constant. A solution can be re-positioned around another solution due to the cyclic pattern of the sine and cosine functions, ensuring that the space between the two solutions is exploited. The solutions should also be able to search outside the space between their respective destinations when exploring the search space, which is achieved by changing the \(r_1\) value (see Fig. 1). As demonstrated in Fig. 2, this can be accomplished by adjusting the range of the sine and cosine terms to \([-2,2]\) and taking the random number \(r_2 \in [0,2\pi ]\) in Eq. 9. Figure 3 shows the steps of the SCA routine.
5.2 RST concepts
In addition to the basic concepts of RST, we will introduce new notions, such as the threshold of the decision attribute(s) and the cut for the condition attributes. The transformation from an optimization problem to a decision system can be achieved by choosing a value between the minimal and maximal values of the objective function. The cutting process produces the smallest collection of attribute elements capable of classifying sample points with the same decision value.
5.2.1 Threshold concept
In RST terms, for the problem referred to in Eq. 1, the set of attributes A corresponds to the vector X, the value of the objective function \(f_{obj}(X)\) can be considered the decision attribute d, and the set of objects is described by a number of readings or samples. The concept of the decision threshold partitions the values of the objective function into two classes.
Definition 4
(Decision threshold) (Shan and Wang 2003) \(\forall x\in X,f_{obj}(x) \in f_{obj}(X)\) let \(d_t\) be a real number that is defined as \(\{d_t:\min (f_{obj}(X)) \le d_t < \max (f_{obj}(X)) \}\) where \(\min (f_{obj}(X))\) and \(\max (f_{obj}(X))\) are the minimal and maximal values of the objective function, respectively. The value of \( d_t(f_{obj}(x))=0 \, iff \, f_{obj}(x) \le d_t\), and \(d_t(f_{obj}(x))=1 \, iff \, f_{obj}(x) > d_t\).
The threshold value classifies the samples into two sets: the first contains the samples with decision value zero, while the other contains those with decision value one. The choice of the \(d_t\) value, or even the number of thresholds, depends on the design space and the requirements of the information system.
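Definition 4 can be sketched in a few lines of Python, taking the mean of the objective values as the default threshold (the choice used later in the example section); the function name is illustrative:

```python
import numpy as np

def binarize_decisions(f_values, d_t=None):
    """Map objective values to a binary decision attribute (Definition 4).

    d_t defaults to the mean of f_values; samples with f <= d_t get
    decision class 0, samples with f > d_t get decision class 1.
    """
    f_values = np.asarray(f_values, dtype=float)
    if d_t is None:
        d_t = f_values.mean()          # d_t lies between min and max of f
    return (f_values > d_t).astype(int), d_t
```

The resulting 0/1 column is exactly the decision attribute d of the information system built from the sampled points.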
5.2.2 Cut concept
Let \(IS=(O,A \cup \{d\})\) be an information system with decision attribute d, where \(O=\{o_1,o_2,\dots ,o_k\}\). Assume that \(V_x=[lb_x,ub_x]\subset R\) for each \(x \in A\), where R is the set of real numbers; \(lb_x\) and \(ub_x\) are the lower and upper bounds of the attribute x, respectively.
Definition 5
(Set of cuts) (Shan and Wang 2003) For each \(V_x\), a partition \(P_x\) divides the domain into subintervals and is uniquely determined by the cut set \(C_x=\{c_1^x,c_2^x,\dots ,c_l^x\}\). The overall family of partitions is represented by \(P=\cup _{x\in A}\{x\}\times C_x \). A cut on \(V_x\) is defined as any pair \((x,c) \in P\).
The MD-heuristic method (Nguyen 1997) can build a set of cuts with the fewest attribute elements that can distinguish all pairs of objects in the universe. The steps of this method can be summarized as follows:
1. Construct a new decision table from the original one, within which the following are computed:
- A propositional parameter is introduced for each interval \(P_l^{x_n}\) corresponding to the attribute \(x_n\), where l represents the number of intervals.
- Each interval's midpoint represents the number c, so a cut pair \((x_n,c_l)\) is defined for every interval; the number of cuts equals the number of intervals.
2. The main elements of the new table are:
- The first column consists of object pairs with contrasting values of the decision attribute.
- The first row contains all the corresponding attributes' partitioned intervals (propositional parameters).
- The cell value depends on the propositional parameter of each pair of objects and the cut value of the corresponding interval: if \(\min (x(o_i),x(o_j))< c_l < \max (x(o_i),x(o_j))\), the value is 1; otherwise, it is 0.
3. Select the column with the maximum number of 1's (or any one of the columns sharing that maximum).
4. Remove the selected column from the table, together with the rows that have the value 1 in this column.
5. Repeat from step 3 if the table still has data; otherwise, stop.
At the end of these steps, the deleted propositional parameters and their cuts give the decision system's resulting cut set P.
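Steps 3–5 above amount to a greedy selection over the discernibility table. A minimal Python sketch of that greedy loop (the data layout and names are ours; the table is the 0/1 matrix built in step 2):

```python
import numpy as np

def md_heuristic(cut_table, cut_names):
    """Greedy MD-heuristic: repeatedly pick the cut discerning most pairs.

    cut_table: (pairs, cuts) 0/1 matrix; an entry is 1 when the cut
               separates that pair of objects with different decisions.
    Returns the names of the selected cuts, in selection order.
    """
    table = np.asarray(cut_table)
    remaining = np.ones(table.shape[0], dtype=bool)   # pairs not yet discerned
    selected = []
    while remaining.any():
        counts = table[remaining].sum(axis=0)         # 1's per column (step 3)
        best = int(np.argmax(counts))
        if counts[best] == 0:                         # no cut discerns a pair left
            break
        selected.append(cut_names[best])
        remaining &= table[:, best] == 0              # drop covered rows (step 4)
    return selected
```

On the worked example below, a single dominant column would be selected in the first pass and the loop would stop, mirroring the one-cut result \(C_{x_2}=\{-1.1586\}\).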
5.3 Example
The rough set concepts, as well as the cut concepts, will be demonstrated in this example using the function \(f_{obj}(X)=x_1^2+x_2^2+x_3^2\), where \(X=[x_1\ x_2\ x_3]\) and \(V_x=[-2,2]\). As shown in Table 2, we use Matlab code to generate five random samples and compute their corresponding function values. Then, we choose the threshold value as the mean value of the output function, \(d_t=\text{mean}(f_{obj}(X))\), so Table 3 represents a complete information system with decision attribute d.
The steps of the MD-heuristic method can be applied to Table 3 as the following:
1. Computation step:
- Each attribute \(x_n\) has four intervals \(P_l^{x_n}\), where \(n=1,2,3\) and \(l=1,2,3,4\).
- The cut sets are \(C_{x_1}=\{-1.1235,-0.9365,-0.4557,0.6305\}\), \(C_{x_2}=\{-1.2691,-1.1586,-1.0928,-0.6733\}\), and \(C_{x_3}=\{-1.0082,-0.5174,0.6702,1.6565\}\).
2. The new table elements:
- The intersection of the first column and the first row corresponds to a pair of objects with different decisions, \((o_1,o_2)\).
- The computed \(\min (x(o_1),x(o_2))\) and \(\max (x(o_1),x(o_2))\) are \(-1.2209\) and \(-1.0963\), respectively. The cell value for \(x_2\) takes 1 because its cut \(c_2=-1.1586\) lies inside the interval \([-1.2209,-1.0963]\), and so on for all the other pairs (see Table 4).
3. Column \(P_2^{x_2}\) has the maximum number of 1's.
4. Remove column \(P_2^{x_2}\) and the rows with the value 1 in this column.
5. Stop; the table has no data left.
The resulting cut of this method after the previous steps is \(C_{x_2}=\{-1.1586\}\); as a result, the region of interest is divided into two subspaces concerning \(x_2\). The first subspace is \([-2, -1.1586]\), whereas the second is \([-1.1586,2]\).
6 Rough sine cosine algorithm
In the previous sections, we introduced the SCA steps (Mirjalili 2016) and new concepts of RST (Shan and Wang 2003), and we aim to combine these two in our work, establishing a new strategy and enhancing SCA for various problems. SCA uses simple mathematical functions to solve optimization problems, so adding the advantage of clustering the problem into small spaces makes for a good hybridization. The rough sine cosine algorithm (RSCA) comprises two phases. In the first phase, two levels of SCA are used as a population-based global optimization system to solve optimization problems. In the second phase, RST improves the solution quality by considering the roughness of the previously achieved optimal solution. The roughness of the optimal solution can be expressed as a pair of precise notions based on the lower and upper bounds that make up the boundary region interval. Then, inside this region, new solutions are created randomly using the second level of SCA to speed up finding a solution. The RSCA is explained in the following manner.
6.1 Phase 1
This phase consists of two levels of SCA, but they are not run in parallel. The first level runs as described in Sect. 5.1, starting with the details of the objective function, such as the variables' lower and upper bounds and the problem dimension. After the initialization step, it calculates the optimal value of the selected function and its position, running for a number of iterations smaller than the maximum. The final value is reached after choosing the random parameters \(r_1, r_2\), and \(r_3\), in addition to the parameter \(r_4\), which is used to update solutions and switch between the sine and cosine equations. This level's output, together with the output of the rough phase, becomes the second level's input. The second level runs only if the new intervals from the rough step differ from the original interval; otherwise, the first level suffices as the optimization process. The second level starts with the final optimal solution, the optimal position, and the new intervals, while its ending criterion depends on the number of new intervals. This level searches only in the new subspaces, starting from the first level's endpoint. Then, the new optimal points are returned to the first level to choose the lowest-fitness, most promising positions as the latest update of the x values for the next iteration. These steps are repeated for each iteration to obtain the final best score and location.
6.2 Phase 2
This phase converts the output of the first level of SCA into an information system and then applies the RST concepts from the previous section. We construct the IS using the destination search positions as the attributes, whose number depends on the variables' dimension, while the objective function represents the decision attribute. After applying the threshold method of Definition 4, taking \(d_t\) as the average of the returned best fitness values, we obtain the binary representation of the decision attribute. Finally, the decision table is completed, followed by the steps in Sect. 5.2.2. This phase generates new subspaces, i.e., new search intervals that are subsets of the boundary interval [lb, ub], and the number of elements in the set of cuts determines the number of new intervals. At the end of this stage, the second level of SCA is run, as mentioned in the first phase, to complete the optimization process. The flow chart and pseudocode in Figs. 4 and 5 clarify the sequence of the proposed RSCA and the input and output of each stage, and the highlighted steps in Fig. 4 indicate the modifications to the SCA.
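The two-phase flow described above can be sketched as follows. This is a rough skeleton under strong simplifications: both SCA levels are replaced by a plain random-search stand-in, and the rough phase is reduced to a single cut on the first variable at the incumbent solution, so it only illustrates the control flow (level 1 → rough cut → level 2 in each subspace → keep the best), not the actual RSCA:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_search(f, lb, ub, iters=200):
    """Stand-in for an SCA level: best of random samples in [lb, ub]."""
    dim = len(lb)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        x = rng.uniform(lb, ub, dim)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def rough_subspaces(lb, ub, cut_dim, cut_value):
    """Split the box [lb, ub] into two subspaces at a cut on one variable."""
    lb1, ub1 = lb.copy(), ub.copy()
    lb2, ub2 = lb.copy(), ub.copy()
    ub1[cut_dim] = cut_value      # first subspace: [lb, cut] on that variable
    lb2[cut_dim] = cut_value      # second subspace: [cut, ub]
    return [(lb1, ub1), (lb2, ub2)]

def rsca_sketch(f, lb, ub):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # Phase 1, level 1: global pass over the full box (SCA stubbed here)
    x1, f1 = random_search(f, lb, ub)
    # Phase 2 (simplified): a single cut on variable 0 at the incumbent
    best_x, best_f = x1, f1
    for sub_lb, sub_ub in rough_subspaces(lb, ub, 0, x1[0]):
        # Phase 1, level 2: search restricted to each rough subspace
        x2, f2 = random_search(f, sub_lb, sub_ub)
        if f2 < best_f:
            best_x, best_f = x2, f2
    return best_x, best_f
```

In the real RSCA, the cuts come from the MD-heuristic of Sect. 5.2.2 rather than from the incumbent coordinate, and the subspace searches feed their best points back into the first SCA level.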
7 Results
In this section, the proposed RSCA algorithm is evaluated on benchmark test functions (Yao et al. 1999) classified into three groups: unimodal, multimodal, and fixed-dimension multimodal. As explained previously, unimodal test functions have a single optimum and can be used to benchmark an algorithm's exploitation and convergence. Multimodal test functions, on the other hand, have more than one optimum, making them more challenging than unimodal test functions. One optimum is the global optimum, while the others are local optima. An algorithm should avoid all local optima and find the global optimum. As a result, multimodal test functions can evaluate an algorithm's exploration and avoidance of local optima.
7.1 The proposed RSCA settings
The RSCA algorithm was run 30 times, with 500 iterations each, on every benchmark function. The results in Tables 5 and 6 use a population size of 30 (agent no = 30) and a dimension of 30 (Dim = 30), while Table 7 uses the constant dimensions given in the benchmark function descriptions. The SCA source code is publicly available at https://seyedalimirjalili.com/sca. This code is modified to match the desired output of the proposed RSCA phases; the code required for the second phase is also developed as a connection between the two levels of the SCA. Tables 5, 6 and 7 show each tested function's mean (avg) and standard deviation (std). The RSCA method is compared to SCA (Mirjalili 2016), the main algorithm to be enhanced. ALO (Mirjalili 2015a) and MFO (Mirjalili 2015c), as nature-inspired algorithms, are added to the comparison tables. The RSCA algorithm is also compared to two swarm-inspired algorithms: SSA (Mirjalili et al. 2017) and DA (Mirjalili 2015b). All these algorithms are open-source and available at https://seyedalimirjalili.com/projects.
7.2 Comparison of RSCA results with other algorithms
The RSCA algorithm beats the other algorithms in most test cases, as shown by the results on the unimodal test functions in Table 5. It was the most efficient optimizer for functions F1 through F7, except for F6, where it was nonetheless not the worst. The outcomes for functions F8–F23 (multimodal and fixed-dimension multimodal functions) in Tables 6 and 7 show that RSCA also delivers competitive results on multimodal benchmark functions. Furthermore, compared to SCA, ALO, SSA, DA, and MFO, RSCA produces very competitive results and occasionally exceeds them. These results demonstrate that the RSCA method has strong exploration ability. In the tables, bold font indicates the lowest mean and standard deviation values.
The six algorithms’ convergence curves on selected functions are shown in Figs. 6, 7, 8 and 9 to illustrate their convergence rates. The best score represents the fitness of the best solution produced in each iteration over 30 runs. The RSCA convergence curve shows a clear downward trend on all the evaluated test functions, indicating that the RSCA algorithm approximates the global optimum better as iterations progress. For F6 (Fig. 6c), in the first group of unimodal functions, the convergence curve of SSA falls below RSCA’s after approximately the 50th iteration, despite RSCA’s competitive start. As shown in Fig. 7b and c, RSCA scores the lowest values among the algorithms, whereas for F8 (Fig. 7a) most algorithms remain close to each other for about the first 350 iterations. Figure 8a shows that the curves of all six algorithms start and end at similar values; magnified views across iterations are provided in Fig. 8b, c. Although SCA records the highest values, the last benchmark function shows how close the algorithms are to one another (see Fig. 9).
7.3 T-test results
The reliability of the RSCA algorithm is demonstrated by the t-test, a widely used technique for comparing the means of two samples of measurements. Table 8 displays the t-test values of RSCA versus SCA, ALO, SSA, DA, and MFO; the p-values for the 23 benchmark functions are calculated as shown there. The significance level is set at 0.05: if a p-value is less than 0.05, the outputs of the two algorithms differ significantly; otherwise, the difference is not statistically significant. The RSCA is very competitive with the other algorithms across the functions. For example, F6, F12, and F13 have p-values greater than 0.05 versus SCA, which means RSCA is statistically comparable to SCA on these functions. For F16, the p-values of RSCA against all algorithms except DA exceed 0.05, as do the F18 values of RSCA versus all algorithms except SCA, and these are among the highest p-values overall. RSCA competes with SSA, DA, and MFO on F20 with p-values of 0.76, 0.2, and 0.06, respectively. For F23, the p-value of RSCA against SSA equals 0.056, which supports the reliability of the RSCA technique.
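The underlying computation can be sketched as follows. Welch’s two-sample form is assumed here (the paper does not state which variant it uses); the final p-value would be read from the t-distribution with the computed degrees of freedom, e.g. via `scipy.stats.ttest_ind(a, b, equal_var=False)`.

```python
import math

def welch_t(sample_a, sample_b):
    """Two-sample t statistic and degrees of freedom (Welch's form).
    The p-value is then obtained from the t-distribution CDF."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Two clearly separated samples give a large |t|
t, df = welch_t([1.0, 1.1, 0.9, 1.2], [2.0, 2.1, 1.9, 2.2])
print(t, df)
```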
7.4 Time complexity analysis of RSCA
Here, we assess and compare the complexity of SCA and RSCA to objectively compare the algorithms’ quality. The maximum number of iterations T, the number of dimensions dim, and the number of search agents (solutions) N all affect an algorithm’s time efficiency. The SCA algorithm’s time complexity depends mostly on how often the population’s members are evaluated for fitness, giving \( O(N*dim*T) \). With the proposed RSCA, the time complexity grows because of the two additional phases: performing fitness evaluation at two levels, discovering a new search region \(R_{new}\), and computing probabilities at each iteration, which yields \( O(N^2*dim*T^2*R_{new}) \). The RSCA algorithm is thus more complex than the original SCA and its variants. Nonetheless, the experimental findings show that this increase in complexity improves performance.
8 Investigation of engineering optimization problems
The proposed RSCA is validated in this section on engineering design problems. Since these design problems involve linear and nonlinear constraints, the penalty function method in Eq. 5 is employed, with the penalty multiplier R chosen as a constant \(10^6\); comparisons with algorithms from the literature are shown below.
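Since Eq. 5 itself is not reproduced in this excerpt, the sketch below assumes a standard quadratic exterior penalty; the paper’s exact form may differ.

```python
def penalized(f, constraints, R=1e6):
    """Turn a constrained problem min f(x) s.t. g_i(x) <= 0 into an
    unconstrained one: F(x) = f(x) + R * sum(max(0, g_i(x))^2).
    A quadratic exterior penalty is assumed here."""
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + R * violation
    return F

f = lambda x: x[0] ** 2
g1 = lambda x: 1.0 - x[0]        # feasible when x[0] >= 1
F = penalized(f, [g1])
print(F([2.0]), F([0.5]))        # feasible point: 4.0; infeasible point: 0.25 + 1e6 * 0.25
```

With this form, an infeasible solution is dominated by its violation term, which is consistent with the large F(X) values reported below for designs that violate constraints.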
8.1 Spring design
This design problem has been handled by different optimization techniques such as penalty functions (Belegundu and Arora 1985), constraint correction at constant cost (Arora 2004), and metaheuristic algorithms (Coello and Montes 2002; Yang 2010; Askarzadeh 2016; Ji et al. 2020). As shown in Fig. 10, the main design factors of the tension–compression spring problem are the spring’s wire diameter \(d=x_1\), the mean coil diameter \(D=x_2\), and the number of active coils \(N=x_3\). The aim of this engineering problem is to minimize the tension–compression spring’s weight. In the literature, Arora (2004), Belegundu (1982), and Coello (2000) described and solved this optimization problem. The solution can be written as \(X=[x_1\ x_2\ x_3]=[d\ D\ N]\), and the optimization model can be stated as follows:
Subject to:
The proposed RSCA was applied with 20 search agents and 500 iterations and compared with SCA, SSA, ALO, DA, and MFO. The same penalty function was employed as the constraint-handling technique to establish a fair comparison between these methods. Table 9 reports the best results for the spring design problem obtained by RSCA, SCA, SSA, ALO, DA, and MFO. These results comprise the minimizing parameter values X, the respective constraints \(g_i(X),\ i=1,...,4\), and the penalized objective F(X) of the constrained problem. The RSCA approach yields the optimal factors \(X=[0.1413\ 1.3\ 11.4332]\), with a corresponding function value of \(F(X)=33079.55\) and constraint values \(g_i(X)=[0.0254\ 0.1749\ 0.0430\ -0.0076]\). Although all six algorithms violate the constraints except \(g_4(X)\), RSCA attains the lowest objective value. The six algorithms’ convergence curves for the optimal objective value over randomly chosen iterations are shown in Fig. 11.
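Since the model equations are not reproduced in this excerpt, the sketch below uses the standard tension–compression spring formulation from the literature (e.g. Arora 2004; Coello 2000); the constraint ordering and scaling may differ from those behind Table 9, so the reported \(g_i(X)\) values are not asserted here.

```python
def spring_weight(d, D, N):
    """Objective: minimize spring weight f = (N + 2) * D * d^2
    (standard formulation from the literature)."""
    return (N + 2) * D * d ** 2

def spring_constraints(d, D, N):
    """Standard g_i(X) <= 0 constraints; ordering/scaling assumed, may
    differ from the paper's."""
    g1 = 1 - D ** 3 * N / (71785 * d ** 4)
    g2 = ((4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
          + 1 / (5108 * d ** 2) - 1)
    g3 = 1 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1
    return [g1, g2, g3, g4]

# Reported RSCA optimum X = [0.1413, 1.3, 11.4332]; the raw weight is small,
# while the reported F(X) = 33079.55 includes the R = 1e6 penalty term.
w = spring_weight(0.1413, 1.3, 11.4332)
print(w, spring_constraints(0.1413, 1.3, 11.4332))
```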
8.2 Welded beam design
This test problem aims to reduce the welded beam fabrication cost as much as possible. When designing a welded beam (Rao 2009; Coello 2000; Coello and Montes 2002; Rizk-Allah 2017), shear stress \((\tau )\), bending stress in the beam \((\sigma )\), buckling load on the bar \((P_c)\), end deflection of the beam \((\delta )\), and side restrictions are all factors to consider. As indicated in Fig. 12, there are four design factors: the thickness of the weld \((h=x_1)\), length of the clamped bar \((l=x_2)\), the height of the bar \((t=x_3)\), and thickness of the bar \((b=x_4)\). These design factors can be considered as a vector \(X=[x_1\ x_2\ x_3\ x_4]=[h\ l\ t\ b]\). The mathematical model of this design problem has seven constraints and can be represented as follows:
Subject to:
where
Many researchers have used various optimization techniques to address this problem. Coello (2000) and Deb (1991, 2000) used GA to tackle this optimization challenge. From the literature, the SCA, SSA, ALO, DA, and MFO algorithms were applied to handle this problem.
Table 10 shows the results of 30 independent runs of the previous algorithms compared with the RSCA; 20 search agents with 500 iterations were used for this problem. Table 10 demonstrates the best results of the proposed RSCA compared with SCA, SSA, ALO, DA, and MFO. As shown in Table 10, DA and MFO record the lowest values, \(F(X)=4.0643\) and \(F(X)=4.0635\), respectively. Despite their strong performance, these algorithms violate the \(g_1\), \(g_2\), and \(g_3\) constraints, unlike the remaining algorithms. The RSCA’s design parameter values equal \([1\ 1\ 4.1034\ 1]\) with an objective value of \(F(X)=4.0659\). The objective values of SCA, SSA, and ALO are \(F(X)=4.0667\), \(F(X)=4.0635\), and \(F(X)=4.0635\), respectively; RSCA’s objective value is close to these and is not the worst. Figure 13 shows how close the algorithms’ convergence curves are to the optimal objective value over random iterations.
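The cost objective alone already reproduces the reported RSCA optimum. Using the standard fabrication-cost objective from the welded beam literature (the seven constraints are omitted here for brevity), the reported design \(X=[1\ 1\ 4.1034\ 1]\) evaluates to approximately 4.0659:

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication cost of the welded beam (standard objective from the
    literature): f = 1.10471*h^2*l + 0.04811*t*b*(14 + l)."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# Reported RSCA design X = [h, l, t, b] = [1, 1, 4.1034, 1]
cost = welded_beam_cost(1.0, 1.0, 4.1034, 1.0)
print(round(cost, 4))   # matches the reported F(X) = 4.0659
```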
8.3 Pressure vessel design
Kannan and Kramer (1994) introduced this problem, shown in Fig. 14: a cylindrical vessel capped at both ends by hemispherical heads. The shell is constructed from two sections of rolled steel plate welded longitudinally to form a cylinder. The goal is to minimize the total cost \(f_{obj}(X)\), including material, forming, and welding costs. The four design variables are \(T_s=x_1\) (shell thickness), \(T_h=x_2\) (head thickness), \(R=x_3\) (inner radius), and \(L=x_4\) (length of the cylindrical part of the vessel, excluding the head). The problem can be expressed using the same notation as Kannan and Kramer (1994), with \(X=[x_1\ x_2\ x_3\ x_4]=[T_s\ T_h\ R\ L]\) (in inches).
Subject to:
Many authors used metaheuristic algorithms and other approaches to resolve this design problem (He and Wang 2007; Coello and Montes 2002; Coello 2000; Deb 1997; Hassanien et al. 2018; Rizk-Allah 2017).
As in the two previous applications, 20 search agents and 500 iterations were used for this problem. Table 11 compares the proposed RSCA method with the other algorithms in terms of the best design variables X, the related F(X) values, and the respective constraints \(g_i(X),\ i=1,...,4\). DA and MFO violate the \(g_4\) constraint, although they have the lowest objective values, \(F(X)=8796.9531\) and \(F(X)=8796.8952\), respectively. The remaining four algorithms satisfy the constraints; SSA attains the minimum value \(F(X)=8797.5220\), which is very close to the RSCA value of \(F(X)=8797.9745\). ALO and SCA record the higher values \(F(X)=8801.54565\) and \(F(X)=8841.8214\), respectively. The design parameter values of RSCA are \(X=[1\ 1\ 51.5561\ 86.4812]\) with constraint values \(g_i(X)=[-0.0049\ -0.5082\ -178.0628\ -153.5188]\). The convergence curves of the compared algorithms confirm the closeness of RSCA to the best values obtained without constraint violation, as illustrated in Fig. 15.
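The standard Kannan and Kramer (1994) formulation reproduces the reported figures: evaluating the reported RSCA design \(X=[1\ 1\ 51.5561\ 86.4812]\) gives a cost of about 8797.97 and constraint values matching the reported \(g_2\) and \(g_4\) (small differences in \(g_3\) are consistent with rounding of the printed X).

```python
import math

def vessel_cost(Ts, Th, R, L):
    """Total cost of the pressure vessel (Kannan & Kramer 1994 formulation)."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def vessel_constraints(Ts, Th, R, L):
    """g_i(X) <= 0 in the standard formulation."""
    g1 = -Ts + 0.0193 * R
    g2 = -Th + 0.00954 * R
    g3 = -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0
    g4 = L - 240.0
    return [g1, g2, g3, g4]

# Reported RSCA design X = [Ts, Th, R, L] = [1, 1, 51.5561, 86.4812]
X = (1.0, 1.0, 51.5561, 86.4812)
c = vessel_cost(*X)
g = vessel_constraints(*X)
print(round(c, 2), [round(v, 4) for v in g])
```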
9 Conclusion and future work
In this paper, the SCA is combined with RST to overcome SCA’s tendency to converge to local optima and to improve the quality of the discovered solutions. The RST methodology is crucial because real-world problems are expected to contain ill-posed data. The proposed RSCA improved the quality of the solutions found and ensured faster convergence to the best solution. RSCA consists of two phases: the SCA phase, partitioned into two levels, and the RST phase. SCA’s first level provides the initial optimal solutions to be partitioned in the RST phase. The RST phase’s main goal is to search, by applying the cut concept, for promising regions yielding better results than the initial ones. The second level of SCA then refines the first-level solutions within the new interesting search regions. The proposed RSCA algorithm is tested on 23 benchmark functions and three engineering design problems, and its results are compared with several algorithms from the literature. The simulations demonstrated that adding RST significantly alters the SCA, and the RSCA performed much better than the traditional SCA. The RSCA’s strong performance on benchmark and engineering design problems demonstrates its suitability for challenging real-world issues.
The suggested RSCA has the following advantages, as shown in this article:
-
It can effectively enhance the exploratory capabilities of the SCA search.
-
It can improve the quality of optimum solutions of the SCA.
-
It can locate the best solution for benchmark functions and engineering design issues.
Although RSCA performs well in obtaining optimal solutions for design problems, there is still room for improvement in some respects. First, the RSCA cannot deal with all optimization tasks; the problems addressed in this study are specific. Second, the proposed algorithm splits the search into two phases based on an iterative procedure, which increases the time complexity.
This research opens several topics, especially for complex practical tasks. The time complexity could be improved by integrating other search strategies (Eskandar et al. 2012; Sadollah et al. 2016) with RST. Other fascinating areas of applying the RSCA can be explored, including multi-objective problems (Tawhid and Savsani 2017). The rough interval approach (Seikh et al. 2021) could also be exploited for decision optimization problems in engineering, economics, and health care.
Data availability
The data used to support the findings of this study are available from the corresponding author upon request.
References
Adam SP, Alexandropoulos SAN, Pardalos PM, et al (2019) No free lunch theorem: a review. In: Approximation and Optimization. Springer International Publishing, pp 57–82. https://doi.org/10.1007/978-3-030-12767-1_5
Arora JS (2004) Introduction to optimum design. Elsevier, Amsterdam
Askarzadeh A (2016) A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput Struct 169:1–12. https://doi.org/10.1016/j.compstruc.2016.03.001
Attia AF, Sehiemy RAE, Hasanien HM (2018) Optimal power flow solution in power systems using a novel sine–cosine algorithm. Int J Electr Power Energy Syst 99:331–343. https://doi.org/10.1016/j.ijepes.2018.01.024
Bazaraa MS, Sherali HD, Shetty CM (2005) Nonlinear programming. Wiley, Amsterdam. https://doi.org/10.1002/0471787779
Belegundu AD (1982) A study of mathematical programming methods for structural optimization. PhD thesis, The University of Iowa
Belegundu AD, Arora JS (1985) A study of mathematical programming methods for structural optimization. Part II: Numerical results. Int J Numer Methods Eng 21(9):1601–1623. https://doi.org/10.1002/nme.1620210905
Bhukya H, Manchala S (2022) Design of metaheuristic rough set-based feature selection and rule-based medical data classification model on MapReduce framework. J Intell Syst 31(1):1002–1013. https://doi.org/10.1515/jisys-2022-0066
Coello CAC (2000) Constraint-handling using an evolutionary multiobjective optimization technique. Civ Eng Environ Syst 17(4):319–346. https://doi.org/10.1080/02630250008970288
Coello CAC (2000) Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 41(2):113–127. https://doi.org/10.1016/s0166-3615(99)00046-9
Coello CAC, Montes EM (2002) Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inf 16(3):193–203. https://doi.org/10.1016/s1474-0346(02)00011-3
Črepinšek M, Liu SH, Mernik M (2013) Exploration and exploitation in evolutionary algorithms. ACM Comput Surv 45(3):1–33. https://doi.org/10.1145/2480741.2480752
Deb K (1991) Optimal design of a welded beam via genetic algorithms. AIAA J 29(11):2013–2015. https://doi.org/10.2514/3.10834
Deb K (1997) GeneAS: A robust optimal design technique for mechanical component design. In: Evolutionary Algorithms in Engineering Applications. Springer, Berlin, pp 497–514. https://doi.org/10.1007/978-3-662-03423-1_27
Deb K (2000) An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Eng 186(2–4):311–338. https://doi.org/10.1016/s0045-7825(99)00389-8
Elfattah MA, Abuelenin S, Hassanien AE et al (2016) Handwritten Arabic manuscript image binarization using sine cosine optimization algorithm. In: Advances in Intelligent Systems and Computing. Springer International Publishing, pp 273–280. https://doi.org/10.1007/978-3-319-48490-7_32
Eskandar H, Sadollah A, Bahreininejad A et al (2012) Water cycle algorithm – a novel metaheuristic optimization method for solving constrained engineering optimization problems. Computers & Structures 110–111:151–166. https://doi.org/10.1016/j.compstruc.2012.07.010
Hafez AI, Zawbaa HM, Emary E et al (2016) Sine cosine optimization algorithm for feature selection. In: 2016 International Symposium on INnovations in Intelligent SysTems and Applications (INISTA). IEEE. https://doi.org/10.1109/inista.2016.7571853
Hassanien AE, Rizk-Allah RM, Elhoseny M (2018) A hybrid crow search algorithm based on rough searching scheme for solving engineering optimization problems. J Ambient Intell Humaniz Comput. https://doi.org/10.1007/s12652-018-0924-y
He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intell 20(1):89–99. https://doi.org/10.1016/j.engappai.2006.03.003
Holland JH (1975) Adaptation in natural and artificial systems. an introductory analysis with applications to biology, control and artificial intelligence. Ann Arbor: University of Michigan Press
Hu J, Li T, Luo C et al (2017) Incremental fuzzy cluster ensemble learning based on rough set theory. Knowl-Based Syst 132:144–155. https://doi.org/10.1016/j.knosys.2017.06.020
Jain P, Agogino AM (1993) Global optimization using the multistart method. J Mech Des 115(4):770–775. https://doi.org/10.1115/1.2919267
Ji Y, Tu J, Zhou H et al (2020) An adaptive chaotic sine cosine algorithm for constrained and unconstrained optimization. Complexity 2020:1–36. https://doi.org/10.1155/2020/6084917
Jia X, Shang L, Zhou B et al (2016) Generalized attribute reduct in rough set theory. Knowl-Based Syst 91:204–218. https://doi.org/10.1016/j.knosys.2015.05.017
Jiang J, Jiang R, Meng X et al (2020) SCGSA: A sine chaotic gravitational search algorithm for continuous optimization problems. Expert Syst Appl 144(113):118. https://doi.org/10.1016/j.eswa.2019.113118
Kanemitsu H, Miyakoshi M, Shimbo M (1998) Properties of unimodal and multimodal functions defined by the use of local minimal value set. Electronics and Communications in Japan (Part III: Fundamental Electronic Science) 81(1):42–51. https://doi.org/10.1002/(sici)1520-6440(199801)81:1<42::aid-ecjc5>3.0.co;2-8
Kannan BK, Kramer SN (1994) An augmented lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J Mech Des 116(2):405–411. https://doi.org/10.1115/1.2919393
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN’95 - International Conference on Neural Networks, vol 4. IEEE, pp 1942–1948, https://doi.org/10.1109/icnn.1995.488968,
Khan S, Asjad M et al (2015) Review of modern optimization techniques. International Journal of Engineering Research & Technology 4(04). https://doi.org/10.17577/ijertv4is041129
Koch PN, Simpson TW, Allen JK et al (1999) Statistical approximations for multidisciplinary design optimization: The problem of size. J Aircr 36(1):275–286. https://doi.org/10.2514/2.2435
Komorowski J, Pawlak Z, Polkowski L et al (1999) Rough sets: a tutorial. In: Rough fuzzy hybridization: a new trend in decision-making, pp 3–98
Kryszkiewicz M (1998) Rough set approach to incomplete information systems. Inf Sci 112(1–4):39–49. https://doi.org/10.1016/s0020-0255(98)10019-1
Kumar L, Bharti KK (2019) A novel hybrid BPSO–SCA approach for feature selection. Nat Comput 20(1):39–61. https://doi.org/10.1007/s11047-019-09769-z
Kumar S, Parhi DR, Muni MK et al (2020) Optimal path search and control of mobile robot using hybridized sine-cosine algorithm and ant colony optimization technique. Industrial Robot: the international journal of robotics research and application 47(4):535–545. https://doi.org/10.1108/ir-12-2019-0248
Li Y, Liao X, Zhao W (2008) A rough set approach to knowledge discovery in analyzing competitive advantages of firms. Ann Oper Res 168(1):205–223. https://doi.org/10.1007/s10479-008-0399-x
Martins JRRA, Lambe AB (2013) Multidisciplinary design optimization: A survey of architectures. AIAA J 51(9):2049–2075. https://doi.org/10.2514/1.j051895
Mirjalili S (2015a) The ant lion optimizer. Adv Eng Softw 83:80–98. https://doi.org/10.1016/j.advengsoft.2015.01.010
Mirjalili S (2015b) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Appl 27(4):1053–1073. https://doi.org/10.1007/s00521-015-1920-1
Mirjalili S (2015c) Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249. https://doi.org/10.1016/j.knosys.2015.07.006
Mirjalili S (2016) SCA: A sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133. https://doi.org/10.1016/j.knosys.2015.12.022
Mirjalili S, Gandomi AH, Mirjalili SZ et al (2017) Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191. https://doi.org/10.1016/j.advengsoft.2017.07.002
Neggaz N, Ewees AA, Elaziz MA et al (2020) Boosting salp swarm algorithm by sine cosine algorithm and disrupt operator for feature selection. Expert Syst Appl 145(113):103. https://doi.org/10.1016/j.eswa.2019.113103
Nenavath H, Jatoth DRK, Das DS (2018) A synergy of the sine-cosine algorithm and particle swarm optimizer for improved global optimization and object tracking. Swarm Evol Comput 43:1–30. https://doi.org/10.1016/j.swevo.2018.02.011
Neumaier A (2004) Complete search in continuous global optimization and constraint satisfaction. Acta Numer 13:271–369. https://doi.org/10.1017/s0962492904000194
Nguyen H (1997) Discretization of real value attributes, boolean reasoning approach. PhD thesis, Warsaw University
Pawlak Z (1982) Rough sets. International Journal of Computer & Information Sciences 11(5):341–356. https://doi.org/10.1007/bf01001956
Rao SS (2009) Engineering Optimization. John Wiley & Sons, Inc., https://doi.org/10.1002/9780470549124,
Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) GSA: A gravitational search algorithm. Inf Sci 179(13):2232–2248. https://doi.org/10.1016/j.ins.2009.03.004
Rizk-Allah RM (2017) Hybridizing sine cosine algorithm with multi-orthogonal search strategy for engineering design problems. Journal of Computational Design and Engineering 5(2):249–273. https://doi.org/10.1016/j.jcde.2017.08.002
Rizk-Allah RM (2021) A quantum-based sine cosine algorithm for solving general systems of nonlinear equations. Artif Intell Rev 54(5):3939–3990. https://doi.org/10.1007/s10462-020-09944-0
Sadollah A, Eskandar H, Lee HM et al (2016) Water cycle algorithm: A detailed standard code. SoftwareX 5:37–43. https://doi.org/10.1016/j.softx.2016.03.001
Seikh MR, Dutta S, Li DF (2021) Solution of matrix games with rough interval pay-offs and its application in the telecom market share problem. Int J Intell Syst 36(10):6066–6100. https://doi.org/10.1002/int.22542
Shan S, Wang GG (2003) Introducing rough set for design space exploration and optimization. In: Volume 2: 29th Design Automation Conference, Parts A and B. ASMEDC, https://doi.org/10.1115/detc2003/dac-48761,
Singh N, Son LH, Chiclana F et al (2019) A new fusion of salp swarm with sine cosine for optimization of non-linear functions. Engineering with Computers 36(1):185–212. https://doi.org/10.1007/s00366-018-00696-8
Tawhid MA, Savsani V (2017) Multi-objective sine-cosine algorithm (MO-SCA) for multi-objective engineering design problems. Neural Comput Appl 31(S2):915–929. https://doi.org/10.1007/s00521-017-3049-x
Wang GG, Simpson T (2004) Fuzzy clustering based hierarchical metamodeling for design space reduction and optimization. Eng Optim 36(3):313–335. https://doi.org/10.1080/03052150310001639911
Wolpert D, Macready W (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82. https://doi.org/10.1109/4235.585893
Yang XS (2010) Nature-inspired metaheuristic algorithms. Luniver press
Yang XS (2014) Genetic algorithms. In: Nature-Inspired Optimization Algorithms. Elsevier, p 77–87, https://doi.org/10.1016/b978-0-12-416743-8.00005-1,
Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102. https://doi.org/10.1109/4235.771163
Acknowledgements
The authors are so grateful to Dr. Abdelmoneam Kozae (Professor at the Faculty of Science) and Dr. Abdullah Shalaby (Lecturer at the Faculty of Engineering) for their advice and support during this research. The authors did not receive support from any organization for the submitted work.
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). No funding was received to assist with preparing this manuscript.
Author information
Contributions
Conceptualization: RR-A; data curation: RR-A, EE; software: RR-A, EE; writing—original draft preparation: RR-A, EE; writing—review and editing: RR-A, EE; supervision: RR-A.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Rizk-Allah, R.M., Elsodany, E. An improved rough set strategy-based sine cosine algorithm for engineering optimization problems. Soft Comput 28, 1157–1178 (2024). https://doi.org/10.1007/s00500-023-09155-z