An Improved Golden Jackal Optimization Algorithm Using Opposition-Based Learning for Global Optimization and Engineering Problems

Golden Jackal Optimization (GJO) is a recently developed nature-inspired algorithm motivated by the collaborative hunting behaviour of golden jackals in nature. However, GJO suffers from poor exploitation ability and tends to get stuck in local optima. To overcome these disadvantages, this paper proposes an enhanced variant of the golden jackal optimization algorithm, named OGJO, that incorporates the opposition-based learning (OBL) technique. The OBL technique is applied within GJO at a given probability rate, which helps the algorithm escape from local optima. To validate the efficiency of OGJO, several experiments have been performed. The experimental outcomes reveal that the proposed OGJO is more efficient than GJO and the other compared algorithms.


Introduction
In the optimization domain, the primary goal is to find the optimum solution for a set of decision variables so that the objective function is minimized or maximized [1]. Hence, a robust optimization algorithm is required to explore unexplored regions of the problem search space and exploit already-explored regions [2]. The most famous optimization algorithms, called metaheuristic algorithms (MAs) [3], have been used to solve a wide range of optimization problems [4][5][6][7][8]. These metaheuristic algorithms optimize individuals generation after generation, utilizing intelligent operators with problem-specific knowledge controlled by finely tuned parameters until an optimal solution is attained. MAs can be categorized according to their natural inspirations as evolutionary-based, swarm-based, human-based, physics-based and mathematics-based [9, 10]. Evolution-based metaheuristics imitate the natural evolution process, which includes selection, crossover, and mutation. Some of the well-known algorithms in this category are the Genetic Algorithm (GA) [11], Evolutionary Programming (EP) [12], Evolution Strategies (ES) [13], and Differential Evolution (DE) [14]. The second category, swarm-based algorithms [15], is based on the social behaviour of birds, swarms, insects, and animal groups. A very well-known algorithm in this category is Particle Swarm Optimization (PSO) [16], which is motivated by the social behaviour of bird flocking and fish schooling. Similarly, Ant Colony Optimization (ACO) [17] was introduced based on the foraging behaviour of ant colonies, the Firefly Algorithm (FFA) [18] was inspired by the way fireflies flash to attract prey or mates, the Grey Wolf Optimizer (GWO) [19] simulates the natural hunting behaviour of grey wolves, and the Whale Optimization Algorithm (WOA) [20] mimics the bubble-net hunting behaviour of humpback whales. Some recently proposed popular swarm-based algorithms are the Aquila Optimizer (AO) [21], Grasshopper Optimization Algorithm (GOA) [22], Tunicate Swarm Algorithm (TSA) [23], improvised competitive swarm optimizer (ICSO) [24], Salp Swarm Algorithm (SSA) [25], Cuckoo Search (CS) [26], Moth Flame Optimizer (MFO) [27], Levy Flight Distribution (LFD) [28], Sea Horse Optimizer (SHO) [29], COOT algorithm [30], Artificial Rabbit Optimization (ARO) [31], and American Zebra Optimization Algorithm (AZOA) [32]. The third category of metaheuristics consists of human-based algorithms, which imitate human behaviour and social interactions. Prominent algorithms in this category include the Exchange Market Algorithm (EMA) [33], inspired by the process of trading shares on the stock exchange; the Driving Training-Based Optimizer (DTBO) [34], motivated by the process of learning to drive in driving school under a driving instructor; and Teaching Learning Based Optimization (TLBO) [35]. The fourth category comprises physics-based algorithms, which are motivated by physical laws such as inertia, electromagnetism, and gravitation. For example, the Gravitational Search Algorithm (GSA) [36] is based on mass interactions and the law of gravity; other examples are the Black Hole (BH) algorithm [37] and the Multi-verse Optimizer (MVO) [38]. The fifth category, math-based metaheuristics, consists of algorithms inspired by simulating certain mathematical functions, such as the Sine Cosine Algorithm (SCA) [39], Golden Sine Algorithm (GSA) [40], Base Optimization Algorithm (BOA) [41], and Cosine Swarm Algorithm [42]. Most of the metaheuristic algorithms mentioned above have been applied by researchers around the world to solve different kinds of optimization problems. However, they still have the primary disadvantages of slow and premature convergence, which cause the search process to become stuck at suboptimal values. For example, GA can be difficult to tune and time-consuming to run. Additionally,
it may not find the global optimum and can become stuck in local optima. The PSO algorithm has the disadvantages of premature convergence and being trapped in local optima. GWO easily falls into local optima and converges slowly in the later stage of the search. The Ant Lion Optimizer (ALO) suffers from premature convergence, entrapment in local optima, slow convergence speed, and a high possibility of failing to reach the global optimum. WOA has the weaknesses of low accuracy, slow convergence, and easily falling into local optima. The CS algorithm has several disadvantages, such as slow search speed and low convergence accuracy. LFD is still prone to drawbacks such as slow convergence, local-optima entrapment, many hyper-parameters, and an imbalanced trade-off between exploration and exploitation. TSA exhibits poor convergence and falls into locally optimal solutions when solving high-dimensional multimodal optimization problems. SSA suffers from loss of population diversity and trapping in locally optimal solutions. The COOT algorithm tends to get stuck in local regions. GSA has drawbacks: complex objective functions may exhibit substantial computational requirements, complex processes, numerous control parameters, and poor convergence. The main disadvantage of TLBO is its convergence rate, which gets even worse when dealing with high-dimensional problems. The Black Hole algorithm is not able to find a good balance between exploitation and exploration. Thus, new advancements in optimization techniques are always required to overcome the aforementioned limitations. Additionally, the No-Free-Lunch (NFL) theorem [43] asserts that no algorithm can be effective for all optimization challenges. As a result, there is always an opportunity to establish new MAs or to improve/alter current ones to overcome difficult optimization problems in various fields. In this sense,
researchers have primarily focused on three areas. First, proposing novel algorithms inspired by evolutionary, swarm, human, physical and mathematical phenomena. Second, hybridizing the exploitation and/or exploration processes of different optimization algorithms; some examples of hybrid methods are SCA-GWO [44], SCA-ABC [45], SA-WOA [46], GWO-PSO [47], GWO-GOA [48], and WOA-DE [49]. Third, improving existing metaheuristic algorithms by employing various learning mechanisms such as levy-flight-based approaches [50, 51], quantum-behaved approaches [52, 53], approaches with chaotic maps [54, 55], and opposition-based learning; opposition-based optimization algorithms are deliberated in the literature review section.
In this study, an interesting metaheuristic algorithm, namely GJO, is chosen due to its global optimization capability, few control parameters and simple implementation strategy. GJO is a recently developed swarm-based metaheuristic algorithm inspired by the collaborative hunting behaviour of golden jackals (Canis aureus), proposed by Nitish Chopra et al. in 2022 [56]. However, since the updated position of prey often depends on the male golden jackal, there is a lack of diversity among the golden jackals in certain scenarios, and the algorithm tends to get stuck in a local optimum when facing some classical and complex problems. Hence, to overcome these limitations, this paper proposes an improved golden jackal optimization algorithm incorporating an opposition-based learning (OBL) strategy, named OGJO. The idea behind OBL is to explore both the original position and the opposite position at the same time to estimate the fittest solutions [57]. It has been mathematically shown that opposite numbers are, on average, closer to the optimal solution than purely random ones. Hence, in this work, OBL is employed with GJO to improve the exploration of the search region so that stagnation of the solutions can be avoided. The performance of the newly developed OGJO was assessed and compared with eminent search algorithms using 23 standard functions [58], 10 modern CEC2019 test functions [59] and six real-life engineering problems. The experimental outcomes reveal that OGJO outperforms GJO and the other competitive algorithms.
The remaining sections are summarised as follows: Sect. 2 contains the literature review for OBL. Section 3 contains the preliminaries. Section 4 contains the numerical experiments and results analysis. Section 5 applies OGJO to classical engineering problems. Section 6 contains the conclusion and future work.

Literature Review for Opposition-Based Learning
Generally, metaheuristic approaches begin by producing an initial population of random solutions to identify the optimal solution. These initial solutions are produced arbitrarily or based on prior information, such as a domain search specification or other factors. In the absence of this information, however, these approaches cannot converge to the optimal solution because they operate arbitrarily in the search space. In addition, these methods are time-consuming because they depend on the distance between the initial and final solutions. In order to address these issues, OBL offers a technique to explore a solution in the opposite direction of the current solution; as a result, the solution gets closer to the optimal solution, and convergence happens more rapidly. In this section, the most important MA studies hybridized with opposition-based learning are discussed. ABC [60] is a swarm-based algorithm that imitates how a group of honeybees works together to find food. However, the ABC algorithm suffers from slow convergence during the search process and is prone to premature convergence. In order to fix this weakness of the ABC algorithm, Zhao et al. proposed an ABC employing an opposition-based learning strategy (OBL-ABC) [61]. It initiates the opposite solution using the employed bee and the onlooker bee and then selects the best solution as the new locations of the employed bee and onlooker bee in order to increase the search areas. This approach also suggests a novel update rule that can keep the benefits of the employed bee and onlooker bee while enhancing the exploration of OBL-ABC. Particle swarm optimization (PSO) was enhanced by employing the OBL mechanism [62], where OBL was employed in the updating stage to improve the experiences of the particles and the collective knowledge of the swarm. Gao et al.
developed a modified harmony search (HS) approach based on the OBL technique to enhance the mutation operation, and this algorithm was employed to solve twenty-four benchmark test functions as well as an optimal wind generator design [63]. Radha et al. improved the differential evolution algorithm by combining the chaotic and OBL approaches; the OBL generated the initial population, while the chaotic approach updated the mutation parameter [64]. Gong (2016) suggested an enhanced version of the fireworks algorithm that employed OBL to produce the initial population and compelled the population to leap to new solutions [65]. Elaziz et al. proposed an enhanced opposition-based SCA (OBSCA) for global optimization, in which the superior position between the original position and its opposite is selected using opposition-based learning [66]. The Equilibrium Optimizer (EO) [67] is a recently proposed algorithm based on control-volume mass balance models. However, EO has the drawbacks of poor exploitation ability, easy entrapment in local optima and an unsatisfactory balance between exploration and exploitation.
To overcome these drawbacks, Fan, Qingsong, et al. developed a modified equilibrium optimizer employing opposition-based learning and new update rules that combines four major modifications: an OBL strategy, a new nonlinear time control strategy, new population update rules, and a chaos-based strategy [68]. The Salp Swarm Algorithm (SSA) imitates the behaviour of salps while foraging and navigating in the ocean. Each salp forages in the same manner as the finest salp. However, being too near the best solution diminishes the exploration capability of the SSA, making it difficult for the algorithm to converge in later periods. Abdelaziz G. Hussien proposed a modified opposition-based SSA for global optimization and engineering problems [69]. The GWO algorithm is motivated by the leadership hierarchy and hunting method of grey wolves in the wild. It has attracted a great deal of attention from the heuristic-algorithm community due to its superior optimization capability and limited parameters. However, when solving complex and multimodal functions, it also easily becomes trapped in local optima. Xiaobing Yu et al. proposed an opposition-based learning grey wolf optimizer (OGWO) to improve the performance of GWO [70]. Chandran, Vanisree, et al. proposed an enhanced opposition-based GWO for solving global optimization and engineering problems [71]. The Moth Swarm Algorithm (MSA), motivated by the orientation of moths toward moonlight, is an associative learning method with immediate memory that employs Levy mutation for cross-population diversity and spiral movement [72]. However, due to the nature of its operators, this type of approach is prone to becoming stuck in sub-optimal positions, which impacts the convergence speed and the computational work required to achieve superior solutions.

Golden Jackal Optimization (GJO) Algorithm
The golden jackal optimization (GJO) method [56] was proposed by Nitish Chopra and Mohsin Ansari in 2022. This method mimics the hunting behaviour of the golden jackal in nature. The golden jackal is a terrestrial predator of moderate size that is a member of the canine family. They live in the regions of North Africa, East Africa, Europe, Southeast Asia, the Middle East, and Central Asia. The jackal's small stature and long legs allow it to run long distances to hunt its prey. Golden jackals usually hunt in male and female pairs. The hunting behaviour of the golden jackal, shown in Fig. 1, is divided into three steps: Step 1: Prey probing.
Step 2: Surrounding and disturbing the prey until it ceases to move.
Step 3: Attack the target with a pounce.

Mathematical Model to Design Algorithm
In this part, the golden jackal's hunting technique is mathematically designed to build the GJO algorithm and perform optimization.

Formulation of Search Space
The golden jackal optimizer is a swarm-based approach where the initial candidate solutions are arbitrarily generated throughout the search region as the first trial. The initial solution Z_0 is expressed as Eq. (1):

Z_0 = L + rand × (U − L)   (1)

where L and U represent the lower and upper limits of the search region, respectively, and rand denotes a uniform random number in [0, 1]. The initial matrix of prey positions produced in this phase is listed in Eq. (2), and the two fittest prey are selected as the jackal pair:

Prey = [Z_{1,1} Z_{1,2} … Z_{1,d}; Z_{2,1} Z_{2,2} … Z_{2,d}; …; Z_{n,1} Z_{n,2} … Z_{n,d}]   (2)

where Z_{i,j} is the j-th element of the i-th prey, n denotes the total number of prey, and d represents the number of variables. The parameters of a particular solution are referred to as the prey position. An objective function determines the fitness of each prey:

Fit_prey = [f(Z_{1,1}, …, Z_{1,d}); f(Z_{2,1}, …, Z_{2,d}); …; f(Z_{n,1}, …, Z_{n,d})]

where Fit_prey is a matrix containing the fitness values of all prey and f denotes the cost function. The fittest value represents the male jackal, while the second best represents the female jackal.
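The initialization and jackal-pair selection described above can be sketched in code. This is a minimal illustration of Eq. (1) and the fitness ranking, not the authors' implementation; the function names are our own.

```python
import numpy as np

def initialize_population(n, d, lower, upper, seed=None):
    """Randomly place n prey in a d-dimensional search region,
    following Eq. (1): Z0 = L + rand * (U - L)."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + rng.random((n, d)) * (upper - lower)

def rank_jackals(population, f):
    """Evaluate the cost function f on every prey and return the two
    fittest positions: the male jackal (best) and female jackal (second best)."""
    fitness = np.apply_along_axis(f, 1, population)
    order = np.argsort(fitness)
    return population[order[0]].copy(), population[order[1]].copy()
```

For a minimization problem such as the sphere function, the male jackal is simply the row with the lowest cost and the female jackal the runner-up.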

Exploration Phase
Jackals can detect and track their prey, but occasionally the prey evades capture. Therefore, the jackals wait and hunt for other prey. A male jackal typically takes the lead during hunts, and the female jackal trails the male jackal:

Y_1(t) = Z_M(t) − E · |Z_M(t) − rl · Prey(t)|   (3)
Y_2(t) = Z_FM(t) − E · |Z_FM(t) − rl · Prey(t)|   (4)

where t represents the current iteration, Prey(t) indicates the prey position vector, and Z_M(t) and Z_FM(t) represent the male and female jackal positions in the search space, respectively. E is the escape energy of the prey and is determined as follows:

E = E_1 × E_0   (5)

where E_1 and E_0 express the decreasing energy and the initial energy state of the prey, respectively:

E_0 = 2r − 1   (6)
E_1 = c_1 × (1 − t/T)   (7)

Here, r is a random number in [0, 1] and c_1 is a constant whose value is 1.5; t and T represent the current iteration and the maximum number of iterations, respectively. E_1 decreases linearly from 1.5 to 0 over the course of the iterations. The vector rl is an arbitrary random vector based on the levy flight (LF) distribution and is calculated as:

rl = 0.05 × LF(y)   (8)

The LF is calculated using Eqs. (9) and (10):

LF(y) = 0.01 × (μ × σ) / |ν|^(1/β)   (9)
σ = [Γ(1 + β) × sin(πβ/2) / (Γ((1 + β)/2) × β × 2^((β−1)/2))]^(1/β)   (10)

where μ and ν are random numbers and β is a constant whose value is 1.5. Finally, the updated position of the golden jackal is determined as:

Z(t + 1) = (Y_1(t) + Y_2(t)) / 2   (11)
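The levy-flight vector rl can be sketched as follows. This is a hedged illustration of the Mantegna-style step that GJO's Eq. (10) is based on, with mu and v drawn as Gaussian variates as in the standard formulation; the function names are our own.

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5, rng=None):
    """Mantegna-style Levy step used in Eqs. (9)-(10):
    LF = 0.01 * mu * sigma / |v|**(1/beta)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, sigma, dim)   # numerator variate, scaled by sigma
    v = rng.normal(0.0, 1.0, dim)      # denominator variate
    return 0.01 * mu / np.abs(v) ** (1 / beta)

def rl_vector(dim, rng=None):
    """Eq. (8): rl = 0.05 * LF(y)."""
    return 0.05 * levy_flight(dim, rng=rng)
```

Because the denominator |v|^(1/beta) has heavy tails, rl occasionally takes large values, which is what gives the jackals their long exploratory jumps.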

Exploitation Phase
The escaping ability of the prey is reduced when it is harassed by jackals, and the jackal pairs then encircle the prey discovered in the prior phase. After encircling, they pounce on the prey and eat it. This hunting behaviour of the male and female jackals together is expressed mathematically as:

Y_1(t) = Z_M(t) − E · |rl · Z_M(t) − Prey(t)|   (12)
Y_2(t) = Z_FM(t) − E · |rl · Z_FM(t) − Prey(t)|   (13)

and the updated position is again obtained from Eq. (11). The aim of rl in Eqs. (12) and (13) is to produce random behaviour in the exploitation period, emphasizing exploration and avoiding local optima.

Transitioning from Exploration to Exploitation
In the GJO technique, the value of E is employed to transition from exploration to exploitation. The energy level of the prey decreases significantly during its escaping behaviour. When E_0 decreases from 0 to −1, the prey becomes weaker, whereas when E_0 grows from 0 to 1, the prey becomes stronger. If |E| > 1, jackal pairs look in different areas to explore prey; if |E| < 1, the jackals attack the prey and perform exploitation.
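The exploration/exploitation switch on |E| can be sketched as a single update step for one prey. This is an illustrative, vectorized reading of the published GJO equations (exploration when |E| >= 1, exploitation otherwise, averaged as in Eq. (11)), not the authors' code.

```python
import numpy as np

def gjo_update(prey, z_m, z_fm, E, rl):
    """One GJO position update for a single prey, switching on |E|:
    exploration (Eqs. 3-4) when |E| >= 1, exploitation (Eqs. 12-13)
    otherwise; the result is the average of the two jackal-guided
    moves (Eq. 11)."""
    if abs(E) >= 1:   # prey still strong: search away from it
        y1 = z_m - E * np.abs(z_m - rl * prey)
        y2 = z_fm - E * np.abs(z_fm - rl * prey)
    else:             # prey exhausted: encircle and pounce
        y1 = z_m - E * np.abs(rl * z_m - prey)
        y2 = z_fm - E * np.abs(rl * z_fm - prey)
    return (y1 + y2) / 2
```

Note that as E shrinks toward 0 late in the run, the update collapses toward the midpoint of the two best jackals, which is what drives convergence.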

Opposition-Based Learning
The idea of OBL was introduced as a novel technique to accelerate the convergence of evolutionary approaches when deployed on intricate problems, particularly those involving search and optimization. It was inspired by the unexpected and drastic transformations seen in social evolution. The crucial principle is that, in the current iteration, every particle or candidate solution of an optimization problem can be generated either by integrating the information acquired during the run of the algorithm or by producing simple random guesses, as in the initialization of the population. In the second case, the convergence speed can be boosted probabilistically by producing an opposite point for each candidate solution, both at the start of the process and at each iteration. OBL speculates that exploring both the direction of the initial candidate solution and the reverse direction may be advantageous for finding the global optimum more efficiently. The essential idea is the opposite number, which is stated as follows:

Opposite Number
The opposite number for any real value Z ∈ [L, U] is determined as:

Z̄ = L + U − Z   (14)

Applied componentwise, Z̄ ∈ R^n is the opposite position vector of the real position vector Z ∈ R^n. During the optimization process, the two solutions Z and Z̄ are compared by their fitness values; the better one is stored and the worse one is eliminated. For example, in a minimization problem, if f(Z̄) < f(Z), then Z is replaced by Z̄; otherwise, Z is retained.
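The opposite point and the greedy selection between a solution and its opposite are simple to express. This sketch assumes a minimization problem; the function names are our own.

```python
import numpy as np

def opposite(z, lower, upper):
    """Eq. (14), applied elementwise: Z_bar = L + U - Z."""
    return np.asarray(lower) + np.asarray(upper) - np.asarray(z)

def keep_better(z, lower, upper, f):
    """Compare a solution with its opposite and keep the fitter one
    (minimization)."""
    z_bar = opposite(z, lower, upper)
    return z_bar if f(z_bar) < f(z) else z
```

For instance, in the interval [0, 10] the opposite of 2 is 8; if the optimum lies near 10, the opposite point is the better starting guess.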

Incorporate OBL into GJO Algorithm (OGJO)
This section elucidates the proposed OGJO, which combines the capabilities of the GJO algorithm with the OBL technique to enhance the exploration of the search region and thereby improve the accuracy of the optimal solution. The GJO method benefits from OBL because it has a number of undesirable characteristics, such as poor convergence and getting trapped in local solutions, which limit the exploration of the search region and the efficient use of time. The suggested approach incorporates the opposite value while covering the search region by considering both possibilities for each calculated point in order to prevent the above situations: since a candidate and its opposite lie on either side of the search region's midpoint, one of the two has an even chance of being closer to the optimum, which increases the likelihood of finding good solutions in less time. The proposed strategy is performed in two stages. The first stage initializes the population using OBL, selecting the NP golden jackals that are closest to the optimal solution. The second stage updates the new generation of golden jackals.
(1) Initialization. As in traditional evolutionary algorithms, the initial population Z_i = {z_{i1}, z_{i2}, …, z_{ij}, …, z_{iD}} (i = 1, 2, …, NP; j = 1, 2, …, D) is generated arbitrarily in the search region, where D represents the dimension and NP denotes the population size. Then, the population is used to generate new individuals, and OBL is applied: Eq. (14) generates the opposition jackals. After that, the original population Z_i and the opposition population Z̄_i are merged into a single group, and the best NP solutions are chosen from {Z_i, Z̄_i} to form the initial population. (2) Updating stage. GJO searches for the best outcome with the help of both the male and female jackals, which are the two fittest golden jackals. This makes the optimization converge quickly. However, it is likely to get stuck in local optima because the search direction is set only by the two fittest jackals. Therefore, OBL is employed to generate new jackals with a certain probability rate p_r: a random value between 0 and 1 is generated, and if it is less than p_r, OBL produces new jackals from the existing population; the NP fittest jackals are then chosen from the combination of the current jackals and the opposition jackals. In fact, OBL can be considered a mutation operator that allows the algorithm to escape from local optima and increases the exploration ability with probability p_r. In traditional EAs, the probability of mutation is very small; therefore, a very small value is assigned to p_r.
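The probabilistic OBL step of the updating stage can be sketched as follows. This is an illustrative reading of the description above; the function name and the example rate p_r = 0.05 are our own choices (the paper only states that p_r is assigned a very small value).

```python
import numpy as np

def obl_step(population, lower, upper, f, p_r=0.05, rng=None):
    """With probability p_r, build the opposite population (Eq. 14),
    merge it with the current one, and keep the NP fittest individuals
    (minimization). Otherwise the population is returned unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() >= p_r:
        return population                  # no OBL jump this iteration
    opposite_pop = lower + upper - population   # opposition jackals
    merged = np.vstack((population, opposite_pop))
    fitness = np.apply_along_axis(f, 1, merged)
    best = np.argsort(fitness)[: len(population)]
    return merged[best]
```

Because selection is greedy over the merged group, the step can never worsen the population's total fitness, which is why OBL acts as a safe, occasional mutation-like operator.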

The Proposed OGJO Algorithm
Based on the aforementioned analysis, the steps of the proposed OGJO algorithm are presented in Algorithm 1, and the main flowchart of the OGJO algorithm is depicted in Fig. 5.

Time Complexity
An essential factor for evaluating an algorithm's performance is its computational complexity. This section discusses the time and space complexity of the proposed algorithm. (i) The initialization of the population requires O(n * Dim) time, where n is the population size and Dim is the dimension. (ii) Updating the positions and evaluating the fitness of all individuals requires O(n * Dim) time per iteration over Max_it iterations, where Max_it is the maximum number of iterations. (iii) Therefore, the total time complexity of the proposed OGJO algorithm is O(Max_it * n * Dim). For space complexity, the maximum space is occupied during the generation of offspring in the iterations; hence, the space complexity of OGJO is O(n * Dim).

Numerical Experiments and Result Analysis
In this section, an extensive collection of benchmark functions, including the 23 classical [58] and CEC2019 [59] objective functions, is employed to estimate the efficiency of OGJO in comparison with GJO and other well-known metaheuristics such as GWO, PSO, SSA, MVO, TSA, LFD, SCA, MFO, COOT and SHO. In addition, six real-life engineering problems are employed to test the proposed OGJO algorithm as a practical tool. Furthermore, statistical tests such as the Wilcoxon rank-sum test [75, 76] and the t-test [77] are implemented to analyze the significance of the results.

Measure of Performance
• Average value (avg): the average of the best results found by an algorithm over various runs, determined as:

avg = (1/R) Σ_{i=1}^{R} z_i   (15)

where z_i represents the best result obtained from the i-th run and R denotes the number of independent runs (R = 30 here).
• Standard deviation (std): the standard deviation examines whether an algorithm can consistently produce the same best result in different runs and assesses the reproducibility of the algorithmic outcomes. It is calculated as:

std = sqrt( (1/R) Σ_{i=1}^{R} (z_i − avg)^2 )   (16)

• t-test: a statistical test employed to estimate the significance of differences between the proposed method and the other metaheuristics, calculated as:

t = (avg_1 − avg_2) / sqrt( std_1^2/R + std_2^2/R )   (17)

where avg_1, avg_2 and std_1, std_2 denote the means and standard deviations of the two algorithms, respectively.
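These performance measures are straightforward to compute. The sketch below uses the population form of the standard deviation (dividing by R); some papers divide by R − 1 instead, and the paper does not state which convention it uses.

```python
import math

def avg_std(results):
    """Mean and standard deviation of the best results over R
    independent runs (population form, dividing by R)."""
    R = len(results)
    avg = sum(results) / R
    std = math.sqrt(sum((z - avg) ** 2 for z in results) / R)
    return avg, std

def t_value(avg1, std1, avg2, std2, R=30):
    """Two-sample t statistic in the form of Eq. (17), comparing two
    algorithms over R runs each."""
    return (avg1 - avg2) / math.sqrt(std1 ** 2 / R + std2 ** 2 / R)
```

A negative t value with avg_1 below avg_2 indicates the first algorithm found better (smaller) minima on average; significance is then judged against the chosen alpha level.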

Comparison of OGJO with GJO
In this subsection, the performances of the OGJO and GJO algorithms are examined on the CEC2005 and CEC2019 benchmark functions. The comparison results on the CEC2005 and CEC2019 benchmark functions in terms of avg and std are given in Tables 2 and 3, respectively. The avg values presented in Table 2 show that the proposed OGJO offers better outcomes than GJO for functions F1-F8, F10, F12-F15, and F21-F23, similar results for functions F9, F11, and F18, and nearly equivalent results for functions F16, F17, and F19. In addition, the proposed OGJO is more stable than the standard GJO, as shown by the values of the std measure for each algorithm, excluding functions F6, F13, F16 and F17. Table 3 shows that the newly proposed OGJO performs better than GJO in solving the benchmark functions cec01, cec02, cec04-cec06 and cec08-cec10, and provides similar results for function cec03.
In addition, Figs. 6 and 7 illustrate the convergence curves of OGJO and GJO. These curves demonstrate that the behaviour of the algorithms varies across the given functions during the iterations. For example, OGJO has a better convergence rate than GJO for most of the functions, excluding F16-F18 and cec05, where the convergence of GJO is the same as, or only very marginally superior to, that of OGJO.
To present additional evidence that the proposed OGJO is superior, non-parametric tests, such as the Wilcoxon rank-sum test and the t-test at the α = 5% significance level, are employed to determine whether two sets of solutions are statistically significantly different. The outcomes of the t-test at α = 0.05 for the CEC2005 and CEC2019 functions are presented in Tables 2 and 3, respectively, as calculated by Eq. (17). If a t-value in Tables 2 and 3 is boldfaced, OGJO is significantly more effective than the compared algorithm; in a tie situation, the results are displayed in italics. Moreover, the last row of Tables 2 and 3, labelled w/t/l, indicates the win, tie, and loss counts of OGJO over each algorithm in terms of t-values. From the t-values, it is observed that the performance of OGJO shows a statistically significant difference in most cases. For the Wilcoxon rank-sum test, the p-values at the α = 0.05 significance level are presented in Tables 2 and 3. For statistical significance, the p-value must be less than 0.05. From Tables 2 and 3, it is observed that most of the p-values are smaller than 0.05, which clearly demonstrates that the proposed OGJO performs superiorly in comparison to the other metaheuristics. In Tables 2 and 3, 'NA' indicates that the algorithms are statistically equivalent.

Comparison of OGJO with Other State-of-the-Art Algorithms
In this subsection, the performance of the newly proposed OGJO algorithm is compared with ten algorithms: GWO, PSO, SSA, MVO, TSA, LFD, SCA, MFO, COOT, and SHO. The statistical outcomes, such as the average and standard deviation obtained on the CEC2005 and CEC2019 test suites, are demonstrated in Tables 4-7. According to the outcomes shown in Tables 4-6, for the first, unimodal group (F1-F7), the newly proposed OGJO algorithm performed better than all other algorithms on every function except F6. Therefore, compared to the other algorithms, the OGJO method is more suitable in terms of its exploitation potential. In the second group, of multimodal functions (F8-F13), the proposed OGJO provides better results on functions F8, F9, and F11. However, for functions F12 and F13, the average of the OGJO algorithm does not reach the optimal value, whereas most of the algorithms reach the optimal value. For the fixed-dimension multimodal functions, OGJO obtains better results on F16-F23, but not on functions F14-F15. These results confirm that OGJO has better exploration capability than the other compared algorithms. To further investigate the quality of the proposed OGJO and examine its exploration, exploitation, and local-optimum-avoidance capabilities, one of the most difficult benchmarks, the CEC2019 benchmark suite, has been employed. From Table 7, it is observed that OGJO outperformed the other algorithms and achieved better statistical values in terms of avg and std. From Fig. 8, it is observed that the proposed OGJO method converges faster on all the cost functions except F5, F6, F10, F12, F13, F14, and F15. Furthermore, when compared to the GJO strategies, the OGJO method has a greater effect on convergence. Also, from Fig.
9, it is clear that OGJO has better convergence accuracy than the other algorithms on some functions, such as cec01, cec02, cec03 and cec10, and it has a greater effect on convergence when compared with GJO. Therefore, it is demonstrated that the enhancements proposed in this work lead to a better balance between the exploration and exploitation abilities of GJO. Through these enhancements, the proposed OGJO accomplishes higher search precision and a quicker convergence rate. In addition, the t-values at the α = 5% significance level are reported in Tables 4-7 to examine the significant differences among the algorithms. From Tables 4, 5, 6 and 7, it is concluded that OGJO is significantly different from the other algorithms. The p-values of the Wilcoxon rank-sum test at the α = 5% significance level are also presented in Tables 4, 5, 6 and 7, and they show that the p-values are less than 0.05. This clearly shows that OGJO performs well in comparison to the other metaheuristic algorithms.
According to these results, the proposed OGJO can quickly converge to the global optimal values for 23 benchmark functions from CEC2005 and 10 from CEC2019.Additionally, the usage of OBL to update the solutions can effectively speed up the convergence of the GJO to the optimal values.

OGJO for Classical Engineering Problems
In this section, the proposed OGJO method is examined on six real-life engineering design problems: the tension/compression spring design problem, the welded beam design problem, the pressure vessel design problem, the speed reducer design problem, the three-bar truss design problem, and the cantilever design problem.

Tension or Compression Spring Design Problem
The major concept of this design problem is to minimize the weight of a spring while considering three decision variables: the wire diameter (d or z_1), the mean coil diameter (D or z_2), and the number of active coils (K or z_3). The schematic representation of the spring design is displayed in Fig. 10. The problem is stated mathematically as:

Minimize f(z) = (z_3 + 2) z_2 z_1^2
subject to:
g_1(z) = 1 − z_2^3 z_3 / (71785 z_1^4) ≤ 0
g_2(z) = (4 z_2^2 − z_1 z_2) / (12566 (z_2 z_1^3 − z_1^4)) + 1 / (5108 z_1^2) − 1 ≤ 0
g_3(z) = 1 − 140.45 z_1 / (z_2^2 z_3) ≤ 0
g_4(z) = (z_1 + z_2) / 1.5 − 1 ≤ 0
with 0.05 ≤ z_1 ≤ 2.00, 0.25 ≤ z_2 ≤ 1.30, 2.00 ≤ z_3 ≤ 15.0.

To inspect the effectiveness of the proposed OGJO algorithm, it has been compared with eleven well-known algorithms. Table 8 depicts the results of OGJO and the compared algorithms. From Table 8, it is concluded that OGJO achieved better outcomes than the other compared algorithms.

Welded Beam Design Problem
The primary objective of this problem is to reduce the total cost of the welded beam as much as possible. There are four decision variables: the height of the bar (z_3 or t), the thickness of the bar (z_4 or b), the thickness of the weld (z_1 or h) and the length of the bar's connected portion (z_2 or l). Figure 11 shows the structure of the welded beam. The problem follows the standard mathematical formulation with the constants P = 6000 lb, L = 14 in, E = 30 × 10^6 psi and G = 12 × 10^6 psi. Eleven well-known algorithms have been contrasted with the proposed OGJO algorithm in order to assess its effectiveness. Table 9 depicts the results of OGJO and the compared algorithms. From Table 9, it is concluded that OGJO achieved better outcomes than the other compared algorithms.

Pressure Vessel Design Problem
The primary goal of this problem is to minimize the overall cost of a pressure vessel by optimizing four decision variables: the thickness of the shell (z1 or Ts), the thickness of the head (z2 or Th), the inner radius (z3 or R), and the length of the cylindrical portion of the vessel (z4 or L). Figure 12 shows the schematic representation of the pressure vessel. Table 10 depicts the implementation results of OGJO and the other compared algorithms. From Table 10, it is concluded that OGJO achieved better outcomes than the other compared algorithms.
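The mathematical formulation was lost in extraction; the standard pressure vessel model from the benchmark literature (matching the Ts, Th, R, L variables above) can be sketched as a penalized objective. The function name, penalty coefficient, and constraint labels are assumptions, not the paper's exact code.

```python
import math

def vessel_cost(z, penalty=1e7):
    """Penalized objective for the pressure vessel design problem (standard
    literature formulation; a sketch, not the paper's exact model).
    z = [Ts, Th, R, L]: shell thickness, head thickness, inner radius,
    cylindrical section length."""
    Ts, Th, R, L = z
    f = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
         + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)  # material/forming/welding cost
    g = [
        -Ts + 0.0193 * R,                                        # min shell thickness
        -Th + 0.00954 * R,                                       # min head thickness
        -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3
        + 1296000.0,                                             # min enclosed volume
        L - 240.0,                                               # max length
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

As with the other penalized objectives, a feasible design incurs no penalty, so the returned value is the vessel cost itself.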

Speed Reducer Design Problem
The main aim of this problem is to reduce the overall weight of the speed reducer by optimizing seven variables subject to restrictions on the curvature stress of the gear teeth, the transverse deflections of the shafts, the stresses in the shafts, and the surface stress. Figure 13 depicts the schematic representation of this problem. The proposed OGJO algorithm is compared with six well-known algorithms: GJO, PSO, SCA, GA, FA, and GSA. The outcomes of OGJO and the other compared algorithms are presented in Table 11. From Table 11, it is concluded that the OGJO algorithm generated better outcomes than the compared algorithms.


Three Bar Truss Design Problem
The design of the three-bar truss is a problem from the area of civil engineering. The aim is to determine two parameters (A1 and A2) so as to attain the minimum possible weight, as shown in Fig. 14. The proposed OGJO algorithm is compared with six well-known algorithms: GJO, PSO, GWO, SSA, MVO, and TSA. The outcomes of OGJO and the other compared algorithms are presented in Table 12. From Table 12, it is concluded that the OGJO algorithm generated better outcomes than the compared algorithms.
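The formulation referenced above did not survive extraction; the standard three-bar truss model from the benchmark literature (two cross-sectional areas A1 and A2, bar length l = 100, load P = 2, and allowable stress σ = 2) can be sketched as a penalized objective. These constants and names follow the common literature version and are assumptions here.

```python
import math

def truss_weight(z, l=100.0, P=2.0, sigma=2.0, penalty=1e6):
    """Penalized objective for the three-bar truss design problem (standard
    literature formulation; a sketch, not the paper's exact model).
    z = [A1, A2]: cross-sectional areas of the outer and middle bars."""
    A1, A2 = z
    s2 = math.sqrt(2.0)
    f = (2.0 * s2 * A1 + A2) * l  # total bar volume, proportional to weight
    g = [
        (s2 * A1 + A2) / (s2 * A1 ** 2 + 2.0 * A1 * A2) * P - sigma,  # outer bar stress
        A2 / (s2 * A1 ** 2 + 2.0 * A1 * A2) * P - sigma,              # middle bar stress
        1.0 / (A1 + s2 * A2) * P - sigma,                             # buckling-side stress
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

For a feasible pair of areas, the returned value is just (2√2·A1 + A2)·l, the quantity the truss problem minimizes.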

Cantilever Design Problem
The main aim of this design problem is to decrease the weight of the cantilever beam subject to a single constraint, with five parameters that indicate five distinct block lengths. The pictorial representation of this design is depicted in Fig. 15. The decision vector is z = (z1, z2, z3, z4, z5), with the variable range 0.01 ≤ zi ≤ 1, i = 1, 2, 3, 4, 5. The proposed OGJO algorithm is compared with six prominent algorithms: GJO, AO, MFO, CS, GOA, and ARO. The outcomes of OGJO and the other compared algorithms are presented in Table 13. From Table 13, it is concluded that the OGJO algorithm generated better outcomes than the compared algorithms.
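The objective and the single constraint were lost in extraction; the standard cantilever beam model from the benchmark literature (five block lengths, one deflection constraint) can be sketched as a penalized objective. The coefficients below follow the common literature version and are assumptions here; variable bounds are left to the optimizer.

```python
def cantilever_weight(z, penalty=1e6):
    """Penalized objective for the cantilever beam design problem (standard
    literature formulation; a sketch, not the paper's exact model).
    z = [z1..z5]: the side lengths of the five hollow square blocks."""
    f = 0.0624 * sum(z)  # total beam weight
    g = (61.0 / z[0] ** 3 + 37.0 / z[1] ** 3 + 19.0 / z[2] ** 3
         + 7.0 / z[3] ** 3 + 1.0 / z[4] ** 3 - 1.0)  # single deflection constraint
    return f + penalty * max(0.0, g) ** 2
```

When the deflection constraint is satisfied, the returned value is simply 0.0624 times the sum of the block lengths.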
Considering the evaluation of OGJO on these engineering problems and the results of the compared optimization algorithms, OGJO has demonstrated strong performance on engineering problems of both small and large dimensions as well as high complexity.

Conclusion and Future Work
This study proposed an improved version of the standard GJO algorithm, called OGJO, based on the OBL strategy. The main purpose of the OBL technique is to improve the convergence rate and global search capability by generating the opposite of the current solution. To examine the performance of the proposed OGJO, the CEC2005 and CEC2019 test suites were selected, and OGJO was compared with GJO and other prominent metaheuristic algorithms. The comparisons demonstrated that OGJO attained better outcomes than the compared algorithms in terms of average and standard deviation. In addition, statistical tests such as the t-test and the Wilcoxon rank-sum test were performed to confirm that the differences between OGJO and the other algorithms are significant. Moreover, evaluations on engineering problems show that the proposed OGJO produces superior outcomes compared with the other algorithms. However, the improvement of OGJO is less pronounced on the CEC2019 benchmark functions. Based on the promising results generated by the proposed OGJO algorithm, future research could apply OGJO in different fields, such as image segmentation, feature selection, multi-objective problems, decision making and decision support systems (DSS), and several other kinds of optimization problems. Moreover, the performance of the proposed OGJO algorithm on complex functions could be further improved by employing additional techniques.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 1 A A golden jackal pair. B The golden jackal in search of prey. C Pursuit and capture of prey. D, E Attacking the prey. Figures 2, 3, and 4 indicate the solution Z and its opposite Ẑ in one-, two-, and three-dimensional spaces, respectively.

Fig. 2
Fig. 2 One dimensional space of OBL mechanism

Fig. 3 Two-dimensional space of OBL mechanism. Fig. 4 Three-dimensional space of OBL mechanism.
(i) It takes O(n × Dim) time to initialize the population of jackals, where n denotes the population size and Dim the dimension of the variables; O(n) is required to calculate the fitness value of each jackal. (ii) The fitness value of each jackal is evaluated over the iterations in O(Max_it × n) time,

Table 1
Parameter settings of GJO and OGJO

Table 3
Results on unimodal, multi-modal, and fixed-dimensional multimodal functions

Table 4
Results of OGJO and other algorithms on the unimodal (F1-F7) functions
Fig. 8 Convergence curve of CEC2005 benchmark functions for all algorithms
International Journal of Computational Intelligence Systems (2023) 16:147

Table 5
Results of OGJO and other algorithms on the multi-modal (F8-F13) functions

Table 6
Results of OGJO and other algorithms on the fixed-dimensional multimodal (F14-F23) functions

Table 7
Results of OGJO and other algorithms on unimodal and multi-modal benchmark functions

Table 10 The outcomes obtained in solving pressure vessel design problem
Fig. 13 Speed reducer design problem


Table 12
The outcomes obtained in solving three bar truss design problem