Enhancing the Harris’ Hawk optimiser for single- and multi-objective optimisation

This paper proposes an enhancement to the Harris' Hawks Optimisation (HHO) algorithm. Firstly, an enhanced HHO (EHHO) model is developed to solve single-objective optimisation problems (SOPs). EHHO is then extended to a multi-objective EHHO (MO-EHHO) model to solve multi-objective optimisation problems (MOPs). In EHHO, a nonlinear exploration factor is formulated to replace the original linear exploration method, which improves the exploration capability and facilitates the transition from exploration to exploitation. In addition, the Differential Evolution (DE) scheme is incorporated into EHHO to generate diverse individuals. A chaos strategy that increases randomness to cover wider search areas is adopted to replace the DE mutation factor. The non-dominated sorting method with the crowding distance is leveraged in MO-EHHO, while a mutation mechanism is employed to increase the diversity of individuals in the external archive for addressing MOPs. Benchmark SOPs and MOPs are used to evaluate the EHHO and MO-EHHO models, respectively. The sign test is employed to ascertain the performance of EHHO and MO-EHHO from the statistical perspective. Based on the average ranking method, EHHO and MO-EHHO demonstrate their efficacy in tackling SOPs and MOPs, as compared with the original HHO algorithm, its variants, and many other established evolutionary algorithms.


Introduction
Multi-objective optimisation problems (MOPs) need to consider more than one conflicting objective simultaneously, and they are ubiquitous in many real-world engineering applications. Unlike single-objective optimisation problems (SOPs), MOPs are more challenging to solve because their objectives are often conflicting and/or incommensurable, and improving one objective deteriorates others. Hence, finding a set of equally optimal solutions, i.e. the Pareto optimal set (PS), is necessary, since no single solution can fulfil all objectives simultaneously (Akbari et al. 2012; Li et al. 2011).
Evolutionary algorithms (EAs) have become popular in recent years, mainly due to their population-based search capabilities that can approximate the PS. The applicability of EAs, however, is not always as good as that of other metaheuristics (Khan et al. 2016, 2019). Metaheuristic algorithms are flexible and easy to implement for solving SOPs and MOPs because they do not rely on gradient information of the objective landscape. While metaheuristic algorithms have been widely used to tackle various SOPs and MOPs, one key limitation is that the user-defined parameters of each metaheuristic algorithm require accurate tuning to realise its full potential. Another downside is slow or premature convergence towards the global optimum (Talbi 2009; Heidari et al. 2019).
Two main types of metaheuristic algorithms are single solution-based and population-based models. Simulated Annealing (SA) (Van Laarhoven and Aarts 1987) and Variable Neighbourhood Search (VNS) (Mladenović and Hansen 1997) are examples of single solution-based metaheuristics, while Genetic Algorithm (GA) (Holland 1992b) and Differential Evolution (DE) (Price 1996; Storn and Price 1997) are examples of population-based metaheuristics (p-metaheuristics) (Talbi 2009). Single solution-based methods operate on one solution at a time during the optimisation process, while p-metaheuristics methods process a set of solutions in each iteration. As expected, p-metaheuristics methods are more efficient than single solution-based metaheuristics and can often find an optimal solution, or suboptimal solutions located in the neighbourhood of the optimal solution space. By maintaining a population of solutions, these algorithms can avoid local optima and explore new regions of the search space. Heidari et al. (2019) present the four main types of p-metaheuristics methods, namely EAs, physics-based, human-based, and swarm intelligence (SI) models. The first type of p-metaheuristics methods is EAs. GA and DE are the most popular EAs, and many enhancements have been introduced (Holland 1992b; Price 1996). These algorithms use a variety of operators, such as mutation, crossover, and selection in the GA, to generate new solutions and improve the quality of the population. The search process involves the evaluation of multiple solutions simultaneously and the selection of a pool of good solutions for the next generation. Genetic Programming (GP) and Biogeography-Based Optimiser (BBO) are other examples of EAs (Kinnear et al. 1994; Simon 2008). Physics-based metaheuristics methods include Big-Bang Big-Crunch (BBBC) (Erol and Eksin 2006), Central Force Optimisation (CFO) (Formato 2007) and the Gravitational Search Algorithm (GSA) (Rashedi et al. 2009).
As the name suggests, these metaheuristics methods are based on the physics principles of a system. Human-based metaheuristics methods imitate human behaviours, e.g. Teaching Learning-Based Optimisation (TLBO) (Rao et al. 2011), Tabu Search (TS) (Glover and Laguna 1998) and Harmony Search (HS) (Geem et al. 2001). SI algorithms have been the current trend for solving MOPs. SI algorithms are generally based on the social behaviours of organisms and social interactions of the population, e.g. the Particle Swarm Optimisation (PSO) algorithm that simulates bird flocking behaviours (Kennedy and Eberhart 1995).
Recently, many new nature-inspired SI models have been proposed, such as the Grey Wolf Optimisation (GWO) algorithm (Mirjalili et al. 2014), Whale Optimisation Algorithm (WOA) (Mirjalili and Lewis 2016), Grasshopper Optimisation Algorithm (GOA) (Mirjalili et al. 2018), Moth-Flame Optimisation (MFO) (Mirjalili 2015), Artificial Bee Colony (ABC) (Karaboga and Basturk 2008), and Harris' Hawk Optimisation (HHO) (Heidari et al. 2019). The general procedure of these SI models can be summarised as follows:

• Generate a set of individuals (population), where each individual represents a solution
• Update each individual in the population using a metaheuristic algorithm
• Replace the current population with the newly generated population in an iterative manner

The above procedure continues until a termination condition is satisfied, e.g. the maximum number of iterations. In accordance with the 'no free lunch' (NFL) theorem, there is no universal optimisation method for solving all possible problems (Wolpert and Macready 1997). Many new optimisation methods and their enhanced variants have been introduced, since no single algorithm is effective for solving all classes of optimisation problems.
SI models have been used to solve various optimisation problems in different fields. Nonetheless, there is room for further research to improve their performance and extend their applications. Several enhancements of metaheuristic algorithms have been proposed to improve efficiency. In general, the four categories of enhancements are: (i) hybrid metaheuristic algorithms, (ii) modified metaheuristic algorithms, (iii) integration with chaos theory, and (iv) multi-objective metaheuristic algorithms. Note that hybrid metaheuristic algorithms integrate two or more algorithms for performance enhancement, while modified metaheuristic algorithms change the search process to improve the performance of an algorithm.
One major problem of all metaheuristic algorithms is premature convergence and entrapment in local optima, particularly in large-scale optimisation problems. Hybrid metaheuristic algorithms are used to solve optimisation problems more efficiently and with higher flexibility (Bettemir and Sonmez 2015; Li and Gao 2016). The GA and SA models are useful stochastic search algorithms for solving combinatorial optimisation problems. GA has been developed based on the survival-of-the-fittest and natural selection principles, while SA is inspired by the physical process of annealing (Kirkpatrick et al. 1983; Van Laarhoven and Aarts 1987; Holland 1992a). SA is often coupled with GA to avoid local optima by occasionally accepting non-improving solutions. The Hybrid Genetic Algorithm and Simulated Annealing (HGASA) model is good at tackling optimisation problems such as project scheduling with multiple constraints and railway crew scheduling (Chen and Shahandashti 2009; Hanafi and Kozan 2014; Bettemir and Sonmez 2015). Leveraging the capability of DE in searching for good and diverse solutions, many researchers have proposed integrating DE with other p-metaheuristics algorithms. Wang and Li (2019) enhanced solution diversity by combining DE and GWO. An adaptive DE model was combined with GOA for multi-level satellite image segmentation (Jia et al. 2019). The resulting hybrid algorithm was evaluated with a series of experiments using satellite images, demonstrating search efficiency and population diversity.
Modification methods in metaheuristic algorithms refer to changing the search process to improve the performance of an algorithm. One such method is the use of local search techniques to improve solution quality. Local search techniques involve searching the local neighbourhood of a solution to find a better one. Another modification method is the use of diversity mechanisms to prevent an algorithm from converging prematurely. Diversity mechanisms involve maintaining a diverse population of solutions to explore different regions of the search space. As an example, Gao et al. (2008) employed a logarithm-decreasing inertia weight and chaos mutation with PSO to improve the convergence speed and diversity of solutions. In Gao and Zhao (2019), a variable weight was assigned to the leader of the grey wolves, and a controlling parameter was introduced to decline exponentially and alter the transition pattern between exploration and exploitation. Moreover, Xie et al. (2020) enhanced GWO by integrating a nonlinear exploration factor to increase exploration in the early stage and improve exploitation in the second half of the search process.
This paper proposes enhancements to the HHO algorithm. Specifically, the nonlinear exploration factor, DE scheme, chaos strategy, and mutation mechanism from EAs are leveraged, leading to the proposed enhanced HHO (EHHO) model and its extension to multi-objective EHHO (MO-EHHO) for solving SOPs and MOPs, respectively. A nonlinear exploration factor is introduced to replace the linear exploration factor in the original HHO algorithm. This nonlinear exploration factor aims to improve the exploration capability in the early search stage and facilitate the transition from exploration to exploitation. The DE scheme is then used to generate new, diverse individuals through mutation, crossover and selection. A chaos strategy is adopted to replace the DE mutation factor, in order to increase randomness and find more valuable search areas. In MO-EHHO, a mutation mechanism is introduced to increase the diversity of individuals in the external archive. The fast non-dominated sorting and crowding distance techniques from NSGA-II are also employed to select the optimal solutions in each iteration and to maintain the diversity of the non-dominated solutions.
Benchmark SOPs and MOPs are used to evaluate the effectiveness of EHHO and MO-EHHO (Yao et al. 1999; Digalakis and Margaritis 2001). The benchmark suite of SOPs includes the following tasks: unimodal (UM) functions F1-F7, multimodal (MM) functions F8-F23, and composition (CM) functions F24-F29. On the other hand, MO-EHHO is applied to high-dimensional bi-objective problems and scalable tri-objective problems, both without equality or inequality constraints, for performance evaluation. These include the ZDT benchmark functions (ZDT1-4, 6) (Deb et al. 2002; Deb and Agrawal 1999) and the DTLZ benchmark functions (DTLZ1-DTLZ7) (Deb et al. 2005). The statistical sign test with a 95% confidence level is used for performance comparison. Four performance indicators are used, namely convergence, diversity, generational distance (GD), and inverted generational distance (IGD). The average ranking method (Yu et al. 2018) is leveraged to rank the performance of MO-EHHO against its competitors on each performance indicator.
This study introduces enhanced HHO-based models for solving SOPs and MOPs. The organisation of this paper is as follows. In Sect. 2, a review of HHO, chaos theory, and DE is provided. In Sect. 3, the HHO algorithm and the proposed enhancements, namely EHHO and MO-EHHO, are explained in detail. The effectiveness of EHHO and MO-EHHO for solving SOPs and MOPs is evaluated, compared, analysed and discussed in Sect. 4. Concluding remarks and suggestions for future work are presented in Sect. 5.

Literature review
This section introduces the concepts of multi-objective optimisation and current metaheuristic methods for solving MOPs.

Harris' Hawks optimisation algorithm
Proposed by Heidari et al. (2019), the HHO algorithm is a swarm-based, nature-inspired metaheuristic model. It imitates the collaborative behaviours of Harris' hawks working together to search for and chase prey in different directions. A variety of hunting strategies are performed by Harris' hawks based on the prey's escaping energy. The mathematical model pertaining to the behaviours of Harris' hawks is formulated for solving optimisation problems. The HHO algorithm has received extensive interest from researchers due to its effectiveness and fast convergence speed; however, it suffers from the following problems:
1. it is greatly affected by the tuning parameters;
2. it has a weak transition between exploration and exploitation;
3. it easily falls into local optima;
4. it shows poor convergence for high-dimensional and multimodal problems; and
5. it produces limited diversity of solutions.
Enhancements of HHO have been proposed, e.g. hybridisation of HHO, modification of HHO and chaotic HHO (Alabool et al. 2021). Hybridisation is a popular method to improve the performance of an algorithm. HHO has been integrated with other optimisation methods to find solutions faster, avoid falling into local optima, provide better solution quality, and improve algorithm stability (Chen et al. 2020a; Gupta et al. 2020; Ewees and Abd Elaziz 2020). Chen et al. (2020a) integrated the DE algorithm and a multi-population strategy into HHO to overcome its convergence issue and prevent HHO from being trapped in local optima. In addition, opposition-based learning was applied to HHO by Gupta et al. (2020) to reduce the number of opposite solutions and increase the convergence speed. HHO has also been used to improve the performance of other algorithms, e.g. accelerating the search process of multi-verse optimisation (MVO) and maintaining the MVO population (Ewees and Abd Elaziz 2020).
Other HHO modifications include improving the convergence speed, increasing the population diversity, and enhancing the transition from exploration to exploitation (Alabool et al. 2021). HHO was integrated with a chaotic map to improve the balance between exploration and exploitation (Zheng-Ming et al. 2019; Qu et al. 2020). Specifically, Zheng-Ming et al. (2019) used a tent map in the exploration phase to identify the optimal solution rapidly, while Qu et al. (2020) employed a logistic map in the escaping energy factor to balance global exploration and local exploitation. The use of HHO for solving MOPs was investigated in Islam et al. (2020) and Jangir et al. (2021). Table 1 summarises the main enhancement methods of HHO.

Differential evolution (DE)
Introduced by Price (1996) and Storn and Price (1997), DE is well known for its simplicity, few control parameters and superior optimisation performance. Storn and Price (1997) employed DE to solve global optimisation problems. However, one major problem of all metaheuristic algorithms is the potential for early convergence and entrapment in local optima. Leveraging DE's capability of finding better and diverse solutions, many researchers have proposed integrating DE with other p-metaheuristics algorithms. As an example, Wang and Li (2019) enhanced solution diversity by integrating DE and the GWO algorithm. An adaptive DE was combined with GOA by Jia et al. (2019) for multi-level satellite image segmentation. The hybrid algorithm was evaluated with a series of experiments using satellite images and was able to increase search efficiency and population diversity through iterations. In addition, hybrid models of DE and HHO can elevate the performance of the original HHO algorithm. Birogul (2019) introduced HHODE, which blends HHO and DE, using mutation operators from DE to harness its exploration strengths. The hybridisation manages to strike a balance between exploratory and exploitative tendencies. It was tested on a modified IEEE 30-bus test system for the optimal power flow problem and proved to be more effective than other optimisation algorithms. Besides, DE was used by Chen et al. (2020a) in HHO to boost its local search capability and exploit the DE operations, such as mutation, crossover and selection, to discover better and more diverse solutions. A new HHO variant was introduced by Islam et al. (2020) to solve the power flow optimisation problem. Multiple researchers have conducted studies demonstrating the hybridisation of DE with HHO for improving the HHO performance. Table 1 summarises the main publications on this topic (Birogul 2019; Bao et al. 2019; Wunnava et al. 2020; Abd Elaziz et al. 2021; Abualigah et al. 2021; Liu et al. 2022). In short, DE is useful for maintaining more diverse, high-quality individuals without losing convergence speed in HHO.

Chaos theory
Chaos theory helps enhance population diversity and cover wider search areas that an algorithm may otherwise miss. As mentioned before, HHO has a poor changeover from exploration to exploitation, hence its susceptibility to local optima. Besides, HHO exhibits poor performance in tackling high-dimensional and multimodal problems. In this respect, chaos theory is often applied to an optimisation algorithm to enhance its global search capability (Ewees and Abd Elaziz 2020; Chen et al. 2020b; Liu et al. 2020; Dhawale et al. 2021; Hussien and Amin 2022). A variety of chaotic maps are available: Chebyshev, circle, intermittency, iterative, Leibovich, logistic, piecewise, sawtooth, sine, singer, sinusoidal and tent maps. Researchers commonly use logistic and tent maps to improve the HHO performance (Zheng-Ming et al. 2019; Menesy et al. 2019; Chen et al. 2020a). Chen et al. (2020a) introduced a chaotic sequence into the escaping energy of the rabbit in HHO to enhance the global search ability and cover broader search areas. A chaotic map was incorporated into the adaptive mutation strategy and integrated into the external archive by Liu et al. (2020) to attain a set of diverse non-dominated solutions. Moreover, Xie et al. (2020) employed a chaotic strategy to assign weights to the leader wolf in GWO to increase the diversity of the leader wolves. Barshandeh and Haghzadeh (2021) integrated a chaos strategy to initialise the population and improve the solution diversity, allowing a larger search space to be covered. Table 1 shows the key publications on the use of chaos theory for improvement of the HHO algorithm.

Multi-objective optimisation
MOPs are similar to SOPs in terms of problem definition, but with multiple objective functions that cannot be efficiently solved by a single-objective algorithm. A set of equilibrium solutions exists for solving an MOP, i.e. a set of non-inferior solutions or Pareto optimal solutions (Fonseca and Fleming 1998). An MOP can be formulated as follows:

Minimise f(x) = [f_1(x), f_2(x), ..., f_o(x)]

subject to:

g_j(x) ≤ 0, j = 1, 2, ..., r
h_j(x) = 0, j = 1, 2, ..., s
Lb_i ≤ x_i ≤ Ub_i, i = 1, 2, ..., n

where n is the number of design variables, o is the number of objective functions, r is the number of inequality constraints, s is the number of equality constraints, g_j and h_j are the j-th inequality and equality constraints, respectively, and Lb_i and Ub_i are the lower and upper bounds of the i-th design variable.
In a single-objective problem, solution x_1 is better than x_2 if f(x_1) < f(x_2); however, this definition cannot be used in an MOP with two or more objective functions. Under the Pareto dominance concept, solution x_1 is better than (dominates) x_2 if x_1 is no worse than x_2 on every objective and strictly better on at least one objective (Fonseca and Fleming 1998). Two solutions are denoted as non-dominated with respect to each other if neither of them dominates the other.
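The dominance relations above can be sketched as a minimal Python check (minimisation assumed; the function names are illustrative, not from the paper's implementation):

```python
def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimisation):
    no worse on every objective and strictly better on at least one."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def non_dominated(f1, f2):
    """True if neither objective vector dominates the other."""
    return not dominates(f1, f2) and not dominates(f2, f1)
```

For example, [1, 2] dominates [2, 3], while [1, 3] and [2, 2] are mutually non-dominated.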
Recently, p-metaheuristics algorithms have been utilised to solve MOPs due to their population-based search capabilities of yielding multiple solutions in a single run. The Pareto dominance concept is used to determine the PS. Most p-metaheuristics algorithms employ non-dominated ranking and a Pareto strategy to ensure the diversity of the PS. Several useful p-metaheuristics algorithms that store the best PS when there are multiple optimal solutions are NSGA-II (Deb et al. 2002), SPEA2 (Zitzler et al. 2001), and MOPSO with an archive (Coello et al. 2004). Mirjalili et al. (2016) discussed several multi-objective p-metaheuristics algorithms for solving MOPs, such as the multi-objective grey wolf optimiser (MOGWO) (Mirjalili et al. 2016), multi-objective ant lion optimiser (MOALO) (Mirjalili et al. 2017b), multi-objective multi-verse optimisation (MOMVO) (Mirjalili et al. 2017a) and multi-objective grasshopper optimisation algorithm (MOGOA) (Mirjalili et al. 2018). Jangir and Jangir (2018) embedded non-dominated sorting into GWO to solve several multi-objective engineering design problems and applied it to a constrained multi-objective economic emission dispatch problem with the integration of wind power. Liu et al. (2020) modified MOGWO with multiple search strategies to solve MOPs. Hence, this paper extends EHHO to a multi-objective EHHO that incorporates the non-dominated sorting from NSGA-II to sort and rank the non-dominated solutions by calculating the crowding distance between solution sets to ensure a diverse PS. An external archive with a mutation strategy is utilised to diversify the obtained PS.

Harris' Hawk optimisation
HHO was originally proposed by Heidari et al. (2019) to solve SOPs. It mimics the hunting strategy of Harris' hawks, focusing on their "surprise pounce" (also known as the "seven kills") to catch prey (e.g. a rabbit). The hawks collaborate with each other in chasing or attacking the prey, and change their attack mode dynamically based on the escaping pattern of the prey. The hawks' behaviour is modelled in three main stages: exploration, the transition from exploration to exploitation, and exploitation. Figure 1 presents the different stages of HHO, while the pseudocode of the HHO algorithm is presented in Algorithm 1.
The HHO algorithm begins by initialising a fixed number of randomised individuals (population), in which every individual represents a candidate solution. The individuals are initialised randomly within the lower and upper boundaries of the problem. The fitness value of each individual is calculated in every iteration until the stopping criteria are satisfied.
The key mathematical formulation of each HHO stage is as follows. Exploration stage: the hawks perch randomly based on the positions of other hawks and the prey. Transition stage: the escaping energy of the prey is modelled as E = 2E_0(1 − t/T). Exploitation stage: four attack strategies are used, namely soft besiege (r ≥ 0.5 and |E| ≥ 0.5), hard besiege (r ≥ 0.5 and |E| < 0.5), soft besiege with progressive rapid dives (r < 0.5 and |E| ≥ 0.5), and hard besiege with progressive rapid dives (r < 0.5 and |E| < 0.5). Here, X(t + 1) is the position vector of an individual in the next iteration, X_rabbit(t) is the best position of the prey, X(t) is the current position of the individual, X_rand(t) is the position of a randomly selected individual, X_m(t) is the average position of the population, r_1, r_2, r_3, r_4, r_5, q, v and β are random factors within [0, 1], while LB and UB are the lower and upper bounds of the search space, respectively. In addition, E denotes the escaping energy of the prey, E_0 ∈ (−1, 1) is the initial state of the prey's energy, e ∈ (2, 0) is the exploration factor, and T is the maximum number of iterations (e.g. 100 iterations). Besides, J is the jumping strength of the prey and ΔX(t) denotes the difference between the position of an individual and the best position of the prey, while LF denotes the Lévy flight function, D is the dimension of the search space, and S is a random vector of size 1 × D (Heidari et al. 2019; Chen et al. 2020a). The detailed processes of the original HHO algorithm are shown in Algorithm 1 (Heidari et al. 2019).
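The stage-switching logic described above can be sketched in Python as follows (the function and label names are ours, for illustration only; the linear escaping energy E = 2E_0(1 − t/T) follows Heidari et al. (2019)):

```python
def escaping_energy(E0, t, T):
    # E = 2 * E0 * (1 - t/T): |E| decays towards 0 as the run progresses
    return 2 * E0 * (1 - t / T)

def select_stage(E, r):
    # |E| >= 1: global exploration; otherwise one of the four besiege modes,
    # chosen by the prey's escape chance r and remaining energy |E|
    if abs(E) >= 1:
        return "exploration"
    if r >= 0.5 and abs(E) >= 0.5:
        return "soft besiege"
    if r >= 0.5:
        return "hard besiege"
    if abs(E) >= 0.5:
        return "soft besiege with progressive rapid dives"
    return "hard besiege with progressive rapid dives"
```

For instance, at mid-run (t = T/2) with E_0 = 0.5, the energy is E = 0.5, so a hawk performs either a soft besiege or a soft besiege with progressive rapid dives, depending on r.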

Algorithm 1 Harris' Hawk Optimiser
Input: The population size N and maximum iteration T
1: Initialise the locations of individuals (solutions) (X_i, i = 1, 2, ..., N)
2: while t ≤ T do
3:   Calculate the fitness of each individual in the population
4:   Set the best individual as the location of the rabbit, X_rabbit
5:   for each search agent (X_i) do
6:     Update the escaping energy E and jumping strength J with Eqs. (4) and (9)
7:     Update the location of the individual using Eqs. (2), (7), (10), (14) and (17)
8:   end for
9: end while
Output: The location and fitness value of the rabbit

Nonlinear exploration factor
A nonlinear exploration factor is introduced to replace the linear version from the original HHO algorithm. In the HHO algorithm, the exploration factor e controls the transition pattern from exploration to exploitation as defined in Eq. (4); it decreases linearly from 2 to 0, resulting in an acute shrinkage of the search space and insufficient search attention. Multiple studies in the literature have explored adaptive control of diverse search operations to improve the conversion between exploration and exploitation. Trigonometric (Mirjalili 2016; Gao and Zhao 2019), exponential (Mittal et al. 2016; Long et al. 2018a), and logarithmic (Gao et al. 2008; Long et al. 2018b) functions are useful for controlling the exploration factor nonlinearly. In this research, we use the nonlinear exploration factor proposed by Xie et al. (2020), which combines the trigonometric functions cos and sin with the hyperbolic function tanh. This nonlinear exploration factor aims to amplify the diversification capabilities in the early exploration phase and smoothen the changeover from exploration to exploitation. The new exploration factor, e', governs the switch between exploration and exploitation, where t and Max_iter are the current iteration and the maximum number of iterations, respectively. Coefficient k controls the descending slope of the exploration factor e', and k = 5 is adopted, as used in Xie et al. (2020). The new exploration factor and escaping energy descending patterns are presented in Fig. 2a, b, respectively. As shown in Fig. 2a, the proposed nonlinear exploration factor decreases with a descending slope. The first half of the search path has a more effective exploration rate, while the second half has a lower exploration rate. The search space is significantly expanded in the first half, which helps EHHO to focus on exploration and finish the search with exploitation.
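The exact expression of e' is given in Xie et al. (2020); as an illustrative stand-in, the sketch below uses a tanh-based factor with k = 5 that reproduces the described shape (near 2 in the first half, near 0 in the second half). The specific formula here is our assumption for demonstration, not the published equation:

```python
import math

def nonlinear_exploration_factor(t, max_iter, k=5.0):
    # Illustrative stand-in (our assumption): stays near 2 early in the run
    # (strong exploration), then drops sharply towards 0 (exploitation)
    return 1.0 - math.tanh(k * (t / max_iter - 0.5))

def escaping_energy(E0, t, max_iter):
    # Escaping energy driven by the nonlinear factor instead of 2*(1 - t/T)
    return nonlinear_exploration_factor(t, max_iter) * E0
```

With k = 5, the factor is about 1.99 at t = 0 and about 0.01 at t = Max_iter, so the transition from exploration to exploitation happens around mid-run rather than gradually throughout.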

Differential evolution (DE)
Developed by Price (1996) and Storn and Price (1997), DE is a robust yet straightforward EA for optimising real-valued, multimodal functions. DE is utilised in EHHO to enhance its search ability and solution quality. Three primary processes exist in DE, namely mutation, crossover and selection (Price 1996; Storn and Price 1997). They are described in the following subsections.

Mutation
Following the DE process, three random solutions are selected from the population for mutation. A mutation factor is applied to the difference between two individuals, and the third individual is combined with the scaled difference vector to produce a new individual, as follows: where X_r1, X_r2 and X_r3 are three random solutions selected from the population, V_i is the new individual, and F ∈ [0.5, 1] is the mutation factor, which is generated by the sinusoidal mapping defined in Eq. (22). The sinusoidal mapping is a chaotic sequence, as shown in Fig. 3. In accordance with chaos theory, a chaotic sequence has three important features, i.e. sensitivity to the initial condition, ergodicity, and randomness. The chaotic mutation factor can avoid local optima and premature convergence by exploiting the randomness of the chaotic sequence.
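A minimal Python sketch of this chaotic mutation step, assuming the standard sinusoidal map x_{k+1} = 2.3 x_k² sin(π x_k) and scaling its output into the [0.5, 1] range stated for F (the scaling is our assumption):

```python
import math
import random

def sinusoidal_map(x):
    # Sinusoidal chaotic map: x_{k+1} = a * x_k^2 * sin(pi * x_k), with a = 2.3
    return 2.3 * x * x * math.sin(math.pi * x)

def de_mutation(pop, i, chaos_state):
    """Build mutant V_i = X_r1 + F*(X_r2 - X_r3) with a chaotic F."""
    # Pick three distinct random individuals different from i
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(idx, 3)
    chaos_state = sinusoidal_map(chaos_state)
    # Scale the chaotic value from (0, 1) into [0.5, 1] (our assumption)
    F = 0.5 + 0.5 * chaos_state
    V = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(len(pop[i]))]
    return V, chaos_state
```

The chaos state is threaded through successive calls, so each mutation uses the next value of the chaotic sequence rather than a fixed constant.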

Crossover
The crossover operation is modelled as follows: where rand_j is a random number in [0, 1] and Cr is the crossover ratio. A component of the solution from the mutation process replaces the corresponding component of the current solution if Cr is higher than rand_j in Eq. (23). Cr_l and Cr_u are the lower and upper boundaries of Cr.
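A binomial crossover sketch consistent with the rule above; forcing at least one mutant component via a random index j_rand is a standard DE detail that we assume here:

```python
import random

def de_crossover(X, V, Cr):
    """Binomial crossover: take the mutant component when rand_j <= Cr,
    always keeping at least one mutant component (index j_rand)."""
    D = len(X)
    j_rand = random.randrange(D)
    return [V[j] if (random.random() <= Cr or j == j_rand) else X[j]
            for j in range(D)]
```

With Cr = 1 the trial vector equals the mutant; with Cr = 0 only the single forced component comes from the mutant.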

Selection
The selection operation is modelled as follows: where X_i,g is the original individual, U_i,g is the individual from crossover, and X_i,g+1 is the new individual for the next iteration. Following crossover, DE determines the surviving solution using Eq. (25). In this regard, f is the fitness function evaluated before and after mutation and crossover. In a minimisation problem, the new solution replaces the current solution if it has a lower fitness value.
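The greedy selection step amounts to a one-line comparison in Python (minimisation; f is any fitness callable):

```python
def de_selection(X, U, f):
    # Keep the trial vector U only if it is no worse than the current X
    return U if f(U) <= f(X) else X
```

For example, with the sphere function f(x) = Σ x_i², the trial [1, 1] replaces the current [2, 2], while a worse trial is discarded.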

Mutation strategy
A mutation strategy is employed in the external archive to increase the chance of generating better non-dominated solutions when the mutation probability is smaller than the mutation factor. An adaptive mutation factor is applied and modelled as follows: where m_l and m_u are the lower and upper boundaries of the mutation factor m. Generally, m_l = 0.1 and m_u = 0.9. A set of new solutions is obtained after the mutation operation is completed (Fig. 4).
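The adaptive factor's exact schedule is given by the equation above; as a stand-in, the sketch below assumes a linear decrease from m_u to m_l over the run (our assumption), and perturbs archive members with bounded Gaussian noise (also an assumption):

```python
import random

def adaptive_mutation_factor(t, max_iter, m_l=0.1, m_u=0.9):
    # Assumed schedule: decrease linearly from m_u to m_l across iterations
    return m_u - (m_u - m_l) * t / max_iter

def mutate_archive(archive, t, max_iter, lb, ub):
    """Perturb archive members when a random draw falls below the factor."""
    m = adaptive_mutation_factor(t, max_iter)
    out = []
    for sol in archive:
        if random.random() < m:
            # Gaussian perturbation clipped to the variable bounds
            sol = [min(ub, max(lb, x + random.gauss(0, 0.1 * (ub - lb))))
                   for x in sol]
        out.append(sol)
    return out
```

Early in the run, most archive members are perturbed (m near 0.9), which injects diversity; late in the run, few are (m near 0.1), which preserves the converged front.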

External archive
Zhang and Sanderson (2009) proposed an adaptive DE algorithm with an optimal external archive to solve MOPs. The external archive is a solution set, Arc, that acts as a storage unit to keep the historical PS obtained in each iteration.
When a new and better solution is found, the existing solution is replaced by the new non-dominated solution. Moreover, other non-dominated solutions are removed based on the crowding distance when the maximum capacity q is reached. Note that q denotes the maximum number of solutions that can be stored in the external archive, which has the same size as that of the population.

Fast non-dominated sorting
The fast non-dominated sorting method is integrated into the multi-objective p-metaheuristics algorithm for finding non-dominated solutions. Developed by Deb et al. (2002), it is used in NSGA-II to sort the Pareto optimal solutions according to their degree of dominance. The non-dominated solutions are assigned rank 1; after removing them, the solutions that become non-dominated form rank 2, and so on. The solutions are chosen based on their ranks, in order to preserve the quality of the solution base.
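The procedure can be sketched with the bookkeeping from Deb et al. (2002): each solution tracks the set of solutions it dominates and a counter of how many solutions dominate it (minimisation assumed):

```python
def fast_non_dominated_sort(objs):
    """Return fronts (lists of indices), rank 1 first, for minimisation."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(objs)
    S = [[] for _ in range(n)]   # indices each solution dominates
    counts = [0] * n             # how many solutions dominate each index
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)  # rank-1 front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:  # all its dominators already ranked
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]
```

For instance, for the bi-objective vectors [1, 4], [2, 2], [4, 1], [3, 3], [5, 5], the first three are mutually non-dominated (rank 1), [3, 3] is rank 2, and [5, 5] is rank 3.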

Crowding distance
The crowding distance is computed as follows (Deb et al. 2002):

CD_i = Σ_{j=1}^{n} (obj_j(S_{i+1}) − obj_j(S_{i−1})) / (obj_j^max − obj_j^min)

where q is the size of the solution set, i ∈ q, n is the number of objectives, obj_j(S_{i+1}) and obj_j(S_{i−1}) are the values of the j-th objective of the two neighbours of solution S_i, and obj_j^max and obj_j^min are the maximum and minimum values of the j-th objective in the solution set. Specifically, the crowding distance measures the distance between the two solutions adjacent to a solution on the same front; the smaller the crowding distance, the closer (more crowded) the adjacent solutions.
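A direct Python rendering of this computation; boundary solutions receive infinite distance so they are always retained, as in NSGA-II:

```python
def crowding_distance(objs):
    """Crowding distance for one front of objective vectors (Deb et al. 2002)."""
    q = len(objs)
    if q <= 2:
        return [float("inf")] * q
    dist = [0.0] * q
    n_obj = len(objs[0])
    for j in range(n_obj):
        order = sorted(range(q), key=lambda i: objs[i][j])
        fmin, fmax = objs[order[0]][j], objs[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary points
        if fmax == fmin:
            continue
        for k in range(1, q - 1):
            # Normalised gap between the two neighbours along objective j
            dist[order[k]] += (objs[order[k + 1]][j] - objs[order[k - 1]][j]) / (fmax - fmin)
    return dist
```

When trimming the external archive, the solutions with the smallest crowding distance (the most crowded ones) are the candidates for removal.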

Algorithm 2 Multi-objective Enhanced Harris' Hawk Optimiser
Input: The population size N and maximum iteration T
1: Initialise a set of individuals, the number of variables, and their lower and upper bounds
2: Initialise the archive
3: Calculate the fitness of the population and archive
4: while t ≤ T do
5:   Evaluate the fitness of each individual in the population
6:   Set the best individual as the location of the rabbit, X_rabbit
7:   for each search agent (X_i) do
8:     Calculate the escaping energy E and jumping strength J with Eqs. (4) and (9)
9:     Update the location of the individuals using Eqs. (2), (7), (10), (14) and (17)

Experimental setup and compared algorithms
A set of diverse benchmark problems (Yao et al. 1999; Digalakis and Margaritis 2001) is used to evaluate the proposed EHHO algorithm. The benchmark suite includes the following problems: unimodal (UM) functions F1-F7, multimodal (MM) functions F8-F23, and composition (CM) functions F24-F29. The UM functions evaluate the exploitative capabilities of optimisers in converging towards the global optimum.
The MM functions are designed to assess the diversification or exploration capabilities of optimisers. The characteristics and mathematical formulations of the UM and MM functions are presented in Tables 22, 23 and 24 in the Appendix. The CM functions are selected from the IEEE CEC 2005 competition (García et al. 2009), and include rotated, shifted and multimodal hybrid composition functions. Details of the CM functions are presented in Table 25 in the Appendix. These functions are useful for investigating the transition from exploration to exploitation and the capability of an optimiser to escape from local optima. EHHO is implemented in Python and executed on a computer with Windows 10 Professional 64-bit and 16 GB RAM. The population size and maximum number of iterations of EHHO are set to 30 and 500, respectively, as proposed in the original HHO (Heidari et al. 2019) and HHODE (Birogul 2019). The EHHO results are documented and compared with those of other algorithms based on the average performance over 30 independent runs. The results of the other algorithms are extracted from the original HHO publication (Heidari et al. 2019) and the HHODE publication (Birogul 2019). The average (AVG) and standard deviation (STD) of EHHO are compared with those of the following algorithms: GA (Simon 2008), BBO (Simon 2008), DE (Simon 2008), PSO (Simon 2008), FPA (Yang et al. 2014), GWO (Mirjalili et al. 2016), BAT (Yang and Gandomi 2012), FA (Gandomi et al. 2011), CS (Gandomi et al. 2013), MFO (Mirjalili 2015) and TLBO (Rao et al. 2012). These include commonly utilised optimisers, such as GA, DE, PSO and BBO, as well as newly emerged optimisers such as FPA, GWO, BAT, FA, CS, MFO and TLBO. Additionally, EHHO is compared with a modified version of HHO, i.e. HHODE, from Birogul (2019).
A statistical hypothesis test, i.e. the non-parametric sign test at the 95% confidence level, is used for performance assessment from the statistical perspective. The sign test results at the 95% confidence interval and the associated p-values are presented in Tables 6, 7, 8 and 9. The symbol "+/=/-" indicates a better/equal/worse performance between two compared algorithms. A tick indicates that the target algorithm differs significantly in performance from the compared algorithm, × indicates otherwise, and ≈ denotes no significant difference in performance between the two algorithms.
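The sign test used above reduces to an exact binomial test on win/loss counts. The sketch below is a generic illustration (not the paper's code); ties are assumed to have been excluded before counting.

```python
from math import comb

def sign_test(wins, losses, alpha=0.05):
    """Exact two-sided sign test on win/loss counts over benchmark functions.

    Under the null hypothesis both algorithms are equally likely to win each
    benchmark, so wins ~ Binomial(n, 0.5). Returns (p_value, significant).
    """
    n = wins + losses
    k = min(wins, losses)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n   # P(X <= k)
    p_value = min(1.0, 2.0 * tail)                          # two-sided
    return p_value, p_value < alpha
```

For example, winning 12 of 13 benchmarks is significant at the 95% level, whereas a 7-to-6 split is not.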
To compare an algorithm with others in a more general form, lexicographic ordering and average ranking are used. Lexicographic ordering was adopted to obtain the final ranking of all algorithms in the CEC 2009 competition (Yu et al. 2018). The average ranking method is used to reveal the accuracy and stability of an algorithm against other competitors. All algorithms are ranked based on their average results over the total number of benchmark functions, N. To measure the accuracy of the target algorithm, the mean rank μ_r is used, i.e.

μ_r = (1/n) Σ_{i=1}^{n} R_i

where R = {R_1, R_2, ..., R_n} is the rank set of the target algorithm. The lower the mean rank μ_r, the better the performance of the target algorithm.
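The average ranking procedure can be sketched as follows. This is an illustrative helper (the name `average_ranking` and the input layout are assumptions), with ties broken simply by sort order.

```python
def average_ranking(results):
    """Mean rank of each algorithm over all benchmark functions.

    results: dict mapping algorithm name -> list of average scores,
    one score per function (lower is better for minimisation benchmarks).
    Ties are broken by sort order for simplicity.
    """
    algos = list(results)
    n_funcs = len(next(iter(results.values())))
    rank_sum = {a: 0 for a in algos}
    for f in range(n_funcs):
        # rank 1 goes to the best (lowest) score on function f
        for r, a in enumerate(sorted(algos, key=lambda a: results[a][f]), start=1):
            rank_sum[a] += r
    return {a: rank_sum[a] / n_funcs for a in algos}
```

The algorithm with the smallest mean rank is reported as the overall best performer.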

Quantitative results of EHHO and discussion
By observing the quantitative results across the different classes of benchmark functions, the AVG and STD measures over 30 independent runs indicate the performance and stability of EHHO. The EHHO results versus those of other competitors on the UM and MM functions (F1-F13) in 30, 100, 500 and 1000 dimensions are tabulated in Tables 2, 3, 4 and 5. Table 8 shows the EHHO results versus those of competitors on the MM and CM functions. The best AVG and STD scores of EHHO are in bold. The statistical results for the UM, MM and CM benchmark functions are tabulated in Tables 6, 7, 8 and 9. The average ranking results are tabulated in Tables 7, 8, 9 and 10. From Table 2, EHHO generates superior results on F2, F4-F7 and F9-F13. From the statistical evaluation viewpoint, EHHO outperforms its competitors on all 30-dimensional UM functions at the 95% confidence level, as indicated by p < 0.05 in Table 6. Moreover, EHHO achieves better results than other models on F1, F2, F4-F6 and F9-F13 for the 100-dimensional UM functions, as shown in Table 3. As indicated in Table 6, EHHO performs significantly better than its competitors at the 95% confidence level. It can be noticed that EHHO yields the best results on 9 out of 13 500-dimensional benchmark problems, as presented in Table 4.
In addition, Table 6 shows that EHHO performs statistically better than its competitors, while achieving results comparable to those of HHODE/rand/1, HHODE/best/1, HHODE/current-to-best/2 and HHODE/best/2. From Table 5, EHHO shows superior performance, with 10 out of 13 best results on the 1000-dimensional benchmark problems. Compared with other models statistically, EHHO performs significantly better than its competitors at the 95% confidence level, with no statistically significant difference from HHODE/current-to-best/2. From the results, integrating DE helps achieve more diverse solutions, leading to the attainment of the global optimum solution. In summary, EHHO proves to be highly effective in solving the F4-F6 and F9-F13 benchmark functions. Furthermore, EHHO outperforms other algorithms in almost all instances, and achieves the global optimum solution for F9 and F11 in search spaces ranging from 30 to 1000 dimensions.
According to the results presented in Table 8, EHHO outperforms other models on several functions, including F14-F17, F19, F21-F22 and F25-F27. It also shows performance similar to other models on F23 and F29. However, EHHO performs relatively poorly on the hybrid composition functions F18, F20, F24 and F28. From Table 10, the average ranking results reinforce the findings from the statistical analysis, providing additional insights into the comparative performance of the algorithms on benchmark functions F14-F29. EHHO is inferior to HHODE/current-to-best/2, HHODE/rand/1, HHODE/best/1, HHODE/best/2 and HHO, but it outperforms GA, PSO, BBO, FPA, GWO, BAT, FA, CS, MFO, TLBO and DE. These findings provide valuable insights into the strengths and weaknesses of the EHHO algorithm on specific benchmark functions.
Based on the results, it can be seen that hybridising HHO with the DE method can enhance the diversity of solutions and prevent convergence to local optima. By incorporating a chaotic mutation rate in DE, the population diversity can be further improved while avoiding stagnation. Additionally, the ergodicity and randomness of the chaotic mutation help EHHO avoid premature convergence. The nonlinear escaping factor further enhances EHHO in transitioning from exploration to exploitation, enabling its attainment of the global optimum solution. Overall, EHHO outperforms the original HHO algorithm in solving SOPs. However, there are certain composition functions, such as F18, F20, F24 and F28, where the HHODE performance reported by Birogul (2019) is superior to that of EHHO.
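The idea of replacing the fixed DE mutation factor with a chaotic sequence can be sketched as below. This is a simplified illustration under stated assumptions: a logistic map is used as the chaotic generator (a common choice, though the paper's exact map is not reproduced here), and the function names are hypothetical.

```python
import random

def logistic_map(x):
    """One iteration of the logistic map, a simple chaotic sequence generator."""
    return 4.0 * x * (1.0 - x)

def de_best_1_chaotic(pop, best, cr=0.9, x0=0.7):
    """Sketch of DE/best/1 mutation and binomial crossover with a chaotic factor.

    Instead of a fixed mutation factor F, each mutant uses the next value of
    a logistic-map sequence, increasing the randomness of the step sizes.
    """
    dim, x = len(best), x0
    trials = []
    for target in pop:
        x = logistic_map(x)                            # chaotic mutation factor
        r1, r2 = random.sample(range(len(pop)), 2)     # two distinct donors
        mutant = [best[d] + x * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
        jrand = random.randrange(dim)                  # guarantee one mutant gene
        trials.append([mutant[d] if d == jrand or random.random() < cr
                       else target[d] for d in range(dim)])
    return trials
```

Because the logistic map is ergodic over (0, 1), the effective mutation factor sweeps a wide range of step sizes rather than staying fixed, which is the diversity effect discussed above.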

Runtime analysis
Table 11 presents the runtime (in seconds) of EHHO for different benchmark functions and dimensions (30D, 100D, 500D and 1000D), providing insights into its computational cost. From Table 11, functions F1, F4, F5, F6, F8, F9 and F10 have lower runtimes across different dimensions. On the other hand, functions F3, F7, F11, F12 and F13 experience significant increases in runtime in higher dimensions. This suggests that these functions may pose scalability issues, as the algorithm takes longer to reach optimal solutions in higher-dimensional spaces. It is therefore essential to evaluate the scalability of the EHHO algorithm and find ways to improve its performance on higher-dimensional optimisation problems.
From Table 12, functions F15, F16, F17, F18, F19 and F20 have shorter completion times, indicating that the algorithm can find satisfactory solutions for these functions rapidly. However, functions F14, F21, F22 and F23 take longer to complete, indicating that EHHO requires more computational effort to find optimal solutions for these functions.
In addition, Table 12 presents the average runtime for functions F24-F29, which comprise rotated, shifted and hybrid composition functions from the IEEE CEC 2005 competition (García et al. 2009). Function F29 has the shortest runtime, indicating that the algorithm finds satisfactory solutions for this function the fastest. F24 and F25-F27 have similar runtimes, with each run taking 47 s and 49 s, respectively. Function F28 takes the longest, suggesting that it has more difficult optimisation properties that slow down the convergence of EHHO (Table 13).
To obtain a comprehensive understanding of the computational efficiency of EHHO, it would be valuable to compare its runtimes with those of competing algorithms on the same benchmark functions and dimensions. Unfortunately, the authors of these algorithms did not disclose their computational times. Nevertheless, this study has employed the same maximum number of iterations as those published in Heidari et al. (2019) and Birogul (2019).

Experimental setup and compared algorithms
MO-EHHO is applied to high-dimensional bi-objective problems and scalable tri-objective problems, both without equality or inequality constraints, for performance evaluation. A total of 12 test functions are used, as follows: • ZDT benchmark functions ZDT1-ZDT4 and ZDT6 (Deb et al. 2002; Deb and Agrawal 1999) • DTLZ benchmark functions DTLZ1-DTLZ7 (Deb et al. 2005).

Performance metrics
Three common objectives for evaluating the performance of a multi-objective algorithm in solving MOPs are as follows: • minimise the distance between the true PF and the obtained PS • maximise the spread of the obtained PS along the true PF • maximise the extent of the obtained PS in covering the true PF. Based on these objectives, the following performance metrics are used to assess the efficacy of MO-EHHO. Convergence metric (γ): the distance between the true PF and the obtained PS (Deb et al. 2002), i.e.

γ = (Σ_{i=1}^{n} d_i) / n

Diversity metric (Δ): the degree of spread achieved by the obtained PS (Deb et al. 2002), i.e.

Δ = (d_min + d_max + Σ_{i=1}^{n−1} |d_i − d̄|) / (d_min + d_max + (n − 1) d̄)
Generational distance (GD): the distance between the true PF and the obtained PS (Van Veldhuizen and Lamont 1998), i.e.

GD = (Σ_{i=1}^{n} d_i^2)^(1/2) / n

The inverted generational distance (IGD) reverses this measure, averaging the distance from each point of the true PF to its nearest solution in the obtained PS.

As compared with the models in Yang et al. (2022), MO-EHHO achieves comparable IGD results to those of CMPMO-HHO, NSGA-III, NSGA-II, SPEA2, ANSGA-III and ar-MOEA, but outperforms the others at the 95% confidence level with p < 0.05. Table 16 summarises the ranking outcomes of all algorithms. MO-EHHO is ranked in the top position as the overall best performer in solving the ZDT and DTLZ problems. Note that the maximum number of function evaluations of MO-EHHO is lower than that in Yang et al. (2022), i.e. 50,000 versus 300,000, indicating the effectiveness of MO-EHHO in tackling MOPs.
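The GD and IGD metrics can be computed as in the sketch below. These are the common textbook forms; the exact normalisation used in the paper's experiments may differ, and the function names are illustrative.

```python
from math import dist, sqrt

def gd(obtained, true_pf):
    """Generational distance: closeness of the obtained set to the true PF.

    d_i is the Euclidean distance from each obtained point to its nearest
    true-PF point; GD = sqrt(sum d_i^2) / n for n obtained solutions.
    """
    n = len(obtained)
    return sqrt(sum(min(dist(p, t) for t in true_pf) ** 2 for p in obtained)) / n

def igd(obtained, true_pf):
    """Inverted generational distance: mean distance from each true-PF point
    to its nearest obtained solution (captures convergence and diversity)."""
    return sum(min(dist(t, p) for p in obtained) for t in true_pf) / len(true_pf)
```

A set that coincides with the true PF scores zero on both metrics; IGD additionally penalises sets that cover only part of the front.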

Comparison between MO-EHHO and other algorithms published in Liu et al. (2020)
The performance of MO-EHHO is evaluated using the ZDT and DTLZ benchmark functions, and the obtained results are compared with those published in Liu et al. (2020). In accordance with the experimental procedures in Liu et al. (2020) and Xiang et al. (2015), the maximum number of function evaluations is set to 50,000 for all problems, and the population size and external archive size are both set to 100. As such, the maximum number of iterations is 50,000/100 = 500 for each of the 30 independent runs. Only the IGD metric is used for performance evaluation, as per the results presented in Liu et al. (2020) and Xiang et al. (2015). The experiment is performed in a Python environment. From the analysis, MO-EHHO outperforms its competitors at the 95% confidence level in terms of IGD. The IGD metric evaluates both the convergence and diversity properties of an algorithm; hence, the solutions produced by MO-EHHO strike a good balance between convergence and diversity. Besides that, Table 18 depicts the ranking outcome of the algorithms on each metric, in which MO-EHHO ranks first in terms of γ, GD and IGD.
The mean IGD results of MO-EHHO over 30 independent runs are shown in Table 19. MO-EHHO is the best algorithm, producing the best scores on 10 out of 12 test problems. Table 20 presents the sign test outcomes of MO-EHHO against those of the other algorithms. MO-EHHO demonstrates superior performance compared with its competitors in a statistical context. From Table 21, MO-EHHO achieves the best ranking against its competitors. In short, MO-EHHO performs the best among the compared MOEA models.

Conclusions
In this study, we have proposed enhancements to the HHO algorithm, namely EHHO and MO-EHHO, to solve SOPs and MOPs, respectively. Both EHHO and MO-EHHO have been evaluated with 29 SOPs and 12 MOPs. Specifically, EHHO has been evaluated using a variety of single-objective benchmark problems, including unimodal, multimodal and composition functions. In addition, the sign test has been employed to draw statistical conclusions on the performance comparison between two algorithms at the significance level of α = 0.05. The average ranking method has also been adopted to reveal the accuracy and stability of an algorithm against other competitors. All algorithms have been ranked based on their average results over the total number of benchmark functions.
The DE scheme has been integrated into EHHO, while chaos theory has been utilised to formulate its mutation factor. It leverages the DE/best/1 mutation scheme, where "best" indicates that the non-dominated solution stored in the external archive is used, and "1" indicates that one difference vector is used. The nonlinear exploration factor contributes towards the diversification capability in the early exploration phase, which facilitates a smooth changeover from exploration to exploitation. Additionally, the incorporation of DE into EHHO and the mutation strategy in the population are useful in preventing EHHO from falling into local optima. Despite its strengths, EHHO struggles with certain benchmark problems involving higher dimensions and more complex search landscapes, in comparison with the HHODE variants. For MOPs, the non-dominated sorting strategy from NSGA-II has been employed in MO-EHHO, which is able to exploit the Pareto optimal solutions while preserving their diversity.

Fig. 2
Fig. 2 Proposed enhancements of the exploration and exploitation of EHHO

Table 1
Recent popular enhancement methods of HHO for solving SOPs and MOPs

Table 7
Final rankings of each algorithm in Heidari et al. (2019)

Table 14
Average results, standard deviation and ranking of MO-EHHO variants and other enhanced models in Yang et al. (2022) in terms of IGD for all test problems

Table 15
Statistical comparison of MO-EHHO and other enhanced models in Yang et al. (2022) with 95% confidence interval on each metric

Table 16
Ranking of each algorithm on the IGD metric against other enhanced models in Yang et al. (2022). Here, i is the index of a solution set; d_i is the Euclidean distance between the ith solution in the non-dominated set and the ith solution in the true PF; n is the number of solutions in the obtained PS and m is the number of solutions in the true PF. In addition, d_min and d_max are the minimum and maximum Euclidean distances of the extreme non-dominated solutions, respectively, in the objective space, and d̄ is the mean of all d_i.

Quantitative results of MO-EHHO and discussion

4.4.1 Comparison between MO-EHHO and other enhanced models in Yang et al. (2022)
MO-EHHO and the enhanced models presented in Yang et al. (2022) are compared using the ZDT and DTLZ functions. In accordance with the experimental procedure in Yang et al. (2022), the population size and external archive size are set to 100. The maximum number of function evaluations for ZDT is 30,000, i.e. the same as that in Yang et al. (2022). However, the maximum number of function evaluations for DTLZ is 300,000 in Yang et al. (2022), as compared with only 50,000 for MO-EHHO in this research. In this comparison, 18 algorithms are compared with MO-EHHO, including CMPMO-HHO, NSGA-II, SPEA2, NSGA-III, PESA-II and CMOPSO. The average results and standard deviations of MO-EHHO and those presented in Yang et al. (2022) are depicted in Table 14. MO-EHHO generates the best or second-best scores on 9 out of 12 test problems in terms of IGD. Table 15 presents the sign test results of MO-EHHO in terms of IGD as compared with those in Yang et al. (2022).

Table 17
From Table 17, MO-EHHO outperforms SPEA2, NSGA-II and MOABC in terms of both γ and GD. In addition, MO-EHHO achieves better results than those of MOGOA in terms of GD.

Table 20
Statistical comparison of MO-EHHO and several improved MOEAs in Liu et al. (2020) and Xiang et al. (2015) with 95% confidence interval on each metric. Two sets of well-known multi-objective benchmark functions, i.e. ZDT and DTLZ, have been employed to evaluate MO-EHHO. Four performance metrics, i.e. γ, Δ, GD and IGD, have been utilised to measure the convergence and diversity properties of MO-EHHO.

Table 23
Description of multimodal benchmark functions

Table 24
Description of fixed-dimension multimodal benchmark functions

Table 27
Average results, standard deviation and ranking of MO-EHHO variants and other well-known algorithms in Liu et al. (2020) in terms of γ for all test problems

Table 28
Average results, standard deviation and ranking of MO-EHHO variants and other well-known algorithms in Liu et al. (2020) in terms of Δ for all test problems

Table 30
Average results, standard deviation and ranking of MO-EHHO variants and other well-known algorithms in Liu et al. (2020) in terms of IGD for all test problems