Fast elitist ABC for makespan optimisation in interval JSP

This paper addresses a variant of the Job Shop Scheduling Problem with makespan minimisation where uncertainty in task durations is taken into account and modelled with intervals. A novel Artificial Bee Colony algorithm is proposed in which the classical layout is simplified, increasing the algorithm's speed and reducing the number of parameters to set up. We also build on the fundamental principles of exploration around a local solution and attraction to a global solution to improve diversity in the hive. The increase in speed and diversity allows us to include a Local Search phase to better exploit promising areas of the search space. A parametric analysis is conducted and the contribution of the new strategies is analysed. The results of the new approach are competitive with those obtained by previous methods in the literature, while requiring less runtime. The addition of Local Search improves the results even further, outperforming the best-known ones from the literature. An additional sensitivity study is conducted to assess the advantages of considering uncertainty and how increasing it affects the solutions' robustness.


Introduction
The job shop scheduling problem (JSP) is considered to be one of the most relevant scheduling problems. It consists in allocating a set of resources to execute a set of jobs under a set of given constraints, with the most popular objective in the literature being the minimisation of the project's execution timespan, also known as makespan. Solving this problem improves the efficiency of chain production processes, optimising the use of energy and materials Pinedo (2016) and having a positive impact on costs and environmental sustainability. However, in real-world applications, the available information is often imprecise. Interval uncertainty arises as soon as information is incomplete, and contrary to the case of stochastic and fuzzy scheduling, it does not assume any further knowledge, thus representing a first step towards solving problems in other frameworks Allahverdi et al. (2014). Moreover, intervals are a natural model whenever decision-makers prefer to provide only a minimal and a maximal duration, and obtain interval results that can be easily understood. Under such circumstances, interval scheduling makes it possible to concentrate on significant scheduling decisions and to produce robust solutions.
Contributions to interval scheduling in the literature are not abundant. In Lei (2012), a genetic algorithm is proposed for a JSP minimising the total tardiness with respect to job due dates, with both processing times and due dates represented by intervals. In Díaz et al. (2022), a different genetic algorithm is applied to the same problem, including a study of different interval ranking methods based on the robustness of the resulting schedules. A population-based neighbourhood search for an interval JSP with makespan minimisation is presented in Lei (2011). In Li et al. (2019), a hybrid between particle swarm and a genetic algorithm is used to solve a flexible JSP with interval processing times as part of a larger integrated planning and scheduling problem. More recently, a genetic algorithm is applied in Díaz et al. (2020) to the JSP with interval uncertainty minimising the makespan, and two different algorithms based on artificial bee colonies are proposed in Díaz et al. (2022) and Díaz et al. (2023) for the same problem.
Due to the complexity of job shop scheduling problems, metaheuristic search methods are especially suitable to solve them. In particular, Artificial Bee Colony (ABC) is a swarm intelligence optimiser inspired by the intelligent foraging behaviour of honeybees that has shown very competitive performance on the JSP with makespan minimisation. For instance, Wong et al. (2008) propose an evolutionary computation algorithm based on ABC that includes a state transition rule to construct the schedules. Taking some principles from Genetic Algorithms, Yao et al. (2010) present an Improved ABC (IABC) where a mutation operation is used for exploring the search space, enhancing the search performance of the algorithm. Later, Banharnsakun et al. (2012) propose an effective ABC approach based on updating the population using the information of the best-so-far food source. In Díaz et al. (2022), an elitism mechanism is introduced to increase diversity and solve an interval job shop problem with makespan minimisation. The same problem is tackled in Díaz et al. (2023) by introducing the seasonal behaviour of honeybees as part of the onlooker bee phase.
In the following, we extend the work presented in Díaz et al. (2022) to solve the interval JSP with makespan minimisation. We propose several improvements on the introduced Elite ABC method that aim at speeding up the algorithm while increasing its diversity. A Local Search is then included in the ABC to exploit the new diversity and obtain better results. The robustness study conducted in Díaz et al. (2022) is complemented with a new sensitivity analysis to assess the quality of solutions in scenarios of increasing uncertainty. The rest of the paper is organised as follows: the interval JSP is presented in Sect. 2; in Sect. 3 we describe the different components that make up the ABC algorithm addressing this problem; in Sect. 4 we compare these strategies and the best one is also compared with the state of the art; a sensitivity analysis is also included in Sect. 4.

The job shop problem with interval durations
The classical job shop scheduling problem consists of a set of resources $M = \{M_1, \ldots, M_m\}$ and a set of jobs $J = \{J_1, \ldots, J_n\}$. Each job $J_j$ is organised in tasks or operations $(o(j,1), \ldots, o(j,m_j))$ that need to be sequentially scheduled. We assume w.l.o.g. that tasks are indexed from 1 to $N = \sum_{j=1}^{n} m_j$, so we can refer to task $o(j,l)$ by its index $o = \sum_{i=1}^{j-1} m_i + l$ and denote the set of all tasks as $O = \{1, \ldots, N\}$. Each task $o \in O$ requires the uninterrupted and exclusive use of a machine $m_o \in M$ for its whole processing time $p_o$.
A solution to this problem is a schedule s, i.e. an allocation of starting times for each task, which, besides being feasible (all constraints hold), is optimal according to some criterion, in our case minimal makespan $C_{max}$.

Interval uncertainty
Following Lei (2011), the interval JSP (IJSP) with makespan minimisation requires two arithmetic operations: addition and maximum. Given two intervals $\mathbf{a} = [\underline{a}, \overline{a}]$ and $\mathbf{b} = [\underline{b}, \overline{b}]$, the addition is expressed as $[\underline{a} + \underline{b}, \overline{a} + \overline{b}]$ and the maximum as $[\max(\underline{a}, \underline{b}), \max(\overline{a}, \overline{b})]$. Also, given the lack of a natural order in the set of closed intervals, to determine the schedule with the "minimal" makespan we need an interval ranking method. For the sake of fair comparisons with the literature, we shall use the midpoint method: $\mathbf{a} \leq_{MP} \mathbf{b} \Leftrightarrow m(\mathbf{a}) \leq m(\mathbf{b})$, with $m(\mathbf{a}) = (\underline{a} + \overline{a})/2$. This method is used in Díaz et al. (2020, 2022, 2023) and it is equivalent to the ranking method used in Lei (2012) and Lei (2011). Notice that $m(\mathbf{a})$ coincides with the expected value of the uniform distribution on the interval, $E[\mathbf{a}]$.
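The two interval operations and the midpoint ranking above can be sketched as follows; the class and function names (`Interval`, `imax`, `mp_leq`) are illustrative, not from the paper.

```python
# Minimal sketch of the interval arithmetic used by the IJSP: intervals are
# (lower, upper) pairs; only addition, maximum and the midpoint ranking are needed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float  # minimal duration
    hi: float  # maximal duration

    def __add__(self, other):
        # [a_lo + b_lo, a_hi + b_hi]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def imax(self, other):
        # [max(a_lo, b_lo), max(a_hi, b_hi)]
        return Interval(max(self.lo, other.lo), max(self.hi, other.hi))

    def midpoint(self):
        # m(a) = (a_lo + a_hi) / 2, the expected value of a uniform duration
        return (self.lo + self.hi) / 2

def mp_leq(a, b):
    # midpoint ranking: a <=_MP b  iff  m(a) <= m(b)
    return a.midpoint() <= b.midpoint()

a, b = Interval(2, 6), Interval(3, 5)
print(a + b)         # Interval(lo=5, hi=11)
print(a.imax(b))     # Interval(lo=3, hi=6)
print(mp_leq(a, b))  # True: m(a) = m(b) = 4
```

Note that the midpoint ranking is only a total preorder: here both `mp_leq(a, b)` and `mp_leq(b, a)` hold although the intervals differ.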

Robustness on interval JSP
In a solution to the IJSP, the makespan value is not an exact value, but an interval. It is only after the solution is executed in a real scenario that the actual processing times of the tasks, $P^{ex} = \{p^{ex}_o \in [\underline{p}_o, \overline{p}_o] : o \in O\}$, become known. This is the idea behind the concept of $\epsilon$-robustness, first proposed in Bidot et al. (2009) for stochastic scheduling, and later adapted to the IJSP in Díaz et al. (2020). For a given $\epsilon \geq 0$, a schedule with makespan $\mathbf{C}_{max}$ is considered to be $\epsilon$-robust in a real scenario $P^{ex}$ if the relative error made by the expected makespan $E[\mathbf{C}_{max}]$ with respect to the makespan $C^{ex}_{max}$ of the executed schedule is bounded by $\epsilon$, that is:

$$\frac{|C^{ex}_{max} - E[\mathbf{C}_{max}]|}{E[\mathbf{C}_{max}]} \leq \epsilon.$$

Clearly, the smaller the bound $\epsilon$, the more robust the interval schedule is. This measure of robustness depends on a specific configuration $P^{ex}$ of task processing times obtained upon execution of the predictive schedule $s$.
In the absence of real data, as is the case with the usual synthetic benchmark instances for job shop, we may resort to Monte-Carlo simulations. We simulate $K$ possible configurations $P^k = \{p^k_o \in [\underline{p}_o, \overline{p}_o] : o \in O\}$, $k = 1, \ldots, K$, using uniform probability distributions to sample a duration for every task, and compute for each configuration the exact makespan $C^k_{max}$ that results from executing the tasks according to the ordering provided by $s$. Then, the average $\epsilon$-robustness of the predictive schedule across the $K$ possible configurations, denoted $\overline{\epsilon}$, can be calculated as:

$$\overline{\epsilon} = \frac{1}{K} \sum_{k=1}^{K} \frac{|C^k_{max} - E[\mathbf{C}_{max}]|}{E[\mathbf{C}_{max}]}.$$

This value provides an estimate of how robust the solution $s$ is across different processing-time configurations.
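The Monte-Carlo estimate can be sketched as below; `avg_epsilon_robustness` and the simulator hook `executed_makespan` are illustrative names, and the toy usage (a single machine where the executed makespan is simply the sum of durations) stands in for a real schedule execution.

```python
# Hedged sketch of the Monte-Carlo estimate of average epsilon-robustness:
# sample K duration configurations uniformly from the task intervals, execute
# the predictive task order on each, and average the relative errors of the
# expected makespan.
import random

def avg_epsilon_robustness(intervals, expected_makespan, executed_makespan,
                           K=1000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(K):
        # one realisation P^k: a crisp duration per task
        p_k = {o: rng.uniform(lo, hi) for o, (lo, hi) in intervals.items()}
        c_k = executed_makespan(p_k)  # makespan C^k_max of this scenario
        total += abs(c_k - expected_makespan) / expected_makespan
    return total / K

# toy usage: two tasks on one machine, expected makespan = sum of midpoints = 8
intervals = {0: (2, 6), 1: (3, 5)}
eps = avg_epsilon_robustness(intervals, 8.0, lambda p: sum(p.values()))
print(round(eps, 3))
```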
Fig. 1 Gantt chart representing a solution to an IJSP instance

The artificial bee colony algorithm
The Artificial Bee Colony (ABC) algorithm is a bio-inspired swarm metaheuristic for optimisation based on the foraging behaviour of honey bees. Since it was introduced in Karaboga (2005), it has been successfully adapted to a variety of problems Karaboga et al. (2014).
Typically, the ABC starts by generating and evaluating an initial hive H_0 of random food sources. The best food source Best is assigned to the hive's queen. Then, the algorithm iterates over a number of cycles, each consisting of three phases mimicking the behaviour of three types of foraging bees: employed, onlooker and scout. In the employed bee phase, each food source is assigned to one employed bee, who explores a new candidate food source between its own food source and the queen's. In the onlooker bee phase, each bee chooses a food source and tries to find a better one in its neighbourhood. At the end of each phase, the newly-found food source is evaluated. If it is equivalent to the queen's (i.e. the best food source found so far), it is discarded for the sake of maintaining diversity in the hive. Otherwise, if it is better than the food source of the bee that generated it, it replaces it. If it cannot improve the original food source, then its fs.numTrials counter is increased by one. In the scout bee phase, if the number of improvement trials of a food source fs.numTrials reaches a given threshold NT_max, the scout bee determines a new food source to replace the former one in the hive of solutions. Typically, this is done by replacing the exhausted food source with a randomly generated one. The algorithm terminates when a certain stopping condition is met. In Díaz et al. (2022), this condition is met after a number maxIter of consecutive iterations without finding a food source that improves the queen's.
In this general schema, diversity is mainly controlled by two mechanisms: modifying a random part of a food source to obtain a trial solution in the onlooker bee phase, or replacing a whole solution with a new one during the scout bee phase. Although it can be argued that these mechanisms help to avoid premature convergence, practical experiments have determined that this might not be the case for the IJSP Díaz et al. (2022). The employed and onlooker bee phases generate new solutions at each iteration, but these are included in the hive only if they improve the food source from which they were generated. This may lead to high selective pressure and facilitate getting trapped in local optima. When that happens, injecting a poor-quality randomly generated solution in the scout bee phase may not contribute enough to obtain better results. On the other hand, the current schema has up to three evaluation rounds, one at the end of each of the main phases. When diversity issues are present, most of these evaluations are useless, since new solutions will not be accepted, making the algorithm unnecessarily slow.
To increase diversity, an elitist selection mechanism was introduced in Díaz et al. (2022) so that the employed bee phase does not always choose the queen's food source to explore, but a solution from a set of promising ones. In Díaz et al. (2023), an ESABC method is proposed where the onlooker bee phase is redefined based on the seasonal behaviour of honeybees to further increase the exploration capabilities of ABC. However, this seasonal behaviour is somewhat similar to a Simulated Annealing method, so it increases the number of evaluations performed by the algorithm and therefore its overall complexity.
In Karaboga and Akay (2009), ABC is compared with other metaheuristics such as genetic algorithms (GA), differential evolution (DE) or particle swarm optimisation (PSO). In general, the exploration in these methods consists in altering all individuals of the population, or creating new ones, with a certain probability. The new set of solutions is then evaluated and some type of replacement strategy is applied. This allows the methods to explore first, and only then apply selective pressure through replacement. We propose to adapt this strategy to the setting of the ABC to increase the diversity of solutions while keeping a reasonable complexity. The general layout of our proposal is inspired by the structure of Particle Swarm Optimization Kennedy and Eberhart (1995) in the sense that food sources can be understood as the local best position of a bee, and the queen's source would be the global best. At each iteration, bees explore new food sources (solutions) influenced both by the best sources of the hive (global best) and their current food source (local best). Thus, the employed bee phase and the onlooker bee phase are fused into one exploration step. At the end of the cycle, all new solutions are evaluated. Each bee moves to its new food source only if it is different from the queen's and better than its local best. If the new food source is not accepted, the bee increases its counter of trials fs.numTrials and, when it exceeds the threshold NT_max, the bee moves to a random food source, emulating scout bees. Allowing the bees to move more freely before evaluation increases the population's diversity and reduces the complexity of the ABC, going from three evaluation phases per iteration to only one. Furthermore, having only one replacement phase decreases the overall count of improvement trials and the number of random solutions introduced in the scouting step. We refer to this new algorithm as Fast Elitist Artificial Bee Colony (fEABC for short). To exploit the diversity and speed of the new structure,
we propose to incorporate a new Local Search step before evaluation and conduct a further empirical evaluation of its advantages. The general structure of the algorithm is given in Algorithm 1. Each step is detailed in the following subsections.
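One fEABC cycle as described above can be sketched in the following self-contained skeleton. The helper names (`neighbour_move`, `recombine`), the `Bee` structure and the toy fitness (the sum of a 5-tuple of digits, standing in for the interval makespan) are our assumptions for illustration, not the paper's implementation.

```python
# Illustrative skeleton of one fEABC cycle: exploration happens first for every
# bee; evaluation and replacement happen once, at the end of the cycle.
import random
from dataclasses import dataclass

@dataclass
class Bee:
    source: tuple
    fitness: float
    num_trials: int = 0

def fEABC_cycle(hive, queen, evaluate, neighbour_move, recombine,
                elite_size=3, NT_max=5, rng=random):
    q_src, q_fit = queen                               # global best, kept apart
    elite = sorted(hive, key=lambda b: b.fitness)[:elite_size]
    for bee in hive:
        new_fs = neighbour_move(bee.source, rng)       # local exploration step
        g_best = rng.choice(elite)                     # Elite3: random among best N
        new_fs = recombine(new_fs, g_best.source, rng) # pull towards a global best
        fit = evaluate(new_fs)                         # the single evaluation phase
        if fit == q_fit:
            continue                                   # discard queen duplicates
        if fit < bee.fitness:                          # replace the bee's local best
            bee.source, bee.fitness, bee.num_trials = new_fs, fit, 0
            if fit < q_fit:
                q_src, q_fit = new_fs, fit             # new global best
        else:
            bee.num_trials += 1
            if bee.num_trials >= NT_max:               # exhausted: scout randomly
                bee.source = tuple(rng.randrange(10) for _ in bee.source)
                bee.fitness = evaluate(bee.source)
                bee.num_trials = 0
    return q_src, q_fit

# toy usage: minimise the sum of a 5-tuple of digits
rng = random.Random(1)

def neighbour_move(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + (max(0, s[i] - 1),) + s[i + 1:]

def recombine(a, b, rng):
    cut = rng.randrange(len(a) + 1)
    return a[:cut] + b[cut:]

hive = [Bee(src := tuple(rng.randrange(10) for _ in range(5)), sum(src))
        for _ in range(20)]
queen = min(((b.source, b.fitness) for b in hive), key=lambda p: p[1])
for _ in range(50):
    queen = fEABC_cycle(hive, queen, sum, neighbour_move, recombine, rng=rng)
print(queen)
```

Keeping the queen as a separate (source, fitness) pair mirrors the fact that scouting may replace a bee's food source without ever degrading the best solution found so far.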

Codification and initialization
We adopt the codification strategy from Díaz et al. (2020), where solutions are encoded using permutations with repetition Bierwirth (1995). Each solution s is represented by its task processing order π, but each operation o(i, j) in π is replaced by its job number i. For example, for a problem with n = 3 jobs and m = 2 machines, a schedule with π = (o(1,1), o(2,1), o(1,2), o(3,1), o(3,2), o(2,2)) is encoded as (1, 2, 1, 3, 3, 2). To decode a solution, each value i in the permutation is replaced by the j-th task of that job, where j is the number of times the job has appeared so far in the permutation (e.g. the second time the value 1 appears, it refers to task o(1, 2)). To build a schedule from the permutation, we consider two decoding strategies. The strategy described in Sect. 2.1 can be seen as an adaptation to intervals of the concept of Semi-active Schedule Generation Scheme, or Semi-active SGS, introduced in (1995); Palacios et al. (2014). The second strategy, an Insertion SGS, produces active schedules instead. The set of active schedules is smaller than the set of semi-active schedules and both are guaranteed to contain an optimal solution. This can be seen as an advantage, since reducing the search space makes it faster to navigate, but it can also decrease diversity in metaheuristics working on that space. An empirical analysis is needed to find the best option.
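The decoding step and an interval-valued semi-active schedule builder can be sketched as follows; intervals are (lo, hi) pairs, and the 2-job, 2-machine instance data at the end is invented purely for illustration.

```python
# Sketch of permutation-with-repetition decoding and a semi-active schedule
# builder for interval durations.
def decode(chromosome):
    # each occurrence of job i refers to the next unscheduled task of that job
    count = {}
    order = []
    for job in chromosome:
        count[job] = count.get(job, 0) + 1
        order.append((job, count[job]))  # task o(job, l): l-th task of the job
    return order

def semi_active_makespan(chromosome, machine, duration):
    # machine[(j, l)] -> machine id; duration[(j, l)] -> (lo, hi) interval
    iadd = lambda a, b: (a[0] + b[0], a[1] + b[1])
    imax = lambda a, b: (max(a[0], b[0]), max(a[1], b[1]))
    job_end, mach_end, cmax = {}, {}, (0, 0)
    for task in decode(chromosome):
        j, _ = task
        # start as soon as both the job and machine predecessors are done
        start = imax(job_end.get(j, (0, 0)), mach_end.get(machine[task], (0, 0)))
        end = iadd(start, duration[task])
        job_end[j] = mach_end[machine[task]] = end
        cmax = imax(cmax, end)
    return cmax

print(decode([1, 2, 1, 3, 3, 2]))
# [(1, 1), (2, 1), (1, 2), (3, 1), (3, 2), (2, 2)]

# toy 2-jobs x 2-machines instance
machine = {(1, 1): 0, (1, 2): 1, (2, 1): 1, (2, 2): 0}
duration = {(1, 1): (2, 4), (1, 2): (1, 3), (2, 1): (2, 2), (2, 2): (3, 5)}
print(semi_active_makespan([1, 2, 1, 2], machine, duration))  # (5, 9)
```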
To generate an initial hive H_0 for the algorithm, a set of food sources is created by randomly generating permutations with repetition that are feasible for the problem. These permutations are later decoded and evaluated using one of the described SGSs. When comparing the quality of two different food sources, the fitness function is used. Given two food sources fs and fs′ encoding two schedules s and s′ respectively, we consider that fs is better than fs′ if $\mathbf{C}_{max}(s) \leq_{MP} \mathbf{C}_{max}(s')$.

Exploration strategy
At each iteration, each bee begins by exploring the neighbourhood around its currently-assigned food source fs. To generate new food sources in the surroundings of fs, a small change is performed using one of the following operators for permutations: Swap, Inversion or Insertion. Given the small magnitude of the changes, it is reasonable to expect that new solutions do not differ much from fs in terms of makespan but provide enough of a difference to increase diversity while maintaining the average quality of the population.
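Possible implementations of the three moves, acting on permutations with repetition, are sketched below; the index choices in the usage example are fixed for clarity, whereas the algorithm would pick them at random.

```python
# The three neighbourhood operators named above, as pure functions on lists.
def swap(p, i, j):
    # exchange the genes at positions i and j
    p = list(p)
    p[i], p[j] = p[j], p[i]
    return p

def inversion(p, i, j):
    # reverse the segment p[i..j]
    return p[:i] + p[i:j + 1][::-1] + p[j + 1:]

def insertion(p, i, j):
    # remove p[i] and reinsert it at position j
    p = list(p)
    v = p.pop(i)
    p.insert(j, v)
    return p

p = [1, 2, 1, 3, 3, 2]
print(swap(p, 0, 3))       # [3, 2, 1, 1, 3, 2]
print(inversion(p, 1, 4))  # [1, 3, 3, 1, 2, 2]
print(insertion(p, 0, 3))  # [2, 1, 3, 1, 3, 2]
```

All three moves preserve the multiset of job numbers, so every neighbour is again a feasible permutation with repetition.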
After moving to a neighbouring food source new_fs, the bee begins the exploration towards the best food sources known by the hive. In the classical ABC, the best food source is selected at the beginning of the iteration and is later used by all employed bees. In this work, each bee selects a food source gBest to move towards. Always selecting the best food source in the hive (the queen's) can lead to a shorter execution time derived from the lack of diversity in the solution bank Banharnsakun et al. (2012). Two alternatives were tested in our preliminary work in Díaz et al. (2022) to avoid this issue. Elite2 consists of selecting the best food source among the group of sources with the highest number of improvement trials García-Álvarez et al. (2018). On the other hand, Elite3 selects at random one of the best N food sources existing at the time, where N is a configurable value. After an experimental study, the latter appeared to be the most promising strategy. Moreover, being able to configure the size of that set allows us to balance exploration and exploitation. Therefore, we choose Elite3 as the selection strategy for each bee in our method.
Once the bee has selected its global best gBest, it applies a recombination operator to move from new_fs to gBest, obtaining a new source new′_fs containing information from both solutions. This focuses the exploration towards more promising areas of the search space. We test three different recombination operators especially tailored to Job Shop Scheduling Problems: Job-Order Crossover (JOX) Ono et al. (1996), Generalised Order Crossover (GOX) Bierwirth (1995) and Precedence Preservative Crossover (PPX) Bierwirth et al. (1996). Only after the bee has explored both its neighbouring food source and a solution towards the hive's best is the new food source new′_fs evaluated.
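As an example of such a recombination, a common reading of JOX for permutations with repetition is sketched below: the genes of a chosen job subset keep their positions from the first parent, and the remaining positions are filled with the other genes in second-parent order (GOX and PPX follow the same spirit with different inheritance rules). The function name and the fixed job subset are illustrative.

```python
# Hedged sketch of Job-Order Crossover (JOX) on permutations with repetition.
def jox(p1, p2, kept_jobs):
    # keep the genes of kept_jobs at their positions in parent 1
    child = [j if j in kept_jobs else None for j in p1]
    # fill the gaps with the remaining genes, in the order they appear in parent 2
    fillers = iter(j for j in p2 if j not in kept_jobs)
    return [g if g is not None else next(fillers) for g in child]

p1 = [1, 2, 1, 3, 3, 2]
p2 = [3, 1, 2, 2, 1, 3]
print(jox(p1, p2, {1}))  # [1, 3, 1, 2, 2, 3]
```

Because both parents contain each job number the same number of times, the child is again a feasible permutation with repetition.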

Local search
The diversity derived from the previous steps, and the reduction in the number of evaluations per iteration, create an opportunity to include more exploitation-driven strategies such as Local Search. Local search techniques focus on exploitation to offer further improvements of solutions resulting from schedule generation heuristics. In our context, after the exploration phase of the bee is completed and a new solution new′_fs is evaluated, the bee may decide to carry out an intensive search in the vicinity of the new food source before moving on to the next iteration.
We take the neighbourhood structure defined in Van Laarhoven et al. (1992) as reference. There, a neighbour is generated by reversing a critical arc in the solution graph G(s) representing schedule s. This is a graph where each task is represented as a node, and there is an arc from node x to node y if and only if $x = PJ_y$ or $x = PM_y$, i.e. x is the predecessor of y in its job or on its machine. Additionally, there are two dummy nodes, 0 and E, such that there is an arc from 0 to the first task of each job, and also from the last task of each job to E. Each arc (x, y) is labelled with the processing time $p_x$. A critical path in G(s) is the longest path from 0 to E, and its length determines the makespan. All arcs that belong to a critical path are called critical arcs. In González Rodríguez et al. (2008), this idea is adapted and extended to the fuzzy JSP by using three parallel graphs, where the arcs of each one are labelled with one of the components of the Triangular Fuzzy Numbers (TFN). Within this neighbourhood structure, all neighbours are feasible and the connectivity property holds. For our algorithm, we take that idea and adapt it to the framework of interval uncertainty by using two parallel graphs $G_1$, $G_2$ to represent each solution. The former labels the arcs with the lower bounds of the processing times and the latter with the upper bounds. Therefore, critical paths in $G_1$ and $G_2$ determine $\underline{C}_{max}$ and $\overline{C}_{max}$ respectively. We define our neighbourhood as the set of solutions that result from reversing an arc (x, y) that is critical in $G_1$ or $G_2$ (or both).
Given that the aim is to maintain good solution diversity in our solving method, we use a simple hill-climbing algorithm to guide the search. In this approach, neighbours of the current solution are explored in random order until we find one that improves the current solution. That neighbour becomes the new current solution, and the process is repeated until a solution with no improving neighbours is found. This method is among the fastest in the family of Local Search, since it does not necessarily evaluate all neighbours of each solution, and it does not apply too much exploitation, thus helping us improve our solutions without losing much diversity.
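The first-improvement hill-climbing loop can be sketched generically as follows; `neighbours()` stands in for the critical-arc reversal neighbourhood, and the toy usage (minimising x² over the integers with ±1 moves) is only for illustration.

```python
# Generic first-improvement hill climbing: visit neighbours in random order,
# accept the first improving one, stop when no neighbour improves.
import random

def hill_climb(solution, fitness, neighbours, rng=random):
    current, f_cur = solution, fitness(solution)
    improved = True
    while improved:
        improved = False
        cands = neighbours(current)
        rng.shuffle(cands)              # random exploration order
        for cand in cands:
            f = fitness(cand)
            if f < f_cur:               # first improvement found
                current, f_cur = cand, f
                improved = True
                break                   # restart from the new solution
    return current, f_cur

# toy usage: minimise x^2 over the integers with +-1 moves
best, fbest = hill_climb(7, lambda x: x * x, lambda x: [x - 1, x + 1],
                         random.Random(0))
print(best, fbest)  # 0 0
```

Note that each accepted move only requires evaluating neighbours up to the first improving one, which is what keeps this scheme cheap compared with best-improvement descent.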

Scouting and replacement
After each bee has found and evaluated a new food source, and Local Search has been applied to it if that option is enabled, it shares the new solution with the rest of the hive. If the new food source is equivalent to the queen's (i.e. the best food source found so far), it is discarded for the sake of maintaining diversity in the pool. Otherwise, if it improves the food source currently assigned to the bee, the bee moves to the new food source for the upcoming iteration. Similarly, if it is better than the best food source found so far, it replaces it and is assigned to the queen. On the other hand, if it cannot improve the current food source of the bee, the number of improvement trials fs.numTrials of the food source is increased by one.
If a food source reaches the maximum number of improvement trials NT_max, it is discarded and the bee is in charge of finding a replacement. In this case, a random solution is generated following the same criteria as in Sect. 3.1 and the bee is assigned to it.

Experimental results
In this section, the proposed fast Elite ABC algorithm (fEABC) is evaluated and compared with state-of-the-art methods. Firstly, a parametric tuning is carried out to find the best setup for the algorithm. Once found, it is compared to the best-known methods from the literature for the Interval JSP. Finally, a sensitivity analysis is conducted to assess the behaviour of the algorithm on instances with different amounts of uncertainty. We evaluate our method over 12 instances from the literature Díaz et al. (2020), namely FT10 (10 × 10), FT20 (20 × 5), La21, La24, La25 (15 × 10), La27, La29 (20 × 10), La38, La40 (15 × 15), ABZ7, ABZ8, and ABZ9 (20 × 15), where values in brackets denote the instance size (n × m). All experiments are done using a C++ implementation on a PC with an Intel Xeon Gold 6132 processor at 2.6 GHz and 128 GB RAM running Linux (CentOS v6.10). For every experiment, we consider 30 runs of the method on each instance, so the resulting data are representative of the method's performance.

Parameter setup
For the parameter setup, we perform two different tuning processes depending on whether Local Search is used. To differentiate them, we refer to the variant with Local Search as fEABC_LS, while we use simply fEABC for the one without Local Search. The stopping criterion is set in both cases to maxIter = 25 consecutive iterations without improving the best solution found so far, and the population size is set to 250 individuals according to the results obtained in Díaz et al. (2022) for ABC_E3. For the remaining parameters, the following values are tested: We begin the parameter tuning using a default setup with the values highlighted in bold in the list. Then we follow a sequential process where we select a parameter and test all its possible values. Once the best value for that parameter is found, it is fixed and the process is repeated until all parameters have been established. Table 1 displays the best resulting configuration for each variant.
Regarding the use of the Semi-active SGS or the Insertion SGS, our results show that using an insertion strategy, and thus moving in the search space of active schedules, is better in general. In fact, the best setup for fEABC using the Insertion SGS obtains makespan values that are 7.2% better on average than those obtained with the best setup using the Semi-active SGS. When including Local Search, using semi-active schedules brings more diversity, which could potentially benefit the exploitation of LS. However, this is not the case, and using the Insertion SGS still gets results that are 5.0% better than using the Semi-active SGS.

Our first target is to assess whether the new method increases the population's diversity enough to allow the algorithm to converge over more iterations and reach better areas of the search space. To do so, in Table 2 we compare fEABC without Local Search with the GA and the ABC_E3 method that defines the starting point for this work. For each instance, the best-known Lower Bound (LB) for the expected makespan is reported Díaz et al. (2022). For each method, the table displays the Relative Error (RE) with respect to LB of the expected makespan of the best solution obtained in 30 runs, together with the average relative error (standard deviation in brackets) among those runs and the average runtime in seconds. Best average values are highlighted in bold. We can see that, on average, fEABC obtains the best results in 10 out of 12 instances. Not only that, but on average the relative errors obtained by fEABC are 7.2% better than those obtained with ABC_E3, and 39.1% better than those of the GA.
Regarding runtime, despite the reduction in the number of evaluations in the proposed method, it takes 16% longer on average to converge than ABC_E3. This is an expected result, since the target is to increase diversity so that the algorithm does not get easily stuck in local optima and explores further into the search space. If we analyse the speed of the algorithms per iteration, we see that an iteration of fEABC is actually 9% faster than an iteration of ABC_E3, but fEABC is capable of iterating for longer before meeting the stopping criterion. This is illustrated in Fig. 2, where we can see the evolution over 200 iterations of the average expected makespan in 30 runs on instance La29. We can see how ABC_E3 quickly finds good-quality solutions, but then gets stuck in local optima. On the other hand, fEABC focuses more on exploration in the early iterations, which then allows it to converge to better solutions than ABC_E3 in the long term.
In Díaz et al. (2023), ESABC incorporates a Simulated Annealing-based strategy to improve the population's diversity, obtaining the best-known results for the IJSP at the cost of increasing the algorithm's runtime. In Table 3 we compare both fEABC and fEABC_LS with this method. First, we observe that fEABC has a very similar behaviour to ESABC in terms of average relative errors. We conduct a statistical examination to detect whether there is a significant difference between them. If the samples on each instance pass the Shapiro-Wilk test of normality, an Analysis of Variance (ANOVA) is executed, followed by Tukey's Honest Significant Difference test to display the results of all pairwise comparisons within the tested groups. If the test of normality fails, a Kruskal-Wallis rank sum test is performed, followed by a multiple comparison to identify which groups differ. The tests show that there is a significant difference between ESABC and fEABC in only 1 of the 12 instances. However, the runtime of fEABC is 13.2% shorter than that of ESABC. That is, fEABC obtains very similar results in less time than ESABC, which is a significant achievement taking into account that ESABC incorporates a Simulated Annealing-like technique for diversity and exploitation. With fEABC_LS we try to invest the time reduction of fEABC in exploitation. The behaviour of all the different methods can be better appreciated in Fig. 2. We can observe how fEABC_LS converges better than any of the other methods, including ESABC, due to its ability to balance the exploration and exploitation of the search space, being less likely to get trapped in local optima. Furthermore, fEABC shows a very similar behaviour to ESABC while being faster, as shown in Table 3.

Sensitivity analysis
Finally, we carry out a sensitivity analysis to determine whether taking uncertainty into account during the optimisation process is a beneficial effort in the face of increasing uncertainty. We consider a new version of fEABC_LS, denoted fEABC_LS^C, where the duration of each task is taken as the midpoint of its interval, that is, uncertainty is not taken into account during the optimisation process. In this setting, makespan values obtained by fEABC_LS are intervals, but makespan values obtained by fEABC_LS^C are crisp, so a straightforward comparison is not fair. Instead, we evaluate the performance of the obtained solutions in terms of their robustness over K = 1000 different configurations (see Sect. 2.2). To also evaluate their robustness in more uncertain environments, we generate two new versions of each instance where the interval widths are respectively enlarged by 20% and 40%. The changes are applied symmetrically on both sides of the intervals to maintain the significance of the midpoint. If the increase in the width of an interval $\mathbf{p}_o$ would result in a negative bound, then the interval $[0, \underline{p}_o + \overline{p}_o]$ is taken instead.
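The symmetric widening just described can be sketched as follows; the function name `widen` is illustrative. Note that the fallback interval $[0, \underline{p}_o + \overline{p}_o]$ has the same midpoint as the original, which is what keeps the comparison meaningful.

```python
# Sketch of the symmetric interval widening used in the sensitivity study:
# the midpoint is preserved while the width grows by a factor (1 + delta);
# if the lower bound would become negative, [0, lo + hi] (same midpoint)
# is used instead.
def widen(lo, hi, delta):
    half = (hi - lo) * (1 + delta) / 2  # half of the enlarged width
    mid = (lo + hi) / 2                 # midpoint, kept fixed
    if mid - half < 0:
        return (0.0, lo + hi)           # clamp at zero, keeping the midpoint
    return (mid - half, mid + half)

print(widen(10, 20, 0.20))  # (9.0, 21.0)
print(widen(1, 9, 0.40))    # width 8 -> 11.2 would go negative: (0.0, 10)
```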
Table 4 shows the $\overline{\epsilon}$ values of the solutions obtained by fEABC_LS, considering interval processing times, and fEABC_LS^C, considering only the midpoint of the intervals, over the three sets of instances: the original ones (+0% in the table) and the two new versions (+20% and +40% in the table). The results show that fEABC_LS finds the most robust solutions, even when the size of the intervals is expanded by 20% and 40%. In fact, the solutions obtained by fEABC_LS have better robustness values over the scenarios with an increase of 20% than those obtained by fEABC_LS^C on the original instances. As expected, in both cases the robustness deteriorates as uncertainty increases, but there is a clear difference between incorporating uncertainty in the optimisation or not. For instance, when the intervals are increased by 20% and 40%, the values of the solutions obtained with fEABC_LS get 10.68% and 29.62% worse, whereas for fEABC_LS^C the values become 16.37% and 42.50% worse respectively. This is better illustrated in Fig. 3. Each graphic contains a histogram with the K = 1000 realisations of the best solution obtained by fEABC_LS and the best one from fEABC_LS^C on the different variants of instance La25. Red lines depict the predictive values: $C_{max}$ when using fEABC_LS^C and $E[\mathbf{C}_{max}]$ when using fEABC_LS. In the latter case, blue dotted lines show the interval makespan bounds. If we compare the graphics for the original La25 instance, we can see how the red line for fEABC_LS lies well inside the histogram, while in the case of fEABC_LS^C it is more to the left, showing that the solution in this case is quite optimistic and real executions tend to have a higher makespan. When uncertainty increases, the histograms in both cases tend to spread towards the right side of the plot. However, with the fEABC_LS solution, the red line is still well inside the histogram, showing that it is a better predictor than in fEABC_LS^C, where it remains on the left. Moreover, in the case of fEABC_LS^C, real executions tend to move further to the right of the graphic than with fEABC_LS, starting to accumulate around 1050 in La25+40%.

Conclusions
We have considered the IJSP, a version of the JSP that models the uncertainty on task durations appearing in real-world problems using intervals. In Díaz et al. (2022) we proposed an ABC algorithm tailored to this problem. In that study, diversity issues were identified and a new selection mechanism, Elite3, was proposed to tackle them. In this work, we extend that ABC by including new diversity strategies and modifying the general structure of the algorithm to reduce the number of unnecessary evaluations. Exploration is encouraged more before reaching the evaluation and replacement phases, while the number of evaluation phases is reduced from three to one. Moreover, the number of parameters to set up in the algorithm is greatly reduced, making it easier to tune for different environments.
A parametric analysis showed that using semi-active schedules brings in general more diversity to the population, but lacks sufficient exploitation, so using an insertion SGS capable of generating active schedules provides better results overall. The proposed solving method compared favourably with its previous version ABC_E3 and obtained results similar to the best method in the IJSP literature while using significantly less time. The reduction in runtime and the increase in diversity allowed us to hybridise our method with a Hill Climbing algorithm. As expected, the runtime increases, but the improvement in solution quality outweighs the time increase and leads the algorithm to the best results for the IJSP, outperforming all previously published methods.
A sensitivity analysis was also performed to assess the robustness of the obtained solutions in environments with larger amounts of uncertainty, and to evaluate the advantages of considering the uncertainty during the optimisation process. The results showed that, in that case, the robustness of the obtained solutions is much better than that of solutions obtained when solving the problem without taking the uncertainty into account.
For every task o ∈ O, let s_o(π) and c_o(π) denote respectively the starting and completion times of o, let PM_o(π) and SM_o(π) denote the predecessor and successor tasks of o in the machine m_o according to π, and let PJ_o and SJ_o denote the tasks preceding and succeeding o in its job. Then the starting time of o is given by s_o(π) = max(s_{PJ_o} + p_{PJ_o}, s_{PM_o(π)} + p_{PM_o(π)}), and the completion time by c_o(π) = s_o(π) + p_o. The makespan is computed as the completion time of the last task to be processed according to π, thus C_max(π) = max_{o∈O} {c_o(π)}. If there is no possible confusion regarding the processing order, we may simplify notation by writing s_o, c_o and C_max.
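The start- and completion-time recursion above can be sketched directly in code. This is a minimal illustration with crisp durations (the interval case applies the same recursion to both interval bounds); `decode`, `machine` and `job_pred` are hypothetical names, not the paper's implementation.

```python
def decode(order, machine, job_pred, durations):
    """Given a total processing order of tasks, compute each task's
    completion time via  s_o = max(c_{PJ_o}, c_{PM_o}),  c_o = s_o + p_o,
    and the makespan  C_max = max_o c_o."""
    completion = {}    # c_o for every task scheduled so far
    machine_free = {}  # completion time of the last task on each machine
    for o in order:
        s = max(completion.get(job_pred.get(o), 0.0),  # job predecessor
                machine_free.get(machine[o], 0.0))     # machine predecessor
        completion[o] = s + durations[o]
        machine_free[machine[o]] = completion[o]
    return completion, max(completion.values())
```

For instance, with two jobs J1 = (a on M1, b on M2) and J2 = (c on M2, d on M1) and the order (a, c, b, d), task b waits for a (its job predecessor) and task d waits for the machine predecessor a on M1.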
To the best of our knowledge, the most successful algorithms in the literature for solving the Interval JSP are the genetic algorithm (GA) from Díaz et al. (2020), the ABC_E3 from Díaz et al. (2022), and the more recent ESABC from Díaz et al. (2023).

Fig. 2 Evolution of expected makespan over 200 iterations of GA, ABC_E3, fEABC, ESABC and fEABC_LS on instance La29

As in Díaz et al. (2020), uncertainty in the processing time of tasks is modelled using closed intervals. Therefore, the processing time of task o ∈ O is represented by an interval p_o = [p̲_o, p̄_o], where p̲_o and p̄_o are the available lower and upper bounds for the exact but unknown processing time p_o.
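Interval durations propagate through a schedule with componentwise interval arithmetic. A minimal sketch, assuming the standard interval extensions of addition and maximum used in the interval scheduling literature (the function names are ours):

```python
def iv_add(a, b):
    """[a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]"""
    return (a[0] + b[0], a[1] + b[1])

def iv_max(a, b):
    """max([a1, a2], [b1, b2]) = [max(a1, b1), max(a2, b2)]"""
    return (max(a[0], b[0]), max(a[1], b[1]))

def iv_midpoint(a):
    """Midpoint of an interval: one common scalar summary of an interval
    makespan (an assumption here, not necessarily the paper's E[C_max])."""
    return (a[0] + a[1]) / 2
```

With these operations, the recursion s_o = max(c_{PJ_o}, c_{PM_o}) and c_o = s_o + p_o can be evaluated on intervals directly, yielding an interval makespan with a lower and an upper bound.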
Algorithm 1 Schema of the fEABC algorithm

In an Appending SGS, as used in Palacios et al. (2014) for the JSP with fuzzy durations, the starting time s_o of each task o corresponds to its Earliest feasible Appending Starting time (ESA_o), and the resulting schedule is said to be semi-active, based on the definition from Sprecher et al. (1995). In an Insertion SGS, the starting time s_o of each task o in π is calculated as its Earliest feasible Insertion Starting time (ESI_o). Let k = m_o be the machine where o needs to be processed, PJ_o the task preceding o in its job, and σ = (σ(1,k), ..., σ(g_k,k)) the sequence of tasks already scheduled in machine m_o. A feasible insertion position q, 0 ≤ q < g_k, for o verifies that max{c_σ(q,k), c_PJ_o} + p_o ≤ s_σ(q+1,k), where the inequality must hold for both the lower and upper bounds of the intervals. If such a position exists, ESI_o = max{c_σ(q*,k), c_PJ_o}, where q* is the smallest feasible insertion position. If there is no feasible insertion position, then ESI_o = ESA_o. The schedules that can be obtained with this decoding mechanism fall under the definition of active schedules given in Sprecher et al. (1995).
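The insertion test above can be sketched as follows, again with crisp times for readability (for intervals, the fit condition must hold in both bounds). The function and argument names are hypothetical:

```python
def earliest_insertion_start(p_o, c_job_pred, machine_seq):
    """Earliest feasible insertion starting time (ESI) of task o.
    machine_seq holds (start, completion) pairs of the tasks already
    scheduled on o's machine, in processing order.  The gap before
    position q is feasible when o, started after both the previous task
    on the machine and o's job predecessor, finishes no later than the
    start of the task currently at position q."""
    for q, (next_start, _) in enumerate(machine_seq):
        prev_completion = machine_seq[q - 1][1] if q > 0 else 0.0
        start = max(prev_completion, c_job_pred)
        if start + p_o <= next_start:  # o fits in this gap
            return start               # smallest feasible position wins
    # no feasible gap: fall back to appending after the last task (ESA)
    last_completion = machine_seq[-1][1] if machine_seq else 0.0
    return max(last_completion, c_job_pred)
```

Scanning gaps from the front guarantees that the smallest feasible insertion position q* is chosen, matching the definition of ESI_o above.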

Table 1
Parameter setup for each variant of fEABC

Table 2
Relative error (%) w.r.t. LB obtained by 30 runs of GA, ABC_E3 and fEABC, and average runtime in seconds

Table 3
Relative error (%) w.r.t. LB obtained by 30 runs of ESABC, fEABC and fEABC_LS, and average runtime in seconds. Bold indicates the best average relative error per instance among the three compared methods

Table 4
Average values (×1000) for fEABC^C_LS and fEABC_LS when increasing the processing times' interval width by +20% and +40% (standard deviation in brackets)

Fig. 3 Histograms of C_max^ex obtained with the best solutions from fEABC^C_LS and fEABC_LS on K = 1000 configurations of instances La25, La25+20% and La25+40%