Introduction

Scheduling is a research field that has grown significantly in recent years. Scheduling problems arise in all economic sectors, from computer engineering to industrial production and manufacturing. The job shop scheduling problem (JSP), which is among the hardest combinatorial optimization problems (Sonmez and Baykasoglu 1998), is a branch of the industrial production scheduling problems.

The flexible job shop scheduling problem (FJSP) is a generalization of the classical JSP in which each operation may be processed on one machine out of a set of alternative machines. Hence, the FJSP is more computationally difficult than the JSP: in addition to the operation sequencing problem, it presents the further difficulty of assigning each operation to one of the available machines. This problem is known to be strongly NP-hard even if each job has at most three operations and there are only two machines (Garey et al. 1976).

To solve this problem, Pinedo (2002) developed a set of exact algorithms limited to instances with 20 jobs and 10 machines. Birgin et al. (2014) presented a mixed integer linear programming (MILP) model, but it required a very long time to generate a scheduling solution. Shafigh et al. (2015) developed a mathematical model integrating layout configuration and production planning in the design of dynamic distributed layouts. Their model incorporated different manufacturing attributes such as demand fluctuation, system reconfiguration, lot splitting, workload balancing, alternative routings, machine capability, and tooling requirements. On the other hand, many researchers have used metaheuristics to find near-optimal solutions for the FJSP within acceptable computational time. Brandimarte (1993) proposed a hierarchical algorithm based on the tabu search metaheuristic for routing and scheduling, combined with some known dispatching rules, to solve the FJSP. Hurink et al. (1994) developed a tabu search procedure for the job shop problem with multi-purpose machines. Dauzère-Pérès and Paulli (1997) presented a new neighborhood structure for the problem and used a list of tabu moves to prevent the local search from cycling. Mastrolilli and Gambardella (2000) used tabu search techniques and presented two neighborhood functions allowing an approximate resolution of the FJSP. Bozejko et al. (2010a) presented a tabu search approach based on a new golf neighborhood for the FJSP, and in the same year, Bozejko et al. (2010b) proposed a distributed tabu search algorithm for the FJSP, using a cluster architecture whose nodes are equipped with GPU units (multi-GPU) with distributed memory. A novel hybrid tabu search algorithm with a fast public critical block neighborhood structure (TSPCB) was proposed by Li et al. (2011) to solve the FJSP. The genetic algorithm was adopted by Chen et al. (1999), whose chromosome representation of solutions was divided into two parts: the first part defined the routing policy and the second part gave the sequence of operations on each machine. Kacem et al. (2002a) used a genetic algorithm with a localization approach to solve jointly the assignment and job shop scheduling problems with partial and total flexibility, and a hybridization of this evolutionary algorithm with fuzzy logic was presented in Kacem et al. (2002b). Jia et al. (2003) proposed a modified genetic algorithm for the FJSP in which various scheduling objectives can be pursued, such as minimizing makespan, cost, and weighted multiple criteria. Ho et al. (2007) developed a new architecture named LEarnable Genetic Architecture (LEGA) for learning and evolving solutions for the FJSP, providing an efficient integration between evolution and learning within a random search process. Gao et al. (2008) combined a hybrid genetic algorithm (GA) with a variable neighborhood descent (VND) for the FJSP. The GA used two vectors to represent a solution and a disjunctive graph to evaluate it; a VND was then applied to improve the final GA individuals. Zhang et al. (2014) presented a model of low-carbon scheduling for the FJSP considering three factors: the makespan, the machine workload for production, and the carbon emission for the environmental influence. They proposed a hybrid metaheuristic combining the Non-dominated Sorting Genetic Algorithm II (NSGA-II) with a local search algorithm based on a neighborhood search technique. Kar et al. (2015) presented a production-inventory model for deteriorating items with stock-dependent demand under inflation in a random planning horizon. This model is formulated as a profit maximization problem from the retailer's perspective and solved by two metaheuristics, the genetic algorithm and particle-swarm optimization. Kia et al.
(2017) treated the dynamic flexible flow line problem with sequence-dependent setup times, proposing a set of composite dispatching-rule-based genetic programming methods to minimize the mean flow time and the mean tardiness. Moreover, particle-swarm optimization was implemented by Xia and Wu (2005) in a metaheuristic hybridization with simulated annealing for the multi-objective FJSP. A combined particle-swarm optimization and tabu search algorithm was proposed by Zhang et al. (2009) to solve the multi-objective FJSP. Moslehi and Mahnam (2011) presented a metaheuristic approach based on a hybridization of particle-swarm optimization and a local search algorithm for the multi-objective FJSP. Other types of metaheuristics have also been developed in recent years: Yazdani et al. (2010) implemented a parallel variable neighborhood search (PVNS) algorithm to solve the FJSP using various neighborhood structures, and a biogeography-based optimization (BBO) technique was developed by Rahmati and Zandieh (2012) to search the solution space of the FJSP and find an optimal or near-optimal schedule. Shahriari et al. (2016) studied the just-in-time single machine scheduling problem with periodic preventive maintenance, implementing a multi-objective version of the particle-swarm optimization algorithm to minimize the total earliness–tardiness and the makespan simultaneously. Metaheuristics based on constraint programming (CP) techniques have also been used for the FJSP. Hmida et al. (2010) proposed a variant of the climbing discrepancy search (CDS) approach for solving the FJSP, presenting various neighborhood structures related to the assignment and sequencing problems. Pacino and Hentenryck (2011) considered a constraint-based scheduling approach to the flexible job shop problem.
They studied both large neighborhood search (LNS) and adaptive randomized decomposition (ARD) schemes, using random, temporal, and machine decompositions. Oddi et al. (2011) adapted an iterative flattening search (IFS) algorithm for solving the FJSP. This algorithm alternates two steps: a relaxation step, in which a subset of scheduling decisions is randomly retracted from the current solution, and a solving step, in which a new solution is incrementally recomputed from this partial schedule. Moreover, a new heuristic was developed by Ziaee (2014) for the FJSP, based on a constructive procedure that simultaneously considers many factors with a great effect on solution quality. Furthermore, distributed artificial intelligence techniques have been used for this problem, such as the multiagent model proposed by Ennigrou and Ghédira (2004), composed of three classes of agents: job agents, resource agents, and an interface agent. This model is based on a local search method, tabu search, to solve the FJSP. The model was improved in Ennigrou and Ghédira (2008), where the optimization role of the interface agent was distributed among the resource agents. Henchiri and Ennigrou (2013) proposed a multiagent model based on a hybridization of two metaheuristics: a local optimization process using tabu search to exploit the good areas of the search space and a global optimization process integrating particle-swarm optimization (PSO) to diversify the search towards unexplored areas. Rezki et al. (2016) proposed a multiagent system combining several intelligent techniques, such as multivariate control charts, neural networks, Bayesian networks, and expert systems, for complex process monitoring tasks: detection, diagnosis, identification, and reconfiguration.

In this paper, we present how to solve the flexible job shop scheduling problem by hybridizing two metaheuristics within a holonic multiagent model. This new approach follows two principal steps. In the first step, a genetic algorithm is applied by a scheduler agent for a global exploration of the search space. In the second step, a local search technique is used by a set of cluster agents to improve the quality of the final population. Numerical tests were conducted to evaluate the performance of our approach on four FJSP data sets from Kacem et al. (2002b), Brandimarte (1993), Hurink et al. (1994), and Barnes and Chambers (1996); the experimental results show its efficiency in comparison with other approaches.

The rest of the paper is organized as follows. In the next section, we define the formulation of the FJSP with its objective function and a simple problem instance; we then detail the proposed hybrid metaheuristic algorithm with its clustered holonic multiagent levels. The experimental and comparison results are provided in the subsequent section. The final section concludes the paper.

Problem formulation

The flexible job shop scheduling problem (FJSP) can be formulated as follows. There is a set of n jobs \(J = \lbrace {J}_{1},\dots ,{J}_{n}\rbrace\) to be processed on a set of m machines \(M = \lbrace {M}_{1},\dots ,{M}_{m}\rbrace.\) Each job \({J}_{i}\) consists of a sequence of \({n}_{i}\) operations \(\lbrace {O}_{i,1},{O}_{i,2},\dots ,{O}_{i,n_i}\rbrace\) to be performed successively according to the given sequence. For each operation \({O}_{i,j},\) there is a set of alternative machines \(M({O}_{i,j})\) capable of performing it. The main objective of this problem is to find a schedule minimizing the completion time of the last operation over all jobs, i.e., the makespan. The makespan \(C_{\rm {max}}\) is defined in Eq. 1, where \({C}_{i}\) is the completion time of job \({J}_{i}\):

$$\begin{aligned} C_{\rm {max}} = {\rm {max}}_{1 \le i \le n} ({C}_{i}). \end{aligned}$$
(1)
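As a simple illustration of Eq. 1, the makespan can be computed directly from the job completion times; the class and method names below are ours, not from the paper.

```java
// Illustrative only: names (Makespan, of) are ours, not from the paper.
public class Makespan {
    // completionTimes[i] holds C_i, the completion time of job J_i (Eq. 1)
    public static int of(int[] completionTimes) {
        int cmax = 0;
        for (int c : completionTimes) {
            cmax = Math.max(cmax, c);   // C_max = max_i C_i
        }
        return cmax;
    }
}
```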

The FJSP scheduling problem is divided into two sub-problems:

  • The operations assignment sub-problem assigns each operation to an appropriate machine.

  • The operations sequencing sub-problem determines a sequence of operations on all the machines.

Furthermore, the adopted hypotheses in this problem are:

  • All the machines are available at time zero.

  • All jobs are ready for processing at time zero.

  • The order of operations for each job is predefined and cannot be modified.

  • There are no precedence constraints among operations of different jobs.

  • The processing time of operations on each machine is defined in advance.

  • Each machine can process only one operation at a time.

  • Operations belonging to different jobs can be processed in parallel.

  • Each job could be processed more than once on the same machine.

  • Preemption is not allowed: once started on a machine, an operation cannot be interrupted.

To illustrate the FJSP, a sample problem with three jobs and five machines is shown in Table 1, where the numbers give the processing times and the tag “–” means that the operation cannot be executed on the corresponding machine.

Table 1 Simple instance of the FJSP

A metaheuristic hybridization within a holonic multiagent model

Glover et al. (1995) studied the nature of the connections between the genetic algorithm and tabu search metaheuristics, seeking to show the opportunities for creating a hybrid approach from these two standard methods that takes advantage of their complementary features to solve difficult optimization problems. After this seminal study, the combination of these two metaheuristics became well known in the literature and motivated many researchers to hybridize the two methods for the resolution of different complex problems in several areas.

Ferber (1999) defined a multiagent system as an artificial system composed of a population of autonomous agents, which cooperate with each other to reach common objectives, while simultaneously each agent pursues individual objectives. Furthermore, a multiagent system is a computational system where two or more agents interact (cooperate or compete, or a combination of them) to achieve some individual or collective goals. The achievement of these goals is beyond the individual capabilities and individual knowledge of each agent (Botti and Giret 2008).

Koestler (1967) gave the first definition of the term “holon” in the literature, combining the Greek word “holos,” meaning whole, and the suffix “-on,” meaning particle or part: almost everything is both a whole and a part at the same time. A holon is recursively decomposed at a lower granularity level into a community of other holons, producing a holarchy (Calabrese 2011). Moreover, a holon may be viewed as a sort of recursive agent: a super-agent composed of a set of sub-agents, where each sub-agent has its own behavior as a complementary part of the whole behavior of the super-agent. Holons are agents able to exhibit architectural recursiveness (Giret and Botti 2004).

In this work, we propose a hybrid metaheuristic approach with two general steps: a first step of global exploration using a genetic algorithm to find promising areas in the search space, together with a clustering operator that groups them into a set of clusters, and a second step in which a tabu search algorithm finds the best individual solution in each cluster. The global process of the proposed approach is implemented in two hierarchical holonic levels adopted by a recursive multiagent model, named genetic algorithm combined with tabu search in a holonic multiagent model (GATS\(+\)HM), see Fig. 1. The first holonic level is composed of a scheduler agent, the Master/Super-agent, which prepares the most promising regions of the search space; the second holonic level contains a set of cluster agents, the Workers/Sub-agents, which guide the search towards the global optimum of the problem. Each holonic level of this model is responsible for processing one step of the hybrid metaheuristic algorithm, and the two levels cooperate to attain the global solution of the problem.

Fig. 1
figure 1

Metaheuristic hybridization within a holonic multiagent model

The choice of this new metaheuristic hybridization is justified as follows. Standard metaheuristic methods generally use diversification techniques to generate and improve many different solutions distributed over the search space, or local search techniques to generate an improved set of neighborhood solutions from an initial solution. However, they do not guarantee reaching promising areas with good fitness that converge to the global optimum, even after many iterations; that is why they need further optimization. Therefore, the novelty of our approach is to launch a genetic algorithm based on a diversification technique only to explore the search space and to select the best promising regions through the clustering operator, and then to apply the intensification technique of tabu search, relaunching the search from an elite solution of each cluster autonomously to attain more dominant solutions of the search space.

The use of a multiagent system offers distributed and parallel treatments, which are very well suited to the second step of the proposed approach. Indeed, our combined metaheuristic approach follows the “Master” and “Workers” paradigm, two recursive hierarchical levels adaptable to a holonic multiagent model, where the scheduler agent is the Master/Super-agent of its society and the cluster agents are its Workers/Sub-agents.

Fig. 2
figure 2

First step of the global process by the scheduler agent

Scheduler agent

The scheduler agent (SA) is responsible for processing the first step of the hybrid algorithm, using a genetic algorithm called the neighborhood-based genetic algorithm (NGA) to identify areas with high average fitness in the search space during a fixed number of iterations MaxIter, see Fig. 2. The goal of using the NGA is only to explore the search space, not to find the global solution of the problem. A clustering operator is then applied to divide the best areas identified by the NGA into different parts, where each part is a cluster \({\rm {CL}}_{i} \in {\rm {CL}},\) the set of clusters, with \({\rm {CL}} = \lbrace {\rm {CL}}_{1},{\rm {CL}}_{2},\dots ,{\rm {CL}}_{N}\rbrace.\) In addition, this agent plays the role of an interface between the user and the system (initial parameter inputs and final result outputs). According to the number of clusters N obtained after applying the clustering operator, the SA creates N cluster agents (CAs), preparing the passage to the next step of the global algorithm. The SA then remains in a waiting state until it receives the best solution found by the CA of each cluster. Finally, it finishes the process by displaying the final solution of the problem.

Individual solution representation

The flexible job shop problem is composed of two sub-problems, the machine assignment problem and the operation scheduling problem; that is why the chromosome representation is encoded in two parts: the machine assignment part (MA) and the operation sequence part (OS). The first part, MA, is a vector \({V}_{1}\) with a length L equal to the total number of operations, where each index represents the machine selected to process the operation indicated at position p, see Fig. 3a. For example, at \(p=2,\) \({V}_{1}(2)\) is the selected machine \({M}_{4}\) for the operation \({O}_{1,2}.\) The second part, OS, is a vector \({V}_{2}\) with the same length as \({V}_{1},\) where each index represents an operation \({O}_{i,j}\) according to the predefined operations of the job set, see Fig. 3b. For example, the operation sequence 1–2–1–3–2–3–2 can be translated to: \(({O}_{1,1},{M}_{5}) \rightarrow ({O}_{2,1},{M}_{1}) \rightarrow ({O}_{1,2},{M}_{4}) \rightarrow ({O}_{3,1},{M}_{3}) \rightarrow ({O}_{2,2},{M}_{3}) \rightarrow ({O}_{3,2},{M}_{1}) \rightarrow ({O}_{2,3},{M}_{2}).\)

Fig. 3
figure 3

Chromosome representation of a scheduling solution
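The decoding of the two-part chromosome described above can be sketched as follows, reproducing the example of Fig. 3 (OS = 1–2–1–3–2–3–2); the data layout (jobs numbered from 1, the MA vector indexed in flattened job order) and all identifiers are our assumptions, not the authors' code.

```java
import java.util.Arrays;

public class ChromosomeDecoder {
    // Translates the two-part chromosome of Fig. 3 into "(O_{i,j}, M_k)" pairs.
    // nOps[i] is the number of operations of job i+1; ma[p] is the machine
    // chosen for the p-th operation in flattened job order; os is the
    // operation-sequence vector of job numbers.
    public static String[] decode(int[] nOps, int[] ma, int[] os) {
        int nJobs = nOps.length;
        // offset[i] = flattened index of the first operation of job i+1 in MA
        int[] offset = new int[nJobs];
        for (int i = 1; i < nJobs; i++) offset[i] = offset[i - 1] + nOps[i - 1];
        int[] seen = new int[nJobs];            // occurrences of each job so far
        String[] schedule = new String[os.length];
        for (int p = 0; p < os.length; p++) {
            int job = os[p];                    // jobs numbered from 1
            int opIdx = seen[job - 1]++;        // k-th occurrence -> O_{job,k}
            int machine = ma[offset[job - 1] + opIdx];
            schedule[p] = "(O" + job + "," + (opIdx + 1) + ",M" + machine + ")";
        }
        return schedule;
    }

    public static void main(String[] args) {
        // the example of the text: OS = 1-2-1-3-2-3-2
        int[] nOps = {2, 3, 2};                 // n_1=2, n_2=3, n_3=2
        int[] ma = {5, 4, 1, 3, 2, 3, 1};       // MA vector V1
        int[] os = {1, 2, 1, 3, 2, 3, 2};       // OS vector V2
        System.out.println(Arrays.toString(decode(nOps, ma, os)));
    }
}
```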

To convert the chromosome values into an active schedule, we used the priority-based decoding of Gao et al. (2008). This method considers the idle time which may exist between operations on a machine m and which is caused by the precedence constraints among operations belonging to the same job i. Let \({S}_{i,j}\) be the starting time of an operation \({O}_{i,j}\) (which can only start after its preceding operation \({O}_{i,(j-1)}\) has been processed) and \({C}_{i,j}\) its completion time. In addition, let [\({{t}^{S}}_{m},{{t}^{E}}_{m}\)] be an idle time interval on a machine m, starting at \({{t}^{S}}_{m}\) and ending at \({{t}^{E}}_{m},\) available to allocate an operation \({O}_{i,j}.\) Therefore, if \(j=1,\) \({S}_{i,j}\) takes \({{t}^{S}}_{m};\) else, if \(j \ge 2,\) it takes \({\rm {max}}\lbrace {{t}^{S}}_{m}, {C}_{i,(j-1)}\rbrace.\) The availability of the time interval [\({{t}^{S}}_{m},{{t}^{E}}_{m}\)] for an operation \({O}_{i,j}\) is validated by verifying whether there is a sufficient time period to complete the processing time \({p}_{ijm}\) of this operation, see Eq. 2:

$$\begin{aligned}&{\rm{if}}\; j=1,\; {{t}^{S}}_{m} + {p}_{ijm} \le {{t}^{E}}_{m}\nonumber \\&{\rm{if}}\; j\ge 2,\; {\rm {max}}\lbrace {{t}^{S}}_{m}, {C}_{i,(j-1)}\rbrace + {p}_{ijm} \le {{t}^{E}}_{m}. \end{aligned}$$
(2)

The priority-based decoding method assigns each operation to its reserved machine following the execution order of the operation sequence vector \({V}_{2}.\) To schedule an operation \({O}_{i,j}\) on a machine m, the idle time intervals of the selected machine are scanned to find an available period for its execution. If such a period is found, the operation \({O}_{i,j}\) is executed there; otherwise, it is appended at the end of machine m.
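The feasibility test of Eq. 2, which decides whether an idle interval [\({{t}^{S}}_{m},{{t}^{E}}_{m}\)] can host an operation, can be written compactly; this is a hedged sketch with identifiers of our choosing, not the authors' implementation.

```java
public class IdleIntervalCheck {
    // Eq. 2: can the idle interval [tS, tE] on machine m host operation
    // O_{i,j} with processing time p? cPrev is C_{i,j-1}, the completion
    // time of the job predecessor (ignored when j = 1).
    public static boolean fits(int j, int tS, int tE, int cPrev, int p) {
        int start = (j == 1) ? tS : Math.max(tS, cPrev);  // S_{i,j}
        return start + p <= tE;                            // enough room?
    }
}
```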

The fitness of each chromosome i is given by \({\rm{Fitness}}(i),\) where \(C_{\rm {max}}(i)\) is its makespan value, \(i \in \lbrace 1,\dots ,P\rbrace,\) and P is the total population size, see Eq. 3:

$$\begin{aligned} {\rm{Fitness}}(i) = \frac{1}{C_{\rm {max}}(i)}. \end{aligned}$$
(3)

Population initialization

The initial population is generated randomly following a uniform law, using a neighborhood parameter to keep the individual solutions diversified and well distributed in the search space. Each new solution must have a predefined distance to all the other solutions to be accepted as a new member of the initial population. The method used to determine the neighborhood parameter is inspired by Bozejko et al. (2010a) and is based on the permutation level of operations to obtain the distance between two solutions. The dissimilarity distance is calculated by comparing two chromosomes in terms of the placement of each operation \({O}_{i,j}\) on its alternative machine set in the machine assignment vector \({V}_{1}\) and of its execution order in the operation sequence vector \({V}_{2}.\) If there is a difference in the vector \({V}_{1},\) the distance is incremented by \(M({O}_{i,j}),\) the number of possible placements of the operation on its alternative machine set, because such a change is in the order of O(n). If there is a difference in the vector \({V}_{2},\) the distance is incremented by 1, because such a change is in the order of O(1). Let \({\rm{Chrom}}1({\rm{MA}}_{1}, {\rm{OS}}_{1})\) and \({\rm{Chrom}}2({\rm{MA}}_{2}, {\rm{OS}}_{2})\) be two chromosomes of two different scheduling solutions, \(M({O}_{i,j})\) the number of alternative machines of each operation \({O}_{i,j},\) L the total number of operations of all jobs, and Dist the dissimilarity distance. The distance is calculated first by measuring the difference between the machine assignment vectors \({\rm{MA}}_{1}\) and \({\rm{MA}}_{2},\) then by verifying the execution order difference of the operation sequence vectors \({\rm{OS}}_{1}\) and \({\rm{OS}}_{2}.\) The procedure is given in Algorithm 1.

figure a

Note that Distmax, the maximal dissimilarity distance, represents 100% of difference between two chromosomes and is calculated by Eq. 4:

$$\begin{aligned} {\rm{Distmax}} = \left[ \sum _{i=1}^{n} \sum _{j=1}^{n_i} M({O}_{i,j})\right] + L. \end{aligned}$$
(4)
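A minimal sketch of the dissimilarity distance of Algorithm 1 and of Distmax (Eq. 4) could look as follows; the flattened representation of \(M({O}_{i,j})\) as an array of set sizes, and all names, are our assumptions.

```java
public class Dissimilarity {
    // Sketch of Algorithm 1: the distance grows by |M(O_{i,j})| for each
    // differing machine assignment and by 1 for each differing position in
    // the operation sequence. altMachines[p] holds |M(O)| for the p-th
    // operation in flattened job order.
    public static int dist(int[] ma1, int[] os1, int[] ma2, int[] os2,
                           int[] altMachines) {
        int d = 0;
        for (int p = 0; p < ma1.length; p++)
            if (ma1[p] != ma2[p]) d += altMachines[p];   // O(n) difference
        for (int p = 0; p < os1.length; p++)
            if (os1[p] != os2[p]) d += 1;                // O(1) difference
        return d;
    }

    // Eq. 4: 100% dissimilarity is the sum of all |M(O_{i,j})| plus L.
    public static int distMax(int[] altMachines) {
        int sum = 0;
        for (int a : altMachines) sum += a;
        return sum + altMachines.length;
    }
}
```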

Selection operator

The selection operator is used to select the best parent individuals for the crossover step. This operator is based on a fitness parameter to assess the quality of each selected solution. However, as the search progresses, the fitness values become similar for most individuals. That is why we integrate the neighborhood parameter and propose a new combined parent selection operator, named the fitness-neighborhood selection operator (FNSO), which adds the dissimilarity distance criterion to the fitness parameter when selecting the parents for the crossover step. In each iteration, the FNSO chooses two parent individuals, until the whole population has been engaged in creating the next generation. The first parent takes successively each solution i, where \(i \in \lbrace 1,\dots ,P\rbrace\) and P is the total population size. The second parent obtains its solution j, with \(j \in \lbrace 1,\dots ,P\rbrace {\setminus } \lbrace i\rbrace,\) randomly by the roulette wheel selection method, based on the fitness and neighborhood parameters relative to the selected first parent. To use this random method, we calculate the fitness-neighborhood total FN of the population, see Eq. 5, the selection probability \({\text {sp}}_{k}\) of each individual \({I}_{k},\) see Eq. 6, and the cumulative probability \({\text {cp}}_{k},\) see Eq. 7. A random number r is then generated from the uniform range [0,1]. If \(r \le {\text {cp}}_{1},\) the second parent takes the first individual \({I}_{1};\) otherwise, it takes the \({k}{\rm {th}}\) individual \({I}_{k} \in \lbrace {I}_{2},\dots ,{I}_{P}\rbrace {\setminus } \lbrace {I}_{i}\rbrace\) such that \({\text {cp}}_{k-1} < r \le {\text {cp}}_{k}.\)

  • The fitness-neighborhood total for the population:

    $$\begin{aligned} {\rm{FN}} = \sum _{k=1}^P [1/(C_{\rm {max}}[k] \times {\text {Neighborhood}}[i][k])]. \end{aligned}$$
    (5)
  • The selection probability \({\text {sp}}_{k}\) for each individual \({I}_{k}\):

    $$\begin{aligned} {\text {sp}}_{k} = \frac{1/(C_{\rm {max}}[k] \times {\text {Neighborhood}}[i][k])}{\text {FN}}. \end{aligned}$$
    (6)
  • The cumulative probability \({\text {cp}}_{k}\) for each individual \({I}_{k}\):

    $$\begin{aligned} {\text {cp}}_{k} = \sum _{h=1}^k {\text {sp}}_{h}. \end{aligned}$$
    (7)

\(\Longrightarrow\) In Eqs. 5, 6, and 7, \(k \in \lbrace 1,2,\dots ,P\rbrace {\setminus } \lbrace i\rbrace.\)
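The FNSO roulette-wheel draw of Eqs. 5–7 for the second parent can be sketched as follows, assuming precomputed makespan and neighborhood tables; the identifiers and the numerical-safety fallback at the end are our own additions.

```java
import java.util.Random;

public class FnsoSelection {
    // Sketch of the FNSO second-parent draw (Eqs. 5-7): individual k is
    // weighted by 1 / (C_max[k] * Neighborhood[i][k]); the first parent i
    // is excluded from the wheel. Returns the index of the second parent.
    public static int secondParent(int i, int[] cmax, int[][] neighborhood,
                                   Random rng) {
        int P = cmax.length;
        double fn = 0.0;                                // Eq. 5
        for (int k = 0; k < P; k++)
            if (k != i) fn += 1.0 / (cmax[k] * (double) neighborhood[i][k]);
        double r = rng.nextDouble();                    // r in [0,1)
        double cp = 0.0;                                // Eq. 7, cumulative
        for (int k = 0; k < P; k++) {
            if (k == i) continue;
            cp += (1.0 / (cmax[k] * (double) neighborhood[i][k])) / fn; // Eq. 6
            if (r <= cp) return k;
        }
        return (i == P - 1) ? P - 2 : P - 1;            // rounding safety net
    }
}
```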

Crossover operator

The crossover operator plays an important role in the global process, combining the chromosomes of two parents to obtain new individuals and to reach new, better parts of the search space. In this work, this operator is applied with two different techniques, successively, to the parents' chromosome vectors MA and OS.

Machine vector crossover A uniform crossover generates a mixed vector from two machine vector parents, Parent1-MA1 and Parent2-MA2, producing two new children, Child1-MA1\('\) and Child2-MA2\('.\) This uniform crossover distinguishes two assignment cases: if a randomly generated number is less than 0.5, the first child takes the current machine value of parent1 and the second child takes the current machine value of parent2; otherwise, the two children swap their assignment directions, the first child taking from parent2 and the second child from parent1, see Algorithm 2.

figure b
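The uniform machine-vector crossover of Algorithm 2 can be sketched as follows; this is an illustrative reconstruction with names of our choosing, not the authors' code.

```java
import java.util.Random;

public class MachineCrossover {
    // Sketch of Algorithm 2: uniform crossover on the MA vectors. For each
    // position, a random number below 0.5 copies parent1 into child1 and
    // parent2 into child2; otherwise the directions are swapped.
    public static int[][] uniform(int[] ma1, int[] ma2, Random rng) {
        int L = ma1.length;
        int[] child1 = new int[L];
        int[] child2 = new int[L];
        for (int p = 0; p < L; p++) {
            if (rng.nextDouble() < 0.5) {
                child1[p] = ma1[p];
                child2[p] = ma2[p];
            } else {
                child1[p] = ma2[p];   // swapped direction
                child2[p] = ma1[p];
            }
        }
        return new int[][]{child1, child2};
    }
}
```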

Operation vector crossover An improved precedence-preserving order-based crossover (iPOX), inspired by Lee et al. (1998), is adapted for the parent operation vector OS. The iPOX operator is applied in four steps: first, two parent operation vectors (\({\rm{OS}}_{1}\) and \({\rm{OS}}_{2}\)) are selected and two job sub-sets \({\rm{Js}}_{1}\)/\({\rm{Js}}_{2}\) are generated randomly from the set of all jobs; second, every element of \({\rm{OS}}_{1}\)/\({\rm{OS}}_{2}\) that belongs to \({\rm{Js}}_{1}/{\rm{Js}}_{2}\) is copied into the child individual \({\rm{OS}}_{1}'\)/\({\rm{OS}}_{2}'\) and retained in the same position; third, the elements belonging to the sub-set \({\rm{Js}}_{1}/{\rm{Js}}_{2}\) are deleted from \({\rm{OS}}_{2}\)/\({\rm{OS}}_{1};\) finally, the empty positions of \({\rm{OS}}_{1}'\)/\({\rm{OS}}_{2}'\) are filled, in order, with the remaining elements of \({\rm{OS}}_{2}\)/\({\rm{OS}}_{1},\) see the example in Fig. 4.

Fig. 4
figure 4

iPOX crossover example for the OS vector
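The four iPOX steps above can be sketched for one child as follows; the helper name and the set-based representation of the job sub-set are our assumptions.

```java
import java.util.Arrays;
import java.util.Set;

public class Ipox {
    // Sketch of iPOX for one child: elements of os1 whose job belongs to
    // jobSubset keep their positions (step 2); the remaining positions are
    // filled, in order, with the elements of os2 not in jobSubset (steps 3-4).
    public static int[] child(int[] os1, int[] os2, Set<Integer> jobSubset) {
        int L = os1.length;
        int[] c = new int[L];
        Arrays.fill(c, -1);
        for (int p = 0; p < L; p++)
            if (jobSubset.contains(os1[p])) c[p] = os1[p];  // copy and retain
        int q = 0;                                          // cursor into os2
        for (int p = 0; p < L; p++) {
            if (c[p] != -1) continue;                       // already fixed
            while (jobSubset.contains(os2[q])) q++;         // skip Js jobs
            c[p] = os2[q++];                                // fill orderly
        }
        return c;
    }
}
```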

Mutation operator

The mutation operator is integrated to promote the diversity of the children generation. This operator is applied to the chromosomes of the new children generated by the crossover operation. Each part of a child chromosome, MA and OS, has its own mutation technique.

Machine vector mutation This first operator randomly selects an index in the machine vector MA. It then replaces the machine number at the selected index by another machine belonging to the same alternative machine set, see Fig. 5.

Fig. 5
figure 5

Mutation operator example for the MA vector

Operation vector mutation This second operator randomly selects two indexes, index1 and index2, in the operation vector OS. It then swaps the job numbers at index1 and index2, see Fig. 6.

Fig. 6
figure 6

Mutation operator example for the OS vector
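The two mutation techniques can be sketched together; the representation of the alternative machine sets as an array of arrays, and all names, are our assumptions.

```java
import java.util.Random;

public class Mutation {
    // MA mutation (Fig. 5): re-assign one randomly chosen operation to a
    // machine of its alternative set. altSets[p] lists the machines able
    // to process the p-th operation.
    public static void mutateMachine(int[] ma, int[][] altSets, Random rng) {
        int p = rng.nextInt(ma.length);
        int[] alts = altSets[p];
        ma[p] = alts[rng.nextInt(alts.length)];
    }

    // OS mutation (Fig. 6): swap the job numbers at two random indexes.
    public static void mutateSequence(int[] os, Random rng) {
        int i1 = rng.nextInt(os.length);
        int i2 = rng.nextInt(os.length);
        int tmp = os[i1];
        os[i1] = os[i2];
        os[i2] = tmp;
    }
}
```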

Replacement operator

The replacement operator prepares the surviving population for the next iterations. It replaces each parent by the child with the best fitness in its current family.

Fig. 7
figure 7

Final population transformation by applying the clustering operator

Clustering operator

After the maximum number of iterations MaxIter of the genetic algorithm, the scheduler agent applies a clustering operator, using the hierarchical clustering algorithm of Johnson (1967), to divide the final population into N clusters, see Fig. 7, to be treated by the cluster agents in the second step of the global process. The clustering operator is based on the neighborhood parameter, i.e., the dissimilarity distance between individuals. It starts by assigning each individual \({\rm{Indiv}}(i)\) to its own cluster \({\rm {CL}}_{i},\) so P individuals initially yield P singleton clusters. Then, for each individual \({\rm{Indiv}}(i),\) we verify successively, for each next individual \({\rm{Indiv}}(j)\) of the remaining population (where i and \(j \in \lbrace 1,\dots ,P\rbrace , i\ne j\)), whether the dissimilarity distance Dist between \({\rm{Indiv}}(i)\) and \({\rm{Indiv}}(j)\) is less than or equal to a fixed threshold Distfix (representing a percentage of difference X% relative to Distmax, see Eq. 8) and whether \({\rm{Cluster}}({\rm{Indiv}}(i)) \ne {\rm{Cluster}}({\rm{Indiv}}(j)).\) If so, we apply \({\rm{Merge}}({\rm{Cluster}}({\rm{Indiv}}(i)), {\rm{Cluster}}({\rm{Indiv}}(j)));\) otherwise, the search continues for new combinations with the remaining individuals. The process stops after browsing all the population individuals, finally yielding N clusters:

$$\begin{aligned} {\rm{Distfix}} = {\rm{Distmax}} \times X\%. \end{aligned}$$
(8)
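The clustering operator can be sketched with a precomputed distance matrix; the label-array merge below is one simple way to realize Merge(Cluster(i), Cluster(j)), not necessarily the authors' implementation of Johnson's algorithm.

```java
public class ClusteringOperator {
    // Sketch of the clustering operator: individuals start in singleton
    // clusters, and two clusters are merged whenever the dissimilarity
    // distance between two of their members is at most distFix.
    // dist[i][j] is the precomputed dissimilarity matrix.
    public static int[] cluster(int[][] dist, int distFix) {
        int P = dist.length;
        int[] label = new int[P];
        for (int i = 0; i < P; i++) label[i] = i;   // one cluster per individual
        for (int i = 0; i < P; i++) {
            for (int j = i + 1; j < P; j++) {
                if (dist[i][j] <= distFix && label[i] != label[j]) {
                    int from = label[j], to = label[i];
                    for (int k = 0; k < P; k++)     // Merge(Cluster(i), Cluster(j))
                        if (label[k] == from) label[k] = to;
                }
            }
        }
        return label;                               // cluster id per individual
    }
}
```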
Fig. 8
figure 8

Distribution of the cluster agents in the different clusters of the search space

Cluster agents

Each cluster agent \({\rm{CA}}_{i}\) is responsible to apply successively to each cluster \({\rm {CL}}_{i}\) a local search technique which is the tabu search algorithm to guide the research in promising regions of the search space and to improve the quality of the final population of the genetic algorithm. In fact, this local search is executed simultaneously by the set of the CAs agents, where each CA starts the research from an elite solution of its cluster searching to attain new more dominant individual solutions separately in its assigned cluster \({\rm {CL}}_{i},\) see Fig. 8. The used tabu search algorithm is based on an intensification technique allowing to start the research from an elite solution in a cluster \({\rm {CL}}_{i}\) (a promising part in the search space) to collect new scheduling sequence minimizing the makespan. Let E the elite solution of a cluster \({\rm {CL}}_{i},\) \(E' \in N(E)\) is a neighbor of the elite solution E, \({\rm{GL}}_{i}\) is the global list of each \({\rm{CA}}_{i}\) to receive new found elite solutions by the remaining CAs, each \({\rm {CL}}_{i}\) plays the role of the tabu list with a dynamic length, and \(C_{\rm {max}}\) is the makespan of the obtained solution. 
The search process of this local search starts from an elite solution E using the move and insert method of Mastrolilli and Gambardella (2000): each cluster agent \({\rm{CA}}_{i}\) moves an operation \({O}_{i,j}\) from a machine m to another machine n belonging to the alternative machine set of the selected operation \({O}_{i,j},\) in order to generate a new scheduling combination \(E' \in N(E).\) The agent then checks whether the makespan of this new solution dominates that of the current one, i.e., \(C_{\rm {max}}(E') < C_{\rm {max}}(E).\) If so, \({\rm{CA}}_{i}\) saves \(E'\) in its tabu list (which is \({\rm {CL}}_{i}\)) and sends it to all the other CA agents to be placed in their global lists GLs as \((E',{\rm{CA}}_{i}),\) ensuring that it will not be used again by them as a search point. Otherwise, the neighborhood search continues from the current solution E. The process stops when the maximum allowed number of neighbors of a solution E has been explored without improvement. We detail how to proceed in Algorithm 3.

Algorithm 3
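The intensification loop described above can be sketched as follows. In this minimal Python sketch, `neighbor` and `cmax` are illustrative stand-ins for the move-and-insert neighborhood generation and the makespan evaluation, and the broadcasting of new elite solutions to the other agents' global lists is simulated by the returned `found` list.

```python
def cluster_local_search(elite, neighbor, cmax, max_no_improve=50):
    """Intensification sketch of the tabu search run by one cluster agent:
    start from an elite solution, generate neighbors, keep only dominating
    ones, and stop after max_no_improve neighbors without improvement."""
    best = elite
    tabu = [elite]   # the cluster itself plays the role of a dynamic tabu list
    found = []       # elite solutions that would be sent to the other CAs
    no_improve = 0
    while no_improve < max_no_improve:
        cand = neighbor(best)
        if cand in tabu:             # already visited: counts as no improvement
            no_improve += 1
            continue
        if cmax(cand) < cmax(best):  # candidate dominates the current elite
            best = cand
            tabu.append(cand)
            found.append(cand)       # would be broadcast to the other agents
            no_improve = 0
        else:
            no_improve += 1
    return best, found
```

With a toy neighborhood that decreases a numeric "solution" down to a floor value, the loop improves step by step and then terminates once no better neighbor appears.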

Once this local search step is finished, the CA agents terminate the process by sending their last best solutions to the SA agent, which considers the best of them as the global solution for the FJSP, see Fig. 9.

Fig. 9 Second step of the global process by the cluster agents

Experimental results

Experimental setup

The proposed GATS\(+\)HM is implemented in the Java language and run on a 2.10 GHz Intel Core 2 Duo processor with 3 GB of RAM, where we use the Eclipse integrated development environment (IDE) to code the algorithm and the Jade multiagent platform (Bellifemine et al. 1999) to create the different agents of our holonic model. To evaluate its efficiency, numerical tests are made on four sets of well-known benchmark instances from the FJSP literature:

  • Kacem data (Kacem et al. 2002b): The data set consists of five problems considering a number of jobs ranging from 4 to 15 with a number of operations for each job ranging from 2 to 4, which will be processed on a number of machines ranging from 5 to 10.

  • Brandimarte data (Brandimarte 1993): The data set consists of ten problems considering a number of jobs ranging from 10 to 20 with a number of operations for each job ranging from 5 to 15, which will be processed on a number of machines ranging from 4 to 15.

  • Hurink edata (Hurink et al. 1994): The data set consists of 40 problems (la01–la40) inspired by the classical job shop instances of Lawrence (1984), from which three data sets are generated: edata, rdata, and vdata; the edata set is used in this paper.

  • Barnes data (Barnes and Chambers 1996): The data set consists of 21 problems considering a number of jobs ranging from 10 to 15 with a number of operations for each job ranging from 10 to 15, which will be processed on a number of machines ranging from 11 to 18.

Due to the non-deterministic nature of the proposed algorithm, we run it five independent times on each of the four data sets (Kacem et al. 2002b; Brandimarte 1993; Hurink et al. 1994; Barnes and Chambers 1996) to obtain significant results. The computational results are presented by four metrics: the best makespan (Best), the average makespan (Avg Cmax), the average CPU time in seconds (Avg CPU), and the standard deviation of the makespan (Dev \(\%\)), which is calculated by Eq. 9, where \({\rm{Mk}}_{\bf{o}}\) is the makespan obtained by our algorithm and \({\rm{Mk}}_{\bf{c}}\) is the makespan of the algorithm chosen for comparison

$$\begin{aligned} {\rm{Dev}} = [({\rm{Mk}}_{\bf{c}} - {\rm{Mk}}_{\bf{o}})/{\rm{Mk}}_{\bf{c}}] \times 100\%. \end{aligned}$$
(9)
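Eq. 9 can be computed directly; a positive deviation means our makespan improves on the compared one. A one-line sketch (the function name is illustrative):

```python
def deviation(mk_ours, mk_compared):
    """Eq. 9: Dev = (Mk_c - Mk_o) / Mk_c * 100, the percentage by which
    our makespan Mk_o improves on a compared makespan Mk_c."""
    return (mk_compared - mk_ours) / mk_compared * 100
```

For example, a makespan of 95 against a compared makespan of 100 gives a deviation of 5%.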

The parameter settings for our algorithm are adjusted experimentally and are as follows:

  • Crossover probability 1.0.

  • Mutation probability 0.5.

  • Maximum number of iterations 1000.

  • The population size ranged from 15 to 400 depending on the complexity of the problem.

  • The fixed threshold Distfix represents 50% of the maximal dissimilarity distance Distmax.
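For reference, the settings above can be gathered into a single configuration sketch (the dictionary name and keys are illustrative, not taken from the original implementation):

```python
# Experimentally adjusted parameter settings of GATS+HM (illustrative names).
GATS_HM_PARAMS = {
    "crossover_probability": 1.0,
    "mutation_probability": 0.5,
    "max_iterations": 1000,
    "population_size_range": (15, 400),  # chosen per instance complexity
    "distfix_ratio": 0.5,                # Distfix = 0.5 * Distmax (Eq. 8)
}
```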

Table 2 Results of the Kacem instances (part 1)
Table 3 Results of the Kacem instances (part 2)
Table 4 Results of the Brandimarte instances (part 1)
Table 5 Results of the Brandimarte instances (part 2)
Table 6 Results of the Hurink edata instances
Table 7 Results of the Barnes data instances

Experimental comparisons

To show the efficiency of our GATS\(+\)HM algorithm, we compare the results it obtains on the four previously cited data sets with those of other well-known algorithms from the FJSP literature.

The chosen algorithms are:

  • The TS of Brandimarte (1993), the N1-1000 of Hurink et al. (1994) (with its literature lower bound LB), and the AL\(+\)CGA of Kacem et al. (2002b), which obtained the first results in the literature for their proposed instances.

  • The LEGA of Ho et al. (2007), the BBO of Rahmati and Zandieh (2012), and the Heuristic of Ziaee (2014) are standard heuristic and metaheuristic methods.

  • The TS3 of Bozejko et al. (2010a), the paper that inspired our computation method of the dissimilarity distance.

  • The MOPSO\(+\)LS of Moslehi and Mahnam (2011) and the Hybrid NSGA-II of Zhang et al. (2014) are two recent hybrid metaheuristic algorithms.

  • The MATSLO\(+\) of Ennigrou and Ghédira (2008) and the MATSPSO of Henchiri and Ennigrou (2013) are two new hybrid metaheuristic algorithms distributed in a multiagent model.

The different comparative results are displayed in Tables 2, 3, 4, 5, 6, and 7, where the first column gives the name of each instance, the second column gives the size of each instance \((n\times m),\) with n the number of jobs and m the number of machines, and the remaining columns detail the experimental results of the different chosen approaches in terms of the best \(C_{\rm {max}}\) (Best) and the standard deviation (Dev %). The bold values in the tables signify the best obtained results, and N/A means that the result is not available.

Analysis of the comparative results

By analyzing Tables 2 and 3, it can be seen that our GATS\(+\)HM algorithm is the best at solving the five Kacem instances. In fact, the GATS\(+\)HM outperforms the AL\(+\)CGA in four out of five instances; the Heuristic in three out of five instances; and the LEGA, the MOPSO\(+\)LS, the BBO, and the Hybrid NSGA-II in two out of five instances. In addition, on this first data set, our GATS\(+\)HM matches the results obtained by the chosen approaches in case 1 for LEGA, BBO, Hybrid NSGA-II, and Heuristic; in case 2 for MOPSO\(+\)LS and BBO; in case 3 for LEGA; in case 4 for all the algorithms; and in case 5 for MOPSO\(+\)LS and Hybrid NSGA-II.

From Tables 4 and 5, the comparison results show that the GATS\(+\)HM obtains eight out of ten best results for the Brandimarte instances. Indeed, our algorithm outperforms the TS in nine out of ten instances. Moreover, our GATS\(+\)HM outperforms the LEGA and the MATSLO\(+\) in eight out of ten instances. In addition, our hybrid approach outperforms the TS3 in five out of ten instances. Compared with the BBO, the GATS\(+\)HM obtains the best solutions for the MK02, MK06, and MK10 instances, but gets a slightly worse result for the MK09 instance. Furthermore, the MATSPSO attained the best result for the MK01 instance, but our algorithm obtains better solutions for the remaining instances. In addition, our algorithm outperforms the Heuristic in all the Brandimarte instances. On this second data set, our GATS\(+\)HM matches the results obtained by some approaches: in the MK01 for LEGA, MATSLO\(+\) and TS3; in the MK02 for MATSPSO; in the MK03 for TS3, BBO and Heuristic; in the MK04 for BBO; in the MK05 for TS3 and BBO; in the MK07 for BBO and TS3; and in the MK08 for all the algorithms except the Heuristic.

From Table 6, the obtained results show that the GATS\(+\)HM obtains seven out of ten best results for the Hurink edata instances (la01–la05) and (la16–la20). Indeed, our approach outperforms the N1-1000 in eight out of ten instances. Moreover, our GATS\(+\)HM outperforms the MATSLO\(+\) in seven out of ten instances. Compared with the literature lower bound LB, the GATS\(+\)HM attains the same results for the la01, la02, la04, la05, la16, la17, and la20 instances, but gets slightly worse results for the la03, la18, and la19 instances. Furthermore, on this third data set, our GATS\(+\)HM matches the results obtained by the chosen approaches: in the la01 for the MATSLO\(+;\) in the la02 for the N1-1000 and the MATSLO\(+;\) and in the la05 for the N1-1000 and the MATSLO\(+.\)

Fig. 10 \(C_{\rm {max}}\) comparison of GATS\(+\)HM and BBO for the Barnes data (Barnes and Chambers 1996)

Fig. 11 Population size comparison of GATS\(+\)HM and BBO for the Barnes data (Barnes and Chambers 1996)

Fig. 12 CPU time comparison of GATS\(+\)HM and BBO for the Barnes data (Barnes and Chambers 1996)

From Table 7, the results for the Barnes instances demonstrate that our GATS\(+\)HM dominates the BBO algorithm on different criteria such as the \(C_{\rm {max}},\) the Avg \(C_{\rm {max}},\) the Avg CPU, the deviation, and the population size. In fact, for the \(C_{\rm {max}}\) criterion, our GATS\(+\)HM outperforms the BBO in 12 out of 14 instances, see Fig. 10, with deviations varying from 0.430 to 4.805%. We also attain average \(C_{\rm {max}}\) values dominating the BBO in 12 instances. Moreover, as shown in Fig. 11, the population sizes used by our algorithm are smaller than those of the BBO in all 14 instances, which influences the CPU execution time for each solution, see Fig. 12.

By analyzing the computational time in seconds and the comparison results of our algorithm in terms of makespan, we can see the efficiency of the newly proposed GATS\(+\)HM relative to the FJSP literature. This efficiency is explained by the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and by the intensification technique of the tabu search, which starts from an elite solution to reach new, more dominant solutions.

Conclusion

In this paper, we present a new metaheuristic hybridization algorithm based on a clustered holonic multiagent model, called GATS\(+\)HM, for the flexible job shop scheduling problem (FJSP). In this approach, a neighborhood-based genetic algorithm is adapted by a scheduler agent (SA) for a global exploration of the search space. Then, a local search technique is applied by a set of cluster agents (CAs) to guide the search into promising regions of the search space and to improve the quality of the final population. To measure its performance, numerical tests are made using four well-known data sets from the FJSP literature. The experimental results show that the proposed approach is efficient in comparison with other approaches. In future work, we will treat other extensions of the FJSP, such as integrating transportation times in the shop process, where each operation must be transported by a moving robot to continue its treatment on its next machine. This problem can be further extended by considering a non-unit transport capacity for the moving robots, so that it becomes a flexible job shop scheduling problem with transportation times and non-unit-capacity robots. We therefore plan to adapt our approach to this new variant of the problem and to study its effects on the makespan.