1 Introduction

Optimizing production scheduling can yield considerable improvements in manufacturing efficiency [1]. The job-shop scheduling problem (JSP) is an NP-complete problem [2]. The JSP assumes no flexibility of resources (such as machines and tools): each operation is assigned to a single, fixed machine [1].

Modern manufacturing systems contain many flexible machines that increase production efficiency. Such machines can process several types of operations, which relaxes the JSP assumption that each operation can be processed by only a single machine [1].

As an extension of the JSP, the flexible job-shop scheduling problem (FJSP), which is known to be NP-hard [3], allows alternative machines for each operation. The FJSP is therefore more complicated than a traditional JSP of the same scale, as it adds an extra decision level, the routing of operations, on top of the sequencing decision [2].

Many approaches have been proposed to solve the FJSP since it was first presented in 1990 [4]. Current methods can be broadly categorized into exact algorithms, dispatching rules (DR), evolutionary algorithms (EA), swarm intelligence (SI) based approaches, and local search (LS) algorithms, among others [5].

Exact algorithms tend to be impractical for large-scale FJSP, while methods such as EA and SI consume substantial computation time. Many of these algorithms are also complicated and difficult to reproduce, which makes them hard to apply to real-life problems.

In this research, we present a modified iterated greedy (IG) algorithm, a simpler metaheuristic that is easier to code and reproduce. The classical IG is separated into two phases that address the two sub-problems of the FJSP, and it is combined with a set of dispatching rules (DRs). The simplicity and efficiency of IG and DRs yield an effective method that consumes less computation time and is easy to implement.

The remainder of this paper is organized as follows. The literature review and problem definition are presented in Section 2. The IG approach and the proposed modified iterated greedy (MIG) algorithm are described in Section 3. Experimental studies are discussed in Section 4. Section 5 provides conclusions and future work.

2 Literature Review and Problem Definition

2.1 Literature Review

Several methods have been used to deal with the FJSP. These techniques are classified into two groups: exact methods and approximation methods [1]. Exact algorithms include mathematical programming (MP), while approximation algorithms include dispatching rules (DRs) and artificial intelligence (AI) based approaches.

Brucker and Schlie [4] first presented the FJSP with two jobs and proposed a polynomial graphical algorithm for it. Demir and İşleyen [6] evaluated several mathematical models of the FJSP. However, Pezzella et al. [7] showed that exact algorithms are ineffective when dealing with large-scale FJSP instances.

Baykasoğlu and Özbakır [8] analyzed the effects of several DRs on the scheduling performance of job shops with different levels of flexibility and different problem sizes. They found that the performance of these DRs was approximately similar under high machine flexibility, whereas different performances were obtained under zero machine flexibility. Ingimundardottir and Runarsson [9] automated the selection of combined DRs by converting them into measurable contribution factors in the optimization process of scheduling problems. Huang and Süer [10] adopted a genetic algorithm to explore the best combination of DRs and used a “Hold” strategy for multi-objective job-shop scheduling.

Evolutionary algorithms, such as the genetic algorithm (GA), are an effective class of meta-heuristic methods. Zhang et al. [11] proposed a bi-level GA in an attempt to keep the advantages of preceding generations and reduce the disturbance of genetic operators. Later, an improved GA was proposed by Zhang et al. [12] targeting a better initialization and faster convergence. Huang et al. [13] developed an improved GA using opposition-based learning; the method used a multi-parent precedence operation crossover and a modified neighbor search mutation with opposite inverse mutation.

Swarm intelligence (SI) algorithms mainly include ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial bee colony (ABC). Xu et al. [14] used a bat algorithm to solve the dual flexible job-shop scheduling problem (DFJSP); the algorithm used crossover and mutation, as well as an inertia weight adjusted with a linearly decreasing strategy, to enhance its search ability. Wu et al. [15] proposed a hybrid algorithm based on ACO and provided a modeling method based on a 3D disjunctive graph.

Local search (LS) methods have also been employed widely in solving the FJSP. The design of the neighborhood structure contributes directly to the efficiency of the method [1]. Sobeyko and Mönch [16] developed an iterative local search approach for the total weighted tardiness objective in large-scale FJSP; the algorithm used a simulated annealing (SA) acceptance criterion to avoid getting trapped in local optima.

Many researchers have attempted to combine several algorithms into effective hybrid algorithms (HA) for the FJSP. Palacios et al. [17] combined GA with tabu search (TS) and added heuristic seeding; the hybrid algorithm was used to solve the fuzzy FJSP. Gaham et al. [18] presented an operations permutation-based discrete harmony search method that integrates a dedicated improvisation operator into the solution harmony, together with a modified intelligent mutation operator.

In short, exact algorithms cost less CPU time, but most of them were unable to provide solution quality competitive with other methods. On the other hand, metaheuristics such as evolutionary algorithms (EA) and swarm intelligence (SI) based algorithms have delivered effective solutions of better quality; however, they consume much more computation time.

In this research, we propose a modified iterated greedy algorithm (MIG) to reduce this cost by consuming less CPU time. MIG provides a simple and easily applicable method that can compete with more complex meta-heuristics. The proposed MIG consists of two phases, each derived from the classical IG, to solve the two sub-problems of the FJSP. Both phases use a set of dispatching rules (DRs).

2.2 Flexible Job Shop Problem Definition

For processing n jobs on m machines, the problem is to find the solution that achieves the minimum or maximum value of an objective function. In the FJSP, there is a set of machines A = {M1, …, Mm} and a set of jobs J = {J1, …, Jn}, where each job Ji consists of a given sequence of ni operations Oi,1, Oi,2, …, Oi,ni. Each operation Oi,j can be processed on any machine of a subset Ai,j ⊆ A; choosing this machine is the routing sub-problem. The other sub-problem is the sequencing sub-problem, which is to sequence the operations on the machines. In this paper, the objective function is to minimize the makespan (the maximal completion time) of all jobs.
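
For reference, the makespan objective can be written in the standard form below, where $C_i$ denotes the completion time of the last operation of job $J_i$ (this formula is a standard restatement of the objective, added here only for clarity):

$$C_{\max} = \max_{1 \le i \le n} C_i, \qquad \text{objective: } \min C_{\max}.$$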

In this research the following assumptions are considered:

  1) All machines are available at time 0;

  2) All jobs are released at time 0;

  3) Each machine can process only one operation at a time;

  4) Each operation is processed without interruption on one machine of its set of available machines;

  5) Recirculation may occur, i.e., a job may visit a machine more than once;

  6) The order of operations for each job is predefined and cannot be modified.

The FJSP has been classified by Kacem et al. [19] into partial flexible job shop (P-FJSP) and total flexible job shop (T-FJSP). The flexibility of problems is partial when there exists a subset Ai,j of A (Ai,j ⊂ A) for at least one operation Oi,j, and it is total when Ai,j=A for all operations. For the same number of machines and jobs, although the T-FJSP has the larger solution space, the P-FJSP is more difficult to solve than the T-FJSP [19].
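
To make the above definition concrete, the following sketch shows one possible C++ representation of an FJSP instance (C++ being the language used for the implementation in Section 4). The structure names and the small two-job example are illustrative assumptions, not the exact data layout of MIG.

#include <cstdio>
#include <unordered_map>
#include <vector>

// One operation O_{i,j}: the subset A_{i,j} of eligible machines together
// with the processing time of the operation on each of them.
struct Operation {
    std::unordered_map<int, int> procTime;  // machine index -> processing time
};

// A job J_i is an ordered sequence of operations O_{i,1}, ..., O_{i,ni}.
struct Job {
    std::vector<Operation> ops;
};

// An FJSP instance with m machines and n jobs. The instance is total (T-FJSP)
// if every operation lists all m machines, and partial (P-FJSP) otherwise.
struct Instance {
    int numMachines = 0;
    std::vector<Job> jobs;
};

// Hypothetical 2-job, 3-machine instance used only for illustration.
Instance makeToyInstance() {
    Instance inst;
    inst.numMachines = 3;
    inst.jobs = {
        { { Operation{{{0, 3}, {1, 5}}},              // O_{1,1}: machine 0 or 1
            Operation{{{1, 2}, {2, 4}}} } },          // O_{1,2}: machine 1 or 2
        { { Operation{{{0, 4}, {1, 6}, {2, 3}}} } }   // O_{2,1}: any machine
    };
    return inst;
}

int main() {
    Instance inst = makeToyInstance();
    std::printf("jobs: %zu, machines: %d\n", inst.jobs.size(), inst.numMachines);
    return 0;
}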

3 Modified Iterated Greedy

3.1 Classical Iterated Greedy

This algorithm was first proposed by Ruiz and Stützle [20] to solve traditional permutation flow-shop scheduling problems. The traditional IG iterates two distinct phases: destructing a part of the solution, and reconstructing that part by a greedy technique, optionally followed by local search to improve the solution [20, 21]. The original IG adopted the NEH heuristic of Nawaz et al. [22] as its greedy constructive method.

Many later works have employed IG: Ruiz and Stützle [23] used IG to solve the flow-shop problem with sequence-dependent setup times; it has also been used for node placement in street networks by Toyama et al. [24], for single-machine scheduling problems by Tasgetiren et al. [25], and as a local search method for unrelated parallel machine scheduling by Fanjul-Peyro and Ruiz [21].

The simple IG has proved to be effective and able to obtain state-of-the-art results for a variety of scheduling problems with different objectives [26].
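
For reference, the classical IG loop described above can be summarized by the generic skeleton below. The function names, the abstract solution type, and the simple accept-if-not-worse rule are our own simplifying assumptions rather than the exact choices of Ruiz and Stützle [20].

#include <functional>

// Generic skeleton of the classical iterated greedy (IG) loop: destruct part
// of the incumbent solution, greedily reconstruct it, optionally improve it
// with local search, and decide whether to accept the result.
template <typename Solution>
Solution iteratedGreedy(Solution incumbent,
                        const std::function<Solution(const Solution&)>& destruct,
                        const std::function<Solution(const Solution&)>& reconstruct,
                        const std::function<Solution(const Solution&)>& localSearch,
                        const std::function<double(const Solution&)>& cost,
                        int maxIterations) {
    Solution best = incumbent;
    for (int it = 0; it < maxIterations; ++it) {
        Solution candidate = localSearch(reconstruct(destruct(incumbent)));
        if (cost(candidate) <= cost(incumbent)) incumbent = candidate;  // simple acceptance rule
        if (cost(incumbent) < cost(best)) best = incumbent;
    }
    return best;
}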

3.2 Presented Algorithm (MIG)

The classical IG algorithm has been used in a wide range of scheduling problems, according to Pan and Ruiz [26]. Although the algorithm was originally proposed for flow-shop problems, some researchers have used IG for more complex problems, such as the hybrid flexible flow line problem in Refs. [27, 28] and the blocking job-shop scheduling problem by Pranzo and Pacciarelli [29].

Shop scheduling problems differ from each other mainly in the type and degree of flexibility. By definition, operation flexibility is the possibility of performing an operation on more than one machine; sequencing flexibility is the possibility of interchanging the sequence in which the required manufacturing operations are performed; and processing flexibility is the possibility of producing the same manufacturing feature with alternative operations or sequences of operations.

According to this definition, the main difference between the flexible job-shop scheduling problem and the job-shop scheduling problem (JSP) is that the FJSP introduces operation flexibility. The FJSP can be separated into two sub-problems: routing (assigning operations to machines) and scheduling (sequencing the assigned operations on each machine) [30]. Hence, the modified iterated greedy must deal with both sub-problems. This is achieved by splitting both the destruct and construct phases of the algorithm into two stages, so that both sides of the FJSP are resolved. Pan and Ruiz [31] stated that IG requires an effective greedy reconstruction in order to outperform other algorithms. The NEH heuristic contributes nothing to the machine-selection decision and offers limited solution variety for the FJSP. Therefore, the NEH heuristic is replaced with a set of dispatching rules (DRs) to support both the sequencing and machine-selection decisions in the reconstruction phase.

Figure 1 shows the flowchart of the MIG. The working of the algorithm is described in the steps below; a high-level code sketch follows the step list:

Figure 1  Flowchart of the MIG

Step 1: Initialization, which is done randomly.
Step 2: (Phase one) Destruct part of the machine selection of the solution.
Step 3: (Phase one) Reconstruct the machine selection.
Step 4: (Phase two) Destruct the sequence and machine selection for some operations.
Step 5: (Phase two) Reconstruct the sequence and machine selection for these operations.
Step 6: Repeat Steps 2 through 5 until a stopping criterion is met.
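
The steps above can be summarized by the high-level sketch below. The Schedule representation, the helper names, and the accept-if-not-worse rule are illustrative assumptions that mirror the flowchart of Figure 1; the helpers are only declared here and are sketched in the following subsections.

#include <vector>

// A candidate FJSP solution: for every operation (indexed globally), the
// selected machine, plus the order in which the operations are dispatched.
struct Schedule {
    std::vector<int> machineOf;  // operation index -> selected machine
    std::vector<int> sequence;   // dispatching order of operation indices
};

// Helpers corresponding to the steps above (declarations only; sketches of
// the destruct and reconstruct steps follow in Sections 3.3 and 3.4).
Schedule randomInitial();                            // Step 1
void destructMachineSelection(Schedule& s);          // Step 2 (phase one)
void reconstructMachineSelection(Schedule& s);       // Step 3 (phase one)
void destructSequenceAndMachines(Schedule& s);       // Step 4 (phase two)
void reconstructSequenceAndMachines(Schedule& s);    // Step 5 (phase two)
int makespan(const Schedule& s);

Schedule modifiedIteratedGreedy(int maxIterations) {
    Schedule best = randomInitial();                 // Step 1
    Schedule current = best;
    for (int it = 0; it < maxIterations; ++it) {     // Step 6: stopping criterion
        Schedule candidate = current;
        destructMachineSelection(candidate);         // Step 2
        reconstructMachineSelection(candidate);      // Step 3
        destructSequenceAndMachines(candidate);      // Step 4
        reconstructSequenceAndMachines(candidate);   // Step 5
        if (makespan(candidate) <= makespan(current)) current = candidate;
        if (makespan(current) < makespan(best)) best = current;
    }
    return best;
}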

3.3 Destruct Phase

The first stage is to destruct part of the schedule by dissociating some consecutive operations from their machines without changing the sequence of the selected operations. The second stage is to destruct part of the sequence and the schedule: some consecutive operations are removed from the sequence and dissociated from their machines, and both the sequence and the machine selection are reconstructed later.

The destructed part is selected in one of two ways (a sketch of both options follows the list below):

  1. The sequence is split into two parts, and either one of the two parts is destructed.

  2. A number of consecutive operations are selected randomly along the sequence to be destructed.
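
A minimal sketch of both selection options is given below; the parameter names and the use of a Mersenne Twister random source are assumptions made for illustration. In phase one, only the machine assignment of the selected range is cleared; in phase two, the operations in the range are also removed from the sequence.

#include <algorithm>
#include <random>
#include <utility>

// Returns the half-open range [first, last) of consecutive sequence positions
// to destruct. Option 1: split the sequence at `splitPoint` and destruct one
// of the two parts. Option 2: pick a random window of `destructSize`
// consecutive operations.
std::pair<int, int> selectDestructedPart(int seqLength, int splitPoint,
                                         int destructSize, bool useSplit,
                                         std::mt19937& rng) {
    if (useSplit) {
        std::bernoulli_distribution firstPart(0.5);
        return firstPart(rng) ? std::make_pair(0, splitPoint)
                              : std::make_pair(splitPoint, seqLength);
    }
    destructSize = std::min(destructSize, seqLength);
    std::uniform_int_distribution<int> start(0, seqLength - destructSize);
    int first = start(rng);
    return {first, first + destructSize};
}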

3.4 Reconstruct Phase

In the first phase, reconstruction reassigns the destructed operations, kept in their existing sequence positions, to machines using a set of dispatching rules, as shown in Figure 2 and in the sketch below. This is done by comparing the quality of the sub-solution, according to a randomly selected DR, over all machines able to process the operation. The machine that achieves the best sub-solution is then selected.

Figure 2  Construct phase–machine selection
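
A sketch of this phase-one reconstruction is given below. A DR is modelled as a scoring function returning a lower-is-better value for placing an operation on a machine (for example, the resulting finish time under the EF rule); this interface, and the surrounding data layout, are assumptions made for illustration.

#include <cstddef>
#include <functional>
#include <limits>
#include <random>
#include <vector>

// Score of assigning operation `op` to machine `m` in the current partial
// schedule; lower is better (e.g., the finish time under the EF rule).
using MachineRule = std::function<double(int op, int m)>;

// Phase-one reconstruction: for each destructed operation (kept in its
// existing sequence position), pick one DR at random and assign the operation
// to the eligible machine with the best score under that rule.
void reconstructMachineSelection(const std::vector<int>& destructedOps,
                                 const std::vector<std::vector<int>>& eligibleMachines,
                                 const std::vector<MachineRule>& rules,
                                 std::vector<int>& machineOf,
                                 std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pickRule(0, rules.size() - 1);
    for (int op : destructedOps) {
        const MachineRule& rule = rules[pickRule(rng)];
        double bestScore = std::numeric_limits<double>::max();
        int bestMachine = -1;
        for (int m : eligibleMachines[op]) {
            double score = rule(op, m);
            if (score < bestScore) { bestScore = score; bestMachine = m; }
        }
        machineOf[op] = bestMachine;
    }
}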

In the second phase, reconstruction regenerates the destructed part of the sequence, as in Figure 3 and the sketch that follows. Two operations are selected at a time, a DR is randomly selected and used to assess sequencing each of them, and the operation that results in the better sub-sequence is eventually placed in the sequence.

Figure 3  Construct phase–sequencing
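
The phase-two sequencing step can be sketched as follows. Here a DR is assumed to return a lower-is-better value for dispatching a given operation next, and the management of the ready set (which in the full algorithm respects operation precedence) is simplified for illustration.

#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// Score of dispatching operation `op` next, given the partial sequence built
// so far; lower is better.
using SequencingRule = std::function<double(int op, const std::vector<int>& partialSeq)>;

// Rebuild the destructed part of the sequence: repeatedly pick two candidate
// operations, compare them under a randomly chosen DR, append the better one,
// and keep the other in the candidate (ready) set.
std::vector<int> reconstructSequence(std::vector<int> ready,
                                     const std::vector<SequencingRule>& rules,
                                     std::mt19937& rng) {
    std::vector<int> seq;
    std::uniform_int_distribution<std::size_t> pickRule(0, rules.size() - 1);
    while (ready.size() >= 2) {
        std::uniform_int_distribution<std::size_t> pick(0, ready.size() - 1);
        std::size_t a = pick(rng), b = pick(rng);
        if (a == b) b = (b + 1) % ready.size();
        const SequencingRule& rule = rules[pickRule(rng)];
        std::size_t chosen = (rule(ready[a], seq) <= rule(ready[b], seq)) ? a : b;
        seq.push_back(ready[chosen]);
        ready.erase(ready.begin() + static_cast<std::ptrdiff_t>(chosen));
    }
    if (!ready.empty()) seq.push_back(ready.front());
    return seq;
}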

3.5 Dispatching Rules

As a special case of priority rules, dispatching rules (DRs) are simple scheduling heuristics that gradually construct solutions by scheduling a single operation at a time [32, 33].

Due to their simplicity, sensitive nature, ease of use, and ability to fit a wide range of problem scales, DRs have been widely employed in solving scheduling problems [10, 33, 34]. DRs do not impose high computational or information requirements [33]. Another important characteristic of DRs is their ability to adjust to dynamic changes [35]. These features encouraged us to use DRs instead of NEH as the main heuristic within IG for optimizing the FJSP.

It has been shown that randomly assigning operations to machines does not lead to better convergence, and that no single dispatching rule is by itself capable of driving the optimization process for every scheduling problem [36]. Thus, to make the selection procedure more intelligent, a set of dispatching rules (DRs) is used to assist the machine-selection procedure for each operation. Moreover, a favoring mechanism adopted from Ausaf et al. [37] is used to give more effective dispatching rules a greater chance of being selected.

In general, researchers classify DRs into two main categories: online dispatching rules (dynamic DRs) and offline dispatching rules (static DRs) [38, 39].

3.5.1 Offline Dispatching Rules

With these dispatching rules, each operation receives a fitness value based on an analysis of the problem data. In this research, offline dispatching rules are used to construct and optimize the sequence of operations, and therefore the scheduling side of the FJSP. When two operations are selected, their fitness values are calculated according to an offline DR; the operation with the better value is placed in the sequence, while the other operation is kept in the set of ready operations. The set of ready operations is then updated, another two operations are selected, and the procedure is repeated until the construction of the sequence is complete. The fitness value of an offline DR is computed directly from the problem data before the optimization process begins; hence, it remains constant for each operation-DR pair. In this research, three offline dispatching rules are used, as shown in Figure 4; a sketch follows the figure.

Figure 4  Using offline dispatching rules to build the sequence
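
Because offline fitness values depend only on the problem data, they can be computed once before the search starts. The sketch below does this for an assumed SPT-style fitness (the minimum processing time of an operation over its eligible machines); the data layout and function name are illustrative assumptions. Two ready operations are then compared simply by looking up their precomputed values.

#include <algorithm>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// procTimes[op] lists (machine, processing time) alternatives for operation `op`.
// Offline fitness used here: the shortest processing time over the eligible
// machines (an SPT-style value). It depends only on the problem data, so it is
// computed once and remains constant for every operation throughout the search.
std::vector<int> precomputeOfflineFitness(
    const std::vector<std::vector<std::pair<int, int>>>& procTimes) {
    std::vector<int> fitness(procTimes.size(), std::numeric_limits<int>::max());
    for (std::size_t op = 0; op < procTimes.size(); ++op)
        for (const auto& alternative : procTimes[op])
            fitness[op] = std::min(fitness[op], alternative.second);
    return fitness;
}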

3.5.2 Online Dispatching Rules

With these dispatching rules, each machine-operation pair receives a fitness value based on the current status of the machines in the sub-solution. This value can be calculated only immediately before the operation is assigned to a machine; hence, for the same machine-operation pair, it varies across iterations as the current sub-solution changes during the construction phase. In this research, 10 online dispatching rules are used (Figure 5); a sketch of one such rule follows the figure.

Figure 5  Using online dispatching rules for machine selection
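
As one concrete example of an online rule, the sketch below evaluates an earliest-finish-style fitness for a machine-operation pair; the state passed in (machine-ready times and job-ready times) is an assumed simplification of the actual sub-solution, and the value must be recomputed whenever that state changes.

#include <algorithm>
#include <vector>

// Online fitness of assigning an operation of job `job`, with processing time
// `procTime`, to machine `m`: the finish time when the operation starts as
// early as both the machine and the job allow.
int earliestFinishFitness(int m, int job, int procTime,
                          const std::vector<int>& machineReady,
                          const std::vector<int>& jobReady) {
    int start = std::max(machineReady[m], jobReady[job]);
    return start + procTime;
}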

The dispatching rules used in this research can be categorized as follows:

  a) Dispatching rules used to select a machine for an operation: all available machines are considered for each operation, and one machine is selected according to one of the following rules.

    1. Shortest processing time (SPT): selects the machine that processes the operation in the shortest processing time.

    2. Earliest start (ES): selects the machine on which the operation can start earliest.

    3. Earliest finish (EF): selects the machine that is able to finish the operation earliest.

    4. Least utilized machine (LUM): selects the machine that currently has the minimum workload.

    5. Minimum idle time (MIT): selects the machine that achieves the least idle time.

    6. Earliest machine interval (EMI): selects the machine with the minimum current interval.

    7. Minimum gap per job (MGJ): selects the machine on which the gap between the operation and its preceding one (the time the job waits for the machine) is smallest.

    8. Combined rules (CR): two or more rules are performed together. The function value of the first rule is calculated for both candidates; if both have the same value for this rule, another rule is used to choose the better one (a sketch of this tie-breaking follows at the end of this subsection).

  b) Dispatching rules used to select an operation in sequencing: two operations are selected and compared according to these rules, and one of them is placed in the sequence. During this selection, the operations are already assigned to machines.

    1. Shortest processing time (SPT): selects the operation with the shorter processing time.

    2. Maximum processing time per job (MPJ): selects the operation belonging to the job with the longer processing time.

    3. Least utilized machine (LUM): selects the operation that is processed by the machine with the greater workload.

    4. Latest machine interval (LMI): selects the operation that is processed by the machine with the greater processing-time interval.

    5. Combined rules (CR): two or more rules are combined, as in (a).

The first four of these dispatching rules were previously adopted by Ausaf et al. [37]. Only four of the dispatching rules mentioned above are offline rules: SPT, MPJ, LUM (b), and LMI.
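
The combined-rule (CR) behaviour described above amounts to a lexicographic comparison: a later rule is consulted only when the earlier ones tie. A minimal sketch is given below, assuming each rule returns a lower-is-better score for a candidate (either a machine or an operation, depending on where CR is used).

#include <functional>
#include <vector>

// Returns true if candidate `a` is preferred over candidate `b` under the
// combined rules: the rules are checked in order, and a later rule is used
// only when all earlier rules give equal scores (lower is better).
bool combinedRulePrefers(int a, int b,
                         const std::vector<std::function<double(int)>>& rules) {
    for (const auto& rule : rules) {
        double scoreA = rule(a), scoreB = rule(b);
        if (scoreA != scoreB) return scoreA < scoreB;
    }
    return true;  // complete tie: keep the first candidate
}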

3.6 Techniques Used to Control the Algorithm

  1. Adjusting the split point position: A special function is used to adjust the split point. For exploitation, the split point is shifted to the right, while for exploration it is shifted to the left, as shown in Figure 6.

    Figure 6  Adjusting the split point position

  2. Adjusting the size of the destructed part: Increasing the number of destructed operations provides the exploration the algorithm needs, while decreasing it supports exploitation. The minimum and maximum sizes of the destructed part are determined initially for each instance. During a run, the algorithm starts with the minimum size and increases it gradually if the solution is not improving; when the maximum limit is reached, the size is decreased again for local search, as in the previous technique (a sketch of this size control follows below).
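
A minimal sketch of the second control technique is shown below; resetting to the minimum size after an improvement is an illustrative assumption, since the text above only specifies the grow-then-shrink behaviour.

// Adjust the number of operations to destruct in the next iteration: widen
// the destructed part while the search stagnates (exploration), and fall back
// to the minimum size once the maximum is reached or the solution improves
// (exploitation / local search).
int nextDestructSize(int current, int minSize, int maxSize, bool improved) {
    if (improved) return minSize;            // assumption: intensify again after an improvement
    if (current >= maxSize) return minSize;  // reached the limit: shrink back for local search
    return current + 1;                      // not improving: gradually widen the destruction
}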

4 Experiments and Discussion

The MIG algorithm was coded in C++ and run on a computer with a 3.2 GHz processor and 4.0 GB of RAM. Three experiments comprising 35 benchmark problems in total are used to test the performance of the proposed algorithm. The objective considered in this paper is to minimize the makespan. The best solutions are shown in bold red font in each experiment.

4.1 Experiment 1

The data in this experiment include 5 problems adopted from [40]. The results are compared in Table 1 with 6 other algorithms: HA, GATS, TABC, HHS, HDE-N2, and hGA, proposed by Li and Gao [1], Nouri et al. [41], Gao et al. [42], Yuan et al. [43], Yuan and Xu [44], and Gao et al. [45], respectively. The proposed MIG obtained the global optimum for all instances.

Table 1 Results of Kacem data (experiment 1)

Table 2 compares the CPU time with that of the same algorithms; MIG consumed the lowest CPU time among them.

Table 2 CPU time comparison for instances in experiment 1

4.2 Experiment 2

The data include 20 instances adopted from Fattahi et al. [46]. The first ten instances are considered small-scale FJSP, while the remaining ten are categorized as medium- and large-scale FJSP. The results are compared in Table 3 with 6 algorithms: HA, EPSO, EM2, MILP, and HHS, proposed by Li and Gao [1], Teekeng et al. [47], Demir and İşleyen [6], Birgin et al. [48], and Yuan et al. [43], respectively. The AIA results are also taken from Yuan et al. [43].

Table 3 Results of experiment 2

The proposed algorithm (MIG) performed well on both small- and medium-scale instances. It obtained the optimum solutions for all small-scale instances. For medium-scale problems, MIG outperforms HA, EPSO, and HHS in two instances, EM2 and MILP in 3 instances, and AIA in 5 instances; in short, MIG outperforms all algorithms in the literature on two instances (MFJS01 and MFJS03). The solutions obtained for large-scale instances were less competitive: for these problems, the MIG results are dominated by HA, EPSO, and HHS. The Gantt charts for instances MFJS01 and MFJS03 are shown in Figure 7 and Figure 8, respectively.

Figure 7  Gantt chart of instance MFJS01 (experiment 2)

Figure 8  Gantt chart of instance MFJS03 (experiment 2)

Table 4 shows that MIG consumed the least CPU time for all instances among the above mentioned algorithms.

Table 4 CPU time comparison for instances in experiment 2

4.3 Experiment 3

The data in this experiment are adopted from Brandimarte [49]; they contain 10 instances, with the number of jobs ranging from 10 to 20 and the number of machines ranging from 6 to 15. We compared the results with 11 algorithms from the literature: HA, HTGA, GATS, TABC, Heuristic, AMMA, HHS, hGA, TS, and IACO, proposed by Li and Gao [1], Chang et al. [50], Nouri et al. [41], Gao et al. [42], Ziaee [51], Zuo et al. [52], and Yuan et al. [43]. The results of AIA and TS are taken from Yuan et al. [43].

Table 5 shows that the proposed MIG outperforms Heuristic in 9 instances, GATS in 7 instances, HTGA and AIA in 3 instances, TS in 2 instances, and TABC in one instance. On the other hand, HA, AMMA, and HHS dominate MIG in 3 instances, while TS and TABC outperform MIG in 2 instances, and HTGA and AIA in one instance. Table 6 compares the CPU time for each instance with that consumed by the above-mentioned algorithms. MIG consumed less CPU time than all other algorithms; only the heuristic method is competitive with MIG in this respect, consuming less CPU time than MIG in only three instances (MK5, MK6, and MK10). On the other hand, MIG obtained better solutions than the heuristic method for these instances.

Table 5 Results of experiment 3
Table 6 CPU time comparison for instances in experiment 3

4.4 Analysis Summary

In this paper, a modified iterated greedy algorithm is proposed for solving the flexible job-shop problem. In the experiments, 35 instances in total from 3 different benchmarks have been used to test MIG. We divide these instances into three categories: small-scale instances (10 instances), medium-scale instances (13 instances), and large-scale instances (12 instances).

The results for small-scale instances show that MIG is able to obtain the global optimum for all instances, as many previous algorithms have also done. Figure 9 compares the results of MIG with the minimum and maximum makespan values obtained by other algorithms.

Figure 9  Results comparison for small-scale problems

For medium-scale instances, MIG obtained the best results for all instances and outperformed all algorithms in the literature on 2 instances (MFJS01 and MFJS03) of experiment 2. Figure 10 illustrates the performance of MIG in comparison with the minimum and maximum makespan obtained by other algorithms.

Figure 10  Results comparison for medium-scale problems

For large-scale instances, MIG obtains near-optimum solutions for many instances but becomes trapped in local optima for some others. Figure 11 shows the performance of MIG on the 12 large-scale instances. It can be observed that the MIG curve shows worse results for the larger problems.

Figure 11  Results comparison for large-scale problems

The experiments show that MIG consumes less CPU time than all other algorithms included in this study. Only for 3 large-scale instances did one algorithm (Heuristic) consume less CPU time than MIG, and MIG obtained much better solutions for those instances. This confirms that MIG is an effective method with the lowest CPU time cost.

The outstanding performance of MIG on small- and medium-scale problems encourages us to further develop its global search technique in the future.

5 Conclusions and Future Work

In this paper, a modified iterated greedy algorithm is proposed for solving the flexible job-shop scheduling problem. The experimental results show that the algorithm can find the global optimum solution for small- and medium-scale instances. MIG outperforms all other algorithms on 4 medium-scale instances. For large-scale instances, the proposed algorithm obtained optimum solutions for some instances and only near-optimum solutions for others. The main contribution of the proposed algorithm is to provide a simple and effective method that can easily be employed in real-life problems and, furthermore, has an insignificant CPU time cost in comparison with other metaheuristics widely used in this field.

In the future, we will continue developing this algorithm to perform better on large-scale problems; the global search of the proposed algorithm will be improved for this purpose. The multi-objective flexible job-shop scheduling problem is also a good challenge for the developed version. Another option is to hybridize MIG with another algorithm.