Abstract
The job shop scheduling problem (JSSP) is critical for building a smart factory with respect to resource management, effective production, and intelligent supply. However, it remains very challenging due to the complex production environment. Besides, most current research focuses only on classical JSSP, while flexible JSSP (FJSSP) is more common in practice. This article proposes an effective method, GAILS, to deal with JSSP and FJSSP based on a genetic algorithm (GA) and iterative local search (ILS). GA is used to find an approximate global solution for the JSSP instance. Each instance is encoded into machine and subtask sequences, and the corresponding machine and subtask chromosomes are obtained through several rounds of gene selection, crossover, and mutation. Moreover, multiple objectives, including makespan, average utilization ratio, and maximum loading, are used to choose the best chromosome to guide ILS in exploring the best local path. Therefore, the proposed method has an excellent search capacity and can balance globality and diversity. To verify the proposed method's effectiveness, the authors compared it with several state-of-the-art methods on sixty-six public JSSP and FJSSP instances. The comparative analysis confirmed the proposed method's effectiveness for classical JSSP and FJSSP in terms of makespan, average utilization ratio, and maximum loading. Notably, it obtains near-optimal solutions for several instances and outperforms the other methods on most instances.
Introduction
With the continuous development of the global economy, modern industry is becoming more and more complex^{1,2}, posing new challenges for implementing smart factories^{3}. Job shop scheduling plays a core role in building a smart factory and has therefore attracted increasing attention^{4,5}.
The current methods for solving JSSP consist of exact and approximation methods. Exact methods can obtain optimal solutions but are very time-consuming and resource-consuming^{6}, especially on large-scale JSSP instances. Therefore, most current work focuses on approximation methods, including the simplest dispatching rules (DRs) and artificial intelligence (AI)^{7}. Among them, DRs such as first in first out (FIFO), shortest processing time (SPT), and earliest completion time first (ECTF) are the most straightforward but are far from optimal^{8}. In contrast, AI-based methods extract hidden information from JSSP instances to construct corresponding rules more accurately within a tolerable time, and they have become the mainstream for solving JSSP. They mainly include artificial immune algorithms^{9,10}, genetic algorithms (GA)^{11}, swarm intelligence (SI)^{12}, local search (LS) algorithms, and network-based approaches^{13,14,15}, among which GA and neural-network-based (NN) methods have attracted the most attention.
GA is a global search algorithm with parallelism, robustness, wide applicability, and interpretability^{16}, and it has been widely used for solving JSSP-like optimization problems. For instance, Omar et al. applied an improved GA to solve JSSP^{17}, in which they initialized the population with some well-known DRs rather than random solutions. Teekeng et al. proposed an improved GA for solving FJSSP by changing three operators: selection, crossover, and mutation^{18}. Mohamed^{19} proposed a new GA based on the island model for solving JSSP, using a migration selection mechanism to evaluate and select genes; its effectiveness was demonstrated on 52 public JSSP instances. Li et al.^{7} applied GA to search for the global solution first, and then used Tabu search to find the best local solution for both JSSP and FJSSP. Sun et al.^{20} applied GA with variable neighborhood search for solving FJSSP. Besides, following a similar idea to GA, Lu et al.^{21} proposed an improved iterated greedy (IIG) algorithm to solve the distributed hybrid flow shop scheduling problem: three operators search for the global path, and an LS algorithm consisting of four neighborhood structures is designed to find the best local path. Moreover, GA-based algorithms have been developed to solve other JSSP-like problems, including timetabling^{22,23}, the traveling salesman problem (TSP)^{24}, and network parameter optimization^{25,26}.
The NN-based methods have two branches: traditional learning and deep reinforcement learning (DRL). Traditional learning methods treat JSSP as a classification problem over subtasks. They use other well-known algorithms, such as DRs and GA, to obtain labels (the priority for each subtask) and simultaneously compute the corresponding statuses used to train the model. The trained model predicts the priority of each subtask, which is finally converted into a JSSP schedule. For example, Weckman et al.^{13} were the first to apply an NN to solving JSSP. They utilized GA to solve 6 \(\times\) 6 JSSP instances first, and then an NN with three hidden layers was adopted to predict each subtask's priority from twelve input features. The comparative results on ft06^{27} indicated that the NN-based method outperforms attribute-oriented induction (AOI) and SPT but still falls far short of GA.
Recent deep learning (DL) technology has achieved great success in many fields, such as image classification^{28}, fault diagnosis^{29}, and medical healthcare^{30}, due to its powerful feature extraction capacity. DL has also attracted attention in the field of JSSP. For instance, Zang et al.^{31} applied a convolutional neural network (CNN) to extract hidden features from ten input features and their transformations corresponding to the subtasks generated by GA. Shao et al.^{14} employed GA to generate training samples and long short-term memory (LSTM) with K-means to mine key hidden features for solving JSSP. Besides, Kim et al.^{8} applied a multi-level CNN (MLCNN) to find the approximate global path and applied ILS to explore the best local solution for solving JSSP.
DRL^{32} describes a JSSP as a Markov decision process (MDP), in which the DL part extracts rich hidden features that reflect the current dynamic production state \(s\) to predict the corresponding reward \(r\), while the RL part records action-reward pairs. Specifically, the current state is transformed into a new state \({s}^{{{\prime}}}\) by taking action \(a\) and returning a reward \(r\). DRL arranges all subtasks in a JSSP instance by maximizing the total reward. For example, Ye et al.^{33} utilized DRL for resource scheduling, using a one-dimensional (1D) CNN to extract hidden dynamic features. Lin et al.^{34} proposed a novel multi-class DRL-based method, the deep Q network (DQN), to select a rule for each machine to arrange the corresponding job, in which six DRs are utilized. Considering the shortcoming of a single DQN, which predicts and evaluates actions using the same model, the double DQN (DDQN) was applied to solve JSSP with eighteen DRs^{4}. Besides, Liu et al.^{35} combined an actor-critic architecture with reinforcement learning (ACRL) to solve JSSP, and comparative makespan results on some public JSSP instances indicated its effectiveness; a graph network has also been combined with Q-learning to solve JSSP in traffic management^{36}.
Although the above-mentioned methods have achieved good performance, they still have some limitations, as described in Table 1. The exact method is the most accurate but cannot deal with large-scale JSSP instances and is time-consuming. DRs are the simplest but are not adequate^{37}. GA-based methods can obtain near-optimal solutions for JSSP instances due to their good global exploration ability, but they lack some local search ability; conversely, LS lacks global exploration ability. Traditional learning methods can solve JSSP quickly but depend heavily on other algorithms. DRL-based methods are effective and intelligent, but the design of dynamic input nodes and reward functions still requires further study.
Table 1 shows that no single algorithm handles all JSSP-like optimization problems well. Besides, most current methods focus only on JSSP or FJSSP, except for^{7}, and evaluate the proposed method only on makespan, which cannot satisfy practical production needs. Motivated by this, this article presents an effective scheduler, GAILS, to solve multi-objective JSSP and FJSSP, in which an improved GA based on reference^{7} is designed to find the approximate global solution for JSSP and FJSSP instances. The initial chromosomes of machine and subtask sequences are randomly set first. Then, three operators, including selection, crossover, and mutation, are designed to explore the global genes for each sequence. Moreover, a multi-objective fitness function combining makespan, average utilization ratio, and maximum loading is designed to choose the best global chromosome to guide ILS in exploring the best local path. ILS is chosen over other local search algorithms because it can ensure the solution's diversity by adjusting different perturbation strategies^{38}. Powered by GA, ILS, and the multi-objective fitness function, the proposed method has an excellent search capacity and can balance globality and diversity for solving JSSP and FJSSP.
The main contributions of this article are summarized as follows:

1.
An improved GA is proposed to find the global solution of JSSP and FJSSP instances using three new operators: selection, crossover, and mutation. The comparative results showed that the improved GA is more effective than traditional GA.

2.
ILS is applied to explore the optimal local path from the global path obtained by GA, which ensures the solution is near optimal.

3.
A multi-objective fitness function that combines makespan, average utilization rate (AUR), and maximum loading (ML) is designed to select the best global and local paths; it can easily be extended to other optimization algorithms. Besides, it can easily accommodate personalized goals for different production statuses by adjusting their order in the fitness function.

4.
Based on the careful design of GA, ILS, and the multi-objective fitness function, the proposed method has an excellent search capacity and can balance globality and diversity for solving JSSP and FJSSP. The comparative analysis based on sixty-six public instances confirmed its effectiveness in terms of makespan, average utilization rate, and maximum loading. In addition, the effectiveness of GA and ILS within the proposed method has been analyzed.
The rest of this article is arranged as follows. Section “Problem definition” defines the JSSP and FJSSP. Section “The proposed methods” presents the proposed method in detail. Section “Experimental verification” performs the experimental verification based on public JSSP and FJSSP instances. In section “Discussion”, we discuss the proposed method in depth. Section “Conclusions” concludes this article.
Problem definition
JSSP aims at arranging \(n\) jobs \({\mathbb{J}}=\{{J}_{1},{J}_{2},\dots ,{J}_{i},\dots ,{J}_{n}\}\) to be processed by \(m\) machines \({\mathbb{M}}=\{{M}_{1},{M}_{2},\dots ,{M}_{j},\dots ,{M}_{m}\}\) with satisfactory metrics such as makespan, AUR, and ML, where each job \({J}_{i}\) consists of \({n}_{i}\) operations \({\mathbb{O}}=\{{O}_{i,1},{O}_{i,2},{O}_{i,3},\dots ,{O}_{i,k},\dots ,{O}_{i,{n}_{i}}\}\). Each operation \({O}_{i,k}\;(i=1,2,\dots ,n;\;k=1,2,\dots ,{n}_{i})\) needs to be processed by a capable machine \({M}_{j}\) within a particular time \({T}_{i,k}\in {\mathbb{T}}=\{{T}_{i,1},{T}_{i,2},{T}_{i,3},\dots ,{T}_{i,k},\dots ,{T}_{i,{n}_{i}}\}\). Notice that each operation in a JSSP instance can only be processed once, by one specific machine.
Unlike classical JSSP, FJSSP needs to determine both a job operation and the corresponding machine to execute the selected operation so as to achieve good criteria. That is, each operation \({O}_{i,k}\in {\mathbb{O}}\) needs to be processed by a machine \({M}_{i,j}\) chosen from a given set \({\mathbb{M}}_{s}\subseteq {\mathbb{M}}\). From the definitions of JSSP and FJSSP, FJSSP is a more complex variant of JSSP. Generally, both JSSP and FJSSP pursue the minimized makespan, defined as (1), where \({C}_{i,k}\) is the completion time of the \(k\)th operation of job \(i\), and \({C}_{max}\) is the maximum completion time over all operations. Besides, this manuscript aims at developing an accurate algorithm that attains multiple goals, including AUR and ML, defined as (2) and (3), respectively, where \(En{d}_{i}={{\text{max}}\left({C}_{i,j}\right)}_{{m}_{i}}\) is the ending time of machine \({m}_{i}\), and \(idl{e}_{i}\) is the idle time of \({m}_{i}\).
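Equations (1)-(3) are referenced but not reproduced in this excerpt. Based on the surrounding definitions, a plausible reconstruction is the following sketch (treating the per-machine busy time \(End_i - idle_i\) as an assumption about how loading is measured):

```latex
\begin{align}
C_{max} &= \max_{1 \le i \le n} C_{i,n_i} \tag{1}\\[4pt]
AUR &= \frac{1}{m}\sum_{i=1}^{m}\frac{End_i - idle_i}{End_i} \tag{2}\\[4pt]
ML &= \max_{1 \le i \le m}\left(End_i - idle_i\right) \tag{3}
\end{align}
```

Here (1) is the latest completion time over all jobs, (2) averages each machine's busy fraction, and (3) takes the heaviest per-machine workload; makespan and ML are minimized while AUR is maximized.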
To simplify the problem, we define four constraints for both JSSP and FJSSP, as follows:
Constraint 1: Starting time constraint. Eq. 4(a) indicates that the completion time \({C}_{i,1}\) of the first operation equals its operation time \({T}_{i,1}\), i.e., all jobs start from time zero.
Constraint 2: Operation order constraint. Eq. 4(b) indicates that a job \({J}_{i}\) executes its operations \(\{{O}_{i,1},{O}_{i,2},{O}_{i,3},\dots ,{O}_{i,{n}_{i}}\}\) in a specific order on the corresponding machines \(\{{M}_{i,1},{M}_{i,2},{M}_{i,3},\dots ,{M}_{i,{n}_{i}}\}\) within given times \(\{{T}_{i,1},{T}_{i,2},{T}_{i,3},\dots ,{T}_{i,{n}_{i}}\}\in {\mathbb{T}}\) for JSSP, since the difference between \({C}_{i,k+1}\) and \({C}_{i,k}\) must be greater than or equal to the operation time \({T}_{i,k+1}\). For FJSSP, the only difference is that each sub-operation \({O}_{i,k}\) requires selecting the best machine \({M}_{i,j}\) from a given machine set \({\mathbb{M}}_{{\text{s}}}\) to execute within the corresponding time duration. That is, the next sub-operation starts only after the previous one completes.
Constraint 3: Resource constraint. Eq. 4(c) indicates that machine \({M}_{j}\) can only process one sub-operation \({O}_{i,j}\) at a time; for different sub-operations of the \(i\)th job, the completion time difference \({C}_{i,k}-{C}_{i,j}\) must be greater than or equal to the \(k\)th sub-operation time \({T}_{i,k}\).
Constraint 4: Eq. 4(d) shows that setup and transmission times are not considered during scheduling.
The proposed methods
The proposed method consists of two parts: GA finds the global path (step 1 to step 6), and ILS explores the optimal local path (step 7 to step 9), whose workflow is shown in Fig. 1. The overall procedure of the proposed method is described in the subsequent sections.
Step 1: Set the parameters of the proposed method, including the maximum generation \(Gen\), population size \(Po{p}_{size}\), crossover ratio \(Cr\), and mutation ratio \(Mr\). A larger \(Gen\) and \(Po{p}_{size}\) may yield more satisfactory results but require massive resources to find the global path. Considering both performance and time cost, we set \(Gen\) and \(Po{p}_{size}\) to 400, similar to the work of Gao^{7}. Moreover, a relatively large crossover ratio and small mutation ratio can ensure gene diversity while preserving excellent genes. Thus, we set the crossover ratio \(Cr\) and mutation ratio \(Mr\) to 0.8 and 0.1, respectively.
Step 2: Initialize the population \(Pop\). Each individual has an operation sequence (OS) and a machine sequence (MS), both of length \(\sum_{i=1}^{n}{n}_{i}\), generated by Algorithm 1. Algorithm 1 is run \(Po{p}_{size}\) times to initialize the population. Each gene in OS represents, by order of occurrence, the processing order of a job's operations on a specific machine, and the corresponding gene in MS represents the machine assigned to that job and operation.
A \(3\times 3\) FJSSP instance is used for illustration, as shown in Fig. 2. The OS string {1,2,1,2,3} corresponds to the operation order \({\{{\text{O}}}_{11},{{\text{O}}}_{21},{{\text{O}}}_{12},{{\text{O}}}_{22},{{\text{O}}}_{31}\}\), and the MS string {2,2,0,0,1} indicates that \({\{{\text{O}}}_{11},{{\text{O}}}_{12},{{\text{O}}}_{21},{{\text{O}}}_{22},{{\text{O}}}_{31}\}\) will be processed by machines {3,3,1,1,2}, respectively. After the final chromosomes are obtained, the decoding method generates the FJSSP schedule in step 9. JSSP only needs to update OS, while all elements of MS are set to 0 since each sub-operation requires one fixed machine.
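The decoding just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' implementation: the function name is hypothetical, and it assumes MS is ordered job-by-job with 0-indexed machine genes as in Fig. 2.

```python
def decode(os_seq, ms_seq, ops_per_job):
    """Pair each operation (job, op index) with its machine gene.

    os_seq      : operation sequence, e.g. [1, 2, 1, 2, 3] (job ids)
    ms_seq      : machine sequence ordered job-by-job, e.g. [2, 2, 0, 0, 1]
    ops_per_job : {job_id: number of operations}
    """
    # Offset of each job's first gene inside the job-ordered MS string.
    offsets, start = {}, 0
    for job in sorted(ops_per_job):
        offsets[job] = start
        start += ops_per_job[job]

    next_op = {job: 0 for job in ops_per_job}  # next operation index per job
    schedule = []
    for job in os_seq:
        k = next_op[job]
        machine = ms_seq[offsets[job] + k]       # machine gene for O_{job,k+1}
        schedule.append((job, k + 1, machine + 1))  # 1-indexed machine id
        next_op[job] = k + 1
    return schedule

# The Fig. 2 example: OS = {1,2,1,2,3}, MS = {2,2,0,0,1}
print(decode([1, 2, 1, 2, 3], [2, 2, 0, 0, 1], {1: 2, 2: 2, 3: 1}))
# [(1, 1, 3), (2, 1, 1), (1, 2, 3), (2, 2, 1), (3, 1, 2)]
```

In OS order this assigns O11 and O12 to machine 3, O21 and O22 to machine 1, and O31 to machine 2, matching the example above.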
Step 3: Evaluate the population. If \(Gen\) reaches 400, output the global solution and feed it into ILS in step 7; otherwise, go to step 4. The evaluation metric is the combination of {\(Makespan\), \(AUR\), \(ML\)}: the makespan is defined as (1), and AUR and ML are expressed as (2) and (3). Notice that a traditional JSSP instance's operation-machine pairs are given and cannot change during scheduling; therefore, ML is the same for all methods on JSSP but differs on FJSSP. As a result, the proposed method first evaluates the population by \({C}_{max}\); if several solutions share the same \({C}_{max}\), it then checks \(AUR\) for JSSP, and \(AUR\) and \(ML\) for FJSSP, respectively.
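The lexicographic evaluation described in this step can be illustrated as follows (a sketch with hypothetical field names; the actual fitness values come from Eqs. (1)-(3)):

```python
def fitness_key(sol, flexible=True):
    """Lexicographic key: makespan first, then AUR (higher is better),
    then max loading (lower is better, FJSSP only)."""
    makespan, aur, ml = sol["makespan"], sol["aur"], sol["ml"]
    # Negate AUR so that plain tuple ordering prefers higher utilization.
    return (makespan, -aur, ml) if flexible else (makespan, -aur)

pop = [
    {"makespan": 55, "aur": 0.91, "ml": 40},
    {"makespan": 55, "aur": 0.95, "ml": 42},
    {"makespan": 60, "aur": 0.99, "ml": 38},
]
best = min(pop, key=fitness_key)
# Ties on makespan (55) are broken by the higher AUR of 0.95.
```

Because Python compares tuples element by element, this reproduces the "check \(C_{max}\) first, then AUR, then ML" order with no explicit branching.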
Step 4: Selection operator. The proposed method combines elitist and tournament selection. Specifically, elitist selection copies the best 5% of individuals from the original population \(Pop\) into the new population \({\text{newPop}}\); the remaining 95% are generated by tournament selection with \({\text{k}}=2\). That is, two individuals are evaluated using the fitness function \(Best=\{{C}_{m}\), \(AUR, ML\}\), and the better one is selected and added to the new population. The whole procedure is described in Algorithm 2.
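Since Algorithm 2 is not reproduced here, the combined elitist/tournament selection can be sketched as follows (illustrative names; `key` is any fitness key where smaller is better, such as the lexicographic one described in step 3):

```python
import random

def select(pop, key, elite_frac=0.05, k=2, rng=random):
    """Elitist + k-tournament selection as described in Step 4."""
    ranked = sorted(pop, key=key)
    n_elite = max(1, int(len(pop) * elite_frac))
    new_pop = ranked[:n_elite]                    # top 5% copied directly
    while len(new_pop) < len(pop):                # remaining 95% by tournament
        contenders = rng.sample(pop, k)           # draw k = 2 individuals
        new_pop.append(min(contenders, key=key))  # keep the fitter one
    return new_pop
```

Note that tournament winners are drawn with replacement across rounds, so strong individuals can appear several times in the new population, which is the usual behavior of tournament selection.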
Step 5: Crossover operator. The proposed method adopts precedence operation crossover (POX) and job-based crossover (JBX) for the OS string, each selected with 50% probability, and adopts two-point crossover for the MS string, identical to^{7}.
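For reference, a minimal POX sketch for the OS string might look like this (an illustrative reconstruction; the random 50% POX/JBX choice and the MS two-point crossover are handled elsewhere):

```python
import random

def pox(p1, p2, rng=random):
    """Precedence operation crossover on two OS strings (job-id lists)."""
    jobs = set(p1)
    keep = set(rng.sample(sorted(jobs), len(jobs) // 2))  # job subset J1

    def make_child(a, b):
        # Genes of jobs in J1 stay at their positions in `a`; the remaining
        # positions are filled with b's genes (jobs outside J1) in b's order,
        # so each parent's relative operation precedence is preserved.
        filler = iter(g for g in b if g not in keep)
        return [g if g in keep else next(filler) for g in a]

    return make_child(p1, p2), make_child(p2, p1)
```

Because every gene comes from one parent or the other without repetition, each child is always a valid permutation of the job multiset, so no repair step is needed.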
Step 6: Mutation operator. The proposed method applies neighbor mutation to 15% of the OS strings. The process of neighbor mutation is described in the following steps and Fig. 3(a).
1) Randomly select three different elements in parent \(P\) and generate all neighbors of the OS string.
2) Randomly select one neighbor as the current OS string, denoted as \(C\).
Also, a job-based half-mutation operator is adopted for the MS string, as shown in Fig. 3(b): half of the genes in each job are mutated. After the mutation operation, set \(Gen=Gen+1\) and go to step 3.
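The neighbor mutation for the OS string (steps 1-2 above) can be sketched as follows (illustrative; the MS half-mutation is omitted because it depends on each operation's candidate machine set):

```python
import itertools
import random

def neighbor_mutate(os_seq, rng=random):
    """Permute the values at three randomly chosen OS positions and
    return one of the resulting neighbors, as in Fig. 3(a)."""
    pos = rng.sample(range(len(os_seq)), 3)      # three distinct positions
    vals = [os_seq[i] for i in pos]
    neighbors = []
    for perm in itertools.permutations(vals):
        child = list(os_seq)
        for i, v in zip(pos, perm):
            child[i] = v
        if child != list(os_seq):                # skip no-op rearrangements
            neighbors.append(child)
    return rng.choice(neighbors)                 # one neighbor at random
```

Since only existing genes are rearranged, the mutated string remains a valid permutation of the original job multiset.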
Step 7: Output the global solution and feed it into ILS.
Step 8: The ILS algorithm explores the optimal local path for the JSSP and FJSSP instances. We set the maximum iteration to 10,000 and the no-improvement threshold for makespan to 0.02. The ILS algorithm used in this article is the same as in our previous paper^{8}.
Step 9: If the stopping criteria are satisfied in the ILS algorithm (the iteration count reaches 10,000 or the improvement falls below 0.02), output the solution; the final solution is decoded into a JSSP or FJSSP schedule following reference^{7}. Otherwise, go to step 8.
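Steps 8-9 can be summarized as a generic ILS loop. This is a skeleton under assumptions: the concrete perturbation and local search moves follow the authors' earlier work^{8}, and the 0.02 threshold is interpreted here as a relative makespan improvement.

```python
import random

def ils(seed, evaluate, perturb, local_search,
        max_iter=10_000, tol=0.02, rng=random):
    """Iterated local search: repeatedly perturb the incumbent, descend
    with local search, and accept strictly improving candidates."""
    best = local_search(seed)
    best_cost = evaluate(best)
    for _ in range(max_iter):
        cand = local_search(perturb(best, rng))   # kick, then descend
        cost = evaluate(cand)
        if cost < best_cost:
            # Stop once the relative improvement drops below tol.
            negligible = best_cost - cost < tol * best_cost
            best, best_cost = cand, cost
            if negligible:
                break
    return best, best_cost
```

A toy usage on integers, with `evaluate` as the cost itself, a small random upward kick, and a greedy "subtract up to 5, floor at 0" descent, converges to cost 0.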
Experimental verification
The authors implemented the proposed method on Ubuntu 16.04 with an Intel(R) i7-7700 CPU at 3.60 GHz, using Python 3.5. Moreover, we compared the proposed method with some leading methods on public instances to demonstrate its effectiveness for both classical JSSP and FJSSP.
Verification for JSSP
Makespan
JSSP is simpler than FJSSP as it does not require machine selection. We compared the proposed method with GA^{39}, DDQN^{4}, ACRL^{35}, HDNN^{31}, ILS, and GA1 (the GA part of the proposed method) on several famous JSSP instances, including ft10 (10 \(\times\) 10), ft20 (20 \(\times 5\))^{40}, la24 (15 \(\times 10\)), la36 (15 \(\times 15\))^{41}, abz7 (20 \(\times\) 15)^{42}, and yn1 (20 \(\times\) 10)^{43}. For a fair comparison, the comparative results are collected directly from the original papers. The term '-' means not reported. The comparative results in terms of makespan are shown in Table 2.
The results indicate that the proposed method outperforms the others on all JSSP instances. Besides, the GA1 used in the proposed method performs better than GA^{39}, and ILS performs better than GA1 on ft10 and ft20. We calculated the scheduling score \(score=\frac{{C}_{optimal}}{{C}_{al}}\), as shown in Fig. 4, for all methods except ACRL^{35}, whose results are incomplete, where \({C}_{al}\) and \({C}_{optimal}\) are the makespans of each method and of the optimal solution, respectively. The results indicate that the proposed method obtained scheduling scores higher than 90% on the six JSSP instances and outperformed the others. The average scheduling scores are also given in Table 2 and confirm the proposed method's effectiveness, with an average scheduling score of 96.94%. To show the superiority of the improved GA1, we compare it with GA^{39}: GA1 improved the average scheduling score by 36.20% = (94.92 - 69.69)/69.69. Also, applying ILS improved the average scheduling score by 2.13% = (96.94 - 94.92)/94.92, obtained by comparing the proposed method with GA1. The methods can be ranked as: the proposed > GA1 > ILS > DDQN > HDNN > GA.
We also compared the proposed method with current leading learning-based methods, DDQN^{4}, ACRL^{35}, and MLCNN^{8}, as well as GA1 and ILS, on twenty more instances, la01 to la20^{41}. The results in terms of makespan are shown in Table 3. The findings show that the proposed method performs much better than ACRL^{35} on la11 to la15: its solutions are optimal, while ACRL^{35} is not optimal on la12, la13, and la15. Besides, the proposed method won all cases on the twenty JSSP instances compared to the other algorithms. In addition, it obtained eighteen optimal solutions; la20 was among the exceptions. This indicates that the proposed method outperforms the others and can effectively solve JSSP in terms of makespan. The solution of la16 using the proposed method is shown in Fig. 5, whose makespan is 945.
To see the difference between the proposed method and the others, we calculated each method's average scheduling score except ACRL^{35}, as shown in Fig. 6. The results indicate that the proposed method is the closest to optimal, with an average scheduling score of 99.97%. Besides, all methods' scheduling scores are near 99%, but the proposed method's is the highest. According to the average scheduling scores, the methods rank as: the proposed > MLCNN > ILS > DDQN > GA1. In addition, applying ILS improved the scheduling score from 98.84 to 99.97%, showing that GA lacks some local search capacity and that ILS can guide GA to find the best local path.
Average utilization rate (AUR)
The proposed method aims at optimizing multiple objectives, including makespan, AUR, and ML, for each instance. Here, we give the AUR for each JSSP instance using GA1, ILS, and the proposed method, since all MLs are the same for JSSP, as shown in Fig. 7. The results show that the proposed method won fourteen times out of twenty in terms of AUR, including la01, la02, la05, la06, la07, la09, and la12-la19. In contrast, GA1 only won once, on la12, and ILS won five times out of twenty: la03, la04, la08, la10, and la20. The results confirm each component's effectiveness again in terms of AUR. In summary, the proposed method can process JSSP effectively with a satisfactory makespan and AUR.
Verification for FJSSP
The above section confirmed the effectiveness of the proposed method for solving JSSP; this section focuses on verifying its effectiveness for FJSSP. We compared the proposed method with existing leading methods, TSN1 and TSN2^{44}, as well as the improved GA1 and ILS, on forty FJSSP instances from rdata^{44}.
Makespan
The comparative results in terms of makespan are shown in Table 4. The results indicate that the proposed method won thirty-seven out of forty FJSSP instances in terms of makespan, while GA1 and ILS alone won none. This illustrates that using only GA1 or ILS cannot solve FJSSP well, since GA lacks local exploration capacity while ILS lacks global search capacity; combining GA and ILS solves FJSSP effectively. TSN1 and TSN2 won six and seven times out of forty, respectively. Moreover, ILS generally performs better than GA1. Similar to the makespan results, the scheduling scores show that the proposed method is near 99% and won 37 out of 40 FJSSP instances. The methods can be ranked as: the proposed > TSN2 > TSN1 > ILS > GA1. An example solution of la19 using the proposed method is given in Fig. 8, whose makespan is 704.
To see each method's differences, we calculated the average scheduling scores, as shown in Fig. 9. The results confirm that the proposed method is a near-optimal method for solving FJSSP, with an average scheduling score of 99.0%. Another finding is that using only GA1 or ILS cannot solve FJSSP well, as their average scheduling scores are 93.45% and 95.48%, far below the proposed method. According to the average scheduling scores, the methods rank as: the proposed > TSN1 > TSN2 > ILS > GA1.
Average utilization ratio (AUR)
The AUR illustrates each machine's utilization ratio during the whole production process; generally, the higher the AUR, the better the algorithm performs. We calculated the AUR of each method on the forty FJSSP instances, as shown in Fig. 10. The results show that the proposed method performed better and won most cases: it won thirty-three times out of forty, including la02-la13, la15, la16, la18, la19, la22-la24, la26-la30, and la32-la40. In contrast, GA1 and ILS only won eight and six times out of forty, respectively. The proposed method has an absolute advantage on the large-scale FJSSP instances, la23-la40. Moreover, all methods perform well on the small-scale instances la01 to la16, whose AURs reach up to 99%. These findings indicate that the proposed method can arrange each sub-operation well on the selected machine for FJSSP, and that using only GA or ILS cannot obtain a satisfactory AUR.
Max loading (ML)
The max loading (ML) represents each machine's loading. An excessive load may damage the machine and delay the whole manufacturing process; therefore, testing the proposed method's ML on each FJSSP instance is necessary. The results show that the proposed method has a low ML compared to GA1 and ILS, as shown in Fig. 11. The proposed method won thirty-three out of forty instances, including la01-la18, la22, la23, and la25-la37. However, GA1 only won six times (la19, la21, la24, la38, la39, and la40), and ILS only won la20. This evidence shows that the proposed method can deal with FJSSP with low max loading, and that using only GA1 or ILS cannot find the best solution due to their lack of local exploration and global search capacities, respectively.
In summary, the proposed method could process FJSSP instances within satisfactory makespan, AUR, and ML due to its excellent global search and local exploring capacities.
Discussion
Job shop scheduling is critical to building a smart factory with respect to resource supply and intelligent production. Most current methods focus only on one type of job shop scheduling, JSSP or FJSSP, and on a single objective, which cannot satisfy practical production needs. This article proposes an effective scheduler, GAILS, to solve JSSP and FJSSP with multiple objectives, including makespan, AUR, and ML.
Considering each algorithm's advantages and disadvantages, as summarized in Table 1, the proposed GAILS applies GA to find the global path and guide ILS in exploring the optimal local path. In GA, three improved operators (selection, crossover, and mutation) are utilized to find the best chromosome for each instance. Therefore, the proposed method has an excellent search capacity and can balance globality and diversity. The whole process includes nine steps, as shown in Fig. 1: steps 1-6 find the global path, while steps 7-9 explore the best local path.
To verify the proposed method's effectiveness, we tested and compared it with several leading methods on sixty-six instances. We compared the proposed method with GA^{39}, DDQN^{4}, ACRL^{35}, HDNN^{31}, and MLCNN^{8} on twenty-six JSSP instances to validate its effectiveness for classical JSSP. The makespan results show that the proposed method outperforms the others, as seen in Tables 2 and 3. Besides, we calculated the scheduling scores to compare the methods against the optimal solutions; the results indicate that the proposed method is near optimal, with an average scheduling score higher than 95%, as shown in Figs. 4 and 6.
To verify each component's effectiveness for solving JSSP, we compared the proposed method with GA1 and ILS. The results show that using only GA1 or ILS cannot solve JSSP effectively, as shown in Tables 2 and 3 and Figs. 4 and 6. One solution using the proposed method for la16 is given in Fig. 5. The AUR testing results indicate that the proposed method deals well with JSSP and achieves a good AUR, as shown in Fig. 7.
Moreover, we tested and compared the proposed GAILS with TSN1 and TSN2^{44} on forty FJSSP instances to validate its effectiveness. The makespan results indicate that the proposed method performs best, winning thirty-seven out of forty, as shown in Table 4. Similar to JSSP, we calculated its scheduling score: the proposed method achieves a 99.0% average scheduling score for FJSSP, as shown in Fig. 9, indicating that it is a near-optimal method for FJSSP. Besides, the comparative results between the proposed GAILS and GA1 and ILS confirm each component's effectiveness. Another finding is that using GA1 or ILS alone cannot solve FJSSP well, as their average scheduling scores are 93.45% and 95.48%, far below the proposed method. One solution for the la19 FJSSP instance is given in Fig. 8, whose makespan is 704.
The AUR results showed that the proposed method could deal with FJSSP instances well with satisfactory AUR, as shown in Fig. 10. The proposed method has an absolute advantage on largescale FJSSP instances.
The ML testing results show that the proposed method arranges each operation well on the selected machine and has a small ML, as shown in Fig. 11. The comparative results against GA1 and ILS confirm the proposed method's effectiveness again.
In summary, the proposed method can effectively handle both JSSP and FJSSP with multiple objectives. By changing the order of makespan, AUR, and ML in the fitness function, we can easily obtain multiple solutions for different production statuses. However, GA is much more time-consuming, as it executes the selection, crossover, and mutation operations for 400 generations: it takes almost half a day to process la40, while ILS only needs ten seconds. How to design an effective and time-saving GA to search the global path is worth further study.
Conclusions
This article proposes an effective job shop scheduler, GAILS, to solve both JSSP and FJSSP with multiple objectives, including makespan, AUR, and ML. In the proposed method, GA searches for an approximate global solution, which is then fed into ILS to explore the best local solution. In GA, three improved operators (selection, crossover, and mutation) are designed to select the best chromosome for the global solution using a combined fitness function made of makespan, AUR, and ML. Thus, the proposed method has an excellent search capacity and can balance globality and diversity. The comparative analysis based on sixty-six instances confirmed the effectiveness of the proposed method for both JSSP and FJSSP: it received a 96.94% average scheduling score on large-scale JSSP instances (see Table 2), won all cases on the la01-la20 JSSP instances in terms of makespan, and won fourteen out of twenty in terms of AUR. For FJSSP, it won thirty-seven, thirty-three, and thirty-three times out of forty instances in terms of makespan, AUR, and ML, respectively.
As discussed above, the GA used in the proposed method is comparatively time-consuming. In future work, we will focus on designing a time-saving GA to combine with ILS for solving JSSP and FJSSP. We will also compare the proposed method with other multi-objective JSSP algorithms to further validate its effectiveness.
Data availability
The datasets used in this article come from well-known public benchmarks, which have been cited appropriately.
References
Lu, C., Zhang, B., Gao, L., Yi, J. & Mou, J. A Knowledge-Based Multiobjective Memetic Algorithm for Green Job Shop Scheduling With Variable Machining Speeds. IEEE Syst. J. 16, 844–855 (2022).
Lu, C., Gao, R., Yin, L. & Zhang, B. Human-Robot Collaborative Scheduling in Energy-efficient Welding Shop. IEEE Trans. Ind. Inform. PP, 1–9 (2023).
Osterrieder, P., Budde, L. & Friedli, T. The smart factory as a key construct of industry 4.0: A systematic literature review. Int. J. Prod. Econ. 221, 107476 (2020).
Han, B. A. & Yang, J. J. Research on adaptive job shop scheduling problems based on dueling double DQN. IEEE Access 8, 186474–186495 (2020).
Jiang, E., Wang, L. & Wang, J. Decomposition-based multi-objective optimization for energy-aware distributed hybrid flow shop scheduling with multiprocessor tasks. Tsinghua Sci. Technol. 26, 646–663 (2021).
Pezzella, F., Morganti, G. & Ciaschetti, G. A genetic algorithm for the Flexible Job-shop Scheduling Problem. Comput. Oper. Res. 35, 3202–3212 (2008).
Li, X. & Gao, L. An effective hybrid genetic algorithm and tabu search for flexible job shop scheduling problem. Int. J. Prod. Econ. 174, 93–110 (2016).
Shao, X. & Kim, C. S. An Adaptive Job Shop Scheduler Using Multi-Level Convolutional Neural Network and Iterative Local Search. IEEE Access 10, 88079–88092 (2022).
Bagheri, A., Zandieh, M., Mahdavi, I. & Yazdani, M. An artificial immune algorithm for the flexible job-shop scheduling problem. Future Gener. Comput. Syst. 26, 533–541 (2010).
Li, J., Liu, Z., Li, C. & Zheng, Z. Improved artificial immune system algorithm for type-2 fuzzy flexible job shop scheduling problem. IEEE Trans. Fuzzy Syst. 29, 3234–3248 (2020).
Wisittipanich, W. & Kachitvichyanukul, V. Differential evolution algorithm for job shop scheduling problem. Ind. Eng. Manag. Syst. 10, 203–208 (2011).
Schranz, M., Umlauft, M. & Elmenreich, W. Bottom-up job shop scheduling with swarm intelligence in large production plants. Proc. 11th Int. Conf. Simul. Model. Methodol. Technol. Appl. SIMULTECH 2021 327–334 (2021). https://doi.org/10.5220/0010551603270334.
Weckman, G. R., Ganduri, C. V. & Koonce, D. A. A neural network job-shop scheduler. J. Intell. Manuf. 19, 191–201 (2008).
https://www.kaggle.com/datasets/robikscube/hourlyenergyconsumption.
Chen, W., Xu, Y. & Wu, X. Deep Reinforcement Learning for Multi-Resource Multi-Machine Job Scheduling. arXiv:1711.07440 [cs.DC] 1–2 (2017).
Katoch, S., Chauhan, S. S. & Kumar, V. A review on genetic algorithm: past, present, and future. Multimed. Tools Appl. 80 (2021).
Omar, M., Baharum, A. & Hasan, Y. A. A Job-Shop Scheduling Problem (JSSP) Using Genetic Algorithm (GA) (2006).
Teekeng, W. & Thammano, A. Modified genetic algorithm for flexible job-shop scheduling problems. Procedia Comput. Sci. 12, 122–128 (2012).
Kurdi, M. An effective new island model genetic algorithm for job shop scheduling problem. Comput. Oper. Res. 67, 132–142 (2016).
Sun, K. et al. Hybrid genetic algorithm with variable neighborhood search for flexible job shop scheduling problem in a machining system. Expert Syst. Appl. 215, 119359 (2023).
Lu, C., Zheng, J., Yin, L. & Wang, R. An improved iterated greedy algorithm for the distributed hybrid flowshop scheduling problem. Eng. Optim. https://doi.org/10.1080/0305215X.2023.2198768 (2023).
Bashab, A. et al. Optimization techniques in university timetabling problem: Constraints, methodologies, benchmarks, and open issues. Comput. Mater. Contin. 74, 6461–6484 (2023).
Abdipoor, S., Yaakob, R., Goh, S. L. & Abdullah, S. Metaheuristic approaches for the university course timetabling problem. Intell. Syst. Appl. 19, 200253 (2023).
Abbasi, M., Rafiee, M., Khosravi, M. R. et al. An efficient parallel genetic algorithm solution for vehicle routing problem in cloud implementation of the intelligent transportation systems. J. Cloud Comput. 9, 6 (2020). https://doi.org/10.1186/s13677-020-0157-4.
Tang, A. M., Quek, C. & Ng, G. S. GA-TSKfnn: Parameters tuning of fuzzy neural network using genetic algorithms. Expert Syst. Appl. 29, 769–781 (2005).
Huang, Y., Gao, Y., Gan, Y. & Ye, M. A new financial data forecasting model using genetic algorithm and long short-term memory network. Neurocomputing 425, 207–218 (2021).
Fisher, H. & Thompson, G. L. Probabilistic Learning Combinations of Local JobShop Scheduling Rules. In Industrial Scheduling 225–251 (1963).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 2016, 770–778 (2016).
Shao, X., Wang, L., Kim, C. S. & Ra, I. Fault diagnosis of bearing based on convolutional neural network using multidomain features. KSII Trans. Internet Inf. Syst. 15, 1610–1629 (2021).
Chen, M., Shi, X., Zhang, Y., Wu, D. & Guizani, M. Deep features learning for medical image analysis with convolutional autoencoder neural network. IEEE Trans. Big Data 7790, 1–1 (2017).
Zang, Z. et al. Hybrid Deep Neural Network Scheduler for Job-Shop Problem Based on Convolution Two-Dimensional Transformation. Comput. Intell. Neurosci. 2019 (2019).
Li, H., Wei, T., Ren, A., Zhu, Q. & Wang, Y. Deep reinforcement learning: Framework, applications, and embedded implementations: Invited paper. IEEE/ACM Int. Conf. Comput. Des. Dig. Tech. Pap. ICCAD 2017, 847–854 (2017).
Ye, Y. et al. A New Approach for Resource Scheduling with Deep Reinforcement Learning. arXiv:1806.08122 [cs.AI] 2–6 (2018).
Lin, C. C., Deng, D. J., Chih, Y. L. & Chiu, H. T. Smart manufacturing scheduling with edge computing using multiclass deep Q network. IEEE Trans. Ind. Informatics 15, 4276–4284 (2019).
Liu, C. L., Chang, C. C. & Tseng, C. J. Actor-critic deep reinforcement learning for solving job shop scheduling problems. IEEE Access 8, 71752–71762 (2020).
Liu, Z. et al. A graph neural networks-based deep Q-learning approach for job shop scheduling problems in traffic management. Inf. Sci. (Ny) 607, 1211–1223 (2022).
Chen, B. & Matis, T. I. A flexible dispatching rule for minimizing tardiness in job shop scheduling. Int. J. Prod. Econ. 141, 360–365 (2013).
Ishigaki, A. & Takaki, S. Iterated Local Search Algorithm for Flexible Job Shop Scheduling. Proc. 2017 6th IIAI Int. Congr. Adv. Appl. Informatics, IIAI-AAI 2017 947–952 (2017). https://doi.org/10.1109/IIAI-AAI.2017.126.
Moghadam, A. M., Wong, K. Y. & Piroozfard, H. An efficient genetic algorithm for flexible job-shop scheduling problem. IEEE Int. Conf. Ind. Eng. Eng. Manag. 2015, 1409–1413 (2014).
Fisher, H. & Thompson, G. L. Probabilistic learning combinations of local jobshop scheduling rules. Ind. Sched. 3(2), 225–251 (1963).
Lawrence, S. An Experimental Investigation of heuristic Scheduling Techniques. Suppl. Resour. Constrained Proj. Sched. (1984).
Applegate, D. & Cook, W. A computational study of the job-shop scheduling problem. ORSA J. Comput. 3, 149–156 (1991).
Yamada, T. & Nakano, R. Job shop scheduling. IEEE Control Eng. Ser. 134 (1997).
Hurink, J., Jurisch, B. & Thole, M. Tabu search for the job-shop scheduling problem with multi-purpose machines. OR Spektrum 15, 205–215 (1994).
Acknowledgements
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A3052605).
Author information
Contributions
Conceptualization, X.S.; methodology, X.S.; validation, X.S. and S.K.F.; data curation, X.S. and S.K.F.; writing—original draft preparation, X.S.; writing—review and editing, X.S.; visualization, X.S.; supervision, C.S.K.; project administration, C.S.K.; funding acquisition, C.S.K. All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Shao, X., Kshitij, F. S. & Kim, C. S. GAILS: an effective multi-object job shop scheduler based on genetic algorithm and iterative local search. Sci. Rep. 14, 2068 (2024). https://doi.org/10.1038/s41598-024-51778-1.