Improved precedence preservation crossover for multi-objective job shop scheduling problem
DOI: 10.1007/s12530-010-9022-x
- Cite this article as:
- Ripon, K.S.N., Siddique, N.H. & Torresen, J. Evolving Systems (2011) 2: 119. doi:10.1007/s12530-010-9022-x
Abstract
Over the last three decades, a great deal of research has focused on solving the job-shop scheduling problem (JSSP), and researchers have proposed a wide variety of approaches to this stubborn problem. Recently, much effort has been concentrated on evolutionary techniques that search for near-optimal solutions while optimizing multiple criteria simultaneously. The choice of crossover operator is very important in genetic algorithms (GA), and consequently a wide range of crossover operators has been proposed for the JSSP. Most of them represent a solution by a chromosome containing the sequence of all operations and decode the chromosome into a real schedule from the first gene to the last. However, these methods introduce high redundancy at the tail of the chromosome. In this paper, we address this problem in the case of precedence preservation crossover (PPX), which is regarded as one of the better crossover operators, and propose an improved version, termed improved precedence preservation crossover (IPPX). Experimental results reveal that the proposed approach finds near-optimal solutions with better results when optimizing multiple criteria simultaneously, and also reduces the execution time significantly.
Keywords
Precedence preservation crossover (PPX) · Job-shop scheduling problem (JSSP) · Multi-objective evolutionary optimization · Pareto optimal front
1 Introduction
In the job shop scheduling problem (JSSP), the objective is to allocate resources such that a number of tasks are completed cost-effectively within a given set of constraints. The JSSP is one of the most widely studied problems in computer science and is of great importance to manufacturing industries, where the objective is to minimise production cost. The classical JSSP can be described as scheduling n different jobs on m machines. The n non-preemptable jobs are all independent, and each job J_{i}, 1 ≤ i ≤ n, is to be processed on a set of m machines M_{r}, 1 ≤ r ≤ m, following its own technological sequence of machines. The processing times are known in advance, and the jobs are executed only after the scheduling is completed. The processing of job J_{i} on machine M_{r} is called the operation O_{ir}. Operation O_{ir} requires the exclusive use of M_{r} for an uninterrupted duration P_{ir}, its processing time. A schedule is a set of completion times C_{ir}, one for each operation, that satisfies these constraints. Thus, the JSSP can be considered a search or optimization problem whose goal is to find the best possible schedule. The classical JSSP is one of the most challenging combinatorial optimization problems, mainly for two reasons. Firstly, even for the special case of m = 2, the JSSP is NP-hard. Secondly, one has to deal with a tricky search space, as neighboring points may represent very different solutions.
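To make the decoding step concrete, the following minimal Python sketch builds a semi-active schedule from an operation-based chromosome (job numbers with repetition, where the k-th occurrence of a job number denotes that job's k-th operation) and computes its makespan. The instance data and function names are hypothetical illustrations, not taken from the paper's benchmarks.

```python
# Each job is a technological sequence of (machine, processing_time) pairs.
jobs = [
    [(0, 3), (1, 2), (2, 2)],  # job 0: machine 0 for 3 units, then machine 1, ...
    [(0, 2), (2, 1), (1, 4)],  # job 1
    [(1, 4), (2, 3), (0, 1)],  # job 2
]

def decode(chromosome, jobs):
    """Decode an operation-based chromosome (job numbers with repetition)
    into a semi-active schedule; return completion times and the makespan."""
    n_machines = 1 + max(m for job in jobs for m, _ in job)
    next_op = [0] * len(jobs)          # next operation index per job
    job_ready = [0] * len(jobs)        # completion time of each job's last operation
    machine_ready = [0] * n_machines   # release time of each machine
    completion = {}
    for j in chromosome:               # k-th occurrence of j = k-th operation of job j
        k = next_op[j]
        machine, p = jobs[j][k]
        start = max(job_ready[j], machine_ready[machine])
        completion[(j, k)] = start + p
        job_ready[j] = machine_ready[machine] = start + p
        next_op[j] = k + 1
    return completion, max(job_ready)  # makespan = latest job completion

completion_times, makespan = decode([0, 1, 2, 0, 1, 2, 0, 1, 2], jobs)
```

Every valid chromosome for this instance contains each job number exactly three times; the decoder is a standard semi-active schedule builder, in which each operation starts as soon as both its job predecessor and its machine are free.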
Due to expansion in the manufacturing industry and industrial automation, the JSSP has become a practical problem in industry and has attracted a lot of attention from the research community. The difficulty of the general JSSP is that it makes it rather hard for conventional search-based methods to find a set of optimal schedules within polynomial time. Deterministic scheduling methods such as branch-and-bound (Brucker et al. 1994) or dynamic programming (Peter et al. 1999) are computationally expensive when searching a large search space for an optimum solution. To combat the increasing complexity, many researchers have suggested searching for near-optimal solutions instead. Such near-optimal solutions can be provided by stochastic search techniques such as evolutionary algorithms (EAs) (Ripon 2007). Davis proposed the first GA-based approach to the solution of scheduling problems in 1985 (Davis 1985). This paper was instructive for the application of GAs to the JSSP, and since then GAs have been applied to the JSSP with increasing frequency. In contrast to heuristics such as simulated annealing (Kirkpatrick et al. 1985) and tabu search (Glover 1989), which are local search techniques that manipulate one feasible solution at a time using a generate-and-test search based on a physical rather than a biological analogy, a GA maintains a population of solutions in its search, giving it more resistance to premature convergence on local minima.
Surprisingly, most research on the JSSP has focused on a single objective, predominantly optimization of the makespan, which is defined as the time interval between the start of the first operation and the completion of the last operation. However, real-world scheduling problems naturally involve multiple objectives. In general, minimization of the total makespan is used as the optimization criterion in the JSSP, but tardiness, flow time, lateness and earliness are also important criteria. Real-life scheduling problems are inherently multi-objective, and the final schedule must consider various objectives simultaneously. Consequently, scheduling falls into the category of multi-objective optimization problems (MOOPs). In such MOOPs there is no single optimal solution; rather, there is a set of alternative solutions. These solutions, namely Pareto-optimal solutions (Deb 2001; Zitzler et al. 2000; Veldhuizen and Lamont 2000), are optimal in the wider sense that no other solutions in the search space are superior when all objectives are considered. In a Pareto-optimal set, no two solutions dominate each other, and one or more members of the set dominate every solution in the search space outside the set. By the principle of multi-objective optimization, obtaining a single solution that satisfies all objectives is almost impossible, mainly because of the conflicting nature of the objectives: improving one objective may only be achieved by worsening another. Accordingly, it is desirable to obtain as many different Pareto-optimal solutions as possible, solutions that converge to, and are diverse along, the Pareto-optimal front with respect to multiple criteria.
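The dominance relation described above can be stated compactly in code. The following is a generic sketch for minimization objectives; the sample objective vectors are hypothetical (makespan, mean flow time) pairs, not results from the paper.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (makespan, mean flow time) pairs for four candidate schedules.
schedules = [(930, 840), (945, 820), (940, 860), (950, 830)]
front = pareto_front(schedules)
```

Here (940, 860) is removed because (930, 840) is better in both criteria, and (950, 830) is removed because (945, 820) is better in both; the two remaining solutions trade makespan against mean flow time and thus do not dominate each other.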
Nagar et al. (1995) provided a good review of multi-objective production scheduling. This survey, based on a complete classification scheme, mostly covered research conducted up to the early 1990s. In that research, the solution methodology primarily comprised implicit enumeration techniques such as branch and bound, dynamic programming and trade-off curves. The authors specifically mentioned that they did not come across any paper that used techniques such as simulated annealing, tabu search or genetic algorithms. This was also confirmed in a survey by Lei (2009): conventional techniques were the main approaches to multi- and bi-objective scheduling problems before 1995. Lei provided extensive coverage of the literature on multi-objective production scheduling, first classifying scheduling problems into deterministic and non-deterministic classes, and categorizing the deterministic scheduling problems into four types. The author surveyed more than 90 papers published between 1995 and 2008, out of which about 40 adopted evolutionary algorithms (EAs) and GAs as optimization methods. It is evident from the two surveys that only in the current decade has a great deal of attention been focused on solving the JSSP using EAs (Ripon 2007; Bierwirth et al. 1996; Jensen 2003; Syswerda 1991; Bierwirth 1995; Yamada and Nakano 1992; Song et al. 1999; Chan et al. 2008; Ombuki and Ventresca 2004).
The fundamental concept of the JSSP can be thought of as an ordering problem, since representing a schedule resembles a permutation problem. A permutation problem can generally be formulated as follows. A set of operations (tasks) with known processing times has to be scheduled on m machines (resources). A group of m operations forms a complex called a job, and altogether N jobs are defined within the set of operations; hence the total number of operations is O = N × m. Such a permutation provides the basis for the application of EAs to many combinatorial optimisation problems (Bierwirth et al. 1996) and thus serves as the chromosome in a GA. A known pitfall of this permutation chromosome is the crossover operator, which may not safeguard the semantic properties of the underlying problem. Therefore, one of the central issues in the use of GAs for the JSSP is an efficient characterization of the strength of the crossover operator, which carries most of the exploratory power in a GA. Eshelman et al. investigated the exploratory power of crossover operators (Eshelman et al. 1989). Bierwirth developed a permutation representation with repetition, while Mattfeld implemented a number of crossover operations (Bierwirth et al. 1996; Bierwirth 1995; Mattfeld 1996). A large number of crossover operators have been proposed in the literature, such as generalized partially mapped crossover (GPMX) (Bierwirth et al. 1996), generalized order crossover (GOX) (Bierwirth 1995) and precedence preservation crossover (PPX) (Bierwirth et al. 1996; Mattfeld 1996), mainly owing to the need for specialist crossover operators to use with permutation representations. Details of crossover operators specifically designed for ordering applications can be found in (Bierwirth et al. 1996; Jensen 2003).
In all of these permutation-based crossover techniques, redundancy exists at the tail of a chromosome: the redundant genes at the end of the sequence have no effect on the final schedule decoded from the chromosome, yet they impose additional time complexity. Therefore, the objective of the current paper is to improve the crossover technique by proposing the improved precedence preservation crossover (IPPX) for the JSSP, which resolves this issue and also reduces the execution time. To address multiple objectives simultaneously, we use makespan and mean flow time as the objectives and present the schedules as a set of Pareto-optimal solutions.
The remainder of the paper is organized as follows. Section 2 describes various crossover operators. Section 3 presents the proposed improvement of PPX. Experimental results on the performance of the proposed method (IPPX) and comparisons with other approaches are given in Sect. 4. Finally, some concluding remarks are made in Sect. 5.
2 Crossover in permutation representation
In crossover, also known as recombination, information is exchanged among the chromosomes (or individuals) in the mating pool to create new chromosomes. Permutation-based representations present particular difficulties for the design of crossover operators, since it is generally not possible simply to exchange strings of genes between selected parents and still maintain the permutation properties. The crossover operator has to comply with the semantic meaning (properties) of the chromosome representation, that is, to combine building blocks into larger building blocks that share the phenotypical traits of the smaller ones. Two genes may convey meaningful information if they appear side by side (relative order). They may also convey information if one gene precedes the other in the chromosome (absolute order), regardless of how many genes lie between them. Syswerda (1991) inferred that the order as well as the position of genes in the permutation is meaningful. An excellent theoretical analysis of order-based crossover has been provided by Wroblewski (1996). To be precise, we expect the absolute order to be of particular interest because it directly expresses the precedence relations among the operations in a schedule. A number of specialized crossover operators have been designed for permutations; these aim to exchange as much as possible of the information that is held in common by both parents.
PPX passes on precedence relations as follows: a template vector decides, position by position, from which parent the next gene is taken; the leftmost not-yet-used gene of the chosen parent is appended to the offspring and removed from both parents. In order to apply PPX in a uniform crossover fashion, these parental choices may alternate at random. As a result, the absolute order between any two genes in the offspring has its origin in at least one of the parental chromosomes. In short, GOX passes on the relative order of genes, GPMX tends to pass on the positions of genes while respecting the ordering to some extent, and PPX respects the absolute order of genes, resulting in a perfect preservation of precedence relations among genes.
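Under this description, a PPX step can be sketched as follows. The function name and the uniform 0/1 template draw are our illustrative choices; the operator itself follows the template-vector mechanism described above.

```python
import random

def ppx(parent1, parent2, rng=random):
    """Precedence preservation crossover for permutations (with repetition).
    A random template chooses, position by position, which parent donates
    its leftmost remaining gene; that gene is then deleted from both parents,
    so every precedence relation in the child exists in at least one parent."""
    p = [list(parent1), list(parent2)]
    child = []
    for _ in range(len(parent1)):
        donor = rng.randrange(2)   # uniform-crossover style template entry
        gene = p[donor][0]         # leftmost not-yet-used gene of the donor
        child.append(gene)
        p[0].remove(gene)          # remove first occurrence from both parents
        p[1].remove(gene)
    return child

child = ppx([0, 1, 2, 0, 1, 2], [2, 1, 0, 2, 1, 0])
```

Because genes are always taken from the front of a parent and deleted from both, the child is guaranteed to contain exactly the same multiset of genes as each parent, which keeps operation-based chromosomes feasible.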
2.1 Superiority of PPX over GPMX and GOX
Phenotypical preservation of crossover
Operator | d(p_{1}, p_{2}) | d(o, p_{1}) | d(o, p_{2}) | d(o, p_{1}) + d(o, p_{2})
---|---|---|---|---
PPX | 0.273 | 0.137 | 0.136 | 0.273
GPMX | 0.273 | 0.141 | 0.139 | 0.280
GOX | 0.273 | 0.150 | 0.152 | 0.302
The average normalized Hamming distance between two arbitrary solutions is 0.273. For all three crossover operators we observe that d(o, p_{1}) ≈ d(o, p_{2}), which confirms that all operators pass on the same portion of parental information. In the case of PPX, d(o, p_{1}) + d(o, p_{2}) = d(p_{1}, p_{2}) holds, meaning that PPX passes on precedence relations perfectly, in contrast to GOX and GPMX.
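As a rough illustration, the normalized Hamming distance between two equal-length gene sequences can be computed as below. Note this is only a sketch: whether the distances d(·,·) above are measured on raw gene sequences or on decoded phenotypes is not restated in this section, so treating them as position-wise sequence distances is an assumption.

```python
def normalized_hamming(a, b):
    """Fraction of positions at which two equal-length sequences differ."""
    assert len(a) == len(b), "sequences must have equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

d = normalized_hamming([1, 2, 3, 4], [1, 3, 2, 4])  # differs at 2 of 4 positions
```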
2.2 Redundancy problem of PPX
Most GA approaches for the JSSP represent a solution by a chromosome containing the sequence of all operations and decode the chromosome into a real schedule from the first gene to the last. According to Song et al. (1999), there are three common problems with these approaches. Firstly, they introduce high redundancy at the tail of the chromosome. Secondly, the rear genes have little significance for the overall schedule quality. Finally, GA operators applied to the rear part of the chromosome are less likely to create genetically improved offspring, i.e., they are most likely a waste of evolution time.
Pointing at the last three genes (3, 2, and 1), we find that each of these operations is the last operation of its respective job. Likewise, within the last four genes we find the last operation of job 3. Finishing the last operation of job 3 at an earlier time does not reduce the total completion time for job 3, so it is redundant to manipulate these last genes. Most importantly, for a large number of jobs and operations (e.g., a 100 × 100 JSSP), this redundancy increases and causes unnecessary delay in delivering an optimal schedule.
3 Proposed improved PPX (IPPX) approach
3.1 Chromosome representation
Chromosome representation for the JSSP in a GA can follow two basic encoding approaches: direct and indirect (Chan et al. 2008). The direct approach encodes a schedule as a chromosome, and the genetic operators are used to evolve these chromosomes into better schedules. In an indirect representation, the chromosome encodes a sequence of decision preferences, such as a simple ordering of jobs on a machine or heuristic rules, and a schedule builder is required to decode the chromosome into a schedule.
3.2 Reduction of redundancy
As discussed earlier in Sect. 2, permutation-based crossover techniques exhibit redundancy at the tail of the chromosome. To reduce such tail redundancy, we perform PPX for (N − M) genes of the two parent chromosomes and choose the last M genes using a simple heuristic method, where M follows the constraint \( O \le M \le \frac{N}{3}.\) Here, O is the number of operations and N is the total number of genes.
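A hedged sketch of this idea follows: PPX is applied only to the first N − M genes, and the redundant tail is filled with the leftover genes without recombination. The paper selects the tail with a simple heuristic whose details are not reproduced in this sketch, so appending the leftovers in their remaining parent order is a hypothetical stand-in, as are the function and parameter names.

```python
import random

def ippx(parent1, parent2, tail_len, rng=random):
    """Sketch of IPPX under stated assumptions: apply PPX to the first
    N - tail_len genes, then fill the redundant tail with the leftover
    genes instead of recombining them."""
    p = [list(parent1), list(parent2)]
    child = []
    for _ in range(len(parent1) - tail_len):   # PPX on the first N - M genes
        gene = p[rng.randrange(2)][0]
        child.append(gene)
        p[0].remove(gene)
        p[1].remove(gene)
    child.extend(p[0])                         # tail: leftover genes, no recombination
    return child

child = ippx([0, 1, 2, 0, 1, 2], [2, 1, 0, 2, 1, 0], tail_len=2)
```

Skipping recombination on the tail is what saves time: the crossover loop, with its repeated gene removals, runs only N − M times instead of N.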
3.3 Mutation
3.4 Efficiency of IPPX method
4 Experimental result and analysis
4.1 Experimental setup
The experiments are conducted using 100 chromosomes and 150 generations. The probabilities of crossover and mutation are 0.9 and 0.3, respectively. Using the same settings, each problem is tested 30 times with different seeds, and the best and average solutions are recorded. All final generations are then combined and a non-dominated sorting is performed to constitute the final non-dominated solutions. To justify the efficiency of the proposed crossover method (IPPX) (as shown in Algorithm 1), the results are compared with those of the existing well-known crossover methods GPMX, GOX and PPX within the framework of a multi-objective genetic algorithm; in this work we use the non-dominated sorting genetic algorithm II (NSGA-II) (Deb et al. 2002) as that framework. Additionally, we compare the proposed IPPX-based JSSP approach with other GA-based JSSP approaches in both single-objective and multi-objective contexts.
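The non-dominated sorting used to constitute the final solution set can be sketched generically as repeated extraction of Pareto fronts. This is a simple quadratic-per-front version for illustration, not NSGA-II's fast non-dominated sort, and the sample points are hypothetical.

```python
def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts (minimization).
    Front 0 is the non-dominated set; front k is non-dominated once
    fronts 0..k-1 have been removed."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

fronts = non_dominated_sort([(1, 2), (2, 1), (2, 2), (3, 3)])
```

Combining the final generations of all 30 runs and taking front 0 of the merged set yields the reported non-dominated solutions.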
4.2 Benchmark problems
Benchmark problems with their lower bounds
Instance data | Number of jobs | Number of machines | Lower bound (makespan)
---|---|---|---
mt06 | 6 | 6 | 55
mt10 | 10 | 10 | 930
mt20 | 20 | 5 | 1,165
la21 | 15 | 10 | 1,040
la24 | 15 | 10 | 935
la25 | 15 | 10 | 977
la27 | 20 | 10 | 1,235
4.3 Objective functions
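The two criteria optimized in this work, makespan and mean flow time, can be computed from per-job completion times as in this minimal sketch. The function name is ours, and the assumption that all jobs are released at time zero (so flow time equals completion time) is ours as well.

```python
def objectives(job_completion_times):
    """Compute (makespan, mean flow time) from per-job completion times,
    assuming all jobs are released at time zero."""
    makespan = max(job_completion_times)                                # latest completion
    mean_flow = sum(job_completion_times) / len(job_completion_times)  # average completion
    return makespan, mean_flow

result = objectives([11, 10, 9])  # hypothetical completion times of three jobs
```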
4.4 Analysis of results
4.4.1 Single objective context
Until now, almost all JSSP algorithms have tried to optimize a single criterion only (mainly minimization of the makespan). Therefore, to evaluate the proposed IPPX-based algorithm as an evolutionary approach, we first compare the makespan obtained by our approach with that of existing GA-based approaches, to justify its capability to optimize makespan. We then demonstrate its performance as a multi-objective evolutionary JSSP algorithm by optimizing makespan and mean flow time simultaneously. Note that in both cases we use the same results achieved by our approach.
Comparison of different operators in single objective context
Instance data | Lower bound | sGA | GTGA | LSGA | SGA | JGGA | IPPX
---|---|---|---|---|---|---|---
mt06 | 55 | 55 | 55 | 55 | 55 | 55 | 55
mt10 | 930 | 994 | 930 | 976 | 965 | 930 | 930
mt20 | 1,165 | 1,247 | 1,184 | 1,209 | 1,215 | 1,180 | 1,180
4.4.2 Multi-objective context
Comparison with other operators in multi-objective context
Instances | Crossover operators | Makespan (Best) | Makespan (Average) | Mean flow time (Best) | Mean flow time (Average)
---|---|---|---|---|---
Mt06 | GPMX | 55 | 59.73 | 53 | 53.046
 | GOX | 55 | 59.81 | 53 | 52.51
 | PPX | 55 | 59.71 | 53 | 53.918
 | IPPX | 55 | 59.35 | 50 | 50.086
Mt10 | GPMX | 945 | 1,035.13 | 814 | 875.89
 | GOX | 943 | 1,030.01 | 812 | 880.13
 | PPX | 942 | 1,016.14 | 830 | 871.815
 | IPPX | 930 | 1,013.16 | 822 | 830
Mt20 | GPMX | 1,188 | 1,266.60 | 818 | 901.089
 | GOX | 1,196 | 1,325.80 | 820 | 822.043
 | PPX | 1,188 | 1,256.90 | 810 | 915
 | IPPX | 1,180 | 1,216.39 | 767 | 768.90
La21 | GPMX | 1,062 | 1,112.43 | 898 | 903.173
 | GOX | 1,058 | 1,116.543 | 905 | 908.231
 | PPX | 1,058 | 1,123.56 | 908 | 914.184
 | IPPX | 1,046 | 1,103.68 | 913 | 919.66
La24 | GPMX | 968 | 972.054 | 817 | 817.34
 | GOX | 968 | 986.61 | 819 | 829.59
 | PPX | 966 | 998.30 | 829 | 833.006
 | IPPX | 935 | 975.88 | 829 | 867.73
La25 | GPMX | 998 | 1,065.18 | 809 | 821.30
 | GOX | 1,002 | 1,071.09 | 811 | 856.13
 | PPX | 993 | 1,045.701 | 803 | 811.861
 | IPPX | 982 | 1,033.314 | 773 | 823.27
La27 | GPMX | 1,257 | 1,395.31 | 1,088 | 1,099.91
 | GOX | 1,258 | 1,401.03 | 1,091 | 1,116.81
 | PPX | 1,255 | 1,398.10 | 1,123 | 1,133.7
 | IPPX | 1,243 | 1,384.66 | 1,111 | 1,111.25
The hypervolume values for this problem are 1068.5 for GPMX, 847.5 for GOX, 717.5 for PPX and 1784.5 for IPPX. Zitzler (1999) defined a metric called maximum spread, which measures the length of the diagonal of the hyperbox formed by the extreme function values observed in the non-dominated set. The maximum spreads of the Pareto-front solutions of the corresponding operators are 40.4969 for GPMX, 48.1041 for GOX, 39.6232 for PPX, and 48.7647 for IPPX. Although the maximum spread does not reveal the true distribution of the solutions, inspecting Fig. 12 shows that all non-dominated solutions are nearly uniformly spaced. It is evident from these performance metrics that IPPX outperforms its peers.
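Zitzler's maximum spread, as described above, can be computed as the diagonal of the bounding hyperbox of the front. The sketch below follows that definition; the sample front is hypothetical, not one of the fronts reported in the tables.

```python
import math

def maximum_spread(front):
    """Zitzler's maximum spread: length of the diagonal of the hyperbox
    spanned by the extreme objective values in a non-dominated set."""
    per_objective = zip(*front)  # group values by objective
    return math.sqrt(sum((max(vals) - min(vals)) ** 2 for vals in per_objective))

# Hypothetical bi-objective front of (makespan, mean flow time) pairs.
spread = maximum_spread([(930, 840), (945, 825), (960, 815)])
```

For this example the spans are 30 and 25, so the spread is sqrt(30² + 25²); a larger spread indicates that the front covers a wider range of trade-offs between the objectives.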
4.5 Computational time
Computation time by different operators
Problem domain | Generation | GPMX (s) | GOX (s) | PPX (s) | IPPX (s)
---|---|---|---|---|---
Mt10 (10 × 10) | 1,000 | 18 | 18 | 18 | 17
 | 2,000 | 34 | 35 | 35 | 34
 | 3,000 | 53 | 53 | 53 | 51
 | 4,000 | 70 | 71 | 70 | 68
Mt20 (20 × 5) | 1,000 | 18 | 18 | 18 | 18
 | 2,000 | 35 | 36 | 36 | 34
 | 3,000 | 54 | 54 | 54 | 51
 | 4,000 | 71 | 72 | 71 | 68
La27 (20 × 10) | 1,000 | 34 | 34 | 34 | 34
 | 2,000 | 69 | 69 | 68 | 67
 | 3,000 | 101 | 101 | 100 | 99
 | 4,000 | 134 | 135 | 134 | 130
La21 (15 × 10) | 1,000 | 26 | 26 | 26 | 26
 | 2,000 | 52 | 52 | 52 | 51
 | 3,000 | 77 | 78 | 77 | 75
 | 4,000 | 102 | 103 | 102 | 99
La24 (15 × 10) | 1,000 | 26 | 27 | 27 | 26
 | 2,000 | 52 | 52 | 52 | 50
 | 3,000 | 77 | 77 | 77 | 75
 | 4,000 | 101 | 102 | 101 | 98
5 Conclusion
This paper presents an improved PPX-based crossover for multi-objective JSSP that reduces the redundancy at the tail of a chromosome. Experimental results exhibit that removing redundancy at the tail of a chromosome is helpful for getting better result in a reasonable computational time. Considering the real world demand for scheduling with reasonable time limit, experimental results justify that the proposed method will be very effective for large problems like 100 × 200, 150 × 300, etc. which is very usual in practice. We also believe that removing redundancy at the tail of a chromosome will be beneficial for all operation based chromosome representation for multiple-objective JSSP.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.