New meta-heuristic for dynamic scheduling in permutation flowshop with new order arrival

The permutation flowshop scheduling problem (PFSP) has attracted much attention from both academia and industry, as it finds many applications, especially in today's mass customization. The PFSP has been proven NP-hard, and dynamic uncertainties such as stochastic new order arrivals significantly increase its complexity and difficulty. Many enterprises struggle to decide whether to accept new orders and how to set due dates for them, due to the lack of effective scheduling methods. To fill this knowledge gap, this paper proposes a new meta-heuristic algorithm based on a new enhanced destruction and construction method and a novel repair method, adopting the architecture of the iterated greedy algorithm. Statistical tests were conducted, and the results show that the new algorithm outperforms existing ones.


Introduction
Customers nowadays demand more and more customized products at no extra cost compared with goods mass produced by dedicated production lines. Mass customization requires flow line production systems which can respond to changing demand quickly and efficiently. A flow line, also known as a flowshop, refers to a production system in which machines are arranged along a line on which one or more products are manufactured in a specified order, from the first machine to the last. Flow lines can be found in auto manufacturing [1], integrated circuit (IC) fabrication [2], photographic film production [3], and the pharmaceutical and agro-food industries [4].
Flowshop scheduling is a challenging problem [5][6][7][8][9]. The flowshop scheduling problem has been proven to be nondeterministic polynomial-time hard (NP-hard) when the number of machines is larger than 2 [10]. Many studies [5,11] focus on the static environment, which simplifies the problem constraints. However, more attention should be paid to dynamic events that frequently occur in reality, such as new order arrivals [12,13], machine breakdowns [14], and rush orders [15]. It is typical in today's mass customization that new orders frequently arrive at factories. The randomness of new orders often leads to sheer complexity in scheduling due to dynamic changes under various resource constraints. The uncertainties in terms of job types, job numbers, arrival times, due dates, etc. have created significant challenges [12,16], which increase the complexity of the scheduling problem significantly. This happens in warehouses as well, where the problem is known as the online order batching problem [17][18][19][20], in which new orders arrive frequently and stochastically, and scheduling or rescheduling should respond immediately upon their arrival. Three key questions should be considered concerning new orders: (1) whether the new order can be accepted, given the due date constraint of the old order; (2) how to complete the new order as soon as possible; and (3) how to maximize the number of accepted orders in order to maximize revenue.
The problem is a constrained optimization problem. It is NP-hard in the strong sense, especially when the due date of the old order is loose, as it can then be reduced to the PFSP by including all jobs from new orders [13]. In this paper, a new meta-heuristic is derived from the framework of the iterated greedy (IG) algorithm [21], under which a new enhanced destruction and construction method and a novel repair method are proposed. The paper is organized as follows. A literature review is presented in section 2, followed by the problem statement in section 3. Section 4 presents the new meta-heuristic algorithm, named eIG_Rep. Section 5 introduces the experimental validation of the new algorithm. Section 6 concludes this paper.

Related work
The approaches to handling uncertainties can be classified into three categories [22,23]: (1) reactive approaches; (2) proactive approaches; and (3) predictive-reactive approaches. Reactive methods such as the shortest processing time (SPT) rule, the first come first served (FCFS) rule, and the longest processing time (LPT) rule can be easily used to schedule jobs in real time. The right shifting (RS) strategy and the real-time (RT) strategy are also reactive methods for coping with newly arrived orders [12]; both attach new orders to the end of the existing schedule. It has been confirmed that rescheduling the remaining jobs of the old order can provide a higher-quality schedule [13,16,24]. However, no reactive method can provide high-quality solutions with order mixing, thereby missing the opportunity for an optimized schedule [25]. Proactive scheduling techniques are developed based on anticipated disturbances [26,27]. They are widely used to deal with uncertainties that are easy to predict and simulate. By assuming probabilistic processing times, an improved genetic algorithm was developed to maximize the probability of completing orders before expected due dates [28]. A two-phase simulation-based algorithm was developed for flowshop scheduling with stochastic processing times [29]. Though proactive approaches are robust to uncertainties, they cannot be applied to the problem studied in this paper, because new order arrivals are difficult to predict and simulate precisely due to many uncertain parameters in terms of job types, number of jobs, arrival times, and due dates.
Predictive-reactive approaches are commonly used scheduling methods under uncertainty [30,31]. They involve two main steps: (1) a predictive schedule is generated over the time horizon, and (2) the resulting schedule is modified to respond to uncertainties. To handle newly arrived orders, a genetic algorithm with non-reshuffle and reshuffle strategies was proposed for job scheduling, and a match-up strategy was applied to determine the time window for rescheduling in flexible manufacturing systems [32]. A hybrid algorithm integrating a genetic algorithm and tabu search was developed as a rescheduling technique to cope with new orders and machine breakdowns simultaneously in jobshops [33]. An assignment model was built for generating the baseline schedule, and an improved genetic algorithm was developed for rescheduling of new orders in a single-machine layout [34]. A refreshing variable neighborhood search (RVNS) approach was presented for job rescheduling to set common due dates for newly arrived orders in permutation flowshops [13]. In summary, the predictive-reactive method is the most popular one, and it can provide high-quality schedules because order mixing can be achieved in job rescheduling. However, existing work focuses on jobshops or flexible manufacturing systems [33,34]. The PFSP coping with new orders of multiple jobs has not been well addressed.
The iterated greedy (IG) algorithm is a predictive-reactive approach. In the IG framework, a greedy construction heuristic comprising destruction and construction phases is iterated until the stopping criterion is satisfied, and a local search can be applied to further improve performance. Depending on solution feasibility, a repair method can also be used to enhance the algorithm's effectiveness. IG has been employed to solve many scheduling problems [35][36][37][38][39], including the static permutation flowshop scheduling problem (PFSP) [40]. However, it tends to become trapped in local optima. Many modifications of the IG algorithm have been developed [39][40][41]. For the destruction and construction phases, the majority of studies focus on the job insertion method. To avoid removing jobs repeatedly, tabu-based destruction methods have been developed [39,42]. Different heuristics or strategies have been presented for choosing suitable jobs [43]. In [35], the prior and posterior jobs of the inserting job are reinserted in all possible positions in the construction phase. Some speed-up methods have been presented for improving job insertion efficiency [38,40]. Moreover, many studies concentrate on escaping from local optima via the destruction and construction phases [44,45]. An insertion perturbation was designed to provide a solution different from the previous local optimum [44]. Some studies alter the number of perturbed jobs to change the strength of the perturbation [36,38]. Normally, it is a fixed and small value [45]. However, it has been confirmed that no single fixed value can ensure high-quality solutions [36]. It is difficult to escape from a local optimum if a small number of perturbed jobs is defined [35,44,46]. In [36], the number of perturbed jobs is randomly selected within a defined range. In [38], a differential evolution algorithm is modified and used to optimize the number of perturbed jobs within the IG algorithm for no-idle permutation flowshop scheduling problems. However, extra computation time is needed for the optimization process.
Many variants or modifications [41,43] have been developed by modifying the local search method to improve IG algorithm performance. A descent local search was added to the IG algorithm for complex flowshop problems with sequence-dependent setup times [41]. Three local search methods were revised based on jump moves, swap moves, and a variable neighborhood descent approach for a scheduling problem with unrelated parallel machines [43]. For no-wait flowshop scheduling problems, a neighborhood search method based on insert, swap, and double-insert moves was developed [39]. A variant of the non-exhaustive descent algorithm was developed by swapping any two positions in the sequence, and the influence of ties was also investigated during the local search [37]. It can be concluded that different problems need different local search methods. In this paper, a dedicated local search mechanism is proposed that skips infeasible solutions violating the due date constraints of old orders, in order to save computation time.
For highly constrained optimization problems, e.g., the new order scheduling problem with due date constraints of old orders, solution feasibility may not be guaranteed during the search process, especially when using meta-heuristics [47]. It is important to determine whether infeasible solutions are discarded or searched in order to find feasible solutions. A repair algorithm was developed within genetic algorithms (GAs) for solving numerical optimization problems with nonlinear constraints, where two separate populations are kept: one holding "search points", which are repaired and evaluated, and the other holding "reference points", which are evaluated directly [47]. A repair algorithm for robot path planning problems was presented through genetic operators designed using prior knowledge [48]. A genetic repair operator was designed in parallel GAs for the traveling salesman problem and the graph partitioning problem by constructing a new feasible chromosome after identifying all gene loci and alleles [49]. There are two techniques for dealing with repaired infeasible solutions: returning them to the population or not. Some studies [50,51] have taken the "never replacing" approach, i.e., the repaired solution is never returned to the population, while others [13] have taken the "always replacing" approach. Orvosh and Davis [52] evaluate the performance of both techniques in GAs in terms of returning the repaired or the original chromosome to the population. In summary, there are no standard heuristics for designing a repair algorithm [53], and a repair method is usually problem-dependent [54].
Various acceptance criteria are used for different problems. A "replace if better" (RB) criterion is adopted for dynamic parallel machine problems [55]. In solving blocking jobshop problems, two acceptance criteria were developed, i.e., random walk (RW) and simulated annealing-like (SA) [56]. When using RW, every feasible solution is accepted after the construction phase. When using SA, only a feasible solution with better performance is accepted, or the candidate solution is accepted with a probability. In [43], the RB and RW criteria are used for unrelated parallel machine problems. In [36], a simulated annealing-like acceptance with a sinking temperature value is applied to diversify the search area for distributed permutation flowshops. In IG algorithms, a simulated annealing-like acceptance criterion is widely adopted [35,41]. However, for the constrained optimization problem studied in this paper, e.g., the new order scheduling problem with a due date constraint on old orders, solution feasibility may not be guaranteed during the search process, especially in meta-heuristics [47]. It is important to determine whether infeasible solutions are discarded or searched in order to find feasible solutions. Three strategies are generally used to handle infeasible solutions: reject, penalize, and repair [57]. The reject strategy is a simple approach in which only feasible solutions are kept; it is reasonable when computation time is to be saved. The penalizing strategy focuses on designing an appropriate penalty function for infeasible solutions; it is difficult to find a good compromise between the objective value and the penalty. The repair method is a constraint-handling strategy. In many cases, the feasible region of the search space is very small, and feasible solutions may be surrounded by infeasible ones, so it is necessary to guide the search towards feasible regions based on information from infeasible solutions. In this paper, the acceptance criteria take infeasible solutions into consideration.

Problem statement
As shown in Fig. 1, the problem studied in this paper is a constrained optimization problem in which the n_O jobs of the old order J_O have been scheduled but not fully completed yet when the n_N jobs of the new order J_N come into the production system [16,25]. All jobs of the old order have a common due date d_O which cannot be violated. Mathematically, this is expressed as T_max^{J_O} = 0. It is assumed that new orders arrive continuously and randomly, but only one at a time. So it is a dynamic problem, but order selection is not considered in this paper. In order to set a tight due date for J_N and accept the maximum number of orders, the objective is defined as minimizing the maximum completion time of the new order, C_max^{J_N}, given the constraint of machine availability, a_k, following the convention of [24]. The problem can be denoted as Fm | prmu, d_O, a_k | C_max^{J_N} / T_max^{J_O} = 0 according to [58], where Fm represents a flowshop with m machines, prmu stands for permutation, d_O indicates that all jobs of the old order share a common due date, a_k represents the availability of machine k, and C_max^{J_N} / T_max^{J_O} = 0 is the constrained objective. Therefore, the objective function and due date constraint can be expressed as

minimize C_max^{J_N}(s)  subject to  T_max^{J_O}(s) = 0,

where s represents a job sequence. The assumptions are defined as follows.
(1) All jobs should start as soon as possible;
(2) Processing times are known and deterministic;
(3) Setup time is included in the processing time of each operation;
(4) Machines are continuously available but cannot process two or more jobs simultaneously;
(5) Job pre-emption is not permitted;
(6) Buffer capacity between machines is infinite;
(7) Only permutation schedules are allowed;
(8) Each order arrives randomly;
(9) Only one order arrives at a time;
(10) Each order may contain one or multiple jobs;
(11) Job information is known when the order arrives;
(12) All jobs in an order should be finished before its due date;
(13) Once a new order arrives, rescheduling is activated;
(14) If all existing jobs are finished and no new jobs arrive, all machines stay idle and available;
(15) No more than two orders can be mixed together, i.e., only the last old order can be rescheduled with the newly arrived order.
Note that when the new order arrives, the uncompleted jobs from the old order are rescheduled together with the new order. Old order jobs, and possibly some jobs from the new order, have to be scheduled before the due date. Therefore, the slack between the due date of the old order and its completion time has a direct impact on the resulting schedule. A relaxed due date for the old order can absorb more jobs from the new order and gives a better chance of obtaining a high-quality schedule. So, a reasonable and relaxed due date for the old order is assumed.
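The completion-time recursion that underlies the makespan objective and the due-date check above can be sketched as follows (a minimal Python illustration; the paper itself gives no code, and the function and argument names are ours):

```python
def completion_times(seq, p, avail):
    """Return C[i][k]: completion time of the i-th job of seq on machine k.
    p[j][k] is the processing time of job j on machine k; avail[k] is the
    release time a_k of machine k."""
    m = len(avail)
    C = []
    prev = list(avail)                       # completion times of the previous job
    for j in seq:
        row = [0] * m
        row[0] = prev[0] + p[j][0]
        for k in range(1, m):
            # a job starts on machine k when both the machine and the job are free
            row[k] = max(row[k - 1], prev[k]) + p[j][k]
        C.append(row)
        prev = row
    return C

def feasible(seq, p, avail, old_jobs, due):
    """Check T_max of the old order = 0: every old-order job must finish
    on the last machine no later than the common due date d_O."""
    C = completion_times(seq, p, avail)
    return all(C[i][-1] <= due for i, j in enumerate(seq) if j in old_jobs)
```

For example, with two machines, two jobs with processing times [2, 3] and [4, 1], and idle machines, the sequence (0, 1) finishes at time 7.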
To solve the above problem, a new meta-heuristic is proposed based on the IG algorithm. Three types of problems are studied: due date setting, order acceptance, and maximizing the number of accepted orders.
New meta-heuristic: a modified iterated greedy algorithm (eIG_Rep)

In this section, a new meta-heuristic algorithm named the enhanced iterated greedy algorithm with repair method (eIG_Rep) is proposed. It is derived from the IG algorithm, a single-point meta-heuristic in which only one solution is maintained and searched in each iteration. It is fast but easily trapped in a local optimum even if an acceptance criterion is used. Therefore, an effective mechanism for escaping from local optima should be designed. Besides, the problem studied in this paper has a strong due date constraint: jobs from the old order must be completed by their due date. The best feasible solution may be surrounded by, and evolved from, infeasible solutions. So, an effective repair method is required to convert infeasible solutions into feasible ones.
During the search process, two challenging issues must be addressed: (1) how to escape from a local optimum, and (2) how to repair infeasible solutions effectively. To address these issues, a new destruction and construction method and a new repair approach are proposed in this paper.
Figure 2 shows the flow chart of the new eIG_Rep algorithm. First, during initialization, the initial solution is generated and the local search is applied to it. The initial parameters δ, z, δ′, and Ti are given. In the main body of the algorithm, destruction and construction, local search, the repair method, and the acceptance criterion are run in an iterated manner until the stopping criterion is satisfied. Note that in the new enhanced destruction and construction method, the number of perturbed jobs, δ, is decided before the destruction and construction phases. The local search is then applied to the incumbent. If the solution resulting from the local search is infeasible, it is repaired; otherwise, the acceptance criterion is applied directly. The details of the new algorithm are described below.
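The control flow just described can be sketched as a skeleton in which the components are passed in as functions (a sketch only — all names are ours, and the actual eIG_Rep components are detailed in the following subsections):

```python
def eig_rep(init, destruct_construct, local_search, repair, accept,
            is_feasible, cost, delta, delta_big, z, stop):
    """Control-flow skeleton of the eIG_Rep loop described in the text."""
    S = init()
    S_best = S
    stall = 0                                    # iterations without improvement
    while not stop():
        d = delta_big if stall >= z else delta   # enlarge perturbation when stuck
        S2 = destruct_construct(S, d)
        S2 = local_search(S2)
        if not is_feasible(S2):
            S2 = repair(S2)                      # try to make the solution feasible
        if is_feasible(S2) and cost(S2) < cost(S_best):
            S_best = S2
            stall = 0                            # improvement: shrink perturbation
        else:
            stall += 1
        S = accept(S, S2)                        # SA-like acceptance criterion
    return S_best
```

A toy run (solutions are integers, cost is distance to 10, all solutions feasible) illustrates the loop converging to the best value.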

Initialization
An initial solution is required for the eIG_Rep algorithm at the beginning. To this end, three initial solutions are generated: one is obtained by the NEH heuristic [59], in which jobs from both the old and the new orders are mixed and scheduled without accounting for the due date constraint; the other two are generated by the RS and RT strategies [12], respectively, each implemented with the NEH heuristic. Among these three solutions, the feasible one with the best objective value is chosen as the initial best solution S_best. A feasible solution is guaranteed in the initialization phase, as a relaxed due date is assumed and the RT strategy implemented with the NEH heuristic always yields one. The local search method is then applied to S_best. If a solution S obtained through the local search is feasible and better than S_best, then S_best is updated by S. Note that S is defined as the incumbent solution in each iteration. The local search method is detailed in section 4.3.
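A minimal version of the NEH heuristic used for seeding may look as follows (our own sketch of the standard Nawaz-Enscore-Ham procedure, not the paper's implementation):

```python
def makespan(seq, p):
    """Makespan of a permutation on a flowshop with processing times p[j][k]."""
    m = len(p[0])
    prev = [0] * m
    for j in seq:
        row = [0] * m
        row[0] = prev[0] + p[j][0]
        for k in range(1, m):
            row[k] = max(row[k - 1], prev[k]) + p[j][k]
        prev = row
    return prev[-1]

def neh(p):
    """Order jobs by decreasing total processing time, then insert each
    job at the position that minimises the partial makespan."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq
```

The RS and RT variants would then append or mix the new order's jobs around this seed; the due-date check is applied afterwards when selecting S_best.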
Four key parameters δ, z, δ′, and Ti are set during initialization. δ represents the number of perturbed jobs; δ′ is the enlarged number of perturbed jobs in the destruction and construction method; z is the number of iterations the performance of the incumbent may remain unchanged before the perturbation is enlarged; and Ti is a control parameter for the acceptance criterion.

New destruction and construction method
In this paper, a new destruction and construction method with a steered variation approach for δ is designed to escape efficiently from local optima while maximizing computational efficiency. The new method is presented as follows.
In the destruction phase, δ jobs are randomly selected, half from the old order and half from the new order; δ is therefore defined as an even number. When the incumbent is trapped in a local optimum, i.e., its objective value stays unchanged for z iterations, the perturbation is intensified by replacing δ with a larger value δ′ (δ′ > δ). The number of removed and reinserted jobs increases to δ′ so as to diversify the search and escape from the local optimum. Once the objective value of the incumbent improves, δ is changed back to the smaller value in order to save computation time.
For example, four jobs (δ = 4) are selected evenly from the old and new orders in the destruction phase. If the objective value does not improve after two iterations (z = 2), six jobs (δ′ = 6) are then selected to enlarge the perturbation. Once the objective value improves, δ is changed back to maintain algorithm efficiency.
In the construction phase, for our problem, many infeasible solutions will be generated if all positions are investigated when inserting jobs of the old order. The due date constraint is violated if jobs of the old order are reinserted after the due date. To avoid infeasible solutions, the idea of the general local search [13] is used. If a job belongs to the new order, it is reinserted into each possible position. A job from the old order is reinserted into each possible and feasible position given the due date constraint; that is, if the maximum completion time of the reinserted job exceeds the due date, the subsequent positions are neglected. After construction, if a better and feasible solution is generated, then S_best is updated.

Fig. 2 Flow chart of the eIG_Rep algorithm (δ, the number of perturbed jobs; z, the number of iterations with δ unchanged; δ′, the enlarged number of perturbed jobs; Ti, a parameter to adjust the temperature in the acceptance criterion)
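The destruction/construction step described above can be sketched as follows (a sketch under our own naming; `best_pos` is an assumed helper that returns the best feasible insertion position, with old-order jobs skipping positions past the due date):

```python
import random

def destruct_construct(seq, delta, old_jobs, best_pos):
    """Remove delta jobs, half from the old order and half from the new
    order, then greedily reinsert each removed job at the position
    returned by best_pos(partial_sequence, job)."""
    olds = [j for j in seq if j in old_jobs]
    news = [j for j in seq if j not in old_jobs]
    removed = random.sample(olds, delta // 2) + random.sample(news, delta // 2)
    partial = [j for j in seq if j not in removed]
    for j in removed:                      # greedy reinsertion (construction)
        partial.insert(best_pos(partial, j), j)
    return partial
```

When the incumbent stalls for z iterations, the caller simply passes δ′ instead of δ, which is the whole of the steered variation mechanism.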

Local search
In the IG algorithm, a local search can be applied after the construction phase to improve performance [21]. In this paper, the general local search method [13] is modified to avoid infeasible solutions and save computation time.
The flow chart of the local search is shown in Fig. 3. For a given solution, one job is chosen and inserted into each possible and feasible position sequentially. When inserting jobs using the Taillard improvement [60], jobs from the old order are inserted subject to the due date constraint; that is, when the maximum completion time of the inserted job exceeds the due date, the subsequent positions are not examined. All jobs are selected and inserted one by one, and the best solution is saved. After the local search, if the generated solution is feasible and better than S_best, then S_best is updated; otherwise, the repair method is applied.
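The due-date-aware insertion local search can be sketched as follows (our own minimal version; the Taillard acceleration for evaluating insertions is omitted for clarity, and `due_ok` is an assumed feasibility test for inserting an old-order job at position `i`):

```python
def local_search(seq, cost, old_jobs, due_ok):
    """Remove each job in turn and reinsert it at its best position.
    For old-order jobs, stop scanning positions once the due date would
    be violated, since later positions can only finish later."""
    best, best_c = list(seq), cost(seq)
    for j in seq:
        partial = [x for x in best if x != j]
        for i in range(len(partial) + 1):
            if j in old_jobs and not due_ok(partial, j, i):
                break                       # all subsequent positions infeasible
            cand = partial[:i] + [j] + partial[i:]
            c = cost(cand)
            if c < best_c:
                best, best_c = cand, c
    return best
```

The early `break` is what saves the computation time mentioned in the text: positions after the first due-date violation are never evaluated.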

New repair method
Although infeasible solutions whose completion times violate the due date constraint are largely avoided, many infeasible solutions with better objective values may still be generated after the local search. Therefore, a repair method is required to convert these infeasible solutions into feasible ones during the search. In this paper, a novel repair mechanism is proposed. Figure 4 shows the flow chart of the new repair method. The jobs of the old order, J_T, which are completed beyond their due date are defined as tardy jobs. The jobs of the new order, J_E, which are scheduled before the due date are deemed early jobs. In order to fix the infeasible solutions while keeping their good performance, tardy jobs and early jobs with similar processing times are paired and swapped one by one until all tardy jobs are scheduled before the due date. Herein, the least Euclidean distance is introduced as the measure of similarity between tardy and early jobs, in order to maintain the overall performance of the schedule after swapping. The Euclidean distance between two jobs i and j is defined as

d(i, j) = sqrt( Σ_{k=1}^{m} (p_{i,k} − p_{j,k})² ),

where p_{i,k} is the processing time of job i on machine k. If the number of tardy jobs in J_T is larger than the number of early jobs in J_E, the jobs remaining in J_T after pairing are randomly inserted before the last job position of the old order. After repairing, if a feasible and better solution is found, S_best is updated.
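The pairing-and-swapping step can be sketched as follows (a sketch under our own naming; identifying the tardy and early job sets from the schedule, and the random insertion of leftover tardy jobs, are left to the caller):

```python
import math

def repair(seq, p, tardy, early):
    """Pair each tardy old-order job with the early new-order job whose
    per-machine processing-time vector is closest in Euclidean distance,
    and swap their positions in the sequence."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(p[a], p[b])))
    seq = list(seq)
    early = list(early)
    for t in tardy:
        if not early:
            break                           # leftovers are reinserted separately
        e = min(early, key=lambda j: dist(t, j))
        early.remove(e)
        i, k = seq.index(t), seq.index(e)
        seq[i], seq[k] = seq[k], seq[i]     # swap tardy job with its closest match
    return seq
```

Because the swapped jobs have similar processing times, the makespan of the repaired schedule stays close to that of the infeasible one, which is the stated intent of the method.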

Acceptance criterion
In the IG algorithm, a simulated annealing-like acceptance criterion is normally used to improve diversification and escape from local optima. During the search, the incumbent does not have to be feasible. If the solution S′ resulting from the local search or the repair method has a better objective value, it becomes the new incumbent S. If S′ is worse than the previous incumbent, it can still be accepted as the new incumbent with probability

P = exp( −(C_max^{J_N}(S′) − C_max^{J_N}(S)) / T ),  with  T = Ti × sumt / ((n′_1 + n_2) × m × 10),

where sumt is the sum of the job processing times of the remaining jobs of the old order and the jobs of the new order, n′_1 is the number of remaining jobs from the old order, n_2 is the number of jobs from the new order, and Ti is a parameter to adjust the temperature, defined in the initialization. Otherwise, the current incumbent S is retained.
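The criterion can be sketched as follows (a sketch only: the constant-temperature form with the divisor 10 follows the acceptance widely used in IG algorithms after Ruiz and Stützle, and the exact constant is our assumption):

```python
import math, random

def accept(S, S2, cost, sumt, n_jobs, m, Ti):
    """SA-like acceptance: always keep an improving candidate; otherwise
    keep it with probability exp(-(cost(S2) - cost(S)) / T), where the
    constant temperature T is derived from the total processing time."""
    if cost(S2) <= cost(S):
        return S2
    T = Ti * sumt / (n_jobs * m * 10)       # n_jobs = n'_1 + n_2
    return S2 if random.random() < math.exp(-(cost(S2) - cost(S)) / T) else S
```

Improving candidates are always accepted; a hugely worse candidate is accepted with a probability that is effectively zero, which keeps the search near good regions while still allowing diversification.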

Calibration
In order to fully investigate each algorithm's potential, parameter values should be decided carefully. In this paper, the reference algorithms RS_NEH [12], RT_NEH [12], MR_WWD [25], RS_IG, RT_IG, and RVNS [13] are included. As few studies address this topic, three existing heuristic algorithms, namely RS_NEH, RT_NEH, and MR_WWD, are considered; no parameters need to be tuned for them. IG is also adopted with the RS and RT strategies, denoted RS_IG and RT_IG, for performance comparison. RVNS, as the existing study on this topic, is taken as a reference in this paper.
The parameters of the IG algorithms within RS_IG and RT_IG should be carefully selected and kept the same. Since the problem studied in this paper is similar to that of [13], δ = 4 and Ti = 0.4 are selected, as they have been confirmed robust in solving PFSPs and due date setting problems. The RVNS algorithm [13] focuses on setting tight due dates for new orders by minimizing the makespan of the new order. It can be adapted to solve dynamic PFSPs with new order arrivals, and it is taken as a reference algorithm. The RVNS algorithm requires two parameters, k_max and q: k_max is the maximum number of jobs in the shaking method, and q is the percentage of jobs in the escape procedure. According to [13], k_max is set to 40, and q is randomly chosen between 75 and 95% of the number of jobs in either the old or the new order, depending on the feasibility of the solution.
Herein, the objective of setting a tight due date for the new order is used for algorithm calibration, since it can be validated on a large number of instances. The small instances of the VRF benchmark [61] are used as samples. Each instance is evenly divided into two sets: the first half, comprising the front part of the instance, is taken as the old order, while the second half is defined as the new order. The old order due date is set as a constant, (1 + α) × C_max^{J_O}, where α is the relaxation factor and C_max^{J_O} is the maximum completion time of the old order obtained by the NEH heuristic; α is set to 0.4. A computation time limit of n × (m/2) × t milliseconds, where t = 60 [21,62], is used as the stopping criterion. To avoid stagnation, all algorithms are run 10 times independently, and the best solution for each instance is retained. Note that this stopping criterion is also used in the following sections. All algorithms are coded in Matlab R2014b and run on a computer with a CPU E5520 and 6 GB of memory.
To evaluate each algorithm, the performance measure of relative percentage deviation (RPD) is employed, computed as follows:

RPD = (C_max^{J_N} − UB) / UB × 100,

where C_max^{J_N} is the maximum completion time of the new order obtained on each instance and UB is the upper bound for that instance. The UB values provided by Vallada et al. [61] are used directly, because the problem can be deemed a static permutation flowshop scheduling problem if the slack between the completion time and due date of the old order is very loose and all jobs from the new order can be completed within the slack.
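The RPD and its average over a set of instances (ARPD), used in the comparisons below, amount to (a trivial sketch with our own names):

```python
def rpd(c_max, ub):
    """Relative percentage deviation of a makespan from the instance upper bound."""
    return 100.0 * (c_max - ub) / ub

def arpd(results):
    """Average RPD over (c_max, ub) pairs for a set of instances."""
    return sum(rpd(c, u) for c, u in results) / len(results)
```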
Figure 5 shows the results of each combination of the parameters on the small instances of the VRF benchmark. As shown in Fig. 5, the best three combinations {δ, z, δ′, Ti} are those associated with the lowest ARPD values.

Experimental validation
To validate the new algorithm, two evaluation schemes are conducted. One is to set the earliest due dates for new orders; the second is to maximize the number of accepted orders whose due dates are given, via a case study. The criterion for accepting an order is based on its due date: if the new order can be completed by its due date, it is accepted; otherwise, it is rejected.

Setting a tight due date
To fully evaluate the effectiveness of the new eIG_Rep algorithm, both the Taillard and VRF benchmarks are used, and the performance measure ARPD is adopted. As described in section 4, each instance is evenly divided into two sets representing the old and new orders. The old order due date is set as (1 + α) × C_max^{J_O}, where α is set to 0.4. The UB values for the Taillard benchmark are taken from [64]. Each meta-heuristic algorithm is run ten times, and the best result on each instance is retained.
Table 1 shows the test results of each algorithm on the Taillard benchmark. The new eIG_Rep algorithm has an ARPD value of 2.10, which is much better than that of all the other reference algorithms, including RS_NEH, RT_NEH, MR_WWD, RT_IG, RS_IG, and RVNS. It performs best in most situations. Note that the algorithms that allow order mixing, such as MR_WWD, RVNS, and eIG_Rep, show better performance than RS_NEH, RT_NEH, RS_IG, and RT_IG, in which no order mixing is allowed.
Test results on the VRF benchmark are shown in Table 2. The same conclusion can be drawn: the eIG_Rep algorithm provides the best result in terms of ARPD, followed by the RVNS, MR_WWD, RT_IG, RT_NEH, RS_IG, and RS_NEH algorithms.
Figures 6 and 7 show the means plots and 95% confidence intervals of each algorithm on both benchmarks. No overlaps of the ARPD means are observed between eIG_Rep and the other algorithms, demonstrating that eIG_Rep performs significantly better.
In order to check whether the differences in these ARPD values are statistically significant, the paired samples t test is conducted at a confidence level of 95%. Results are shown in Table 3 and Table 4 for the Taillard and VRF benchmarks, respectively. The p values for the comparisons with the new algorithm are 0.000 on both test beds. Thus, it can be concluded that there are significant differences between the performance of the new algorithm eIG_Rep and that of the reference algorithms.
Figures 8 and 9 show the convergence curves of the new algorithm on two instances, TA052 from the Taillard benchmark and VRF 30_20_2 from the VRF benchmark. The algorithm eIG_Rep is run five times independently on each instance. It can be observed that the new algorithm generates stable solutions each time. The solution quality improves over the iterations as the perturbation size of the new destruction and construction method varies, escaping from local optima.

Case study on maximizing the number of accepted orders
To check the capability of each algorithm with the objective of maximizing the acceptance of new orders, a case study is conducted. It is assumed that ten orders with individually assigned due dates arrive continuously and randomly; both the arrival time and the due date of each order are generated randomly. The ten orders are taken from the Taillard benchmark [63], i.e., instances TA011-20.
Table 5 shows the test results of order acceptance. No more than half of the new orders are completed by their due dates when the RS_IG and RT_IG algorithms are used. The RVNS algorithm can accept seven new orders, while the new eIG_Rep algorithm completes nine new orders before their due dates. It can be seen that the algorithms with order mixing have a better chance of obtaining high-quality schedules in terms of makespan: more orders can be absorbed and completed by their due dates. eIG_Rep is more effective than the RS_IG, RT_IG, and RVNS algorithms in this case.

Conclusions
This paper proposes a new algorithm for solving dynamic scheduling problems in permutation flowshops with new order arrivals. The new algorithm, named eIG_Rep, is based on a new destruction and construction method together with a novel, effective repair method. The new destruction and construction method changes the strength of the perturbation when the incumbent's performance remains unchanged for a number of iterations, in order to escape from local optima while maintaining high computational efficiency. The new repair method, developed here for the first time, swaps tardy and early jobs, converting infeasible solutions into feasible ones while preserving their solution quality as much as possible.
Statistical test results show that the new algorithm eIG_Rep is effective and outperforms the reference algorithms RS_NEH, RT_NEH, MR_WWD, RS_IG, RT_IG, and RVNS. The eIG_Rep algorithm can be applied to three types of problems, i.e., due date setting, order acceptance, and maximizing the number of accepted new orders. For the problem of setting tight due dates for orders, both the Taillard and VRF test beds are used for validation, and the effectiveness of the new algorithm is demonstrated on both benchmarks. For the problems of order acceptance and revenue maximization, ten orders with random arrival times and due dates are assumed when evaluating the new algorithm. Test results show that more orders can be completed by their due dates using eIG_Rep than with existing algorithms.

Fig. 8 Convergence curve of eIG_Rep on the TA052 Taillard instance
Fig. 9 Convergence curve of eIG_Rep on the VRF 30_20_2 instance
The italic number is the best ARPD value for each problem.