Minimizing total completion time on a single machine with step improving jobs

Abstract Production systems often experience a shock or a technological change, resulting in performance improvement. In such settings, job processing times become shorter if jobs start processing at, or after, a common critical date. This paper considers a single machine scheduling problem with step-improving processing times, where the improvements are job-dependent. The objective is to minimize the total completion time. We show that the problem is NP-hard in general and discuss several special cases which can be solved in polynomial time. We formulate a Mixed Integer Programming model and develop an LP-based heuristic for the general problem. Finally, computational experiments show that the proposed heuristic produces near-optimal solutions very efficiently.


Introduction
Most classical scheduling models assume that job processing times remain constant over time. However, in various real-life production and manufacturing systems job processing times change over time. The factors contributing to varying processing times may be machine or worker learning, machine deterioration, production system upgrades or technological shocks. Gawiejnowicz (2008) provides a wide variety of applications and up-to-date results on scheduling with time-dependent jobs. Time-dependent scheduling models can be classified into three general categories. The first category consists of linear models, whereby job processing times increase or decrease linearly as a function of the job's start time. The first papers to introduce time-deteriorating jobs were Gupta et al (1987), Gupta and Gupta (1988), and Browne and Yechiali (1990). The authors of these papers assumed that job processing times consist of two components. The former is a basic processing time which remains constant throughout the planning horizon. The latter varies over time and is the product of the job start time and a job-dependent deterioration index. The makespan minimization problem on a single machine can be solved in polynomial time using a simple sorting rule; however, the total completion time counterpart remains open even when the basic processing times are assumed to be identical for all jobs (see Mosheiov, 1991). The second category of time-dependent scheduling models addresses piecewise linear job processing time functions. The underlying assumption of these models is that job processing times remain constant until a certain event occurs. Following changes in the production setting, job processing times begin to increase, or in some cases decrease, linearly at a predefined point in time.
Cheng et al (2004) and the references therein describe a variety of applications for this class of models, including fire-fighting efforts, searching for items under worsening light or weather conditions, and maintenance scheduling. Wu et al (2009) propose a branch-and-bound algorithm as well as heuristic algorithms for solving a single machine scheduling problem with the objective of minimizing the makespan. Farahani and Hosseini (2013) show that a single machine scheduling problem with the objective of minimizing the cycle time, where the job processing time is a piecewise linear function of its start time, is polynomially solvable. The third category is devoted to scheduling settings where job processing times have two possible values, depending on their start time. The motivation for such models arises from systems which experience a single discrete change in setting, for example, the introduction of a new technology or the purchase of a more efficient machine. Under this model job processing times do not change continuously over time and are only affected by the change in the machine setting. These models are referred to as step-deteriorating or step-improving problems, depending on the context.
Step-deteriorating models were first introduced by Sundararaghavan and Kunnathur (1994) and later studied by Mosheiov (1995), Cheng and Ding (2001) and Jeng and Lin (2004). The makespan minimization problem was shown to be NP-hard in the ordinary sense by both Mosheiov (1995) and Cheng and Ding (2001). The same problem under the assumption that each job has a distinct event which affects its processing time is also NP-hard (see Jeng and Lin, 2004). Cheng and Ding (2001) also addressed the total completion time objective, showing that the problem is strongly NP-hard. In a recent paper, Cheng et al (2006) study a makespan minimization problem with step-improving processing times and a common critical date. They prove the problem is NP-hard and develop an efficient pseudo-polynomial time algorithm. They also introduce a Fully Polynomial Time Approximation Scheme (FPTAS) and an online algorithm with a worst case ratio of 2. Ji et al (2007) address the same problem and develop a simple linear time algorithm with a worst case ratio of 5/4. For earlier results on scheduling with step-improving or step-deteriorating processing times, we refer the reader to two extensive survey papers by Alidaee and Womer (1999) and Cheng et al (2004).
More recently, several studies have investigated time-dependent scheduling models combined with other variations of classical scheduling models. Cai et al (2011) consider the single machine scheduling problem with time-dependent processing times and machine breakdowns. The objectives are to minimize the expected makespan and the variance of the makespan. Mor and Mosheiov (2012) consider a classical batch scheduling model with step-deterioration of processing times. The objective is to minimize flowtime. Qian and Steiner (2013) consider a single machine scheduling problem with learning/deterioration effects and time-dependent processing times. By solving a special assignment problem on product matrices, they solve the problem in near-linear time. Lu et al (2014) consider a single machine scheduling problem with decreasing time-dependent processing times under the group technology assumption. The group setup times and job processing times are both decreasing linear functions of their starting times. They show that the problem can be solved in polynomial time.
In this paper we consider the same setting as Cheng et al (2006) and Ji et al (2007): we assume a single machine scheduling environment and a common critical date after which job processing times are reduced by a job-dependent amount. To the best of our knowledge, this is the first paper to consider a single machine setting with step-improving jobs and a critical date under a scheduling criterion other than the makespan. We consider the total completion time objective, which is paramount in determining the work in process of a manufacturing system and the level of service provided to customers. The focus on a min-sum type objective gives rise to interesting problem characteristics, various special cases which can be solved optimally and a very efficient approximation scheme for the general setting. We begin by introducing some notation in Section 2. We show that the total completion time minimization problem is NP-hard in Section 3, and discuss several special cases which can be solved in polynomial time in Section 4. We formulate a Mixed Integer Programming (MIP) model and develop an LP-based heuristic for the problem in Section 5. In Section 6, computational experiments are performed. Some concluding remarks and directions for future research are given in Section 7.

Problem description
A set of n non-preemptive jobs N = {1, …, n} is available for processing at time zero. All jobs in N share a common critical date D which affects their processing times. The processing time of job j is specified by two integers a j and b j with 0 ⩽ b j ⩽ a j . If job j begins processing at some time t < D, then its processing time equals a j ; if it starts at some time t ⩾ D, then its processing time is a j − b j . The goal is to find a non-preemptive schedule which minimizes the total completion time. Note that it may be beneficial to keep the machine idle before D, so that a job starts exactly at the critical date and benefits from its shorter processing time.
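As a small illustration (our own sketch, not part of the original model description), the processing-time rule can be expressed in code. The function below evaluates the total completion time of a given sequence, optionally delaying selected jobs to start exactly at D.

```python
# Sketch (illustrative): evaluating a sequence under step-improving
# processing times. A job started at time t < D takes a[j] units;
# started at t >= D it takes a[j] - b[j].

def total_completion_time(sequence, a, b, D, delay_to_D=()):
    """Sum of completion times; jobs in delay_to_D idle until time D."""
    t = 0
    total = 0
    for j in sequence:
        if j in delay_to_D and t < D:
            t = D                               # insert idle time up to D
        t += a[j] - b[j] if t >= D else a[j]    # step-improving rule
        total += t
    return total
```

For instance, with a = [4, 6], b = [2, 3] and D = 5, scheduling both jobs back to back gives a total of 14, while idling before the second job so that it starts at D gives 12.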
A schedule σ is an assignment of the jobs in N to the single machine such that each job receives an appropriate amount of processing time, and no two jobs can be processed on the single machine at the same time. Let S j (σ) and C j (σ) denote the start and finish times of job j in schedule σ, respectively. We represent S j (σ) as S j and C j (σ) as C j when schedule σ is clear from the context. Let σ* represent the optimal schedule, that is, the one which minimizes the total completion time, and let ∑C j (σ*) denote the minimum total completion time associated with this schedule. Let [i] denote the job in the ith position of the job sequence.
The standard classification scheme for scheduling problems (Graham et al, 1979) is α 1 | α 2 | α 3 , where α 1 describes the machine structure, α 2 gives the job characteristics or restrictive requirements, and α 3 defines the objective function to be minimized. We extend this scheme to provide for the step-improving processing times with a common critical date by using p j = a j or p j = a j − b j and d j = D in the α 2 field. Our problem can be denoted as 1 | p j = a j or p j = a j − b j , d j = D | ∑C j .

Complexity results
This section establishes the computational complexity of the problem under consideration. We use a reduction from the following problem, which is known to be NP-complete.
Even-odd partition (Garey and Johnson, 1979): Given positive integers x 1 , x 2 , …, x 2t , where x 1 ⩽ x 2 ⩽ ⋯ ⩽ x 2t , does there exist a set X ⊆ T = {1, 2, …, 2t} such that ∑ j ∈ X x j = ∑ j ∈ T\X x j = r, and X contains exactly one of {x 2j − 1 , x 2j } for j = 1, 2, …, t? Assume without loss of generality that r > 3t; if this is not the case, we can multiply each partition element (and hence r) by 3t without changing the answer to the problem.
Given any instance of the even-odd partition problem with t, r and x j for j = 1, …, 2t, we consider the following instance of 1 | p j = a j or p j = a j − b j , d j = D | ∑C j , called instance I:

n = 2t + 1; a j = 2r 2 + x j and b j = 2r 2 for j = 1, …, 2t; a 2t + 1 = 2r 2 + 2r; b 2t + 1 = 2r 2 ; D = 2tr 2 + r.

A threshold value K is defined as K = Σ j = 1 t (t − j + 1)(x 2j − 1 + x 2j ) + (3t 2 + 3t)r 2 + (t + 4)r. In the following we prove that there exists a schedule for this instance of 1 | p j = a j or p j = a j − b j , d j = D | ∑C j with ∑C j ⩽ K if and only if there exists a solution to the even-odd partition problem.
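As a sanity check on the reduction (a sketch of ours, not part of the original text), instance I and the threshold K can be generated mechanically from a partition instance:

```python
# Sketch: constructing instance I from an even-odd partition instance
# x_1, ..., x_2t with target r. Jobs are 0-indexed here; the last list
# entry corresponds to job 2t+1 of the text.

def build_instance(x, r):
    t = len(x) // 2
    a = [2 * r**2 + xj for xj in x] + [2 * r**2 + 2 * r]
    b = [2 * r**2] * (2 * t + 1)
    D = 2 * t * r**2 + r
    K = (sum((t - j + 1) * (x[2*j - 2] + x[2*j - 1]) for j in range(1, t + 1))
         + (3 * t**2 + 3 * t) * r**2 + (t + 4) * r)
    return a, b, D, K
```

For x = (1, 2, 4, 3) and r = 5, a yes-instance whose odd-indexed elements 1 and 4 sum to r, scheduling the odd-indexed jobs first and the remaining jobs from time D yields a total completion time of exactly K.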
Lemma 1 If there exists a solution to the even-odd partition problem, then there exists a feasible schedule for instance I with ∑C j ⩽ K.
Proof Since there exists a solution to even-odd partition, the elements can be reindexed so that exactly one element of each pair {x 2j − 1 , x 2j } carries an odd index and Σ j = 1 t x 2j − 1 = Σ j = 1 t x 2j = r. Consider a schedule σ as constructed in Figure 1. The jobs 2j − 1 for j = 1, …, t are scheduled in this order without idle time. Then, since Σ j = 1 t a 2j − 1 = 2tr 2 + r = D, jobs 2j for j = 1, …, t and job 2t + 1 are scheduled in this order starting at time D without idle time. Thus,

∑ j ∈ N C j (σ) = Σ j = 1 t (t − j + 1)a 2j − 1 + tD + Σ j = 1 t (t − j + 1)(a 2j − b 2j ) + D + Σ j = 1 t (a 2j − b 2j ) + (a 2t + 1 − b 2t + 1 )
= t(t + 1)r 2 + Σ j = 1 t (t − j + 1)x 2j − 1 + 2t 2 r 2 + tr + Σ j = 1 t (t − j + 1)x 2j + 2tr 2 + r + r + 2r
= Σ j = 1 t (t − j + 1)(x 2j − 1 + x 2j ) + (3t 2 + 3t)r 2 + (t + 4)r = K,

where the second equality uses Σ j = 1 t (a 2j − b 2j ) = Σ j = 1 t x 2j = r. This implies that there exists a feasible schedule with ∑C j ⩽ K for instance I. □ We now show that a feasible solution to instance I implies a solution to even-odd partition. Let σ* denote an optimal schedule for instance I.
Lemma 2 In σ*, job 2t + 1 is the last job to be processed.
Proof Suppose that job 2t + 1 starts processing before D in σ*. Since a 2t + 1 = 2r 2 + 2r > 2r 2 + x j = a j for j = 1, …, 2t, job 2t + 1 and the job in the last position, job [2t + 1], can be interchanged while the total completion time decreases or remains the same. Alternatively, suppose that job 2t + 1 starts processing at or after D in σ*. For the jobs starting at or after D, the SPT (Shortest Processing Time) rule is optimal (Smith, 1956). Therefore, job 2t + 1 must be the last job in σ* because a 2t + 1 − b 2t + 1 = 2r > x j = a j − b j for j = 1, …, 2t. □

Lemma 3 In σ*, there are exactly t jobs that start processing before D, and the total processing time of these jobs is not greater than D.
Proof Observe that a j > 2r 2 for j = 1, …, 2t and 2r 2 t < D < 2r 2 (t + 1). Thus, at most t + 1 jobs can start processing before D. Suppose that t + 1 jobs start processing before D. Then job [t + 1] starts at some time S [t + 1] ⩾ 2tr 2 = D − r, and the total completion time can be decreased by inserting idle time so that job [t + 1] starts processing at D: its start is delayed by at most r while its processing time is reduced by b [t + 1] = 2r 2 . Consequently, exactly t jobs start processing before D in σ*. Furthermore, the total processing time of the first t jobs is not greater than D. Otherwise, job [t] would complete after D; since b [t] = 2r 2 , job [t] and a job [t + j] can be interchanged while the total completion time decreases or remains the same. □

Figure 1 The structure of schedule σ.

Theorem 1 The problem 1 | p j = a j or p j = a j − b j , d j = D | ∑C j is NP-hard.

Proof Lemma 1 shows that a solution to even-odd partition implies a feasible schedule for instance I with ∑C j ⩽ K. To complete the proof, we must now show that a schedule for instance I with ∑C j ⩽ K implies a solution to even-odd partition. From the results of Lemmas 2 and 3, exactly t jobs in {1, …, 2t} start processing before D, while the remaining t jobs in {1, …, 2t} and job 2t + 1 start processing from D. Thus, similar to the analysis in Lemma 1, the total completion time can be expressed as a weighted sum of the x j values of the jobs that start before and after D, and ∑C j ⩽ K can hold only if the x j values of the jobs that start before D sum to exactly r, with exactly one job taken from each pair {2j − 1, 2j}. As a result, the set of jobs that start processing before D provides a solution to the even-odd partition problem, which indicates that a feasible schedule for instance I implies a solution to even-odd partition. By combining this with Lemma 1, there exists a schedule for this instance of 1 | p j = a j or p j = a j − b j , d j = D | ∑C j with ∑C j ⩽ K if and only if there exists a solution to the even-odd partition problem. Therefore, the problem is NP-hard. □

Polynomially solvable cases
In this section, we study several special cases of the problem which can be solved in polynomial time.
4.1. All processing times improve according to a fixed ratio, b j = δa j for all 1 ⩽ j ⩽ n, and there exists k such that Σ j = 1 k a j = D

We first focus on the case where jobs are shortened according to a fixed ratio if they start processing at, or after, the common critical date D. If there exists k such that Σ j = 1 k a j = D, then we show that a simple sorting procedure yields an optimal solution for the problem. Otherwise, the problem remains NP-hard.
Lemma 4 Suppose that a 1 ⩽ a 2 ⩽ … ⩽ a n and b j = δa j for ∀ j where 0 < δ < 1. If there exists k such that Σ j = 1 k a j = D, then the SPT rule with respect to a j is optimal.
Proof For a given schedule, there are two sets of jobs: jobs that start processing before D and jobs that start processing at or after D. Since the SPT rule on each set of jobs is optimal, it suffices to show that the jobs in {1, …, k} are scheduled in the time interval [0, D] in an optimal schedule. Consider a schedule σ where some jobs in {k + 1, …, n} start processing before D. In σ, let A denote the set of jobs that start processing before D, and let B denote the set of jobs that start processing at or after D. Let X = A ∩ {k + 1, …, n} and Y = B ∩ {1, …, k}. Note that the SPT rule on each set of jobs is optimal. Thus, since a i ⩽ a j for i ∈ A\X and j ∈ X, jobs in X do not precede any jobs in A\X. Similarly, since a i ⩽ a j for i ∈ Y and j ∈ B\Y, jobs in Y do not follow any jobs in B\Y. Let x [i] and y [i] denote the jobs in the ith position of the job sequences in X and Y, respectively. Figure 2 depicts the schedule σ. We first assume that | X | > | Y |. From the assumption that a 1 ⩽ a 2 ⩽ … ⩽ a n , we obtain that a x [i] ⩾ a y [i] for i = 1, …, | Y |. Therefore, we can exchange x [i] and y [i] for i = 1, …, | Y | without increasing the start time of x [| Y | + 1] . This implies that k + 1 jobs can start processing before D, which contradicts the assumption that Σ j = 1 k a j = D, that is, at most k jobs can start processing before D. Alternatively, we assume that | X | ⩽ | Y |. As shown in Figure 2, there are two cases: ∑ j ∈ A a j < D and ∑ j ∈ A a j ⩾ D.
Case 1: ∑ j ∈ A a j < D.
Construct a new schedule σ′ by interchanging the jobs in X and the jobs in Y, keeping each group in the same order as in σ. Let t = D − ∑ j ∈ A\X a j . Then, a comparison of the two schedules yields ∑ j ∈ N C j (σ′) ⩽ ∑ j ∈ N C j (σ). The last inequality follows because a y [i] ⩽ a x [i] for i = 1, …, | X |, δ∑ j ∈ Y a j < t and ∑ j ∈ X a j < ∑ j ∈ Y a j . As a result, we can create a new schedule σ′ where jobs in {1, …, k} are scheduled in the time interval [0, D] without increasing the total completion time.
Case 2: ∑ j ∈ A a j ⩾ D.
Let s = ∑ j ∈ X a j and t = D − ∑ j ∈ A\X a j , and observe that s ⩾ t. Construct a new schedule σ′ by interchanging the jobs in X and the jobs in Y, keeping each group in the same order as in σ. Then, a comparison of the two schedules yields ∑ j ∈ N C j (σ′) ⩽ ∑ j ∈ N C j (σ). The last inequality follows because t ⩽ s, a y [i] ⩽ a x [i] for i = 1, …, | X |, δ∑ j ∈ Y a j < s and ∑ j ∈ Y a j ⩾ ∑ j ∈ X a j . As a result, we can create a new schedule σ′ where jobs in {1, …, k} are scheduled in the time interval [0, D] without increasing the total completion time. □ In the following we present an example demonstrating that even when a 1 ⩽ a 2 ⩽ … ⩽ a n and b j = δa j for ∀ j where 0 < δ < 1, if there does not exist any k such that Σ j = 1 k a j = D, then the SPT rule may not be optimal. Consider an example with three jobs, where a 1 = 16, a 2 = 18 and a 3 = 22, while D = 20 and δ = 0.5. Therefore, a j − b j equals 8, 9 and 11 for j = 1, 2, 3, respectively. The SPT rule yields a solution value of 95 (the completion times are 16, 34 and 45, respectively). If we schedule job 2 first, insert an idle time of 2 time units, and then schedule jobs 1 and 3, the total completion time is 85 (with completion times of 18, 28 and 39). Clearly, since at most one job can start processing before the critical date, it is beneficial to sequence first the longest job which does not exceed the critical date D, that is, job 2 in our case.
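A quick numeric check of this example (an illustrative sketch of ours, not from the paper):

```python
# Verifying the example: a = (16, 18, 22), improved times (8, 9, 11), D = 20.
a = [16, 18, 22]
b = [8, 9, 11]       # b_j = 0.5 * a_j
D = 20

def evaluate(order, wait_for_D=None):
    """Total completion time; wait_for_D is a job delayed to start at D."""
    t, total = 0, 0
    for j in order:
        if j == wait_for_D and t < D:
            t = D                             # insert idle time
        t += a[j] - b[j] if t >= D else a[j]
        total += t
    return total

spt = evaluate([0, 1, 2])                     # SPT: completions 16, 34, 45
best = evaluate([1, 0, 2], wait_for_D=0)      # job 2 first, 2 idle units
```

Here spt evaluates to 95 and best to 85, matching the completion times quoted above.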

4.2. All improved processing times are identical: a i − b i = a j − b j for all 1 ⩽ i, j ⩽ n
We consider the special case where jobs have different processing times if scheduled before D, but the same improved processing time if they start processing after D. If D does not coincide with the completion time of a job, it may be beneficial to insert some idle time before D, as shown below.
Lemma 5 Suppose that a 1 ⩽ a 2 ⩽ … ⩽ a n and a i − b i = a j − b j for all 1 ⩽ i, j ⩽ n. The SPT rule with respect to a j is optimal.
Proof Let k denote the maximum number of jobs which can finish processing before D. Construct a schedule σ based on the SPT rule: assign the first k jobs in non-decreasing order of a j before D. If Σ j = 1 k + 1 a j ⩽ D + a k + 1 − b k + 1 (which is equivalent to Σ j = 1 k a j ⩽ D − b k + 1 ), then job k + 1 starts directly after job k finishes at time Σ j = 1 k a j ; if Σ j = 1 k a j > D − b k + 1 , then idle time is inserted and job k + 1 starts at time D. The remaining jobs are assigned in non-decreasing order of a j − b j without any idle time. We now show that any feasible schedule can be transformed into σ without increasing its total completion time. Take a feasible schedule σ′. Let k′ denote the number of jobs which finish processing before D in σ′. It is clear that in order to minimize the total completion time of the first k′ jobs they must be ordered according to the SPT rule with respect to a j . The remaining jobs can be ordered arbitrarily as they have identical improved processing times; without loss of generality we assume that the jobs processed after time D are also in SPT order. Now construct a new schedule σ″ by interchanging jobs [k′] and [k′ + 1], without changing the start times of the jobs in positions k′ and k′ + 1. From the assumption that the first k′ jobs and the latter n − k′ jobs are in SPT order, we have that a [k′] ⩾ a [k′ + 1] . Therefore,

∑ j ∈ N (C j (σ″) − C j (σ′)) = a [k′ + 1] − a [k′] ⩽ 0.

By repeating this procedure, we can construct σ without increasing the total completion time of the given schedule. □

4.3. All initial processing times are identical: a j = a for all 1 ⩽ j ⩽ n

Next, we consider a special case for which all jobs have the same processing time initially but different processing times if they begin processing no earlier than D. It is clear that exactly k = ⌊D/a⌋ jobs can finish their processing before D.
The focus is therefore on job [k + 1], the job in the (k + 1)st position, which may either start before D or be delayed until D.

Lemma 6 Suppose that a j = a for ∀ j and that b 1 ⩾ b 2 ⩾ ⋯ ⩾ b n . The SPT rule with respect to a j − b j (or the LPT (Longest Processing Time) rule with respect to b j ) is optimal.
The proof of Lemma 6 is based on a standard pair-wise interchange argument and is similar to the proof of Lemma 5. For the sake of brevity we omit the formal proof. These results are meaningful considering that the problem is NP-hard even when b j = b, but polynomially solvable either when a j = a or when a i − b i = a j − b j .
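The construction described in the proof of Lemma 5 lends itself to a direct implementation. The sketch below is ours, written under the lemma's assumption that all improved processing times a j − b j are equal; the straddling job waits for the critical date exactly when the delayed completion D + a j − b j beats the immediate completion t + a j , that is, when t > D − b j .

```python
# Sketch of the schedule from Lemma 5 (assumes all a[j] - b[j] are equal):
# SPT order of a_j; a job idles until D iff its delayed completion is smaller.

def lemma5_schedule(a, b, D):
    start = {}
    t = 0
    for j in sorted(range(len(a)), key=lambda j: a[j]):  # SPT w.r.t. a_j
        if t < D and t > D - b[j]:   # wait iff D + (a[j]-b[j]) < t + a[j]
            t = D                    # insert idle time before job j
        start[j] = t
        t += a[j] - b[j] if t >= D else a[j]
    return start
```

For example, with a = [6, 8], b = [4, 6] (both improved times equal 2) and D = 7, the second job idles one unit and starts at 7, completing at 9 instead of 14.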

The LP-based heuristic
In this section, we develop an LP-based heuristic for the proposed problem. The scheduling problem is expressed in an MIP formulation in Section 5.1. Then, an LP-based heuristic is developed on the basis of an LP relaxation of MIP in Section 5.2.

The MIP formulation
In this section we develop an MIP formulation of the proposed problem. Without loss of generality, we assume that the jobs are indexed in non-decreasing order of a j .

Variables

For all i, j ∈ N, let x ij = 1 if job j is assigned to the ith position in the sequence and starts processing before D, and x ij = 0 otherwise; similarly, let y ij = 1 if job j is assigned to the ith position and starts processing at, or after, D, and y ij = 0 otherwise. In particular,

x ij + y ij ⩽ 1 ∀i ∈ N, ∀j ∈ N,
x ij , y ij ∈ {0, 1} ∀i ∈ N, ∀j ∈ N.

The objective function (1) seeks to minimize the total completion time. Constraint (2) implies that the starting time of the job in the (i + 1)st position in the job sequence is greater than or equal to the completion time of the job in the ith position. Constraints (3) and (4) imply that the processing time of the job in the ith position in the job sequence is a [i] if the job starts before D, and a [i] − b [i] if the job starts at or after D; here M is a sufficiently large number. Constraint (5) implies that each job starts either before or after D. Constraints (6) and (7) imply that exactly one job occupies each position in the job sequence. Constraint (8) implies that x ij and y ij are binary variables. Constraint (9) implies that the starting times of the jobs are non-negative.
Since the jobs are indexed in non-decreasing order of a j , the right-hand side of Constraint (10) is not greater than zero, which imposes ∑ j ∈ N x ij = 0. As a result, Constraint (10) is equivalent to Constraint (3) in the MIP. □ Observe that Constraint (10) is stronger than Constraint (3). We use Constraint (10) instead of Constraint (3) in order to reduce the search space when seeking an optimal solution of the MIP.
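For validating the MIP (or any heuristic) on very small instances, an exhaustive reference is convenient. The sketch below is ours, not the paper's; it relies on the observation, consistent with the schedules discussed above, that idle time is only ever useful for postponing a single job's start to exactly D.

```python
# Brute-force optimum for tiny instances (O(n! * n^2)): for each
# permutation, try every position whose job may idle until the date D.
from itertools import permutations

def brute_force_optimum(a, b, D):
    n = len(a)
    best = float("inf")
    for perm in permutations(range(n)):
        for cut in range(n + 1):              # cut == n means "no idle time"
            t = total = 0
            for pos, j in enumerate(perm):
                if pos == cut and t < D:
                    t = D                     # idle until D before this job
                t += a[j] - b[j] if t >= D else a[j]
                total += t
            best = min(best, total)
    return best
```

On the three-job example of Section 4.1 (a = 16, 18, 22, improved times 8, 9, 11, D = 20) this returns 85, the value obtained there by idling before job 1.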

Heuristic
In this section, we develop an LP-based heuristic using the optimal solution of an LP relaxation of the MIP. To obtain the relaxed problem, we relax the binary constraints on x ij and y ij , Constraint (8), and, using Lemma 7, we develop the LP relaxation (LPR) of the MIP as follows.
In the LP-based heuristic, let X and Y denote two disjoint subsets of the job set N, where jobs in X start processing before time D and jobs in Y start processing at, or after, time D. The heuristic classifies each job into one of the two sets, X and Y, according to an optimal solution of the LPR. Then, the jobs in X are scheduled in non-decreasing order of a j , and the jobs in Y are scheduled in non-decreasing order of a j − b j .
We now present a formal description of the heuristic.
The LP-based heuristic algorithm: Construction of a schedule for 1 | p j = a j or p j = a j − b j , d j = D | ΣC j .
Input: a j , b j and D
Output: S j and C j
Code:
1. Initialization: X ← ∅ and Y ← ∅
2. Obtain an optimal solution of the LPR: solve the LPR, and let x ij r and y ij r denote the optimal solution values
3. Construction of X and Y
4. Construction of a schedule
4.1 Schedule jobs in X in non-decreasing order of a j from time zero
4.2 Schedule jobs in Y in non-decreasing order of a j − b j from time max{D, Σ j ∈ X a j }

The following example demonstrates the LP-based heuristic algorithm for a small problem containing three jobs.
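Steps 4.1 and 4.2 can be sketched as follows. The rule for extracting X and Y from the fractional x ij r and y ij r values in Step 3 is not reproduced here; the function below simply takes the resulting partition as input and builds the schedule (our own sketch, assuming jobs placed in X indeed start before D).

```python
# Sketch of Steps 4.1-4.2: given the partition (X, Y) produced in Step 3,
# jobs in X run from time zero in SPT order of a_j (assumed to start
# before D), and jobs in Y run from max(D, total load of X) in SPT order
# of the improved times a_j - b_j.

def build_schedule(X, Y, a, b, D):
    start = {}
    t = 0
    for j in sorted(X, key=lambda j: a[j]):          # Step 4.1
        start[j] = t
        t += a[j]
    t = max(D, t)                                     # Y starts no earlier than D
    for j in sorted(Y, key=lambda j: a[j] - b[j]):   # Step 4.2
        start[j] = t
        t += a[j] - b[j]
    return start
```

For instance, with a = [16, 18, 22], b = [8, 9, 11], D = 20 and the partition X = {0, 1}, Y = {2}, job 2 starts at max(20, 34) = 34 with its improved processing time.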

Computational study
In this section, we undertake extensive numerical tests to evaluate the quality of the solutions found for problem 1 | p j = a j or p j = a j − b j , d j = D | ∑C j by the proposed heuristic. The heuristic algorithm is coded in GAMS/CPLEX with the default settings, and run on a personal computer with an Intel Core i7-2600 3.40 GHz processor. Since no computational study in the existing literature considers the setting addressed in this paper, we have adapted the data generation scheme used in Jeng and Lin (2004). In Jeng and Lin (2004), the authors study a makespan minimization problem under the assumption that job processing times are a non-linear function of their start time and due date; in order to evaluate the performance of their proposed algorithms, the authors undertake a comprehensive computational study. We have adapted their framework to our problem setting for the purpose of measuring the performance of our heuristic algorithm. In our study we aim to isolate the effects, if any, of the following factors: the number of jobs (n), the difference (or ratio) between the job processing times before and after the common due date (a j and b j ), and the restrictiveness of the common due date (D). The results allow us to determine the accuracy of the heuristic algorithm for any specific problem setting.
To test the effects of varying n and D, we let n ∈ {50, 100, 200, 400} and D = α∑a j where α ∈ {0.2, 0.4, 0.6, 0.8}. In order to determine whether or not the range of a j has an impact on the performance of the heuristic, we let a j ~ DU[1, 50], a j ~ DU[1, 100], a j ~ DU[1, 150] and a j ~ DU[1, 200], where DU[l, u] denotes the discrete uniform distribution over the interval [l, u]. Moreover, the ratio of b j to a j may affect the performance of the heuristic. For the associated test, we let b j ~ DU[0, βa j ] where β ∈ {0.1, 0.4, 0.7, 1.0}.
Two sets of experiments were run. In the first set, n and D were allowed to vary, while a j ~ DU[1, 100] and b j ~ DU[0, a j ]. These results are presented in Table 1. In the second set, a j and b j were allowed to vary, while n = 100 and D = 0.6∑a j . These results are presented in Table 2. Since calculating the optimal solution is computationally difficult, we use as a surrogate the lower bound given by the optimal solution value of the LPR. As a result, our performance indicator, the relative error 100 × (z H − z L )/z L , provides an upper bound on the actual relative error, where z H and z L represent the solution value of the proposed heuristic and the lower bound, respectively. Each table entry is the average over 20 instances. Times are given in seconds and include computation time only.
In Table 1, one can observe that the error increases as the number of jobs increases, and decreases as the common critical date (D) increases. This may be due to the quality of the lower bounds obtained from the LPR. As D increases, more jobs start their processing before the common critical date. Therefore, as shown in the proof of Lemma 7, the set of valid inequalities of the LPR becomes larger, that is, the number of Constraints (10) increases. As a result, the LPR yields a tighter lower bound as the common critical date increases.
In Table 2, we observe that the range of a j has no significant impact on the performance of the heuristic. Moreover, the error tends to be largest when b j ~ DU[0, 0.7a j ]; however, there is no linear relationship between the error and the ratio of the standard deviation of a j to the standard deviation of b j .
To evaluate the performance of the proposed heuristic in terms of the deviation from the optimal solutions, we generated 20 instances of the problem with n = 70, a j ~ DU[1, 100], b j ~ DU[1, a j ] and D = 0.4∑a j . Optimal solutions were obtained using GAMS/CPLEX with a fixed time limit of 1 h of CPU time. These results are presented in Table 3. The error is defined as 100 × (z H − z O )/z O , where z H and z O represent the solution values of the heuristic algorithm and the optimal solution, respectively. The heuristic algorithm yields near-optimal solutions in short computation times. The average error is 1.942%, with a worst-case error of 2.633%. In six of the 20 instances, optimal solutions were not obtained within 1 h; in these cases, the best feasible solutions found within the time limit were used for comparison. The average computation time required by the heuristic algorithm is less than 1 s, while approximately 50 min are required by GAMS/CPLEX. Considering that real-life manufacturing systems are often required to process thousands of jobs, even commercial optimization software packages would be unable to find optimal solutions for such problems. Therefore, it is sensible to use the heuristic algorithm instead of an exact algorithm.

Conclusions
We study a total completion time minimization problem on a single machine with step-improving processing times. We establish that the problem is NP-hard for the general case, and develop polynomial time solution procedures for several interesting special cases. For solving the general case, we formulate an MIP model and develop an LP-based heuristic for the problem.
In order to evaluate the effectiveness and efficiency of the heuristic, extensive computational experiments are carried out. The experimental results show that the heuristic yields very close-to-optimal solutions in short computational times. Moreover, the proposed heuristic algorithm performs better as the common critical date increases and as the number of jobs decreases.
Future research may focus on studying step-improving processing times with different scheduling criteria, such as the total weighted completion time. It would also be interesting to extend our results to various multi-machine environments.