1 Introduction

Changeability is a characteristic feature of many real-life systems (e.g., manufacturing, industrial, or computer systems) and can in general be classified as either an improvement or a degradation of the system. The improvement was first discovered and described in quantitative form in the aircraft industry by Wright (1936). He observed that the total number of hours needed to assemble an aircraft decreases as the number of assembled aircraft increases, due to the growing experience of the workers (the learning effect). The same resources therefore make it possible to produce more units in a shorter period of time. On this basis, Wright (1936) formulated a relation, called the learning curve, in which the time p(v) required to produce the vth unit is defined as follows:

$$ p(v) = av^{\alpha},$$
(1)

where a is the time required to produce the first unit and α≤0 is the learning index.
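To illustrate (1), the following sketch (in Python, with illustrative values for a and α that are not taken from the source) computes unit times under Wright's learning curve; an "80% curve" corresponds to α = log2 0.8, i.e., doubling the cumulative output multiplies the unit time by 0.8:

```python
# Illustrative sketch of Wright's learning curve (1): p(v) = a * v**alpha.
# The values of a and alpha below are assumptions for demonstration.
import math

def unit_time(a, alpha, v):
    """Time required to produce the v-th unit under curve (1)."""
    return a * v ** alpha

a = 100.0               # time to produce the first unit (assumed)
alpha = math.log2(0.8)  # learning index of an 80% curve, alpha <= 0
# Doubling output from unit 1 to unit 2 reduces the unit time to 80%:
assert abs(unit_time(a, alpha, 2) - 0.8 * a) < 1e-9
# Total time for the first 10 units:
total = sum(unit_time(a, alpha, v) for v in range(1, 11))
```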

It is not surprising that the learning effect attracted particular attention in the aircraft industry earlier than in any other, on account of the high cost of aircraft (Kerzner 1998). The benefits following from the theory of the learning effect were soon recognized by the US War Production Board, and the methodology proposed by Wright was used to plan the production of airplanes for World War II needs (see Roberts 1983). Further empirical studies on the learning effect, carried out over the last 60 years, proved its significant impact on productivity in manufacturing systems specialized in hi-tech electronic equipment (Adler and Clark 1991), memory chips and circuit boards (Webb 1994), electronic guidance systems (Kerzner 1998) and many others (e.g., Carlson and Rowe 1976; Cochran 1960; Holzer and Riahi-Belkaoui 1986; Jaber and Bonney 1999; Lien and Rasch 2001; Yelle 1979). Most of these investigations confirmed the high accuracy of (1); however, they also revealed that some systems are more precisely described by other characteristics (learning curves), e.g., S-shaped, Stanford-B or DeJong (see Holzer and Riahi-Belkaoui 1986; Jaber and Bonney 1999; Lien and Rasch 2001).

In general, the learning effect takes place in typical human activity environments or in automated manufacturing, where human support for machines is needed during activities such as operating, controlling, setup, cleaning, maintenance and failure removal. Although learning can cease with time, it is often renewed by factors such as new inexperienced employees, an extended product assortment, new machines, more refined equipment, software updates or general changes in the production environment (Biskup 2008).

However, the learning effect is not limited to areas dominated by humans. For instance, highly automated manufacturing systems may benefit from the fact that if a machine does the same job repetitively, then the knowledge from previous iterations can be used to improve the performance of the system when the job is processed the next time. An example of such a method is iterative learning control, which compensates for a repetitive error in robot motion control (see Arimoto et al. 1984).

The learning effect also occurs in machine learning and artificial intelligence. For instance, reinforcement learning algorithms, which usually learn and operate on-line (see Whiteson and Stone 2004), improve their efficiency on the basis of interactions with an environment (learning-by-doing). Thus, the performance of systems optimized by such algorithms improves over succeeding iterations (e.g., Buşoniu et al. 2008; Janiak and Rudek 2011).

The described learning effect, which results from repeating similar operations (learning-by-doing), is called autonomous learning (e.g., Yelle 1979). The theory of the learning effect enables (based on (1)) efficient estimation of the variable production time and/or cost caused by learning. Thus, it allows improvements in lot sizes, worker management, energy/resource consumption, etc. (e.g., Keachie and Fontana 1966; Kerzner 1998; Li and Cheng 1994; Webb 1994). Nevertheless, it is not possible to optimize time and/or cost objectives beyond the reductions resulting from learning-by-doing (Biskup 2008). This follows from the innate nature of autonomous learning and from the assumption of identical products; therefore, the control (management) abilities provided by the theory of the learning effect are significantly limited.

However, in many manufacturing systems jobs (e.g., products) are not identical but similar, and the time required to process each of them can differ. This rigorous constraint on identical jobs was relaxed by Biskup (1999). Based on (1), he assumed that the time p j (v) required to process job j (e.g., to produce unit j) decreases as the number v of processed similar (not necessarily identical) jobs increases; this relation is described as follows:

$$ p_j(v) =a_j v^{\alpha},$$
(2)

where a j is the time required to process job j if no learning exists (i.e., if it is processed as the first one). On this basis, a new model was obtained that offers an additional control variable, namely the sequence of processed jobs. Thus, it became possible to optimize production objectives such as the maximum completion time of jobs, the maximum lateness or the number of late jobs (e.g., Bachman and Janiak 2004; Cheng and Wang 2000; Cheng et al. 2008; Lee and Lai 2011; Lee et al. 2010; Wu et al. 2007; Wang and Wang 2011; Yang and Kuo 2009; Zhang et al. 2011), which were beyond control using the theory of the learning effect alone. Fundamental to this approach is that control decisions (the schedule) do not influence learning (which in many cases is impossible anyway), but they allow the learning abilities of the system to be utilized efficiently to optimize given objectives. Thus, not surprisingly, this direction of research has attracted particular attention in scheduling theory, especially in work devoted to manufacturing systems (for a survey see Biskup 2008).

On the other hand, degradation of a system can be caused by the deterioration/aging of machines (understood as lathes, chemical cleaning baths, etc.) or by the fatigue of human workers, which affects production parameters such as the time/cost required to produce a single unit (e.g., Dababneh et al. 2001; Eilon 1964; Mandich 2003; Stanford and Lister 2004). As with learning systems, the objectives of deteriorating systems can also be controlled (within a specified range) by the schedule of processed jobs. In scheduling theory there are two approaches to modelling deterioration. Although both describe the dependency between job processing times and deteriorating factors, each represents the deteriorating factor by different parameters. Namely, the first approach, called the deteriorating effect, assumes that job processing times are non-decreasing functions of their starting times; it has been extensively studied in the last decade (see Cheng et al. 2004 and Gawiejnowicz 2008).

However, scheduling models consistent with this approach are not relevant to many real-life industrial problems. This is especially significant for environments where deterioration does not take place (or is negligible) during machine (worker) idle times, e.g., those caused by different job release dates. Such inconveniences are absent in the second approach, called the aging/fatigue effect, in which job processing times are described by non-decreasing functions of the actual condition (fatigue) of machines affected by the jobs already processed (e.g., Cheng et al. 2010; Janiak and Rudek 2010; Kuo and Yang 2008; Rudek and Rudek 2011; Yang et al. 2010). The similarity of jobs usually allows the assumption that each of them has the same impact on machine fatigue (e.g., Cheng et al. 2008; Gawiejnowicz 1996; Mosheiov 2001; Yang and Yang 2010). Therefore, we will focus on this approach, where the time p j (v) required to process job j increases with the number v of processed jobs. This relation can be described by (2), where α≥0 (see Mosheiov 2001).

In this paper, we analyse the computational complexity of single-machine scheduling problems with linear models of learning/aging and the following minimization objectives: the maximum lateness, the makespan with release dates and, additionally, the number of late jobs. The main theoretical result of this paper is the proof that the considered problems are strongly NP-hard even with linear functions of job processing times. Although maximum lateness minimization scheduling problems with position-dependent job processing times are broadly discussed (e.g., Bachman and Janiak 2004; Cheng and Wang 2000; Cheng et al. 2008; Lee and Lai 2011), their computational complexity is not fully determined. Therefore, to complement these results and make their analysis coherent, we will determine their computational complexity. Namely, we will prove that the problems are strongly NP-hard even with the simplest possible (nontrivial) mathematical models of job processing times. Moreover, it will be shown that if the models are any simpler (i.e., trivial), the related problems are polynomially solvable. Thus, we will determine the boundary between the polynomial solvability of the problems and their strong NP-hardness. Thereby, we will complement the results provided, inter alia, by Bachman and Janiak (2004) and Cheng and Wang (2000) and, foremost, complete the results concerning maximum lateness minimization with position-dependent job processing times; any NP-hardness proof is more significant if it is done for the simplest possible problem, as is the case in this paper.

On the other hand, the practical aspect of this research is to show that the maximum acceptable simplification of job processing time functions (their linearization) does not decrease the complexity of the considered problems. Therefore, it does not reduce the effort required to obtain optimal control decisions, but rather reduces their accuracy. Additionally, we also prove that the considered problems with an arbitrary function describing the decrease/increase of job processing times are polynomially solvable if the function is the same for each job.

The remainder of this paper is organized as follows. Section 2 contains the problem formulation. The computational complexity of the considered problems is determined in Sect. 3, and some polynomially solvable cases are provided in Sect. 4. Finally, Sect. 5 concludes the paper.

2 Problem formulation and notation

In this section, we will formulate scheduling problems with two phenomena: aging (fatigue) and learning.

There are given a single machine and a set J={1,…,n} of n jobs (e.g., tasks, products, cleaned items) that have to be processed by the machine; there are no precedence constraints between jobs. The machine is continuously available and can process at most one job at a time. Once it begins processing a job, it continues until the job is finished. Each job is characterized by its aging/learning curve p j (v), which describes the increase/decrease of the time required to process this job depending on the number of jobs completed before it. In other words, we will say that p j (v) is the processing time of job j if it is processed as the vth job in a sequence. Moreover, each job j is also characterized by the normal processing time a j , i.e., the time required to process the job if the machine is not influenced by aging/learning (a j =p j (1)). Other job parameters are the release date r j , the time at which the job becomes available for processing, and the due date d j , by which it should be completed.

For the aging effect, the processing time (aging/fatigue curve) of job j is described by a linear function of its position v in a sequence:

$$ p_j(v)=a_jv.$$
(3)

On the other hand, for the learning effect the processing time (learning curve) is given as follows:

$$ p_j(v)=a-b_jv,$$
(4)

where a is the normal processing time common for all jobs (a j =a for j=1,…,n) and b j is the learning ratio of job j. Thus, we consider the simplest linear aging/learning models for processing non-identical jobs (the parameters are not common for all jobs).

We also consider problems where the learning/aging curve is, up to the normal processing time, identical for all jobs. The processing times of such jobs are described as follows:

$$ p_j(v)=a_j+f(v),$$
(5)

where f(v) is an arbitrary function of a job position in a sequence, such that f(1)=0 and a j +f(v)>0 for j,v=1,…,n. Note that f(v) models both learning (f(v)<0) and aging (f(v)>0) for v=1,…,n.

As mentioned in the previous section, the objectives of the considered aging/learning systems can be controlled by the sequence (schedule) of processed jobs (e.g., manufactured products). Therefore, let us define the control variables (schedule) formally.

Let π=〈π(1),…,π(i),…,π(n)〉 denote a sequence of jobs (a permutation of the elements of the set J), where π(i) is the job processed in position i in this sequence. By Π we will denote the set of all such permutations. For a given sequence (permutation) π, we can easily determine the completion time C π(i) of the job placed in the ith position in π from the following recursive formula:

$$ C_{\pi(i)} = \max\{C_{\pi(i-1)}, r_{\pi(i)}\} +p_{\pi(i)}(i),$$
(6)

where C π(0)=0 and the lateness L π(i) is defined as follows:

$$ L_{\pi(i)} = C_{\pi(i)} - d_{\pi(i)}.$$
(7)

We will say that job π(i) is late if L π(i)>0. The objective is to find such an optimal control, i.e., sequence (schedule) π ∈Π of jobs on the single machine, which minimizes one of the following objective functions: the maximum completion time (makespan) \(C_{\max} \triangleq\max _{i=1,\ldots,n}\{C_{\pi^{*}(i)} \}\) (i.e., \(C_{\max} \triangleq C_{\pi^{*}(n)}\)), the maximum lateness \(L_{\max} \triangleq\max_{i=1,\ldots,n}\{ L_{\pi^{*}(i)} \}\) and the number of late jobs \(\sum_{i=1}^{n} U_{\pi^{*}(i)}\), where

$$U_{\pi^{*}(i)} =\begin{cases}0,&C_{\pi^{*}(i)} \leq d_{\pi^{*}(i)}\\1,&C_{\pi^{*}(i)}> d_{\pi^{*}(i)}\end{cases}$$

and \(U_{\pi^{*}(i)}=1\) means that job π*(i) is late.

Formally, the optimal control (schedule) π*∈Π for the considered minimization objectives is defined as π* ≜ argmin π∈Π {C π(n)}, π* ≜ argmin π∈Π {max i=1,…,n {L π(i)}}, and \(\pi^{*} \triangleq \mathrm{argmin}_{\pi\in\Pi}\{\sum_{i=1}^{n} U_{\pi (i)}\}\), respectively.
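The definitions above can be summarized in a short sketch (Python, with hypothetical job data) that evaluates a given permutation by the recursions (6) and (7) and returns the three objective values:

```python
# Evaluate a permutation pi using (6)-(7); job data are illustrative.
def evaluate(pi, p, r, d):
    """Return (C_max, L_max, number of late jobs) for permutation pi."""
    C, L_max, late = 0.0, float("-inf"), 0
    for i, j in enumerate(pi, start=1):
        C = max(C, r[j]) + p(j, i)   # completion time, formula (6)
        L = C - d[j]                 # lateness, formula (7)
        L_max = max(L_max, L)
        late += L > 0                # U = 1 iff the job is late
    return C, L_max, late

a = [3.0, 1.0, 2.0]                  # normal processing times (assumed)
r = [0.0, 0.0, 4.0]                  # release dates (assumed)
d = [4.0, 2.0, 15.0]                 # due dates (assumed)
p = lambda j, v: a[j] * v            # aging model (3)
Cmax, Lmax, nlate = evaluate([1, 0, 2], p, r, d)
```

For the sequence 〈2,1,3〉 above this yields C max =13, L max =3 and one late job.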

For convenience and to keep an elegant description of the considered problems, we will use the three-field notation scheme X|Y|Z (see Graham et al. 1979), where X describes the machine environment, Y describes job characteristics and constraints, and Z represents the minimization objective. According to this notation, the problems will be denoted as follows: 1|r j ,ALE|C max, 1|ALE|L max and 1|ALE|∑U j , where ALE∈{p j (v)=a j v, p j (v)=a−b j v, p j (v)=a j +f(v)}. If r j =0 for j=1,…,n, then it is omitted in the notation.

3 Computational complexity

In this section, we will prove that the considered problems are strongly NP-hard. First, we will determine the computational complexity of the maximum lateness minimization problem with the aging effect, and next with the learning effect. The strong NP-hardness proofs for both problems are similar and based on the same idea; however, the problem with aging is simpler, thus it is analyzed first. The strong NP-hardness of minimizing the number of late jobs with aging/learning follows from the results for the maximum lateness minimization problems. Next, we will prove, on the basis of a problem equivalency, that makespan minimization with release dates is also strongly NP-hard under the aging/learning models.

3.1 Aging effect

First, note that the problem \(1|p_{j}(v)=a'_{j} + b'_{j} v|L_{\max}\) (where \(a'_{j}\) is the normal processing time of job j and \(b'_{j}\) is its aging ratio) was proved to be strongly NP-hard (Bachman and Janiak 2004). However, we will show that even the simplest (nontrivial) problem 1|p j (v)=a j v|L max is strongly NP-hard. To do so, we will provide a pseudopolynomial time transformation from the strongly NP-complete problem 3-Partition (Garey and Johnson 1979) to the decision version of the considered scheduling problem, 1|p j (v)=a j v|L max.

3-Partition (3PP)

(Garey and Johnson 1979)

There are given positive integers m and B and a sequence x 1 ,…,x 3m of 3m positive integers satisfying \(\sum_{q=1}^{3m}x_{q} =mB\) and \(\frac{B}{4} <x_{q} < \frac{B}{2}\) for q=1,…,3m. Does there exist a partition of the set Y={1,…,3m} into m disjoint subsets Y 1 ,…,Y m such that \(\sum_{q \in Y_{i}}x_{q} = B\) for i=1,…,m?

The decision version of the problem 1|p j (v)=a j v|L max (DAEL) is given as follows: Does there exist a schedule π of jobs on the machine for which L max ≤y?

At first, we will present the main idea of the proof. There are given 3m partition jobs (constructed on the basis of the elements of the set Y of 3PP) and mN enforcer jobs (where N=mB). The instances of DAEL are constructed such that optimal schedules have the following properties: the enforcer jobs are partitioned into m subsets E 1,…,E m , each consisting of N jobs; the partition jobs are partitioned into m subsets X 1,…,X m , each consisting of exactly 3 jobs; and the optimal schedule has the form (E 1,X 1,E 2,X 2,E 3,…,E i ,X i ,E i+1,…,X m−1,E m ,X m ), where jobs within each subset are scheduled arbitrarily. If a schedule is not consistent with these properties, then the criterion value L max is always greater than the given value y (i.e., the schedule cannot be optimal). On this basis, we will show that the answer for the constructed instances of DAEL is yes (i.e., L max ≤y) if and only if it is yes for 3PP (i.e., \(\sum_{q\in Y_{i}} x_{q} =B\) for i=1,…,m).

The formal transformation from 3PP to DAEL is given as follows. The instance of DAEL contains the set X={1,…,3m} of 3m partition jobs (constructed on the basis of the elements of the set Y of 3PP) and the set E={e 1,…,e mN } of mN enforcer jobs. The enforcer jobs can be partitioned into m sets E i ={e N(i−1)+1,…,e Ni } for i=1,…,m, such that jobs within each set E i have the same parameters, i.e., a k =a l and d k =d l for k,l∈E i for i=1,…,m.

The parameters of the enforcer jobs are defined as follows:

for i=1,…,m where

(8)
(9)

for i=1,…,m and the parameters of the partition jobs are

for j=1,…,3m and y=0.

Observe that each parameter of DAEL can be calculated in time bounded by a polynomial in m and B. Moreover, the maximum numerical value of DAEL does not increase exponentially relative to 3PP (i.e., D is O(m 6 B 3)) and the problem size does not decrease exponentially relative to 3PP (i.e., n=O(m 2 B)). Thus, the transformation from 3PP to DAEL is pseudopolynomial.

Let X i denote the set of partition jobs that are processed just after the jobs from the set E i (for i=1,…,m). Define a schedule π , where jobs are scheduled as follows: (E 1,X 1,E 2,X 2,E 3,…,E i ,X i ,E i+1,…,X m−1,E m ,X m ), where X i ={3i−2,3i−1,3i} for i=1,…,m; if this is not the case, we can always renumber the partition jobs. Let V(X i ) and W i denote the sums of the processing times of the partition jobs from X i and of the enforcer jobs from E i , respectively, for the schedule π . Based on the transformation, V(X i ) is defined as:

$$V(X_i) = 3M \bigl( i(N+3)-1 \bigr) + i(N+3)\sum _{q\in X_i}x_q - 2x_{3i-2}-x_{3i-1},$$

for i=1,…,m and it can be estimated as follows:

(10)

It is easy to observe that the sum of processing times of the enforcer jobs from the set E i , i.e., W i , (i=1,…,m) in schedule π is given by (9). The completion time of the last job in E i is \(C_{E_{i}}\) and of the last job in X i is \(C_{X_{i}}\) for i=1,…,m.

Let us also define useful inequalities:

(11)
(12)
(13)

for i=1,…,m. Note also that the processing times of the partition jobs can be estimated as follows: p j (v)>M for j=1,…,3m and v=1,…,m(N+3).

On this basis, we will provide properties of an optimal solution for DAEL.

Lemma 1

The optimal sequence of jobs for the problem 1|p j (v)=av,d j =d|L max is arbitrary.

Proof

Trivial. □

Lemma 2

The problem 1|p j (v)=av|L max can be solved in O(nlogn) steps by scheduling jobs according to the non-decreasing order of their due dates (the EDD rule).

Proof

Trivial. □

Lemma 3

The problem 1|p j (v)=a j v|C max can be solved in O(nlogn) steps by scheduling jobs according to the non-increasing order of their normal processing times (LPT rule).

Proof

Trivial. □
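Lemmas 2 and 3 can be checked by brute force on a small instance (a sketch with hypothetical data; the assertions compare the EDD and LPT schedules against all permutations):

```python
# Brute-force check of Lemmas 2 and 3 on illustrative data.
from itertools import permutations

def l_max(pi, a, d):
    """L_max for identical jobs with p(v) = a*v (Lemma 2 setting)."""
    C, best = 0, float("-inf")
    for v, j in enumerate(pi, 1):
        C += a * v
        best = max(best, C - d[j])
    return best

def c_max(pi, a):
    """C_max for p_j(v) = a_j * v (Lemma 3 setting)."""
    return sum(a[j] * v for v, j in enumerate(pi, 1))

# Lemma 2: EDD minimizes L_max when all jobs share the same a.
d = [7, 3, 11, 5]
a_common = 2
edd = sorted(range(4), key=lambda j: d[j])
assert all(l_max(edd, a_common, d) <= l_max(p, a_common, d)
           for p in permutations(range(4)))

# Lemma 3: LPT (non-increasing a_j) minimizes C_max, since large a_j
# should be paired with small position multipliers v.
a = [4, 1, 3, 2]
lpt = sorted(range(4), key=lambda j: -a[j])
assert all(c_max(lpt, a) <= c_max(p, a) for p in permutations(range(4)))
```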

Based on the above lemmas we will prove the following.

Lemma 4

There is an optimal schedule π for the given instance of DAEL in which at most 3(i−1) partition jobs are scheduled before the enforcer jobs from E i (i=1,…,m).

Proof

See Appendix. □

Lemma 5

Jobs in each block E i are processed one after another and between E i and E i+1 exactly 3 partition jobs are scheduled for i=1,…,m−1.

Proof

See Appendix. □

Based on the above lemmas, we will prove the following theorem.

Theorem 1

The problem 1|p j (v)=a j v|L max is strongly NP-hard.

Proof

Based on the given transformation from 3PP to DAEL and on Lemma 5, we construct a schedule π for DAEL given as follows: (E 1,X 1,E 2,X 2,E 3,…,E i ,X i ,E i+1,…,X m−1,E m ,X m ). Recall that the blocks of enforcer jobs are scheduled according to the EDD rule, the order of jobs within each E i is immaterial, and the sequence of jobs within each set X i is arbitrary. To simplify the calculations, renumber the jobs in these sets, i.e., X i ={3i−2,3i−1,3i} for i=1,…,m.

Now we will show that the answer for DAEL is yes (i.e., L max ≤y) if and only if it is yes for 3PP (i.e., \(\sum_{q\in Y_{i}} x_{q} =B\) for i=1,…,m).

“Only if.” Assume that the answer for 3PP is yes. Thus, for each subset Y i (i=1,…,m), \(\sum_{q \in Y_{i}} x_{q} = B\) holds; thereby, for each X i , \(\sum_{q \in X_{i}} x_{q} = B\) also holds. Therefore, V(X i )<V i for i=1,…,m. Obviously, \(C_{E_{1}} =d_{E_{1}}\) for schedule π regardless of the solution of 3PP. The completion times of the enforcer jobs for the schedule π are as follows:

Thus, \(C_{E_{i}} < d_{E_{i}}\) for i=1,…,m and

$$C_{X_m} = C_{E_m} + V(X_m) <C_{E_m} + V_m < \sum_{i=1}^{m}(W_i + V_i ) = D.$$

Thus, L max(π)≤y=0, thereby DAEL has the answer yes.

“If.” Assume now that the answer for 3PP is no. Therefore, there is no partition of the set Y such that \(\sum_{q \in Y_{i}} x_{q} = B\) holds for all i=1,…,m; thereby \(\sum_{q \in X_{i}} x_{q} = B\) does not hold for all i=1,…,m. Note that |X i |=3 for i=1,…,m (this follows from Lemma 5) regardless of the partition of 3PP.

Let \(\sum_{q \in X_{i}} x_{q} = B+\lambda_{i}\) for i=1,…,m and from the assumption of 3PP \(\frac{B}{4} < x_{q} < \frac{B}{2}\) (for q=1,…,3m) follows that \(\frac{3}{4}B<\sum_{q\in X_{i}} x_{q}<\frac{3}{2}B\), thereby \(\lambda_{i} \in(- \frac{B}{4}, \frac {B}{2})\).

Thus, for any partition of the set {1,…,3m} into disjoint subsets X 1,…,X m , there must exist at least two subsets X u and X w (u≠w) such that \(\sum_{q\in X_{u}} x_{q}\neq\sum_{q \in X_{w}} x_{q}\), where u,w∈{1,…,m} and u<w. For this proof, it is sufficient to consider only two cases, since any distribution of λ i (following from the partition of jobs) can be represented by these cases. They are given as follows:

  (a) λ u >0 and λ w <0, such that \(\sum_{i=1}^{u-1}\lambda_{i} = 0\) and w is the index of the first set X w for which \(\sum_{l=u}^{w}\lambda_{l}\leq0\), i.e., \(\sum_{l=u}^{i} \lambda_{l}>0\) for i=u,…,w−1;

  (b) λ u <0 and λ w >0, such that \(\sum_{i=1}^{u-1}\lambda_{i} = 0\) and w is the index of the first set X w for which \(\sum_{l=u}^{w}\lambda_{l}\geq0\), i.e., \(\sum_{l=u}^{i} \lambda_{l}<0\) for i=u,…,w−1,

where u,w∈{1,…,m} and u<w. Consider case (a) and assume that X w is the first one such that \(\sum _{i=u}^{w}\lambda_{i} \leq 0\) and if \(\sum_{i=u}^{w}\lambda_{i} + \lambda_{w+1}<0\) (i.e., λ w+1<0), then there must exist such k>w+1, for which λ k >0 and \(\sum_{i=w+1}^{k}\lambda_{i} \geq0\), but this is represented by case (b). Thus, without loss of generality, we assume that λ i =0 for i∈{1,…,u−1}∪{w+1,…,m}.

Based on (8) and (10), for i=1,…,u−1 (λ i =0) we have \(V(X_{i}) > V_{i} - \frac{3}{4}B\). Following this, we can estimate the completion time of the last job in block E u+1:

Since \(\lambda_{u} \in[1,\frac{B}{2})\) and \(N = mB > \frac{3}{4}B\), then \(C_{E_{u+1}} > d_{E_{u+1}}\), thereby L max>y=0.

Consider now case (b). The completion time of the last job in E u+1 can be estimated as follows:

$$C_{E_{u+1}} > d_{E_{u+1}} + (N+3)u\lambda_u - \frac{3}{4}Bu.$$

Proceeding in this way, the completion time of the last job in E i for i=u+1,…,w+1 (and w≤m−1) can be estimated:

$$C_{E_{i}} > d_{E_{i}} + (N+3)\sum _{l=u}^{i-1} l\lambda_{l} -\frac{3}{4}B(i-1).$$

On this basis and taking into consideration \(\sum_{i=u}^{w} i \lambda _{i} =w \sum_{i=u}^{w} \lambda_{i} - \sum_{i=u}^{w-1} \sum_{l=u}^{i}\lambda _{l}\), the completion time of the last job in E w+1 (where w≤m−1) can be estimated:

Since \(\sum_{i=u}^{w}\lambda_{i}=0\) and \(\sum_{l=u}^{i}\lambda_{l}<0\) for u≤i<w, then \(\sum_{i=u}^{w-1}\sum_{l=u}^{i}\lambda_{l}<0\), thereby

$$C_{E_{w+1}} > d_{E_{w+1}} + (N+3) -\frac{3}{4}Bw > d_{E_{w+1}}, $$

for w≤m−1. If w=m, then the completion time of the last scheduled job in X m can be estimated as follows:

Therefore, in all the considered cases the criterion value L max(π) is greater than y.

We have thus shown that the answer for DAEL is yes if and only if the answer for 3PP is yes, which means that DAEL is strongly NP-complete; thereby the considered scheduling problem 1|p j (v)=a j v|L max is strongly NP-hard. □

Note that any further relaxation of the problem 1|p j (v)=a j v|L max is polynomially solvable, namely 1|p j (v)=av|L max (the EDD rule, see Lemma 2) and 1|p j (v)=a j v,d j =d|L max , which is equivalent to 1|p j (v)=a j v|C max (see Lemma 3). Therefore, we have also determined the boundary between polynomially solvable and NP-hard cases.

Since 1|p j (v)=a j v|L max is strongly NP-hard, the problem 1|p j (v)=a j v|∑U j is not less complex.

3.2 Learning effect

Cheng and Wang (2000) proved that the problem 1|p j (v)=a j −b j min{v−1,g j }|L max is strongly NP-hard. However, we will show that even the significantly simpler problem 1|p j (v)=a−b j v|L max (with linear job processing times) is strongly NP-hard. Thus, we tighten the boundary between polynomially solvable and NP-hard cases of maximum lateness minimization problems with position-dependent job processing times.

The strong NP-hardness of 1|p j (v)=a−b j v|L max will be proved in a similar manner as for the problem 1|p j (v)=a j v|L max , and the main idea of the proof is exactly the same.

At first, we will provide a pseudopolynomial time transformation from the strongly NP-complete problem 3-Partition (Garey and Johnson 1979) to the decision version of the considered scheduling problem, 1|p j (v)=a−b j v|L max.

The decision version of the problem 1|p j (v)=a−b j v|L max (DLEL) is given as follows: Does there exist a schedule π of jobs on the machine for which L max ≤y?

The pseudopolynomial time transformation from 3PP to DLEL is given as follows. The constructed instance of DLEL contains the set X={1,…,3m} of 3m partition jobs (constructed on the basis of the elements of the set Y of 3PP) and the set E={e 1,…,e mN } of mN enforcer jobs, where N=mB. The enforcer jobs can be partitioned into m sets E i ={e N(i−1)+1,…,e Ni } for i=1,…,m, such that jobs within each set E i have the same parameters, i.e., b k =b l and d k =d l for k,l∈E i for i=1,…,m.

Similarly as in the proof of Theorem 1, the parameters of the enforcer jobs are defined as follows:

for i=1,…,m, where

(14)
(15)

for i=1,…,m and of the partition jobs

for j=1,…,3m and y=0.

Observe that each parameter of DLEL can be calculated in time bounded by a polynomial in m and B. Moreover, the maximum numerical value of DLEL does not increase exponentially relative to 3PP (i.e., D is O(m 9 B 6)) and the problem size does not decrease exponentially relative to 3PP (i.e., n=O(m 2 B)). Thus, the transformation from 3PP to DLEL is pseudopolynomial.

Let X i denote the set of partition jobs that are processed just after the jobs from the set E i (for i=1,…,m). Define a schedule π , where jobs are scheduled as follows: (E 1,X 1,E 2,X 2,E 3,…,E i ,X i ,E i+1,…,X m−1,E m ,X m ), where X i ={3i−2,3i−1,3i} for i=1,…,m; if this is not the case, we can always renumber the partition jobs. Let V(X i ) and W i denote the sums of the processing times of the partition jobs from X i and of the enforcer jobs from E i , respectively, for the schedule π . Based on the transformation, V(X i ) is defined as:

$$V(X_i) = 3a - 3M \bigl( i(N+3)-1 \bigr) + i(N+3)\sum _{q\in X_i}x_q - 2x_{3i-2}-x_{3i-1},$$

for i=1,…,m and it can be estimated as follows:

(16)

It is easy to observe that the sum of processing times of the enforcer jobs from the set E i , i.e., W i (i=1,…,m) in schedule π is given by (15). The completion time of the last job in E i is \(C_{E_{i}}\) and of the last job in X i is \(C_{X_{i}}\) for i=1,…,m.

Let us also define useful inequalities:

(17)
(18)
(19)

for i=1,…,m.

On this basis, we will provide properties of an optimal solution for DLEL.

Lemma 6

The optimal sequence of jobs for the problem 1|p j (v)=a−bv,d j =d|L max is arbitrary.

Proof

Trivial. □

Lemma 7

The problem 1|p j (v)=a−bv|L max can be solved in O(nlogn) steps by scheduling jobs according to the non-decreasing order of their due dates (the EDD rule).

Proof

Trivial. □

Lemma 8

The problem 1|p j (v)=a−b j v|C max can be solved in O(nlogn) steps by scheduling jobs according to the non-decreasing order of the b j parameters.

Proof

Trivial. □
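Lemma 8 can likewise be checked by brute force on a small instance (a sketch with hypothetical data); note that under model (4) the makespan equals na minus the sum of b π(v) v, so the order maximizing the subtracted sum is optimal:

```python
# Brute-force check of Lemma 8 for the learning model (4) on
# illustrative data chosen so that a - b_j*v > 0 for all positions.
from itertools import permutations

def c_max(pi, a, b):
    """C_max for p_j(v) = a - b_j * v."""
    return sum(a - b[j] * v for v, j in enumerate(pi, 1))

a, b = 20.0, [1.0, 3.0, 2.0, 0.5]
rule = sorted(range(4), key=lambda j: b[j])   # non-decreasing b_j
# Large b_j placed in late positions maximizes the subtracted sum:
assert all(c_max(rule, a, b) <= c_max(p, a, b)
           for p in permutations(range(4)))
```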

Based on the above lemmas we will prove the following.

Lemma 9

There is an optimal schedule π for the given instance of DLEL in which at most 3(i−1) partition jobs are scheduled before the enforcer jobs from E i (i=1,…,m).

Proof

See Appendix. □

Lemma 10

Jobs in each block E i are processed one after another and between E i and E i+1 exactly 3 partition jobs are scheduled for i=1,…,m−1.

Proof

See Appendix. □

Based on the above considerations, we will prove the following theorem.

Theorem 2

The problem 1|p j (v)=a−b j v|L max is strongly NP-hard.

Proof

Based on the given transformation from 3PP to DLEL and on Lemma 10, we construct a schedule π for DLEL given as follows: (E 1,X 1,E 2,X 2,E 3,…,E i ,X i ,E i+1,…,X m−1,E m ,X m ). Recall that the blocks of enforcer jobs are scheduled according to the EDD rule, the order of jobs within each E i is immaterial, and the sequence of jobs within each set X i is arbitrary. To simplify the calculations, renumber the jobs in these sets, i.e., X i ={3i−2,3i−1,3i} for i=1,…,m.

The further part of the proof is exactly the same as for Theorem 1. □

Note that any further relaxation of the problem 1|p j (v)=a−b j v|L max is polynomially solvable, namely 1|p j (v)=a−bv|L max (the EDD rule, see Lemma 7) and 1|p j (v)=a−b j v,d j =d|L max , which is equivalent to 1|p j (v)=a−b j v|C max (see Lemma 8). Therefore, we have also determined the boundary between polynomially solvable and NP-hard cases.

Since 1|p j (v)=a−b j v|L max is strongly NP-hard, the problem 1|p j (v)=a−b j v|∑U j is not less complex.

3.3 Problem equivalency

In classical scheduling theory, the problems 1||L max and 1|r j |C max are equivalent with respect to the criterion value. Moreover, an algorithm solving the problem 1||L max can be used as an algorithm solving the problem 1|r j |C max. Now, we will show that this equivalency still holds in the presence of learning and aging, provided that the corresponding job processing times are symmetric for the two phenomena.

Theorem 3

The problems 1|p j (v)|L max and \(1|r'_{j}, p'_{j}(v)|C'_{\max}\) are equivalent in the following sense: the optimal schedules are inverse and the criterion values differ only by a constant, if p j (v) is a positive function of a job position and \(p'_{j}(v)=p_{j}(n-v+1)\) for v,j=1,…,n.

Proof

First, a transformation from 1|p j (v)|L max (LP) to \(1|r'_{j}, p'_{j}(v)|C'_{\max}\) (CP) is given:

$$n' = n; \qquad r'_j = D -d_j; \quad j=1, \ldots, n,$$

where D=max j=1,…,n d j . It is trivial to show that the transformation can be done in polynomial time.

Given a schedule π for the problem LP, construct a schedule π′ for the problem CP viewed from the reverse direction. That is, for a given permutation π=〈π(1),π(2),…,π(n)〉, the corresponding permutation π′ is defined as π′(v)=π(n−v+1) for v=1,…,n. Since \(p'_{j}(v)=p_{j}(n-v+1)\) for v,j=1,…,n, the processing times of the jobs placed in the vth position in π′ and in the (n−v+1)th position in π are equal, i.e., \(p'_{\pi'(v)}(v) = p_{\pi(n-v+1)}(n-v+1)\) for v=1,…,n.

Let \(S'_{j}\) denote the starting time of job j. First, we show that if, for the given permutation π of the problem LP, the equality L π(n−k+1)=L max holds for job π(n−k+1), then in the corresponding permutation π′ of the problem CP, job π′(k) starts at its release date (i.e., \(S'_{\pi'(k)}=r'_{\pi'(k)}\)), k=1,…,n.

Observe that for the given job π(n−k+1) the following holds: L max=L π(n−k+1)≥L π(n)=C π(n)−d π(n)≥C π(n)−D, thus C π(n−k+1)−d π(n−k+1)≥C π(n)−D. Note also that \(C_{\pi(n)} - C_{\pi(n-k+1)} = \sum _{i=n-k+2}^{n}p_{\pi (i)}(i)\) and on this basis:

Since \(S'_{\pi'(k)}=\max\{r'_{\pi'(k)},C'_{\pi'(k-1)}\}\), job π′(k) starts at its release date.

On this basis, we prove that for the constructed permutations π and π′ the optimal criterion values for the corresponding problems differ only by a constant. Thus, assume that n−k+1 denotes the position of the first job in π for which the equality L max=L π(n−k+1) holds. Therefore, we have \(S'_{\pi'(i)} \geq r'_{\pi '(i)}\) for i=k,…,n. Thus, the criterion value calculated for CP is equal to

Since the processing times of the jobs placed in the corresponding positions in both schedules are equal, the criterion values calculated for both problems differ only by the constant D. Thus, we have proved that both problems are equivalent. □
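The reversal construction above can be checked numerically. The sketch below (with hypothetical instance data, not from the paper) builds the CP instance from a toy LP instance and verifies that the makespan of the reversed schedule equals L max plus the constant D:

```python
# A toy numerical check of the equivalence in Theorem 3 (instance data
# is hypothetical): reversing a schedule for 1|p_j(v)|Lmax yields a
# schedule for 1|r'_j, p'_j(v)|C'max whose makespan is Lmax + D.

def lateness_lmax(pi, p, d):
    """Lmax of permutation pi; job j in position v+1 takes p[j][v]."""
    t, lmax = 0, float("-inf")
    for v, j in enumerate(pi):
        t += p[j][v]                    # completion time of pi(v)
        lmax = max(lmax, t - d[j])
    return lmax

def makespan_with_releases(pi, p, r):
    """C'max of permutation pi with release dates r."""
    t = 0
    for v, j in enumerate(pi):
        t = max(t, r[j]) + p[j][v]      # wait for release, then process
    return t

p = {1: [3, 5], 2: [2, 4]}              # p[j][v-1] = p_j(v)
d = {1: 10, 2: 6}
D = max(d.values())

pi = [1, 2]                             # a schedule for the Lmax problem
r_prime = {j: D - d[j] for j in d}      # r'_j = D - d_j
p_prime = {j: p[j][::-1] for j in p}    # p'_j(v) = p_j(n - v + 1)
pi_prime = pi[::-1]                     # reversed permutation

lmax = lateness_lmax(pi, p, d)
cmax = makespan_with_releases(pi_prime, p_prime, r_prime)
assert cmax == lmax + D                 # criteria differ by the constant D
```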

On this basis, we can easily prove the complexity of the following problems.

Corollary 1

The problem 1|r j ,p j (v)=a j (n+1−v)|C max is strongly NP-hard.

Proof

The problem 1|p j (v)=a j v|L max is strongly NP-hard (Theorem 1), thus, on the basis of Theorem 3, the considered problem 1|r j ,p j (v)=a j (n+1−v)|C max is not less complex. □

Corollary 2

The problem \(1|r_{j}, p_{j}(v)=a'_{j}+ a_{j}v, a'_{j}=(c-a_{j}(n+1)),c>a_{j}n|C_{\max }\) is strongly NP-hard.

Proof

The proof follows from Theorems 2 and 3, in the same manner as the previous proof. □

Note that we have proved the strong NP-hardness of the problems with models that are even simpler than those analyzed by Bachman and Janiak (2004).

4 Polynomially solvable cases

The problems defined in Sect. 2 are strongly NP-hard even if job processing times are described by simple linear functions, where the decrease/increase of a job processing time is different for each job. However, in this section, we prove that if the decrease/increase of a job processing time is the same for each job (i.e., p j (v)=a j +f(v)), then the considered problems can be solved optimally in polynomial time even if the function f(v) is arbitrary. Therefore, we will determine the boundary between polynomially solvable and NP-hard cases.

The algorithms presented in this section solve the problems with the learning effect as well as with the aging effect.

Property 1

The problem 1|r j ,p j (v)=a j +f(v)|C max can be solved optimally in O(nlogn) steps by scheduling jobs according to the non-decreasing order of their release dates (Earliest Release Date—the ERD rule).
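A minimal sketch of the ERD rule under this model (the function name and instance data are illustrative, not from the paper):

```python
# Property 1 sketch: ERD rule for 1|r_j, p_j(v) = a_j + f(v)|Cmax.
# Instance data and names are illustrative.

def erd_cmax(jobs, f):
    """jobs: list of (r_j, a_j) pairs; f(v) is the positional term.
    Returns Cmax of the schedule sorted by release date (ERD)."""
    t = 0
    for v, (r, a) in enumerate(sorted(jobs), start=1):
        t = max(t, r) + a + f(v)        # wait for release, then process
    return t

# aging effect example: each later position adds one extra time unit
jobs = [(4, 2), (0, 3), (1, 5)]         # (release date r_j, basic time a_j)
cmax = erd_cmax(jobs, lambda v: v - 1)
```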

Property 2

The problem 1|p j (v)=a j +f(v)|L max can be solved optimally in O(nlogn) steps by scheduling jobs according to the non-decreasing order of their due dates (Earliest Due Date—the EDD rule).

Since Property 1 and Property 2 can be proved by a simple job interchange technique, the proofs are omitted.
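Property 2 can be sketched in the same way (function name and instance data are illustrative, not from the paper):

```python
# Property 2 sketch: EDD rule for 1|p_j(v) = a_j + f(v)|Lmax.
# Instance data and names are illustrative.

def edd_lmax(jobs, f):
    """jobs: list of (d_j, a_j) pairs; f(v) is the positional term.
    Returns Lmax of the schedule sorted by due date (EDD)."""
    t, lmax = 0, float("-inf")
    for v, (d, a) in enumerate(sorted(jobs), start=1):
        t += a + f(v)                   # completion time of the vth job
        lmax = max(lmax, t - d)
    return lmax

jobs = [(5, 2), (9, 3), (4, 1)]         # (due date d_j, basic time a_j)
lmax = edd_lmax(jobs, lambda v: v - 1)  # aging effect f(v) = v - 1
```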

It is well known that the problem 1||∑U j (with constant job processing times) can be solved optimally by Moore’s Algorithm (Moore 1968). We will prove that this algorithm is still optimal for the problem 1|p j (v)=a j +f(v)|∑U j .

Property 3

The problem 1|p j (v)=a j +f(v)|∑U j can be solved optimally in O(nlogn) steps by Moore’s Algorithm.

Proof

The proof proceeds by induction, in a similar manner as by Sturm (1970). Based on Property 2, note that there exists a schedule for 1|p j (v)=a j +f(v)|∑U j with no late jobs if and only if the schedule of jobs in the non-decreasing order of their due dates (EDD) has no late jobs. On this basis, we will consider only EDD sequences. To simplify the proof, assume that such a sequence is 1,2,…,n (if it is not the case, we can renumber the jobs).

Assume that using Moore’s Algorithm (Algorithm 1) we determine a subset B={β 1,…,β q } of q late jobs. Suppose also that it is possible to choose from the set J={1,…,n} a subset Γ={γ 1,…,γ q−1} of q−1 jobs, such that the remaining n−q+1 jobs J\Γ are not late. Thus, for all i=1,…,n the following inequality must hold:

$$ d_{i}\geq\sum _{j=1}^{i}\bigl(a_j + f(j)\bigr) - \sum_{\gamma_{j}\in\Gamma_i}a_{\gamma_j} - \sum _{j=1}^{|\Gamma_i|}f(i-j+1),$$
(20)

where Γ i ={γ j :γ j ≤i,γ j ∈Γ} and |Γ i | is the cardinality of Γ i . Without loss of generality, we can also assume that β i ≠γ j for all (i,j).

Algorithm 1

Moore’s Algorithm (MA)

Using Moore’s Algorithm (MA), we find the first late job α 1, i.e., the one that satisfies \(d_{\alpha_{1}} < \sum_{j=1}^{\alpha_{1}}(a_{j} + f(j))\) and \(d_{i} \geq\sum_{j=1}^{i}(a_{j} + f(j))\) for i=1,…,α 1−1. From the definition of Γ it follows that there is at least one job \(\gamma_{j} \in \Gamma_{\alpha_{1}}\), i.e., inequality (20) must hold. Let us choose an element δ 1 from \(\Gamma_{\alpha_{1}}\) with \(a_{\delta_{1}} = \max \{a_{\gamma_{i}}:\gamma_{i}\in \Gamma_{\alpha_{1}}\}\). On the other hand, MA chooses job β 1 (\(a_{\beta_{1}}\geq a_{\alpha_{1}}\)) that satisfies

Observe that if α 1≠β 1, then job α 1 is no longer late, since job β 1 is skipped. It is easy to notice that \(a_{\delta_{1}} \leq a_{\beta_{1}}\). Thus, there must be at least one job in \(\Gamma_{\alpha_{1}}\), and \(\delta_{1} \in\Gamma _{\alpha_{1}}\).

Suppose now that there are at least l (l<q) jobs in \(\Gamma_{\alpha _{l}}\) and that we are able to choose among them l elements δ i such that \(a_{\delta_{i}}\leq a_{\beta_{i}}\) for i=1,…,l.

Using MA we find job α l+1 (i.e., the first late job after l jobs are skipped) that satisfies

From (20) it follows that there must be at least l+1 jobs in \(\Gamma_{\alpha_{l+1}}\) to satisfy

$$d_{\alpha_{l+1}} \geq\sum _{j=1}^{\alpha_{l+1}}\bigl(a_j + f(j)\bigr) - \sum_{\gamma_j \in \Gamma_{\alpha_{l+1}}}a_{\gamma_j} - \sum _{j=1}^{|\Gamma_{\alpha_{l+1}}|}f(\alpha_{l+1}-j+1),$$

and \(\delta_{i} \in\Gamma_{\alpha_{l+1}}\), i=1,…,l. Thus, we find the (l+1)th element with \(a_{\delta_{l+1}} = \max\{a_{\gamma_{i}}:\gamma _{i}\in \Gamma_{\alpha_{l+1}}\backslash\{\delta_{1},\ldots, \delta_{l}\}\}\). On the other hand, MA finds β l+1 (i.e., the (l+1)th late job) with \(a_{\beta_{l+1}} =\max\{a_{i}:i=1,\ldots, \alpha_{l+1}, i\neq\beta_{1},\ldots, \beta_{l}\}\) and it is easy to notice that \(a_{\delta_{l+1}} \leq a_{\beta_{l+1}}\). Therefore, there must be at least l+1 jobs in \(\Gamma_{\alpha_{l+1}}\), and among them l+1 jobs δ i with \(a_{\delta_{i}}\leq a_{\beta_{i}}\) for i=1,…,l+1. Continuing in the same way, we can show that when MA finds job α q , there must be at least q jobs in \(\Gamma _{\alpha_{q}}\), which contradicts the assumption |Γ|=q−1. Thus, MA finds the minimum number of late jobs for the considered scheduling problem. Note that the complexity of MA is O(nlogn) if the algorithm is implemented with an appropriate data structure (e.g., a heap). □
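Since the pseudocode of Algorithm 1 is given in a figure, the sketch below shows one possible implementation of the adapted MA, following the removal rule used in the proof (when the current job becomes late, discard the kept job with the largest basic time a j ); the O(nlogn) behaviour comes from sorting plus a heap. Names and instance data are illustrative, not from the paper.

```python
import heapq

# Sketch of Moore's Algorithm adapted to p_j(v) = a_j + f(v), following
# the removal rule from the proof of Property 3: when the current job is
# late, discard the kept job with the largest basic time a_j.
# Instance data and names are illustrative.

def moore_min_late(jobs, f):
    """jobs: list of (d_j, a_j) pairs; f(v) is the positional term.
    Returns the minimum number of late jobs."""
    heap, total_a, pos_sum, k, late = [], 0, 0, 0, 0
    for d, a in sorted(jobs):                # EDD order
        heapq.heappush(heap, -a)             # max-heap of kept basic times
        total_a += a
        k += 1
        pos_sum += f(k)                      # sum of f(1..k) for k kept jobs
        if total_a + pos_sum > d:            # the current job would be late
            total_a += heapq.heappop(heap)   # drop the largest a_j (negated)
            pos_sum -= f(k)
            k -= 1
            late += 1
    return late

jobs = [(6, 4), (4, 3), (3, 2), (5, 1)]      # (due date d_j, basic time a_j)
late_const = moore_min_late(jobs, lambda v: 0)      # classical case, f = 0
late_aging = moore_min_late(jobs, lambda v: v - 1)  # aging effect
```

With f(v)=0 the sketch reduces to the classical Moore–Hodgson algorithm, which makes it easy to validate against known instances.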

5 Conclusions

In this paper, we proved that the minimization of the maximum lateness or of the makespan with release dates is strongly NP-hard even if job processing times are described by simple linear functions dependent on the number of processed jobs (i.e., on a job position in a sequence). Moreover, we showed that the minimization of the makespan with release dates is equivalent to the minimization of the maximum lateness if job processing times are described by functions dependent on the number of processed jobs and the functions are monotonically opposite for these problems.

The main conclusion from the proved strong NP-hardness of the considered problems is that even the maximum acceptable simplification of the job processing time functions (their linearization) does not decrease the complexity of the problems. Therefore, it does not decrease the effort required to obtain optimal control decisions; it merely decreases their accuracy.

Finally, we also proved that the considered problems with arbitrary functions describing the decrease/increase of a job processing time are polynomially solvable if the functions are the same for each job. Appropriate algorithms were provided.

Future research will concern the analysis of scheduling problems with position-dependent job processing times under additional constraints and different criteria (e.g., Leung et al. 2008; Shabtay and Steiner 2008; Steiner and Zhang 2011; Xu et al. 2010).