Shop scheduling problems with pliable jobs

In this paper, we study a new type of flow shop and open shop model, which handles so-called “pliable” jobs: their total processing times are given, but the individual processing times of the operations which make up these jobs are flexible and need to be determined. Our analysis demonstrates that many versions of flow shop and open shop problems with pliable jobs are computationally easier than their traditional counterparts, unless the jobs have job-dependent restrictions imposed on minimum and maximum operation lengths. In the latter case, most problems with pliability become NP-hard even in the case of two machines.


Introduction
In traditional flow shop and open shop models, n jobs of the set J = {1, 2, . . . , n} are processed by m machines M_i, 1 ≤ i ≤ m. Each job j, 1 ≤ j ≤ n, consists of m operations O_ij, one operation on each machine M_i, with processing times p_ij, 1 ≤ i ≤ m. A job cannot be processed by two machines at the same time and a machine cannot process two jobs simultaneously. In the flow shop model, all jobs have the same machine order, while in the open shop model the machine order is not fixed and can be chosen arbitrarily for each job. The goal is to select an order of operations on each machine and, for open shops, additionally the order of operations for each job, so that a given objective function f depending on job completion times is minimized.
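To make the flow shop definition concrete, the completion times of a permutation flow shop schedule follow the standard recurrence: operation O_ij starts only when machine M_i is free and the job has left M_{i−1}. The following is a minimal Python sketch (ours, not from the paper; the instance values are illustrative):

```python
def flow_shop_completion_times(p):
    """Completion times C[i][j] for a permutation flow shop.

    p[i][j] is the processing time of job j (in sequence order)
    on machine M_{i+1}; all jobs visit machines M_1, ..., M_m in order.
    """
    m, n = len(p), len(p[0])
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            prev_machine = C[i - 1][j] if i > 0 else 0.0  # job leaves M_{i-1}
            prev_job = C[i][j - 1] if j > 0 else 0.0      # machine M_i frees up
            # O_ij starts as soon as both conditions are met
            C[i][j] = max(prev_machine, prev_job) + p[i][j]
    return C

# two machines, two jobs: the makespan is C[1][1]
C = flow_shop_completion_times([[3, 2], [1, 4]])
```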
Both models, flow shop and open shop, have a long history of study, see, e.g., Brucker (2007), Pinedo (2016). Over the last 60 years, the classical versions were extended to handle additional features of practical importance. The main extensions relevant for our study are processing with preemption and processing with lot splitting or lot streaming. In preemptive models, an operation can be cut into an arbitrary number of pieces which are then processed independently. In the models with lot splitting or lot streaming, operations can be divided into sublots, which can then be treated as new operations (cf. Kropp and Smunt 1990; Trietsch and Baker 1993; Chang and Chiu 2005). In the preemptive case, pieces of the same job processed by two machines cannot overlap in time, while in the case of lot splitting or lot streaming overlapping may happen if the pieces belong to different sublots.
Other concepts related to operation splitting and relocation, which have appeared more recently, deal with flexible operations and operation redistribution (cf. Gupta et al. 2004; Burdett and Kozan 2001, respectively). In the first model, jobs typically consist of more than m operations, some of which are fixed and have to be processed by dedicated machines while others are flexible and need to be assigned to one of the appropriate machines. In the second model, operation parts can be moved to neighboring machines if those machines are equipped to handle them.

Pliability
In all models discussed above, processing times p_ij are given for the operations O_ij and these processing amounts have to be completed in full even if splitting happens and/or operation parts are moved to different machines. In this paper, we study a different way of splitting jobs, where operation lengths are not given in advance, but have to be determined. To distinguish our model from those studied previously, we introduce the notion of pliability (note that the term "splitting" is already reserved for other models, see, e.g., Serafini (1996), where a job can be split and then processed independently on different machines). Formally, a pliable job j is given by its total processing amount p_j, which has to be split among the m machines. Operation lengths x_ij ∈ R_≥0 have to be determined as part of the decision making process. The combined length of all operations of job j over all machines has to match the given total processing requirement of job j:

x_1j + x_2j + · · · + x_mj = p_j.

We initiate this line of research by introducing three models with varying restrictiveness on the pliability of jobs.
(i) Unrestricted pliability means that we only have to fulfill 0 ≤ x_ij ≤ p_j, 1 ≤ i ≤ m, 1 ≤ j ≤ n.
(ii) Restricted pliability with a common lower bound means that the jobs need to be split complying with the restriction on a minimum length p̲ of any operation: p̲ ≤ x_ij ≤ p_j, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Note that for a feasible instance we must have p_j ≥ m p̲ for each job 1 ≤ j ≤ n.
(iii) Restricted pliability means that the jobs need to be split complying with individual lower and upper bounds p̲_ij, p̄_ij given for all operations: p̲_ij ≤ x_ij ≤ p̄_ij, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Again, to assure feasibility we must have p̲_1j + · · · + p̲_mj ≤ p_j ≤ p̄_1j + · · · + p̄_mj for all j = 1, . . . , n.
Model (i) assumes full flexibility of the machines and admits a low level of job granularity, accepting arbitrarily small operations. Although infinitesimally small operations are allowed, they are treated as zero-length operations rather than missing operations (for these concepts cf. Hefetz and Adiri 1982). They need to be allocated to the corresponding machine in a conflict-free way, and in the flow shop case the required machine order has to be respected.
Model (ii) is characterized by a limited level of job granularity defined by a common parameter p̲ for all jobs, while the limitations in model (iii) can be different for distinct job–machine pairs. Clearly, model (i) is a special case of model (ii) with p̲ = 0, and model (ii) is a special case of model (iii) with p̲_ij = p̲, p̄_ij = p_j. The classical flow shop and open shop problems are special cases of model (iii) with p̲_ij = p̄_ij = p_ij for all 1 ≤ i ≤ m, 1 ≤ j ≤ n.
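The feasibility conditions of models (ii) and (iii) are straightforward to check. The following small Python sketch (our illustration; the function names are ours) verifies them for a given instance:

```python
def feasible_common_bound(p, m, lb):
    """Model (ii): job j can be split into m operations of length >= lb
    iff p_j >= m * lb (each of the m operations needs at least lb)."""
    return all(pj >= m * lb for pj in p)

def feasible_individual_bounds(pj, lower, upper):
    """Model (iii), single job: a split with lower[i] <= x_i <= upper[i]
    and sum_i x_i = p_j exists iff sum(lower) <= p_j <= sum(upper)."""
    return sum(lower) <= pj <= sum(upper)
```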
Extending the α|β|γ-notation, we denote flow shop and open shop problems with unrestricted pliability of type (i) by F|plbl|γ and O|plbl|γ. In the presence of additional restrictions of models (ii)–(iii), the given bounds are indicated in the second field as plbl(p̲) and plbl(p̲_ij, p̄_ij), respectively.
The third field γ denotes the optimization criterion. It may include one of the traditional scheduling functions or simply "f" for an arbitrary non-decreasing objective function depending on completion times C_j of the jobs j ∈ J. We distinguish between minmax objectives f_max = max{f_j(C_j) | j ∈ J} and minsum objectives Σ f_j(C_j), where the f_j(C_j) are non-decreasing functions. The two typical minmax examples are the makespan C_max = max{C_j | j ∈ J} and the maximum lateness L_max = max{C_j − d_j | j ∈ J}, where d_j is the due date of job j. Minsum examples include the total completion time objective ΣC_j, the total tardiness ΣT_j, where T_j = max{C_j − d_j, 0}, and the total number of late jobs ΣU_j, where U_j ∈ {0, 1}, depending on whether a job is completed on time or after its due date d_j. If jobs j ∈ J have different weights w_j, then the latter functions can be extended to their weighted counterparts Σw_jC_j, Σw_jT_j, Σw_jU_j.
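As a quick illustration of these objectives, the following Python sketch (ours; the instance values are illustrative) evaluates them for given completion times, due dates and weights:

```python
def scheduling_objectives(C, d, w):
    """Standard minmax and minsum objectives computed from completion
    times C_j, due dates d_j and weights w_j (lists indexed by job)."""
    T = [max(c - dd, 0) for c, dd in zip(C, d)]       # tardiness T_j
    U = [1 if c > dd else 0 for c, dd in zip(C, d)]   # late-job indicator U_j
    return {
        "Cmax": max(C),                               # makespan
        "Lmax": max(c - dd for c, dd in zip(C, d)),   # maximum lateness
        "sumC": sum(C),                               # total completion time
        "sumT": sum(T),                               # total tardiness
        "sumU": sum(U),                               # number of late jobs
        "sumWC": sum(ww * c for ww, c in zip(w, C)),  # weighted completion time
    }

obj = scheduling_objectives(C=[3, 7, 5], d=[4, 6, 5], w=[1, 2, 1])
```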

Contributions
Our main results include a study of general properties of pliability models, formulating a general methodology for handling them and using it to perform a thorough complexity classification of the models. Note that all results are derived under the assumption that the jobs are available simultaneously, at zero release times. The obtained results for flow shop and open shop problems with pliable jobs are summarized in Tables 1 and 2 for the case where the number of jobs is not smaller than the number of machines, n ≥ m, and in Table 3 for the case where there are fewer jobs than machines, n < m. For comparison, we also provide related results for traditional (non-pliability) models; for references see, e.g., Brucker (2007), Pinedo (2016).
(Table 1: Open shop and flow shop problems with pliable jobs and minmax objectives, n ≥ m. Entries marked with an asterisk use a compact encoding of the output; an additional term O(nm) is needed for constructing the complete schedule.)
Note that the input for problems with pliability of type (i) and (ii) consists of O(n) entries, while the output consists of O(nm) entries if it is needed to produce a full characterization of a schedule, specifying starting/completion times of all operations. Whenever an optimal schedule has a special structure and it is possible to derive formulae for computing starting/completion times of individual operations, we list reduced time complexities in Tables 1, 2, 3, associated with finding characteristics of a single operation and for required auxiliary preprocessing. An additional term O(nm) has to be added if an optimal solution should be specified in full (see the entries in the table marked by an asterisk).
The remainder of the paper is organized as follows. In Sect. 2 we provide an overview of related models studied in the literature. General properties of flow shop and open shop models with pliability are discussed in Sect. 3. Our main focus is on the case where the number of jobs is not smaller than the number of machines, n ≥ m: pliability models with minmax objectives are discussed in Sects. 4, 5, and 6; results for models with minsum objectives are presented in Sect. 7. The situation n < m is discussed in Sect. 8. Concluding remarks are summarized in Sect. 9.

Related work
The study of shop scheduling models with pliability is motivated by scenarios where jobs are processed by machines of different types in a flow shop or open shop manner, and transition from one machine to another requires intermediate actions, which can be performed by either of two consecutive machines. Those actions may be related to quality control, preprocessing, postprocessing, or setup operations. Alternatively, operators specializing in serving particular machines may be able to perform not only the main operations they are trained for, but also additional operations on adjacent machines, thus reducing possible delays and idle times in the system. In manufacturing applications, not only the operators can be flexible, but the machines as well, if they are designed to perform operations of different types. Various examples of flexible machines, such as CNC machines and machines producing printed circuit boards, are reviewed by Crama and Gultekin (2010), and examples of flexible operations are presented in the context of assembly line scheduling, see, e.g., Ostolaza et al. (1990), McLain et al. (1992), Anuar and Bukchin (2006) and Askin and Chen (2006). Note that the nature of processing in assembly lines and in flow shops and open shops is quite different.
For the shop models we consider, the most relevant results are known in the area of flow shop scheduling, for (a) models with flexible operations and (b) models with operation redistribution. In the models of type (a), there are fixed operations processed by dedicated machines and flexible ones which can be processed (without preemption) by one of the two adjacent machines. In the models of type (b), any flexible operation can be preempted and parts of it can be relocated to an adjacent machine.
Comparing the pliability model with models (a) and (b), we summarize below the main common points and the points of difference.
• In both models (a) and (b), it is usually assumed that machines operate in the flow shop manner. In comparison, the nature of the pliability model is rather general: it is relevant for any shop model, including flow shop and open shop, the two models considered in this paper. It can also be generalized to the job shop model, although this is beyond the scope of our paper.
• All models, including the pliability one, deal with flexible allocation of operations or their parts. The level of flexibility is slightly different in the three models. In model (a), each flexible operation has to be allocated to one machine in full. In model (b), flexible operations can be split at any point in time (see Burdett and Kozan 2001). Still, there is a limitation on the machine choice for the allocation: only an adjacent machine in the flow shop chain of machines can be selected. Additionally, for some operations on a given machine, it may only be allowed to share them with one of the two neighboring machines (either the previous or the next machine). In the pliability model, every machine must get at least the minimum workload associated with job j (namely, 0, p̲ or p̲_ij, depending on the model type), with full freedom for the distribution of the remaining workload.
• In models (a) and (b), processing times p_ij are given for all operations; for the pliability model, the total job lengths p_j are given, together with lower and upper bounds on operation lengths.
The difference between the three models discussed above becomes more apparent if the number of machines is m ≥ 3. The models with m = 2 machines are closely related, especially if the flexible operation can be started on the first machine and then preempted at any point in time to be restarted on the second machine. Indeed, in such an instance of model (a) with preemption of the flexible operation, every job j consists of three operations of lengths p_1j, p*_j and p_2j. The first and the last operations are fixed and have to be processed by machines M_1 and M_2, while the middle one can be split into at most two parts and distributed between M_1 and M_2. In an equivalent instance of the pliability model, job j is defined by its total processing time p_j = p_1j + p*_j + p_2j and lower and upper bounds on operation lengths, p̲_1j = p_1j, p̲_2j = p_2j and p̄_1j = p̄_2j = p_j. On the other hand, the flow shop model of type (b) with relocatable operations of lengths p_1j, p_2j associated with machines M_1 and M_2 has similarities with the pliability model of type (i) with p̲_1j = p̲_2j = 0 and p̄_1j = p̄_2j = p_1j + p_2j. Interestingly, the preemptive version of model (a) has not been considered in the literature, although it is observed by Lin et al. (2016) that this case would be an interesting extension of the model.
The traditional, unsplittable flow shop problem (a) with flexible operations is NP-hard for the makespan objective even in its simplest setting with m = 2 machines, see Gupta et al. (2004). It remains NP-hard even if the job sequence is fixed (cf. Lin et al. 2016). Therefore, the study of models with flexible operations focuses on approximability results (Gupta et al. 2004), pseudopolynomial-time algorithms (Lin et al. 2016), and construction heuristics and local search methods (Ruiz-Torres et al. 2010, 2011). The models often incorporate special features such as limitations on the buffer capacities used for handling jobs in-between the machines, requirements to optimize workstation utilization or throughput rate, etc. The main special case of the flexible model, for which efficient algorithms have been developed, is the one with identical jobs, see Crama and Gultekin (2010), Gultekin (2012).
Flow shop problems with redistribution are less studied (compared to the model with flexible operations), especially in terms of complexity analysis. Burdett and Kozan (2001) consider several scenarios where adjacent machines can perform the same tasks and parts of an operation may be shifted to the upstream or downstream machine. Besides proposing a MILP formulation, they describe and empirically evaluate heuristics. In Bultmann et al. (2018a), a very general framework for flexibility is introduced. Similarly to the model with pliable jobs, the processing times of the operations are not fixed in advance, but lower and upper bounds on the processing times are specified for consecutive machines. A decomposition algorithm is proposed, using a local search procedure on the set of all permutations, where the optimal corresponding processing times are efficiently computed in a second step. In Bultmann et al. (2018b), a similar approach can be found for a synchronous flow shop environment with pliable jobs.
Our study continues the line of research on flow shop models with flexibility and relocation, and extends it also to open shop counterparts.

General properties and reductions
In this section, we explore the links between the pliability models and classical scheduling models: flow shop, open shop and single-stage scheduling with parallel machines. Furthermore, we establish some key properties of the pliability models and discuss their implications.

Unrestricted pliability
In order to address type (i) problems O|plbl|f and F|plbl|f, it is often useful to relax the requirement of dedicated machines typical for open shops and flow shops and to consider identical parallel machines instead. The pliability condition, which allows determining the actual processing times of the operations, can then be interpreted as processing with preemption. The resulting problem is denoted by P|pmtn|f, where P denotes "identical parallel machines" and pmtn denotes preemption. In this problem, jobs may be split into multiple parts and these job parts can be processed on different machines.
Clearly, if in a feasible schedule for problem P| pmtn| f every machine processes exactly n job parts, one for each job, then that schedule also represents a feasible open shop schedule. Alternatively, if every machine processes each job at most once, then the schedule can be converted into a feasible open shop schedule by introducing zero-length operations for missing operations at the beginning of a schedule. We will call a schedule for problem P| pmtn| f with exactly nm job parts, some of which may be of zero length, an "open shop type" schedule, or an O-type schedule for short.
In a "flow shop type" schedule, or an F-type schedule for short, each machine processes exactly one part of each job, as in an O-type schedule; additionally jobs visit the machines in a flow shop manner, moving from machine M i to M i+1 , 1 ≤ i ≤ m − 1. As for an O-type schedule, some of the nm operations may be of zero length. However, in F-type schedules, those zero-length operations might appear in the middle of the schedule and zero-length operations which appear on a machine M i with 2 ≤ i ≤ m cannot usually be moved to the beginning of the schedule instead. Therefore, as opposed to the open shop case, for F-type schedules such zero-length operations may create idle times on upstream or downstream machines and have an impact on the completion time of the job they belong to.
In the case of permutation schedules, with all machines processing the jobs in the same order, the notion of an F-type schedule coincides with the notion of a "Permutation Flow Shop-like schedule", introduced by Prot et al. (2013).
For a scheduling problem α|β|γ, let S(α|β|γ) denote the set of its feasible solutions. Since any feasible solution to F|plbl|f is also feasible for O|plbl|f, and in its turn any feasible solution to O|plbl|f is feasible for P|pmtn|f, we conclude:

S(F|plbl|f) ⊆ S(O|plbl|f) ⊆ S(P|pmtn|f). (3)

In what follows we revise known algorithms and NP-hardness results for problem P|pmtn|f with the focus on optimal schedules of O- and F-type. The existence of an optimal F-type schedule for problem P|pmtn|f with any non-decreasing objective function f was proved by Prot et al. (2013).
Theorem 1 (Prot et al. 2013) For problem P|pmtn|f with any non-decreasing objective function f ∈ {f_max, Σf_j}, there exists an optimal F-type schedule.
Clearly, due to the nested structure of the solution regions (3), an optimal schedule for problem P|pmtn|f which is of F-type is also an optimal schedule for problems F|plbl|f and O|plbl|f with pliable jobs. It follows that for problems P|pmtn|f, O|plbl|f and F|plbl|f there exists a common optimal schedule and it is of F-type. Thus the optimal objective values for these three problems are the same, and the following corollary holds.
Corollary 1 Any optimal schedule for problem P|pmtn|f which is of F-type is also optimal for problems F|plbl|f and O|plbl|f. Any optimal schedule for problem F|plbl|f or O|plbl|f is also optimal for problem P|pmtn|f.
We use Corollary 1 in order to transfer complexity results from scheduling problems with parallel machines to shop scheduling with pliability.
Consider first the case when a particular version of problem P|pmtn|f is NP-hard. Then, the corresponding versions of F|plbl|f and O|plbl|f are also NP-hard since otherwise, due to the second part of Corollary 1, a polynomial-time algorithm for F|plbl|f or O|plbl|f would also solve problem P|pmtn|f in polynomial time. We combine this observation with the known NP-hardness results for P|pmtn|f (see Brucker 2007; Lawler et al. 1993).
Theorem 2 Problems F2|plbl|f and O2|plbl|f with f ∈ {Σw_jC_j, ΣT_j, Σw_jU_j} are NP-hard in the ordinary sense, and they are NP-hard in the strong sense if f = Σw_jT_j.
Problems F|plbl|f and O|plbl|f with f = ΣU_j are NP-hard in the ordinary sense and they are NP-hard in the strong sense if f = Σw_jC_j.
Whenever a job sequence is known for an optimal F-type schedule for problem P|pmtn|f, an optimal allocation of jobs to the machines can be obtained as a solution to the following linear programming model, see Prot et al. (2013):

min f(C_1, . . . , C_n)
s.t. x_1j + x_2j + · · · + x_mj = p_j, 1 ≤ j ≤ n,
t_{i+1,j} ≥ t_ij + x_ij, 1 ≤ i ≤ m − 1, 1 ≤ j ≤ n,
t_{i,j+1} ≥ t_ij + x_ij, 1 ≤ i ≤ m, 1 ≤ j ≤ n − 1,
C_j = t_mj + x_mj, 1 ≤ j ≤ n,
x_ij, t_ij ≥ 0.

Here, it is assumed that the jobs are numbered in the order they appear in an optimal schedule, x_ij is the processing time of operation O_ij, t_ij is its starting time, and C_j is the completion time of job j. The range of functions that allow fixing the job sequence includes f = C_max (the job order can be arbitrary), f = L_max (earliest due date first; see Sahni 1979) and f = ΣC_j (shortest processing time first; see Conway et al. 1967). Thus, in the case of f ∈ {C_max, L_max, ΣC_j}, problems O|plbl|f and F|plbl|f are solvable via linear programming. However, special properties of these pliability problems allow us to develop faster algorithms. We present them in Sects. 4.1, 4.2 and 7.1.
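This LP can be sketched in code. The following is our illustrative implementation (not the authors' code), assuming SciPy is available, for the makespan objective f = C_max with the job sequence fixed to (1, . . . , n); for this objective the optimal value is max{max_j p_j, p(J)/m} (see Sect. 4.1), which the test instance confirms:

```python
import numpy as np
from scipy.optimize import linprog

def pliable_flowshop_lp(p, m):
    """LP for F|plbl|Cmax with the job sequence fixed to (1, ..., n).

    Variables: operation lengths x[i][j], start times t[i][j], and Cmax.
    Constraints: operation lengths of each job sum to p_j; each job's
    operations follow the machine order; each machine processes jobs
    in sequence order; Cmax dominates all completion times on M_m.
    """
    n = len(p)
    nv = 2 * m * n + 1                       # x, t, Cmax
    def X(i, j): return i * n + j            # index of x[i][j]
    def T(i, j): return m * n + i * n + j    # index of t[i][j]
    CMAX = 2 * m * n

    c = np.zeros(nv)
    c[CMAX] = 1.0                            # minimize Cmax

    A_eq, b_eq = [], []
    for j in range(n):                       # sum_i x[i][j] = p_j
        row = np.zeros(nv)
        for i in range(m):
            row[X(i, j)] = 1.0
        A_eq.append(row); b_eq.append(p[j])

    A_ub, b_ub = [], []
    def leq0(row):                           # row . v <= 0
        A_ub.append(row); b_ub.append(0.0)
    for j in range(n):                       # machine order within a job
        for i in range(1, m):
            row = np.zeros(nv)
            row[T(i - 1, j)] = 1.0; row[X(i - 1, j)] = 1.0; row[T(i, j)] = -1.0
            leq0(row)
    for i in range(m):                       # job order on each machine
        for j in range(1, n):
            row = np.zeros(nv)
            row[T(i, j - 1)] = 1.0; row[X(i, j - 1)] = 1.0; row[T(i, j)] = -1.0
            leq0(row)
    for j in range(n):                       # C_j = t[m-1][j] + x[m-1][j] <= Cmax
        row = np.zeros(nv)
        row[T(m - 1, j)] = 1.0; row[X(m - 1, j)] = 1.0; row[CMAX] = -1.0
        leq0(row)

    return linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                   A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                   bounds=[(0, None)] * nv)

res = pliable_flowshop_lp(p=[3, 5, 4], m=2)  # optimum max(5, 12/2) = 6
```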

Restricted pliability with a common lower bound
In this section, we establish two important properties for the type (ii) flow shop model F|plbl(p̲)|f with a common lower bound p̲ on operation lengths. Unfortunately, these properties cannot be easily generalized to the open shop version of the problem, O|plbl(p̲)|f. They also do not hold for the most general model of type (iii), where job splitting has to respect individual lower and upper bounds on operation lengths. In Sect. 3.2.1, we prove that for problem F|plbl(p̲)|f with an arbitrary (not necessarily non-decreasing) objective function f, there exists an optimal permutation schedule. Note that in Prot et al. (2013) Theorem 1 was proved for the more restricted type (i) problem (p̲ = 0) with a non-decreasing objective function. Their technique cannot be reused for our proof, as it involves cutting jobs into arbitrarily small pieces.
In Sect. 3.2.2, we present the common methodology based on job disaggregation and on decomposing the problem into two subproblems. The implementation details of that methodology are then elaborated in Sects. 5.1, 5.2 and 7.2.
(Fig. 1: Adjacent jobs swap: initial schedule S (above) and modified schedule S′ (below); gray boxes represent fixed parts of schedules S and S′, where the jobs J \ {u, v} are processed.)

The existence of an optimal permutation schedule for F|plbl(p̲)|f
We start with an auxiliary property based on adjacent swaps. By that property, the order of any two adjacent jobs u and v in a permutation schedule can be reversed without making changes to the rest of the schedule. To achieve this, the operation lengths of jobs u and v may be redistributed, if necessary.

Lemma 1 Given a feasible permutation schedule S for problem F|plbl(p̲)|f where job u is sequenced immediately before job v in the permutation, there exists another feasible permutation schedule S′ where u is sequenced immediately after v, while all remaining jobs are scheduled in the same time slots as in S.

Lemma 1 is illustrated in Fig. 1 and proved in Appendix 1. Note that the proof uses the property that there is a common lower bound p̲_ij = p̲ for the operation lengths and that the remaining processing time p_j − m p̲ of a job j can be distributed among the operations without restrictions. This does not work for problem F|plbl(p̲_ij, p̄_ij)|f.

Theorem 3 Given an arbitrary schedule S for problem F|plbl(p̲)|f, there exists a permutation schedule S′ in which every job has the same completion time on machine M_m as in S.
Proof The proof is done by induction on the number of machines m. For m = 1 the statement is obvious. Consider m ≥ 2, assuming that the statement of the theorem holds for m − 1 machines.
Let S be a non-permutation schedule on m machines. We split S into two subschedules, S(M_1, . . . , M_{m−1}) and S(M_m). By the induction hypothesis, the subschedule on the first m − 1 machines can be transformed into a permutation subschedule Ŝ(M_1, . . . , M_{m−1}) with the same completion times on machine M_{m−1}. If the combination of Ŝ(M_1, . . . , M_{m−1}) and S(M_m) is a permutation schedule, then no further action is needed. Otherwise consider Ŝ(M_1, . . . , M_{m−1}) and apply a sequence of adjacent swaps, described in the proof of Lemma 1. The swaps eventually result in the same job order as in S(M_m). We demonstrate that each swap on the first m − 1 machines does not cause conflicts with operations in S(M_m).
Assume u is sequenced immediately before v in S(M_1, . . . , M_{m−1}), but somewhere after v in S(M_m). Then, by Lemma 1, after swapping u and v on the first m − 1 machines, the completion time of job v on machine M_{m−1} is at most as large as before, and hence job v is not postponed on M_m. By the same lemma, the completion time of job u on M_{m−1} remains no larger than the completion time of job v before the swap. This means that u finishes on M_{m−1} before v starts on M_m, and therefore before u starts on M_m.
Performing at most O(n²) swaps in the part S(M_1, . . . , M_{m−1}), we get a permutation schedule on m machines without changing the completion times on machine M_m.
It is worth noting that in the proof of Theorem 3, the schedule transformations keep the operations on the last machine unchanged. This implies that an optimal permutation schedule exists for any objective function depending on job completion times, monotone or non-monotone. Note also that Theorem 3 does not hold for the more general problem F|plbl(p̲_ij, p̄_ij)|f since for the special case F||C_max with more than three machines there exist instances for which only non-permutation schedules are optimal (see, e.g., Potts et al. 1991).

A job disaggregation approach for problem F|plbl(p̲)|f
In this section, we introduce the disaggregation approach, which serves as a common methodology for solving problems with restricted pliability of type (ii). It provides the tool for constructing optimal schedules and for justifying their optimality. Problem-specific details on how the methodology can be implemented are presented in Sects. 5.1, 5.2 and 7.2. The main idea is to define for an instance I of problem F|plbl(p̲)|f two auxiliary instances by disaggregating the jobs into two parts: instance I^e of type (ii) with equal processing times and instance I^d of type (i) with diminished processing times. Optimal solutions to the two instances are then found and combined, delivering a solution to the initial problem.
Definition 1 For an instance I of problem F|plbl(p̲)|f, there are two associated instances: instance I^e of type (ii) with processing times p^e_j = m p̲ for all jobs j ∈ J and with the same lower bound p̲ as in the original instance I, and instance I^d of type (i) with processing times p^d_j = p_j − m p̲ for all jobs j ∈ J and zero lower bounds.
Let S^d and S^e be two feasible schedules for instances I^d and I^e which satisfy the following conditions; see Fig. 2 for an illustration.
(D1) S^d and S^e are permutation schedules with the same job sequence (1, 2, . . . , n).
(D2) S^e has a staircase structure, uniquely defined by the completion times C^e_ij = (i + j − 1) p̲ of its operations.
(D3) In S^d and S^e, the operations O_ij have lengths p^d_ij and p^e_ij, respectively, where p^e_ij = p̲.

Schedules S^d and S^e satisfying properties (D1)–(D3) can be easily combined to produce a permutation schedule S for the original instance I, as illustrated in Fig. 2. The job order remains the same as in schedules S^d and S^e, while the aggregate operation lengths p_ij are defined as p_ij = p^d_ij + p̲, so that the completion times of the operations satisfy

C_ij = C^d_ij + (i + j − 1) p̲, (5)

where C^d_ij and C^e_ij = (i + j − 1) p̲ are the completion times of operations O_ij in S^d and S^e.
Conversely, if in a permutation schedule S for instance I with the job order (1, 2, . . . , n) there are no idle times except for the time intervals [0, (i − 1) p̲] on machines M_i (as in Condition (D2) for schedule S^e), then S can be decomposed into two schedules S^e and S^d such that conditions (D1)–(D3) and relation (5) hold.
Proof Consider schedules S^d, S^e and their disjunctive graph representation shown in Fig. 3. In that graph, a node (i, j) corresponds to operation O_ij. The nodes are associated with weights: p^d_ij for the graph representing S^d and p^e_ij = p̲ for the graph representing S^e. The length of a path in the graph is defined as the sum of the weights of the nodes on the path. The completion time of any operation O_ij is calculated as the length of a longest path from the source node (1, 1) to node (i, j). Combining S^d and S^e implies increasing the weights of all nodes in the graph for S^d by the same amount p̲. Since any path from (1, 1) to (i, j) includes exactly i + j − 1 nodes, the structure of a longest path does not change, and its length increases by (i + j − 1) p̲, so that (5) holds for the aggregate schedule.
Similar arguments justify the reverse statement on decomposing S into S^e and S^d.
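The path-counting argument can be checked computationally. The following small Python sketch (ours; the weights are illustrative) computes completion times as longest node-weighted paths and verifies relation (5): raising every node weight by p̲ shifts C_ij by exactly (i + j − 1)p̲. The code uses 0-based indices, so the shift reads (i + j + 1)·lb:

```python
def operation_completion_times(w):
    """Longest-path completion times in the disjunctive graph of a
    permutation schedule: C[i][j] = max(C[i-1][j], C[i][j-1]) + w[i][j]."""
    m, n = len(w), len(w[0])
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            up = C[i - 1][j] if i > 0 else 0.0
            left = C[i][j - 1] if j > 0 else 0.0
            C[i][j] = max(up, left) + w[i][j]
    return C

# diminished operation lengths p^d_ij and a common lower bound (illustrative)
pd = [[2, 0, 3], [1, 4, 0]]
lb = 1.5
Cd = operation_completion_times(pd)
Cagg = operation_completion_times([[x + lb for x in row] for row in pd])
# every path from node (0,0) to node (i,j) has exactly i+j+1 nodes, so
# raising each node weight by lb shifts C[i][j] by (i+j+1)*lb -- relation (5)
```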

Restricted pliability with individual lower and upper bounds
The following proposition establishes basic reductions for type (iii) problems.

Proposition 1 For flow shop and open shop problems, the following reductions hold:

α||f ∝ α|plbl(p̲_ij, p̄_ij)|f, (6)
α|plbl|f ∝ α|plbl(p̲)|f ∝ α|plbl(p̲_ij, p̄_ij)|f, (7)

where α ∈ {F, O}.
Here A ∝ B indicates that problem A polynomially reduces to problem B, see Garey et al. (1976).
The chain of reductions (7) reflects the fact that pliability model (i) is a special case of model (ii), which in its turn is a special case of model (iii).
Using (6), we can transfer all NP-hardness results known for F||f and O||f to the corresponding pliability problems of type (iii), concluding that problem O3|plbl(p̲_ij, p̄_ij)|C_max is NP-hard in the ordinary sense, while problems F3|plbl(p̲_ij, p̄_ij)|C_max, O|plbl(p̲_ij, p̄_ij)|C_max and α2|plbl(p̲_ij, p̄_ij)|f with α ∈ {F, O} and f ∈ {L_max, ΣC_j} are NP-hard in the strong sense.
Similarly, using (7), we transfer the NP-hardness results to the pliability problems of types (ii) and (iii). Note that for the problems of type (iii) these results are dominated by those obtained through reduction (6).

Unrestricted pliability: Minmax objectives
In this section, we apply the methodology from Sect. 3.1 to develop efficient algorithms for problems F|plbl|f_max and O|plbl|f_max with unrestricted pliability. To this end, we consider the relaxed problem P|pmtn|f and construct optimal F- and O-type schedules for it.
Problems F|plbl|C_max and O|plbl|C_max

Recall that McNaughton's wrap-around algorithm solves the relaxed problem P|pmtn|C_max, producing a schedule with the optimal makespan

C* = max{max_{j∈J} p_j, p(J)/m},

where p(J) = Σ_{j∈J} p_j. In order to force McNaughton's wrap-around algorithm to produce a solution of F- and O-type, suitable for problems F|plbl|C_max and O|plbl|C_max, we consider the jobs in the order of their numbering and allocate them in the time window [0, C*] first on machine M_m, then on M_{m−1}, etc., until all jobs are fully allocated. Notice that machine M_m is always fully occupied in the interval [0, C*], while other machines might only be partly occupied in that interval if C* = p_q for a job q ∈ J with the longest processing time. Note that after performing the wrap-around algorithm, there exist at most m − 1 jobs which have operations of length greater than zero on more than one machine, while all other jobs are processed on a single machine for their whole processing time. The order in which the machines are considered gives an easy way of introducing zero-length operations, as illustrated in Fig. 4. The resulting schedule satisfies the requirements of F- and O-type schedules, has the minimum makespan C*, and is therefore optimal for both problems, F|plbl|C_max and O|plbl|C_max.
Theorem 5 Problems F|plbl|C_max and O|plbl|C_max are solvable in O(n) time.
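The wrap-around construction described above can be sketched as follows (our Python illustration, not the authors' code; the tolerance handling is our choice). Machines are filled from M_m down to M_1, so a split job's earlier piece lands on a lower-indexed machine, as required for F-type schedules:

```python
def wrap_around(p, m):
    """Wrap-around schedule for P|pmtn|Cmax in the window [0, C*],
    filling machines from M_m down to M_1 with jobs in index order.

    Returns (C*, ops), where ops[i] lists (job, start, end) on M_{i+1}.
    Zero-length operations for the remaining job/machine pairs are
    implicit and can be added afterwards.
    """
    n = len(p)
    cstar = max(max(p), sum(p) / m)   # optimal makespan C*
    ops = [[] for _ in range(m)]
    i, t = m - 1, 0.0                 # current machine (M_m first), current time
    for j in range(n):
        remaining = p[j]
        while remaining > 1e-12:
            piece = min(remaining, cstar - t)
            ops[i].append((j, t, t + piece))
            remaining -= piece
            t += piece
            # machine full: wrap to the previous machine at time 0
            if t >= cstar - 1e-12 and (remaining > 1e-12 or j < n - 1):
                i -= 1
                t = 0.0
    return cstar, ops

cstar, ops = wrap_around(p=[3, 5, 4], m=2)
```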

Problems F|plbl|L_max and O|plbl|L_max
Consider the open shop problem O|plbl|L_max and its relaxed counterpart P|pmtn|L_max. Our approach to find an optimal O-type schedule for the latter problem consists of two stages. First, calculate the optimal value L* of the objective using, for example, the closed-form expression for L* from Baptiste (2000). That calculation requires O(n log n) time due to the sorting of the jobs. In the second stage, adjust the due dates to d̄_j = d_j + L*, treat them as deadlines, and find a feasible schedule for P|pmtn, C_j ≤ d̄_j|−. The fastest algorithm is due to Sahni (1979); its time complexity is O(n log(nm)), or O(n log n) under our assumption n ≥ m. It is a property of Sahni's algorithm that the resulting parallel machine schedule has at most one preemption per job, and a preempted job is not restarted on the same machine. Therefore, the schedule is of O-type if zero-length operations are added at the beginning of the schedule.
Consider now the flow shop problem F|plbl|L_max, using again its relaxed counterpart P|pmtn|L_max. The approach discussed above for constructing an O-type schedule is no longer applicable, since Sahni's algorithm used at the second stage does not guarantee that the resulting schedule is of F-type. An alternative approach for solving the second-stage problem is to apply the O(n log n + mn)-time algorithm by Baptiste (2000), which does find an optimal schedule of F-type, thus providing a solution to problem F|plbl|L_max.
Interestingly, the term mn in the complexity estimate cannot be reduced, since there are instances which require Ω(nm) nonzero operations in an optimal schedule. One such instance is presented in Appendix 2. Recall that for problem F|plbl|C_max with the makespan objective, there exists an optimal schedule with the total number of nonzero operations bounded by n + m − 1; see Sect. 4.1.

Restricted pliability with a common lower bound: Minmax objectives
In this section, we apply the methodology of Sect. 3.2 to problems F|plbl(p)|f_max, solving the flow shop problems F|plbl|f_max with unrestricted pliability as subproblems. Furthermore, we discuss the difficulties encountered for problem O|plbl(p)|C_max.

Problem F|plbl(p)|C_max
By Theorem 3, we limit our consideration to the class of permutation schedules and use the disaggregation technique from Sect. 3.2 to construct an optimal schedule and to justify its optimality. Given an instance I of problem F|plbl(p)|C_max, introduce instances I_d and I_e as in Definition 1.
Thus, if C_max(S_d) achieves its minimum value, then C_max(S) is minimum as well.
Following the approach from Sect. 4.1, construct an optimal schedule S_d* by McNaughton's wrap-around algorithm, using an arbitrary job permutation. Note that, by construction, schedule S_d* is of permutation type. The illustrative example presented in Fig. 2 satisfies this requirement. Without loss of generality, we assume that the jobs are sequenced in the order of their numbering, and the same job order is used in an optimal solution S_e* to I_e. Consider the aggregate schedule S*, obtained as a merger of S_d* and S_e*. Due to (9), S* is an optimal schedule among all permutation schedules, and by Theorem 3 it is globally optimal among all schedules.
The most time-consuming step in the described approach is the merger of S_d* and S_e*. Its time complexity is O(nm), and it defines the overall time complexity for constructing a complete optimal schedule for F|plbl(p)|C_max.
Following the ideas of a compact encoding of an optimal solution, known in the context of high-multiplicity scheduling problems (see, e.g., Brauner et al. 2005), we specify formulae for starting times of all operations, each of which can be computed in O(1) time, provided a special O(n) preprocessing is done.
At the preprocessing stage, the diminished instance I_d is analyzed and the calculations related to McNaughton's wrap-around algorithm are performed. The optimal schedule S_d* can be specified by m − 1 split jobs, which define three types of operations in S_d*: zero-length initial operations I, zero-length final operations F, and the remaining nonzero middle operations M, characterized by starting times t_ij(S_d*) and processing times p_ij(S_d*) for operations O_ij. After the merger of the two schedules S_d* and S_e*, the aggregate initial and final operations I ∪ F become of length p, while the lengths of the middle operations M increase by p; see Fig. 2, where the middle operations M are represented as shaded boxes. For the resulting schedule, the processing times and the starting times are calculated by formulae (10) and (11), where C*(M_i) is the completion time of the last operation on machine M_i in schedule S_d*, 1 ≤ i ≤ m.
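A minimal sketch of this disaggregation step, assuming p_j ≥ mp for all jobs; it returns only the operation lengths of the merged schedule, not the starting times.

```python
def merged_operation_lengths(p, m, p_min):
    """Sketch of the disaggregation step for F|plbl(p)|C_max with common
    lower bound p_min: solve the diminished instance I_d (p_j^d = p_j -
    m*p_min) by the wrap-around algorithm, then add p_min to every one of
    the n*m operations, so that former zero-length operations get length
    p_min. Returns L[i][j], the length of the operation of job j on
    machine M_{i+1}."""
    n = len(p)
    pd = [pj - m * p_min for pj in p]      # diminished processing times (assumed >= 0)
    C = max(max(pd), sum(pd) / m)          # optimal makespan of I_d
    L = [[0.0] * n for _ in range(m)]
    machine, t = m - 1, 0.0                # wrap-around packing as in Sect. 4.1
    for j, pj in enumerate(pd):
        rest = pj
        while rest > 1e-12:
            if C - t <= 1e-12:
                machine, t = machine - 1, 0.0
            piece = min(rest, C - t)
            L[machine][j] += piece
            rest -= piece
            t += piece
    return [[L[i][j] + p_min for j in range(n)] for i in range(m)]
```

Every resulting operation has length at least p_min, and each job's lengths sum to its total processing time.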
Theorem 8 An optimal schedule for problem F|plbl(p)|C_max can be specified by formulae (10), (11) for the processing times and starting times of all nm operations, each computable in O(1) time, with the optimal makespan value given by (12), where p(J) is the total processing time of all jobs.
Notice that the makespan formula (12) follows from (9), where C_d* is the optimal makespan of the diminished instance, calculated by McNaughton's formula C_d* = max{max_{j∈J} p_j^d, p^d(J)/m}. Here p_j^d = p_j − mp is the processing time of job j in the diminished instance I_d, and p^d(J) is the total processing time of all jobs in the diminished instance.

Problem F|plbl(p)|L_max
Now, we consider problem F|plbl(p)|L_max. By Theorem 3, it is sufficient to consider permutation schedules. The following lemma shows that we can fix the job sequence in accordance with the earliest due date (EDD) order.

Lemma 2 For problem F|plbl(p)|L_max, there exists an optimal permutation schedule with the jobs sequenced in the EDD order.
The lemma can be proved using pairwise interchange arguments, by swapping adjacent jobs violating the EDD order and verifying that the L_max-value does not increase. Note that the swapping of adjacent jobs is always feasible, as established in Lemma 1.
Given an instance I of F|plbl(p)|L_max, renumber the jobs so that d_1 ≤ d_2 ≤ · · · ≤ d_n, and apply the job disaggregation methodology of Sect. 3.2.2, defining the two instances I_d and I_e; in particular, instance I_e has p_j^e = mp and d_j^e = (j + m − 1)p, and the parameters of the original instance I decompose accordingly. In the class of permutation schedules with the fixed job sequence (1, 2, . . . , n), let S_e, S_d and S be the schedules for instances I_e, I_d and I, respectively. Note that S_e is the same as the top left schedule in Fig. 2, with L_max(S_e) = 0. By Theorem 4, L_max(S) = L_max(S_d).
An optimal F-type schedule S_d with the EDD job sequence (1, 2, . . . , n) can be constructed by the algorithm from Baptiste (2000); see Sect. 4.2. Combining S_d (with the smallest possible value of L_max) and S_e (with the same job sequence) delivers an optimal schedule S for the original instance I. The most time-consuming step is the algorithm from Baptiste (2000), which takes O(n log n + mn) time, dominating the time needed to renumber the jobs in the EDD order and the time for combining the two schedules.
Theorem 9 Problem F|plbl(p)|L_max is solvable in O(n log n + mn) time.
Note that, as opposed to Sect. 5.1, we cannot eliminate the term mn from the complexity estimate by introducing a compact encoding. Indeed, even the easier problem F|plbl|L_max with unrestricted pliability, which needs to be solved as a subproblem, already requires O(n log n + mn) time; see Sect. 4.2.

Problem O|plbl(p)|C_max
The open shop problem O|plbl(p)|C_max appears to be much harder to handle than the corresponding flow shop version. While for the model studied in the previous section F-type permutation schedules have a well-defined structure, the O-type schedules for the current model provide a greater level of flexibility. Another difficulty comes from the optimality check: in contrast to problem O|plbl|C_max, there exist instances for O|plbl(p)|C_max where the lower bound C* is unachievable. Consider the following instance:

Instance I_2: m = 3, p = 2,

j:    1   2  3  4  5  6  7  8
p_j:  21  6  6  6  6  6  6  6

The average machine workload (1/3)p(J) and the processing time of the longest job, p_1, have the same value of 21, resulting in the lower bound value C* = 21. For instance I_1 that lower bound can be achieved, while for instance I_2 the lower bound is unachievable: it is impossible to process job 1 without some waiting time in-between its operations, while observing the restriction p = 2 and ensuring that no machine is idle before it finishes processing all jobs. This is due to the fact that all operations of all other jobs are of even length 2, whereas job 1 has an odd processing time and cannot be split into three operations of even length; see Fig. 5 for an illustration.
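The lower bound itself is straightforward to compute; under our reading of the table of instance I_2 (p_1 = 21 and seven jobs of length 6), it reproduces the stated value C* = 21.

```python
def makespan_lower_bound(p, m):
    """The lower bound C* used in the text: the maximum of the average
    machine workload p(J)/m and the longest total processing time."""
    return max(sum(p) / m, max(p))
```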
In the following, we consider the cases from Table 4 where problem O| plbl( p)|C max can be solved efficiently. We use the notations p 1 , p 2 and p 3 for the processing times of the three longest jobs, assuming p 1 ≥ p 2 ≥ p 3 .
In Case 1 we adjust the approach from Sect. 5.1 developed for problem F|plbl(p)|C_max, which is based on the disaggregation technique and Theorem 8. By the same theorem, an optimal schedule can be fully defined in O(nm) time. Case 2 follows immediately from the more general result for problem O2|plbl(p̲_ij, p̄_ij)|C_max, which we present in Sect. 6.2.
Conditions that define Cases 3–5 also result in optimal schedules with a permutation-like structure. They were derived by Koulamas and Kyparisis (2015) for the open shop problem with m = 3 machines and with each job consisting of equal-size operations. Due to our assumption p_j ≥ mp (necessary for feasibility), splitting each job into m equal-size operations of length p_j/m leads to a feasible schedule for the model with pliable jobs. Moreover, the makespan of each optimal schedule for Cases 3–5 achieves the lower bound C*. Therefore, the resulting schedules are optimal for O|plbl(p)|C_max.
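The equal-size splitting used in Cases 3–5 is immediate; the feasibility check mirrors the assumption p_j ≥ mp stated above.

```python
def proportionate_split(p, m, p_min):
    """Equal-size splitting for Cases 3-5: each job j is divided into m
    operations of length p_j/m, feasible for the pliable model with
    common lower bound p_min since p_j >= m*p_min is assumed."""
    assert all(pj >= m * p_min for pj in p)
    return [[pj / m] * m for pj in p]
```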
The longest processing time order, assumed in Koulamas and Kyparisis (2015) for the whole set of jobs J , is not needed once the three longest jobs {1, 2, 3} are identified, so that the optimal schedules in Cases 3-5 can be found in O(n) time.
It is likely that the permutation-like property holds for the general case of problem O| plbl( p)|C max . An optimal job splitting may violate the equal-size property, with possibly unequal splitting of a job into m operations, as illustrated by the top schedule of Fig. 5. However, a proportionate open shop schedule, where jobs are split into equal-size operations, can be a good starting point for identifying the boundary jobs, processed at the beginning and at the end on each machine. The optimal operation lengths can then be found via linear programming. Unfortunately, we were unable to prove the correctness of the outlined approach and leave it for future research.

Restricted pliability with individual lower and upper bounds: Makespan objective
In this section, we consider flow shop and open shop problems with m = 2 machines, the makespan objective, and individual lower and upper bounds on operation processing times. To simplify the notation, we denote the two machines by A and B, and the lower and upper bounds of the operations by a̲_j, ā_j (for machine A) and by b̲_j, b̄_j (for machine B). The objective is to find an order of the jobs on each machine and the lengths a_j and b_j of the A- and B-operations for every job j, 1 ≤ j ≤ n, so that a_j + b_j = p_j, a̲_j ≤ a_j ≤ ā_j, b̲_j ≤ b_j ≤ b̄_j, and the makespan is minimized.

Problem F2|plbl(p̲_ij, p̄_ij)|C_max
We demonstrate that problem F2|plbl(p̲_ij, p̄_ij)|C_max is NP-hard and that its special case with a fixed job order can be solved via linear programming. Interestingly, the counterpart of the problem with flexible operations is NP-hard in both cases; see Gupta et al. (2004) and Lin et al. (2016).

Theorem 10 Problem F2|plbl(p̲_ij, p̄_ij)|C_max is NP-hard.
Proof Consider an instance of PARTITION with integers e_1, . . . , e_n and Σ_{j=1}^n e_j = 2E. The objective is to decide whether a set J_1 ⊂ {1, 2, . . . , n} exists with Σ_{i∈J_1} e_i = E. We construct an instance of the flow shop problem with n + 1 jobs: each job j, 1 ≤ j ≤ n, has total processing time p_j = e_j with zero lower bounds and upper bounds e_j for its operations on both machines, while job n + 1 has p_{n+1} = 2E with lower and upper bounds on both machines equal to E. Notice that job n + 1 has a fixed splitting, with two operations of length E, and for any permutation schedule it partitions the remaining jobs into two subsets: jobs J_1 preceding n + 1 and jobs J_2 which follow it.
It is easy to verify that PARTITION has a solution if and only if a flow shop schedule of makespan C_max = 2E exists; see Fig. 6 for an illustration, where the A-operations of jobs J_1 and the B-operations of jobs J_2 have zero length.
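For a yes-instance of PARTITION, the schedule of Fig. 6 can be evaluated as follows; the encoding of the instance is our reading of the construction above.

```python
def reduction_makespan(e, J1):
    """Sketch of the schedule from the proof of Theorem 10 (cf. Fig. 6),
    assuming J1 solves PARTITION. A-operations of jobs in J1 and
    B-operations of the remaining jobs J2 get length 0; job n+1 occupies
    machine A in [0, E] and machine B in [E, 2E]."""
    E = sum(e) // 2
    assert sum(e[j] for j in J1) == E
    J2_load = sum(e[j] for j in range(len(e)) if j not in J1)
    # machine B finishes job n+1 at 2E; machine A finishes the J2 block at E + J2_load
    return max(2 * E, E + J2_load)
```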
The problem becomes solvable via linear programming if a job sequence is fixed, even in the case of more than two machines and for more general objective functions. For this, we need to extend the LP formulation (4) by Prot et al. (2013), adding the box inequalities p̲_ij ≤ x_ij ≤ p̄_ij for all variables x_ij.

Problem O2|plbl(p̲_ij, p̄_ij)|C_max
Solving problem O2|plbl(p̲_ij, p̄_ij)|C_max involves two decisions: finding the job splitting p_j = a_j + b_j for all jobs j ∈ J, and sequencing the operations with fixed lengths on the two machines so as to minimize the makespan. The second task can be done in O(n) time using the well-known algorithm by Gonzalez and Sahni (1976), which constructs an optimal schedule with the makespan

C_max = max{Σ_{j∈J} a_j, Σ_{j∈J} b_j, p_q}. (16)

Here, the first two terms correspond to the loads of machines A and B, while the last term is the processing time of a longest job q, p_q = max_{j∈J} p_j.

In what follows we formulate three LP problems LP(A), LP(B), and LP(q), one for each term in (16), aimed at finding an optimal job splitting. Notice that some of the problems may be infeasible. An optimal solution is selected among the solutions to these three problems as the one with the smallest makespan value.

Consider first problem LP(A), formulated for the class of schedules with C_max = Σ_{j∈J} a_j, assuming that the first term in (16) determines the maximum. If Δ denotes the difference between the first and the third term in (16), then this class of schedules is characterized by Σ_{j∈J} a_j = p_q + Δ ≥ Σ_{j∈J} b_j, where Δ ≥ 0, and minimizing the makespan is equivalent to minimizing Δ. From the first and the third constraints we derive expressions for Δ and b_j, and rewrite LP(A) in terms of the variables a_j alone. The resulting problem is the knapsack problem with continuous variables a_j, j ∈ J, solvable in O(n) time (Balas and Zemel 1980). Problem LP(B) is formulated similarly for the class of schedules with C_max = Σ_{j∈J} b_j; it is also solvable in O(n) time.
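Formula (16) itself is easy to state in code; for fixed splittings a_j, b_j it gives the optimal two-machine open shop makespan.

```python
def gonzalez_sahni_makespan(a, b):
    """Optimal two-machine open shop makespan for fixed operation lengths,
    by formula (16): the maximum of the two machine loads and the total
    processing time of a longest job."""
    return max(sum(a), sum(b), max(x + y for x, y in zip(a, b)))
```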
Consider now problem LP(q), formulated for the class of schedules with C_max = p_q. Since the makespan value is constant, there is no objective function to minimize, and we only need to find a feasible solution. Using the expression b_j = p_j − a_j for j ∈ J, the constraints reduce to p(J) − p_q ≤ Σ_{j∈J} a_j ≤ p_q together with the box constraints ℓ_j ≤ a_j ≤ u_j, where ℓ_j and u_j are given by (19). The latter problem can be solved in O(n) time by performing the following steps.
1. Compute a* = Σ_{j∈J} ℓ_j, the smallest possible value of Σ_{j∈J} a_j.
2. If a* satisfies both main conditions, i.e., p(J) − p_q ≤ a* ≤ p_q, then stop: a feasible solution is found.
3. If a* > p_q, then stop: problem LP(q) is infeasible.
4. Otherwise, solve problem (20), increasing Σ_{j∈J} a_j as far as the upper bounds u_j allow, and verify whether for the found solution the required condition Σ_{j∈J} a_j ≥ p(J) − p_q is satisfied.

Problem (20) is again the knapsack problem with continuous variables a_j, j ∈ J, solvable in O(n) time.
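The feasibility test for LP(q) can be sketched as one routine; the bounds l and u play the role of ℓ_j and u_j from (19), and problem (20) is abstracted into a greedy raising of the variables, a standard way of handling a continuous knapsack.

```python
def lp_q(l, u, p_total, p_q):
    """Sketch of the feasibility test for LP(q): find a_j with
    l[j] <= a_j <= u[j] and p(J) - p_q <= sum(a_j) <= p_q,
    or report infeasibility by returning None."""
    a = list(l)
    a_star = sum(a)                      # smallest possible value of sum a_j
    if p_total - p_q <= a_star <= p_q:
        return a                         # lower bounds are already feasible
    if a_star > p_q:
        return None                      # infeasible
    target = p_total - p_q               # raise the sum toward p(J) - p_q
    if target > p_q:
        return None                      # the two main conditions are incompatible
    need = target - a_star
    for j in range(len(a)):
        if need <= 1e-12:
            break
        d = min(u[j] - a[j], need)
        a[j] += d
        need -= d
    return a if need <= 1e-12 else None
```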
To summarize, each of the problems LP(A), LP(B) and LP(q) can be solved in O(n) time, and this is the overall time complexity of the described approach for solving O2|plbl(p̲_ij, p̄_ij)|C_max.

Theorem 11 Problem O2|plbl(p̲_ij, p̄_ij)|C_max is solvable in O(n) time.
We conclude this section by reviewing the results for another related problem, namely the parallel machine problem with restricted preemption and the makespan objective, studied by Ecker and Hirschberg (1993), Baranski (2011), and Pienkosz and Prus (2015). In that problem, job preemption may happen several times, but each job part has to be at least p time units long, for some lower bound p. Notice that, unlike the pliability problem O|plbl(p)|C_max, it is not required that every job is split into exactly m pieces, one per machine.
Polynomial-time algorithms for the parallel machine problem with restricted preemption, where no job part may be shorter than p, are known only for special cases with two machines; see Table 5. The complexity of the general case with m = 2 is left open in the literature. Interestingly, our algorithm presented in this section finds an optimal schedule not only for the pliability problem O2|plbl(p̲_ij, p̄_ij)|C_max, but it also solves the problem with restricted preemption for m = 2 machines, assuming p̲_ij = p for all operations and p_j ≥ 2p.

Unrestricted and restricted pliability: Minsum objectives
In this section, we consider pliability problems with minsum objectives, focusing on problems with unrestricted pliability and restricted pliability with a common lower bound p. The restricted problems of type (iii) are strongly NP-hard, since by Proposition 1 they are not easier than the related classical problems F2||f and O2||f, which are known to be strongly NP-hard for all traditional minsum objectives f; see Brucker (2007).

Problems F|plbl|ΣC_j and O|plbl|ΣC_j
As observed in Sect. 3.1, problem F|plbl|ΣC_j can be solved in polynomial time via linear programming, considering a fixed sequence of job completion times corresponding to the shortest processing time (SPT) order; the optimality of that order for P|pmtn|ΣC_j is stated, e.g., in Conway et al. (1967). A faster algorithm is based on the approach which constructs an optimal F-type schedule for problem P|pmtn|ΣC_j, formulated by Bruno and Gonzalez (1976) and Labetoulle et al. (1984) for the more general problem Q|pmtn|ΣC_j, where machines have different processing speeds. The algorithm can be described as follows.

Algorithm F-Sum
1. Construct an SPT schedule by assigning a shortest job to the earliest available machine, breaking ties arbitrarily.
2. Consider time intervals I_u = [C_{u−1}, C_u], 1 ≤ u ≤ n, defined by the completion times C_u of the jobs; for completeness set C_0 = 0. In each interval I_u, reallocate the job parts so that any machine M_i, 1 ≤ i ≤ m, processes the job with the index u + m − i in that interval, if u + m − i ≤ n, and no job otherwise; see the bottom schedule in Fig. 7.
Steps 1 and 2 of the algorithm are illustrated by the two schedules of Fig. 7. Note that Step 1 constructs an optimal schedule for problems P||ΣC_j and P|pmtn|ΣC_j, while Step 2 reshuffles operation parts without increasing the completion times of individual jobs, producing an F-type solution for P|pmtn|ΣC_j. By Corollary 1, the resulting schedule is optimal for F|plbl|ΣC_j.
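A sketch of both steps, with Step 2 represented through a compact encoding of the reallocation rule rather than an explicit reshuffling of job parts; the jobs are assumed to be given in SPT order.

```python
import heapq

def algorithm_f_sum(p, m):
    """Sketch of Algorithm F-Sum. Step 1 builds the parallel-machine SPT
    schedule and its completion times C_1 <= ... <= C_n; Step 2 is encoded
    by the function J(i, u): the job processed by machine M_i in interval
    I_u is the one with index u + m - i, or none (0) if that index
    exceeds n."""
    n = len(p)
    loads = [(0.0, i) for i in range(m)]
    heapq.heapify(loads)
    C = []
    for pj in p:                          # Step 1: shortest job to earliest machine
        t, i = heapq.heappop(loads)
        C.append(t + pj)
        heapq.heappush(loads, (t + pj, i))
    C.sort()
    def J(i, u):                          # Step 2, compact encoding (1-based i, u)
        j = u + m - i
        return j if j <= n else 0         # 0: no job assigned
    return C, J
```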
The combined time complexity of the two steps is O(n log n + mn). Following the ideas of a compact encoding of an optimal solution, we can use the following function J(i, u) that specifies, for each machine-interval pair (i, u), the job which is processed by machine M_i in interval I_u:

J(i, u) = u + m − i if u + m − i ≤ n, and J(i, u) = 0 otherwise, (21)

where the job index 0 means that no job is assigned.

Fig. 7 Two schedules with the same completion times for all jobs: an optimal schedule for problems P||ΣC_j and P|pmtn|ΣC_j (above) and an optimal schedule for problem F|plbl|ΣC_j (below)
Theorem 12 An optimal schedule for problem F|plbl|ΣC_j can be constructed in O(n log n + mn) time. It can be specified by formula (21), computable in O(1) time for each machine-interval pair (i, u), provided the O(n log n) preprocessing is done and the n intervals I_u, 1 ≤ u ≤ n, are found using Step 1 of Algorithm F-Sum.
Consider now problem O|plbl|ΣC_j. Constructing an O-type schedule optimal for problem P|pmtn|ΣC_j is simpler than constructing an F-type schedule. It is sufficient to adjust an SPT schedule produced by Step 1 of Algorithm F-Sum by adding zero-length operations at the beginning of the schedule, so that every job has an operation on every machine.

Theorem 13 Problem O|plbl|ΣC_j is solvable in O(n log n) time.

Problem F|plbl(p)|ΣC_j
In order to solve problem F|plbl(p)|ΣC_j, we use the disaggregation methodology from Sect. 3.2.2, which results in two simplified instances: one instance of problem F|plbl|ΣC_j with unrestricted pliability, which can be solved by the approach from the previous section, and one instance with equal processing times. Let the two instances be I_d and I_e, as in Definition 1, and the corresponding schedules be S_d and S_e. We assume that S_d and S_e satisfy conditions (D1)–(D3) required for Theorem 4. Note that if (D3) is not satisfied for S_d, i.e., if there are idle times on some machines, then they can be eliminated, without increasing job completion times, by left-shifting operations or by moving processing load to downstream machines.
By Theorem 4, the schedule S obtained as a merger of S_d and S_e satisfies Σ_{j=1}^n C_j = Σ_{j=1}^n C_j^d + Σ_{j=1}^n C_j^e. As the objective value Σ_{j=1}^n C_j^e of schedule S_e for instance I_e is the same for any permutation schedule S_e, Σ_{j=1}^n C_j achieves its minimum value if and only if Σ_{j=1}^n C_j^d is minimum. Thus, an optimal schedule for problem F|plbl(p)|ΣC_j can be found as a merger of the schedule S_e defined in Sect. 3.2.2 and the schedule S_d constructed by Algorithm F-Sum for instance I_d, as in Sect. 7.1.
The merger involves the n + m − 1 intervals of schedule S_e, which we denote by I_v, 1 ≤ v ≤ (m − 1) + n, and the n intervals of schedule S_d, defined in the previous section as I_u, 1 ≤ u ≤ n. Notice that the last n intervals of schedule S_e have exactly the same job allocation as the n intervals of schedule S_d. Thus, as a result of the merger, the combined schedule gets the first m − 1 intervals of length p taken from S_e, and the next n intervals taken from S_e and S_d; the resulting intervals are of length |I_u| + p, 1 ≤ u ≤ n. We modify the function J(i, u), introduced in the previous section for a compact encoding of schedule S_d. For the current problem F|plbl(p)|ΣC_j, the function J(i, v) specifies, for each machine-interval pair (i, v), the job which is processed by machine M_i in the v-th interval:

J(i, v) = v − i + 1 if 1 ≤ v − i + 1 ≤ n, and J(i, v) = 0 otherwise, (22)

where the job index 0 means that no job is assigned. Here the expression v − i + 1 corresponds to the main expression u + m − i from (21) after substituting u = v − (m − 1), the link between the u-th interval of S_d and the v-th interval of S_e.
Theorem 14 An optimal schedule for problem F|plbl(p)|ΣC_j can be constructed in O(n log n + mn) time by defining the lengths of the (m − 1) + n intervals and the mn operations allocated to them. It can be specified by formula (22), computable in O(1) time for each machine-interval pair (i, v), provided the O(n log n) preprocessing is done and the n intervals I_u, 1 ≤ u ≤ n, are found using Step 1 of Algorithm F-Sum.

Problem O|plbl(p)|ΣC_j
We find an optimal schedule for problem O|plbl(p)|ΣC_j by constructing a common optimal schedule for problems P||ΣC_j and P|pmtn|ΣC_j and reorganizing its structure in order to achieve a solution of O-type.
Without loss of generality, we assume that n is a multiple of m, i.e., n = Qm for some integer Q. Otherwise, we add as many jobs of maximum length as needed to satisfy this condition (at most m − 1 jobs are sufficient). Then we apply the approach described below, which would place the longest jobs at the end of the schedule, and remove the added jobs from the resulting schedule.
Construct an SPT schedule by assigning a shortest job to the earliest available machine; see the top schedule in Fig. 8. In what follows, we assume that the jobs are renumbered in SPT order, and the job numbering is from 0 up to n − 1 (in order to improve readability of the formulae for an optimal schedule).
Consider Q sections of the schedule, each of length mp: the first section is given by the interval [0, mp] and the remaining sections by the intervals [C_{qm−1}, C_{qm−1} + mp], 1 ≤ q ≤ Q − 1. In each section q, there are m jobs J_q = {j = qm + r | 0 ≤ r ≤ m − 1} allocated to the m machines, one job per machine. To justify this, notice that all jobs from J_0 start at time 0 and have processing times no less than mp. The property holds for every subsequent section q, since the jobs of J_q have processing times no less than mp and C_j ≥ C_{qm} for any j ∈ J_q. Note that each job appears in exactly one of the Q sections.
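Under our 0-indexed reading of the job numbering, the sections can be written down directly from the SPT completion times:

```python
def spt_sections(C, m, p_min):
    """Sketch of the Q = n/m sections of the SPT schedule (jobs 0..n-1):
    section 0 is [0, m*p_min]; section q >= 1 is
    [C[q*m - 1], C[q*m - 1] + m*p_min], where C holds the SPT completion
    times and p_min is the common lower bound p."""
    Q = len(C) // m
    secs = [(0.0, m * p_min)]
    for q in range(1, Q):
        s = C[q * m - 1]
        secs.append((s, s + m * p_min))
    return secs
```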
In order to produce a feasible O-type schedule with m operations per job, each of length no less than p, we adopt a two-stage approach: first redistribute the jobs within every section q, 0 ≤ q ≤ Q − 1; next redistribute the jobs in-between the sections. Stage 1 ensures that every machine gets one operation of length p of every job of the section; Stage 2 places a nonzero part of a job from in-between two sections next to a p-operation of the same job and combines them into one operation; see Fig. 8 for an illustration.

Fig. 8 Two schedules with the same completion times for all jobs: an optimal schedule for problems P||ΣC_j and P|pmtn|ΣC_j (above) and an optimal schedule for problem O|plbl(p)|ΣC_j (below)

Formally, in Stage 1 we split every section q into m subintervals of length p and reshuffle the jobs in each subinterval in a wrap-around manner. The reshuffling is done slightly differently depending on whether q is even or odd, as captured by the machine index assigned to the k-th operation of job j in formula (23) below. After Stage 1, the following property holds for any pair of jobs j ∈ J_q and j + m ∈ J_{q+1}, which were adjacent in the initial schedule: the last p-operation of j in section q and the first p-operation of j + m in the next section are assigned to the same machine. Due to this, at Stage 2 we can rearrange the job parts in-between the sections while keeping j and j + m adjacent: the last part of job j is placed on the same machine as the last p-operation of that job in section q, and the two operations are merged; the first part of j + m is placed on the same machine as the first p-operation of that job in section q + 1, and the two operations are also merged; see the bottom schedule in Fig. 8. Thus, we create a schedule with exactly m operations per job: the m − 2 middle operations are of minimum length p, while the first and the last operation of each job may be longer.
The operation lengths are thus defined as follows:
(a) all intermediate operations of any job j, corresponding to positions 2 ≤ k ≤ m − 1, have the common length p;
(b) all first operations of section q = 0 have the common length p;
(c) for any job j = qm + r processed in position k = 1 in section q, 1 ≤ q ≤ Q − 1, its first operation is merged with the part of that job positioned just before section q; the combined length is C_{qm−1} − C_{j−m} + p, where C_{qm−1} is the starting point of section q and C_{j−m} is the completion time of the job j − m which precedes job j in the initial schedule;
(d) for any job j = qm + r processed in position k = m in section q, 0 ≤ q ≤ Q − 1, its last operation is merged with the part of that job positioned just after section q; their combined length is C_j − (C_{qm−1} + mp) + p, where C_j is the completion time of the last part of job j in the initial schedule and C_{qm−1} + mp is the end point of section q (assuming C_{−1} = 0 for completeness).
For a compact encoding, the machine order can be represented by the function M(j, k), which defines the machine index for the k-th operation of job j = qm + r that appears in the schedule (this is not necessarily the operation on machine M_k). Formula (23) distinguishes the cases of even and odd q and, within each case, whether k − r ≥ 1 or k − r < 1; here 1 ≤ k ≤ m and 0 ≤ r ≤ m − 1. The described approach finds an optimal open shop schedule, since the job completion times in it are the same as in an optimal schedule for problem P|pmtn|ΣC_j. If n ≠ Qm and auxiliary jobs of maximum length have been added initially, we can assume without loss of generality that they are the last ℓ jobs to finish processing, for some ℓ < m. Their removal from the final schedule keeps the completion times of the remaining jobs equal to their completion times in an optimal schedule for P|pmtn|ΣC_j.
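Since formula (23) is not reproduced here, the following stand-in (our own rule, which may differ from the paper's exact formula) illustrates the kind of parity-alternating wrap-around assignment it describes: two opposite rotations for even and odd sections, chosen so that within each section every job visits every machine once and adjacent sections match up on the boundary operations.

```python
def stage1_machine(q, r, k, m):
    """A hedged stand-in for formula (23): machine index (1..m) of the
    k-th p-operation (k = 1..m) of the job with offset r (r = 0..m-1) in
    section q. Opposite rotations are used for even and odd q, so that
    the last operation of offset r in one section and the first operation
    of offset r in the next section land on the same machine."""
    if q % 2 == 0:
        return (r + k - 1) % m + 1
    return (r - k) % m + 1
```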
Theorem 15 An optimal schedule for problem O|plbl(p)|ΣC_j can be constructed in O(n log n + mn) time by constructing an SPT schedule for P||ΣC_j, splitting it into Q = n/m sections and rearranging the nm operations. It can be specified by formula (23), which defines the machine number for the k-th operation of job j, and by the rules (a)–(d) for the operation lengths, each computable in O(1) time, provided the O(n log n) preprocessing related to SPT scheduling is done.

Other minsum criteria
In this section, we give a brief overview of the other traditional minsum criteria f ∈ {Σw_jC_j, ΣU_j, Σw_jU_j, ΣT_j, Σw_jT_j}.

Weighted sum of completion times
By Theorem 2, problems α2|plbl|Σw_jC_j are NP-hard and problems α|plbl|Σw_jC_j are strongly NP-hard for α ∈ {F, O}.
Problem Om|plbl|Σw_jC_j can be solved in O(mn(Σp_j)^{m−1}) time by adapting the algorithm due to Lawler et al. (1993) developed for Pm||Σw_jC_j and adding zero-length operations at the beginning of the schedule. Note that the same schedule is optimal for Pm||Σw_jC_j and Pm|pmtn|Σw_jC_j.
For problem Fm|plbl|Σw_jC_j, by Theorem 1, an optimal solution can be found in the class of F-type schedules for Pm|pmtn|Σw_jC_j. If an optimal job permutation is known, then the LP formulation (4) from Prot et al. (2013) produces such a solution. Thus, the following two-step approach solves the problem.

1. Solve problem Pm|pmtn|Σw_jC_j by the O(mn(Σp_j)^{m−1})-time algorithm by Lawler et al. (1993) and renumber the jobs in the order of their completion times.
2. Solve LP (4) and treat its solution as a solution to Fm|plbl|Σw_jC_j.
Unfortunately, our methodology is not applicable to the corresponding pliability problems with a common lower bound. In the flow shop case, the disaggregation methodology of Sect. 3.2.2 cannot be adopted, as it requires a common permutation for the optimal schedules S_d and S_e for instances I_d and I_e. A common optimal permutation may not exist for arbitrary job weights w_j and processing times p_j.
In the case of the open shop, the approach from Sect. 7.3 cannot be generalized, since an optimal schedule for problem Pm|pmtn|Σw_jC_j does not necessarily have time intervals of length mp in which exactly m jobs are scheduled.

The number of late jobs
By Theorem 2, problems O|plbl|ΣU_j and F|plbl|ΣU_j are NP-hard. It is an open question whether these problems are solvable in pseudopolynomial time. Notice that this question is also open for P|pmtn|ΣU_j. In what follows we consider the versions of the above problems with a fixed number of machines.
Problem Om|plbl|ΣU_j can be solved in O(n^{3(m−1)}) time by adapting the algorithm due to Lawler et al. (1993) developed for Pm|pmtn|ΣU_j and adding zero-length operations at the beginning of the schedule.
For problem Fm|plbl|ΣU_j, by Theorem 1, an optimal solution can be found in the class of F-type schedules for Pm|pmtn|ΣU_j. For the latter problem, there exists an optimal schedule in which all on-time jobs are processed before the late jobs. Thus, we can use an optimal solution to Pm|pmtn|ΣU_j in order to define the largest set J_1 ⊆ J of on-time jobs. For scheduling them in the flow shop manner, introduce an auxiliary problem Fm|plbl|L_max defined on the set of jobs J_1. It can be solved in O(mn + n log n) time, as discussed in Sect. 4.2. Adding the jobs J \ J_1 at the end of the schedule provides a solution to the original problem Fm|plbl|ΣU_j. Combining O(n^{3(m−1)}) and O(mn + n log n), we conclude that the overall time complexity is O(n^{3(m−1)}), assuming m ≥ 2 and n ≥ m.
Next we consider the pliability problem Fm|plbl(p)|ΣU_j.
Note that the open shop problem Om|plbl(p)|ΣU_j is left open, since it causes difficulties similar to those encountered for the easier problem Om|plbl(p)|C_max with the makespan objective. Our approach is based on the following two properties, which hold even for the NP-hard problem F|plbl(p)|ΣU_j with an arbitrary number of machines.
Property 1 For problem F|plbl(p)|ΣU_j, there exists an optimal permutation schedule with the on-time jobs scheduled in non-decreasing order of due dates, followed by all late jobs.

Note that the first property can be proved by pairwise interchange arguments using adjacent swaps, as in Lemma 1. The second property can be proved by considering the L_max-equivalents of the two problems in question.

Property 2 Let I be an instance of problem F|plbl(p)|ΣU_j and let I_d be the diminished instance as in Definition 1.
Based on Properties 1 and 2, we formulate the following two-step approach.

1. Construct the diminished instance I_d for problem Fm|plbl(p)|ΣU_j and find the largest set J_1 ⊆ J of on-time jobs using the O(n^{3(m−1)})-time approach described at the beginning of this section.
2. Solve problem Fm|plbl(p)|L_max defined on the set of jobs J_1, using the O(mn + n log n)-time approach from Sect. 5.2. Adding the jobs J \ J_1 at the end of the schedule provides a solution to the original problem Fm|plbl(p)|ΣU_j.
The combined time complexity of the above two steps is O(n^{3(m−1)}), assuming m ≥ 2 and n ≥ m.

Weighted number of late jobs, total tardiness and weighted total tardiness
By Theorem 2, problems α2|plbl|ΣT_j and α2|plbl|Σw_jU_j are NP-hard in the ordinary sense, and problems α2|plbl|Σw_jT_j are strongly NP-hard for α ∈ {F, O}.
Problem Om|plbl|∑w_jU_j can be solved in O(n^{3m−5}(∑w_i)^2) time for m ≥ 3 and in O(n^2 ∑w_i) time for m = 2 by adapting the algorithms due to Lawler and Martel (1989) and Lawler et al. (1993) developed for Pm|pmtn|∑w_jU_j and adding zero-length operations at the beginning of the schedule. For problems Fm|plbl|∑w_jU_j and Fm|plbl(p)|∑w_jU_j we can use the same idea as in the previous section: first solve the related parallel machine problem to find an optimal set J_1 of on-time jobs (after creating the diminished instance in the case of problem Fm|plbl(p)|∑w_jU_j), then solve problem Fm|plbl|L_max or Fm|plbl(p)|L_max, respectively, for the job set J_1 to obtain a schedule in which all jobs of set J_1 are on time.
Finally, add the late jobs at the end of the schedule. Thus, problems Fm|plbl|∑w_jU_j and Fm|plbl(p)|∑w_jU_j are pseudopolynomially solvable with the same time complexity as problem Pm|pmtn|∑w_jU_j.
It is an open question whether problems O2|plbl|∑T_j and F2|plbl|∑T_j are solvable in pseudopolynomial time. Notice that this question is also open for P2|pmtn|∑T_j.

Pliability problems with n < m
If the number of jobs is smaller than the number of machines, n < m, an optimal schedule for a pliability problem with unrestricted pliability, or with restricted pliability under a common lower bound, exhibits a more regular structure than in the case n ≥ m. Note that for the flow shop problem Theorem 3 still holds, and we can limit our consideration to permutation schedules. If we restrict attention to a specific permutation π, we add that restriction in the second field of the problem notation.

Theorem 16 For problem O|n < m, plbl(p)|f, there exists an optimal schedule specified via functions which define, for the k-th operation of job j, the corresponding machine index, its operation length p(j, k) and its starting time S(j, k). For problem F|n < m, plbl(p)|f with a fixed job permutation π = (1, 2, . . . , n), there exists an optimal schedule with

The open shop schedule is illustrated in Fig. 9. The interval [0, (m − 1)p] is used for operations of length p sequenced in a wrap-around manner; the last m operations on the m downstream machines handle the remaining processing of all jobs, one job per machine. The completion times C_j satisfy (24) and cannot be reduced; thus, the resulting schedule is optimal.
For problem F|n < m, plbl(p)|f and a fixed job permutation π = (1, 2, . . . , n), consider a permutation schedule constructed in the following way. Split each job j into m − j initial operations of length p, followed by operation m − j + 1 of length p_j − (m − 1)p, followed by j − 1 tail operations, again of length p. For each job j, the initial operations are scheduled in a staircase manner, starting at time (j − 1)p. Operation m − j + 1 starts at time (m − 1)p for each job j. Finally, schedule the tail operations of length p, again in a staircase manner, as early as possible without violating any (permutation) flow shop constraints. The schedule is illustrated in Fig. 10.
The job completion times in the constructed schedule satisfy C_j = max{C_{j−1} + p, p_j + (j − 1)p}. Here, C_j = C_{j−1} + p corresponds to the case when the last operation of job j, which is of minimum length, starts on the last machine immediately after job j − 1 is completed, and C_j = p_j + (j − 1)p corresponds to the case when job j is processed contiguously after the minimum quantities of the preceding j − 1 jobs are processed on M_1. Since in either case C_j matches the lowest achievable value, the resulting schedule is optimal. Notice that the above formulae imply (25). For a compact encoding, we specify an optimal schedule via functions p(j, i) and S(j, i), which compute the processing times and starting times of operations O_ij. Here, C_j is given by (25). Note that in the flow shop, the i-th operation of a job j is operation O_ij processed by machine M_i.
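The two cases above combine into the recurrence C_j = max{C_{j−1} + p, p_j + (j − 1)p}, which can be evaluated in O(n) time for a fixed permutation. A small sketch (the function name is ours; jobs are assumed indexed in permutation order):

```python
def completion_times(p_total, p):
    """Completion times of the staircase schedule for a fixed permutation.

    p_total[j-1] is the total processing time p_j of the j-th job in the
    permutation; p is the common lower bound on operation lengths.
    Implements C_j = max(C_{j-1} + p, p_j + (j-1) * p) with C_0 = 0.
    """
    C, prev = [], 0
    for j, pj in enumerate(p_total, start=1):
        prev = max(prev + p, pj + (j - 1) * p)
        C.append(prev)
    return C
```

Unrolling the recurrence gives C_j = max_{k ≤ j} p_k + (j − 1)p, so the maximum completion time equals max_j p_j + (n − 1)p regardless of the chosen permutation.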
We conclude that for problem O|n < m, plbl(p)|f, a single schedule is optimal for any non-decreasing objective f, while solving problem F|n < m, plbl(p)|f reduces to finding an optimal job permutation minimizing f(C_1, . . . , C_n) with C_j given by (25). Notice that formula (25) is similar to the formula for C_j known for the proportionate flow shop Fm|p_ij = τ_j|f(C_1, . . . , C_n) (Pinedo 2016). In the latter problem, every job j ∈ J consists of m operations of equal length τ_j, so that the total length of job j is p_j = mτ_j. Not surprisingly, the sequencing problems with completion times given by (25) and (26) are similar to their single-machine counterparts: for f = C_max, any permutation provides the same value, namely C_max = max{p_j | j ∈ J} + (n − 1)p for the pliability problem.

Several types of lower bounds can be imposed on operation lengths: there can be a common lower bound p for all jobs, or the lower bounds p_ij can be individual for all job–machine pairs. The intermediate cases, lying in-between type (ii) and type (iii) models, are job-dependent lower bounds p_ij = p_j or machine-dependent lower bounds p_ij = p_i. Our study already provides some initial results, in particular those presented in Sects. 5.1, 6.1 and 6.2: the approach from Sect. 5.1 can be generalized for solving F2|plbl(p_ij = p_i)|C_max, the NP-hardness proof from Sect. 6.1 is applicable to problem F2|plbl(p_ij = p_j)|C_max, and the algorithm from Sect. 6.2 efficiently solves the most general open shop problem of type (iii). Other versions require further analysis. Another type of pliability can be defined in terms of the deviation from "ideal" operation lengths p⁰_ij. In such models, actual processing times p_ij have to be selected from intervals [p⁰_ij − ε, p⁰_ij + ε] with some given parameter ε.
Whenever an actual processing time exceeds its ideal value, p_ij > p⁰_ij, a cost may be incurred, associated with additional power or other resources for the extra work allocated to the machine above the expected "ideal" load. Alternatively, performing a part of an operation on a "wrong" machine may increase the processing time of that part, since a "wrong" machine may operate at a slower rate when processing the relocated operation part. The proposed model has similarities to models with controllable processing times, where operation lengths can be reduced via the usage of additional resources. It will be interesting to explore links between the proposed "ε-redistribution" model and the stream of research related to controllable processing times.

Schedule S: Properties
For the combined interval [ℓ_1, r_m], define subintervals of types I_1 and I_2 in accordance with the number of available machines: a subinterval is of type I_1 if only one machine is available for processing {u, v}, and it is of type I_2 if there are at least two machines available. An illustrative example is presented in Fig. 12. Denote the total lengths of I_1- and I_2-intervals by T_1 and T_2, respectively, with T_1 + T_2 = r_m − ℓ_1.
In what follows we formulate properties of I_1- and I_2-intervals for schedule S.
(P1) On machine M_1, an interval of type I_1 is followed by an interval of type I_2.
(P2) On every machine M_i, 2 ≤ i ≤ m − 1, there is either one interval of type I_2, or there are three intervals of types I_2, I_1, I_2, which appear in this order.
(P3) On machine M_m, an interval of type I_2 is followed by an interval of type I_1.
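The classification of subintervals into types I_1 and I_2, together with the totals T_1 and T_2, can be computed by a sweep over the endpoints of the time windows. The following is an illustrative sketch (the function name and the event-sweep formulation are ours, not part of the construction):

```python
def interval_type_lengths(windows):
    """Given time windows [l_i, r_i], one per machine, return (T1, T2):
    the total lengths of subintervals where exactly one machine is
    available (type I_1) and where at least two are available (type I_2).
    """
    # all distinct window endpoints, in increasing order
    events = sorted(set(t for l, r in windows for t in (l, r)))
    T1 = T2 = 0.0
    for a, b in zip(events, events[1:]):
        mid = (a + b) / 2  # probe point: availability is constant on (a, b)
        k = sum(1 for l, r in windows if l <= mid < r)
        if k == 1:
            T1 += b - a
        elif k >= 2:
            T2 += b - a
    return T1, T2
```

For windows satisfying (A1) and (A2) the two totals cover the whole combined interval, so T_1 + T_2 = r_m − ℓ_1.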
Property (P5) holds since during the time interval [ℓ_i, r_i], machine M_i processes one operation of job u and one operation of job v, each of length no less than p.
If property (P6) does not hold, then scheduling the longest job does not leave room of length p for the compulsory part of the other job on M_m (if job u is the longest) or on M_1 (if job v is the longest). In that case, no feasible permutation schedule exists with jobs u and v scheduled in the intervals [ℓ_i, r_i], 1 ≤ i ≤ m, in any order; a contradiction to the feasibility of S.
Finally, if property (P7) does not hold, then no feasible schedule exists even for the relaxed problem with m identical parallel machines processing the two jobs {u, v} of lengths p_u and p_v with preemption in the intervals [ℓ_i, r_i], 1 ≤ i ≤ m; again, a contradiction to the feasibility of S. Here we take into account that only intervals of type I_2 are suitable for processing two jobs simultaneously.

Schedule S′: Construction
Given time windows [ℓ_i, r_i], 1 ≤ i ≤ m, satisfying (A1), (A2) and (P1)-(P7), we construct schedule S′ by allocating operations of jobs v and u in these time windows, with v preceding u. Our task is to find the operation lengths p_iv and p_iu of these two jobs on all machines M_i, 1 ≤ i ≤ m.
Without loss of generality we assume that p_u ≥ T_2 + p and p_v ≥ T_2 + p (27); otherwise adjust (temporarily) the processing times of jobs u and v to max{p_u, T_2 + p} and max{p_v, T_2 + p}, respectively; the extra amount of processing will be removed from a feasible schedule after it is constructed. This condition simplifies the construction of the new schedule, and the adjustment does not affect (P1)-(P5) and keeps (P6), (P7) satisfied. Indeed, property (P6) for the adjusted processing times is of the form max{p_u, p_v, T_2 + p} ≤ T_1 + T_2 − p. It holds since the original property (P6) is satisfied for p_u, p_v, and by (P4). For property (P7), the sum of the adjusted processing times in the left-hand side is equal to one of the following values: p_u + p_v, p_u + (T_2 + p), (T_2 + p) + p_v, or 2(T_2 + p). The first expression is bounded by T_1 + 2T_2 if (P7) holds for the original p_u, p_v; the second and the third expressions are bounded by (T_1 + T_2 − p) + (T_2 + p) by (P6); the last expression is bounded by 2T_2 + T_1 by (P4). The algorithm for scheduling jobs u and v satisfying (27) consists of two stages.
Stage 1 Allocate the processing amount T_2 + p of job v, using the intervals [ℓ_i, min{r_{i−1}, ℓ_{i+1}}] on machines M_i, 1 ≤ i ≤ m, and, symmetrically, the processing amount T_2 + p of job u. The schedule found as a result of Stage 1 is shown in Fig. 13.
Stage 2 If p_u = p_v = T_2 + p, then no further action is needed. Otherwise use the intervals of type I_1 which are left free after Stage 1 for allocating the remaining quantities of jobs u and v, splitting each I_1-interval into three parts, some of which may be of zero length: one part for job v, another for an idle interval and the third for job u. For each machine, the splitting can be performed arbitrarily, but the resulting cumulative lengths of all operations of jobs u and v have to constitute p_u and p_v. Such a splitting can always be found, as justified by condition (C6) below.
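The Stage 2 splitting can be sketched as a greedy pass over the free I_1-intervals: each interval receives a v-part, then an idle part, then a u-part, with the remaining v-amount consumed first. This is an illustrative sketch (names are ours), assuming the remaining amounts fit, which is exactly what condition (C6) guarantees; overlap across machines cannot occur, since each I_1-interval has only one available machine.

```python
def split_I1_intervals(lengths, rem_v, rem_u):
    """Greedily split each free I_1-interval into (v-part, idle, u-part),
    some parts possibly of zero length, so that the remaining amounts
    rem_v and rem_u are allocated in full.
    Assumes rem_v + rem_u <= sum(lengths), i.e., condition (C6).
    """
    parts = []
    for L in lengths:
        v = min(rem_v, L)          # job v goes first on each machine (C1)
        u = min(rem_u, L - v)      # job u fills what is left
        parts.append((v, L - v - u, u))  # middle component is idle time
        rem_v -= v
        rem_u -= u
    return parts
```

Any other splitting with the same per-job totals would do equally well; the greedy choice merely makes the feasibility argument immediate.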
Note that if an interval of type I 2 has more than two available machines, then two of them receive jobs u and v for processing, while the remaining available machines are left idle (for example, an idle interval on machine M 2 in Fig. 13).
In order to justify the correctness of the algorithm, we demonstrate that the following conditions hold for the resulting schedule:
(C1) every machine M_i, 1 ≤ i ≤ m, processes job v first and job u next;
(C2) for each of the jobs v and u, the minimum processing amount p is assigned to each machine M_i, 1 ≤ i ≤ m;
(C3) the time intervals with v-operations on two consecutive machines do not overlap; the same is true for u-operations;
(C4) the time intervals with v- and u-operations on one machine do not overlap;
(C5) at the end of Stage 1, intervals of combined capacity 2T_2 + 2p are used to process the amount 2T_2 + 2p of jobs v and u;
(C6) at the end of Stage 2, the processing quantities p_v, p_u are allocated in full.
First we prove conditions (C1)-(C3) for job v; similar arguments are applicable for job u.
Condition (C1) holds since job v starts at time ℓ_i on every machine M_i, according to the allocation of Stage 1; further allocation of the same job on M_i performed at Stage 2 is done in the I_1-interval, if one exists; such an interval follows the previously used I_2-interval by property (P2).
Condition (C2) holds for job v since the length of the interval [ℓ_i, min{r_{i−1}, ℓ_{i+1}}] is no smaller than p: ℓ_{i+1} − ℓ_i ≥ p by (A1) and r_{i−1} − ℓ_i > p by (A2).
Condition (C3) holds for job v since, after Stage 1 is completed, the allocated intervals [ℓ_i, min{r_{i−1}, ℓ_{i+1}}] and [ℓ_{i+1}, min{r_i, ℓ_{i+2}}] on machines M_i, M_{i+1} do not overlap for any i, 1 ≤ i ≤ m − 1. If at Stage 2 an interval of type I_1 is used for allocating job v on a machine M_i, 1 ≤ i ≤ m, no overlapping can happen, as any I_1-interval is associated with only one available machine.
We now prove the remaining conditions. Condition (C4) follows from the splitting strategy of Stage 2.
Condition (C5) holds since at the end of Stage 1 operations of job v fully occupy all intervals of type I_2 together with [ℓ_1, ℓ_1 + p], and operations of job u fully occupy all intervals of type I_2 together with [r_m − p, r_m].
In order to prove condition (C6), first recall that due to (27) we have p_u, p_v ≥ T_2 + p. The parts of the processing amounts of jobs u and v that still need to be allocated after Stage 1 are p_u − (T_2 + p) and p_v − (T_2 + p), respectively.
The remaining capacity of the I_1-intervals is T_1 − 2p. Then condition (C6) follows from property (P7) rewritten as p_u + p_v − 2(T_2 + p) ≤ T_1 − 2p. Finally, in the case that one of the original processing times is smaller than T_2 + p, the extra amount can be arbitrarily removed from intervals of type I_2, as long as the processing times on all machines remain at least p.
Thus, the described algorithm constructs the required schedule S′.

Appendix 2: An instance of problem F|plbl|L_max with Ω(nm) nonzero operations in an optimal solution
The following example shows that the term mn in the complexity estimate O(n log n + mn) for problem F|plbl|L_max in Theorem 7 cannot be eliminated. Consider an instance with m machines and n jobs under the assumption that m < n/3 (28).
The job set J consists of three types of jobs, U = {u_1, u_2, . . . , u_m}, H = {h_1, h_2, . . . , h_{n−2m}} and V = {v_1, v_2, . . . , v_m}, with the following characteristics: Clearly, d = n − m + 1 is the maximum due date in the instance. We demonstrate that in an optimal solution, there is a unique way of allocating the jobs from U ∪ H such that every job u_j is split into j unit-length operations and every job h_j is split into m unit-length operations; see Fig. 14 for an illustration. This implies that the total number of nonzero operations is at least Q = m(m + 1)/2 + (n − 2m)m > nm/3, (29) where the last inequality holds by (28). Deriving the estimate Q, we do not count the nonzero operations associated with the jobs V, as there may be multiple ways for their allocation. First notice that for the relaxed problem P|pmtn|L_max, the optimal objective value is L_max = 0, which can be calculated using the closed-form expression from Baptiste (2000). Moreover, the total processing time of all n jobs is equal to the total capacity dm of all machines in the interval [0, d]. Thus, for any schedule with L_max = 0, every machine M_i operates without idle times in [0, d].
Without focusing on the allocation of the operations of jobs V on machine M_m, consider the allocation of the jobs U ∪ H on that machine. Job u_1 has to be fully processed in the time interval [0, 1], completing at time 1 on M_m. If the operation of u_1 on M_m is of length p_{m,u_1} < 1, then there is an idle time on M_m, which is impossible in an optimal schedule with L_max = 0. Job u_2 has to be fully processed in the time interval [0, 2], completing at time 2 on M_m. Again, the operation length of u_2 on M_m has to be p_{m,u_2} = 1 in order to avoid an idle time on that machine.
Continuing this line of argument, it is easy to prove by induction that machine M_m processes unit-length operations of the jobs U ∪ H in the order of their numbering, first U and then H. Such an allocation on M_m induces deadlines for completing the jobs U ∪ H on machine M_{m−1}: d_{u_j} − 1 for the U-jobs and d_{h_j} − 1 for the H-jobs. In this way, we obtain a smaller instance with machines {M_1, . . . , M_{m−1}} and adjusted job characteristics: for every job from U ∪ H the d- and p-values are reduced by 1, while for the V-jobs, their total processing time is reduced by 1. Since this subinstance is similar to the initial one, the above arguments are applicable to prove that machine M_{m−1} processes a zero-length operation of u_1 first, followed by the unit-length operations of the jobs U \ {u_1} ∪ H in the order of their numbering.
Proceeding similarly, we conclude that there is a unique way of allocating the jobs U ∪ H in an optimal F-type schedule, and it leaves on every machine M_i, 1 ≤ i ≤ m, the time window [d − m + i − 1, d] for processing the jobs V. One possible allocation of the V-jobs is presented in Fig. 14, where each job v_i is processed in full on one of the machines, with zero-length operations on the remaining machines. Thus, estimate (29) holds and the constructed instance has Ω(nm) nonzero operations.