Open Shop Scheduling with Synchronization

In this paper, we study open shop scheduling problems with synchronization. This model has the same features as the classical open shop model, where each of the n jobs has to be processed by each of the m machines in an arbitrary order. Unlike the classical model, jobs are processed in synchronous cycles, which means that the m operations of the same cycle start at the same time. Within one cycle, machines which process operations with smaller processing times have to wait until the longest operation of the cycle is finished before the next cycle can start. Thus, the length of a cycle is equal to the maximum processing time of its operations. In this paper, we continue the line of research started by Weiß et al. (Discrete Appl Math 211:183–203, 2016). We establish new structural results for the two-machine problem with the makespan objective and use them to formulate an easier solution algorithm. Other versions of the problem, with the total completion time objective and those which involve due dates or deadlines, turn out to be NP-hard in the strong sense, even for m = 2 machines. We also show that relaxed models, in which cycles are allowed to contain fewer than m jobs, have the same complexity status.


Introduction
Scheduling problems with synchronization arise in applications where job processing includes several stages, performed by different processing machines, and all movements of jobs between machines have to be done simultaneously. This may be caused by special requirements of job transfers, as it happens, for example, if jobs are installed on a circular production unit which rotates to move jobs simultaneously to machines of the next stage (see Soylu et al. 2007; Huang 2008; Waldherr and Knust 2014). Alternatively, there may be health and safety regulations requiring that no machine is in operation while jobs are being removed from or moved to a machine. Similar synchronization takes place in the context of switch-based communication systems, where senders transmit messages to receivers in a synchronous manner, as this eliminates possible clashes for receivers (see Gopal and Wong 1985; Rendl 1985; Kesselman and Kogan 2007).
Synchronization arises naturally in assembly line systems where each assembly operation may start only after all preceding operations are completed, see Doerr et al. (2000), Chiang et al. (2012), and Urban and Chiang (2016), and the survey by Boysen et al. (2008). In the context of shop scheduling models, synchronization aspects were initially studied for flow shops (Soylu et al. 2007; Huang 2008), and later for open shops (Weiß et al. 2016). In the latter paper the makespan problem is addressed, with the main focus on the underlying assignment model. In the current paper we continue that line of research, elaborating further the study of the makespan minimization problem and addressing other variants of the model with different objective functions.
Formally, the open shop model with synchronization is defined as follows. As in the classical open shop, n jobs J 1 , J 2 , . . . , J n have to be processed by m machines M 1 , M 2 , . . . , M m , n ≥ m. Each job J j , 1 ≤ j ≤ n, consists of m operations O i j for 1 ≤ i ≤ m, where O i j has to be processed on machine M i without preemption for p i j time units. The synchronization requirement implies that job processing is organized in synchronous cycles, with operations of the same cycle starting at the same time. Within one cycle, machines which process operations of smaller processing times have to wait until the longest operation of the cycle is finished before the next cycle can start. Thus, the length of a cycle is equal to the maximum processing time of its operations. Similar to the classical open shop model, we assume that unlimited buffer exists between the machines, i.e., jobs which are finished on one machine can wait for an arbitrary number of cycles to be scheduled on the next machine.
The goal is to assign the nm operations to the m machines in n cycles such that a given objective function f is optimized. Function f depends on the completion times C j of the jobs J j , where C j is the completion time of the last cycle in which an operation of job J j is scheduled. Following the earlier research by Huang (2008) and Weiß et al. (2016), we denote synchronous movement of the jobs by "synmv" in the β-field of the traditional three-field notation. We write O|synmv| f for the general synchronous open shop problem with objective function f and Om|synmv| f if the number m of machines is fixed (i.e., not part of the input). The most common objective function is to minimize the makespan C max , defined as the completion time of the last cycle of a schedule. If deadlines D j are given for the jobs J j , the task is to find a feasible schedule with all jobs meeting their deadlines, C j ≤ D j for 1 ≤ j ≤ n. We use the notation O|synmv, C j ≤ D j |− for the feasibility problem with deadlines. In problem O|synmv|Σ C j the sum of all completion times has to be minimized.
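To make the cycle mechanics concrete, the following sketch (the schedule encoding and helper name are ours, not the paper's) computes the cycle lengths, the completion times C j and the makespan of a given synchronous schedule.

```python
# A minimal sketch: a schedule is a list of cycles; each cycle maps a
# machine index to the job it processes in that cycle. A cycle lasts as
# long as its longest operation; C_j is the end of the last cycle that
# contains an operation of job j.

def evaluate_schedule(p, cycles):
    """p[i][j] = processing time of job j on machine i.
    cycles = list of dicts {machine: job}; returns (C, makespan)."""
    n = len(p[0])
    finish = [0.0] * n          # completion time C_j of each job
    t = 0.0                     # end of the previous cycle
    for cyc in cycles:
        length = max(p[i][j] for i, j in cyc.items())
        t += length
        for j in cyc.values():
            finish[j] = t
    return finish, t

# Two machines, two jobs: cycle 1 = (J0 on M0, J1 on M1), cycle 2 swapped.
p = [[3, 5], [2, 4]]
C, cmax = evaluate_schedule(p, [{0: 0, 1: 1}, {0: 1, 1: 0}])
# cycle lengths: max(3, 4) = 4 and max(5, 2) = 5, hence makespan 9
```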
Usually, we assume that every cycle contains exactly m operations, one on each machine. In that case, together with the previously stated assumption n ≥ m, exactly n cycles are needed to process all jobs. However, sometimes it is beneficial to relax the requirement for exactly m operations per cycle. Then a feasible schedule may contain incomplete cycles, with less than m operations. We denote such a relaxed model by including "rel" in the β-field. Similar to the observation of Kouvelis and Karabati (1999) that introducing idle times in a synchronous flow shop may be beneficial, we will show that a schedule for the relaxed problem O|synmv, rel| f consisting of more than n cycles may outperform a schedule for the nonrelaxed problem O|synmv| f with n cycles.
The synchronous open shop model is closely related to long-known optimization problems in the area of communication networks: the underlying model is the max-weight edge coloring problem (MEC), restricted to complete bipartite graphs; see Weiß et al. (2016) for the link between the models, and Mestre and Raman (2013) for the most recent survey on MEC and other versions of max-coloring problems. As stated in Weiß et al. (2016), the complexity results from Rendl (1985) and Demange et al. (2002), formulated for MEC, imply that problems O|synmv|C max and O|synmv, rel|C max are strongly NP-hard if both n and m are part of the input. Moreover, using the results from Demange et al. (2002), Escoffier et al. (2006), Kesselman and Kogan (2007), de Werra et al. (2009), and Mestre and Raman (2013), formulated for MEC on cubic bipartite graphs, we conclude that these two open shop problems remain strongly NP-hard even if each job is processed by at most three machines and if there are only three different values for nonzero processing times.
On the other hand, if the number of machines m is fixed, then problem Om|synmv|C max can be solved in polynomial time as an m-dimensional assignment problem with a nearly Monge weight array of size n × · · · × n, as discussed in Weiß et al. (2016) and in Sect. 2 of the current paper. The relaxed version Om|synmv, rel|C max admits the same assignment model, but with a larger m-dimensional weight array, extended by adding dummy jobs. As observed in Weiß et al. (2016), the number of dummy jobs can be bounded by (m − 1)n. Both problems, Om|synmv|C max and Om|synmv, rel|C max , are solvable in O(n) time, after the operations are sorted in nonincreasing (or nondecreasing) order of processing times on all machines. However, this algorithm becomes impractical for larger instances, as the constant factor of the linear growth rate exceeds (m!)^m .
The remainder of this paper is organized as follows. In Sect. 2, we consider problem O2|synmv|C max and establish a new structural property of an optimal solution. Based on it, we formulate a new (much easier) O(n)-time solution algorithm, assuming jobs are presorted on each machine. Then we address in more detail problem O|synmv, rel|C max and provide a tight bound on the maximum number of cycles needed to get an optimal solution. In Sects. 3 and 4 we show that problems O2|synmv, C j ≤ D j |− and O2|synmv|Σ C j are strongly NP-hard. Finally, conclusions are presented in Sect. 5.

Minimizing the makespan
In this section, we consider synchronous open shop problems with the makespan objective. Recall that problem Om|synmv|C max with a fixed number of machines m can be solved in O(n) time (after presorting) by the algorithm from Weiß et al. (2016). In Sect. 2.1 we elaborate further results for the two-machine problem O2|synmv|C max , providing a new structural property of an optimal schedule, which results in an easier solution algorithm. In Sect. 2.2 we study the relaxed problem O|synmv, rel|C max and determine a tight bound on the maximum number of cycles in an optimal solution.

Problem O2|synmv|C max
Problem O2|synmv|C max can be naturally modeled as an assignment problem. Consider the two nonincreasing sequences of processing times of the operations on machines M 1 and M 2 , renumbering the jobs in accordance with the sequence on M 1 . To simplify the notation, let (a i ), i = 1, . . . , n, and (b j ), j = 1, . . . , n, be the corresponding sequences of processing times, so that a 1 ≥ a 2 ≥ · · · ≥ a n and b 1 ≥ b 2 ≥ · · · ≥ b n . The ith operation on M 1 with processing time a i and the jth operation on M 2 with processing time b j can be paired in a cycle with cycle time max{a i , b j } if these two operations are not associated with the same job. Let F = {(1, j 1 ), (2, j 2 ), . . . , (n, j n )} be the set of forbidden pairs: (i, j i ) ∈ F if the ith operation on M 1 and the j i th operation on M 2 belong to the same job.
Using binary variables x i j to indicate whether the ith operation on M 1 and the jth operation on M 2 (in the above ordering) are paired in a cycle, the problem can be formulated as the following variant of the assignment problem:

(AP F ) minimize Σ i Σ j w i j x i j
subject to Σ j x i j = 1 for i = 1, . . . , n,
Σ i x i j = 1 for j = 1, . . . , n,
x i j = 0 for (i, j) ∈ F,
x i j ∈ {0, 1} for 1 ≤ i, j ≤ n,

with the costs

w i j = max{a i , b j }.    (1)

Due to the predefined 0-variables x i j = 0 for forbidden pairs of indices (i, j) ∈ F, it is prohibited that two operations of the same job are allocated to the same cycle. In Weiß et al. (2016) a slightly different formulation AP ∞ is used to model synchronous open shop as an assignment problem: the constraints x i j = 0 for (i, j) ∈ F are dropped, and instead the costs are

c i j = max{a i , b j } for (i, j) ∉ F, and c i j = ∞ for (i, j) ∈ F.    (2)

Here, for the forbidden pairs (i, j) ∈ F there are ∞-entries in the cost matrix, one in every row and every column. A feasible solution of the open shop problem exists if and only if the optimal solution value of AP ∞ is less than ∞.
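The data of both formulations can be assembled mechanically. The sketch below (the helper is ours, not the paper's) sorts the processing times nonincreasingly, derives the forbidden pairs F (0-indexed here), and builds the matrices W and C; it is shown on the data of Example 1 below.

```python
# Build the cost data of the two-machine assignment model:
# W holds the finite costs w_ij = max{a_i, b_j} used in AP_F,
# C holds the same costs with infinity on the forbidden pairs (AP_inf).
import math

def build_cost_matrices(p1, p2):
    n = len(p1)
    order1 = sorted(range(n), key=lambda j: -p1[j])  # job order on M1
    order2 = sorted(range(n), key=lambda j: -p2[j])  # job order on M2
    a = [p1[j] for j in order1]
    b = [p2[j] for j in order2]
    # (i, j) is forbidden if position i on M1 and position j on M2
    # belong to the same job
    F = {(i, order2.index(order1[i])) for i in range(n)}
    W = [[max(a[i], b[j]) for j in range(n)] for i in range(n)]
    C = [[math.inf if (i, j) in F else W[i][j] for j in range(n)]
         for i in range(n)]
    return a, b, F, W, C

a, b, F, W, C = build_cost_matrices([7, 5, 3, 2], [3, 4, 6, 2])
# a = [7, 5, 3, 2], b = [6, 4, 3, 2], F = {(0,2), (1,1), (2,0), (3,3)}
```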
Note that the problem AP ∞ in its more general form is the subject of our related paper, Weiß et al. (2016). In the current paper we focus on the formulation AP F , which is equivalent to AP ∞ for costs c i j of form (2). Formulation AP F allows us to produce stronger results, see Theorems 1 and 2 in the next section. The main advantage of formulation AP F is the possibility to use finite w-values for all pairs of indices, including the values w i j defined for forbidden pairs (i, j) ∈ F.

Example 1 Consider an example with n = 4 jobs and the following processing times:

j      1  2  3  4
p 1 j  7  5  3  2
p 2 j  3  4  6  2

The sequences of processing times are (a i ) = (7, 5, 3, 2) and (b j ) = (6, 4, 3, 2), where the jobs appear on M 1 in the order J 1 , J 2 , J 3 , J 4 and on M 2 in the order J 3 , J 2 , J 1 , J 4 . The forbidden pairs are F = {(1, 3), (2, 2), (3, 1), (4, 4)}, and the associated matrix W is

i \ j  1  2  3  4
1      7  7  7  7
2      6  5  5  5
3      6  4  3  3
4      6  4  3  2

while the matrix C coincides with W except for the ∞-entries in the positions of the forbidden pairs F. The entries in bold font in W and C correspond to the optimal solution illustrated in Fig. 1. Here, x 12 = x 21 = 1 for the first two cycles, pairing jobs J 1 , J 2 and J 2 , J 3 , and x 34 = x 43 = 1 for the other two cycles. The makespan is 7 + 6 + 3 + 3 = 19.

It is known (cf. Bein et al. 1995; Burkard et al. 1996) that the matrix W = (w i j ) defined by (1) satisfies the Monge property, i.e., for all row indices 1 ≤ i < r ≤ n and all column indices 1 ≤ j < s ≤ n we have

w i j + w r s ≤ w i s + w r j .    (3)

Without the additional condition on forbidden pairs F, a greedy algorithm finds an optimal solution X = (x i j ) to the assignment problem, and that solution is of the diagonal form

x i i = 1 for 1 ≤ i ≤ n, x i j = 0 for i ≠ j.    (4)

Forbidden pairs or, equivalently, ∞-entries may keep the Monge property satisfied, so that the greedy algorithm remains applicable, as discussed by Burkard et al. (1996) and Queyranne et al. (1998). However, if at least one of the forbidden pairs from F is a diagonal element, then solution (4) is infeasible for problem AP F . A similar observation holds for problem AP ∞ if an ∞-entry lies on the diagonal. In that case, as demonstrated in Weiß et al. 
(2016), there exists an optimal solution X which satisfies a so-called corridor property: the 1-entries of X belong to a corridor of width 2 around the main diagonal, so that every x i j = 1 of an optimal solution satisfies the condition |i − j| ≤ 2. Notice that in Example 1 there are two forbidden pairs in F of the diagonal type, (2, 2) and (4, 4); the specified optimal solution satisfies the corridor property. A related term, used typically in two-dimensional settings, is the bandwidth (see, e.g., Ćustić et al. 2014). The corridor property is proved in Weiß et al. (2016) in its generalized form for the case of the m-dimensional assignment problem with a nearly Monge array (an array in which ∞-entries are allowed and the Monge property has to be satisfied by all finite entries). Thus, this property also holds for the m-machine synchronous open shop problem. It appears that for the case of m = 2 the structure of an optimal solution can be characterized in a more precise way, which makes it possible to develop an easier solution algorithm.
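Example 1 is small enough to verify exhaustively. The brute-force check below (added for illustration; not an algorithm from the paper) enumerates all 4! assignments of M 1 -operations to M 2 -operations, discards those that use a forbidden pair, and confirms that the optimal makespan is 19.

```python
# Brute-force verification of Example 1 (0-indexed data).
from itertools import permutations

a = [7, 5, 3, 2]                       # sorted times on machine M1
b = [6, 4, 3, 2]                       # sorted times on machine M2
F = {(0, 2), (1, 1), (2, 0), (3, 3)}   # forbidden (same-job) pairs

best = min(
    sum(max(a[i], b[pi[i]]) for i in range(4))
    for pi in permutations(range(4))
    if all((i, pi[i]) not in F for i in range(4))
)
# best == 19, attained e.g. by the blocks {1,2} and {3,4}
```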
In the following, we present an alternative characterization of optimal solutions for m = 2 and develop an efficient algorithm for constructing an optimal solution. Note that the arguments in Weiß et al. (2016) are presented with respect to problem AP ∞ ; in this paper our arguments are based on the formulation AP F and on its relaxation AP F =∅ , with the condition "x i j = 0 for (i, j) ∈ F " dropped.
A block X h of size s is a square submatrix of X consisting of s × s elements with exactly one 1-entry in each row and each column of X h . We call a block large if it is of size s ≥ 4, and small otherwise. Our main result establishes a block-diagonal structure of an optimal solution X = (x i j ): small blocks are arranged around the main diagonal, with 0-entries elsewhere. The admissible small blocks, referred to as blocks of type (6), are the single 1-entry (size 1), the anti-diagonal block of size 2, and the two blocks of size 3 with cyclic patterns of 1-entries. Note that the remaining size-3 submatrix

0 0 1
0 1 0
1 0 0

is excluded from consideration.
Theorem 1 ("Small Block Property") There exists an optimal solution to problem AP F with block-diagonal structure, containing only blocks of type (6).
This theorem is proved in Appendix 1. The small block property leads to an efficient O(n)-time dynamic programming algorithm to find an optimal solution. Here we use formulation AP ∞ rather than AP F , as infinite costs can be easily handled by recursive formulae. The algorithm enumerates optimal partial solutions, extending them repeatedly by adding blocks of size 1, 2, or 3.
Let S i denote an optimal partial solution for a subproblem of AP ∞ defined by the submatrix of W with the first i rows and i columns, where w i j = ∞ for forbidden pairs (i, j) ∈ F in accordance with formulation AP ∞ . If an optimal partial solution S i is known, together with solutions S i−1 and S i−2 for smaller subproblems, then by Theorem 1 the next optimal partial solution S i+1 can be found by selecting one of the following three options:
- extending S i by adding a block of size 1 with x i+1,i+1 = 1; the cost of the assignment increases by w i+1,i+1 ;
- extending S i−1 by adding a block of size 2 with x i,i+1 = x i+1,i = 1; the cost of the assignment increases by w i,i+1 + w i+1,i ;
- extending S i−2 by adding a block of size 3 with the smallest cost, i.e., the cheaper of the two cyclic blocks with x i−1,i = x i,i+1 = x i+1,i−1 = 1 or with x i−1,i+1 = x i,i−1 = x i+1,i = 1.

Let w(S i ) denote the cost of S i . Then

w(S i+1 ) = min { w(S i ) + w i+1,i+1 , w(S i−1 ) + w i,i+1 + w i+1,i , w(S i−2 ) + min{ w i−1,i + w i,i+1 + w i+1,i−1 , w i−1,i+1 + w i,i−1 + w i+1,i } }.    (7)

The initial conditions are defined as follows: w(S 0 ) = 0, w(S 1 ) = w 11 , w(S 2 ) = min{ w 11 + w 22 , w 12 + w 21 }. Thus, w(S 3 ), …, w(S n ) are computed by (7) in O(n) time.
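The recursion can be implemented directly. The following sketch (the function name and encoding are ours, not the paper's) computes the optimal cost for a given AP ∞ cost matrix, with forbidden pairs represented by infinite entries; applied to the matrix C of Example 1 it returns the optimal makespan.

```python
# A sketch of the O(n) dynamic program based on the small block property:
# extend optimal prefix solutions by a diagonal block of size 1, 2, or 3
# (for size 3, the cheaper of the two cyclic patterns). Forbidden pairs
# carry cost math.inf, as in formulation AP_inf.
import math

def min_makespan(c):
    """c: n x n cost matrix of AP_inf; returns the optimal total cost."""
    n = len(c)
    if n == 0:
        return 0
    S = [0.0] * (n + 1)             # S[k] = optimal cost of k x k prefix
    S[1] = c[0][0]
    if n >= 2:
        S[2] = min(c[0][0] + c[1][1], c[0][1] + c[1][0])
    for k in range(3, n + 1):
        i = k - 1                   # index of the new last row/column
        S[k] = min(
            S[k - 1] + c[i][i],                        # block of size 1
            S[k - 2] + c[i - 1][i] + c[i][i - 1],      # block of size 2
            S[k - 3] + min(                            # two size-3 blocks
                c[i - 2][i - 1] + c[i - 1][i] + c[i][i - 2],
                c[i - 2][i] + c[i - 1][i - 2] + c[i][i - 1],
            ),
        )
    return S[n]

inf = math.inf
C = [[7, 7, inf, 7],       # cost matrix of Example 1: W with inf on F
     [6, inf, 5, 5],
     [inf, 4, 3, 3],
     [6, 4, 3, inf]]
result = min_makespan(C)   # optimal makespan 19
```

Backtracking which of the three options attains each minimum recovers the blocks themselves, and hence the cycles of an optimal schedule.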
Theorem 2 Problem O2|synmv|C max can be solved in O(n) time.
Concluding this subsection, we provide several observations about the presented results. First, the small block property for problem O2|synmv|C max has implications for the assignment problem AP ∞ with costs (2) and for more general cost matrices. The proof of the small block property is presented for problem AP F . It is easy to verify that the proof remains valid for an arbitrary Monge matrix W, not necessarily of type (1); the only properties used in the proof are that the set F has no more than one forbidden pair (i, j) in every row and in every column, and that all entries of the matrix W, including those corresponding to the forbidden pairs F, satisfy the Monge property. Thus, the small block property and the O(n)-time algorithm carry over to problem AP ∞ if (i) there is no more than one ∞-entry in every row and every column of the cost matrix C, and (ii) matrix C can be transformed into a Monge matrix by modifying only the ∞-entries, keeping all other entries unchanged.
Note that not every nearly Monge matrix satisfying (i) can be completed into a Monge matrix satisfying (ii); see Weiß et al. (2016) for further details. However, the definition (2) of the cost matrix C for the synchronous open shop allows a straightforward completion by replacing every entry c i j = ∞ by c i j = max{a i , b j }. While completability was not used in the proof of the more general corridor property presented in Weiß et al. (2016), the proof of the small block property depends heavily on the fact that the matrix of the synchronous open shop problem can be completed into a Monge matrix. In particular, we use completability when we accept potentially infeasible blocks in the proof of Lemma 3 and repair them later on with the help of Lemmas 4 and 5. In the literature, the possibility of completing an incomplete Monge matrix (a matrix with unspecified entries) was explored by Deineko et al. (1996) for the traveling salesman problem. They discuss Supnick matrices, a subclass of incomplete Monge matrices, for which completability is linked with several nice structural and algorithmic properties.
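Both conditions are easy to check computationally. The sketch below (helper names ours) tests the Monge property via adjacent 2 × 2 submatrices, which is sufficient, and completes a nearly Monge matrix of the synchronous open shop type by the substitution described above.

```python
# Monge check and completion of a nearly Monge matrix (illustrative).
import math

def is_monge(w):
    """Check w[i][j] + w[i+1][j+1] <= w[i][j+1] + w[i+1][j] for all
    adjacent 2x2 submatrices; this is sufficient for the full property."""
    n = len(w)
    return all(w[i][j] + w[i + 1][j + 1] <= w[i][j + 1] + w[i + 1][j]
               for i in range(n - 1) for j in range(n - 1))

def complete(c, a, b):
    """Replace every infinite entry c[i][j] by max{a[i], b[j]}."""
    return [[max(a[i], b[j]) if math.isinf(c[i][j]) else c[i][j]
             for j in range(len(b))] for i in range(len(a))]

a, b = [7, 5, 3, 2], [6, 4, 3, 2]          # data of Example 1
W = [[max(ai, bj) for bj in b] for ai in a]
F = [(0, 2), (1, 1), (2, 0), (3, 3)]       # forbidden pairs, 0-indexed
C = [row[:] for row in W]
for i, j in F:
    C[i][j] = math.inf
# W is Monge; C is only nearly Monge, but completing C recovers W
```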
Finally, we observe that while the assignment matrices arising from the multimachine case are completable in the same way as for the two-machine case (see Weiß et al. 2016), it remains open whether this can be used to obtain an improved result for more than two machines as well. The technical difficulties of that case are beyond the scope of this paper.

Problem O|synmv, rel|C max
In this section, we consider the relaxed problem O|synmv, rel|C max where more than n cycles are allowed, with unallocated (idle) machines in some cycles. This problem can be transformed to a variant of problem O|synmv|C max by introducing dummy jobs, used to model idle intervals on the machines. Dummy jobs have zero-length operations on all machines, and it is allowed to assign several operations of a dummy job to the same cycle. Thus, in a feasible schedule with dummy jobs, all cycles are complete, but some of the m operations in a cycle may belong to dummy jobs.
Similar to the observation of Kouvelis and Karabati (1999) that introducing idle times in a synchronous flow shop may be beneficial, we show that a schedule for the relaxed open shop problem O|synmv, rel|C max consisting of more than n cycles may outperform a schedule for the nonrelaxed problem O|synmv|C max with n cycles.
Example 2 Consider an example with m = 3 machines, n = 5 jobs and the following processing times:

j      1  2  3  4  5
p 1 j  3  2  4  3  1
p 2 j  5  3  2  3  1
p 3 j  4  5  1  4  1

In the upper part of Fig. 2 an optimal schedule for problem O3|synmv|C max with n = 5 cycles and a makespan of 18 is shown. For the relaxed problem O3|synmv, rel|C max , adding a single dummy job J 6 leads to an improved schedule with 6 cycles and makespan 17 (see the lower part of Fig. 2).
The maximum total number of cycles of nonzero length is nm, which occurs if each of the nm "actual" operations is scheduled in an individual cycle. Then, in each of these nm cycles one actual operation and m − 1 dummy operations are processed. To achieve an optimal schedule it is therefore sufficient to include n(m − 1) dummy jobs, each dummy job consisting of m zero-length operations. This implies that problem Om|synmv, rel|C max for a fixed number m of machines can be solved in polynomial time by the algorithm from Weiß et al. (2016).
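The reduction to complete cycles is mechanical. The sketch below (helper name ours) pads an instance with all-zero dummy jobs, by default the coarse bound of n(m − 1) dummies stated above; for the data of Example 2 a single dummy job already suffices.

```python
# Pad a synchronous open shop instance with dummy jobs so that the
# relaxed problem can be solved via the model with complete cycles.

def pad_with_dummies(p, count=None):
    """p[i][j]: processing times, m machines x n jobs.
    Appends `count` all-zero dummy jobs (default: n * (m - 1))."""
    m, n = len(p), len(p[0])
    if count is None:
        count = n * (m - 1)
    return [row + [0] * count for row in p]

p = [[3, 2, 4, 3, 1],      # data of Example 2
     [5, 3, 2, 3, 1],
     [4, 5, 1, 4, 1]]
padded = pad_with_dummies(p, count=1)   # one dummy job J6 suffices here
# padded now has n + 1 = 6 columns per machine
```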
In the underlying assignment model, combining an operation of an actual job (having processing time a i or b j ) with a dummy operation yields a cycle of length a i or b j , while combining two dummy operations yields an artificial cycle of length 0, even if both operations belong to the same dummy job. For the case of more than two machines the cost matrix can be adjusted analogously, see Weiß et al. (2016) for details.
Clearly, for algorithmic purposes it is desirable to keep the number of added dummy jobs as small as possible. For the synchronous flow shop problem F|synmv, rel|C max , instances exist where an optimal solution requires (n − 1)(m − 2) dummy jobs. In the following we show that for the open shop problem O|synmv, rel|C max at most m − 1 dummy jobs are needed to obtain an optimal solution.
Theorem 3 There exists an optimal solution to problem O|synmv, rel|C max with at most m − 1 dummy jobs, so that the number of cycles is at most n + m − 1.
Proof Let S be an optimal schedule with ξ dummy jobs, ξ ≥ m. We construct another schedule S′ with C max (S′) ≤ C max (S) and ξ − 1 dummy jobs. Notice that it is allowed to assign several operations of the same dummy job to one cycle.
Case 1 If there exists a cycle I which consists solely of dummy operations of the same job J d ∈ {J n+1 , J n+2 , . . . , J n+ξ }, then that dummy job can be eliminated and the desired schedule with ξ − 1 dummy jobs is found.
Case 2 If there exists a cycle I which consists solely of dummy operations, some of which belong to different dummy jobs, then we can achieve Case 1 by selecting a dummy job J d arbitrarily and swapping its operations from outside I with the dummy operations in I . The resulting schedule is feasible and has the same makespan.
Case 3 Suppose no cycle in S consists purely of dummy operations. Let I be the shortest cycle and let ν be the number of actual operations in I, 1 ≤ ν ≤ m. We demonstrate that each actual operation processed in I can be swapped with a dummy operation from another cycle. Consider an actual operation O i j in cycle I with machine M i processing job J j . Select another cycle I′ (its existence is demonstrated below) which does not involve an operation of J j and has a dummy operation on M i . Swap the operations on M i in I and I′, reducing the number of actual operations in I by 1. Clearly, after the swap both cycles are feasible, because introducing a dummy operation into I cannot cause a conflict, and because no operation of J j was processed in I′ before the swap. After the swap, both cycles I and I′ have either the same length as before or cycle I becomes shorter. Performing the described swaps for each actual operation O i j in cycle I, we arrive at Case 1 or 2. A cycle I′ with the required properties exists since
- there are at least ξ ≥ m cycles with a dummy operation on M i , all of them different from I;
- there are exactly m − 1 cycles, different from I, in which J j is processed on a machine other than M i .
Hence at least ξ − (m − 1) ≥ 1 cycles qualify as I′.
We continue by demonstrating that the bound m − 1 is tight.
Consider an instance with n = m + 1 jobs in which each of the jobs J 1 , J 2 , . . . , J m has processing time 2m on every machine, while job J m+1 has processing time 1 on every machine. An optimal schedule consists of m complete cycles of length 2m each, containing operations of the jobs {J 1 , J 2 , . . . , J m } only, and m incomplete cycles in which the single actual job J m+1 is grouped with m − 1 dummy jobs, see Fig. 3. The optimal makespan is C opt max = 2m² + m. In any schedule with less than m − 1 dummy jobs, at least one operation of job J m+1 is grouped with an operation of another actual job, the length of such a cycle being 2m. Consider the machine processing J m+1 in such a cycle: its remaining m operations of length 2m occupy m further cycles, so that the schedule contains at least m + 1 cycles of length 2m and its makespan is at least 2m(m + 1) = 2m² + 2m > C opt max .

Notice that since our paper focuses on scheduling aspects, we have presented Theorem 3 in the scheduling language for the sake of consistency and self-containment. Knowing that O|synmv, rel|C max is equivalent to the max-weight edge coloring problem on the complete bipartite graph K m,n , we conclude this section by linking Theorem 3 to the results known in the area of max-weight coloring. It is known that an optimal max-weight edge coloring in an edge-weighted graph G can always be obtained using at most 2Δ − 1 colors, where Δ is the maximum vertex degree of G, see for example Demange et al. (2002) and de Werra et al. (2009). This bound is worse than the bound given in Theorem 3: for a complete bipartite graph G = K m,n with m < n we have Δ = n, and therefore 2Δ − 1 = n + n − 1 > n + m − 1. However, for the vertex version of max-weight coloring on a vertex-weighted graph G, Demange et al. (2002) show that an optimal max-weight vertex coloring can be obtained using at most Δ + 1 colors. Note that the max-weight edge coloring problem on a graph H can be viewed as the max-weight vertex coloring problem on the line graph G = L(H) of H. Then, since the line graph of K m,n has maximum degree Δ = n + m − 2, the bound Δ + 1 on the number of colors yields n + m − 1, which is equal to the maximum number of cycles stated in Theorem 3.

Scheduling with deadlines
In this section, we consider problem O|synmv, C j ≤ D j |−, where each job J j , 1 ≤ j ≤ n, has a given deadline D j by which it has to be completed. We prove that finding a feasible schedule with all jobs meeting their deadlines is NP-complete in the strong sense, even if there are only two machines and each job has only one nonzero processing time. Furthermore, we show that problem O2|synmv, C j ≤ D j , D j ∈ {D′, D″}|−, where the set of all deadlines is limited to two values D′ and D″, is NP-complete at least in the ordinary sense. The proofs presented below are based on the ideas of Brucker et al. (1998), who established the complexity status of the parallel batching problem with deadlines.
Consider the 3-PARTITION problem (3-PART), known to be strongly NP-complete, cf. Garey and Johnson (1979): given a set Q = {1, . . . , 3q}, a bound E and natural numbers e i for every i ∈ Q, satisfying Σ i∈Q e i = q E and E/4 < e i < E/2, decide whether Q can be partitioned into q disjoint sets Q 1 , Q 2 , . . . , Q q such that Σ i∈Q k e i = E for each k, 1 ≤ k ≤ q.

Based on an instance of 3-PART, we construct an instance I (q) of the two-machine synchronous open shop problem O2|synmv, C j ≤ D j |− with n = 6q² jobs, q deadlines and two machines, denoted by A and B. Each job J j,l has two indices j and l to distinguish between jobs of different types, j = 1, 2, . . . , 2q and l = 1, 2, . . . , 3q. We introduce the constant W = q³E. For each l, 1 ≤ l ≤ 3q, the processing times a j,l and b j,l of the jobs J j,l on machines A and B are defined as follows: jobs J j,l with 1 ≤ j ≤ q have a j,l = lW + (q − j)e l and b j,l = 0; job J q+1,l has a q+1,l = 0 and b q+1,l = lW + qe l ; jobs J j,l with q + 2 ≤ j ≤ 2q have a j,l = 0 and b j,l = lW. The deadlines D j,l depend only on the index j: D j,l equals the total length of the first j components of the schedule S* constructed in the proof of Lemma 1 below (these lengths do not depend on the particular 3-PART solution), and all B-jobs share the largest deadline D q,l .

Throughout the proof we use the following terms for different classes of jobs. Parameter l, 1 ≤ l ≤ 3q, characterizes jobs of type l. For each value of l there are 2q jobs of type l, q of which have nonzero A-operations (we call these A-jobs) and the remaining q jobs have nonzero B-operations (we call these B-jobs). Among the q B-jobs of type l, there is one long B-job of type l, namely J q+1,l with processing time lW + qe l , and there are q − 1 short B-jobs of type l, namely J q+2,l , J q+3,l , . . . , J 2q,l , each with processing time lW. Overall, there are 3q long B-jobs, one of each type l, 1 ≤ l ≤ 3q, and 3q(q − 1) short B-jobs, with q − 1 short jobs of each type l. Note that, independent of l, job J j,l is an A-job if 1 ≤ j ≤ q and a B-job if q + 1 ≤ j ≤ 2q. With respect to the deadlines, the jobs with nonzero B-operations are indistinguishable. The jobs with nonzero A-operations have deadlines D j,l depending on j; we refer to those jobs as component j A-jobs. For each j, there are 3q jobs of that type.

Lemma 1 If the instance of 3-PART has a solution, then there exists a feasible schedule for the instance I (q) in which all jobs meet their deadlines.

Proof We construct a schedule S* consisting of q components Γ 1 , Γ 2 , . . . , Γ q , each of which consists of 3q cycles, not counting zero-length cycles. In component Γ k , 1 ≤ k ≤ q, machine A processes the 3q component k A-jobs, one job of each type l, l = 1, 2, . . . , 3q. Machine B processes 3 long B-jobs, namely those of the types l ∈ Q k , and 3(q − 1) short B-jobs, so that it also processes one B-job of each type l, l = 1, 2, . . . , 3q.

Within one component, every cycle combines an A-job and a B-job of the same type l, 1 ≤ l ≤ 3q. The ordering of cycles within each component is immaterial, but the components themselves appear in the order Γ 1 , Γ 2 , . . . , Γ q . Finally, there are 3q² cycles of length zero. We assume that each zero-length operation is scheduled immediately after the nonzero operation of the same job.
The resulting schedule S * is shown in Fig. 4. It is easy to verify that if Q 1 , Q 2 , . . . , Q q define a solution to the instance of 3-PART, then the constructed schedule S * is feasible with all jobs meeting their deadlines.
We now prove the reverse statement. The proof is structured into a series of properties where the last one is the main result of the lemma.

Lemma 2 If there exists a feasible schedule S for the instance I (q) of the synchronous open shop problem with q deadlines, then the following properties hold:
(1) each cycle of nonzero length contains an A-job of some type l and a B-job of the same type l, l ∈ {1, 2, . . . , 3q}; without loss of generality we can assume that each zero-length operation is scheduled in the cycle immediately after the nonzero-length operation of the same job;
(2) on machine A, all component u A-jobs precede all component u + 1 A-jobs, u = 1, 2, . . . , q − 1, so that the schedule S splits into components Γ 1 , Γ 2 , . . . , Γ q ;
(3) for each j, 1 ≤ j ≤ q, let Q j be the set of those indices l that correspond to long B-jobs scheduled in Γ j ; then the resulting sets Q 1 , Q 2 , . . . , Q q define a solution to the instance of 3-PART.
Proof (1) In a feasible schedule S satisfying the first property, all cycles have a balanced workload on machines A and B: in any component Γ k , 1 ≤ k ≤ q, the cycle lengths are W, 2W, …, 3qW, with the value qe l or (q − k)e l added. Thus, the total length of such a schedule is at least q · T W , where T W = W + 2W + · · · + 3qW is a lower bound on the total length of the cycles of one component. For a schedule that does not satisfy the first property, the machine load is unbalanced in at least two cycles, so that the lW-parts of the processing times do not coincide in these cycles. Thus, the total length of such a schedule is at least qT W + W = qT W + q³E. Since q > 1, the latter value exceeds the largest possible deadline, so that such a schedule cannot be feasible.

Note that the above especially shows that zero-length operations are only paired in cycles with other zero-length operations. Therefore, we can assume without loss of generality that zero-length operations are scheduled immediately after the nonzero-length operations of the same job. Indeed, if this is not the case, we can change the order of cycles, and possibly the assignment of zero-length operations to the zero-length cycles, in order to achieve the assumed structure without affecting the feasibility of the schedule.
(2) Consider a schedule S in which all component u A-jobs precede all component u + 1 A-jobs for u = 1, 2, . . . , i − 1, but after that a sequence of component i A-jobs is interrupted by an A-job of a later component. Then the last component i A-job completes not earlier than iT W + W, where iT W is a lower bound on the total length of the cycles containing the component u A-jobs, u = 1, 2, . . . , i, and W is the smallest length of a cycle that contains the interrupting A-job. Since this value exceeds the deadline of the component i A-jobs, such a schedule cannot be feasible. The second property implies that on machine A all component 1 A-jobs are scheduled first, followed by all component 2 A-jobs, etc. Thus, the sequence of jobs on machine A defines a splitting of the schedule S into components Γ 1 , Γ 2 , . . . , Γ q .
(3) Given a schedule S satisfying the first two properties, we first define sets Q 1 , Q 2 , . . . , Q q and then show that they provide a solution to 3-PART.
Schedule S consists of components Γ j , 1 ≤ j ≤ q. In each component Γ j machine A processes all component j A-jobs J j,l (1 ≤ l ≤ 3q), each of which is paired with a B-job of the same type l. Recall that the B-job J q+1,l of type l is long, with processing time lW + qe l . All other B-jobs J j,l , q + 2 ≤ j ≤ 2q, of type l are short, with processing time lW. Considering the long B-jobs of component Γ j , define the set Q j of the associated indices, i.e., l ∈ Q j if and only if the long B-job J q+1,l is scheduled in component Γ j . Denote the sum of the associated numbers in Q j by e(Q j ) := Σ l∈Q j e l .
The length of a cycle of type l in component Γ h is lW + qe l if the cycle contains the long B-job of type l, and lW + (q − h)e l otherwise, so that the total length of component Γ h equals T W + (q − h)qE + h · e(Q h ). For a feasible schedule S, the total length of the components Γ 1 , Γ 2 , . . . , Γ j does not exceed the common deadline D j,l of the component j A-jobs. Notice that the deadline of any B-job in component Γ j is not less than D j,l .
Thus, comparing the total length of the components Γ 1 , . . . , Γ j with the deadline D j,l , for any j, 1 ≤ j ≤ q, we get

(E − e(Q 1 )) + 2 (E − e(Q 2 )) + · · · + j (E − e(Q j )) ≥ 0.    (8)

If all inequalities in (8) hold as equalities, then it is easy to prove by induction that E − e(Q h ) = 0 for each h = 1, . . . , q, and therefore the partition Q 1 , Q 2 , . . . , Q q of Q defines a solution to 3-PART.
Assume the contrary, i.e., that there is at least one strict inequality in (8). Then a linear combination L of the inequalities (8) with strictly positive coefficients has to be strictly positive. Using the coefficients 1/j − 1/(j+1) for j = 1, 2, …, q − 1 and 1/q for j = q, we obtain a strict inequality whose right-hand side evaluates to zero, where the last equality in that evaluation follows from the definition of E for an instance of 3-PART. The obtained contradiction proves the third property of the lemma.
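For reference, the combinatorial target of this reduction, 3-PARTITION, can be checked mechanically. The following minimal sketch (with hypothetical element values, not taken from the paper) verifies that a proposed partition into triples with common sum E is valid:

```python
def is_3_partition(elements, groups, E):
    """Check whether 'groups' is a valid 3-PART solution: a partition of
    the index set {0, ..., 3q-1} into triples, each summing to E."""
    indices = [i for g in groups for i in g]
    if sorted(indices) != list(range(len(elements))):
        return False  # not a partition of all indices
    return all(len(g) == 3 and sum(elements[i] for i in g) == E
               for g in groups)

# 3q = 6 numbers, q = 2 groups, common sum E = 10 (hypothetical instance)
e = [4, 3, 3, 5, 4, 1]
print(is_3_partition(e, [[0, 1, 2], [3, 4, 5]], 10))  # True
print(is_3_partition(e, [[0, 3], [1, 2, 4, 5]], 10))  # False
```

In the reduction above, the sets Q_1, …, Q_q extracted from a feasible schedule would play the role of `groups`.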
Lemmas 1 and 2 together imply the following result.
Theorem 4 Problem O2|synmv, C_j ≤ D_j|− is NP-complete in the strong sense, even if each job has only one nonzero operation.
Similar arguments can be used to formulate a reduction from the PARTITION problem (PART) to the two-machine synchronous open shop problem, instead of the reduction from 3-PART. Notice that in the presented reduction from 3-PART all B-jobs have the same deadline, while A-jobs have q different deadlines, one for each component Γ_j defining a set Q_j. In the reduction from PART we only require two different deadlines D', D'', one for each of the two sets corresponding to the solution to PART. Similar to the reduction from 3-PART, we define component 1 A-jobs with deadline D' and component 2 A-jobs with deadline D'', which define a splitting of the schedule into two components Γ_1, Γ_2. For each of the natural numbers of PART we define one long B-job and one short B-job and show that the distribution of the long jobs within the two components of the open shop schedule corresponds to a solution of PART. Omitting the details of the reduction, we state the following result.
Theorem 5 Problem O2|synmv, C_j ≤ D_j, D_j ∈ {D', D''}|− with only two different deadlines is at least ordinary NP-complete, even if each job has only one nonzero operation.
At the end of this section we note that the complexity of the relaxed versions of the problems, which allow incomplete cycles modeled via dummy jobs, remains the same as stated in Theorems 4 and 5. Indeed, Property 1 of Lemma 2, stating that each nonzero operation of some job is paired with a nonzero operation of another job, still holds for the version with dummy jobs. Therefore, in the presence of dummy jobs a schedule meeting the deadlines has the same component structure as in Lemmas 1 and 2, so that the same reduction from 3-PART (PART) works for proving that O2|synmv, rel, C_j ≤ D_j|− is strongly NP-complete and O2|synmv, rel, C_j ≤ D_j, D_j ∈ {D', D''}|− is at least ordinary NP-complete.

Minimizing the total completion time
In this section, we prove that the synchronous open shop problem with the total completion time objective is strongly NP-hard even in the case of m = 2 machines. The proof uses some ideas by Röck (1984), who proved NP-hardness of problem F2|no-wait|ΣC_j. Note that the latter problem is equivalent to the synchronous flow shop problem F2|synmv|ΣC_j.
For our problem O2|synmv|ΣC_j we construct a reduction from the auxiliary problem AUX, which can be treated as a modification of the HAMILTONIAN PATH problem, known to be NP-hard in the strong sense (Garey and Johnson 1979).
Consider the HAMILTONIAN PATH problem defined for an arbitrary connected graph G' = (V', E') with n − 1 vertices V' = {1, 2, …, n − 1} and edge set E'. It has to be decided whether a path exists which visits every vertex exactly once. To define the auxiliary problem AUX, we introduce a directed graph →G obtained from G' in two stages: -first add to G' a universal vertex 0, i.e., a vertex connected by an edge with every other vertex; denote the resulting graph by G = (V, E); -then replace each edge of graph G by two directed arcs in opposite directions; denote the resulting directed graph by →G = (V, →E). For problem AUX it has to be decided whether an Eulerian tour in →G starting and ending at 0 exists where the last n vertices constitute a Hamiltonian path, ending at 0. As shown in Appendix 2, the two problems HAMILTONIAN PATH and AUX have the same complexity status. An example that illustrates graphs G', G and →G is shown in Fig. 5; a possible Eulerian tour in →G is (0, 1, 0, 2, 0, 3, 0, 4, 2, 4, 3, 2, 1, 2, 3, 4, 0), where the last n = 5 vertices form a Hamiltonian path.
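The two-stage construction of →G can be sketched as follows; the edge set of G' used in the example is hypothetical and serves only to illustrate the arc count:

```python
from itertools import chain

def build_directed_graph(vertices, edges):
    """Build the arc set of the directed graph obtained from G' by
    (1) adding a universal vertex 0 connected to every vertex, and
    (2) replacing each undirected edge by two opposite arcs.
    'vertices' are the vertices 1, ..., n-1 of G'."""
    universal = [(0, v) for v in vertices]        # stage 1: edges {0, v}
    all_edges = list(edges) + universal
    # stage 2: each undirected edge {u, v} becomes arcs (u, v) and (v, u)
    arcs = list(chain.from_iterable(((u, v), (v, u)) for u, v in all_edges))
    return arcs

# Hypothetical G' on vertices {1, 2, 3, 4} with 4 edges
arcs = build_directed_graph([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (2, 4)])
print(len(arcs))  # 2 * (|E'| + (n - 1)) = 2 * (4 + 4) = 16
```

The number of arcs of →G is 2σ in the paper's notation, i.e., every vertex of →G has equal in- and out-degree by construction.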
Given an instance of AUX with n vertices V = {0, 1, …, n − 1} and arc set →E, we introduce an instance of the synchronous open shop problem SO using the constants σ = |→E|/2, ξ = 32nσ^2, L = 2n^9 ξ, and a further constant K. For every vertex v we create d(v) vertex-jobs, where d(v) denotes the out-degree of v in →G, and additionally one more vertex-job Ve_0^0 that corresponds to the origin of the Eulerian tour. In addition to these 2σ + 1 vertex-jobs and 2σ arc-jobs, we create 2n^9 + 1 "forcing" jobs F_0, F_1, …, F_{2n^9} to achieve a special structure of a target schedule. We denote the set of jobs by N. Their processing times are given in Table 1.
We call each operation with a processing time of L a "long operation" and each operation with a processing time of less than L a "short operation." Further, we refer to a job as a long job if at least one of its operations is long and as a short job if both of its operations are short.
The threshold value of the objective function is defined as Θ = Θ_1 + Θ_2, where Θ_2 = 2(n^9 + 1)[(n^9 + 1)L + 4σξ + 2σK + n].
As we show later, in a schedule with ΣC_j ≤ Θ, the total completion time of the short jobs is Θ_1 and the total completion time of the long jobs is Θ_2.
Proof Consider an instance of AUX and the corresponding scheduling instance SO. We prove that the instance of problem AUX has a solution if and only if the instance SO has a schedule with ΣC_j ≤ Θ.
"⇒": Let the solution to AUX be given by an Eulerian tour (v_0, v_1, …, v_{2σ}) starting at v_0 = 0 and ending at v_{2σ} = 0 such that the last n vertices form a Hamiltonian path. The solution to problem SO consists of two parts and is constructed as follows: -In Part 1, machine M_1 processes the 2σ + 1 vertex-jobs and 2σ arc-jobs in the order that corresponds to traversing the tour. Machine M_2 starts with processing the forcing job F_0 in cycle 1 and then proceeds in cycles 2, 3, …, 4σ + 1 with the same sequence of vertex-jobs and arc-jobs as they appear in cycles 1, 2, …, 4σ on machine M_1. Notice that in Part 1 all vertex- and arc-jobs are fully processed on both machines except for job Ve_0^{d(0)}, which is processed only on M_1 in the last cycle 4σ + 1. -In Part 2, machine M_1 processes the forcing jobs F_0, F_1, …, F_{2n^9} in the order of their numbering. Machine M_2 processes in the first cycle of Part 2 (cycle 4σ + 2) the vertex-job Ve_0^{d(0)} which is left over from Part 1. Then, in the remaining cycles 4σ + 3, …, 4σ + 2 + 2n^9, every job F_i (i = 1, …, 2n^9) on M_1 is paired with job F_{i+1} on M_2 if i is odd, and with job F_{i−1} otherwise.
We demonstrate that the constructed schedule satisfies ΣC_j = Θ. Observe that most cycles have equal workload on both machines, except for the n cycles that correspond to the vertex-jobs of the Hamiltonian path; in each such cycle the operation on M_1 is one unit longer than the operation on M_2.
First consider the short jobs. The initial vertex-job Ve_0^0 that corresponds to the origin v_0 = 0 of the tour (v_0, v_1, …, v_{2σ}) completes at time ξ. Each subsequent vertex-job that corresponds to v_i, 1 ≤ i ≤ 2σ − n, where we exclude the last n vertices of the Hamiltonian path, completes at time (2i + 1)ξ + iK. Consider the next n − 1 vertex-jobs v_i with 2σ − n + 1 ≤ i ≤ 2σ − 1 (excluding the very last vertex-job Ve_0^{d(0)}, as it is a long job); every such job v_i completes at time (2i + 1)ξ + iK + (n + i − 2σ).
The remaining short jobs correspond to arc-jobs, and their completion times can be computed in the same manner. Thus, the total completion time of all short jobs sums up to Θ_1. An optimal solution to SO is illustrated in Fig. 6.
Next, consider the completion times of the long jobs. The second operations of jobs Ve_0^{d(0)} and F_0 appear in cycle 4σ + 2; all other long operations are scheduled in cycles 4σ + 3, …, 4σ + 2 + 2n^9. There is a common part of the schedule, with cycles 1, 2, …, 4σ + 1, that contributes to the completion time of every long job; the length of that common part is Δ = 4σξ + 2σK + n. Then the first two long jobs, F_0 and Ve_0^{d(0)}, are both completed at time Δ + L, and for i = 1, …, n^9 the completion time of each pair of jobs F_{2i−1} and F_{2i} is Δ + (2i + 1)L. Thus, the total completion time of the long jobs sums up to
2 Σ_{i=0}^{n^9} (Δ + (2i + 1)L) = 2Δ(n^9 + 1) + 2(n^9 + 1)^2 L (10)
= 2(n^9 + 1)[(4σξ + 2σK + n) + (n^9 + 1)L] = Θ_2 (11)
and therefore the total completion time sums up to Θ = Θ_1 + Θ_2. Here we have used the equality Σ_{i=0}^{n^9} (2i + 1) = (n^9 + 1)^2.
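The summation step behind (10) can be verified numerically; in the sketch below a small stand-in value m replaces n^9, and the values of Δ and L are arbitrary placeholders:

```python
# Check the summation behind (10): the long jobs finish in pairs at times
# Delta + L, Delta + 3L, ..., Delta + (2m+1)L, so their total completion
# time is 2*Delta*(m+1) + 2*(m+1)**2*L.  Here m stands in for n^9.
m, Delta, L = 7, 123, 10
lhs = sum(2 * (Delta + (2 * i + 1) * L) for i in range(m + 1))
rhs = 2 * Delta * (m + 1) + 2 * (m + 1) ** 2 * L
print(lhs == rhs)  # True
```

The key identity used is sum(2i + 1 for i = 0..m) = (m + 1)^2.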
"⇐": Now we prove that if an instance of AUX does not have a solution, then SO does not have a schedule with ΣC_j ≤ Θ. Suppose to the contrary that there exists a schedule with ΣC_j ≤ Θ and let S be an optimal schedule. We use the following properties of S.
1. In each cycle of S, both operations are either short or long.
2. All long operations are scheduled in the last 2n^9 + 1 cycles. This defines the splitting of schedule S into Parts 1 and 2, with cycles 1, 2, …, 4σ + 1 and 4σ + 2, …, 4σ + 2 + 2n^9, respectively.
3. The sum of completion times of all long jobs is at least Θ_2.
4. In S, machine M_1 operates without idle times.
5. In Part 1 of S, job Ve_0^0 is processed in the first two cycles, which are of the form (Ve_0^0, F_0) and (*, Ve_0^0), where * represents a short operation. While the order of these two cycles is immaterial, without loss of generality we assume that (Ve_0^0, F_0) precedes (*, Ve_0^0); otherwise the cycles can be swapped without changing the value of ΣC_j.
6. The two operations of each vertex-job and the two operations of each arc-job are processed in two consecutive cycles, first on M_1 and then on M_2.
7. In Part 1 of S, machine M_1 alternates between processing arc-jobs and vertex-jobs. Moreover, an operation of a vertex-job corresponding to v is followed by an operation of an arc-job corresponding to an arc leaving v. Similarly, an operation of an arc-job for arc (v, w) is followed by an operation of a vertex-job for vertex w. By Property 6, the same is true for machine M_2 in Part 1 and in the first cycle that follows it.
8. The first arc-job that appears in S corresponds to an arc leaving 0. Among the vertex-jobs, the last one is Ve_0^{d(0)}.
Using the above properties we demonstrate that if problem AUX does not have a solution, then the value of ΣC_j in the optimal schedule S exceeds Θ. Due to Property 3 it is sufficient to show that the total completion time Σ_{j=1}^{μ} C_j of all short jobs exceeds Θ_1. Let us assume that {1, 2, …, μ} with μ = 4σ are the short jobs of the instance SO.
This set consists of the 2σ short vertex-jobs (the long vertex-job Ve_0^{d(0)} is excluded) and the 2σ arc-jobs.
Properties 1-2 allow the splitting of S into two parts. Part 2 plays an auxiliary role. Part 1 is closely linked to problem AUX.
The sequence of arc- and vertex-jobs in Part 1 of S defines an Eulerian tour in →G. Indeed, all arc-jobs appear in S, and by Property 7 the order of the arc- and vertex-jobs in S defines an Eulerian trail in →G. Since for every vertex v its in-degree equals its out-degree, an Eulerian trail must be an Eulerian tour. Denote it by (v_0, v_1, …, v_{2σ}). Due to Property 8 and by the assumption of Property 5, the Eulerian tour starts and ends at v_0 = 0.
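Since every vertex of →G has equal in- and out-degree, an Eulerian tour can be extracted with Hierholzer's algorithm; the following sketch also checks the AUX condition on the last n vertices. Note that the algorithm merely returns some Eulerian tour, while deciding whether a tour with the required Hamiltonian tail exists is exactly the hard part of AUX:

```python
from collections import defaultdict

def eulerian_tour(arcs, start=0):
    """Hierholzer's algorithm for a directed graph in which every vertex
    has equal in- and out-degree; returns a closed tour as a vertex list."""
    out = defaultdict(list)
    for u, v in arcs:
        out[u].append(v)
    stack, tour = [start], []
    while stack:
        v = stack[-1]
        if out[v]:
            stack.append(out[v].pop())  # follow an unused arc out of v
        else:
            tour.append(stack.pop())    # v is finished; emit it
    return tour[::-1]                   # tour[0] == tour[-1] == start

def last_n_is_hamiltonian_path(tour, n):
    """AUX asks whether the last n vertices of the tour are pairwise
    distinct, forming a Hamiltonian path that ends at the start vertex."""
    tail = tour[-n:]
    return len(set(tail)) == n and tail[-1] == tour[0]

arcs = [(0, 1), (1, 0), (0, 2), (2, 0)]  # universal-vertex star on {1, 2}
tour = eulerian_tour(arcs)
print(tour[0] == 0 and tour[-1] == 0)  # True
```

The tour uses every arc exactly once, so its vertex list has length |→E| + 1 = 2σ + 1.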
In Fig. 7 we present the structure of Part 1 of schedule S, where Ve_v^* represents one of the vertex-jobs Ve_v^1, Ve_v^2, …, Ve_v^{d(v)}, depending on whether the upper index is d(v) or a smaller number. Part 2 is as in the proof of "⇒".
Notice that all operations of the short jobs appear only in Part 1 of the above schedule, with one short job completing in each cycle 2, 3, …, μ + 1. In each cycle of Part 1, both operations are of the same length, except for the n − 1 cycles containing the vertex-jobs that correspond to the last n − 1 vertices of the Hamiltonian path; denote this set of vertex-jobs by ϑ. The long vertex-job Ve_0^{d(0)} is not included in this set, as its precise location is known by Property 8.
We show that for any Eulerian tour (v_0, v_1, …, v_{2σ}), the value of Σ_{j=1}^{μ} C_j does not depend on the order of the vertices in the tour; it only depends on the positions of the n − 1 jobs from ϑ. In particular, we demonstrate that
Σ_{j=1}^{μ} C_j = Υ + 1 + Σ_ℓ x_ℓ ((μ + 1) − ℓ + 1), (12)
where Υ = Θ_1 − n^2 is a constant, and x_ℓ ∈ {0, 1} indicates whether some job from ϑ is allocated to cycle ℓ or not. The constant term Υ is a lower bound estimate for Σ_{j=1}^{μ} C_j obtained under the assumption that the one additional time unit for each job from ϑ, and also for job Ve_0^{d(0)}, is ignored. If we drop the "+1" from the input data of the instance SO, then both machines have equal workload in every cycle. Job Ve_0^0 contributes ξ to Υ; the job corresponding to v_i, except for job Ve_0^{d(0)}, contributes its completion time (2i + 1)ξ + iK, and the arc-jobs contribute analogously. Consider now the effect of the additional time unit on machine M_1 for each job from ϑ and for job Ve_0^{d(0)}. If some ϑ-job is allocated to a cycle ℓ, then the additional unit of processing increases by one the completion time of every short job finishing in cycles ℓ, ℓ + 1, …, μ + 1, and thus contributes (μ + 1) − ℓ + 1 to Σ_{j=1}^{μ} C_j. This justifies formula (12).
As shown in the above template, each of the n − 1 jobs j ∈ ϑ can be scheduled in any odd-numbered cycle ℓ ∈ {3, 5, …, μ − 1}. Also, by Property 8, an additional time unit appears in cycle μ + 1 due to the allocation of Ve_0^{d(0)} to machine M_1, which affects the completion time of the short job in that cycle. Thus, the minimum value of Σ_{j=1}^{μ} C_j is achieved if all n − 1 jobs from ϑ are allocated to the latest possible odd-numbered positions, i.e., to positions ℓ = (μ − 1) − 2i for i = 0, 1, …, n − 2. Together with the extra "1" related to the allocation of Ve_0^{d(0)}, these allocations contribute exactly n^2, so that Σ_{j=1}^{μ} C_j is equal to Θ_1 if the jobs of ϑ are allocated to the latest feasible positions. Due to (12), any other allocation of the jobs of ϑ, which does not occupy all of the last n − 1 odd-numbered positions, results in a larger value of Σ_{j=1}^{μ} C_j.
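The position arithmetic in this paragraph can be checked numerically: allocating the n − 1 jobs of ϑ to the latest odd-numbered cycles, together with the extra unit for Ve_0^{d(0)}, contributes exactly n^2 on top of Υ. The sizes below are hypothetical:

```python
# Contribution of a theta-job in cycle l is (mu + 1) - l + 1; allocating
# the n-1 jobs to the latest odd cycles l = mu-1-2i, i = 0..n-2, plus the
# extra "+1" for Ve_0^{d(0)}, adds exactly n^2.
n, sigma = 5, 12          # hypothetical sizes, with mu = 4*sigma
mu = 4 * sigma
positions = [(mu - 1) - 2 * i for i in range(n - 1)]
extra = sum((mu + 1) - l + 1 for l in positions) + 1
print(extra == n * n)  # True
```

Indeed, the i-th latest odd position contributes 3 + 2i, and sum(3 + 2i for i = 0..n−2) = n^2 − 1.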
By the main assumption of the part "⇐", AUX does not have a solution where the last n vertices form a Hamiltonian path. Therefore, the last n vertices of any Eulerian tour (v_0, v_1, …, v_{2σ}) have at least two occurrences of the same vertex v, and therefore in the associated schedule, among the last n vertex-jobs there are at least two vertex-jobs Ve_v^i, Ve_v^j associated with v. Thus, it is impossible to have n − 1 jobs from ϑ allocated to the last n − 1 odd-numbered cycles and to achieve the required threshold value Θ_1.
At the end of this section we observe that the proof of Properties 1-8 can be adjusted to handle the case with dummy jobs. Indeed, in an optimal solution of the instance, even if we allow dummy jobs, dummy operations are not allowed to be paired with actual operations of nonzero length (see Property 4). We conclude therefore that the complexity status of the relaxed problem is the same as that for the standard one.

Conclusions
In this paper we studied synchronous open shop scheduling problems. The results are summarized in Table 2. Note that the polynomial time results in lines 2 and 3 do not include presorting of all jobs.
All results from Table 2 also hold for the relaxed versions of the scheduling problems, in which cycles may consist of less than m jobs.
For problem O2|synmv|C max we proved a new structural property, namely the small block property. Using it, we formulated a much easier solution algorithm than previously known. Unfortunately, we were unable to prove an improved structural property for any fixed m > 2. In Table 2 we quote a previously known algorithm, which is based on the corridor property. Our result for two machines gives hope that this general result for fixed m may also be improved and highlights possible approaches for such an improvement.
The NP-completeness results of Sect. 3 imply that if instead of hard deadlines D_j soft due dates d_j are given (which are desirable to meet, but can be violated), then the corresponding problems O|synmv|f with the traditional regular due date-related objectives f, such as the maximum lateness L_max = max_{1≤j≤n} {C_j − d_j}, the number of late jobs Σ_{j=1}^n U_j, or the total tardiness Σ_{j=1}^n T_j, are NP-hard, even if there are only two values of the due dates, d_j ∈ {d', d''}.
The corresponding problems become strongly NP-hard in the case of arbitrary due dates d j .
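The three due-date objectives above have their usual definitions; a minimal sketch computing them from given completion times and due dates (the numeric values are hypothetical):

```python
def due_date_objectives(C, d):
    """Compute the three regular due-date objectives from completion
    times C_j and due dates d_j."""
    L_max = max(Cj - dj for Cj, dj in zip(C, d))            # maximum lateness
    n_late = sum(1 for Cj, dj in zip(C, d) if Cj > dj)      # number of late jobs
    total_T = sum(max(Cj - dj, 0) for Cj, dj in zip(C, d))  # total tardiness
    return L_max, n_late, total_T

print(due_date_objectives([3, 7, 10], [5, 6, 6]))  # (4, 2, 5)
```

All three functions are regular (nondecreasing in the completion times), which is what the NP-hardness transfer from the deadline problems relies on.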
Finally, due to the symmetry known for problems with due dates d_j and those with release dates r_j, we conclude that problem O2|synmv, r_j|C_max is also strongly NP-hard and remains at least ordinary NP-hard if there are only two different values of release dates for the jobs.
In Sect. 4 we show that O2|synmv|ΣC_j and its relaxed version are strongly NP-hard. Thus, due to the reducibility between scheduling problems with different objectives, the open shop problem with synchronization is NP-hard for any traditional scheduling objective function, except for C_max.
Overall, the synchronized version of the open shop problem appears to be no harder than the classical version, with two additional positive results for it: 1) Om|synmv|C_max is polynomially solvable for any fixed m, while Om||C_max is NP-hard for m ≥ 3 (Gonzalez and Sahni 1976); 2) O|synmv, n = n'|C_max is polynomially solvable for any fixed number of jobs n' (due to the symmetry of jobs and machines), while O|n = n'|C_max is NP-hard for n' ≥ 3. Moreover, in a solution to O|synmv|C_max with n ≤ m all jobs have the same completion time, so that an optimal schedule for C_max is also optimal for any other nondecreasing objective f. It follows that we can solve problem O|synmv, n = n'|f, with a fixed number of jobs n', for any such objective f.
Finally, comparing the open shop and flow shop models with synchronization, we also observe that the open shop problem is no harder, with a positive result for Om|synmv|C_max with an arbitrary fixed number of machines m, while its flow shop counterpart Fm|synmv|C_max is NP-hard for m ≥ 3.

Appendix 1: The proof of the small block property (Theorem 1)
The general idea of the proof can be described as follows.
Starting with an arbitrary optimal solution which does not satisfy the small block property, we repeatedly replace each block of size s ≥ 4 by two blocks of cumulative size s, one of which is small. The replacement is performed for the relaxed problem AP_{F=∅}, ignoring the forbidden pairs F, but making sure that the cost of the new solution (possibly infeasible in terms of F) is not larger than that of its predecessor. Additionally, we keep 0-entries on the main diagonal unchanged, so that no new blocks of size 1 are created. As a result a new solution is constructed, feasible for AP_{F=∅}, of no higher cost than the original one, consisting of small blocks only.
If the constructed solution is infeasible for AP_F, then at the next stage infeasible blocks of size 2 and 3 are replaced by feasible blocks, also without increasing the cost, achieving an optimal solution consisting of small blocks. First we prove the possibility of block splitting (Lemma 3), and then explain how infeasible blocks can be converted into feasible ones (Lemmas 4 and 5 for blocks of size 2 and 3, respectively). This leads to the main result (Theorem 1): the existence of an optimal solution consisting of small blocks of type (6).
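The block transformations below repeatedly invoke the Monge property. The following sketch checks the Monge condition for a cost matrix, assuming inequality (3) has the standard form w_{ij} + w_{i'j'} ≤ w_{ij'} + w_{i'j} for i < i', j < j' (the example matrix is hypothetical):

```python
def is_monge(w):
    """Check the Monge condition w[i][j] + w[i'][j'] <= w[i][j'] + w[i'][j]
    for all i < i', j < j'; testing adjacent rows and columns suffices."""
    rows, cols = len(w), len(w[0])
    return all(w[i][j] + w[i + 1][j + 1] <= w[i][j + 1] + w[i + 1][j]
               for i in range(rows - 1) for j in range(cols - 1))

# A product matrix w[i][j] = a[i] * b[j] with a increasing and b
# decreasing satisfies the Monge condition.
a, b = [1, 2, 3], [6, 4, 1]
w = [[ai * bj for bj in b] for ai in a]
print(is_monge(w))  # True
```

This local (adjacent 2 × 2 submatrix) characterization is the reason each swap in the proofs only needs to compare a single quadruple of entries.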
For a block consisting of 1-entries in rows and columns {j_1, j_1 + 1, …, j_1 + s − 1}, renumber those rows and columns as {j_1, j_2, …, j_s} with j_i = j_1 + i − 1, 1 ≤ i ≤ s. The cost w(X_h) associated with a block X_h is the total cost of its 1-entries, so that the total cost of solution X with blocks (5) is the sum of the block costs.
Lemma 3 If an optimal solution to problem AP_F contains a block X_y of size s > 3, defined over rows and columns {j_1, j_2, …, j_s}, then without increasing the cost it can be replaced by two blocks, one block of size 2 or 3 defined over rows and columns {j_1, j_2} or {j_1, j_2, j_3}, and one block defined over the remaining rows and columns. Furthermore, if a diagonal entry x_{j_k j_k} in the initial solution is 0, then in the modified solution x_{j_k j_k} is 0 as well.
Proof Given a solution, we identify the nonzero entries in columns j_1, j_2, and j_3, and denote the corresponding rows by j_a, j_b, j_c; see (13). Furthermore, for the nonzero entries in rows j_1, j_2, and j_3, we denote the corresponding columns by j_t, j_u, j_v; see (14). The proof is presented for the case
x_{j_1 j_1} = x_{j_2 j_2} = x_{j_3 j_3} = 0. (15)
Notice that the case x_{j_1 j_1} = 1 contradicts the assumption that block X_y is large. In the case of x_{j_2 j_2} = 1 we replace block X_y by block X'_y as shown in Fig. 8. Here the 1-entries which are subject to change are enclosed in boxes and * denotes an arbitrary entry, 0 or 1. This transformation involves 4 entries in rows {j_1, j_2} and columns {j_2, j_t}. Notice that the marked 1-entries in the initial block X_y lie on the anti-diagonal of the corresponding 2 × 2 submatrix, while the marked 1-entries in the resulting block X'_y lie on its main diagonal, so that w(X'_y) ≤ w(X_y) by the Monge property.
In the case of x_{j_1 j_1} = x_{j_2 j_2} = 0, x_{j_3 j_3} = 1, at least one of the values a or t is larger than 3 (a = 3 or t = 3 is not possible for x_{j_3 j_3} = 1; a ≤ 2 and t ≤ 2 is not possible since block X_y is large). If t > 3, then the transformation is similar to that in Fig. 8: it involves 4 entries in rows {j_1, j_3} and columns {j_3, j_t}. Alternatively, if a > 3, then the transformation involves 4 entries in rows {j_3, j_a} and columns {j_1, j_3}. In either case, the affected 1-entries lie on the anti-diagonal of the corresponding 2 × 2 submatrix before the transformation and on its main diagonal after the transformation, so that the cost does not increase by the Monge property.
Thus, in the following we assume that condition (15) holds.
Case t = 2, or equivalently x_{j_1 j_2} = 1. This implies b = 1. If a ≠ u, then the transformation from X_y to X'_y shown in Fig. 9 creates a small block of size 2 without increasing the cost. Consider the case a = u and notice that we can assume a = u > 3. Indeed, the cases a = u = 1 and a = u = 2 cannot happen as the corresponding assignment is infeasible (see Fig. 10a, b), and in the case a = u = 3 the block is already small (see Fig. 10c). For a = u > 3 the transformation illustrated in Fig. 9 is not applicable, as it results in a new diagonal entry x_{j_a j_a} = 1. Instead, we perform the two transformations from X_y to X'_y and then to X''_y shown in Figs. 11 and 12, eventually creating a small block of size 3.
Observe that both of the values c and v are different from a = u. Note further that we have c ≠ 1 as t = 2, c ≠ 2 as u > 3, and c ≠ 3 due to (15). Similarly, v ≠ 1 as a > 3, v ≠ 2 as t = 2, and v ≠ 3 due to (15). Thus c > 3 and v > 3. The relationship between a = u and c is immaterial, as the above transformations work in both cases, a = u < c and a = u > c. Similarly, the relationship between a = u and v is immaterial as well. Moreover, the presented transformation works in either case, c = v or c ≠ v.
Case a = 2 is similar to the case t = 2, since the X-matrices for these two cases are transposes of each other. Recall that whenever the swaps are done in the case t = 2, 1-entries lying on an anti-diagonal become 0-entries, while 0-entries lying on a main diagonal become 1-entries, so that the Monge inequality (3) is applicable. In the case a = 2, the initial 1-entries in the transposed matrix also lie on an anti-diagonal, while the new 1-entries are created on a main diagonal.
Case a > 2 and t > 2 with a < b and t < u. Consider the transformation from X_y to X'_y shown in Fig. 13. It uses the Monge property two times, once for the entries in rows {j_2, j_a} and columns {j_1, j_u}, and another time for the entries in rows {j_1, j_b} and columns {j_2, j_t}.

If a ≠ u and b ≠ t, then the resulting matrix X'_y satisfies the conditions of the lemma: a matrix without changed diagonal entries and with a small block of size 2 is obtained.
Consider the case a = u or b = t. By the definition of the indices a, b, t, u in (13)-(14), we have a ≠ b and t ≠ u. The latter two conditions, combined with either a = u or b = t, imply a ≠ t and b ≠ u.
Then, after X_y is transformed into X'_y, we perform one more transformation from X'_y to X''_y, shown in Fig. 14. The resulting matrix X''_y satisfies the conditions of the lemma.

Case a > 2 and t > 2 with a < b and t > u. We start with an additional preprocessing step, shown in Fig. 15, replacing the 1-entries x_{j_1 j_t} = x_{j_2 j_u} = 1 by 0-entries and the 0-entries x_{j_1 j_u} = x_{j_2 j_t} = 0 by 1-entries, without increasing the cost.
In the resulting matrix X*_y, we interchange the notation of the columns j_t and j_u in accordance with definition (14) and proceed as described above for the case t < u. Since a > 2 and therefore u ≠ 1, no new diagonal entry is produced in the preprocessing.
Case a > 2 and t > 2 with a > b. This case corresponds to the transpose of the previous case. We perform a similar preprocessing step to transform this case into one with a < b.
Lemma 4 If a solution X contains an infeasible block of size 2, i.e., x_{j_1 j_2} = x_{j_2 j_1} = 1 with at least one of the entries (j_1, j_2) or (j_2, j_1) belonging to F, then without increasing the cost it can be replaced by two feasible blocks of size 1, given by x_{j_1 j_1} = 1 and x_{j_2 j_2} = 1.
Proof For the above transformation the cost does not increase due to the Monge property. As far as feasibility is concerned, by the definition of the set F, there is exactly one forbidden entry in each row and each column. Thus, if (j_1, j_2) ∈ F, then neither (j_1, j_1) nor (j_2, j_2) is forbidden. Similar arguments hold for (j_2, j_1) ∈ F.
Lemma 5 If a solution X contains an infeasible block of size 3, then that block can be replaced, without increasing the cost, by three feasible blocks of size 1, or by two feasible blocks, one of size 1 and another one of size 2.
Proof Let X_y be an infeasible block consisting of rows and columns j_1, j_2, and j_3. The proof is presented for the case x_{j_1 j_1} = x_{j_3 j_3} = 0; the block is replaced by the blocks shown in Fig. 17, without increasing the cost. Then we prove that at least one of those blocks is feasible.
The transformation of X_y^{(I)} involves a quadruple of 1-entries, so that the cost does not increase due to the Monge property. Transforming X_y^{(I)} into the diagonal solution X_y^{(c)} we achieve a minimum cost assignment for the Monge submatrix given by rows and columns {j_1, j_2, j_3} (see, e.g., Burkard et al. 1996). The same arguments hold for the transformation of X_y^{(II)}. In the following, we deal with feasibility. If the initial infeasible block is of type X_y^{(I)}, then at least one of the pairs (j_1, j_3), (j_2, j_1), or (j_3, j_2) is forbidden. Therefore, at least one of the resulting blocks is feasible. Similar arguments can be used if the initial infeasible block is of type X_y^{(II)}.
Combining Lemmas 3-5 and using them repeatedly, we arrive at the main result of Theorem 1. Below we present the formal proof.
Proof of Theorem 1: Consider any optimal solution. Apply Lemma 3 repeatedly until all blocks are of size 1, 2, or 3. Since all diagonal 1-entries of the new solution are also present in the original solution, those entries are feasible. Therefore, all blocks of size 1 are feasible. By Lemmas 4-5 all blocks of size 2 or 3 are either feasible or can be converted into feasible blocks without increasing the cost. Thus, the resulting solution has blocks of size 1, 2, and 3, it is feasible, and its cost is not larger than the cost of the original optimal solution.
Finally, the only small blocks that are not of type (6) have three 1's on the secondary diagonal, since all other configurations of 0's and 1's combine blocks of type (6). Due to the arguments used in the proof of Lemma 5 with respect to block X_y^{(III)}, such blocks can also be eliminated, which concludes the proof.

Suppose, contrary to Property 1, that an optimal schedule S contains a cycle with a long and a short operation. Let s be the first such cycle and let t be another cycle that contains operations of both types and in which the long operation is on a different machine than in cycle s. In the following, assume that long operations are on machine M_2 in cycle s and on machine M_1 in cycle t; the alternative case is similar. Let J_s and j_s be the jobs in cycle s, with the operation of J_s being long and the operation of j_s being short. Let J_t and j_t be the jobs in cycle t, with the operation of J_t being long and the operation of j_t being short.
Assume first that J_s ≠ J_t and j_s ≠ j_t. Construct a new schedule S' by swapping the operations on M_2, see Fig. 18. Then the length of cycle s in S' decreases by at least L − (ξ + K + 1), while all other cycles keep their lengths unchanged. The completion time of job J_s either decreases, if its second operation is in a cycle after t, or it increases by at most (ξ + K + 1) plus the total length of cycles s + 1, …, t. Thus, C'_{J_s} − C_{J_s} ≤ (t − s − 1)L + (ξ + K + 1). For all other jobs that finish in cycle s or later, the completion times decrease by at least L − (ξ + K + 1). As there are at least t − s + 1 cycles in the tail part of the schedule, starting from cycle s, this affects at least t − s + 1 jobs. Thus, the difference in total completion time satisfies
Σ_{j∈N} (C'_j − C_j) ≤ −2L + |N|(ξ + K + 1), (16)
where |N| is the number of jobs in the instance SO or, equivalently, the number of cycles. To show that we get an improved solution S', we prove that −2L + |N|(ξ + K + 1) < 0 using the estimates σ = |→E|/2, n ≤ σ < n^2 (for n > 2), |N| = 2n^9 + 4σ + 2 ≤ 2n^9 + 4n^2 + 2, and L = 2n^9 ξ = 2n^9 · 32nσ^2 ≥ 64n^12 > 576n^10 (for n > 3).
Thus, swapping the operations leads to a smaller total completion time, contradicting that S has minimal total completion time.
Consider the case J_s = J_t, j_s ≠ j_t, and assume first that there are other long operations scheduled before s. The case where there is no other long operation scheduled prior to cycle s is discussed afterwards. Let r be the last cycle before s in which a long operation is scheduled, r < s. Since s is the first cycle that contains a short as well as a long operation, both operations in cycle r are long. In this case construct S' by exchanging in cycles r, s, and t the three operations on machine M_2, as shown in Fig. 19a. As a result, the completion time of job J_r either decreases, if the last operation of that job is in a cycle after cycle t, or it increases by at most (s − r)(ξ + K + 1) + (t − s)L, where the first term is an estimate of the length of the short cycles r + 1, …, s in S' and the second term estimates the length of cycles s + 1, …, t, which may be short or long. For all other jobs, including job J_s, that finish in cycle s or later, their completion times decrease by at least L − (ξ + K + 1). Note that there are at least t − s + 2 such jobs. Thus, the total completion time decreases, where the corresponding inequality can be proved as a slight modification of (16).
Assume now that there is no long operation scheduled prior to cycle s and let u > s be the first cycle that contains a long operation of some job J_u on the same machine as the long operation in cycle t. Construct S' by swapping the M_1-operations in cycles t and u and the M_2-operations in cycles s and t, see Fig. 19b for an example with u > t. The completion time of job J_s (or job J_u if s < u < t) increases by at most |u − t|L. For the remaining jobs scheduled in the tail part of the schedule, starting from cycle s, their completion times reduce by at least L − (ξ + K + 1). Again, using the evaluation from above, this leads to a decrease in the total completion time.
In all previous cases a better schedule S' is constructed by grouping the two short operations of jobs j_s and j_t in one cycle. Next, consider j_s = j_t and first assume that there are other short operations scheduled in some cycle r < s. Since s is the first cycle that contains a short as well as a long operation, both operations in cycle r are short. Construct a schedule S' as illustrated in Fig. 20a. Then the completion time of job j_s decreases by at least 2(L − (ξ + K + 1)), as both its operations were paired with long operations before and are now paired with short ones. Further, the completion times of all jobs that are completed in cycles r, r + 1, …, t − 1 in S increase by at most (ξ + K + 1), and the completion times of all jobs that are completed in cycle t or later decrease by at least L − (ξ + K + 1). Thus, the total completion time decreases, where the corresponding inequality can be proved similarly to (16).
Consider now the case where r > s, and assume additionally that both operations in cycle r are short (see Fig. 20b for an example with r > t > s). In this case, the completion times of job j_s and of all jobs that finish in cycle s or later, except for job J_s, decrease by at least L − 2(ξ + K + 1). The completion time of job j_s decreases further by the total length Δ of cycles s + 1, …, t − 1, while the completion time of job J_s may increase by at most 2(ξ + K + 1) plus Δ, again leading to a decrease in the total completion time, where the last inequality follows from (16) since 6 < |N| = 2n^9 + 4σ + 2.
The only remaining case with j_s = j_t is where no cycle r exists such that both of its operations are short, i.e., all short operations are paired with long operations. In this case, for cycle s with a short operation on M_1 and a long operation on M_2, we select a cycle t' similar to t, with a short operation on M_2 and a long operation on M_1, t' ≠ t (such a cycle always exists for n > 2). Then the two short operations in cycles s and t' belong to different jobs, and the case reduces to one of the cases with j_s ≠ j_t considered before.
In the case where j_s = j_t and J_s = J_t, we can first swap one of the long operations with a long operation in another cycle, as described for the case J_s = J_t, and afterwards continue by moving the two short operations to the front of the schedule, as described for the case j_s = j_t, again leading to a decrease in the total completion time. Therefore, in an optimal schedule no cycle consists of a long and a short operation.
Property 1 is proved. From now on we refer to cycles as short or long assuming that there are no "mixed cycles" in an optimal schedule.
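The exchange arguments above all compare completion times under the synchronous-cycle rule, where every cycle lasts as long as its longest operation. The following sketch (not from the paper; names and values are purely illustrative) computes completion times for the two-machine case and makes visible why pairing a short with a long operation is wasteful:

```python
# Minimal sketch of the synchronous two-machine model: a schedule is a list
# of cycles, each cycle a pair (p1, p2) of processing times of the operations
# run in parallel on M1 and M2. A cycle's length is the maximum of the two,
# and a job completes at the end of the cycle holding its last operation.

def cycle_ends(cycles):
    """Finishing time of each cycle under the synchronization rule."""
    ends, t = [], 0
    for p1, p2 in cycles:
        t += max(p1, p2)  # the longer operation dictates the cycle length
        ends.append(t)
    return ends

def total_completion_time(cycles, last_cycle_of_job):
    """Sum of completion times, given for each job the index of the cycle
    in which its last operation is scheduled."""
    ends = cycle_ends(cycles)
    return sum(ends[c] for c in last_cycle_of_job)
```

For example, two mixed cycles [(1, 10), (10, 1)] end at times 10 and 20, while regrouping the same operations into [(1, 1), (10, 10)] gives end times 1 and 11; every affected completion time drops, which is the effect exploited in the proof of Property 1.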
Let U = (s + 1, …, t) be the last sequence of short cycles and let s be a long cycle that precedes U. Assume first that |U| ≥ 2. Then, as the cycles in U are the last short cycles, at least |U| − 1 jobs finish within U (U may contain the two short operations of jobs Ve_0^{d(0)} and F_0, whose respective second, long operations may be scheduled in later cycles). Construct a schedule S' by moving the long cycle s after U, see Fig. 21. Then the completion times of at least |U| − 1 jobs decrease by L, while the completion times of the two jobs scheduled in cycle s of S increase by at most |U|(ξ + K + 1), so that

Σ_{j∈N} (C'_j − C_j) ≤ −(|U| − 1)L + 2|U|(ξ + K + 1) = L − |U|(L − 2(ξ + K + 1)).

Using the conditions |U| ≥ 2 and L − 2(ξ + K + 1) > 0 (which can be proved in the same way as (16)), we deduce

Σ_{j∈N} (C'_j − C_j) ≤ L − 2(L − 2(ξ + K + 1)) = −L + 4(ξ + K + 1).
The last expression is negative, again by the same arguments as (16). Thus, we get a contradiction to the optimality of S.
If |U| = 1 and the short cycle in U is different from (22), then at least one job different from Ve_0^{d(0)} and F_0 finishes in U. In this case the previous transformation reduces the completion time of at least one job by L, while the completion times of the two jobs scheduled in cycle s of S increase by at most (ξ + K + 1) each, so that Σ_{j∈N} (C'_j − C_j) ≤ −L + 2(ξ + K + 1) < 0, where the last inequality can be proved in the same way as (16).
Consider now the case with U consisting of only one short cycle of the form (22). Notice that such a cycle is avoided in the optimal solution presented in Fig. 6.
Let U' be the last sequence of short cycles before U and assume first that U' is preceded by some long cycles. Clearly U' does not contain short operations of jobs F_0 and Ve_0^{d(0)}, as these appear in U. Since no other job consists of both a long and a short operation, at least |U'| short jobs finish within U'. Then the arguments presented at the beginning of the proof of Property 2 apply, with the set of cycles U' used instead of U.
Lastly, consider the remaining case where U consists of only one short cycle of the form (22) and there are no other short cycles in the preceding part of the schedule that follow a long cycle. Then the first 4σ cycles are short and the cycle (22) appears among the last 2(n^9 + 1) cycles. Using pairwise interchange arguments it is easy to verify that in an optimal schedule the latter cycles are of the form shown in Fig. 22, where without loss of generality the jobs F_i, except for F_0, are renumbered in the order in which they appear in schedule S on machine M_1. In Fig. 22, τ is the number of jobs from the set {F_1, F_2, …, F_{2n^9}} completed before cycle (22), 1 ≤ τ ≤ 2n^9. Let Δ be the total length of the first 4σ short cycles. If the length of cycle (22) were L, then the total completion time of all long jobs would be

2(n^9 + 1)Δ + 2L Σ_{i=1}^{n^9+1} 2i = 2(n^9 + 1)(Δ + (n^9 + 2)L).
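The hypothetical total above can be checked numerically: if all long cycles had length L and two long jobs finished at the end of every second long cycle, at times Δ + 2iL for i = 1, …, n^9 + 1, the stated closed form follows. A quick sanity check with m standing in for n^9 + 1 and illustrative values of Δ and L:

```python
# Numeric check (illustrative values only): if two long jobs complete at
# time delta + 2*i*L for each i = 1..m, the total completion time of the
# 2*m long jobs equals 2*m*(delta + (m + 1)*L).

def direct_total(m, delta, L):
    # sum the completion times pair by pair
    return sum(2 * (delta + 2 * i * L) for i in range(1, m + 1))

def closed_form(m, delta, L):
    return 2 * m * (delta + (m + 1) * L)

for m in range(1, 8):
    assert direct_total(m, 7, 100) == closed_form(m, 7, 100)
```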
Fig. 22 A special short cycle appearing among the last 2(n^9 + 1) cycles

In reality, the length of cycle (22) is smaller than L by the amount L − (ξ + K + 1), so that the completion times of the jobs that appear after F_{τ−1}, F_τ have to be adjusted. The number of jobs completed in the corresponding tail part of the schedule is 2(n^9 + 1) − τ, so that

Σ_{long jobs} C_j = 2(n^9 + 1)(Δ + (n^9 + 2)L) − (2(n^9 + 1) − τ)(L − ξ − K − 1) = 2(n^9 + 1)((n^9 + 1)L + Δ + ξ + K + 1) + τ(L − ξ − K − 1).

We demonstrate that the objective value for schedule S exceeds the given threshold Θ, using the estimate (17) together with the conditions Δ ≥ 4σξ + 2σK and L − ξ − K − 1 > 0, where the first one is a lower bound on Δ calculated as the sum of processing times on the second machine of the short operations in the first 4σ cycles (which include every operation except the zero-length operation of job F_0), while the second one can be proved in a similar way as (16). We have

Σ_{long jobs} C_j > 2(n^9 + 1)((n^9 + 1)L + (4σξ + 2σK) + ξ + K + 1).
Property 3 The sum of completion times of all long jobs is at least Θ_2.
Due to Property 2, all long operations are scheduled in the last 2n^9 + 1 cycles. Again by interchange arguments, the long cycle containing the long operations of jobs F_0 and Ve_0^{d(0)} should be the first one among all long cycles, while the other long cycles should be grouped in pairs, as shown in Fig. 22. Following the arguments used in the proof of part "⇒" for calculating the sum of completion times of the long jobs, it is easy to verify that calculations (9)–(10) hold in the current case as well. Instead of the precise value of Δ that leads to (11), we can now only substitute the estimate Δ ≥ 4σξ + 2σK + n, which corresponds to the total length of the short operations on machine M_1. Thus we obtain

Σ_{long jobs} C_j = 2Δ(n^9 + 1) + 2(n^9 + 1)^2 L ≥ 2(4σξ + 2σK + n)(n^9 + 1) + 2(n^9 + 1)^2 L = Θ_2.
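The expression 2Δ(n^9 + 1) + 2(n^9 + 1)^2 L can likewise be verified numerically, assuming (as in Fig. 22) that two long jobs finish at the end of the first long cycle, at time Δ + L, and two more at the end of each subsequent pair of long cycles, at times Δ + 3L, Δ + 5L, and so on; the values below are illustrative stand-ins:

```python
# Numeric check (illustrative values only): with m standing in for n^9 + 1,
# completion times delta + L, delta + 3L, ..., delta + (2m - 1)L, two jobs
# each, sum to 2*m*delta + 2*m*m*L, the expression in the bound for Theta_2.

def direct_total(m, delta, L):
    # pair i finishes two jobs at time delta + (2*i - 1)*L
    return sum(2 * (delta + (2 * i - 1) * L) for i in range(1, m + 1))

def closed_form(m, delta, L):
    return 2 * m * delta + 2 * m * m * L

for m in range(1, 8):
    assert direct_total(m, 3, 50) == closed_form(m, 3, 50)
```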
Therefore, there should be no idle time on machine M_1 in order to achieve Σ_{j∈N} C_j ≤ Θ.
Property 5 In Part 1 of S, job Ve_0^0 is processed in the first two cycles, which are of the form (Ve_0^0, F_0) and (∗, Ve_0^0), writing a cycle as the pair of its M_1- and M_2-operations, where ∗ represents a short operation. While the order of these two cycles is immaterial, without loss of generality we assume that (Ve_0^0, F_0) precedes (∗, Ve_0^0).

Since the sum of completion times of all long jobs is at least Θ_2, the remaining 4σ short jobs may only contribute a total completion time of at most Θ_1 to obtain a schedule with total completion time Σ C_j ≤ Θ. All of these jobs have operations on machine M_2 of length at least ξ. Thus it is not possible for i + 1 short jobs to be completed by time iξ, and we can use the lower bound iξ for the completion time C_[i] of the i-th completed short job:

C_[i] ≥ iξ, 1 ≤ i ≤ 4σ. (24)

This implies Σ_{short jobs} C_j ≥ Σ_{i=1}^{4σ} iξ. Notice that for i = 1 there is only one job that can be completed at time ξ, namely Ve_0^0, and this happens only if the first two cycles satisfy the statement of Property 5.
Suppose the statement of Property 5 does not hold for S. Then the above estimate has to be adjusted by ξ, since in that case the completion time of the first completed job is at least 2ξ rather than ξ. It follows that the total completion time of the short jobs exceeds Θ_1: notice that n > 2 and, by construction, d(v) ≥ 2 for each vertex v (G is connected), which leads to

2 Σ_{v∈V} v·d(v) − n^2 ≥ 2n(n − 1) − n^2 = n^2 − 2n > 0 for n > 2.

Thus the total completion time of all jobs is greater than Θ, a contradiction.
Property 6 The two operations of each vertex-job and the two operations of each arc-job are processed in two consecutive cycles, first on M_1 and then on M_2.
We have demonstrated in the proof of Property 5 that C_[1] < 2ξ (with job Ve_0^0 defining C_[1]); otherwise the lower bound Θ_1 is violated. Similar arguments can be used to prove by induction that the lower bound Θ_1 is achievable only if C_[i] < (i + 1)ξ for 1 ≤ i ≤ 4σ. Combining this with (24), we can limit our consideration to schedules satisfying

iξ ≤ C_[i] < (i + 1)ξ for 1 ≤ i ≤ 4σ. (25)
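The counting argument behind the lower bound C_[i] ≥ iξ can be illustrated as follows: each short job needs an operation on M_2 of length at least ξ, and M_2 performs one operation per cycle, so at most i such operations can finish by time iξ. A minimal sketch, with illustrative function names and values:

```python
# Hedged illustration of the counting bound: M2 processes one operation per
# cycle, each cycle lasts at least xi, so at most i M2-operations (and hence
# at most i short jobs) can be finished by time i*xi.

def completed_by(cycle_m2_lengths, T, xi):
    """Number of M2-operations finished by time T, when every cycle lasts
    at least as long as its M2-operation (and at least xi)."""
    t, done = 0, 0
    for p2 in cycle_m2_lengths:
        t += max(p2, xi)  # each cycle contributes at least xi to the horizon
        if t <= T:
            done += 1
    return done

# With xi = 5, no prefix of cycles can complete more than i jobs by time 5*i.
for i in range(1, 5):
    assert completed_by([5, 5, 7, 5], 5 * i, 5) <= i
```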
Property 6 holds for the first job Ve_0^0 due to Property 5. Let j be the short job processed on machine M_1 in cycle 2. Then, as ξ ≤ p_{1j} < 2ξ, condition (25) is satisfied for Ve_0^0.
Suppose j does not satisfy the conditions of Property 6. Then cycle 3 consists of two short jobs that have not been processed in the preceding cycles. The situation is illustrated below.
In that case no job other than Ve_0^0 can finish in the first three cycles, so C_[2], corresponding to some job finishing no earlier than cycle 4, is at least as large as the finishing time of cycle 4. As the processing time of any short operation other than those of Ve_0^0 is at least ξ, cycles 2, 3 and 4 have a combined length of at least 3ξ, as illustrated above. Thus we have C_[2] ≥ 3ξ, in violation of (25).
Therefore j should be processed in cycles 2 and 3, first on M_1 and then on M_2. The proof of Property 6 is completed by induction, using the above arguments.
Property 7 In Part 1 of S, machine M_1 alternates between processing arc-jobs and vertex-jobs. Moreover, an operation of a vertex-job corresponding to v is followed by an operation of an arc-job corresponding to an arc leaving v. Similarly, an operation of an arc-job for arc (v, w) is followed by an operation of a vertex-job for vertex w. By Property 6, the same is true for machine M_2 in Part 1 and in the first cycle that follows it.
Note that, due to the numbers and the distribution of vertex- and arc-jobs, if two vertex-jobs are scheduled consecutively, then there must also be two arc-jobs scheduled consecutively. Hence we can restrict our proof to the latter case. So assume there are cycles s, s + 1, s + 2 in which the operations of two arc-jobs are scheduled consecutively on the first machine in cycles s, s + 1 (and thus their second operations are scheduled in cycles s + 1, s + 2 by Property 6). Then, because the processing times of the arc-jobs are chosen such that they are no larger than ξ + 2n on machine M_1 and no smaller than ξ + 6n on machine M_2, there is idle time on machine M_1 in cycle s + 1, as illustrated below. This contradicts Property 4, as machine M_1 has to operate without idle time. Therefore this situation cannot happen, which proves the first part of Property 7.
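The idle time in cycle s + 1 follows from simple arithmetic: the M_2-operation of the first arc-job (length at least ξ + 6n) runs in parallel with the M_1-operation of the second arc-job (length at most ξ + 2n), so the cycle length is dictated by M_2 and M_1 idles for at least 4n time units. A sketch with illustrative values for ξ and n:

```python
# Illustrative check: in cycle s+1 an M1-operation of length at most xi + 2n
# runs in parallel with an M2-operation of length at least xi + 6n, so the
# cycle length equals the M2-operation's length and M1 is idle for >= 4n.
xi, n = 1000, 10           # illustrative stand-in values, not from the paper
p1 = xi + 2 * n            # longest possible arc-job operation on M1
p2 = xi + 6 * n            # shortest possible arc-job operation on M2
cycle_length = max(p1, p2) # synchronization rule: longest operation wins
idle_on_m1 = cycle_length - p1
assert cycle_length == p2
assert idle_on_m1 >= 4 * n
```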
We now show that an operation of a vertex-job corresponding to v is followed by an operation of an arc-job corresponding to an arc leaving v. Note that, due to Property 6, a vertex- or arc-job processed on machine M_1 in some cycle s is processed on machine M_2 in cycle s + 1. Assume there is an operation of a vertex-job Ve_i that is succeeded by an operation of an arc-job Ar_{jk} with i ≠ j in cycles s, s + 1 on machine M_1 and s + 1, s + 2 on machine M_2.
Among all such pairs Ve_i, Ar_{jk}, select one with i > j (notice that it is not possible that i < j for all such pairs). Then there is idle time on machine M_1 in cycle s + 1, contradicting Property 4.
In a similar fashion it can be shown that an operation of an arc-job corresponding to an arc entering a vertex w is followed by an operation of the vertex-job corresponding to w.
Property 8 The first arc-job that appears in S corresponds to an arc leaving vertex 0. Among the vertex-jobs, the last one is Ve_0^{d(0)}.

Due to Property 5, the first two cycles contain the two operations of job Ve_0^0. Thus, according to Property 7, both operations of this job have to be succeeded by operations of an arc-job corresponding to an arc leaving vertex 0. Further, as shown in the proof of Property 6, the last vertex-job to be completed is Ve_0^{d(0)}.