Two-machine flow shop with dynamic storage space

The publications on two-machine flow shop scheduling problems with job dependent storage requirements, where a job seizes a portion of the storage space for the entire duration of its processing, were motivated by various applications ranging from supply chains of mineral resources to multimedia systems. In contrast to the previous publications, which assumed that the availability of the storage space remains unchanged, this paper is concerned with the more general case in which the availability is a function of time. It strengthens a previously published result concerning the existence of an optimal permutation schedule, shows that variable storage space availability leads to NP-hardness in the strong sense even for unit processing times, and presents a polynomial-time approximation scheme together with several heuristic algorithms. The heuristics are evaluated by means of computational experiments.


Introduction
This paper is concerned with the two-machine flow shop scheduling problem. This problem is widely used for modelling various real-world situations which can be viewed as a problem of scheduling a set of jobs on two machines, the first-stage machine and the second-stage machine [6]. According to the two-machine flow shop model, each job should be processed on the first-stage machine and, after the completion of this first operation, on the second-stage machine. The considered objective function is the total time needed for the completion of all jobs. In the literature on scheduling, the problems with this objective function are normally referred to as the makespan minimisation problems or simply makespan problems.
A number of applications, ranging from multimedia systems [14] and star data gathering networks [1] to supply chains of mineral resources [7], motivated the recent interest in the two-machine flow shop with limited storage space, also referred to as the two-machine flow shop with job dependent buffer requirements. This direction of research focuses on the situations with two distinct characteristics: (1) each job requires some storage space and the storage requirements vary from job to job; and (2) each job seizes the required portion of the storage space (buffer) from the start of its first operation till the completion of its second operation. Such use of the storage space differentiates the flow shop with limited storage from the two-machine flow shop with an additional resource, where the additional resource is used only during the processing on the machines [4,5].
It is known that the two-machine flow shop problem with limited storage and the makespan objective is NP-hard in the strong sense [14]. Moreover, it remains NP-hard in the strong sense even under the restriction that the order in which the jobs should be processed on one of the machines is given [12]. According to [13], the makespan minimisation problem is also NP-hard in the following two cases: when all jobs have the same processing time on the second-stage machine and the buffer requirement of a job is proportional to its processing time on the first-stage machine, and when all jobs have the same processing time on the second-stage machine and the same buffer requirements.
The case where all jobs have the same processing time on one of the machines, or the same processing time on the first-stage machine and the same processing time on the second-stage machine, was considered in a number of publications, including [2,9] and the above-mentioned [13]. Besides the theoretical interest, this case has applications in star data gathering networks, where different workstations are allocated dataset-independent time slots for data transfer, and in unloading and loading operations involving a crane.
Furthermore, as is shown in [8], there are instances of the makespan minimisation problem where, in any optimal schedule, the order in which the jobs are processed on one of the machines differs from the order on the other machine. The existence of such instances significantly complicates the development of optimisation algorithms.
A schedule where the order in which the jobs are processed is the same for both machines is called a permutation schedule. For two particular cases, the existence of an optimal schedule that is also a permutation one is proved in [15]. In both cases, the buffer requirement of a job is proportional to its processing time on the first-stage machine. One of these two cases is the case where the smallest processing time on the second-stage machine is greater than or equal to the largest processing time on the first-stage machine. It is shown that in this case an optimal schedule can be constructed in polynomial time. The publication [14] can be viewed as a source of motivation for studying this particular case as well as its mirror reflection, where the smallest processing time on the first-stage machine is greater than or equal to the largest processing time on the second-stage machine.
There are many situations where the available storage space (buffer capacity) is a function of time. For example, the storage space is often shared by several clients of the same transportation facility, and the computer memory is often used by several processes simultaneously. Despite this, to the best of the authors' knowledge, there are no publications except [2] that study such systems. The paper addresses this gap in the literature on scheduling by considering the two-machine flow shop with variable storage capacity. The paper (a) proves that the introduction of variable storage availability makes the problem NP-hard in the strong sense even for unit processing times (in contrast, the same problem with constant availability of the storage space is solvable in polynomial time [9]) (Sect. 3); (b) strengthens the result in [15] by establishing the existence of an optimal permutation schedule even for arbitrary storage requirements which are not necessarily proportional to the duration of the first operation (Sect. 4); (c) establishes the existence of an optimal permutation schedule for the case of arbitrary storage requirements when the smallest processing time on the first-stage machine is greater than or equal to the largest processing time on the second-stage machine (Sect. 4); (d) shows that in the case of variable resource availability, even for unit processing times, the two-machine flow shop with an additional resource may not have an optimal permutation schedule (Sect. 4); (e) presents a polynomial-time approximation scheme for the case where all jobs have the same processing time on the first-stage machine and the same processing time on the second-stage machine (Sect. 5); (f) presents several heuristics for the case where all jobs have the same processing time on the first-stage machine and the same processing time on the second-stage machine, and compares them by means of computational experiments (Sects. 6, 7).

Problem formulation
The considered scheduling problem can be stated as follows. The jobs, comprising the set N = {1, ..., n}, are to be processed on two machines, machine M1 and machine M2. Each job should be processed first on M1 (the first operation of the job) and then, from some point in time after the completion of the first operation, on M2 (the second operation of the job). Each machine can process at most one job at a time (except the points in time when one job completes processing and another job starts processing on this machine), and each job can be processed on at most one machine at a time (except the situations when the completion time of the first operation coincides with the start of the second operation). If a machine starts processing a job, it continues its processing until the completion of the corresponding operation, i.e. no preemptions are allowed.
All processing times are positive integers. The processing time of job j on machine M_i will be denoted by p_{i,j} and the total processing time will be denoted by T, i.e.

T = Σ_{j∈N} (p_{1,j} + p_{2,j}).
The processing of jobs commences at time t = 0. A schedule σ specifies for each j ∈ N the point in time S_{1,j}(σ) when job j starts its processing on M1 and the point in time C_{2,j}(σ) when job j completes its processing on M2. Since preemptions are not allowed, for each job j, a schedule σ also specifies the completion time on machine M1, C_{1,j}(σ) = S_{1,j}(σ) + p_{1,j}, and the starting time on machine M2, S_{2,j}(σ) = C_{2,j}(σ) − p_{2,j}.

In order to be processed, each job j requires ω_j units of an additional resource, referred to as storage space or a buffer, which it seizes during the time interval [S_{1,j}(σ), C_{2,j}(σ)) and releases at the point in time C_{2,j}(σ). All ω_j are nonnegative integers. For any point in time 0 ≤ t < T, the permissible consumption of the storage space is determined by a piecewise constant function Ω(t). In other words, at any point in time 0 ≤ t < T, any schedule σ must satisfy the condition

Σ_{j ∈ N: S_{1,j}(σ) ≤ t < C_{2,j}(σ)} ω_j ≤ Ω(t).

The function Ω(t) satisfies the condition Ω(t) ≥ max_{j∈N} ω_j, changes its value only at integer points, and is given as the sequence Ω(0), Ω(1), ..., Ω(T − 1). The goal is to find a schedule that minimises the makespan C_max(σ) = max_{j∈N} C_{2,j}(σ).

In what follows, the problem stated above will be referred to as the two-machine flow shop with job dependent storage requirements and dynamic storage availability. In the standard three-field notation [11], this problem can be denoted F2|storage, ω_j, Ω(t)|C_max. Since all processing times are integer and all points of discontinuity of Ω(t) are also integer, for any instance of F2|storage, ω_j, Ω(t)|C_max, there exists an optimal schedule σ such that all starting times S_{1,j}(σ) and S_{2,j}(σ) are integer. Therefore, in what follows, without loss of generality, it is assumed that the starting times of all operations on both machines are integer.
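The definitions above can be turned into a small feasibility checker. The sketch below is one straightforward reading of the model; the function names and the list-based encoding of schedules are ours, not the paper's.

```python
def is_feasible(s1, s2, p1, p2, omega, Omega):
    """Check the constraints of F2|storage, omega_j, Omega(t)|C_max for a
    schedule given by integer start times s1[j] (on M1) and s2[j] (on M2)."""
    n = len(omega)
    # each second operation starts no earlier than the first one completes
    if any(s2[j] < s1[j] + p1[j] for j in range(n)):
        return False
    # machine exclusivity: operations on the same machine must not overlap
    for s, p in ((s1, p1), (s2, p2)):
        ops = sorted((s[j], s[j] + p[j]) for j in range(n))
        if any(b0 < a1 for (a0, a1), (b0, b1) in zip(ops, ops[1:])):
            return False
    # storage: job j holds omega[j] units during [s1[j], s2[j] + p2[j])
    horizon = max(s2[j] + p2[j] for j in range(n))
    for t in range(horizon):
        used = sum(omega[j] for j in range(n) if s1[j] <= t < s2[j] + p2[j])
        if used > Omega[t]:
            return False
    return True

def makespan(s2, p2):
    # C_max = max_j C_{2,j}
    return max(s + p for s, p in zip(s2, p2))

# a two-job example: both jobs can run concurrently only if Omega(t) >= 2
p1, p2, omega = [1, 1], [1, 1], [1, 1]
assert is_feasible([0, 1], [1, 2], p1, p2, omega, [2] * 6)
assert not is_feasible([0, 1], [1, 2], p1, p2, omega, [1] * 6)
```

Note that the storage check runs over unit time slots, which matches the integrality assumption at the end of the section.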

Computational complexity
Denote by F2|storage, ω_j, Ω(t), p_{i,j} = 1|C_max the restricted version of the F2|storage, ω_j, Ω(t)|C_max problem in which all processing times are equal to one unit of time. In contrast to the case of constant storage availability, which is polynomially solvable when all processing times are equal to one unit of time [9], the F2|storage, ω_j, Ω(t), p_{i,j} = 1|C_max problem is NP-hard in the strong sense. This will be proved below by a reduction from the Numerical Matching with Target Sums (NMTS) decision problem, which is NP-complete in the strong sense [10]. The NMTS decision problem is stated as follows.

INPUT: three sets {x_1, ..., x_r}, {y_1, ..., y_r} and {z_1, ..., z_r} of positive integers, where

Σ_{i=1}^{r} (x_i + y_i) = Σ_{i=1}^{r} z_i.    (2)

QUESTION: do there exist permutations (i_1, ..., i_r) and (j_1, ..., j_r) of the indices 1, ..., r such that z_k = x_{i_k} + y_{j_k} for all k ∈ {1, ..., r}?

Consider the instance of the decision version of the makespan minimisation problem F2|storage, ω_j, Ω(t), p_{i,j} = 1|C_max with 2r unit-time jobs, whose storage requirements and storage availability function Ω(t) are described below. Observe that this instance satisfies the condition (1), and suppose that there exists a schedule σ with a makespan that does not exceed 3r, i.e. assume that the answer to the scheduling problem is YES.
The function Ω(t) induces a partition of the time interval [0, 3r] into 2r unit intervals with the buffer capacity 2Z + x, referred to as 1-intervals because at most one job can use the storage space at any point in such an interval, and r unit intervals with the buffer capacity 3Z + x + z_k, where k ∈ {1, ..., r}, referred to as 2-intervals. Taking into account that the total processing time is T = 4r, both machines are busy in σ during each 2-interval and one machine is busy in σ during each 1-interval.
The same job cannot be processed in any two 2-intervals, because in this case it would consume the storage space during each 1-interval that separates these 2-intervals (there are at least two such 1-intervals), leaving not enough storage space for the jobs that must be processed in the 1-intervals. Since the number of jobs is 2r and the number of operations processed in the 2-intervals is 2r, one operation of each job is processed in a 1-interval whereas its other operation is processed in a 2-interval. Taking into account that no two jobs from the set {1, ..., r} can be processed concurrently due to insufficient storage space, in every 2-interval [3k − 2, 3k − 1), where k ∈ {1, ..., r}, an operation of some job from the set {1, ..., r}, denoted i_k, is processed concurrently with an operation of some job from the set {r + 1, ..., 2r}, denoted g_k. For each such pair of jobs, ω_{i_k} + ω_{g_k} ≤ 3Z + x + z_k. Let j_k = g_k − r. Then, by virtue of ω_{i_k} = 2Z + x_{i_k} and ω_{g_k} = Z + y_{j_k} + x, it follows that x_{i_k} + y_{j_k} ≤ z_k, which by (2) implies z_k = x_{i_k} + y_{j_k} for all k ∈ {1, ..., r}.

Now suppose that there exist permutations (i_1, ..., i_r) and (j_1, ..., j_r) of the indices 1, ..., r such that z_k = x_{i_k} + y_{j_k} for all k ∈ {1, ..., r}, i.e. assume that the answer to the considered instance of NMTS is YES. Then the schedule where, for each job g, S_{1,g} + 1 = S_{2,g} and where, for each 1 ≤ k ≤ r, S_{1,i_k} = 3(k − 1) and S_{1,j_k+r} = 3k − 2, has the required makespan of 3r.
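The YES-direction of the reduction can be illustrated numerically. The constants Z and x are not fully specified in this excerpt, so the sketch below picks Z = 10 and x = max_i x_i for a tiny NMTS instance with r = 2; under these assumptions the schedule described above fills the storage capacities exactly.

```python
# A YES-instance of NMTS with r = 2: z_k = x_k + y_k under identity permutations.
x, y, z = [1, 2], [1, 2], [2, 4]
r = 2
Z, xbar = 10, max(x)   # Z and xbar ("x" in the text) are our own choices

# storage requirements of jobs 1..r and r+1..2r of the reduction
omega = [2 * Z + xi for xi in x] + [Z + xbar + yi for yi in y]

# Omega(t): capacity 2Z + xbar on 1-intervals, 3Z + xbar + z_k on [3k-2, 3k-1)
Omega = [2 * Z + xbar] * (3 * r)
for k in range(1, r + 1):
    Omega[3 * k - 2] = 3 * Z + xbar + z[k - 1]

# the YES-direction schedule: S1(i_k) = 3(k-1), S1(j_k + r) = 3k - 2,
# every job runs no-wait (S2 = S1 + 1, unit processing times)
s1 = [0, 3, 1, 4]
usage = [0] * (3 * r)
for j, s in enumerate(s1):
    for t in range(s, s + 2):   # the buffer is held during [S1, S1 + 2)
        usage[t] += omega[j]

assert all(u <= c for u, c in zip(usage, Omega))   # storage respected
assert max(s + 2 for s in s1) == 3 * r             # makespan 3r
```

Every 2-interval is filled to capacity, which mirrors the equality argument in the proof: with the total requirement matching the total availability, any slack would make a makespan of 3r unattainable.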

Permutation schedules
A schedule for the F2|storage, ω_j, Ω(t)|C_max problem is a no-wait schedule if, for every j ∈ N, S_{2,j} = C_{1,j}.

Lemma 1 Let j_1, ..., j_n be the sequence in which the jobs are processed on M1 in some schedule σ. If, for all 1 ≤ k < n, the processing times satisfy the condition p_{2,j_k} ≤ p_{1,j_{k+1}}, then there exists a no-wait schedule σ′ such that C_max(σ) ≥ C_max(σ′).
Proof Consider the no-wait schedule σ′ where S_{1,j}(σ′) = S_{1,j}(σ) and S_{2,j}(σ′) = C_{1,j}(σ) for all j ∈ N. Since, for any job j, S_{2,j}(σ) ≥ C_{1,j}(σ) and therefore S_{2,j}(σ′) ≤ S_{2,j}(σ), the schedule σ′ satisfies the inequality C_max(σ) ≥ C_max(σ′). It also satisfies the storage restrictions because, for each job j, the choice of S_{1,j}(σ′) and S_{2,j}(σ′) implies [S_{1,j}(σ′), C_{2,j}(σ′)) ⊆ [S_{1,j}(σ), C_{2,j}(σ)). Furthermore, for each 1 ≤ k < n,

C_{2,j_k}(σ′) = C_{1,j_k}(σ) + p_{2,j_k} ≤ C_{1,j_k}(σ) + p_{1,j_{k+1}} ≤ S_{1,j_{k+1}}(σ) + p_{1,j_{k+1}} = C_{1,j_{k+1}}(σ) = S_{2,j_{k+1}}(σ′),

which shows that the schedule σ′ satisfies the restrictions imposed by the processing times.
The proof of the lemma below is similar to the proof of Lemma 1. The main difference is the choice of the schedule σ′, which is now defined by S_{2,j}(σ′) = S_{2,j}(σ) and S_{1,j}(σ′) = S_{2,j}(σ) − p_{1,j} for all j ∈ N.

Lemma 2 Let j_1, ..., j_n be the sequence in which the jobs are processed on M2 in some schedule σ. If, for all 1 ≤ k < n, the processing times satisfy the condition p_{1,j_{k+1}} ≤ p_{2,j_k}, then there exists a no-wait schedule σ′ such that C_max(σ) ≥ C_max(σ′).

If a schedule is a no-wait schedule, it obviously is a permutation schedule. This observation leads to the following theorem.

Theorem 2
For any instance of the F2|storage, ω_j, Ω(t)|C_max problem such that either min_{j∈N} p_{1,j} ≥ max_{j∈N} p_{2,j} or min_{j∈N} p_{2,j} ≥ max_{j∈N} p_{1,j}, there exists an optimal schedule which is a permutation one.
Consider the restricted version of the F2|storage, ω_j, Ω(t)|C_max problem where all p_{1,j} are equal and all p_{2,j} are also equal. Let p_1 be the common value of all p_{1,j} and let p_2 be the common value of all p_{2,j}. Of course, p_1 and p_2 vary from instance to instance. This restricted makespan minimisation problem will be denoted by F2|storage, ω_j, Ω(t), p_i|C_max. The instance of the F2|storage, ω_j, Ω(t), p_i|C_max problem with the set of jobs N, function Ω(t), storage requirements ω_j, and processing times p_1 and p_2 is conjugate to the instance with the same set of jobs N, the same function Ω(t), the same storage requirements ω_j, and processing times p′_1 and p′_2 if p′_1 = p_2 and p′_2 = p_1.

Lemma 3 Any two conjugate instances of the F2|storage, ω_j, Ω(t), p_i|C_max problem have the same optimal makespan.
Proof Consider a pair of conjugate instances I and I′ with the set of jobs N = {1, ..., n}, function Ω(t), and storage requirements ω_j. Let the first of these instances have processing times p_1 and p_2 and let σ be an optimal schedule for this instance.

In the light of Lemmas 1 and 2, without loss of generality, σ is a no-wait schedule. Then, σ defines n time intervals [S_{1,j}(σ), C_{2,j}(σ)], one for each job j in N. For the conjugate instance I′, consider the no-wait schedule σ′ where, for each 1 ≤ j ≤ n, S_{1,j}(σ′) = S_{1,j}(σ). Since p′_1 + p′_2 = p_1 + p_2, it holds that C_{2,j}(σ′) = C_{2,j}(σ) for each job j, i.e. the operations of the same job do not overlap in σ′ and occupy the same time interval as in σ. Furthermore, because σ is a no-wait schedule, for each job j there exists at most one job g such that the second operation of j is processed concurrently with the first operation of g. For any such pair of jobs j and g, these two jobs are processed concurrently in both schedules, σ and σ′, in the same time interval, which is the intersection of the time intervals [S_{1,j}(σ), C_{2,j}(σ)] and [S_{1,g}(σ), C_{2,g}(σ)]. Thus, σ′ satisfies the buffer restrictions. Finally, since σ is a feasible no-wait schedule, the starting times of any two jobs that are consecutive on M1 differ by at least max{p_1, p_2}. Taking this into account, the operations of jobs j and g do not overlap on any of the two machines, and in consequence, σ′ is a feasible schedule. Denote by C*_max(I) the optimal makespan for instance I. Since all job completion times are the same in σ and σ′, it holds that C_max(σ′) = C_max(σ), and hence, C*_max(I′) ≤ C*_max(I). Because the conjugation relation is symmetric, it is also true that C*_max(I) ≤ C*_max(I′), and hence, C*_max(I) = C*_max(I′).
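Lemma 3 can be checked empirically on tiny instances by brute force. The sketch below enumerates all integer start times for the p_i model (the encoding and the example instance are ours) and confirms that a pair of conjugate instances has the same optimal makespan.

```python
from itertools import product

def optimal_makespan(p1, p2, omega, Omega):
    """Brute force over integer start times (tiny instances only): job j
    holds omega[j] buffer units during [s1[j], s2[j] + p2), and operations
    on the same machine must not overlap (all first operations take p1,
    all second operations take p2, as in the p_i problem)."""
    n = len(omega)
    T = n * (p1 + p2)                  # a sequential schedule has this length
    best = None
    for s1 in product(range(T), repeat=n):
        if any(abs(s1[i] - s1[j]) < p1 for i in range(n)
               for j in range(i + 1, n)):
            continue                   # overlap on M1
        for s2 in product(*[range(a + p1, T - p2 + 1) for a in s1]):
            if any(abs(s2[i] - s2[j]) < p2 for i in range(n)
                   for j in range(i + 1, n)):
                continue               # overlap on M2
            if any(sum(omega[j] for j in range(n)
                       if s1[j] <= t < s2[j] + p2) > Omega[t]
                   for t in range(T)):
                continue               # storage exceeded
            c = max(s + p2 for s in s2)
            best = c if best is None else min(best, c)
    return best

# conjugate instances (p1, p2) = (2, 1) and (1, 2): same optimal makespan
omega, Omega = [2, 1], [3, 3, 2, 3, 3, 3]
assert optimal_makespan(2, 1, omega, Omega) == optimal_makespan(1, 2, omega, Omega)
```

The search is exponential in n and is only meant as a correctness probe, not as a solution method.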
Problem F2|storage, ω_j, Ω(t)|C_max is connected to the two-machine flow shop problem with an additional resource, where the resource is used only during the processing on the machines [3,4,16,17], because the storage space can also be viewed as an additional resource. However, the condition that the storage is used in the whole interval from the start of the first operation of a job till the completion of its second operation differentiates the two problems, even in the case of unit processing times. By Theorem 2, for any instance of problem F2|storage, ω_j, Ω(t), p_{i,j} = 1|C_max, there exists an optimal permutation schedule. Although in the case when the available amount of the resource is constant, any instance of the two-machine flow shop problem with an additional resource and unit operation execution times also has an optimal permutation schedule [16], it will be shown below that this is not true when the resource availability is a function of time.

Theorem 3 There exist instances of the two-machine flow shop problem with an additional resource, unit processing times and time-dependent resource availability for which no optimal schedule is a permutation schedule.

Proof Let n = 6, and let the resource requirements of the jobs be (ω_j)_{j=1}^{6} = (6, 6, 6, 5, 5, 2). Let the resource availability be described by a function Ω(t) given by the sequence (6, 12, 6, 7, 7, 11, 6, 6, 6, 6, 6, 6). An optimal schedule of length 8 is presented in Fig. 1. It will be shown that there exists no permutation schedule of length at most 8. Note that the total resource availability in the interval [0, 8) is Σ_{t=0}^{7} Ω(t) = 61, while the total requirement of all operations is 2 Σ_{j=1}^{6} ω_j = 60. Thus, in order to construct a schedule of length 8, at most one unit of the resource can be wasted during at most one time unit.
Suppose an optimal permutation schedule σ exists. Without loss of generality, assume that if ω_i = ω_j and i < j, then job i is scheduled before job j. Let j_1 be the first job in σ. If ω_{j_1} < 6, then at least one unit of the resource is wasted in the interval [0, 1), and at least one unit is lost in the interval [1, 2). Thus, ω_{j_1} = 6, j_1 = 1, and this job completes on M2 at time C_{2,1} = 2, as otherwise at least 6 units of the resource would be wasted in the interval [1, 2). Similarly, let j_2 be the second job in σ. If ω_{j_2} < 6, then at least one unit of the resource is wasted in the interval [1, 2), and another one in the interval [2, 3) or [3, 4). Hence, ω_{j_2} = 6, j_2 = 2 and S_{1,2} = 1. Consider the following two cases: either (1) C_{2,2} = 3, or (2) C_{2,2} > 3. In case (1), machine M1 is idle in the time interval [2, 3), as there is not enough available resource for starting another job. Let j_3 be the third job in σ. Note that j_3 has to start at time 3 on M1, as 7 units of the resource would be wasted otherwise. If ω_{j_3} < 6, then at least 2 units of the resource are wasted in the interval [3, 4). Moreover, if ω_{j_3} = 6, then one unit of the resource is lost in the interval [3, 4), and another one in the interval [4, 5).
In case (2), machine M2 is idle in the interval [2, 3), and job 2 has to be completed at time C_{2,2} = 4, because at least 2 units of the resource would be wasted in the interval [2, 4) otherwise. Hence, one unit of the resource is wasted in the interval [3, 4), and in consequence, job 3 (whose resource requirement is 6) has to be executed on M1 in the interval [2, 3). Then, job 3 is scheduled on M2 in the interval [4, 5), which leads to losing one more unit of the resource.
Thus, in both cases at least two units of the resource are wasted, and hence, an optimal permutation schedule does not exist.
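The case analysis above can be verified mechanically. In the sketch below, the schedule of length 8 is our own layout (Fig. 1 is not reproduced here), and the exhaustive search confirms that no permutation schedule completes by time 8; the resource is held only while an operation runs, as in the additional-resource model of this theorem.

```python
from itertools import permutations, combinations

OMEGA = [6, 6, 6, 5, 5, 2]           # resource requirements of jobs 1..6
CAP = [6, 12, 6, 7, 7, 11, 6, 6]     # Omega(t) for t = 0..7

def feasible(s1, s2):
    """Unit-time operations; the resource is used only while an operation
    runs (M1 during [s1, s1+1), M2 during [s2, s2+1))."""
    n = len(OMEGA)
    if len(set(s1)) < n or len(set(s2)) < n:       # one op per machine per slot
        return False
    if any(s2[j] < s1[j] + 1 for j in range(n)):   # precedence of operations
        return False
    if max(s2) > 7:                                # makespan at most 8
        return False
    for t in range(8):                             # resource limit per slot
        used = sum(OMEGA[j] for j in range(n) if s1[j] == t or s2[j] == t)
        if used > CAP[t]:
            return False
    return True

# a feasible schedule of length 8 (our own layout, not necessarily Fig. 1)
S1 = [0, 1, 5, 2, 4, 3]
S2 = [1, 6, 7, 3, 5, 4]
assert feasible(S1, S2)
m1_order = sorted(range(6), key=lambda j: S1[j])
m2_order = sorted(range(6), key=lambda j: S2[j])
assert m1_order != m2_order          # it is not a permutation schedule

def permutation_schedule_exists():
    """Exhaustively try every job order and every choice of start slots."""
    for perm in permutations(range(6)):
        for a in combinations(range(7), 6):         # M1 slots, increasing
            for b in combinations(range(1, 8), 6):  # M2 slots, increasing
                s1, s2 = [0] * 6, [0] * 6
                for pos, j in enumerate(perm):
                    s1[j], s2[j] = a[pos], b[pos]
                if feasible(s1, s2):
                    return True
    return False

assert not permutation_schedule_exists()
```

The enumeration is small (720 orders and at most 49 slot choices each) because a makespan of 8 forces all 12 unit operations into 8 slots.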

Polynomial-time approximation scheme
This section presents a polynomial-time approximation scheme for problem F2|storage, ω_j, Ω(t), p_i|C_max. By Lemmas 1 and 2, it is enough to consider no-wait schedules. For clarity of presentation, it is assumed that p_1 ≥ p_2. However, it follows from Lemma 3 that the proposed approximation scheme can also be used for solving the problem when p_1 < p_2.
For any ε > 0, let k = ⌊nε/2⌋ and q = ⌈2/ε⌉. It will be shown that a (1 + ε)-approximation of the optimal solution can be found in O(p_2 q^2 n^q) time. Recall that an instance of problem F2|storage, ω_j, Ω(t), p_i|C_max contains the values n, p_1 and p_2, the buffer requirements ω_j of the n jobs, and a sequence Ω(0), ..., Ω(T − 1), where T = n(p_1 + p_2), representing the storage availability function. Thus, the instance size is O(n(p_1 + p_2)), and hence, the running time of the proposed algorithm is indeed polynomial for any fixed ε.

Theorem 4 For any instance of F2|storage, ω_j, Ω(t), p_i|C_max and any given small ε > 0, a schedule σ such that

C_max(σ) ≤ (1 + ε) C_max(σ*),    (3)

where σ* is an optimal schedule, can be constructed in O(p_2 q^2 n^q) time.
Proof Assume that there are sufficiently many jobs and number them in nondecreasing order of their storage requirements, i.e. ω_1 ≤ ··· ≤ ω_n. For each job j, replace its storage requirement ω_j by a new one, denoted α_j, as follows. For each 1 ≤ e ≤ q − 1 and each (e − 1)k < j ≤ ke, let α_j = ω_{ke}, and for each k(q − 1) < j ≤ n, let α_j = ω_n. Observe that any schedule for the problem with the new storage requirements is feasible for the problem with the original storage requirements, because α_j ≥ ω_j for all j. An optimal schedule for the new storage requirements can be constructed by dynamic programming as follows. For 1 ≤ e ≤ q − 1, let π(e) = ke, and let π(q) = n. Consider the (q + 1)-tuples (n_1, ..., n_q, i) such that (a) 0 ≤ n_e ≤ k for all 1 ≤ e ≤ q − 1, and 0 ≤ n_q ≤ n − k(q − 1); and (b) 1 ≤ i ≤ q and n_i > 0. Each such (q + 1)-tuple represents a set of n_1 + ··· + n_q jobs such that, for each 1 ≤ e ≤ q, this set contains n_e jobs j whose α_j is ω_{π(e)}. For each (q + 1)-tuple (n_1, ..., n_q, i), let F(n_1, ..., n_q, i) be the minimal time needed for the completion of all jobs corresponding to (n_1, ..., n_q, i), under the condition that the job with the largest completion time among these jobs is a job with the new storage requirement ω_{π(i)}. Consequently, the optimal makespan is C = min_{1≤i≤q} F(k, ..., k, n − (q − 1)k, i).
For any positive integer t, any 1 ≤ i ≤ q and any 1 ≤ e ≤ q, let ω_{i,e} = ω_{π(i)} + ω_{π(e)} be the total storage requirement of a job with the new requirement ω_{π(i)} and a job with the new requirement ω_{π(e)} that are processed concurrently. The dynamic programming algorithm above constructs an optimal schedule σ in O(p_2 q^2 n^q) time, and it only remains to show that (3) holds. Let σ* be an optimal schedule for the problem with the original storage requirements. This schedule can be converted into a schedule η for the problem with the new storage requirements as follows. For each job j such that 1 ≤ j ≤ n − k, let C_{2,j}(η) = C_{2,j+k}(σ*) (this is feasible because α_j ≤ ω_{j+k}), and schedule each job j such that n − k < j ≤ n after the completion of all other jobs. Since p_1 ≥ p_2 and p_1 n < C_max(σ*), it holds that

C_max(η) ≤ C_max(σ*) + k(p_1 + p_2) ≤ C_max(σ*) + ε n p_1 < (1 + ε) C_max(σ*),

which completes the proof.
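The rounding step in the proof above can be sketched directly; the function below is our reading of the grouping, with k = ⌊nε/2⌋ and q = ⌈2/ε⌉ as assumed in this section.

```python
from math import ceil, floor

def rounded_requirements(omega, eps):
    """The grouping step of the PTAS (our reading): sort the requirements,
    then round each one up to the largest value in its group of k
    consecutive jobs; the result takes at most q distinct values."""
    n = len(omega)
    k = max(1, floor(n * eps / 2))
    q = ceil(2 / eps)
    w = sorted(omega)
    alpha = []
    for j in range(1, n + 1):          # jobs in sorted order, 1-indexed
        e = (j + k - 1) // k           # group number of job j
        if j > k * (q - 1):
            alpha.append(w[n - 1])     # last group: round up to omega_n
        else:
            alpha.append(w[e * k - 1]) # round up to omega_{ke}
    return w, alpha

w, alpha = rounded_requirements(list(range(1, 11)), 0.5)
assert all(a >= b for a, b in zip(alpha, w))   # never below the original
assert len(set(alpha)) <= ceil(2 / 0.5)        # at most q distinct values
```

Since the rounded requirements dominate the originals, any schedule built for them is feasible for the original instance, which is exactly the observation used in the proof.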

Heuristics
In this section, heuristic algorithms are proposed for problem F2|storage, ω_j, Ω(t), p_i|C_max. Similarly as in the previous section, it is assumed for clarity that p_1 ≥ p_2. The main difficulty in designing heuristics for the considered problem consists in the frequent changes of the available storage size. Indeed, suppose a partial schedule σ for the time interval [t, t + δ) is built, and in order to schedule the remaining jobs or to improve some other schedule part, σ has to be moved to start at time t′ ≠ t. The buffer availability pattern in the interval [t′, t′ + δ) may be completely different from that in the original interval [t, t + δ), and hence, keeping the job order from σ may require additional idle times due to insufficient storage space. In such a case, it is probable that the schedule modification will be counterproductive. Therefore, simple heuristics building the schedules from left to right are proposed first, and the remaining algorithms consist in improving the initial schedules by small modifications.
Algorithm LF implements the Largest Fit rule. Every time t machine M1 becomes idle, the largest available job which can be executed without violating the buffer limit is started on it. If no such job can be found, machine M1 remains idle for one unit of time, and the search for a feasible job is repeated at time t + 1.
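A minimal sketch of the LF rule, under the assumption (justified by Lemmas 1 and 2) that jobs are run no-wait, so a job started at time t holds its buffer space during [t, t + p_1 + p_2); the encoding is ours.

```python
def largest_fit(omega, p1, p2, Omega):
    """One possible reading of the LF rule: whenever M1 is free at an
    integer time t, start the feasible job with the largest storage
    requirement; Omega must cover the produced schedule horizon."""
    unscheduled = set(range(len(omega)))
    start = {}                     # job -> start time on M1
    m1_free, t = 0, 0

    def fits(j, t):
        # job j would hold omega[j] units during [t, t + p1 + p2)
        for tau in range(t, t + p1 + p2):
            used = sum(omega[i] for i, s in start.items()
                       if s <= tau < s + p1 + p2)
            if used + omega[j] > Omega[tau]:
                return False
        return True

    while unscheduled:
        t = max(t, m1_free)
        feas = [j for j in unscheduled if fits(j, t)]
        if feas:
            j = max(feas, key=lambda i: omega[i])   # largest fit
            start[j] = t
            unscheduled.remove(j)
            m1_free = t + p1
        else:
            t += 1                 # M1 idles one unit, then retry
    return start, max(start.values()) + p1 + p2

# tight buffer forces sequencing; an ample buffer lets jobs overlap
starts, cmax = largest_fit([3, 2, 2], 1, 1, [4] * 20)
assert cmax == 5
starts, cmax = largest_fit([3, 2, 2], 1, 1, [7] * 20)
assert cmax == 4
```

The quadratic rescan of active jobs keeps the sketch short; a production version would maintain a running buffer profile instead.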
Using algorithm LF may be disadvantageous if it is often the case that the buffer is large enough to hold two small jobs, but no job fits together with the largest available job. This motivates designing heuristic LFAhead, which also uses the largest fit rule, but additionally looks one job ahead in an attempt to avoid idle time on the first machine. Precisely, the job to be scheduled on M1 at time t is the largest feasible job such that another available job can be started immediately after it, at time t + p_1. If no such job can be found, then the largest job that fits in the storage space is chosen. If no job can be started at time t, the algorithm moves to time t + 1.
Algorithm Rnd constructs a random job sequence. The jobs are started without unnecessary delay, as soon as the previous job completes on the first machine and a sufficient amount of storage space is available. This algorithm is used mainly to verify if the remaining heuristics perform well in comparison to what can be achieved without effort.
The next group of heuristics are local search algorithms LFLocal, LFAheadLocal and RndLocal. Each of them starts with an initial schedule delivered by the corresponding heuristic described above (LF, LFAhead or Rnd). Then, for each pair of jobs it is checked whether swapping their positions leads to improving the schedule. Let S = [S_[1], S_[2], ..., S_[n]] and S′ = [S′_[1], S′_[2], ..., S′_[n]] be the increasing sequences of job starting times on machine M1 in the current schedule σ and in a new schedule σ′, respectively. Schedule σ′ is considered better than σ if its makespan is smaller than that of σ, or if the two makespans are equal and the sequence S′ is lexicographically smaller than S. The swap that results in the best schedule (if any) is executed, and the search is continued until no further improvement is possible.
Furthermore, variable neighborhood search (VNS) algorithms are proposed. Variable neighborhood search is a metaheuristic consisting in systematically changing the neighborhoods used during local search. In the proposed VNS, the following three neighborhoods are used. For a given schedule σ, neighborhood N_1(σ) contains all schedules obtained from σ by swapping a single pair of jobs. Neighborhood N_2(σ) consists of all schedules obtained from σ by moving a job from position i to some other position j, for any i, j ∈ {1, ..., n}, i ≠ j. Neighborhood N_3(σ) contains all schedules obtained from σ by reversing a sequence of jobs σ(i), ..., σ(j), for any pair of positions i, j ∈ {1, ..., n}, i < j. For a given initial solution, the variable neighborhood search starts with setting the current neighborhood number to k = 1. In each step of the algorithm, if local search with respect to neighborhood N_k leads to a schedule improvement, the current schedule is updated and k is reset to 1. In the opposite case, k is increased by 1. The search continues until reaching k = 4, which means that the current solution could not be improved. Corresponding to the choice of the initial schedule, the proposed variable neighborhood search algorithms are denoted by LFVNS, LFAheadVNS and RndVNS.
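The VNS control loop can be sketched generically. The three neighborhood generators below mirror N_1, N_2 and N_3, while `evaluate` stands for whatever cost is attached to a job order (e.g. the makespan of the schedule built from it); the toy cost used in the example is ours.

```python
def vns(order, evaluate):
    """VNS scheme from the text: on improvement reset to the first
    neighborhood, otherwise move to the next one; stop after the last
    neighborhood fails to improve the incumbent."""
    def n1(s):                               # N1: swap a pair of jobs
        for i in range(len(s)):
            for j in range(i + 1, len(s)):
                t = s[:]; t[i], t[j] = t[j], t[i]; yield t
    def n2(s):                               # N2: move one job elsewhere
        for i in range(len(s)):
            for j in range(len(s)):
                if i != j:
                    t = s[:]; x = t.pop(i); t.insert(j, x); yield t
    def n3(s):                               # N3: reverse a segment
        for i in range(len(s)):
            for j in range(i + 1, len(s)):
                yield s[:i] + s[i:j + 1][::-1] + s[j + 1:]
    neighborhoods = [n1, n2, n3]
    best, best_cost = list(order), evaluate(order)
    k = 0
    while k < len(neighborhoods):
        candidates = list(neighborhoods[k](best))
        improved = min(candidates, key=evaluate) if candidates else best
        if evaluate(improved) < best_cost:
            best, best_cost = improved, evaluate(improved)
            k = 0
        else:
            k += 1
    return best, best_cost

# toy cost: number of misplaced jobs relative to the identity order
order, cost = vns([2, 0, 1, 3], lambda s: sum(i != v for i, v in enumerate(s)))
assert order == [0, 1, 2, 3] and cost == 0
```

Each pass takes the best neighbor rather than the first improving one, which is one common variant; the text does not fix this detail.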
Finally, the proposed variable neighborhood search is embedded in the iterated search framework, using two tunable parameters f ∈ (0, 0.5) and r > 0. Given an initial schedule, the variable neighborhood search procedure described above is applied. In the obtained schedule, f n randomly chosen pairs of jobs are swapped, and then the variable neighborhood search is run again. This procedure is repeated r times. The best solution found is recorded and returned as the final result. Reflecting the way the initial schedule is constructed, the iterated variable neighborhood search (IVNS) algorithms are called LFIVNS, LFAheadIVNS and RndIVNS.
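The iterated layer can be sketched on top of any improvement routine. Below, `improve` stands for the VNS procedure, and the shake step swaps about f·n random pairs as described; the stand-in improvement routine in the example is deliberately trivial.

```python
import random

def ivns(order, evaluate, improve, f=0.05, r=10, seed=0):
    """Iterated VNS skeleton: improve, then shake the incumbent by swapping
    roughly f*n random pairs, and repeat r times, keeping the best
    solution seen.  f and r are the tunable parameters from the text."""
    rng = random.Random(seed)
    best, best_cost = improve(order, evaluate)
    current = best
    for _ in range(r):
        shaken = current[:]
        for _ in range(max(1, int(f * len(order)))):   # the shake step
            i, j = rng.randrange(len(shaken)), rng.randrange(len(shaken))
            shaken[i], shaken[j] = shaken[j], shaken[i]
        current, cost = improve(shaken, evaluate)
        if cost < best_cost:
            best, best_cost = current, cost
    return best, best_cost

# illustration with a trivial improvement routine and a toy cost function
evaluate = lambda s: sum(i != v for i, v in enumerate(s))
improve = lambda s, ev: (s, ev(s))     # stand-in for the VNS procedure
best, cost = ivns([1, 0, 2, 3], evaluate, improve, f=0.25, r=50, seed=1)
assert cost <= evaluate([1, 0, 2, 3])  # never worse than the starting order
assert sorted(best) == [0, 1, 2, 3]    # still a permutation of the jobs
```

Note that the search continues from the last shaken-and-improved schedule even when it is worse than the incumbent, which matches the description of recording only the best solution found.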
In order to assess the quality of the results obtained by the heuristics, an integer linear program delivering optimal solutions is proposed. Recall that T = n(p_1 + p_2) is an upper bound on the minimum schedule length C_max. For each j = 1, ..., n and t = 0, ..., T − 1, define binary variables x_{j,t} such that x_{j,t} = 1 if job j starts on machine M1 at time t, and x_{j,t} = 0 in the opposite case. The minimum schedule length is computed as follows.
minimise C_max    (4)
subject to
Σ_{j=1}^{n} ω_j Σ_{τ=max{0, t−p_1−p_2+1}}^{t} x_{j,τ} ≤ Ω(t)  for t = 0, ..., T − 1    (5)
Σ_{j=1}^{n} Σ_{τ=t}^{min{t+p_1−1, T−1}} x_{j,τ} ≤ 1  for t = 0, ..., T − 1    (6)
Σ_{t=0}^{T−1} x_{j,t} = 1  for j = 1, ..., n    (7)
Σ_{t=0}^{T−1} t x_{j,t} + p_1 + p_2 ≤ C_max  for j = 1, ..., n    (8)
x_{j,t} ∈ {0, 1}  for j = 1, ..., n, t = 0, ..., T − 1

In the above program, constraints (5) guarantee that the jobs executed in the interval [t, t + 1) fit in the available buffer (a job started at time τ holds its buffer space during [τ, τ + p_1 + p_2)). Inequalities (6) ensure that at most one job starts on machine M1 in each interval [t, t + p_1). Since p_1 ≥ p_2, this means that no two jobs are executed on the same machine at the same time. Each job starts exactly once by (7). Inequalities (8) guarantee that all jobs are completed by time C_max.

Computational experiments
The quality of the delivered solutions and the running times of the proposed heuristics were tested in a series of computational experiments. The algorithms were implemented in C++ and run on an Intel Core i7-7820HK CPU @ 2.90 GHz with 32 GB RAM. Integer linear programs were solved using Gurobi.
The number of jobs in the generated instances was n ∈ {30, 100}. In the tests where the two operations of each job had the same duration, their execution times were p_1 = p_2 ∈ {1, 3, 5} [2]. In addition to the tests with random buffer availability, instances with available storage space increasing or decreasing with time were constructed by sorting the values Ω(t). These three groups of tests will be referred to as Rnd, Inc and Dec instances, correspondingly. For each analysed setting, 30 tests were generated and solved.
Many test instances could not be solved by Gurobi to optimality in reasonable time. Therefore, a 1 h time limit was imposed. Since the optimal solutions were not always known, the quality of schedules was measured by the relative percentage error with respect to the lower bound LB = max{np_1 + p_2, LB_G}, where LB_G is the lower bound obtained by Gurobi during its computations.
The proposed iterated variable neighborhood search algorithms have two parameters, f and r, which have to be tuned. In order to avoid bias due to starting at a specific schedule, algorithm RndIVNS was used for choosing the values of these parameters. Figure 2 presents the results delivered by this algorithm with f ∈ {0.02, 0.05, 0.1, 0.2} and r ≤ 500 for Rnd and Dec instances with n = 100 and p_1 = p_2 = 5. Similar results were obtained for other analysed combinations of instance type and values p_1 and p_2. Naturally, for any fixed f, the quality of the obtained schedules improves with increasing r. Thus, r = 500 was selected, as it seems a good compromise between quality and time. The value of f that results in the shortest schedules depends on the number of iterations r. For r = 500, the best results were obtained for f = 0.05 in most settings. In the cases when some other value of f was better, the difference between the obtained errors was insignificant (see Fig. 2b). Therefore, f = 0.05 was chosen.

Table 1 presents the results delivered by the respective algorithms for Rnd instances with n = 30 and p_1 = p_2. The algorithms are divided into groups according to their complexity, and in consequence, running time: from the fastest simple heuristics to the slowest integer linear programming (ILP). All integer programs were solved within the time limit for p_i ∈ {1, 3}, but for p_i = 5, optimal solutions could not be obtained for some tests. Still, ILP delivers the best solutions for the analysed instances. In the group of simple heuristics, the best results are achieved by LFAhead, followed by LF. Both these algorithms deliver much better results than Rnd. Algorithms LFLocal and LFAheadLocal do not gain much in comparison to their initial schedules. Local search brings significant improvement only when it starts from a random solution. The quality of schedules delivered by RndLocal is similar to those of LFLocal and LFAheadLocal.
Using variable neighborhood search gives better results, but the difference between the local search and variable neighborhood search algorithms is not very large. All iterated variable neighborhood search algorithms obtain very good schedules, with average errors below 3%. Thus, the random shake step in IVNS is indeed helpful in moving from local minima to substantially better solutions. The choice of the initial schedule does not seem very important for IVNS algorithms, as all variants deliver solutions of similar quality in similar time. It is interesting that although the instance size increases with growing p_i, because the time horizon T gets larger, the quality of schedules delivered by the heuristic algorithms improves with increasing p_i. The results obtained for Rnd instances with 30 jobs and p_1 ≠ p_2 are shown in Table 2. The heuristic algorithms achieve here smaller errors than in the case of p_1 = p_2. Thus, it seems that instances with p_1 ≠ p_2 are easier to solve. Indeed, if p_2 < p_1, then the second machine is idle for at least p_1 − p_2 time units after executing each job. As a single job always fits in the storage space, the buffer limit is automatically observed in such periods, and hence, it may be easier to construct a good solution. Moreover, the relative distance between the upper bound n(p_1 + p_2) on the schedule length and the trivial lower bound np_1 + p_2 is smaller when the difference between p_1 and p_2 is larger. The relationships between individual algorithms are similar to the case of p_1 = p_2. The average errors of all IVNS algorithms are again smaller than 3%. Table 3 contains the results obtained for Inc instances with 30 jobs and p_1 = p_2. For such tests, the best simple heuristic is LF, which delivers solutions within 6% of the optimum on average. The schedules constructed by LFAhead are much worse, although not as bad as random solutions.
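The bound-gap argument above can be checked with a few lines of arithmetic (a sketch; the helper name and sample values are illustrative only):

```python
def relative_gap(n, p1, p2):
    """Relative distance between the upper bound n*(p1 + p2) and the
    trivial lower bound n*p1 + p2 on the schedule length."""
    lb = n * p1 + p2
    ub = n * (p1 + p2)
    return (ub - lb) / lb  # simplifies to (n - 1) * p2 / (n * p1 + p2)

# Equal operation times (p1 = p2 = 5, n = 30): gap close to 1
print(relative_gap(30, 5, 5))   # ~0.935
# Different operation times (p1 = 5, p2 = 2): markedly smaller gap
print(relative_gap(30, 5, 2))   # ~0.382
```

A smaller gap means any feasible schedule is automatically closer to the trivial lower bound, consistent with the smaller errors reported for p_1 ≠ p_2.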
The local search and variable neighborhood search algorithms perform best when starting from an LF schedule. Using a random initial solution leads to substantially better results than starting with an LFAhead schedule. All variants of iterated variable neighborhood search obtain similar results and achieve very high quality, as their average errors are below 1% in almost all cases. Table 4 presents the results for Inc instances with n = 30 and p_1 ≠ p_2. Similarly to the case of Rnd tests, the reported errors are smaller for p_1 ≠ p_2 than for p_1 = p_2. Apart from this, no significant differences between these two groups of tests with an increasing Ω(t) function can be seen. The results obtained for Dec instances with n = 30, p_1 = p_2 are shown in Table 5. This time, the best results among the simple heuristics are delivered by LFAhead, while the schedules produced by LF are much worse. Moreover, LFLocal delivers significantly worse results than RndLocal. However, the distance between the VNS variants is small. The quality of the results produced by LFAheadLocal is the same as that of LFAhead. Thus, it seems that LFAhead schedules are local optima with respect to the neighborhood used in the proposed local search procedure. Using additional neighborhoods in LFAheadVNS gives only very small improvements. In consequence, for p_i < 5, LFAheadVNS is outperformed by LFVNS and RndVNS. All IVNS algorithms again deliver similar results, with average errors below 5%. The results produced by all heuristics are generally worse than for Rnd and Inc instances with the corresponding p_i values. Moreover, the running time of ILP is the longest for Dec tests. Thus, the instances with a decreasing Ω(t) function seem the most difficult to solve. Table 6 presents the results obtained for Dec instances with 30 jobs and p_1 ≠ p_2. Once again, it is confirmed that it is easier to produce good schedules when the execution times of the two operations of a job are different.
The obtained errors are about two times smaller than for Dec instances with p_1 = p_2. In particular, the average errors of the IVNS algorithms are less than 2.5%. The simple heuristic LFAhead delivers good schedules with average errors smaller than 6%, which are not improved by local search or variable neighborhood search.
The experimental results for tests with n = 100 will be presented only for values (p_1, p_2) ∈ {(1, 1), (4, 2), (5, 5)}, representing short and long jobs, as well as equal and different execution times of the two operations of a job. The results obtained for Rnd instances are shown in Table 7. For p_1 = p_2 = 1, the results of most heuristic algorithms are slightly worse than for the corresponding instances with 30 jobs. The quality loss is the largest for Rnd and the iterated variable neighborhood search algorithms. All variants of IVNS produce schedules around 6% longer than the lower bound. The errors obtained for p_i > 1 are much larger than for the corresponding instances with n = 30. Recall that these errors are computed with respect to the lower bound LB. For instances with 100 jobs and p_i > 1, Gurobi was not able to find good lower bounds within the imposed time limit, and hence, the trivial lower bound np_1 + p_2 had to be used. Thus, the distance between the lower bound and the actual optimum is very large, and this is the key factor determining the reported error values. For p_i > 1, IVNS algorithms reach solution quality similar to that of ILP, in a much shorter time.
No instances with n = 100 and p_i > 1 were solved to the optimum by ILP. Note that although the time for solving the integer linear program was limited to 1 h, the algorithm running time also includes building the program and retrieving the schedule found. Hence, the average ILP times reported in Table 7 are slightly greater than 1 h when p_i > 1.
The results obtained for Inc instances with 100 jobs are shown in Table 8. In this group, no tests were solved to optimality by ILP, even for p_1 = p_2 = 1. Thus, it seems that instances with increasing storage availability are harder to solve by ILP than those with random buffer changes. This is also confirmed by the fact that in the group of tests with n = 30, the average ILP execution time was longer for Inc instances than for the corresponding Rnd instances (see Tables 1, 2, 3, 4). For p_1 = p_2 = 1, the average error of ILP solutions is approximately 4%, and the IVNS schedules are about 5% longer than the lower bound. For larger p_i, the reported errors strongly increase, reflecting the growing distance between the lower bounds found and the optimal solutions. All variants of iterated variable neighborhood search significantly outperform ILP when p_i > 1, achieving both better solution quality and shorter execution time. Table 9 presents the results obtained for Dec instances with n = 100. No tests in this group were solved to the optimum by ILP. The average ILP error reaches 15% already for instances with p_1 = p_2 = 1, which suggests that the case of decreasing buffer availability is the hardest to solve for this algorithm. This conforms with the earlier observation that the Dec tests seem the most difficult in the group of instances with n = 30. The iterated variable neighborhood search algorithms achieve much better results than ILP for p_i > 1. It may seem surprising that the errors obtained by the VNS, IVNS and ILP algorithms for Dec instances with p_1, p_2 > 1 are smaller than for the corresponding Rnd or Inc instances. However, this can be explained by the fact that in Dec instances, the available buffer is large at the beginning of the scheduling period, and hence, the optimum schedules are shorter than in the case of Rnd or Inc instances.
Hence, the distance between the optimum and the lower bound np_1 + p_2, which is the main factor influencing the reported errors, is smaller for Dec instances. Obtaining smaller errors for p_1 = 4, p_2 = 2 than for p_1 = p_2 = 1 confirms the earlier observation that tests with p_1 ≠ p_2 are easier to solve than those with p_1 = p_2. This effect is not visible for Rnd and Inc instances with n = 100 and p_1 = 4, p_2 = 2, because the distance between the lower bound and the optimal solution increases fast with growing job execution times when the Ω(t) function is not decreasing. The results of the performed computational experiments can be summarised as follows. The optimal solutions can be found using ILP, but at a high computational cost, which seems the largest in the case of decreasing Ω(t), and the smallest in the case of random buffer changes. If a schedule has to be found very fast, one of the algorithms LF and LFAhead can be used. LF is suitable for increasing buffer size, and LFAhead should be used when the storage space function is decreasing or random. For n = 30, choosing a correct simple heuristic usually leads to obtaining solutions at most 10% from the lower bound. Better results can be obtained using local search, and variable neighborhood search provides further improvements. However, for some types of instances, choosing a good initial schedule may be necessary to obtain high-quality results using these algorithms. Among the proposed heuristics, the best results are delivered by iterated variable neighborhood search, in reasonable time. Moreover, the choice of the initial schedule has a very small impact on the performance of IVNS, and hence, no additional knowledge about the instance is required to achieve high-quality solutions. For the largest analysed instances, iterated variable neighborhood search produces better results than ILP with a 1 h time limit.
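The iterated variable neighborhood search discussed throughout combines a descent step with a random shake of strength f, repeated for r rounds. A minimal sketch of that control loop, with the descent abstracted behind a caller-supplied `vns_descent` function; all names and the shake move are illustrative assumptions, not the authors' implementation:

```python
import random

def shake(schedule, f, rng):
    """Perturb a schedule by moving roughly a fraction f of randomly
    chosen jobs to random positions."""
    s = list(schedule)
    for _ in range(max(1, round(f * len(s)))):
        i, j = rng.randrange(len(s)), rng.randrange(len(s))
        s.insert(j, s.pop(i))
    return s

def ivns(initial, makespan, vns_descent, r=500, f=0.05, seed=0):
    """Iterated VNS skeleton: descend from the initial schedule, then
    repeat r rounds of shake + descent, keeping only improvements."""
    rng = random.Random(seed)
    best = vns_descent(initial)
    for _ in range(r):
        candidate = vns_descent(shake(best, f, rng))
        if makespan(candidate) < makespan(best):
            best = candidate  # accept only improving schedules
    return best
```

Because only improving candidates are accepted, the returned schedule is never worse than the descended initial one, while the shake lets the search escape the local optima that pure descent gets stuck in.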

Conclusions
This paper analyses makespan minimisation in two-machine flow shops with job-dependent storage requirements and storage availability changing in time. It shows that the problem is strongly NP-hard even in the case when the duration of each operation is one unit of time. The existence of optimal permutation no-wait schedules is proved for the case when the smallest processing time on the first-stage machine is greater than or equal to the largest processing time on the second-stage machine, and the case when the smallest processing time on the second-stage machine is greater than or equal to the largest processing time on the first-stage machine. For the case where all jobs have the same processing time on the first machine, and the same processing time on the second machine, the paper presents a polynomial-time approximation scheme and several heuristic algorithms. Computational experiments show that iterated variable neighborhood search algorithms are a good tool for solving the considered problem. The average relative errors obtained by all variants of IVNS for instances with n = 30 are below 5%. For the most difficult instances with 100 jobs, the performance of IVNS is hard to estimate because good lower bounds cannot be found, but the delivered solutions are close to or better than those produced by ILP. Future research should include the worst-case analysis of the approximation algorithms. An interesting open question is whether problem F2|storage, ω_j, Ω(t), p_{i,j} = 1|C_max remains strongly NP-hard when Ω(t) is a monotonic function.