1 Introduction

This paper is concerned with the two-machine flow shop scheduling problem. This problem models a wide range of real-world situations that can be viewed as scheduling a set of jobs on two machines, the first-stage machine and the second-stage machine [6]. According to the two-machine flow shop model, each job should be processed on the first-stage machine and, after the completion of this first operation, on the second-stage machine. The objective function considered here is the total time needed for the completion of all jobs. In the scheduling literature, problems with this objective function are normally referred to as makespan minimisation problems or simply makespan problems.

A number of applications, ranging from multimedia systems [14] and star data gathering networks [1] to supply chains of mineral resources [7], motivated the recent interest in the two-machine flow shop with limited storage space, also referred to as the two-machine flow shop with job dependent buffer requirements. This direction of research focuses on the situations with two distinct characteristics: (1) each job requires some storage space and the storage requirements vary from job to job; and (2) each job seizes the required portion of the storage space (buffer) from the start of its first operation till the completion of its second operation. Such use of the storage space differentiates the flow shop with limited storage from the two-machine flow shop with an additional resource, where the additional resource is used only during the processing on the machines [4, 5].

It is known that the two-machine flow shop problem with limited storage and the objective of makespan is NP-hard in the strong sense [14]. Moreover, it remains NP-hard in the strong sense even under the restriction that the order in which the jobs should be processed on one of the machines is given [12]. According to [13] the makespan minimisation problem is also NP-hard in the following two cases: when all jobs have the same processing time on the second-stage machine and the buffer requirement of a job is proportional to its processing time on the first-stage machine, and in the case when all jobs have the same processing time on the second-stage machine and the same buffer requirements.

The case where all jobs have the same processing time on one of the machines, or the same processing time on the first-stage machine and the same processing time on the second-stage machine, was considered in a number of publications, including [2, 9] and the above-mentioned [13]. Besides its theoretical interest, this case has applications in star data gathering networks, where different workstations are allocated dataset-independent time slots for data transfer, and in loading and unloading operations involving a crane.

Furthermore, as is shown in [8], there are instances of the makespan minimisation problem where, in any optimal schedule, the order in which the jobs are processed on one of the machines differs from the order on the other machine. The existence of such instances significantly complicates the development of optimisation algorithms.

A schedule in which the jobs are processed in the same order on both machines is called a permutation schedule. For two particular cases, the existence of an optimal schedule that is also a permutation one is proved in [15]. In both cases, the buffer requirement of a job is proportional to its processing time on the first-stage machine. One of these two cases is the case where the smallest processing time on the second-stage machine is greater than or equal to the largest processing time on the first-stage machine. It is shown that in this case an optimal schedule can be constructed in polynomial time. The publication [14] can be viewed as a source of motivation for studying this particular case as well as its mirror case, where the smallest processing time on the first-stage machine is greater than or equal to the largest processing time on the second-stage machine.

There are many situations where the available storage space (buffer capacity) is a function of time. For example, the storage space is often shared by several clients of the same transportation facility, and the computer memory is often used by several processes simultaneously. Despite this, to the best of the authors’ knowledge, there are no publications except [2] that study such systems. The paper addresses this gap in the literature on scheduling by considering the two-machine flow shop with variable storage capacity. The paper

(a) proves that the introduction of variable storage availability makes the problem NP-hard in the strong sense even for unit processing times (in contrast, the same problem with constant availability of the storage space is solvable in polynomial time [9]) (Sect. 3);

(b) strengthens the result in [15] by establishing the existence of an optimal permutation schedule even for arbitrary storage requirements which are not necessarily proportional to the duration of the first operation (Sect. 4);

(c) establishes the existence of an optimal permutation schedule for the case of arbitrary storage requirements when the smallest processing time on the first-stage machine is greater than or equal to the largest processing time on the second-stage machine (Sect. 4);

(d) shows that in the case of variable resource availability, even for unit processing times, the two-machine flow shop with an additional resource may not have an optimal permutation schedule (Sect. 4);

(e) presents a polynomial-time approximation scheme for the case where all jobs have the same processing time on the first-stage machine and the same processing time on the second-stage machine (Sect. 5);

(f) presents several heuristics for the case where all jobs have the same processing time on the first-stage machine and the same processing time on the second-stage machine, and compares them by means of computational experiments (Sects. 6, 7).

2 Problem formulation

The considered scheduling problem can be stated as follows. The jobs, comprising the set \(N = \{1, \ldots , n\}\), are to be processed on two machines, machine \(M_1\) and machine \(M_2\). Each job should be processed first on \(M_1\) (the first operation of the job) and then, from some point in time after the completion of the first operation, on \(M_2\) (the second operation of the job). Each machine can process at most one job at a time (except the points in time when one job completes processing and another job starts processing on this machine), and each job can be processed on at most one machine at a time (except the situations when the completion time of the first operation coincides with the start of the second operation). If a machine starts processing a job, it continues its processing until the completion of the corresponding operation, i.e. no preemptions are allowed.

All processing times are positive integers. The processing time of job j on machine \(M_i\) will be denoted by \(p_{i,j}\), and the total processing time will be denoted by T, i.e.

$$\begin{aligned} T = \sum _{i \in \{1, 2\}} \sum _{j \in N} p_{i, j}. \end{aligned}$$

The processing of jobs commences at time \(t = 0\). A schedule \(\sigma \) specifies for each \(j \in N\) the point in time \(S^1_j(\sigma )\) when job j starts its processing on \(M_1\) and the point in time \(C^2_j(\sigma )\) when job j completes its processing on \(M_2\). Since preemptions are not allowed, for each job j, a schedule \(\sigma \) also specifies the completion time on machine \(M_1\)

$$\begin{aligned} C^1_j(\sigma ) = S^1_j(\sigma ) + p_{1,j} \end{aligned}$$

and the starting time on machine \(M_2\)

$$\begin{aligned} S^2_j(\sigma ) = C^2_j(\sigma ) - p_{2,j}. \end{aligned}$$

The notation \(S^1_j\), \(C^1_j\), \(S^2_j\), \(C^2_j\) will be used instead of \(S^1_j(\sigma )\), \(C^1_j(\sigma )\), \(S^2_j(\sigma )\), \(C^2_j(\sigma )\) if it is clear what schedule is considered.

In order to be processed, each job j requires \(\omega _j\) units of an additional resource, referred to as storage space or a buffer, which it seizes during the time interval \([S^1_j, C^2_j)\) and releases at the point in time \(C^2_j\). All \(\omega _j\) are nonnegative integers. For any point in time \( 0 \le t < T \) the permissible consumption of the storage space is determined by a piecewise constant function \(\varOmega (t)\). In other words, at any point in time \(0 \le t <T\), any schedule must satisfy the condition

$$\begin{aligned} \sum _{\{j{:}\, S^1_j \le t < C^2_j\}} \omega _j \le \varOmega (t). \end{aligned}$$

The function \(\varOmega (t)\) satisfies the condition

$$\begin{aligned} \max _{j\in N} \omega _j \le \min _{0 \le t < T} \varOmega (t), \end{aligned}$$
(1)

changes its value only at integer points and is given as the sequence \(\varOmega (0),\dots ,\) \(\varOmega (T-1)\) where, for any integer \(0 \le \tau \le T-1\), the value of \(\varOmega (t) \) in the time interval \([\tau , \tau +1)\) is \(\varOmega (\tau )\).

The goal is to find a schedule that minimises the makespan

$$\begin{aligned} C_{ max }(\sigma ) = \max _{j \in N} C^2_j(\sigma ). \end{aligned}$$

In what follows, the problem stated above will be referred to as the two-machine flow shop with job dependent storage requirements and dynamic storage availability. In the standard three-field notation [11] this problem can be denoted \(F2| storage , \omega _j, \varOmega (t)|C_{ max }\).

Since all processing times are integer and all points of discontinuity of \(\varOmega (t)\) are also integer, for any instance of \(F2| storage , \omega _j, \varOmega (t)|C_{ max }\), there exists an optimal schedule \(\sigma \) such that all starting times \(S_j^1(\sigma )\) and \(S_j^2(\sigma )\) are integer. Therefore, in what follows, without loss of generality, it is assumed that the starting times of all operations on both machines should be integer.
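
For illustration, the conditions above can be checked mechanically. The following sketch is a hypothetical helper, not part of the formulation; it assumes \(\varOmega \) is stored as a Python list covering the schedule horizon, and verifies the three requirements: precedence of the two operations, machine capacity, and the storage constraint.

```python
def is_feasible(S1, S2, p1, p2, omega, Omega):
    """Check feasibility of a schedule for F2|storage, omega_j, Omega(t)|Cmax.

    S1[j], S2[j]: integer start times of job j on M1 and M2;
    p1[j], p2[j]: processing times; omega[j]: storage requirement;
    Omega: list with Omega[t] = permissible storage consumption in [t, t+1).
    """
    n = len(S1)
    # The second operation cannot start before the first one completes.
    if any(S2[j] < S1[j] + p1[j] for j in range(n)):
        return False
    # Each machine processes at most one job at a time.
    for ops in ([(S1[j], S1[j] + p1[j]) for j in range(n)],
                [(S2[j], S2[j] + p2[j]) for j in range(n)]):
        ops.sort()
        if any(ops[i][1] > ops[i + 1][0] for i in range(n - 1)):
            return False
    # Job j seizes omega[j] units of the buffer during [S1[j], S2[j] + p2[j]).
    horizon = max(S2[j] + p2[j] for j in range(n))
    for t in range(horizon):
        load = sum(omega[j] for j in range(n) if S1[j] <= t < S2[j] + p2[j])
        if load > Omega[t]:
            return False
    return True
```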

3 Computational complexity

Denote by \(F2| storage , \omega _j, \varOmega (t), p_{i,j}=1|C_{ max }\) the restricted version of the \(F2| storage , \omega _j, \varOmega (t)|C_{ max }\) problem where all processing times are equal to one unit of time. In contrast to the case of constant storage availability, which is polynomially solvable when all processing times are equal to one unit of time [9], the \(F2| storage , \omega _j, \varOmega (t), p_{i,j}=1|C_{ max }\) problem is NP-hard in the strong sense. This will be proved below by a reduction from the Numerical Matching with Target Sums (NMTS) decision problem, which is NP-complete in the strong sense [10]. The NMTS decision problem is stated as follows:

INPUT: three sets \(\{x_1, \dots , x_r \}\), \(\{y_1, \dots , y_r \}\) and \(\{z_1, \dots , z_r \}\) of positive integers, where

$$\begin{aligned} \sum _{i=1}^r x_i + \sum _{i=1}^r y_i = \sum _{i=1}^r z_i. \end{aligned}$$
(2)

QUESTION: do there exist permutations \((i_1, \dots , i_r)\) and \((j_1,\dots , j_r)\) of the indices \(1, \dots , r\) such that \(z_k =x_{i_k} + y_{j_k}\) for all \(k \in \{1, \dots ,r\}\)?

Theorem 1

The \(F2| storage , \omega _j, \varOmega (t), p_{i,j}=1|C_{ max }\) problem is NP-hard in the strong sense.

Proof

Let \(\{x_1, \ldots , x_r\}\), \(\{y_1, \ldots , y_r\}\) and \(\{z_1, \ldots , z_r\}\) be an arbitrary instance of the NMTS problem, and let

$$\begin{aligned} x = \max _{1 \le i \le r}x_i \text{ and } Z = x + \max _{1\le i \le r} y_i + \max _{1\le i \le r} z_i. \end{aligned}$$

Consider the following instance of the decision version of the makespan minimisation problem \(F2| storage , \omega _j, \varOmega (t), p_{i,j}=1|C_{ max }\):

$$\begin{aligned} \begin{array}{ll} \text{INPUT:} &{} N = \{1, \dots , 2r\}, \\ &{} p_{1,i} = p_{2,i} = 1 \text{ for } 1\le i \le 2r, \\ &{} \omega _i = 2Z + x_i \text{ for } 1\le i \le r, \\ &{} \omega _i = Z + y_{i - r} + x \text{ for } r+1\le i \le 2r, \\ &{} \varOmega (t) = \left\{ \begin{array}{ll} 2Z + x &{}\quad \text{for } t = 0, \\ 3Z + x + z_k &{}\quad \text{for } t = 3k - 2 \text{ and } k \in \{1, \dots , r\}, \\ 2Z + x &{}\quad \text{for } t = 3k - 1 \text{ and } k \in \{1, \dots , r-1\}, \\ 2Z + x &{}\quad \text{for } t = 3k \text{ and } k \in \{1, \dots , r-1\}, \\ 2Z + x &{}\quad \text{for } t = k \text{ and } k \in \{3r -1, \dots , 4r-1\}. \end{array} \right. \\ \text{QUESTION:} &{} \text{does there exist a schedule } \sigma \text{ such that } C_{ max }(\sigma ) \le 3r? \end{array} \end{aligned}$$
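
For concreteness, the construction of this instance can be sketched as follows. This is an illustrative helper whose name is an assumption; x, y and z are 0-indexed Python lists, while jobs keep the 1-based numbering of the text.

```python
def reduction_instance(x, y, z):
    """Builds the scheduling instance of Theorem 1 from an NMTS instance."""
    r = len(x)
    X = max(x)                               # the value x of the proof
    Z = X + max(y) + max(z)
    omega = ([2 * Z + x[i] for i in range(r)]          # jobs 1..r
             + [Z + y[i] + X for i in range(r)])       # jobs r+1..2r
    T = 4 * r                                # 4r unit operations in total
    Omega = [2 * Z + X] * T                  # capacity of the 1-intervals
    for k in range(1, r + 1):
        Omega[3 * k - 2] = 3 * Z + X + z[k - 1]        # the r 2-intervals
    return omega, Omega
```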

Observe that this instance satisfies the condition (1) and suppose that there exists a schedule \(\sigma \) with the makespan that does not exceed 3r, i.e. assume that the answer to the scheduling problem is YES.

The function \(\varOmega (t)\) induces the partition of the time interval [0, 3r) into 2r unit intervals with the buffer capacity \(2Z+x\), referred to as 1-intervals because at most one job can use the storage space at any point in such an interval, and r unit intervals with the buffer capacity \(3Z+x+ z_k\) where \(k \in \{1, \dots , r\}\), referred to as 2-intervals. Taking into account that the total processing time is \(T = 4r\), both machines are busy in \(\sigma \) during each 2-interval and one machine is busy in \(\sigma \) during each 1-interval.

The same job cannot be processed in two different 2-intervals because it would then consume the storage space during each 1-interval separating these 2-intervals (there are at least two such 1-intervals), leaving insufficient storage space for the jobs that must be processed in those 1-intervals. Since both machines are busy during each of the r 2-intervals, 2r operations are processed in the 2-intervals, and, by the observation above, these operations belong to 2r distinct jobs. Hence, one operation of each job is processed in a 1-interval while its other operation is processed in a 2-interval. Taking into account that no two jobs from the set \(\{1, \dots , r\}\) can be processed concurrently due to the insufficient storage space, in every 2-interval \([3k-2, 3k-1)\), where \(k \in \{1, \dots , r\}\), an operation of some job from the set \(\{1, \dots , r\}\), denoted \(i_k\), is processed concurrently with an operation of some job from the set \(\{r+1, \dots , 2r\}\), denoted \(g_k\). For each such pair of jobs,

$$\begin{aligned} \omega _{i_k} + \omega _{g_k} \le \varOmega (3k-2). \end{aligned}$$

Let \(j_k = g_k - r\). Then, by virtue of \(\omega _{i_k} = 2Z + x_{i_k}\) and \(\omega _{g_k} = Z + y_{g_k-r} + x\),

$$\begin{aligned} x_{i_k} + y_{j_k} \le z_k, \end{aligned}$$

which by (2) implies \(z_k =x_{i_k} + y_{j_k}\) for all \(k \in \{1, \dots ,r\}\).

Now suppose that there exist permutations \((i_1, \dots , i_r)\) and \((j_1, \dots , j_r)\) of the indices \(1, \dots , r\) such that \(z_k =x_{i_k} + y_{j_k}\) for all \(k \in \{1, \dots ,r\}\), i.e. assume that the answer to the considered instance of NMTS is YES. Then, the schedule where, for each job g, \(S^1_g + 1 = S^2_g\) and where, for each \(1 \le k \le r\), \( S^1_{i_k} = 3(k-1)\) and \( S^1_{j_k + r} = 3k-2\), has the required makespan of 3r. \(\square \)

4 Permutation schedules

A schedule for the \(F2| storage , \omega _j, \varOmega (t)|C_{ max }\) problem is a no-wait schedule if, for every \(j \in N\), \(S^2_j = C^1_j\).

Lemma 1

Let \(j_1, \dots , j_n\) be the sequence in which the jobs are processed on \(M_1\) in some schedule \(\sigma \). If, for all \(1 \le k < n\), the processing times satisfy the condition \(p_{2, j_k} \le p_{1, j_{k+1}}\), then there exists a no-wait schedule \(\sigma '\) such that \(C_{ max }(\sigma ) \ge C_{ max }(\sigma ')\).

Proof

Consider the no-wait schedule \(\sigma '\) where \(S^1_j(\sigma ') = S^1_j(\sigma )\) and \(S^2_j(\sigma ') = C^1_j(\sigma )\) for all \(j \in N\). Since, for any job j, \(S^2_j(\sigma ) \ge C^1_j(\sigma )\) and therefore \(S^2_j(\sigma ') \le S^2_j(\sigma )\), the schedule \(\sigma '\) satisfies the inequality \(C_{ max }(\sigma ) \ge C_{ max }(\sigma ')\). It also satisfies the storage restrictions because, for each job j, the choice of \(S^1_j(\sigma ')\) and \(S^2_j(\sigma ')\) implies

$$\begin{aligned}{}[S^1_j(\sigma '), C^2_j(\sigma ')) \subseteq [S^1_j(\sigma ), C^2_j(\sigma )). \end{aligned}$$

Furthermore, for each \(1 \le k < n\),

$$\begin{aligned} C^2_{j_k}(\sigma ') &= C^1_{j_k}(\sigma ') + p_{2, j_k} = C^1_{j_k}(\sigma ) + p_{2, j_k} \le S^1_{j_{k+1}}(\sigma ) + p_{2, j_k} \\ &= S^1_{j_{k+1}}(\sigma ') + p_{2, j_k} \le S^1_{j_{k+1}}(\sigma ') + p_{1, j_{k+1}} = S^2_{j_{k+1}}(\sigma ') \end{aligned}$$

which shows that the schedule \(\sigma '\) satisfies the restrictions imposed by the processing times. \(\square \)

The proof of the lemma below is similar to the proof of Lemma 1. The main difference is the choice of the schedule \(\sigma '\). Now this schedule is defined as follows: for each \(j \in N\), \(S^2_j(\sigma ') = S^2_j(\sigma )\) and \(C^1_j(\sigma ') = S^2_j(\sigma )\).

Lemma 2

Let \(j_1, \dots , j_n\) be the sequence in which the jobs are processed on \(M_2\) in some schedule \(\sigma \). If, for all \(1 \le k < n\), the processing times satisfy the condition \(p_{2, j_k} \ge p_{1, j_{k+1}}\), then there exists a no-wait schedule \(\sigma '\) such that \(C_{ max }(\sigma ) \ge C_{ max }(\sigma ')\).

Any no-wait schedule is obviously a permutation schedule. This observation leads to the following theorem.

Theorem 2

For any instance of the \(F2| storage , \omega _j, \varOmega (t)|C_{ max }\) problem such that either \(\min _{j\in N} p_{1,j} \ge \max _{j\in N} p_{2,j}\) or \(\min _{j\in N} p_{2,j} \ge \max _{j\in N} p_{1,j}\), there exists an optimal schedule which is a permutation one.

Consider the restricted version of the \(F2| storage , \omega _j, \varOmega (t)|C_{ max }\) problem where all \(p_{1, j}\) are equal and all \(p_{2, j}\) are also equal. Let \(p_1\) be the common value of all \(p_{1, j}\) and let \(p_2\) be the common value of all \(p_{2, j}\); of course, \(p_1\) and \(p_2\) may vary from instance to instance. This restricted makespan minimisation problem will be denoted by \(F2| storage , \omega _j, \varOmega (t), p_i|C_{ max }\). The instance of the \(F2| storage , \omega _j, \varOmega (t), p_i|C_{ max }\) problem with the set of jobs \(N'\), function \(\varOmega '(t)\), storage requirements \(\omega '_j\), and processing times \(p'_1\) and \(p'_2\) is conjugate to the instance with the set of jobs \(N''\), function \(\varOmega ''(t)\), storage requirements \(\omega ''_j\), and processing times \(p''_1\) and \(p''_2\) if

$$\begin{aligned} N' = N'', \quad \varOmega '(t) = \varOmega ''(t), \quad \omega '_j = \omega ''_j \text{ for all jobs } j, \quad p'_1 = p''_2, \quad p'_2 = p''_1. \end{aligned}$$

Lemma 3

Any two conjugate instances of the \(F2| storage , \omega _j, \varOmega (t), p_i|C_{ max }\) problem have the same optimal makespan.

Proof

Consider a pair of conjugate instances \(I'\) and \(I''\) with the set of jobs \(N = \{1, \ldots , n\}\), function \(\varOmega (t)\), and storage requirements \(\omega _j\). Let the first of these instances have processing times \(p_1\) and \(p_2\) and let \(\sigma '\) be an optimal schedule for this instance. In the light of Lemmas 1 and 2, without loss of generality, \(\sigma '\) is a no-wait schedule. Then, \(\sigma '\) defines n time intervals \([S^1_j(\sigma '), C^2_j(\sigma ')]\), one for each job j in N.

For the conjugate instance \(I''\), consider the schedule \(\sigma ''\) where, for each \(1 \le j \le n\), \(S^1_j(\sigma '') = S^1_j(\sigma ')\) and \(C^2_j(\sigma '') = C^2_j(\sigma ')\). Since, for each \(1 \le j \le n\),

$$\begin{aligned} C^2_j(\sigma ') - S^1_j(\sigma ') = p_1 + p_2, \end{aligned}$$

\(S^2_j(\sigma '') = C^1_j(\sigma '')\) for each job j, i.e. the operations of the same job do not overlap in \(\sigma ''\). Furthermore, because \(\sigma '\) is a no-wait schedule, for each job j there exists at most one job g such that the second operation of j is processed concurrently with the first operation of g. For any such pair of jobs j and g, these two jobs are processed concurrently in both schedules, \(\sigma '\) and \(\sigma ''\), in the same time interval, which is the intersection of the time intervals \([S^1_j(\sigma '), C^2_j(\sigma ')]\) and \([S^1_g(\sigma '), C^2_g(\sigma ')]\). Thus, \(\sigma ''\) satisfies the buffer restrictions. Finally, since \(\sigma '\) is a feasible no-wait schedule,

$$\begin{aligned} C^2_j(\sigma ') - S^1_g(\sigma ') \le p_1, C^2_j(\sigma ') = S^1_j(\sigma ') + p_1 + p_2 \; \text{ and } \; S^1_j(\sigma ') + p_1 \le S^1_g(\sigma '). \end{aligned}$$

Taking this into account,

$$\begin{aligned} S^1_g(\sigma '') = S^1_g(\sigma ') = S^1_g(\sigma ') + S^1_j(\sigma ') + p_1 + p_2 - C^2_j(\sigma ') \ge S^1_j(\sigma ') + p_2 = C^1_j(\sigma '') \end{aligned}$$

and

$$\begin{aligned} S^2_g(\sigma '') = S^1_g(\sigma ') + p_2 = S^1_g(\sigma ') + p_2 + C^2_j(\sigma ') - S^1_j(\sigma ') - p_1 - p_2 \ge C^2_j(\sigma ''). \end{aligned}$$

Hence, the operations of jobs j and g do not overlap on either machine, and in consequence, \(\sigma ''\) is a feasible schedule. Denote by \(C^*_{ max }(I)\) the optimal makespan for instance I. Since all job completion times are the same in \(\sigma '\) and \(\sigma ''\), it holds that \(C_{ max }(\sigma ')=C_{ max }(\sigma '')\), and hence, \(C^*_{ max }(I'')\le C^*_{ max }(I')\). Because the conjugation relation is symmetric, it is also true that \(C^*_{ max }(I')\le C^*_{ max }(I'')\), and hence, \(C^*_{ max }(I')= C^*_{ max }(I'')\). \(\square \)

Problem \(F2| storage , \omega _j, \varOmega (t)|C_{ max }\) is related to the two-machine flow shop problem with an additional resource, where the resource is used only during the processing on the machines [3, 4, 16, 17], because the storage space can also be viewed as an additional resource. However, the condition that the storage is used in the whole interval from the start of the first operation of a job till the completion of its second operation differentiates the two problems, even in the case of unit processing times. By Theorem 2, any instance of problem \(F2| storage , \omega _j, \varOmega (t), p_{i,j}=1|C_{ max }\) has an optimal permutation schedule. Although in the case when the available amount of the resource is constant, any instance of the two-machine flow shop problem with an additional resource and unit operation execution times also has an optimal permutation schedule [16], it will be shown below that this is no longer true when the resource availability is a function of time.

Fig. 1 Optimal schedule for the proof of Theorem 3. Jobs are described by their storage requirement \(\omega _j\), with the job number j given in the subscript

Theorem 3

There exist instances of the two-machine flow shop scheduling problem with unit processing times, an additional resource and variable resource availability, which do not have optimal permutation schedules.

Proof

Let \(n=6\), and let the resource requirements of the jobs be \((\omega _j)_{j=1}^6=(6,6,6,5, 5,2)\). Let the resource availability be described by a function \(\varOmega (t)\) given by the sequence (6, 12, 6, 7, 7, 11, 6, 6, 6, 6, 6, 6). An optimal schedule of length 8 is presented in Fig. 1. It will be shown that there exists no permutation schedule of length at most 8. Note that the total resource availability in the interval [0, 8) is \(\sum _{t=0}^7 \varOmega (t)=61\), while the total requirement of all operations is \(2\sum _{j=1}^6 \omega _j=60\). Thus, in order to construct a schedule of length 8, at most one unit of the resource can be wasted during at most one time unit.

Suppose an optimal permutation schedule \(\sigma \) exists. Without loss of generality, assume that if \(\omega _i=\omega _j\) and \(i<j\), then job i is scheduled before job j. Let \(j_1\) be the first job in \(\sigma \). If \(\omega _{j_1}<6\), then at least one unit of the resource is wasted in the interval [0, 1), and at least one unit is lost in the interval [1, 2). Thus, \(\omega _{j_1}=6\), \(j_1=1\), and this job completes on \(M_2\) at time \(C^2_1=2\), as otherwise at least 6 units of the resource would be wasted in the interval [1, 2). Similarly, let \(j_2\) be the second job in \(\sigma \). If \(\omega _{j_2}<6\), then at least one unit of the resource is wasted in the interval [1, 2), and another one in the interval [2, 3) or [3, 4). Hence, \(\omega _{j_2}=6\), \(j_2=2\) and \(S^1_2=1\). Consider the following two cases: either (1) \(C^2_2=3\), or (2) \(C^2_2>3\).

In case (1), machine \(M_1\) is idle in the time interval [2, 3), as there is not enough available resource for starting another job. Let \(j_3\) be the third job in \(\sigma \). Note that \(j_3\) has to start at time 3 on \(M_1\), as 7 units of the resource would be wasted otherwise. If \(\omega _{j_3}<6\), then at least 2 units of the resource are wasted in the interval [3, 4). If, on the other hand, \(\omega _{j_3}=6\), then one unit of the resource is lost in the interval [3, 4), and another one in the interval [4, 5).

In case (2), machine \(M_2\) is idle in the interval [2, 3), and job 2 has to be completed at time \(C^2_2=4\), because at least 2 units of the resource would be wasted in the interval [2, 4) otherwise. Hence, one unit of the resource is wasted in the interval [3, 4). Recall that in this problem the resource is consumed only while an operation is actually being processed, so job 2 does not hold the resource while waiting in the interval [2, 3); in consequence, job 3 (whose resource requirement is 6) has to be executed on \(M_1\) in the interval [2, 3), as the resource would be wasted there otherwise. Then, job 3 is scheduled on \(M_2\) in the interval [4, 5), which leads to losing one more unit of the resource.

Thus, in both cases at least two units of the resource are wasted, and hence, an optimal permutation schedule does not exist. \(\square \)

5 Polynomial-time approximation scheme

This section presents a polynomial-time approximation scheme for problem \(F2| storage , \omega _j, \varOmega (t), p_i|C_{ max }\). By Lemmas 1 and 2, it is enough to consider no-wait schedules. For clarity of presentation, it is assumed that \(p_1\ge p_2\). However, it follows from Lemma 3 that the proposed approximation scheme can also be used for solving the problem when \(p_1<p_2\).

For any \(\varepsilon >0\), let \(k = \left\lfloor \frac{n \varepsilon }{2} \right\rfloor \) and \(q =\left\lceil \frac{2}{\varepsilon } \right\rceil \). It will be shown that a \((1+\varepsilon )\)-approximation of the optimal solution can be found in \(O(p_2q^2n^q)\) time. Recall that an instance of problem \(F2| storage , \omega _j, \varOmega (t), p_i|C_{ max }\) contains the values n, \(p_1\) and \(p_2\), the buffer requirements \(\omega _i\) of the n jobs, and a sequence \(\varOmega (0),\dots ,\) \(\varOmega (T-1)\), where \(T=n(p_1+p_2)\), representing the storage availability function. Thus, the instance size is \(O(n(p_1+p_2))\), and hence, the running time of the proposed algorithm is indeed polynomial.

Theorem 4

For any instance of \(F2| storage , \omega _j, \varOmega (t), p_i|C_{ max }\) and any given small \(\varepsilon >0\), a schedule \(\sigma \) such that

$$\begin{aligned} C_{ max }(\sigma ) \le (1 + \varepsilon ) C_{ max }(\sigma ^*), \end{aligned}$$
(3)

where \(\sigma ^*\) is an optimal schedule, can be constructed in \(O(p_2q^2n^q)\) time.

Proof

Assume that there are sufficiently many jobs and number them in the nondecreasing order of their storage requirements, i.e. \(\omega _1 \le \cdots \le \omega _n\). For each job j, replace its storage requirement \(\omega _j\) by a new one (denoted \(\alpha _j\)) as follows. For each \(1 \le e \le q-1\) and each \((e-1)k < j \le ke\), let \(\alpha _j = \omega _{k e}\), and for each \(k (q-1) < j \le n\), let \(\alpha _j = \omega _{n}\). Observe that any schedule for the problem with the new storage requirements is feasible for the problem with the original storage requirements.

An optimal schedule for the new storage requirements can be constructed by dynamic programming as follows. For \(1 \le e \le q\), let

$$\begin{aligned} \pi (e) = \left\{ \begin{array}{ll} ke &\quad \text{if } 1 \le e < q, \\ n &\quad \text{if } e = q, \end{array} \right. \end{aligned}$$

and consider \((q+1)\)-tuples \((n_1, \dots , n_q, i)\) such that (a) \(0 \le n_e \le k\) for all \(1 \le e \le q-1\), and \(0 \le n_q \le n - k(q-1)\); and (b) \(1 \le i \le q\) and \(n_i >0\). Each such \((q+1)\)-tuple represents a set of \(n_1 + \cdots + n_q\) jobs which, for each \(1 \le e \le q\), contains \(n_e\) jobs whose new storage requirement \(\alpha _j\) is \(\omega _{\pi (e)}\).

For each \((q+1)\)-tuple \((n_1, \dots , n_q, i)\), let \(F(n_1, \dots , n_q, i)\) be the minimal time needed for completion of all jobs corresponding to \((n_1, \dots , n_q, i)\), under the condition that the job with the largest completion time among these jobs is a job with the new storage requirement \(\omega _{\pi (i)}\). Consequently, the optimal makespan is

$$\begin{aligned} C = \min _{1 \le i \le q} F(k,\dots , k, n - (q-1)k, i). \end{aligned}$$

The \((q+1)\)-tuples, satisfying the condition \(n_1 + \cdots + n_q = 1\), will be referred to as boundary \((q+1)\)-tuples. Then, \(F(n_1, \dots , n_q, i) = p_1+p_2\) for each boundary \((n_1,\dots , n_q, i)\).

For any positive integer t, any \(1 \le i \le q\) and any \(1 \le e \le q\), let \(\omega _{i,e}=\omega _{\pi (i)} + \omega _{\pi (e)}\), and

$$\begin{aligned} W_{i, e}(t) = \left\{ \begin{array}{ll} p_1 + \min \{\delta \ge 0{:}\, \omega _{i,e} \le \min \limits _{\tau \in [t-p_2+\delta ,t)} \varOmega (\tau )\} &\quad \text{if } \omega _{i,e}\le \varOmega (t-1), \\ p_1+p_2 &\quad \text{if } \omega _{i,e}> \varOmega (t-1). \end{array} \right. \end{aligned}$$
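
This definition may be read as follows: if the previous job (of group e) completes at time t, its second operation occupies \(M_2\) during \([t-p_2, t)\), and the next job (of group i) may start on \(M_1\) at time \(t - p_2 + \delta \), overlapping the previous job, provided the combined requirement \(\omega _{i,e}\) fits in the buffer throughout the overlap \([t-p_2+\delta , t)\); the next job then completes \(W_{i, e}(t)\) time units after t. A minimal sketch of this computation follows; the function name and the list representation of \(\varOmega \) are assumptions used in all sketches below.

```python
def compute_W(omega_pair, t, p1, p2, Omega):
    """W_{i,e}(t) for omega_pair = omega_{pi(i)} + omega_{pi(e)}.

    Assumes t >= p1 + p2, which holds for every value F(...) produced by
    the dynamic programme.  The previous job's second operation runs in
    [t - p2, t); we look for the smallest delta such that both jobs fit
    together during [t - p2 + delta, t).  Runs in O(p2) time.
    """
    if omega_pair > Omega[t - 1]:
        return p1 + p2            # no overlap of the two jobs is possible
    m = 1                         # Omega(t - 1) has just been checked
    while m < p2 and omega_pair <= Omega[t - 1 - m]:
        m += 1                    # extend the feasible suffix of [t - p2, t)
    return p1 + (p2 - m)          # delta = p2 - m
```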

Note that for given i, e and t, the value of \(W_{i, e}(t)\) can be calculated in \(O(p_2)\) time. The values of F for all \((q+1)\)-tuples that are not boundary are computed using the following recursive equation:

$$\begin{aligned} F(n_1, \dots , n_i+1, \dots , n_q, i) = \min _{\{e{:}\, n_e > 0\}} [ F(n_1, \dots , n_q, e) + W_{i, e}(F(n_1, \dots , n_q, e) )]. \end{aligned}$$

The dynamic programming algorithm above constructs an optimal schedule \(\sigma \) in \(O(p_2q^2n^q)\) time, and it only remains to show that (3) holds.

Let \(\sigma ^*\) be an optimal schedule for the problem with the original storage requirements. This schedule can be converted into a schedule \(\eta \) for the problem with the new storage requirements as follows. For each job j such that \(1 \le j \le n-k\), let \(C^2_j(\eta ) = C^2_{j+k}(\sigma ^*)\) and, for each job j such that \(n-k < j\), let

$$\begin{aligned} C^2_j(\eta ) = C_{ max }(\sigma ^*) + (p_1+p_2)(j - n+ k). \end{aligned}$$

Since \(p_1\ge p_2\) and \(p_1n < C_{ max }(\sigma ^*)\), it holds that

$$\begin{aligned} C_{ max }(\sigma ) &\le C_{ max }(\eta ) \le C_{ max }(\sigma ^*) + (p_1+p_2)k \le C_{ max }(\sigma ^*) + 2p_1k \\ &\le C_{ max }(\sigma ^*) + p_1n\varepsilon \le (1 + \varepsilon )C_{ max }(\sigma ^*), \end{aligned}$$

which completes the proof. \(\square \)
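
The dynamic programme of Theorem 4 can be summarised in the following sketch, which reuses compute_W from the sketch above. It is only an illustration under the stated assumptions (\(p_1 \ge p_2\) and \(k \ge 1\), i.e. sufficiently many jobs); it returns the makespan computed for the rounded requirements, which by (3) is within a factor \(1 + \varepsilon \) of the optimum.

```python
import math
from functools import lru_cache

def ptas_makespan(omega, p1, p2, Omega, eps):
    """Dynamic programme over (q+1)-tuples from the proof of Theorem 4."""
    n = len(omega)
    w = sorted(omega)                      # omega_1 <= ... <= omega_n
    k = math.floor(n * eps / 2)
    q = math.ceil(2 / eps)
    # rep[e]: rounded requirement alpha_j of every job in group e
    # (groups 0..q-2 contain k jobs each, group q-1 the remaining jobs).
    rep = [w[k * (e + 1) - 1] for e in range(q - 1)] + [w[n - 1]]
    sizes = tuple([k] * (q - 1) + [n - k * (q - 1)])

    @lru_cache(maxsize=None)
    def F(counts, i):
        # Minimal completion time of the jobs described by `counts`,
        # the last completed job belonging to group i (counts[i] >= 1).
        if sum(counts) == 1:
            return p1 + p2                 # boundary (q+1)-tuples
        prev = tuple(c - (e == i) for e, c in enumerate(counts))
        return min(F(prev, e)
                   + compute_W(rep[i] + rep[e], F(prev, e), p1, p2, Omega)
                   for e in range(q) if prev[e] > 0)

    return min(F(sizes, i) for i in range(q) if sizes[i] > 0)
```

There are \(O(qn^q)\) states, each evaluated in \(O(qp_2)\) time, which matches the \(O(p_2q^2n^q)\) bound of the theorem.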

6 Heuristics

In this section, heuristic algorithms are proposed for problem \(F2| storage , \omega _j,\) \(\varOmega (t), p_i|C_{ max }\). As in the previous section, it is assumed for clarity that \(p_1\ge p_2\).

The main difficulty in designing heuristics for the considered problem stems from the frequent changes of the available storage size. Indeed, suppose a partial schedule \(\sigma \) for time interval \([t,t+\delta )\) has been built and, in order to schedule the remaining jobs or to improve some other part of the schedule, \(\sigma \) has to be moved to start at time \(t'\ne t\). The buffer availability pattern in interval \([t',t'+\delta )\) may be completely different from that in the original interval \([t,t+\delta )\), and hence, keeping the job order from \(\sigma \) may require additional idle times due to insufficient storage space. In such a case, the schedule modification is likely to be counterproductive. Therefore, simple heuristics building the schedules from left to right are proposed first, and the remaining algorithms improve these initial schedules by small modifications.

Algorithm LF implements the Largest Fit rule. Each time machine \(M_1\) becomes idle, say at time t, the largest available job which can be executed without violating the buffer limit is started on it. If no such job can be found, machine \(M_1\) remains idle for one unit of time, and the search for a feasible job is repeated at time \(t+1\).
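
A minimal sketch of LF is given below. It assumes \(p_1 \ge p_2\), so that, by Lemma 1, jobs can be placed no-wait and the start time on \(M_1\) determines the whole job; the function name and the defensive padding of \(\varOmega \) beyond the horizon are illustrative assumptions.

```python
def lf_schedule(omega, p1, p2, Omega):
    """Largest Fit: returns {job: start time on M1} (no-wait placement)."""
    n = len(omega)
    horizon = n * (p1 + p2) + p1 + p2          # defensive padding at the end
    cap = list(Omega) + [max(Omega)] * (horizon - len(Omega))
    used = [0] * horizon                       # buffer units seized in [t, t+1)
    pending = sorted(range(n), key=lambda j: omega[j], reverse=True)
    starts, t = {}, 0
    while pending:
        # Largest job whose buffer share fits during [t, t + p1 + p2).
        fit = next((j for j in pending
                    if all(used[tau] + omega[j] <= cap[tau]
                           for tau in range(t, t + p1 + p2))), None)
        if fit is None:
            t += 1                             # M1 idles for one time unit
            continue
        pending.remove(fit)
        starts[fit] = t
        for tau in range(t, t + p1 + p2):      # the job seizes its buffer share
            used[tau] += omega[fit]
        t += p1                                # M1 is busy during [t, t + p1)
    return starts
```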

Using algorithm LF may be disadvantageous if it is often the case that the buffer is large enough to hold two small jobs, but no job fits together with the largest available job. This motivates designing heuristic LFAhead, which also uses the largest fit rule, but additionally looks one job ahead in an attempt to avoid idle time on the first machine. Precisely, the job to be scheduled on \(M_1\) at time t is the largest feasible job such that another available job can be started immediately after it, at time \(t+p_1\). If no such job can be found, then the largest job that fits in the storage space is chosen. If no job can be started at time t, the algorithm moves to time \(t+1\).

Algorithm Rnd constructs a random job sequence. The jobs are started without unnecessary delay, as soon as the previous job completes on the first machine and a sufficient amount of storage space is available. This algorithm is used mainly to verify if the remaining heuristics perform well in comparison to what can be achieved without effort.

The next group of heuristics comprises the local search algorithms LFLocal, LFAheadLocal and RndLocal. Each of them starts with an initial schedule delivered by the corresponding heuristic described above (LF, LFAhead or Rnd). Then, for each pair of jobs, it is checked whether swapping their positions leads to an improved schedule. Let \(S=[S_{[1]},S_{[2]},\dots ,S_{[n]}]\) and \(S'=[S'_{[1]},S'_{[2]},\dots ,S'_{[n]}]\) be the increasing sequences of job starting times on machine \(M_1\) in the current schedule \(\sigma \) and in a new schedule \(\sigma '\), respectively. Schedule \(\sigma '\) is considered better than \(\sigma \) if its makespan is smaller than that of \(\sigma \), or if the two makespans are equal and sequence \(S'\) is lexicographically smaller than S. The swap that results in the best schedule (if any) is executed, and the search continues until no further improvement is possible.

Furthermore, variable neighborhood search (VNS) algorithms are proposed. Variable neighborhood search is a metaheuristic consisting in systematically changing the neighborhoods used during local search. In the proposed VNS, the following three neighborhoods are used. For a given schedule \(\sigma \), neighborhood \(N_1(\sigma )\) contains all schedules obtained from \(\sigma \) by swapping a single pair of jobs. Neighborhood \(N_2(\sigma )\) consists of all schedules obtained from \(\sigma \) by moving a job from position i to some other position j, for any \(i,j\in \{1,\dots ,n\}\), \(i\ne j\). Neighborhood \(N_3(\sigma )\) contains all schedules obtained from \(\sigma \) by reversing a sequence of jobs \(\sigma (i),\dots ,\sigma (j)\), for any pair of positions \(i,j\in \{ 1,\dots ,n\}\), \(i<j\). For a given initial solution, the variable neighborhood search starts with setting the current neighborhood number to \(k=1\). In each step of the algorithm, if local search with respect to neighborhood \(N_k\) leads to a schedule improvement, the current schedule is updated and k is changed to 1. In the opposite case, k is increased by 1. The search continues until reaching \(k=4\), which means that the current solution could not be improved. Corresponding to the choice of the initial schedule, the proposed variable neighborhood search algorithms are denoted by LFVNS, LFAheadVNS and RndVNS.
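
The three neighborhoods and the neighborhood-switching loop can be sketched as follows. The decoder makespan, which places a given job sequence no-wait and as early as possible (as in Rnd), and the omission of the lexicographic tie-breaking rule are simplifying assumptions.

```python
import itertools

def makespan(order, omega, p1, p2, Omega):
    """Earliest no-wait placement of the sequence `order` (p1 >= p2):
    only consecutive jobs can overlap, and the pair must fit in the
    buffer throughout the overlap [s, prev_s + p1 + p2).  The delay
    loop never probes Omega beyond T = n * (p1 + p2)."""
    prev_j = prev_s = None
    t = 0                                      # earliest possible M1 start
    for j in order:
        s = t
        if prev_j is not None:
            pair = omega[j] + omega[prev_j]
            while any(pair > Omega[tau]
                      for tau in range(s, prev_s + p1 + p2)):
                s += 1                         # delay until the pair fits
        prev_j, prev_s, t = j, s, s + p1
    return prev_s + p1 + p2

def neighborhoods(seq):
    """N1: pair swaps, N2: single-job moves, N3: sequence reversals."""
    def n1():
        for i, j in itertools.combinations(range(len(seq)), 2):
            s = list(seq); s[i], s[j] = s[j], s[i]; yield tuple(s)
    def n2():
        for i in range(len(seq)):
            for j in range(len(seq)):
                if i != j:
                    s = list(seq); s.insert(j, s.pop(i)); yield tuple(s)
    def n3():
        for i, j in itertools.combinations(range(len(seq)), 2):
            s = list(seq); s[i:j + 1] = reversed(s[i:j + 1]); yield tuple(s)
    return [n1, n2, n3]

def vns(seq, cost):
    """Return to the first neighborhood after every improving move."""
    k = 0
    while k < 3:
        best = min(neighborhoods(seq)[k](), key=cost)
        if cost(best) < cost(seq):
            seq, k = best, 0
        else:
            k += 1
    return seq
```

Here cost is the decoder applied to a sequence, e.g. `lambda s: makespan(s, omega, p1, p2, Omega)`.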

Finally, the proposed variable neighborhood search is embedded in the iterated search framework, using two tunable parameters \(f\in (0,0.5)\) and \(r>0\). Given an initial schedule, the variable neighborhood search procedure described above is applied. In the obtained schedule, \(\lceil fn \rceil \) randomly chosen pairs of jobs are swapped, and then the variable neighborhood search is run again. This procedure is repeated r times. The best solution found is recorded and returned as the final result. Reflecting the way the initial schedule is constructed, the iterated variable neighborhood search (IVNS) algorithms are called LFIVNS, LFAheadIVNS and RndIVNS.
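
The iterated variant then wraps vns in a shake-and-restart loop. In the sketch below, the defaults \(f=0.05\) and \(r=500\) anticipate the tuning in Sect. 7, and the uniform random choice of the swapped pairs is an assumption about the unspecified details of the shake step.

```python
import math
import random

def ivns(seq, cost, f=0.05, r=500):
    """Iterated VNS: r rounds of shaking ceil(f * n) random pairs."""
    best = seq = vns(tuple(seq), cost)
    for _ in range(r):
        s = list(seq)
        for _ in range(math.ceil(f * len(s))):
            i, j = random.sample(range(len(s)), 2)
            s[i], s[j] = s[j], s[i]            # shake: swap a random pair
        seq = vns(tuple(s), cost)
        if cost(seq) < cost(best):
            best = seq                         # keep the best schedule seen
    return best
```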

In order to assess the quality of the results obtained by the heuristics, an integer linear program delivering optimal solutions is proposed. Recall that \(T= n(p_1+p_2)\) is an upper bound on the minimum schedule length \(C_{ max }\). For each \(j=1,\dots ,n\) and \(t=0,\dots ,T-1\), define binary variables \(x_{j,t}\) such that \(x_{j,t}=1\) if job j starts on machine \(M_1\) at time t, and \(x_{j,t}=0\) in the opposite case. The minimum schedule length is computed as follows.

$$\begin{aligned}&\text {minimise}\quad C_{ max } \end{aligned}$$
(4)
$$\begin{aligned}&\sum _{j=1}^n \sum _{\tau =\max \{0,t+1-p_1-p_2\}}^{t} \omega _j x_{j,\tau } \le \varOmega (t) \quad \text {for }\ t=0,\dots ,T-1 \end{aligned}$$
(5)
$$\begin{aligned}&\sum _{j=1}^n \sum _{\tau =t}^{t+p_1-1} x_{j,\tau } \le 1 \quad \text {for }\ t=0,\dots ,T-p_1 \end{aligned}$$
(6)
$$\begin{aligned}&\sum _{t=0}^{T-1} x_{j,t} =1 \quad \text {for }\ j=1,\dots ,n \end{aligned}$$
(7)
$$\begin{aligned}&\sum _{t=0}^{T-1} t x_{j,t} +p_1+p_2 \le C_{ max } \quad \text {for }\ j=1,\dots ,n \end{aligned}$$
(8)
$$\begin{aligned}&x_{j,t}\in \{0,1\} \quad \text {for }\ j=1,\dots ,n, \quad t=0,\dots ,T-1 \end{aligned}$$
(9)

In the above program, constraints (5) guarantee that the jobs executed in the interval \([t,t+1)\) fit in the available buffer. Inequalities (6) ensure that at most one job starts on machine \(M_1\) in each interval \([t,t+p_1)\). Since \(p_1\ge p_2\), this means that no two jobs are executed on the same machine at the same time. Each job starts exactly once by (7). Inequalities (8) guarantee that all jobs are completed by time \(C_{ max }\).
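
Since the experiments in Sect. 7 use Gurobi, the program (4)-(9) can be set up, for example, with gurobipy as sketched below; the model and variable names are illustrative.

```python
import gurobipy as gp
from gurobipy import GRB

def solve_ilp(omega, p1, p2, Omega):
    """ILP (4)-(9) for F2|storage, omega_j, Omega(t), p_i|Cmax (p1 >= p2)."""
    n = len(omega)
    T = n * (p1 + p2)
    m = gp.Model("F2-storage")
    x = m.addVars(n, T, vtype=GRB.BINARY, name="x")
    cmax = m.addVar(name="Cmax")
    m.setObjective(cmax, GRB.MINIMIZE)                              # (4)
    for t in range(T):                                              # (5)
        m.addConstr(gp.quicksum(
            omega[j] * x[j, tau] for j in range(n)
            for tau in range(max(0, t + 1 - p1 - p2), t + 1)) <= Omega[t])
    for t in range(T - p1 + 1):                                     # (6)
        m.addConstr(gp.quicksum(x[j, tau] for j in range(n)
                                for tau in range(t, t + p1)) <= 1)
    for j in range(n):
        m.addConstr(x.sum(j, "*") == 1)                             # (7)
        m.addConstr(gp.quicksum(t * x[j, t] for t in range(T))
                    + p1 + p2 <= cmax)                              # (8)
    m.optimize()
    return m.ObjVal
```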

7 Computational experiments

The quality of the delivered solutions and the running times of the proposed heuristics were tested in a series of computational experiments. The algorithms were implemented in C++ and run on an Intel Core i7-7820HK CPU @ 2.90 GHz with 32 GB RAM. Integer linear programs were solved using Gurobi.

The number of jobs in the generated instances was \(n\in \{30,100\}\). In the tests where the two operations of each job had the same duration, their execution times were \(p_1=p_2\in \{1,3,5\}\). In the instances with different processing times of the two operations, values \((p_1,p_2)\in \{(2,1),(4,2),(5,2)\}\) were used. The buffer requirements \(\omega _j\) were selected uniformly at random from the range [10n, 20n]. Available buffer sizes \(\varOmega (t)\) were chosen randomly from the range [20n, 32n], for each \(t=0,1,\dots ,T-1\) independently. This choice of ranges was based on the experimental analysis presented in [2]. In addition to the tests with random buffer availability, instances with available storage space increasing or decreasing with time were constructed by sorting the values \(\varOmega (t)\). These three groups of tests will be referred to as Rnd, Inc and Dec instances, correspondingly. For each analysed setting, 30 tests were generated and solved.
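
For reproducibility, the generation procedure just described can be sketched as follows (the function name is illustrative):

```python
import random

def generate_instance(n, p1, p2, kind="Rnd"):
    """Random instance as described above: omega_j from [10n, 20n],
    Omega(t) from [20n, 32n]; `kind` is one of "Rnd", "Inc", "Dec"."""
    omega = [random.randint(10 * n, 20 * n) for _ in range(n)]
    T = n * (p1 + p2)
    Omega = [random.randint(20 * n, 32 * n) for _ in range(T)]
    if kind == "Inc":
        Omega.sort()                    # storage space increasing with time
    elif kind == "Dec":
        Omega.sort(reverse=True)        # storage space decreasing with time
    return omega, Omega
```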

Many test instances could not be solved by Gurobi to optimality in reasonable time. Therefore, a 1 h time limit was imposed. Since the optimal solutions were not always known, the quality of schedules was measured by the relative percentage error from the lower bound \( LB =\max \{np_1 +p_2, LB _G\}\), where \( LB _G\) was the lower bound obtained by Gurobi during its computations.

Fig. 2 RndIVNS tuning, \(n=100\), \(p_1=p_2=5\): a Rnd instances, b Dec instances

The proposed iterated variable neighborhood search algorithms have two parameters, f and r, which have to be tuned. In order to avoid bias due to starting at a specific schedule, algorithm RndIVNS was used for choosing the values of these parameters. Figure 2 presents the results delivered by this algorithm with \(f\in \{ 0.02,0.05,0.01,0.2\}\) and \(r\le 500\) for Rnd and Dec instances with \(n=100\) and \(p_1=p_2=5\). Similar results were obtained for other analysed combinations of instance type and values \(p_1\) and \(p_2\). Naturally, for any fixed f, the quality of the obtained schedules improves with increasing r. Thus, \(r=500\) was selected, as it seems a good compromise between quality and time. The value f that results in the shortest schedules depends on the number of iterations r. For \(r=500\), the best results were obtained for \(f=0.05\) in most settings. In the cases when some other value of f was better, the difference between the obtained errors was insignificant (see Fig. 2b). Therefore, \(f=0.05\) was chosen.

Table 1 Average percentage distance from \( LB \) and running time for \(n=30\), \(p_1=p_2\), Rnd instances

Table 1 presents the results delivered by the respective algorithms for Rnd instances with \(n=30\) and \(p_1=p_2\). The algorithms are divided into groups according to their complexity and, in consequence, running time: from the fastest simple heuristics to the slowest integer linear programming (ILP). All integer programs were solved within the time limit for \(p_i\in \{ 1,3\}\), but for \(p_i=5\), optimal solutions could not be obtained for some tests. Still, ILP delivers the best solutions for the analysed instances. In the group of simple heuristics, the best results are achieved by LFAhead, followed by LF. Both these algorithms deliver much better results than Rnd. Algorithms LFLocal and LFAheadLocal do not gain much in comparison to their initial schedules. Local search brings significant improvement only when it starts from a random solution. The quality of schedules delivered by RndLocal is similar to those of LFLocal and LFAheadLocal. Using variable neighborhood search gives better results, but the difference between local search and variable neighborhood search algorithms is not very large. All iterated variable neighborhood search algorithms obtain very good schedules, with average errors below 3%. Thus, the random shake step in IVNS is indeed helpful in moving from local minima to substantially better solutions. The choice of the initial schedule does not seem very important for IVNS algorithms, as all variants deliver solutions of similar quality in similar time. It is interesting that, although the instance size increases with growing \(p_i\) because the time horizon T gets larger, the quality of schedules delivered by the heuristic algorithms improves with increasing \(p_i\).

Table 2 Average percentage distance from \( LB \) and running time for \(n=30\), \(p_1\ne p_2\), Rnd instances

The results obtained for Rnd instances with 30 jobs and \(p_1\ne p_2\) are shown in Table 2. The heuristic algorithms achieve here smaller errors than in the case of \(p_1=p_2\). Thus, it seems that instances with \(p_1\ne p_2\) are easier to solve. Indeed, if \(p_2<p_1\), then the second machine is idle for at least \(p_1-p_2\) time units after executing each job. As a single job always fits in the storage space, the buffer limit is automatically observed in such periods, and hence, it may be easier to construct a good solution. Moreover, the relative distance between the upper bound \(n(p_1+p_2)\) on the schedule length and the trivial lower bound \(np_1+p_2\) is smaller when the difference between \(p_1\) and \(p_2\) is larger. The relationships between individual algorithms are similar to the case of \(p_1=p_2\). The average errors of all IVNS algorithms are again smaller than 3%.

Table 3 Average percentage distance from \( LB \) and running time for \(n=30\), \(p_1=p_2\), Inc instances

Table 3 contains the results obtained for Inc instances with 30 jobs and \(p_1=p_2\). For such tests, the best simple heuristic is LF, which delivers solutions below 6% from the optimum on average. The schedules constructed by LFAhead are much worse, although not as bad as random solutions. The local search and variable neighborhood search algorithms perform best when starting from an LF schedule. Using a random initial solution leads to substantially better results than starting with an LFAhead schedule. All variants of iterated variable neighborhood search obtain similar results and achieve very high quality, as their average errors are below 1% in almost all cases.

Table 4 Average percentage distance from \( LB \) and running time for \(n=30\), \(p_1\ne p_2\), Inc instances

Table 4 presents the results for Inc instances with \(n=30\) and \(p_1\ne p_2\). Similarly to the case of Rnd tests, the reported errors are smaller for \(p_1\ne p_2\) than for \(p_1=p_2\). Apart from this, no significant differences between these two groups of tests with an increasing \(\varOmega (t)\) function can be seen.

Table 5 Average percentage distance from \( LB \) and running time for \(n=30\), \(p_1=p_2\), Dec instances

The results obtained for Dec instances with \(n=30\), \(p_1=p_2\) are shown in Table 5. This time, the best results among the simple heuristics are delivered by LFAhead, while the schedules produced by LF are much worse. Moreover, LFLocal delivers significantly worse results than RndLocal. However, the distance between VNS variants is small. The quality of results produced by LFAheadLocal is the same as that of LFAhead. Thus, it seems that LFAhead schedules are local optima with respect to the neighborhood used in the proposed local search procedure. Using additional neighborhoods in LFAheadVNS gives only very small improvements. In consequence, for \(p_i<5\), LFAheadVNS is outperformed by LFVNS and RndVNS. All IVNS algorithms deliver again similar results, with average errors below 5%. The results produced by all heuristics are generally worse than for Rnd and Inc instances with corresponding \(p_i\) values. Moreover, the running time of ILP is the longest for Dec tests. Thus, the instances with a decreasing \(\varOmega (t)\) function seem the most difficult to solve.

Table 6 Average percentage distance from \( LB \) and running time for \(n=30\), \(p_1\ne p_2\), Dec instances

Table 6 presents the results obtained for Dec instances with 30 jobs and \(p_1\ne p_2\). Once again, it is confirmed that it is easier to produce good schedules when the execution times of the two operations of a job are different. The obtained errors are about two times smaller than for Dec instances with \(p_1=p_2\). In particular, the average errors of IVNS algorithms are less than 2.5%. The simple heuristic LFAhead delivers good schedules with average errors smaller than 6%, which are not improved by local search or variable neighborhood search.

Table 7 Average percentage distance from \( LB \) and running time for \(n=100\), Rnd instances

The experimental results for tests with \(n=100\) will be presented only for values \((p_1,p_2)\in \{(1,1),(4,2),(5,5)\}\), representing short and long jobs, as well as equal and different execution times of the two operations of a job. The results obtained for Rnd instances are shown in Table 7. For \(p_1=p_2=1\), the results of most heuristic algorithms are slightly worse than for the corresponding instances with 30 jobs. The quality loss is the largest for Rnd and iterated variable neighborhood search algorithms. All variants of IVNS produce schedules around 6% longer than the lower bound. The errors obtained for \(p_i>1\) are much larger than for the corresponding instances with \(n=30\). Recall that these errors are computed with respect to the lower bound \( LB \). For instances with 100 jobs and \(p_i>1\), Gurobi was not able to find good lower bounds within the imposed time limit, and hence, the trivial lower bound \(np_1 +p_2\) had to be used. Thus, the distance between the lower bound and the actual optimum is very large, and this is the key factor determining the reported error values. For \(p_i>1\), IVNS algorithms reach solution quality similar to that of ILP, in a much shorter time. No instances with \(n=100\) and \(p_i>1\) were solved to the optimum by ILP. Note that although the time for solving the integer linear program was limited to 1 h, the algorithm running time also includes building the program and retrieving the schedule found. Hence, the average ILP times reported in Table 7 are slightly greater than 1 h when \(p_i>1\).

Table 8 Average percentage distance from \( LB \) and running time for \(n=100\), Inc instances

The results obtained for Inc instances with 100 jobs are shown in Table 8. In this group, no tests were solved to optimality by ILP, even for \(p_1=p_2=1\). Thus, it seems that instances with increasing storage availability are harder to solve by ILP than those with random buffer changes. This is also confirmed by the fact that in the group of tests with \(n=30\), the average ILP execution time was longer for Inc instances than for the corresponding Rnd instances (see Tables 1, 2, 3, 4). For \(p_1=p_2=1\), the average error of ILP solutions is approximately 4%, and the IVNS schedules are about 5% longer than the lower bound. For larger \(p_i\), the reported errors strongly increase, reflecting growing distance between the lower bounds found and the optimal solutions. All variants of iterated variable neighborhood search significantly outperform ILP when \(p_i>1\), achieving both better solution quality and shorter execution time.

Table 9 Average percentage distance from \( LB \) and running time for \(n=100\), Dec instances

Table 9 presents the results obtained for Dec instances with \(n=100\). No tests in this group were solved to the optimum by ILP. The average ILP error reaches 15% already for instances with \(p_1=p_2=1\), which suggests that the case of decreasing buffer availability is the hardest to solve for this algorithm. This conforms with the earlier observation that the Dec tests seem the most difficult in the group of instances with \(n=30\). The iterated variable neighborhood search algorithms achieve much better results than ILP for \(p_i>1\). It may seem surprising that the errors obtained by VNS, IVNS and ILP algorithms for Dec instances with \(p_1,p_2>1\) are smaller than for the corresponding Rnd or Inc instances. However, this can be explained by the fact that in Dec instances, the available buffer is large at the beginning of the scheduling period, and hence, the optimum schedules are shorter than in the case of Rnd or Inc instances. Hence, the distance between the optimum and the lower bound \(np_1+p_2\), which is the main factor influencing the reported errors, is smaller for Dec instances. Obtaining smaller errors for \(p_1=4\), \(p_2=2\) than for \(p_1=p_2=1\) confirms the earlier observation that tests with \(p_1\ne p_2\) are easier to solve than those with \(p_1=p_2\). This effect is not visible for Rnd and Inc instances with \(n=100\) and \(p_1=4\), \(p_2=2\), because the distance between the lower bound and the optimal solution increases fast with growing job execution times when the \(\varOmega (t)\) function is not decreasing.

The results of the performed computational experiments can be summarised as follows. Optimal solutions can be found using ILP, but at a high computational cost, which seems the largest in the case of decreasing \(\varOmega (t)\) and the smallest in the case of random buffer changes. If a schedule has to be found very fast, one of the algorithms LF and LFAhead can be used: LF is suitable for an increasing buffer size, whereas LFAhead should be used when the storage space function is decreasing or random. For \(n=30\), choosing the right simple heuristic usually leads to solutions at most 10% from the lower bound. Better results can be obtained using local search, and variable neighborhood search provides further improvements. However, for some types of instances, choosing a good initial schedule may be necessary to obtain high quality results using these algorithms. Among the proposed heuristics, the best results are delivered by iterated variable neighborhood search, in reasonable time. Moreover, the choice of the initial schedule has a very small impact on the performance of IVNS, and hence, no additional knowledge about the instance is required to achieve high quality solutions. For the largest analysed instances, iterated variable neighborhood search produces better results than ILP with a 1 h time limit.

8 Conclusions

This paper analyses makespan minimisation in two-machine flow shops with job-dependent storage requirements and storage availability changing in time. It shows that the problem is strongly NP-hard even in the case when the duration of each operation is one unit of time. The existence of optimal permutation no-wait schedules is proved for the case when the smallest processing time on the first-stage machine is greater than or equal to the largest processing time on the second-stage machine, and for the case when the smallest processing time on the second-stage machine is greater than or equal to the largest processing time on the first-stage machine. For the case where all jobs have the same processing time on the first machine and the same processing time on the second machine, the paper presents a polynomial-time approximation scheme and several heuristic algorithms. Computational experiments show that iterated variable neighborhood search algorithms are a good tool for solving the considered problem. The average relative errors obtained by all variants of IVNS for instances with \(n=30\) are below 5%. For the most difficult instances with 100 jobs, the performance of IVNS is hard to assess because good lower bounds could not be found, but the delivered solutions are close to or better than those produced by ILP. Future research should include the worst-case analysis of the approximation algorithms. An interesting open question is whether problem \(F2| storage , \omega _j, \varOmega (t), p_{i,j}=1|C_{ max }\) remains strongly NP-hard when \(\varOmega (t)\) is a monotonic function.