Cyclic lot-sizing problems with sequencing costs

We study a single-machine lot-sizing problem, where n types of products need to be scheduled on the machine. Each product is associated with a constant demand rate, maximum production rate and inventory costs per time unit. Every time the machine switches production between products, sequencing costs are incurred. These sequencing costs depend both on the product the machine just produced and on the product the machine is about to produce. The goal is to find a cyclic schedule minimizing total average costs, subject to the condition that all demands are satisfied. We establish the complexity of the problem, and we prove a number of structural properties largely characterizing optimal solutions. Moreover, we present two algorithms approximating the optimal schedules by augmenting the problem input. Due to the high-multiplicity setting, even trivial cases of the corresponding conventional counterparts become highly non-trivial with respect to the output sizes and computational complexity, even without sequencing costs. In particular, the length of an optimal solution can be exponential in the input size of the problem. Nevertheless, our approximation algorithms produce schedules of polynomial length whose quality is close to that of the optimal schedules of exponential length.



Introduction
In the current competitive economy, companies need to be aware of multiple objectives such as decreasing costs and enhancing customer service. Among the core activities of many companies and supply chains are mechanisms to match supply with demand, to prevent stock-outs and to cut back unnecessary overhead costs. Production companies are required to conduct extensive research into cost reduction to remain competitive within the market. Consequently, a lot of interest has been shown in problems within the area of operations management. This paper is motivated by a real-life problem: a multinational textile company posed the problem of optimizing the production schedule of their lycra production operations. The company employed a single machine to produce synthetic fibers of a few different types of thickness, subject to (extremely large) fixed daily output rates. In the setting we have dealt with, there were three to five types of lycra thickness. Every switch from one thickness type to another is associated with a setup of the machine, and corresponding costs occur. The company was interested in finding a cyclic production schedule of minimum cycle length. A similar setting can be encountered in automotive manufacturing, where the cars on the conveyor belt need to be painted. Due to cleansing requirements, changing from one color to another does not only take a fixed amount of setup time, but also an additional amount of time based on the color sequence, e.g., switching from black to yellow is more costly than switching from yellow to green.

Correspondence: Tim Oosterwijk (t.oosterwijk@maastrichtuniversity.nl), Alexander Grigoriev (a.grigoriev@maastrichtuniversity.nl), Vincent J. Kreuzen (vjc.kreuzen@avans.nl); Maastricht University, Maastricht, The Netherlands.
In the aforementioned industrial setting, the problem of finding an optimal cycle in which every product is produced exactly once has been addressed in Wetsels (2012). In the present paper, we generalize this result to cyclic schedules with no restrictions on the number of production periods per product. We arrive at a lot-sizing problem with sequence-dependent setup costs and aim to find a cyclic schedule which minimizes total average costs.
The lot-sizing problem is a well-studied problem in operations management, where one machine needs to produce a set of products to minimize average holding and setup costs. In this problem, the ongoing production can be represented as the repeated scheduling of a single job on the machine, enabling a highly compact encoding of the input. These types of problems are commonly referred to as high-multiplicity scheduling problems. Jobs in the high-multiplicity setting are represented by a single job description with a multiplicity, representing the number of individual jobs to be processed. This differs from the conventional scheduling setting, where every single job, even though identical to many others, is given as a part of the problem input. In this case, the input length of the traditional setting can be exponentially larger than the length of the high-multiplicity input, resulting in exponentially slower performance of algorithms regularly applicable in traditional scheduling. Due to the compact encoding of the input for the problem at hand, the optimal schedule can have superpolynomial length, even for very restricted cases with only one or two products (see Gabay et al. 2014). Consequently, merely finding a polynomially sized certificate for these types of problems can already prove to be a hard task.
It is questionable whether conventional encoding is practical for high-multiplicity problems, and not only from the computational complexity point of view. For many companies, the high-multiplicity encoding is a natural way to provide input from real-world data, especially if thousands of jobs are identical, and it is found in numerous practical applications. In this age of big data, in which processing large amounts of data is becoming ever more important, and in many cases necessary to keep up with competitors, companies are often able to compress large amounts of data into small input sets. Algorithms need to be equipped to cope with these compressed data in such a way that the original problem is tackled, without resorting to excessive processing times to recover the underlying information of the reduced input.
In this paper, we address this issue by incorporating high-multiplicity encoding into an extended version of the aforementioned real-life problem, the lot-sizing problem with sequence-dependent setup costs. In this problem, we have a single machine that is capable of producing a single product at any given time and a set of products that need to be produced. Each product is associated with a demand rate, a maximum production rate and inventory holding costs per unit. The objective is to find a cyclic schedule such that the demand of every product is met, minimizing the average costs per cycle. For any schedule, sequence-dependent setup costs, referred to as sequencing costs, are incurred each time the machine switches production between two different products. Moreover, input is provided under high-multiplicity encoding.
We show NP-hardness of the problem, and we largely characterize optimal solutions by proving a number of structural properties, which will be of great use for the algorithm design. Further, we develop an approximation algorithm which slightly perturbs the input instance to achieve, for a fixed number of products, a polynomial running time and, most importantly, an output schedule of polynomial size. The latter is a reasonable assumption, since in most real-world applications the number of distinct product types is relatively small, while the demand quantities are substantial. The quality of the resulting schedule is relatively close to that of an optimal schedule.

Related work
The earliest research on problems with high-multiplicity encoding dates back to the 1960s; see e.g., Rothkopf (1966) who considers the traveling salesman problem with multiple visits to cities. Madigan (1968) studies a variant of our problem where setup times are introduced, setup costs do not depend on the sequence, and holding costs are product-independent. He proposes an elegant heuristic for the problem and compares it to the results previously published in the literature. Goyal (1973) studies the variant of the problem posed by Madigan where no setup times are involved and solves the problem to optimality for a fixed time horizon. Boctor (1982) extends the model to incorporate product-dependent holding costs and setup times and considers an infinite time horizon. He presents an exact algorithm for the case of two products. For a historic overview of economic lot-sizing problems, we refer to Holmbom and Segerstedt (2014).
Only in 1991 did Hochbaum and Shamir (1991) coin the term high multiplicity and underline the added complexity of such encodings. They study single-machine high-multiplicity scheduling problems with different objective functions and construct algorithms that are strongly polynomial in the number of types of jobs. At the same time, Narro Lopez and Kingsman (1991) discuss basic solution approaches to high-multiplicity scheduling problems and assess their quality and use in practice.
Most papers on high-multiplicity scheduling consider discrete variants, in which time and/or quantities are discretized into units. There has also been some work considering the continuous setting, in which production can start and stop at any time, e.g., with fluids. Bertsimas et al. (2003) consider the high-multiplicity job shop problem without sequencing costs and use this continuous setting as a relaxation for the original discrete job shop problem. They round an optimal solution for the fluid problem to an asymptotically optimal solution for the discrete problem and provide some computational experiments. In another work on the continuous setting, Haase (1996) discusses a problem very closely related to ours, where production rates are fixed. He proposes a local search heuristic and evaluates it by comparing it to optimal solutions for small instances. Haase and Kimms (2000) consider the same problem and, by making additional assumptions on the input instances, solve the problem to optimality. They present a mixed integer programming formulation for their model and a fast enumeration scheme, which they evaluate by a computational study.
Incorporating sequencing costs substantially adds complexity, akin to the traveling salesman problem. The techniques we use in this paper are closely related to the techniques used in classical single-multiplicity scheduling. For instance, Clifford and Posner (2000) provide lower bounds and use these to develop heuristics for minimizing tardiness. They extend the problem to parallel, uniform and unrelated machines in Clifford and Posner (2001), where their objective is to minimize the makespan or the sum of completion times in either the preemptive or the non-preemptive variant of the problem. They prove NP-hardness, develop polynomial time and pseudopolynomial time algorithms for special cases, and present heuristics. Filippi and Romanin-Jacur (2009) continue their work and present a two-stage approach, in which they first fix most jobs in partial schedules and then solve the residual problem. Brauner et al. (2005) provide a detailed framework for the complexity analysis of high-multiplicity scheduling problems. We refer the reader to this paper for an excellent survey of related work in this field. They extend their framework in Brauner et al. (2007).

The model
We model the general problem for multiple products as follows. We have a single machine that can produce a single type of product at any given time, and we are given a set of products J = {1, . . . , n}. For each product i ∈ J, let p_i be its maximum production rate, i.e., the maximum number of units produced per time unit. Similarly, let d_i be its demand rate and h_i its holding costs per time unit. Furthermore, we are given sequencing costs s_{i,j} that need to be paid when the machine switches from producing product i to producing product j. The problem we consider is to find an optimal cyclic schedule S* that minimizes the average costs per unit of time, c̄(S*). Note that for each product i, the rates d_i and p_i and costs h_i are assumed to be constant over time and positive. Observe that the input is very compact: if m is the largest number in the input, then the input size is O(n log m), where n is typically a small number, or even a constant.
We distinguish two variants of the problem: the continuous case, in which the machine can switch production at any time; and the discrete case, in which the machine can switch production only at the end of a fixed unit of time (e.g., a day) and produces some product i at a single rate r_i ≤ p_i during each unit of time. Herewith, we assume that production is done at the beginning of the period and demand is satisfied at the end. We denote by LSP(A,n), with A ∈ {C, D} and n ∈ N, the lot-sizing problem of scheduling n products in the continuous or discrete setting, respectively. Let π_i^{[a,b)} denote the produced amount of product i during time interval [a, b).
Let x_i^t be an indicator function denoting whether product i is produced during time interval [t, t + 1). Let q_i^t denote the stock level for product i at time t; we explicitly refer to the stock of product i at time t in a schedule S as q_i^t(S). Formally, we arrive at the following problem.
Input: Let A ∈ {C, D}. A set of products J = {1, . . . , n} is given, and for each product i ∈ J, a demand rate d_i ≥ 1, a maximum production rate p_i ≥ 1, and inventory holding costs h_i ≥ 1. Sequencing costs s_{i,j} ≥ 1 are given for every pair of products.

Task: Find a cyclic schedule S for variant A which minimizes the average costs per unit of time, c̄(S).
We represent a cyclic schedule of length ℓ as a sequence ([t_0, t_1)_{i_0}^{r_0}, [t_1, t_2)_{i_1}^{r_1}, . . . , [t_s, t_{s+1})_{i_s}^{r_s}), where r_ϕ ≤ p_{i_ϕ} is the production rate of phase ϕ = 0, . . . , s, i_ϕ is the product produced in that phase, and [t_ϕ, t_{ϕ+1}) is a maximal time interval during which only i_ϕ is produced, at the fixed rate r_ϕ. A maximal sequence of consecutive phases of the same product i ∈ J is called a production period, denoted by [t, t')_i for some t' > t. The complete sequence of phases is called the (cyclic) schedule, and we call a schedule a simple cycle if there is exactly one production period for each product.
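To make these definitions concrete, the following Python sketch evaluates the average costs c̄(S) of a given cyclic schedule. The phase-list representation, the `Product` class and the function name are ours and purely illustrative; holding costs are computed as the area under the piecewise-linear stock trajectories, and sequencing costs are charged at every product change, cyclically.

```python
from dataclasses import dataclass

@dataclass
class Product:
    d: float  # demand rate
    p: float  # maximum production rate
    h: float  # holding costs per unit of stock per time unit

def average_cost(products, seq_costs, phases, q0):
    """Average costs per time unit of a cyclic schedule.
    phases: list of (product index, production rate, phase length) tuples;
    q0: stock level of every product at the start of the cycle."""
    q = list(q0)
    holding = 0.0
    for i, r, length in phases:
        for j, prod in enumerate(products):
            net = (r if j == i else 0.0) - prod.d   # net stock change rate
            q_new = q[j] + net * length
            assert q_new >= -1e-9, "infeasible: stock-out"
            # stock changes linearly within a phase, so its holding
            # costs equal the trapezoid area under the trajectory
            holding += prod.h * (q[j] + q_new) / 2 * length
            q[j] = q_new
    sequencing = sum(
        seq_costs[phases[k][0]][phases[(k + 1) % len(phases)][0]]
        for k in range(len(phases))
        if phases[k][0] != phases[(k + 1) % len(phases)][0]
    )
    total_length = sum(length for _, _, length in phases)
    return (holding + sequencing) / total_length
```

For two identical products with d_i = 1, p_i = 2, h_i = 1 and unit sequencing costs, the simple cycle that produces each product at full rate for one time unit has average costs 2.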

Structural properties of optimal solutions
We now prove some structural properties of optimal schedules of the problem. We show that all variants are NP-hard, even when we restrict ourselves to unit demand rates and unit holding costs. Next, we derive a simple necessary and sufficient condition for the existence of a feasible cyclic schedule. Furthermore, we characterize the form of production for the continuous and discrete cases. Also, we show that there is no idle time in an optimal schedule and that every product has at least one point during the schedule where its stock level is zero. Finally, in the last subsection, we present a lower bound on the objective value for the continuous case and an upper bound on the objective value and the maximum stock level for the discrete case. We use these bounds in the approximation.

Problem complexity
The following lemma follows directly from a reduction from the traveling salesman problem (TSP).

Lemma 1 (Complexity) Both the discrete and the continuous variants of the lot-sizing problem are strongly NP-hard.
Proof We prove NP-hardness for the discrete case by a reduction from the traveling salesman problem (TSP). Consider a TSP instance on cities V with travel costs c_{i,j} and budget B. We construct an instance I of the lot-sizing problem with J = V, s_{i,j} = c_{i,j}, d_i = 1 and p_i = n for all i ∈ J, and product-independent holding costs h_i = h, where h is chosen larger than W_max, with W_min and W_max denoting the minimum and maximum total sequencing costs over all simple cycles. Note that for every feasible schedule S, we have sequencing costs W(S) ≥ W_min. Moreover, for all simple cycles S we have W(S) ≤ W_max. We claim that there exists a TSP tour of length at most B if and only if the corresponding instance of the lot-sizing problem admits a solution of average cost at most hn(n − 1)/2 + B/n.
Clearly, since the total demand and production rates match each other, the total stock level is constant over time. Every simple cycle of length n, using the same order of products, can be realized with average holding costs H̄ = hn(n − 1)/2 and average sequencing costs W_min/n ≤ W̄ ≤ W_max/n. In fact, such a schedule has minimum holding costs.
Let S' be a feasible non-simple cycle of length ℓ' > n with total costs c(S') = H(S') + W(S'). Note that there is a product for which two consecutive production periods are separated by at least n + 1 time units. Hence, at least one additional unit of that product must be kept in stock, and thus H(S') ≥ hℓ'n(n − 1)/2 + hℓ'. Thus, for every simple cycle S of length n, since W(S) ≤ W_max < h, the average costs of S' satisfy c̄(S') ≥ H(S')/ℓ' > c̄(S). Observe that the value of H̄(S) is the same for every simple cycle of length n, and therefore the optimal solution to I is the simple cycle of length n which minimizes W(S).
Let σ be a sequence of visits in the TSP instance with costs B. Producing each product for 1 time unit, in the same order as σ, is a feasible solution for the lot-sizing problem with costs hn(n − 1)/2 + B/n. Conversely, let σ' be a solution for the lot-sizing problem with costs hn(n − 1)/2 + B/n. This solution is a simple cycle, and therefore its production sequence is a tour with costs B. This proves the NP-hardness of the discrete case.
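The instance built in the discrete reduction can be sketched programmatically. The helper name is ours, and the concrete choice of h is illustrative: any value exceeding an upper bound on W_max, the largest sequencing cost of a simple cycle, makes holding costs dominate.

```python
def tsp_to_lotsizing(costs, B):
    """Build the discrete lot-sizing instance of the reduction from a TSP
    cost matrix costs[i][j] and tour budget B (illustrative sketch)."""
    n = len(costs)
    w_max_bound = n * max(max(row) for row in costs)  # upper bound on W_max
    h = w_max_bound + 1                               # h > W_max
    instance = {
        "demands": [1] * n,       # d_i = 1
        "rates": [n] * n,         # p_i = n, so sum_i d_i / p_i = 1 (tight)
        "holding": [h] * n,       # uniform holding costs dominating W_max
        "seq_costs": [row[:] for row in costs],
    }
    # decision threshold on the average costs of a schedule
    threshold = h * n * (n - 1) / 2 + B / n
    return instance, threshold
```

A tour of cost at most B exists if and only if the returned instance admits a schedule of average cost at most the returned threshold.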
We prove the continuous case by a similar reduction from the Metric TSP. For an instance I of the Metric TSP, we let J = V and s_{i,j} = c_{i,j} for all i, j ∈ J. Let d_i = 1, p_i = n and h_i = 1 for all i ∈ J.
Let σ be an optimal solution to I with costs c(σ). Let S be any feasible schedule for the corresponding instance I' of the lot-sizing problem, and let ℓ be the length of the schedule. Let S* be the simple cycle of length ℓ* in which the products are produced in the same order as in σ, with production time ℓ*/n per product.
Since every product needs to be produced at least once in a feasible schedule and the triangle inequality holds for the sequencing costs, S* is optimal with respect to the sequencing costs, i.e., W(S*) ≤ W(S). Note that compared to the discrete case, the continuous case has a complication: we can choose ℓ* arbitrarily small. By construction, every production period in schedule S* consists of one phase of length ℓ*/n where the product is produced at rate p_i = n. Since h_i = 1, the total holding costs for every product i are given as (ℓ*)²(n − 1)/(2n) (cf. Fig. 1). Thus, the total holding costs of S* are H(S*) = (ℓ*)²(n − 1)/2 and the average holding costs are H̄(S*) = ℓ*(n − 1)/2. In particular, since the holding costs decrease with the cycle length, we can choose ℓ* such that H̄(S*) ≤ H̄(S) and c̄(S*) ≤ c̄(S). Thus, the optimal solution to I' is a simple cycle S* using the sequence of σ.
A closely related problem with setup times was addressed in Gallego and Shaw (1997), where they show NP-hardness for multiple special cases of their problem.

Feasibility condition
Observe that d_i/p_i is the fraction of time product i needs to be scheduled on the machine, and therefore Σ_{i∈J} d_i/p_i must be at most 1. The following lemma shows that this condition is also sufficient for the existence of a feasible schedule.

Lemma 2 (Feasibility) A feasible cyclic schedule exists if and only if Σ_{i∈J} d_i/p_i ≤ 1.
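The resulting feasibility test is a one-line machine-load check. A sketch using exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def feasible(demands, rates):
    """A feasible cyclic schedule exists iff the machine load
    sum_i d_i / p_i is at most 1 (checked with exact rationals)."""
    load = sum(Fraction(d, p) for d, p in zip(demands, rates))
    return load <= 1
```

For example, two products with d_i = 1 and p_i = 2 give load exactly 1 (a tight, feasible instance), while a third such product pushes the load to 3/2 and makes the instance infeasible.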

Characterizing optimal production schedules
In this subsection, we prove several properties of the production in continuous and discrete schedules. We start by showing that if there is some idle time in a schedule, we can already start producing the next product at demand rate during the idle time to decrease the holding costs.

Lemma 3 [No idle time, Gabay et al. (2014)] Let S* be an optimal schedule for LSP(C,n) or LSP(D,n), with n ∈ N. Then S* has no idle time.
We now provide a short proof for the claim that in an optimal schedule for the continuous case, at any time the production rate is always larger than or equal to the demand rate of the produced product.
Lemma 4 (Produce at least the demand rate) Let S* be an optimal schedule for LSP(C,n). Then in every phase of S*, the product being produced is produced at a rate of at least its demand rate.

Proof Suppose a schedule S contains a phase in which product i is produced at a rate r < d_i. During this phase the stock of i decreases, so the missing amount must have been produced in an earlier phase. Raising the rate during this phase to d_i and producing correspondingly less in the earlier phase delays production and lowers the stock levels in between, while keeping all stock levels nonnegative. Clearly, the schedule remains feasible and the costs are decreased, and thus S was not optimal.
The next property ensures that the machine produces every product i only at rates d i and p i to minimize holding costs in the continuous case.
Lemma 5 [Two-phase production, Gabay et al. (2014)] Consider LSP(C,n) for any n ≥ 2. There is an optimal cycle S* such that for every product i ∈ J, every production period of i in S* consists of at most two phases. For every production period, in the first phase the machine produces i at a rate of d_i, and during the second (non-empty) phase, i is produced at a rate of p_i.
Note that in a tight schedule, i.e., when Σ_{i∈J} d_i/p_i = 1, the machine needs to produce continuously at maximum speed in order to meet the demand of each product. Therefore, in an optimal schedule S for a tight instance of the problem, each production period consists of a single phase in which product i is produced at rate p_i. Furthermore, the proof of Lemma 5 also shows that in an optimal schedule for LSP(C,n), for each phase [t, t')_i^r, we have that r = d_i or r = p_i. Following the same reasoning as in the previous two lemmata, we can achieve a similar result for the discrete case of the problem and prove that in an optimal schedule, production periods consist of at most four phases.
Lemma 6 (Four-phase production) Consider LSP(D,n) for any n ≥ 2. There is an optimal cycle S* such that for every product i ∈ J, every production period of i in S* consists of at most four phases. For every production period, in the first phase the machine produces i at a rate r_1 < d_i, and this phase has length at most 1. During the second phase, i is produced at a rate of d_i. During the third phase, i is produced at a rate d_i < r_2 < p_i, and this phase again has length at most 1. Finally, during the fourth phase, i is produced at a rate of p_i. Phases can be empty, but the first and third phase cannot occur consecutively.
Proof We prove by contradiction. First, following arguments similar to those in the proofs of the previous two lemmata, the phases within a production period can be ordered such that for every pair of phases j' > j we have r_{j'} > r_j, in order to minimize costs while retaining a feasible schedule. To see this, note that a swap similar to the swap in the proof of Lemma 4 yields lower holding costs, as it is always favorable to produce demand at the latest possible time.
Suppose we have an optimal schedule S with two consecutive phases [t_j, t_{j+1})_i^{r_j} and [t_{j+1}, t_{j+2})_i^{r_{j+1}}. By the definition of a phase, r_j ≠ r_{j+1}. Since S is optimal, 0 < r_j < r_{j+1} ≤ p_i must hold. Clearly, if t_{j+2} = t_{j+1} + 1 = t_j + 2, the lemma holds. Otherwise, we construct a new schedule S*, starting the construction by initializing S* := S. We split the two phases in S* into five new phases as follows: we first deplete the stock by q• and consecutively increase the stock by q*, where these values depend on the case distinction below. We introduce the indicator function f_N(x) = ⌈x⌉ − ⌊x⌋, which takes on the value 1 if x ∉ N and 0 otherwise. We refer the reader to Fig. 2 for a depiction of the new set of phases: an optimal production period of schedule S* for LSP(C,n) (where the phases [t_1, t_2)_i^{r_1} and [t_3, t_4)_i^{r_2} must be empty) and for LSP(D,n), with n ≥ 2.
Firstly, suppose d_i ≤ r_j < r_{j+1}. Now, let q* = q_i^{t_{j+2}} − q_i^{t_j} and q• = 0, consequently only producing stock, which results in a production period of at most three phases.
Secondly, suppose r_j < r_{j+1} ≤ d_i. Now, let q• = q_i^{t_j} − q_i^{t_{j+2}} and q* = 0, consequently only depleting stock, which results in a production period of at most three phases.
Lastly, suppose r_j < d_i < r_{j+1}. Now, let q• = q_i^{t_j} and q* = q_i^{t_{j+2}}, consequently first depleting and consecutively producing stock, which results in a production period of at most four phases.
If completely depleting and consecutively producing the stock takes longer than the production period, we get t_2 > t_3. In this case, denote by q' the total amount of stock produced in this production period. Then, let t_4 = t_{j+2} − ⌊q'/p_i⌋ and t_1 = t_2 = t_3 = t_{j+2} − ⌈q'/p_i⌉ and r_2 = d_i + q' − p_i⌊q'/p_i⌋, resulting in a production period of at most three phases.
Clearly, in all cases S* is feasible. If S* differs from S, then H(S*) < H(S), and thus S is not optimal. Note that the phase [t_j, t_1)_i^0 is idle and can be removed as in the proof of Lemma 3, by extending or introducing demand-rate production of some other product and thereby delaying its stock production, leaving a production period of four phases and proving the lemma.
Note that the proof of Lemma 6 also shows that in an optimal schedule for LSP(D,n), for each phase [t, t')_i^r with t' > t + 1, we have that r = d_i or r = p_i.
We now show that in the continuous case, the machine produces product i at rate d i only if the stock for i is empty.

Lemma 7 (Level production for continuous case) In an optimal schedule S* for an instance of LSP(C,n), for any product i ∈ J there exists a non-empty phase [t_j, t_{j+1})_i^{d_i} only if q_i^{t_j} = 0.
Proof We prove by contradiction. Suppose we have an optimal schedule S with a phase [t_j, t_{j+1})_i^{d_i} with t_{j+1} > t_j and q_i^{t_j} > 0. Again, we construct a new schedule S* starting with S* := S. We split [t_j, t_{j+1})_i^{d_i} in S* into three new phases [t_j, t_1)_i^0, [t_1, t_2)_i^{d_i} and [t_2, t_{j+1})_i^{p_i}, first depleting the stock, then producing at the demand rate while the stock is empty, and finally rebuilding the stock. If the length of the phase is too short to completely deplete the stock and consecutively completely rebuild it, i.e., t_1 > t_2, then we reduce the stock as much as possible: in this case, let t_1 = t_2 = t_{j+1} − t•, where t• = (t_{j+1} − t_j)·d_i/p_i denotes the time required to meet the demand of the original phase when producing at rate p_i.
Clearly, S* is feasible and H(S*) < H(S), and thus S is not optimal.
We now show a similar result for the discrete case, where the machine produces product i at rate d i only if the stock for i is empty or if the production phase has length 1.
Lemma 8 (Level production for discrete case) In an optimal schedule S* for an instance of LSP(D,n), for any product i ∈ J there exists a non-empty phase [t_j, t_{j+1})_i^{d_i} only if q_i^{t_j} = 0 or t_{j+1} = t_j + 1.

Proof
We prove by contradiction. Suppose we have an optimal schedule S with a phase [t_j, t_{j+1})_i^{d_i} with t_{j+1} = t_j + 2 and q_i^{t_j} > 0. Once again, we construct a new schedule S* starting with S* := S. We can now split [t_j, t_{j+1})_i^{d_i} in S* into two new unit-length phases [t_j, t_j + 1)_i^{r_1} and [t_j + 1, t_{j+1})_i^{r_2}. If q_i^{t_j} < d_i, let r_1 = max{d_i − q_i^{t_j}, 2d_i − p_i} and r_2 = 2d_i − r_1. Otherwise, let r_1 = max{2d_i − p_i, 0} and r_2 = min{p_i, 2d_i}. Clearly, S* is feasible and we have that H(S*) < H(S), and thus S is not optimal.
Next, suppose t_{j+1} > t_j + 2. Then we split off the last two time units of the phase and apply the same construction, where r_1 and r_2 are defined as above. This process can be iteratively repeated on the schedule S* until either the stock level reaches 0, or at most one phase of length 1 is left.
We can now show that in an optimal schedule, for every product there is a time where its stock level is zero.

Lemma 9 (Zero stock level) Let S* be an optimal schedule for an instance of LSP(C,n) or LSP(D,n). Then, for each i ∈ J there exists a time t such that q_i^t = 0.

Proof The proof is by contradiction. Let S be an optimal schedule of length ℓ with at least one product i such that q_i^t > 0 for all t. Let t* be such that q_i^{t*} = min_{0≤t≤ℓ} q_i^t. Now, let S* be a copy of S in which we decrease the stock level of this product over the entire schedule, i.e., q_i^t ← q_i^t − q_i^{t*} for all 0 ≤ t ≤ ℓ. Since q_i^{t*} ≤ q_i^t for all t in S, we know that S* is feasible. Clearly, H(S*) < H(S), and thus S is not optimal. Note that the stock level can be decreased by producing at a rate lower than required by the schedule until the desired level is attained.
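The shifting argument in the proof of Lemma 9 amounts to subtracting the minimum of the stock trajectory; a two-line sketch (the function name is ours):

```python
def drop_to_zero(stock):
    """Shift a strictly positive cyclic stock trajectory down by its minimum.
    The result is still nonnegative (hence feasible) and every holding-cost
    term strictly decreases, mirroring the Lemma 9 argument."""
    q_min = min(stock)
    return [q - q_min for q in stock]
```

After the shift, the trajectory touches stock level zero at the time step where the original minimum was attained.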

Bounding the average costs
We conclude the basic properties with a lower bound on the average costs of an optimal continuous schedule and an upper bound on the average costs and maximum stock level of an optimal discrete schedule. To obtain these, we first derive optimality conditions for both cases.

Lemma 10 (Continuous cost balancing) An optimal schedule S for an instance of LSP(C,n) has the property that H (S) = W (S).
Proof We prove by contradiction. Let S be an optimal schedule of length ℓ such that H(S) ≠ W(S). Scale the length of each phase in S by a positive factor δ ≠ 1, and denote the resulting feasible schedule by S'. Let i be a product, where without loss of generality we assume that h_i = 1. Consider a phase [t_1, t_2)_i^r of S and let q_min be the minimum stock level of i during the phase. When rescaling, the phase length (t_2 − t_1) becomes (t_2 − t_1)δ and, since rescaling results in similar triangles of the stock level (considering Fig. 1), the stock level q_min becomes q_min·δ. Thus, the holding costs of the corresponding phase of the scaled schedule S' are δ² times those of the original phase. Summing over all phases and products, we get H(S') = δ²·H(S), while the sequencing costs are unaffected, W(S') = W(S). Observe that due to scaling, the schedule S' has length δℓ, so its average costs are c̄(S') = δ·H̄(S) + W̄(S)/δ, which is strictly smaller than c̄(S) = H̄(S) + W̄(S) for every δ strictly between 1 and W̄(S)/H̄(S); in particular, for every δ with

√(W̄(S)/H̄(S)) ≤ δ < W̄(S)/H̄(S)  or  W̄(S)/H̄(S) < δ ≤ √(W̄(S)/H̄(S)).   (2)

Observe that for any values of H̄(S) and W̄(S) such that H̄(S) ≠ W̄(S), there exists a δ satisfying Eq. (2). Because of this particular choice of δ, we have c̄(S') < c̄(S), and thus S was not optimal, proving the lemma.
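The balancing step can be checked numerically. Assuming, as in the scaling argument, that average holding costs grow linearly in the scaling factor δ while average sequencing costs shrink as 1/δ, the factor δ = √(W̄/H̄) equalizes the two cost terms; all names below are illustrative.

```python
import math

def scaled_avg_cost(h_avg, w_avg, delta):
    # average holding costs scale with delta,
    # average sequencing costs scale with 1/delta
    return delta * h_avg + w_avg / delta

h_avg, w_avg = 9.0, 4.0                 # an unbalanced schedule
delta = math.sqrt(w_avg / h_avg)        # balancing factor
balanced = scaled_avg_cost(h_avg, w_avg, delta)
assert abs(balanced - 2 * math.sqrt(h_avg * w_avg)) < 1e-12
assert balanced < h_avg + w_avg         # strictly cheaper than the original
```

At the balancing factor the scaled cost equals 2√(H̄·W̄), which by the AM-GM inequality is strictly below H̄ + W̄ whenever H̄ ≠ W̄.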
We now prove a similar result for the discrete case, taking into account that low values of δ might create infeasible schedules.

Lemma 11 (Discrete cost balancing) A schedule S for an instance of LSP(D,n) is optimal only if W (S) ≤ 4 · H (S).
Proof The lemma follows almost entirely from the proof of Lemma 10. The difference is that in the discrete case, stretching by an arbitrary factor δ might introduce infeasible schedules S'. Therefore, we restrict ourselves to factors δ ∈ N, δ ≥ 2. The first inequality in Eq. (2) now holds only if W(S) ≤ 4 · H(S).
To obtain a lower bound on the average costs, we first fully characterize the optimal continuous schedule for instances in which all products are identical.

Lemma 12 (Identical products) Consider an instance of LSP(C,n) in which all n products have identical demand rate d, maximum production rate p, holding costs h and pairwise sequencing costs s. Then an optimal schedule is a simple cycle of length ℓ = (1/α)·√(2ps/((p − d)dh)) with average costs c̄(S) = nα·√(2s(p − d)dh/p), where α := 1 − 1/n + d/p.
Proof Since all products are identical, the optimal schedule is given by a simple cycle in which all products are produced for the same period of time and H(S) = W(S); see Lemma 10. We first look at a single product block [t_1, t_2, t_3)_i, which denotes the period for a single product i from the moment it starts a production period until it starts its next production period. Here, [t_1, t_2) denotes the production period of product i and [t_2, t_3) denotes the period during which i is not produced. Note that q_i^{t_1} = q_i^{t_3} = 0; see Fig. 3. The holding costs for a single product i are given as xqh/2, where q is the maximum stock level and x is the time during which product i is produced at rate p plus the time during which it is not produced, within the product block. Given a slope of p − d during production and a slope of −d during non-production, since q = a(p − d) = bd, we have a = dx/p and b = x − a, resulting in total holding costs of x²(p − d)d·h/(2p) per product block. For each product, the length of the product block is given as the total cycle length ℓ. Note that 1 − dn/p is the fraction of time during which the machine produces any product at rate d.
Since all products are produced for an equal amount of time, the fraction of time during which one product is produced at rate d is 1/n − d/p. Define α := 1 − 1/n + d/p, yielding x = αℓ. Note that in a tight schedule, α = 1.
The optimal schedule S has total sequencing costs W(S) = ns and total holding costs H(S) = x²(p − d)d/(2p)·hn = ℓ²α²(p − d)d/(2p)·hn. Thus, the average costs are given as c̄(S) = ns/ℓ + ℓα²(p − d)d·hn/(2p). We now find the optimal cycle length using that W(S) = H(S), yielding ℓ = (1/α)·√(2ps/((p − d)dh)). Given the optimal length, we can calculate the average total costs, yielding c̄(S) = 2ns/ℓ = nα·√(2s(p − d)dh/p).

Using the characterization for identical products, we can construct a lower bound on the average costs of a schedule.
Lemma 13 (Lower bound on average costs) Consider LSP(C,n) for n > 1, and let S* be an optimal schedule. Let i be the product minimizing (p_i − d_i)d_i·h_i/(2p_i), and let s_min = min_{i,j∈J} s_{i,j} be the minimum sequencing costs. Then c̄(S*) ≥ 2nα·√(s_min·h_min), where h_min = (p_i − d_i)d_i·h_i/(2p_i) and α = 1 − 1/n + d_i/p_i.

Proof Intuitively, we construct a schedule for n identical products with d, p and h equal to the corresponding values of the least costly product. We use the notation from Lemma 12. In order to lower-bound the holding costs, assume that we produce n copies of a new dummy product k such that h_k ≤ h_j and d_k/p_k ≤ d_j/p_j for all j ∈ J. Furthermore, assume that the stock level is zero at the beginning and the end of the product block, i.e., q_k^{t_1} = q_k^{t_3} = 0. From Lemma 12, we know that the holding costs of a single product block for k are equal to x²(p_k − d_k)d_k·h_k/(2p_k). Now, let x²·h_min denote the minimum holding costs of each product during the block [t_1, t_2, t_3)_k, where h_min = min_{i∈J} (p_i − d_i)d_i·h_i/(2p_i). Choosing i as the product attaining h_min and s_min = min_{i,j≠i∈J} s_{i,j}, we apply Lemma 12 to prove this lemma.
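The identical-products characterization can be verified numerically. The sketch below picks the cycle length that balances average sequencing and holding costs, using the expressions W(S) = ns and H(S) = ℓ²α²(p − d)d·hn/(2p) from the derivation; the helper name and the sample values are ours.

```python
import math

def identical_cycle_length(n, d, p, h, s):
    """Cycle length for n identical products, chosen so that total
    sequencing costs n*s equal total holding costs (illustrative helper)."""
    alpha = 1 - 1 / n + d / p
    return math.sqrt(2 * p * s / ((p - d) * d * h)) / alpha

# sample instance: two identical products
n, d, p, h, s = 2, 1.0, 4.0, 1.0, 2.0
ell = identical_cycle_length(n, d, p, h, s)
alpha = 1 - 1 / n + d / p
w_avg = n * s / ell                                     # average sequencing costs
h_avg = ell * alpha**2 * (p - d) * d * h * n / (2 * p)  # average holding costs
assert abs(w_avg - h_avg) < 1e-9                        # costs are balanced
```

At this cycle length the average costs equal 2ns/ℓ, matching the closed form of Lemma 12.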
Already for two products, the optimal schedule can have pseudopolynomial length (Gabay et al. 2014). This poses an inherent obstacle to solving the problem in polynomial time, and in particular to outputting the schedule in polynomial time.
In this section, we overcome these difficulties and present two approximation algorithms. First, we augment the problem and solve the augmented problem to optimality, yielding an augmented polynomial time approximation algorithm for the discrete case. Next, we convert the augmented discrete solution into a feasible solution for the continuous case, yielding a polynomial time approximation algorithm. In both cases, the schedule produced has polynomial length. The algorithm constructs solutions in polynomial time given a constant number of products. Observe that the latter is a reasonable assumption: in real-life instances, the number of products is relatively small. Throughout this section, we assume $S^*$ is an optimal cyclic schedule of length $\ell$, and $q_i^t$ for $i \in J$ and $t = 0, \ldots, \ell - 1$ denotes the optimal stock level in $S^*$.
The general idea is to augment the production and demand rates, i.e., we allow for slightly higher production rates and modestly adjusted demand rates. For a given $\delta > 0$, we lift the stock levels $q_i^t$ for all $i$ and $t$ to powers of $(1+\delta)$ and use augmentation to keep the schedule feasible. For every time unit $t$, we generate states, which are defined by the stock levels $q_i^t$ for each product $i \in J$ and the product being produced. By Lemma 16, the maximum stock level is bounded by $Q$, yielding a polynomial number of states. With these states, we create a state graph and find a minimum mean cycle using Karp's algorithm (Karp 1978), in order to get an optimal schedule for the augmented version of LSP(D,n). Finally, we balance the resulting schedule such that it becomes a close to optimal solution for LSP(D,n) and a feasible schedule for LSP(C,n), yielding the aforementioned approximation algorithms. See Algorithm 1 for the pseudocode of the algorithm. Let a state $S_i = (q_1, \ldots, q_n)$ be defined as an ordered set of stock levels $q_j$ for each product $j \in J$, where the subscript $i \in J$ denotes the last product which was produced before reaching the current state. Let $d_i^t$ denote the augmented demand for a product $i \in J$ in time unit $t$.
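Lifting stock levels to powers of $(1+\delta)$ is what keeps the state space polynomial for a constant number of products. A rough sketch of the resulting state count follows; the level grid $\{0, 1, (1+\delta), (1+\delta)^2, \ldots\}$ up to $Q$ and the function names are our illustration, not the paper's notation.

```python
def rounded_levels(Q, delta):
    """Admissible stock levels: 0 and the powers of (1+delta) up to Q
    (assumed grid; O(log Q / log(1+delta)) many levels)."""
    levels = [0.0]
    v = 1.0
    while v <= Q:
        levels.append(v)
        v *= 1 + delta
    return levels

def num_states(n, Q, delta):
    """States = (choice of last-produced product) x (one grid level per product),
    i.e., n * |levels|^n -- polynomial for constant n."""
    return n * len(rounded_levels(Q, delta)) ** n
```

For example, with $n = 2$, $Q = 100$ and $\delta = 1$ the grid has 8 levels, giving $2 \cdot 8^2 = 128$ states.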
For each time unit $t$ and a product $i$ which is produced, we allow for augmented production rates $r_i^t$ such that the total augmented production is no more than $(1+\delta)$ times the total production in a feasible schedule. Specifically, we require that the augmented production satisfies the conditions in Eqs. (3) to (6).

Algorithm 1: Augmentation Algorithm AugAlg
Data: A set $J$ of $n$ products with demand rates $d_i$, maximum production rates $p_i$ and holding costs $h_i$ for all $i \in J$.
Result: Augmented schedule $S_D$ and schedule $S_C$.
1 Create the set $S$ of all states $S_i = (q_1, \ldots, q_n)$;
2 foreach pair of states …;
7 Find the minimum mean cycle $C^*$ in $S$ using Karp's algorithm (Karp 1978), discarding edge progressions which do not admit Eqs. (3) to (6);
8 Extract the augmented schedule $S_D$ from $C^*$;
9 Let $S_C \leftarrow S_D$ be a continuous schedule with $x_t$ the length of time slot $t$;
10 Let all demands $d_i^t \leftarrow d_i$ and decrease the production rates in $S_C$ such that $r_i^t \le p_i$ and all demands are met;

The first equation ensures for each time unit an upper bound on the augmented production rate, such that the next power of $(1+\delta)$ can be reached for the stock level. Note that this actually augments the stock level rather than the production rates. In order to limit the total augmentation in terms of the production rates, the latter equation ensures that the total production in the augmented schedule is no more than $(1+\delta)$ times the maximum possible production in the non-augmented schedule. In practice, we get an augmented schedule which is reasonably achievable with respect to the original input data.
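The minimum mean cycle subroutine can be implemented via Karp's characterization: the minimum cycle mean equals $\min_v \max_{0 \le k < n} \frac{F_n(v) - F_k(v)}{n-k}$, where $F_k(v)$ is the minimum weight of a walk of exactly $k$ edges ending in $v$. A self-contained sketch of that subroutine only (the state graph construction and the feasibility filtering of Eqs. (3) to (6) are omitted):

```python
def min_mean_cycle(n, edges):
    """Karp's algorithm: minimum cycle mean of a directed graph.
    n: number of vertices (0..n-1); edges: list of (u, v, weight)."""
    INF = float("inf")
    # F[k][v] = minimum weight of a walk with exactly k edges ending in v;
    # starting every walk at weight 0 from any vertex preserves cycle means.
    F = [[INF] * n for _ in range(n + 1)]
    F[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, w in edges:
            if F[k - 1][u] + w < F[k][v]:
                F[k][v] = F[k - 1][u] + w
    best = INF
    for v in range(n):
        if F[n][v] == INF:
            continue  # no walk of n edges ends here, so no cycle through v
        worst = max((F[n][v] - F[k][v]) / (n - k)
                    for k in range(n) if F[k][v] < INF)
        best = min(best, worst)
    return best
```

On a small graph with one cycle of mean 2 and one of mean 1, the routine returns 1; a cycle attaining the minimum can be recovered by storing predecessors alongside $F$.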
Additionally, for each time unit $t$ during which product $i$ is not produced, the augmented demand rates $d_i^t \ge 0$ must satisfy Eq. (5). This equation ensures that demand rates are not increased more than necessary in order to retain stock levels within a factor of $(1+\delta)$. Moreover, for all time units $t$ and products $i$, we require Eq. (6) to hold, ensuring that the total demand in the augmented schedule is no more than $(1+\delta)$ times the total demand in the non-augmented schedule. We denote the schedule obtained after this transformation by $S_D$. Since demand and production rates are changed by at most a factor of $(1+\delta)$, the overproduction in $S_D$ is no more than $(1+\delta)(p_i - d_i)$; therefore, this transformation increases the costs by at most a factor of $(1+\delta)$.
Let $x_t$ denote the length of time slot $t$. Clearly, there are $\ell$ time slots. For each product $i \in J$ such that $\sum_{t=0}^{\ell-1} r_i^t < d_i$, we will increase all production lengths $x_t$ where $r_i^t > 0$ to meet the demand of product $i$. To retain feasibility for all products $j \neq i \in J$, we increase production rates and shorten production periods where possible, while keeping the schedule length constant. For each product $j \in J$ such that $\sum_{t=0}^{\ell-1} r_j^t \ge d_j$, we consider the following three numbered categories:
1. For all $t$ where $0 < r_j^t < p_j$, we will increase $r_j^t$ and decrease $x_t$ such that the total production in $x_t$ remains unchanged, at most up to the point where $r_j^t = p_j$.
2. If $\sum_{t=0}^{\ell-1} r_j^t > d_j$ and $r_j^t \in \{0, p_j\}$ for all $t$, we will decrease the lengths of the production periods $x_t$ with $r_j^t > 0$, at most up to the point where $\sum_{t=0}^{\ell-1} r_j^t = d_j$.
3. If $\sum_{t=0}^{\ell-1} r_j^t = d_j$ and $r_j^t \in \{0, p_j\}$ for all $t$, the schedule is tight for this product.
For each $i \in J$ such that $\sum_{t=0}^{\ell-1} r_i^t < d_i$, simultaneously increase all $x_t$ where $r_i^t > 0$, increase $r_j^t$ and decrease $x_t$ for all products $j$ as in Category 1, and decrease $x_t$ for all products $j$ as in Category 2, while keeping the schedule length constant. Note that the category number of a production period can only increase by applying the transformation. Hence, since $\sum_{i \in J} \frac{d_i}{p_i} \le 1$ and each production period can be categorized as above, this transformation terminates successfully. Finally, for any product which is produced more than the total demand throughout the cycle, we uniformly decrease the production rates for this product, without altering the length of the production period, until demand is met exactly. We denote the resulting schedule by $S_C$. For the remainder of this proof, we assess the quality of $S_C$.
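The Category 1 step, raising rates to the maximum while preserving total output per period, can be sketched as follows. Here `periods`, a list of (rate, length) pairs for one product, is our illustrative representation, not the paper's notation.

```python
def raise_rates(periods, p):
    """Category 1 sketch: raise each positive production rate below p
    to the maximum p, shortening the period so that the total output
    r * x of each period is unchanged."""
    out = []
    for r, x in periods:
        if 0 < r < p:
            out.append((p, r * x / p))  # same output r*x, now at full rate
        else:
            out.append((r, x))          # idle or already at rate p: untouched
    return out
```

The freed-up time ($x - rx/p$ per period) is what the transformation reallocates to products whose demand is not yet met, keeping the schedule length constant.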
First, look at a single increment of length $\alpha \le \delta$ for a time unit $t$ where $i$ is produced and the total production of $i$ falls short of its demand. Let $c_j^t(S)$ denote the costs for a product $j \in J$ during time slot $t$ in a schedule $S$. Since the production rate is increased in the transformation by a factor $(1+\alpha)$, the costs for product $i$ at time slot $t$ are bounded by $(1+\alpha)\,c_i^t(S_D)$. Similarly, the costs for each product $j \neq i \in J$ increase to at most $(1+\alpha)\,c_j^t(S_D)$. At the end of every production period, the stock levels in $S_C$ are not increased compared to the stock levels in $S_D$.
Secondly, look at a single decrement of $\alpha$ for a time unit $t$ where $i$ is produced. Clearly, the costs $c_i^t(S_C)$ do not increase. Furthermore, the costs $c_j^t(S)$ for each product $j \neq i \in J$ do not increase either. Since the production period is shortened, the stock level for each product $j \neq i$ at the end of the production period is increased. In the worst case, this extra stock needs to be carried throughout the entire schedule. Hence, for each decrement of $\alpha$, for each product $j \neq i$, the total costs for that product over the entire schedule can increase by at most $\alpha d_j h_j$. Observe that $\alpha d_j h_j \le \delta c_j(S)$.
Recall that the maximum increment for a single time unit is at most $\delta$. Each product for which time units are increased raises the total costs for all products by at most $\delta c(S_D)$.
Furthermore, for each product for which time units are decreased, the costs increase by at most $\delta c(S_D)$ in total. Thus, AugAlg produces a feasible schedule $S_C$ for LSP(C,n) of costs at most $(1 + n\delta)\,c(S_D)$. From Lemma 17, we know that $c(S_D) \le \xi c(S^*)$.
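Chaining the two bounds gives $c(S_C) \le (1+n\delta)\,c(S_D) \le (1+n\delta)\,\xi\,c(S^*)$; choosing $\delta = \varepsilon/n$ then yields the $(1+\varepsilon)\xi$ guarantee stated later. A trivial check of this arithmetic (the choice $\delta = \varepsilon/n$ is our inference from the bound, not spelled out in this passage):

```python
def overall_factor(n, delta, xi):
    """Approximation factor of S_C relative to S*:
    c(S_C) <= (1 + n*delta) * xi * c(S*), per the bounds above."""
    return (1 + n * delta) * xi

def delta_for_eps(n, eps):
    """Setting delta = eps / n turns (1 + n*delta) into exactly (1 + eps)."""
    return eps / n
```
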

Discussion and future work
This article combines the hardness of high-multiplicity encoding with sequence-dependent setup costs, both of which are natural properties of real-life problems. Not only does this introduce hardness akin to the traveling salesman problem, but due to the compact encoding it is not clear whether a polynomially sized certificate can be constructed, even for very restricted cases. We discussed the complexity of the problem and presented structural properties largely characterizing optimal schedules, which can be used for future algorithms and computational experiments. We presented a polynomial time augmented approximation algorithm, which finds $(1+\varepsilon)$-approximate augmented solutions for the discrete variant of the problem and $(1+\varepsilon)\xi$-approximate solutions for the continuous case. In contrast to the known complexity of the problem, the algorithm runs in polynomial time and yields schedules of polynomial length.
It is unclear whether it can be guaranteed that an optimal schedule exists at all. Consider the case of LSP(C,2) in Gabay et al. (2014), where the optimal schedule is already irrational even under rational input values. Now consider LSP(C,3). Is it possible that due to the irrationality of the cost balance, the optimal schedule has infinite length? Can it nevertheless be approximated with a finite schedule? Considering instances with two products, can we characterize the optimal solutions for the discrete case? We conjecture this is possible to achieve using techniques similar to the ones used in this paper.
Alternatively, consider the settings where we explicitly make assumptions concerning the input instances. For instance, if the sequence is given, e.g., using a TSP oracle, is it possible to find an (approximately) optimal solution for both cases in polynomial time? Or if the sequencing costs have a lexicographical ordering (e.g., when the products only differ in color and setting up the machine when switching between two similar colors costs less), can we obtain stronger results?
Regarding the complexity of the problem, we conjecture that this problem is contained in a higher complexity class than NP: already for LSP(F,1) and LSP(C,2), the optimal schedule can be of non-polynomial length. Although the schedule for these cases can still be represented in polynomial time, it is uncertain whether this can be done for arbitrary numbers of products. Furthermore, consider the following decision problem: does there exist a cyclic schedule of average costs at most $k$? It is unclear whether this decision problem is contained in NP, and how an adequate polynomial certificate for a NO-instance could be constructed.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.