A Note on the Integrality Gap of Cutting and Skiving Stock Instances: Why 4/3 is an Upper Bound for the Divisible Case?

In this paper, we consider the (additive integrality) gap of the cutting stock problem (CSP) and the skiving stock problem (SSP). Formally, the gap is defined as the difference between the optimal values of the ILP and its LP relaxation. For both the CSP and the SSP, this gap is known to be bounded by 2 whenever, for a given instance, the bin size is an integer multiple of every item size, hereinafter referred to as the divisible case. In recent years, some improvements of this upper bound have been proposed. More precisely, the constants 3/2 and 7/5 have been obtained for the SSP and the CSP, respectively, the latter of which has never been published in English. In this article, we introduce two reduction strategies that significantly restrict the number of representative instances which have to be dealt with. Based on these observations, we derive the new and improved upper bound 4/3 for both problems under consideration.


General Introduction
Let a capacity (bin size) L ∈ N and m ∈ N items, characterized by their sizes l_i ∈ N and quantities b_i ∈ N, i ∈ I := {1, . . . , m}, be given. More compactly, we will refer to these input data by a tuple E = (m, l, L, b) with l := (l_1, . . . , l_m) ∈ N^m and b := (b_1, . . . , b_m) ∈ N^m, termed an instance. In this paper, we consider the following two combinatorial optimization problems:
• Cutting Stock Problem (CSP): Find the minimum number of bins (of capacity L) needed to accommodate all items without exceeding the capacity of any bin.
• Skiving Stock Problem (SSP): Find the maximum number of bins (of capacity L) that can be filled using the given items, so that the total load of any bin at least reaches the capacity.
Note that, usually, the CSP and the SSP are introduced from the perspective of cutting large items into smaller ones or combining small objects to obtain larger ones. However, to overcome the difficulty that the input parameters of an instance would partly have different meanings for the CSP and the SSP, we decided to choose the bin packing terminology throughout this article, so that both problems can suitably be addressed by the same vocabulary.
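To make the two optimization tasks concrete, the following brute-force sketch (our own toy helper, not part of the formulations discussed in this article) computes both optimal values for a tiny instance by enumerating all item-to-bin assignments:

```python
from itertools import product

def csp_opt(items, L):
    """Minimum number of bins of capacity L accommodating all items
    (exhaustive search over all item-to-bin assignments; toy sizes only)."""
    n = len(items)
    best = n  # one bin per item is always feasible if every item fits alone
    for assign in product(range(n), repeat=n):
        loads = [0] * n
        for size, j in zip(items, assign):
            loads[j] += size
        if all(load <= L for load in loads):
            best = min(best, sum(1 for load in loads if load > 0))
    return best

def ssp_opt(items, L):
    """Maximum number of bins whose total load reaches at least L."""
    n = len(items)
    best = 0
    for assign in product(range(n), repeat=n):
        loads = [0] * n
        for size, j in zip(items, assign):
            loads[j] += size
        best = max(best, sum(1 for load in loads if load >= L))
    return best
```

For L = 5 and items of sizes 3, 3, 2, 2, 2, this yields three bins for the CSP (e.g., 3+2, 3+2, 2) and two filled bins for the SSP (e.g., 3+2 and 3+2+2).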
Obviously, the above optimization tasks are united by an economic component (i.e., lower costs in a broad sense) and the idea of sustainability (i.e., restricting the waste of resources). Nevertheless, from a historical point of view, there is a quite remarkable difference. While the CSP already started to attract scientific interest several decades ago, see [10,13] for early publications, the SSP is a rather young field of research which was introduced as a natural counterpart of the CSP in a specific application [12,35]. In recent years, in line with the constantly growing importance of the CSP, see particularly [6, Figure 1], a larger body of work has also been established for the SSP. For a more detailed introduction to both optimization problems and possible areas of application, we recommend the survey articles [7,34] for the CSP, the papers [18,19] for the SSP, as well as the book [32] for a general overview on cutting and packing. Observe that, although the CSP and the SSP form an obviously related pair of minimization and maximization problems, they are not dual formulations in the sense of mathematical optimization. Consequently, both problems have to be considered as theoretically independent. However, it is worth noting that the CSP and the SSP also appear side-by-side in holistic cutting-and-skiving scenarios within different fields of industry [3,12].
Over time, many different integer linear programming (ILP) formulations have been presented for the two problems. Starting with assignment models [13] or pattern-based approaches [10,35], nowadays more and more research deals with pseudo-polynomial alternative frameworks like onecut models [9,19] or flow-based formulations [5,18,34], the latter of which have shown the most competitive performance so far. Besides structural parameters (like the numbers of variables and constraints or the sparsity of the system matrices), the strength of the corresponding LP relaxation is one of the most important indicators for an efficient solution procedure. More precisely, research has shown, and it has become widely accepted, that the quality of the bound provided by the LP relaxation of an ILP model is a crucial factor for the size of the branch-and-bound search trees. In the next section, this aspect shall be introduced more thoroughly.

The Additive Integrality Gap: Preliminaries and Literature Review
Let E = (m, l, L, b) denote an instance (of the CSP or the SSP). To appropriately quantify the tightness of the LP bound, the additive integrality gap (or briefly gap), i.e., the difference between the optimal values of the original ILP formulation and its LP relaxation, is considered in this paper. In what follows, we will introduce this concept with respect to the pattern-based formulations presented by Gilmore/Gomory [10] and Zak [35]. Due to well-known equivalence results, see [25, Theorem 10] or [5] for the CSP and [19, Section 3] for the SSP, the gap is independent of whether the pattern-based model or alternative formulations like flow-based or onecut approaches are considered.
Definition 1. Let E = (m, l, L, b) denote an instance (of the CSP or the SSP). Then, any vector a ∈ Z^m_+ with
• lᵀa ≤ L is called a cutting pattern,
• lᵀa ≥ L is called a packing pattern,
• lᵀa = L is called an exact pattern.
We will refer to the respective sets by P≤ := P≤(E), P≥ := P≥(E), and P= := P=(E). Moreover, in order to address a specific pattern, we will use the notation a^j = (a^j_1, . . . , a^j_m) ∈ Z^m_+, where j can belong to one of the index sets J≤ := J≤(E), J≥ := J≥(E), or J= := J=(E).
Based on these definitions, we can formulate the

Pattern-based Model of the CSP

z⋆_CSP(E) = min { Σ_{j∈J≤} x_j : Σ_{j∈J≤} a^j x_j ≥ b, x_j ∈ Z_+ for all j ∈ J≤ }

and the

Pattern-based Model of the SSP

z⋆_SSP(E) = max { Σ_{j∈J≥} x_j : Σ_{j∈J≥} a^j x_j ≤ b, x_j ∈ Z_+ for all j ∈ J≥ },

where x_j counts how many bins are filled according to the (cutting or packing) pattern a^j. In both cases, the LP relaxation can be obtained by replacing the condition x_j ∈ Z_+ with x_j ≥ 0.
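As a small illustration of the pattern sets appearing in these models, the following sketch (our own helper, purely for illustration) enumerates all cutting patterns of an instance; the exact patterns are those attaining lᵀa = L:

```python
def cutting_patterns(sizes, L):
    """Yield all vectors a in Z_+^m with sum_i a_i * sizes[i] <= L,
    i.e., the cutting patterns of Definition 1."""
    if not sizes:
        yield ()
        return
    first, rest = sizes[0], sizes[1:]
    for a0 in range(L // first + 1):
        for tail in cutting_patterns(rest, L - a0 * first):
            yield (a0,) + tail

# Example: sizes 3 and 2 with L = 5 admit five cutting patterns,
# exactly one of which, a = (1, 1), is an exact pattern.
pats = list(cutting_patterns((3, 2), 5))
exact = [a for a in pats if 3 * a[0] + 2 * a[1] == 5]
```

Note that the packing patterns P≥ form an infinite set (any component of a may be increased), which is why column-generation approaches for the SSP typically work with minimal packing patterns only.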
Definition 2. Let E = (m, l, L, b) denote an instance (of the CSP or the SSP). Then, the differences

∆_CSP(E) := z⋆_CSP(E) − z⋆_CSP,LP(E),   ∆_SSP(E) := z⋆_SSP,LP(E) − z⋆_SSP(E)

are called the (additive integrality) gap of E, depending on whether the CSP or the SSP is referred to. Here, the index "LP" stands for the continuous relaxation, whereas the superscript "⋆" indicates the optimal value of the respective formulation.

Note that there is no obvious relationship between these two concepts, even though the corresponding optimization problems possess a similar structure.
As regards the CSP, a significant body of work on the additive integrality gap has accumulated in recent decades. At the beginning of these investigations, it was conjectured that the gap of the CSP can be bounded from above by the constant 1. A first counterexample to this claim was presented by Marcotte [17], but it required very large (thus practically irrelevant) input data. Some years later, Fieldhouse [11] and Nica [27] constructed further counterexamples of moderate sizes, so that finally ∆_CSP(E) < 2, the so-called modified integer round-up property (MIRUP), was conjectured for all instances E, see [33]. With respect to the SSP, a similar claim (the modified integer round-down property (MIRDP) ∆_SSP(E) < 2) was introduced in [35]. Even nowadays, it is still an open question whether these inequalities hold or not. Moreover, there are only very few discrete optimization problems whose additive integrality gap is known to be bounded by a small constant (independent of the instance); see [26] for an example in edge coloring. For these reasons, particularly in recent years, research on the gap (of both the CSP and the SSP) has been further intensified, mainly with respect to
• the construction of instances having large gaps [2,14,29,30],
• upper bounds for the maximum gap [20,28,31],
• the investigation of special cases [21,22].
One of the most important special cases for which the MIRUP and the MIRDP could successfully be proved is given by the divisible case.
Definition 3. Let E = (m, l, L, b) denote an instance (of the CSP or the SSP). Then, E belongs to the divisible case (E ∈ DC for short) if L/l_i ∈ N holds for all i ∈ I.
Note that (in a slight abuse of our initial assumption l ∈ N^m) for the sake of simplicity, we will always represent an instance E ∈ DC by its normalized form with L = 1 and l_i = 1/q_i ∈ {1/q : q ∈ N} for all i ∈ I. Besides this convenient structure, one main advantage of the divisible case is given by the following result about the optimal LP value:

Lemma 2. For any instance E ∈ DC (in normalized form), we have z⋆_CSP,LP(E) = z⋆_SSP,LP(E) = lᵀb.

Proof. Let l_1 = 1/q_1 for some q_1 ∈ N. Then, the exact pattern a^1 := (q_1, 0, . . . , 0) ∈ P=(E) can be used x_1 = b_1/q_1 times in the LP relaxation. In the same manner, we can proceed with the further items and obtain the objective value Σ_{i∈I} b_i/q_i = lᵀb. This value is optimal, for the CSP as well as for the SSP, since we did not waste any material.
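For normalized divisible-case instances, the LP bound is therefore trivial to evaluate; the small sketch below (an illustration of ours, using exact rational arithmetic) computes lᵀb. For example, for item sizes 1/2 and 1/3 (one item each), the LP value is 5/6, while any integer CSP solution needs one full bin, so the gap equals 1/6.

```python
from fractions import Fraction

def lp_value_divisible(q, b):
    """Optimal LP value l^T b of a normalized divisible-case instance
    with item sizes 1/q_i and multiplicities b_i (cf. Lemma 2)."""
    return sum(Fraction(b_i, q_i) for q_i, b_i in zip(q, b))

# sizes 1/2 and 1/3, one item each: LP value 5/6, ILP value 1, gap 1/6
gap_example = 1 - lp_value_divisible((2, 3), (1, 1))
```

The exact-pattern construction from the proof (use the pattern (q_1, 0, . . . , 0) with multiplicity b_1/q_1, and so on) attains exactly this value.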
Hence, only one optimization problem needs to be solved when investigating the gap of the divisible case.
• Meanwhile, these upper bounds could be improved to ∆_SSP(E) < 3/2, see [21], and ∆_CSP(E) < 1.4, see [28, Satz 1]. Observe that the latter has never been published in English, and, unfortunately, its documentation in the German dissertation [28] is not thorough enough to allow all the arguments to be understood and reconstructed. Hence, we will not make active use of this inequality here.
• Upper bounds for the gap of the divisible case can directly be used to formulate upper bounds for the gap of arbitrary instances, see [24, Theorem 4] for an example. Hence, any improvement that is achieved for this special case can be transferred to more general instances, too.
Note that, besides being of theoretical importance, the divisible case sometimes also appears in practically relevant scenarios, especially when there is a high degree of standardization such as in bin-packing based (multiprocessor) scheduling applications [1,4].
Given the aforementioned observations, the main contributions of the present article can be summarized as follows:
• We introduce two new reduction strategies that allow us to focus only on a considerably restricted set of (somewhat homogeneous) instances (→ Section 3).
• We significantly improve the currently best upper bounds for the gap of the divisible case for both the CSP and the SSP. Moreover, we highlight the fact that our underlying theoretical approach is very general, so that a much more extensive analysis of the applied arguments could potentially lead to further improvements of these upper bounds.

Reduction Strategies
The following definition will be required to decompose the set of all instances of the divisible case.
Definition 4. Let δ ∈ N with δ ≥ 2 be given. For any n ∈ N, we define the sets

DC−(n, δ) := {E ∈ DC : n − 1 ≤ lᵀb ≤ n − 1/δ},
DC+(n, δ) := {E ∈ DC : n + 1/δ ≤ lᵀb ≤ n + 1}.

For fixed δ ≥ 2, the sets DC−(n, δ), n ∈ N, will be used for the CSP, whereas the sets DC+(n, δ), n ∈ N, will later be important for the SSP case. Observe that, due to Lemma 2, the optimal LP value of an instance E ∈ DC−(n, δ) is known to satisfy z⋆_CSP,LP(E) = lᵀb ≤ n − 1/δ. In the following, we first concentrate on the CSP case. To this end, let δ ≥ 2 be fixed to some appropriate value (which is specified later).
Lemma 3. If, for any n ∈ N and any instance E ∈ DC−(n, δ), the items of E can be assigned to (at most) n bins (cutting patterns), then we have ∆_CSP(E) < (δ + 1)/δ for all E ∈ DC.

Proof. For any instance E = (m, l, 1, b) ∈ DC there is some n := n(E) ∈ N with lᵀb ∈ [n − 1, n):
• If E ∈ DC−(n, δ), then we have z⋆_CSP(E) ≤ n by hypothesis and, finally, ∆_CSP(E) = z⋆_CSP(E) − lᵀb ≤ n − (n − 1) = 1 < (δ + 1)/δ.
• If E ∉ DC−(n, δ), then lᵀb > n − 1/δ holds. On the other hand, given that MIRUP holds for the entire divisible case, we obtain z⋆_CSP(E) ∈ {n, n + 1}. Both observations lead to ∆_CSP(E) = z⋆_CSP(E) − lᵀb ≤ n + 1 − lᵀb < 1 + 1/δ = (δ + 1)/δ.

Remark 4.
In an analogous manner, proving that the items of any instance E ∈ DC+(n, δ), n ∈ N, can be used to build (at least) n bins (packing patterns) is sufficient to show ∆_SSP(E) < (δ + 1)/δ for all E ∈ DC. We refer the reader to Section 5 for more details concerning the SSP case.
Obviously, for fixed δ, the sets DC−(n, δ), n ∈ N, contain an infinite number of instances to be checked with respect to Lemma 3. Moreover, these instances can be considered very heterogeneous, since neither the lengths l_i nor the quantities b_i, i ∈ I, of an instance E ∈ DC−(n, δ) are restricted. To overcome these issues, we will introduce two reduction strategies leading to
(1) an upper bound for the quantities b_i, i ∈ I,
(2) a lower bound for the lengths l_i, i ∈ I,
so that coping with only finitely many cases (each of which offers some structural information) will be sufficient to prove the result for all possible instances of the divisible case.
Before explaining these steps in more detail, the following definition is required:

Definition 5. An instance E = (m, l, 1, b) ∈ DC is called irreducible if two or more items (not necessarily having different sizes) cannot be used to build a unit fraction. More formally, an irreducible instance is characterized by

lᵀa ∉ {1/q : q ∈ N}

for any vector a ∈ Z^m_+ with a ≤ b and a_1 + . . . + a_m ≥ 2. Otherwise, E is called reducible.

Remark 5. In particular, any irreducible instance E ∈ DC has the following properties:
• E does not possess any exact pattern a ∈ P=(E) with a ≤ b (in a componentwise sense).
• Let t(q) denote the smallest prime divisor of q ∈ N with q ≥ 2. Then, any item of size 1/q can appear at most t(q) − 1 times in E. (Otherwise, t(q) items of size 1/q would sum up to the unit fraction 1/(q/t(q)).)
• In addition to the previous observation, item combinations like 1/3 + 1/6 = 1/2 or 1/4 + 1/12 = 1/3 cannot appear in E.
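These properties are easy to verify mechanically. The sketch below (our own illustration, not the authors' code) implements the smallest prime divisor t(q) and a brute-force test of Definition 5 using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

def t(q):
    """Smallest prime divisor of q >= 2 (trial division)."""
    d = 2
    while d * d <= q:
        if q % d == 0:
            return d
        d += 1
    return q

def is_irreducible(qs):
    """Definition 5: no sub-multiset of >= 2 items 1/q sums to a unit fraction."""
    items = [Fraction(1, q) for q in qs]
    for r in range(2, len(items) + 1):
        for comb in combinations(range(len(items)), r):
            if sum(items[i] for i in comb).numerator == 1:
                return False
    return True
```

For example, t(9) = 3, so an irreducible instance contains at most two items of size 1/9; and since 1/3 + 1/6 = 1/2, these two sizes cannot occur together (Remark 5).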
In order to implement the first reduction mentioned above, we now proceed as follows:

Reduction 1. If z⋆_CSP(E) ≤ n can be shown for all irreducible instances E ∈ DC−(n, δ), then it also holds for any reducible instance in DC−(n, δ).
Proof. Consider a reducible instance E ∈ DC−(n, δ). Then, we can find a subset of its items (containing at least two items) summing up to some value 1/k, k ∈ N. By replacing this subset of items with one artificial item of length 1/k, we obtain an instance E′ with fewer items but the same optimal LP value. After a finite number of such steps, we end up with an irreducible instance whose items can be packed into at most n bins (cutting patterns) by hypothesis. In this feasible packing, any artificial item of size 1/k can be replaced by the corresponding subset of original items (of E) which was used to build the item of size 1/k. Consequently, the items of E can also be packed into at most n bins, and we are done.
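A possible mechanization of this argument (an illustrative sketch under our own naming, not the authors' implementation) repeatedly merges sub-multisets summing to a unit fraction until the instance becomes irreducible:

```python
from fractions import Fraction
from itertools import combinations

def reduce_instance(qs):
    """Reduction 1: repeatedly replace a sub-multiset of >= 2 items 1/q
    summing to a unit fraction 1/k by the single item 1/k; the result is
    an irreducible multiset with the same total size."""
    qs = list(qs)
    changed = True
    while changed:
        changed = False
        for r in range(2, len(qs) + 1):
            for comb in combinations(range(len(qs)), r):
                s = sum(Fraction(1, qs[i]) for i in comb)
                if s.numerator == 1:  # unit fraction found: merge the subset
                    qs = [q for i, q in enumerate(qs) if i not in comb]
                    qs.append(s.denominator)
                    changed = True
                    break
            if changed:
                break
    return sorted(qs)
```

For instance, the sizes 1/3, 1/6, 1/5 reduce to 1/2, 1/5 (since 1/3 + 1/6 = 1/2), and the total size is preserved, exactly as used in the proof.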
While this method can mainly be used to restrict the quantities b_i, i ∈ I, of an instance, the second reduction strategy is useful to focus only on the "large items" of E.
Definition 6. Let δ ≥ 2, n ∈ N, and an instance E ∈ DC be given. Then, the set of large items (of E) is defined by

Λ_E := Λ_E(n, δ) := {i ∈ I : l_i ≥ 1/(δ(n − 1) − 1)}.   (1)

All other items will be termed small.
Then, we obtain the following observation.

Reduction 2.
Let E ∈ DC−(n, δ) be irreducible. If it is possible to assign all large items of E to at most n bins (cutting patterns), then we have z⋆_CSP(E) ≤ n, i.e., all items actually fit into those n bins.
Proof. Let us assume that, after having assigned the large items of E to at most n bins, it is not possible to put one of the remaining small items, say one item of size l′ := l_i for some i ∈ I \ Λ_E, into the existing bins without exceeding the bin size. This can only happen if, for any bin B_j, j ∈ {1, . . . , n}, the total load C(j) (of the items already allocated to B_j) is greater than 1 − l′. Consequently, we have

n(1 − l′) < C(1) + . . . + C(n) ≤ lᵀb − l′ ≤ n − 1/δ − l′.

Rearranging the terms leads to

(n − 1) l′ > 1/δ  ⟺  l′ > 1/(δ(n − 1))  ⟺  l′ ≥ 1/(δ(n − 1) − 1),

where the last equivalence is true since any item length is a unit fraction. However, this would imply that the item of size l′ is a large item which has already been feasibly assigned to a bin by hypothesis. Hence, we obtain a contradiction and the statement is proved.
This second reduction strategy allows us to consider only the largest items of an instance E ∈ DC−(n, δ), where the term large is specified by condition (1). Moreover, it implicitly states that, after these objects have been distributed, the small items can be assigned in an arbitrary manner, as long as the capacities are respected. By way of example, one appropriate strategy to assign the small items is based on the best-fit decreasing heuristic, as described in [21, Algorithm 1].
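By way of illustration, a minimal best-fit decreasing routine in the spirit of [21, Algorithm 1] could look as follows (a sketch under our own naming; by Reduction 2, when the loads stem from the large items of an irreducible E ∈ DC−(n, δ), the failure branch is never reached):

```python
from fractions import Fraction

def best_fit_decreasing(small_qs, loads):
    """Place each small item 1/q (largest first) into the fullest bin
    that still has room (bin capacity 1); returns the updated loads."""
    loads = list(loads)
    for q in sorted(small_qs):                        # ascending q = decreasing size
        size = Fraction(1, q)
        feasible = [j for j, c in enumerate(loads) if c + size <= 1]
        if not feasible:
            raise ValueError("no bin can take an item of size 1/%d" % q)
        j = max(feasible, key=lambda i: loads[i])     # best fit: fullest feasible bin
        loads[j] += size
    return loads
```

Starting, e.g., from loads 1/2 and 0, two items of size 1/4 are both placed into the first bin, filling it exactly.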

The CSP Case: Improved Upper Bounds for the Gap
As a first contribution, we consider the case δ = 2, which leads to the upper bound ∆_CSP(E) < 3/2 for E ∈ DC according to Lemma 3. The reasons for presenting this auxiliary result are twofold:
• Remember that the proof for the currently best upper bound ∆_CSP(E) < 1.4 has never been published in English, and so it is hardly known in the scientific community. Hence, instead of using a result that cannot easily be verified, it is more convenient to briefly prove an upper bound of nearly the same quality.
• Later, we can apply this preliminary observation to appropriately deal with some subcases appearing in our main theorem.
Theorem 6. Let n ∈ N be given, and let us consider an irreducible instance E ∈ DC−(n, 2). Then, its items can be assigned to at most n bins (cutting patterns). In particular, we have ∆_CSP(E) < 3/2 for all E ∈ DC.

Proof. It is sufficient to consider the large items of E, meaning that we focus on those items with l_i ≥ 1/(2n − 3). Since E is irreducible, for any fixed k = 1, . . . , n − 1 we have at most one item of size 1/(2k) and at most 2k − 2 items of size 1/(2k − 1). Due to

1/(2k) + (2k − 2)/(2k − 1) = 1 − 1/(2k(2k − 1)) < 1,

these items would fit into one bin. Hence, the large items of E can be assigned to at most n bins, which concludes the proof thanks to Reduction 2.
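The grouping used in this proof can be checked directly; the following sketch (purely illustrative) builds, for each k, the worst-case group of one item of size 1/(2k) and 2k − 2 items of size 1/(2k − 1), and verifies that each group fits into a single bin:

```python
from fractions import Fraction

def theorem6_groups(n):
    """For k = 1..n-1, the worst-case group of large items in the proof:
    one item 1/(2k) plus 2k-2 items 1/(2k-1); each group fits into one bin."""
    groups = []
    for k in range(1, n):
        group = [Fraction(1, 2 * k)] + [Fraction(1, 2 * k - 1)] * (2 * k - 2)
        # total load is 1 - 1/(2k(2k-1)), i.e., strictly below the capacity 1
        assert sum(group) == 1 - Fraction(1, 2 * k * (2 * k - 1))
        groups.append(group)
    return groups
```

Since only n − 1 values of k occur, this construction even shows that n − 1 bins suffice for the large items.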
(Actually, the proof shows a bit more, namely that already n − 1 bins are sufficient.) Note that this proof is also constructive, since it contains a precise method to obtain a feasible solution using n bins. More precisely, the large items of E are grouped according to the instructions in the previous proof, whereas the small items can, for example, be distributed by the best-fit decreasing heuristic, see [21, Algorithm 1].

Now we intend to show an analogous result for δ = 3, which leads to the improved upper bound ∆_CSP(E) < 4/3 for all E ∈ DC. To this end, let us consider an irreducible instance E = (m, l, 1, b) ∈ DC−(n, 3) for some n ∈ N. Note that for n = 1 it is obvious that all items can be packed into one bin. Consequently, we can assume n ≥ 2. Moreover, due to Reduction 2, it is sufficient to consider the large items of E, meaning that we only have to show that all items with l_i ≥ 1/(3n − 4) can be assigned to (at most) n bins (cutting patterns). To prove the latter, Theorem 6 can be applied. More formally, for δ = 3, let T_E(n) := T_E(n, δ) denote the total size of all large items appearing in E, i.e.,

T_E(n) := Σ_{i∈Λ_E(n,δ)} l_i b_i,

and let us define T(n) := max{T_E(n) : E ∈ DC−(n, 3) is irreducible}.
Then it suffices to verify that

T(n) ≤ n − 1/2   (2)

holds for all n ≥ 2. To prove this claim, observe that for N := 3n − 4 we have

T(n) ≤ U(N) − R(N),

where U(N) = U(3n − 4) sums up at most t(q) − 1 items of size 1/q (for any large item size 1/q, q = 2, . . . , N = 3n − 4), and R(N) ≥ 0 is a correction term that possibly subtracts some item sizes resulting from forbidden item combinations, see Remark 5. Hence, our main strategy in the following proofs consists of showing

U(3n − 4) − R(3n − 4) ≤ n − 1/2

for an appropriately defined term R(3n − 4) ≥ 0, which directly implies Condition (2).

Theorem 7. Condition (2) is true for all n ∈ {2, 3, . . . , 24}.
In our proof, we will need an inequality involving the sum over all primes p ∈ P with 3 ≤ p ≤ N, which can easily be derived from (4).
Note that N ≥ 8 is required in the proof to obtain the additional term −1/8 when subtracting the two sums in the second line of the inequality chain.
Theorem 9. Condition (2) is true for all n ≥ 25.

Proof. Let n ≥ 25 and N := 3n − 4 be fixed. Then we have T(n) ≤ U(N) ≤ n − 1/2, i.e., Condition (2) is true. For all the details, we refer the interested reader to the appendix.

Putting Theorems 7 and 9 together, we can conclude:

Theorem 10. We have ∆_CSP(E) < 4/3 for any E ∈ DC.

Consequently, we have found a new and improved upper bound for the gap of divisible case instances of the CSP.
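The bound used in this proof can also be spot-checked numerically; the sketch below (our own illustration, exact rational arithmetic) evaluates U(N) = Σ_{q=2}^{N} (t(q) − 1)/q and verifies U(3n − 4) ≤ n − 1/2 for a range of values of n:

```python
from fractions import Fraction

def t(q):
    """Smallest prime divisor of q >= 2 (trial division)."""
    d = 2
    while d * d <= q:
        if q % d == 0:
            return d
        d += 1
    return q

def U(N):
    """U(N) = sum_{q=2}^{N} (t(q) - 1)/q: at most t(q) - 1 items of each
    size 1/q can appear in an irreducible instance (Remark 5)."""
    return sum(Fraction(t(q) - 1, q) for q in range(2, N + 1))

# spot check of U(3n - 4) <= n - 1/2 for the first few values with n >= 25
for n in range(25, 41):
    assert U(3 * n - 4) <= n - Fraction(1, 2)
```

Such a check is, of course, no substitute for the proof in the appendix, which covers all n ≥ 25, but it illustrates how comfortably the inequality holds for moderate n.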

A Transfer to the SSP Case
Let us now consider an instance E = (m, l, 1, b) ∈ DC of the skiving stock problem. In analogy to the CSP case, we can focus on irreducible instances.

Lemma 11. Let E ∈ DC+(n, 3) be irreducible. Then the items of E can be assigned to n bins (packing patterns).
Proof. Let us consider the large items of E, i.e., those items satisfying l_i ≥ 1/(3n − 4). In the proofs of the previous section, we have shown that (for given n and δ = 3) all large items (of any irreducible instance E ∈ DC) can always be assigned to n cutting patterns. Hence, the same is true for the large items contained in the considered instance E, so that they possess a total size of at most n. Thus, some items of the SSP instance may still have to be distributed. Note that all of them are small items, i.e., they have a length l_i ≤ 1/(3(n − 1)). Let us assign the remaining items based on the best-fit decreasing heuristic presented in [21, Algorithm 1], meaning that the current item is allocated to a bin B_j, j ∈ {1, . . . , n}, whose total item load C(j) is smallest. Then, it is clear that a bin B_j with C(j) ≥ 1 will receive an additional item only if all bins actually satisfy C(j) ≥ 1.
For the sake of contradiction, let us assume that, after having assigned the last item of E, there is some k ∈ {1, . . . , n} with C(k) < 1, i.e., we did not end up with a feasible solution for the SSP. This would mean that no item was assigned to a bin which already represented a packing pattern, i.e., any filled bin satisfies C(j) < 1 + 1/(3(n − 1)), since only small items are distributed in this phase. This would lead to

lᵀb = C(1) + . . . + C(n) < (n − 1)(1 + 1/(3(n − 1))) + 1 = n + 1/3 ≤ lᵀb,

giving the contradiction. Hence, we have constructed n bins (for the SSP) and the proof is complete.
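The filling phase of this proof can be sketched as follows (illustrative names, exact arithmetic): the current small item is always assigned to a currently least-loaded bin, so that an already filled bin only grows further once every bin is filled.

```python
from fractions import Fraction

def fill_skiving_bins(small_qs, loads):
    """SSP phase of the proof: allocate the remaining small items 1/q
    (largest first), always to a currently least-loaded bin."""
    loads = list(loads)
    for q in sorted(small_qs):               # ascending q = decreasing size 1/q
        j = min(range(len(loads)), key=lambda i: loads[i])
        loads[j] += Fraction(1, q)
    return loads
```

Starting, e.g., from loads 2/3 and 1/2, three items of size 1/6 are spread so that both bins end up with load 5/6, mirroring the balancing behavior exploited in the contradiction argument.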
Given this observation, a direct consequence is the following:

Theorem 12. We have ∆_SSP(E) < 4/3 for any E ∈ DC.

Hence, we have also improved the best upper bound for the SSP case to 4/3. Note that a much more detailed analysis of the arguments applied in the CSP case could also directly influence the quality of the upper bound for the SSP.
Remark 13. In fact, the strategy from the previous proof is much more general. In the CSP case, we have not only proved that the large items of a given instance E ∈ DC−(n, δ) would fit into at most n cutting patterns; the proofs of Theorems 7 and 9 actually show that all large items of any(!) irreducible instance E ∈ DC can be assigned to at most n bins. In the proof of the previous lemma, we used this observation to easily transfer the upper bound from the CSP to the SSP. Effectively, this means that we can apply the same proof also for some larger values of δ, as long as we can verify that all large items i ∈ Λ_E(n, δ) of any irreducible instance E ∈ DC can feasibly be assigned to (at most) n bins without exceeding their capacities.
Finally, it is important to note again that improved upper bounds for the divisible case directly influence the upper bounds for more general instances. By way of example, given the results of this section, the constant term in [24,Theorem 4] can now be reduced from 3/2 to 4/3 for any arbitrary instance E of the SSP.

Conclusions
In this article, we investigated the additive integrality gap of both the CSP and the SSP from a theoretical point of view. In particular, we focussed on the well-known divisible case, where the bin size is assumed to be an integer multiple of any item size l_i, i ∈ I. For such instances, we first developed two reduction strategies to considerably limit the number of instances that have to be considered. More precisely, one of these reductions aims at restricting the quantities b_i, while the second one implements a lower bound on the item sizes l_i, i ∈ I. Based on these two observations, we were able to state the improved upper bound 4/3 for the CSP and the SSP. Moreover, our approach potentially offers the possibility of further improvements if a much more extensive analysis is conducted. Another aspect of future research deals with narrowing the interval between the currently best upper bounds and the largest known gaps provided by concrete instances, as mentioned at the end of Section 2.