Abstract
Best Fit is a well-known online algorithm for the bin packing problem, where a collection of one-dimensional items has to be packed into a minimum number of unit-sized bins. In a seminal work, Kenyon [SODA 1996] introduced the (asymptotic) random order ratio as an alternative performance measure for online algorithms. Here, an adversary specifies the items, but the order of arrival is drawn uniformly at random. Kenyon’s result establishes lower and upper bounds of 1.08 and 1.5, respectively, for the random order ratio of Best Fit. Although this type of analysis model became increasingly popular in the field of online algorithms, no progress has been made on the Best Fit algorithm since the result of Kenyon. We study the random order ratio of Best Fit and tighten the long-standing gap by establishing an improved lower bound of 1.10. For the case where all items are larger than 1/3, we show that the random order ratio converges quickly to 1.25. It is the existence of such large items that crucially determines the performance of Best Fit in the general case. Moreover, this case is closely related to the classical maximum-cardinality matching problem in the fully online model. As a side product, we show that Best Fit satisfies a monotonicity property on such instances, unlike in the general case. In addition, we initiate the study of the absolute random order ratio for this problem. In contrast to asymptotic ratios, absolute ratios must hold even for instances that can be packed into a small number of bins. We show that the absolute random order ratio of Best Fit is at least 1.3. For the case where all items are larger than 1/3, we derive upper and lower bounds of 21/16 and 1.2, respectively.
Introduction
One of the fundamental problems in combinatorial optimization is bin packing. Given a list \(I=(x_1,\ldots ,x_n)\) of n items with sizes from (0, 1] and an infinite number of unit-sized bins, the goal is to pack all items into the minimum number of bins. Formally, a packing is an assignment of items to bins such that for any bin, the sum of assigned items is at most 1. While an offline algorithm has complete information about the items in advance, in the online variant, items are revealed one by one. An online algorithm must pack \(x_i\) without knowing the items following \(x_i\) and without modifying the packing of previous items.
Bin packing was first mentioned by Ullman [38]. As the problem is strongly \({\mathsf{N}}{\mathsf{P}}\)-complete [17], research mainly focuses on efficient approximation algorithms. The offline problem is well understood and even admits approximation schemes [20, 26, 39]. The online variant is still a very active field in the community [7], as the asymptotic approximation ratio of the best online algorithm is still unknown [3, 4]. The first approximation algorithms for the problem, First Fit and Best Fit, were analyzed in [38] and in subsequent work by Garey et al. [16]. Johnson introduced the Next Fit algorithm shortly afterwards [24]. All of these algorithms work in the online setting and appeal through their simplicity: Suppose that \(x_i\) is the current item to pack. The algorithms work as follows:
 \(\mathrm{Best Fit (BF)}\):

Pack \(x_i\) into the fullest bin possible, open a new bin if necessary.
 \(\mathrm{First Fit (FF)}\):

Maintain a list of bins ordered by the time at which they were opened. Pack \(x_i\) into the first possible bin in this list, open a new bin if necessary.
 \(\mathrm{Next Fit (NF)}\):

Pack \(x_i\) into the bin opened most recently if possible; open a new bin if necessary.
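The three heuristics can be sketched in a few lines of Python (a minimal illustration, not code from the paper; exact fractions avoid floating-point rounding issues):

```python
from fractions import Fraction

def best_fit(items):
    """Pack each item into the fullest bin that still has room."""
    bins = []  # current bin loads
    for x in items:
        feasible = [i for i, load in enumerate(bins) if load + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += x
        else:
            bins.append(x)
    return len(bins)

def first_fit(items):
    """Pack each item into the earliest-opened bin that has room."""
    bins = []
    for x in items:
        for i, load in enumerate(bins):
            if load + x <= 1:
                bins[i] += x
                break
        else:
            bins.append(x)
    return len(bins)

def next_fit(items):
    """Pack each item into the most recently opened bin, if possible."""
    bins = []
    for x in items:
        if bins and bins[-1] + x <= 1:
            bins[-1] += x
        else:
            bins.append(x)
    return len(bins)

# Illustrative list on which Next Fit uses more bins than the other two.
items = [Fraction(1, 2), Fraction(7, 10), Fraction(1, 10), Fraction(2, 5)]
bf, ff, nf = best_fit(items), first_fit(items), next_fit(items)
```

On this list, Best Fit and First Fit both use two bins, while Next Fit opens a third bin because it never returns to an earlier bin.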
Another important branch of online algorithms is based on the harmonic algorithm [29]. This approach has been extensively refined and generalized in a sequence of papers [3, 35, 36].
To measure the performance of an algorithm, different metrics exist. For an algorithm \(\mathcal{A}\), let \(\mathcal{A}(I)\) and \({{\mathrm{OPT}}}(I)\) denote the number of bins used by \(\mathcal{A}\) and an optimal offline algorithm, respectively, to pack the items in I. Let \(\mathcal{I}\) denote the set of all item lists. The most common metric for bin packing algorithms is the asymptotic (approximation) ratio defined as
$$\begin{aligned} R_{\mathcal{A}}^\infty = \limsup _{k \rightarrow \infty } \; \sup _{I \in \mathcal{I}:\, {{\mathrm{OPT}}}(I) = k} \frac{\mathcal{A}(I)}{{{\mathrm{OPT}}}(I)} . \end{aligned}$$
Note that \(R_{\mathcal{A}}^\infty\) focuses on instances where \({{\mathrm{OPT}}}(I)\) is large. This avoids anomalies typically occurring on lists that can be packed optimally into few bins. However, many bin packing algorithms are also studied in terms of the stronger absolute (approximation) ratio
$$\begin{aligned} R_{\mathcal{A}} = \sup _{I \in \mathcal{I}} \frac{\mathcal{A}(I)}{{{\mathrm{OPT}}}(I)} . \end{aligned}$$
Here, the approximation ratio \(R_{\mathcal{A}}\) must hold for each possible input. An online algorithm with (absolute or asymptotic) ratio \(\alpha\) is also called \(\alpha\)competitive.
Table 1 shows the asymptotic and absolute approximation ratios of the three heuristics Best Fit, First Fit, and Next Fit. Interestingly, for these algorithms both metrics coincide. While the asymptotic ratios of Best Fit and Next Fit were established already in early work [25], the absolute ratios have been settled rather recently [11, 12].
Note that the above performance measures are clearly worst-case oriented. An adversary can choose items and present them in an order that forces the algorithm into its worst possible behavior. In the case of Best Fit, hardness examples are typically based on lists where small items occur before large items [16]. In contrast, it is known that Best Fit performs significantly better if items appear in non-increasing order [25]. For real-world instances, it seems overly pessimistic to assume an adversarial order of input. Moreover, worst-case ratios sometimes hide interesting properties of algorithms that emerge in average cases. This led to the development of alternative measures.
A natural approach that goes beyond the worst case was introduced by Kenyon [28] in 1996. In the model of random order arrivals, the adversary can still specify the items, but the arrival order is permuted randomly. The performance measure described in [28] is based on the asymptotic ratio, but can be applied to absolute ratios likewise. In the resulting performance metrics, an algorithm must satisfy its performance guarantee in expectation over all permutations. We define
$$\begin{aligned} RR_{\mathcal{A}}^\infty = \limsup _{k \rightarrow \infty } \; \sup _{I :\, {{\mathrm{OPT}}}(I) = k} \frac{{{\mathrm{E}}}[\mathcal{A}(I^\sigma )]}{{{\mathrm{OPT}}}(I)} \qquad \text {and} \qquad RR_{\mathcal{A}} = \sup _{I \in \mathcal{I}} \frac{{{\mathrm{E}}}[\mathcal{A}(I^\sigma )]}{{{\mathrm{OPT}}}(I)} \end{aligned}$$
as the asymptotic random order ratio and the absolute random order ratio of algorithm \(\mathcal{A}\), respectively. Here, \(\sigma\) is drawn uniformly at random from \(\mathcal{S}_n\), the set of permutations of n elements, and \(I^\sigma = (x_{\sigma (1)},\ldots ,x_{\sigma (n)})\) is the permuted list.
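For small lists, the expectation \({{\mathrm{E}}}[\mathcal{A}(I^\sigma )]\) can be computed exactly by enumerating all \(n!\) arrival orders. A minimal sketch on an illustrative four-item instance (two items of size 3/5 and two of size 2/5, so \({{\mathrm{OPT}}}(I) = 2\)):

```python
from fractions import Fraction
from itertools import permutations

def best_fit(items):
    """Best Fit; returns the number of bins used."""
    bins = []
    for x in items:
        feasible = [i for i, load in enumerate(bins) if load + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += x
        else:
            bins.append(x)
    return len(bins)

I = [Fraction(3, 5), Fraction(3, 5), Fraction(2, 5), Fraction(2, 5)]
opt = 2  # two bins, each holding one 3/5-item and one 2/5-item

orders = list(permutations(I))
expected_bf = Fraction(sum(best_fit(list(o)) for o in orders), len(orders))
ratio = expected_bf / opt  # E[BF(I^sigma)] / OPT(I)
```

Only the four orders in which both 2/5-items arrive before both 3/5-items cost a third bin, so the expectation is 13/6 and the ratio 13/12, far below the adversarial 1.5.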
Related Work
The following literature review only covers results that are most relevant to our work. We refer the reader to the article [8] by Coffman et al. for an extensive survey on (online) bin packing. For further problems studied in the random order model, see [19].
Bin packing Kenyon introduced the notion of the asymptotic random order ratio \(RR_{\mathcal{A}}^\infty\) for online bin packing algorithms in [28]. For the Best Fit algorithm, Kenyon proves an upper bound of 1.5 on \(RR_{\mathrm{BF}}^\infty\), demonstrating that random order significantly improves upon \(R_{\mathrm{BF}}^\infty = 1.7\). However, it is conjectured in [8, 28] that the actual random order ratio is close to 1.15. The proof of the upper bound crucially relies on the following scaling property: With high probability, the first t items of a random permutation can be packed optimally into \(\frac{t}{n} {{\mathrm{OPT}}}(I) + o(n)\) bins. On the other hand, Kenyon proves that \(RR_{\mathrm{BF}}^\infty \ge 1.08\). This lower bound is obtained from the weaker i.i.d. model, where item sizes are drawn independently and identically distributed according to a fixed probability distribution.
Coffman et al. [9] analyzed Next Fit in the random order model and showed that \(RR_{\mathrm{NF}}^\infty = 2\), matching the asymptotic approximation ratio \(R_{\mathrm{NF}}^\infty = 2\) (see Table 1). Fischer and Röglin [14] obtained analogous results for Worst Fit [24] and Smart Next Fit [34]. Therefore, all three algorithms fail to perform better in the random order model than in the adversarial model.
A natural property of bin packing algorithms is monotonicity, which holds if an algorithm never uses fewer bins to pack \(I^{\prime}\) than for I, where \(I^{\prime}\) is obtained from I by increasing item sizes. Murgolo [33] showed that Next Fit is monotone, while Best Fit and First Fit are not monotone in general. The concept of monotonicity also arises in related optimization problems, such as scheduling [18] and bin covering [14].
Bin covering The dual problem of bin packing is bin covering, where the goal is to cover as many bins as possible. A bin is covered if it receives items of total size at least 1. Here, a well-studied and natural algorithm is Dual Next Fit (DNF). In the adversarial setting, DNF has asymptotic ratio \(R_{\mathrm{DNF}}^\infty = 1/2\), which is best possible for any online algorithm [6]. Under random arrival order, Christ et al. [6] showed that \(RR_{\mathrm{DNF}}^\infty \le 4/5\). This upper bound was later improved by Fischer and Röglin [13] to \(RR_{\mathrm{DNF}}^\infty \le 2/3\). The same group of authors further showed that \(RR_{\mathrm{DNF}}^\infty \ge 0.501\), i.e., DNF performs strictly better under random order than in the adversarial setting [14].
Matching Online matching can be seen as the key problem in the field of online algorithms [32]. Inspired by the seminal work of Karp et al. [27], who introduced the online bipartite matching problem with one-sided arrivals, the problem has been studied in many generalizations. Extensions include fully online models [15, 21, 22], vertex-weighted versions [1, 23] and, most relevant to our work, random arrival order [23, 31].
Our Results
While several natural algorithms fail to perform better in the random order model, Best Fit emerges as a strong candidate in this model. The existing gap between 1.08 and 1.5 clearly leaves room for improvement; closing (or even narrowing) this gap has been reported as a challenging and interesting open problem in several papers [6, 9, 19]. To the best of our knowledge, our work provides the first new results on the problem since the seminal work by Kenyon. Below we describe our results in detail. In the following theorems, the expectation is over the permutation \(\sigma\) drawn uniformly at random.
If all items are strictly larger than 1/3, the objective is to maximize the number of bins containing two items. This problem is closely related to finding a maximum-cardinality matching in a vertex-weighted graph; our setting corresponds to the fully online model studied in [1] under random order arrival. This special case also arises in the analysis from [28], where it is sufficient to argue that \({{\mathrm{BF}}}(I) \le \frac{3}{2} {{\mathrm{OPT}}}(I) + 1\) under adversarial order. We show that Best Fit performs significantly better under random arrival order:
Theorem 1
For any list I of items larger than 1/3, we have \({{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] \le \frac{5}{4} {{\mathrm{OPT}}}(I) + \frac{1}{4}.\)
The proof of Theorem 1 is developed in Sect. 3 and based on several pillars. First, we show that Best Fit is monotone in this case (Proposition 3), unlike in the general case [33]. This property can be used to restrict the analysis to instances with a well-structured optimal packing. The main technical ingredient is introduced in Sect. 3.3 with Lemma 2 as the key lemma. Here, we show that Best Fit maintains some parts of the optimal packing, depending on certain structures of the input sequence. We identify these structures and show that they occur with constant probability for a random permutation. It seems likely that this property can be used in a similar form to improve the bound \(RR^{\infty }_{\mathrm{BF}} \le 1.5\) for the general case: Under adversarial order, much of the hardness comes from relatively large items of size more than 1/3; in fact, if all items have size at most 1/3, an easy argument shows \(\nicefrac {4}{3}\)-competitiveness even for adversarial arrival order [25].
Moreover, it is natural to ask about the performance in terms of the absolute random order ratio. It is a surprising and rather recent result that for Best Fit, absolute and asymptotic ratios coincide in the adversarial setting. The result of [28] involves large additive terms, and it seems that new techniques are required to gain insights into the absolute random order ratio. In Sect. 3.4, we investigate the absolute random order ratio for items larger than 1/3 and obtain the following result.
Proposition 1
For any list I of items larger than 1/3, we have \({{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] \le \frac{21}{16} {{\mathrm{OPT}}}(I).\)
The upper bound of 21/16 is complemented by the following lower bound.
Proposition 2
There is a list I of items larger than 1/3 with \({{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] > \frac{6}{5} {{\mathrm{OPT}}}(I).\)
The proof of Proposition 2 is given in Appendix B.
We also make progress on the hardness side in the general case, which is presented in Sect. 4. First, we show that the asymptotic random order ratio is larger than 1.10, improving the previous lower bound of 1.08 from [28].
Theorem 2
The asymptotic random order ratio of Best Fit is \(RR{_{\mathrm{BF}}^\infty} > 1.10.\)
As it is typically challenging to obtain lower bounds in the random order model, we exploit the connection to the i.i.d. model. Here, items are drawn independently and identically distributed according to a fixed probability distribution. By defining an appropriate distribution, the problem can be analyzed using Markov chain techniques. Moreover, we present the first lower bound on the absolute random order ratio:
Theorem 3
The absolute random order ratio of Best Fit is \(RR_{\mathrm{BF}} \ge 1.30.\)
Interestingly, our lower bound on the absolute random order ratio is notably larger than in the asymptotic case (see [28] and Theorem 2). This suggests that the absolute random order ratio of Best Fit may be strictly larger than its asymptotic counterpart, in contrast to the adversarial setting, where both ratios coincide.
Notation
We consider a list \(I=(x_1,\ldots ,x_n)\) of n items throughout the paper. Due to the online setting, I is revealed in rounds \(1,\ldots ,n\). In round t, item \(x_t\) arrives and, in total, the prefix list \(I(t):=(x_1,\ldots ,x_t)\) is revealed to the algorithm. The items in I(t) are called the visible items of round t. We use the symbol \(x_t\) for the item itself and its size \(x_t \in (0,1]\) interchangeably. An item \(x_t\) is called large (L) if \(x_t > 1/2\), medium (M) if \(x_t \in \left( 1/3, 1/2 \right]\), and small (S) if \(x_t \le 1/3\). We also say that \(x_t\) is \(\alpha\)-large if \(x_t > \alpha\).
Bins contain items and therefore can be represented as sets. As a bin can usually receive further items in later rounds, the following terms always refer to a fixed round. We define the load of a bin \(\mathcal{B}\) as \(\sum _{x_i \in \mathcal{B}} x_i\). Sometimes, we classify bins by their internal structure. We say \(\mathcal{B}\) is of configuration LM (or \(\mathcal{B}\) is an LM-bin) if it contains one large and one medium item. The configurations L, MM, etc. are defined analogously. Moreover, we call \(\mathcal{B}\) a k-bin if it contains exactly k items. If a bin cannot receive further items in the future, it is called closed; otherwise, it is called open.
The number of bins which Best Fit uses to pack a list I is denoted by \({{\mathrm{BF}}}(I)\). We slightly abuse the notation and refer to the corresponding packing by \({{\mathrm{BF}}}(I)\) as well whenever the exact meaning is clear from the context. Similarly, we denote by \({{\mathrm{OPT}}}(I)\) the number of bins and the corresponding packing of an optimal offline solution.
Finally, for any natural number n we define \([n]:= \{1,\ldots ,n\}\). Let \(\mathcal{S}_n\) be the set of permutations of [n]. If not stated otherwise, \(\sigma\) refers to a permutation drawn uniformly at random from \(\mathcal{S}_n\).
Upper Bound for 1/3-Large Items
In this section, we consider the case where I contains no small items, i.e., where all items are \(\nicefrac {1}{3}\)-large. We develop the technical foundations in Sects. 3.1 to 3.3. The final proofs of Theorem 1 and Proposition 1 are presented in Sect. 3.4.
Monotonicity
We first define the notion of monotone algorithms.
Definition 1
We call an algorithm monotone if increasing the size of one or more items cannot decrease the number of bins used by the algorithm.
One might suspect that any reasonable algorithm is monotone. While this property holds for an optimal offline algorithm and for some online algorithms such as Next Fit [10], Best Fit is not monotone in general [33]. As a counterexample, consider the lists
$$\begin{aligned} I = (0.65,\, 0.34,\, 0.36,\, 0.38,\, 0.28,\, 0.35,\, 0.62) \quad \text {and} \quad I^{\prime} = (0.65,\, 0.36,\, 0.36,\, 0.38,\, 0.28,\, 0.35,\, 0.62), \end{aligned}$$
where \(I^{\prime}\) is obtained from I by increasing the second item from 0.34 to 0.36.
Before arrival of the fifth item, \({{\mathrm{BF}}}(I(4))\) uses the two bins \(\{0.36, 0.38\}\) and \(\{0.65, 0.34\}\), while \({{\mathrm{BF}}}(I^{\prime}(4))\) uses three bins \(\{0.36, 0.36\}\), \(\{0.65\}\), and \(\{0.38\}\). Now, the last three items fill up the existing bins in \({{\mathrm{BF}}}(I^{\prime}(4))\) exactly. In contrast, these items open two further bins in the packing of \({{\mathrm{BF}}}(I(4))\). Therefore, \({{\mathrm{BF}}}(I) = 4 > 3 = {{\mathrm{BF}}}(I^{\prime})\).
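The counterexample can be verified mechanically. The sketch below uses one concrete reconstruction of the two lists that is consistent with the packings described above (the exact sizes are illustrative):

```python
from fractions import Fraction

def best_fit(items):
    """Best Fit; returns the number of bins used."""
    bins = []
    for x in items:
        feasible = [i for i, load in enumerate(bins) if load + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += x
        else:
            bins.append(x)
    return len(bins)

H = lambda c: Fraction(c, 100)  # hundredths as exact fractions

I       = [H(65), H(34), H(36), H(38), H(28), H(35), H(62)]
I_prime = [H(65), H(36), H(36), H(38), H(28), H(35), H(62)]  # 0.34 -> 0.36

bins_I = best_fit(I)              # 4 bins
bins_I_prime = best_fit(I_prime)  # 3 bins, despite the larger item
```

After the first four items, Best Fit has bins {0.65, 0.34} and {0.36, 0.38} for I, but {0.65}, {0.36, 0.36}, {0.38} for I', and the last three items (0.28, 0.35, 0.62) fill the latter bins exactly.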
However, we can show that Best Fit is monotone in the case of \(\nicefrac {1}{3}\)-large items. Interestingly, 1/3 seems to be the threshold for the monotonicity of Best Fit: As the counterexample above shows, a single item \(x \in \left( 1/4, 1/3 \right]\) suffices to force Best Fit into anomalous behavior. Nevertheless, we have the following proposition.
Proposition 3
Given a list I of items larger than 1/3 and a list \(I^{\prime}\) obtained from I by increasing the sizes of one or more items, we have \({{\mathrm{BF}}}(I) \le {{\mathrm{BF}}}(I^{\prime})\).
We provide the proof of Proposition 3 in Appendix A. Enabled by the monotonicity, we can reduce an instance of \(\nicefrac {1}{3}\)large items to an instance of easier structure. This construction is described in the following.
Simplifying the Instance
Let I be a list of items larger than 1/3. Note that both the optimal and the Best Fit packing use only bins of configurations L, LM, MM, and possibly one M-bin. However, we can assume a simpler structure without substantial implications for the competitiveness of Best Fit.
Lemma 1
Let I be any list that can be packed optimally into \({{\mathrm{OPT}}}(I)\) LM-bins. If Best Fit has (asymptotic or absolute) approximation ratio \(\alpha\) for I, then it has (asymptotic or absolute) approximation ratio \(\alpha\) for any list of items larger than 1/3 as well.
Proof
Let \(I_0\) be a list of items larger than 1/3 and let a, b, c, and \(d \le 1\) be the number of bins in \({{\mathrm{OPT}}}(I_0)\) with configurations L, LM, MM, and M, respectively (see Fig. 1a). In several steps, we eliminate L-, MM-, and M-bins from \({{\mathrm{OPT}}}(I_0)\) while making the instance only harder for Best Fit.
First, we obtain \(I_1\) from \(I_0\) by replacing items of size 1/2 by items of size \(1/2 - \varepsilon\). By choosing \(\varepsilon > 0\) small enough, i.e., \(\varepsilon < \min \{ \delta ^+ - 1/2,\, 1/2 - \delta ^- \}\), where \(\delta ^+ = \min \{x_i \mid x_i > 1/2 \}\) and \(\delta ^- = \max \{x_i \mid x_i < 1/2 \}\), it is ensured that Best Fit packs all items into the same bins as before the modification. Further, the modification does not decrease the number of bins in an optimal packing, so we have \({{\mathrm{BF}}}(I_0)={{\mathrm{BF}}}(I_1)\) and \({{\mathrm{OPT}}}(I_0) = {{\mathrm{OPT}}}(I_1)\).
Now, we obtain \(I_2\) from \(I_1\) by increasing item sizes: We replace each of the \(a+d\) items packed in 1-bins in \({{\mathrm{OPT}}}(I_1)\) by large items of size 1. Moreover, any 2-bin (MM or LM) in \({{\mathrm{OPT}}}(I_1)\) contains at least one item smaller than 1/2. These items are enlarged such that they fill their respective bins completely. Therefore, \({{\mathrm{OPT}}}(I_2)\) has \(a+d\) L-bins and \(b+c\) LM-bins (see Fig. 1b). We have \({{\mathrm{OPT}}}(I_2) = {{\mathrm{OPT}}}(I_1)\) and, by Proposition 3, \({{\mathrm{BF}}}(I_2) \ge {{\mathrm{BF}}}(I_1)\).
Finally, we obtain \(I_3\) from \(I_2\) by deleting the \(a+d\) items of size 1. As size-1 items are packed separately in any feasible packing, \({{\mathrm{OPT}}}(I_3) = {{\mathrm{OPT}}}(I_2) - (a+d)\) and \({{\mathrm{BF}}}(I_3) = {{\mathrm{BF}}}(I_2) - (a+d)\).
Note that \({{\mathrm{OPT}}}(I_3)\) contains only LM-bins (see Fig. 1c) and, by assumption, Best Fit has (asymptotic or absolute) approximation ratio \(\alpha\) for such lists. Therefore, in general we have a factor \(\alpha \ge 1\) and an additive term \(\beta\) such that \({{\mathrm{BF}}}(I_3) \le \alpha {{\mathrm{OPT}}}(I_3) + \beta\). It follows that
$$\begin{aligned} {{\mathrm{BF}}}(I_0) = {{\mathrm{BF}}}(I_1) \le {{\mathrm{BF}}}(I_2) = {{\mathrm{BF}}}(I_3) + (a+d) \le \alpha {{\mathrm{OPT}}}(I_3) + \beta + (a+d) \le \alpha {{\mathrm{OPT}}}(I_0) + \beta , \end{aligned}$$
which concludes the proof. \(\square\)
By Lemma 1, we can impose the following constraints on I without loss of generality.
Assumption. For the remainder of the section, we assume that the optimal packing of I has \(k = {{\mathrm{OPT}}}(I)\) LM-bins. For \(i \in [k]\), let \(l_i\) and \(m_i\) denote the large item and the medium item in the ith bin, respectively. We call \(\{l_i, m_i\}\) an LM-pair.
Good Order Pairs
If the adversary could control the order of items, he would send all medium items first, followed by all large items. This way, Best Fit opens \(\lceil k/2 \rceil\) MM-bins and k L-bins and therefore is 1.5-competitive. In a random permutation, we can identify structures with a positive impact on the Best Fit packing. This is formalized in the following random event.
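This adversarial behavior is easy to reproduce; a small sketch with \(k = 4\) LM-pairs (illustrative sizes \(l_i = 1/2 + i\varepsilon\) and \(m_i = 1/2 - i\varepsilon\)):

```python
from fractions import Fraction

def best_fit(items):
    """Best Fit; returns the number of bins used."""
    bins = []
    for x in items:
        feasible = [i for i, load in enumerate(bins) if load + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += x
        else:
            bins.append(x)
    return len(bins)

k, eps, half = 4, Fraction(1, 100), Fraction(1, 2)
medium = [half - i * eps for i in range(1, k + 1)]
large = [half + i * eps for i in range(1, k + 1)]

# All medium items first: Best Fit builds MM-bins, then isolated L-bins.
bf = best_fit(medium + large)
opt = k  # optimal: one LM-bin per pair
```

Here Best Fit uses \(k + k/2 = 6\) bins against the optimum of 4, i.e., exactly the 1.5-competitive behavior described above.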
Definition 2
Consider a fixed permutation \(\pi \in \mathcal{S}_n\). We say that the LM-pair \(\{l_i, m_i\}\) arrives in good order (or is a good order pair) if \(l_i\) arrives before \(m_i\) in \(\pi\).
Note that in the adversarial setting, no LM-pair arrives in good order, while in a random permutation, this holds for any LM-pair independently with probability 1/2. The next lemma is central for the proof of Theorem 1. It shows that the number of LM-pairs in good order bounds the number of LM-bins in the final Best Fit packing from below.
Lemma 2
Let \(\pi \in \mathcal{S}_n\) be any permutation and let X be the number of LM-pairs arriving in good order in \(I^\pi\). The packing \({{\mathrm{BF}}}(I^\pi )\) has at least X LM-bins.
To prove Lemma 2, we model the Best Fit packing by the following bipartite graph: Let \(G_t = (\mathcal{M}_t \cup \mathcal{L}_t, E^{\mathrm{BF}}_t \cup E^{{{\mathrm{OPT}}}}_t)\), where \(\mathcal{M}_t\) and \(\mathcal{L}_t\) are the sets of medium and large items in \(I^\pi (t)\), respectively. The sets of edges represent the LM-matchings in the Best Fit packing and in the optimal packing at time t, i.e.,
$$\begin{aligned} E^{\mathrm{BF}}_t&= \left\{ \{m, l\} \mid m \in \mathcal{M}_t \text { and } l \in \mathcal{L}_t \text { share a bin in the Best Fit packing of } I^\pi (t) \right\} , \\ E^{{{\mathrm{OPT}}}}_t&= \left\{ \{m_i, l_i\} \mid m_i \in \mathcal{M}_t \text { and } l_i \in \mathcal{L}_t \text { for some } i \in [k] \right\} . \end{aligned}$$
We distinguish OPT-edges in good and bad order, according to the corresponding LM-pair. Note that \(G_t\) is not necessarily connected and may contain parallel edges. We illustrate the graph representation by a small example.
Example 1
Let \(\varepsilon > 0\) be sufficiently small and define for \(i \in [4]\) large items \(l_i = 1/2 + i \varepsilon\) and medium items \(m_i = 1/2 - i \varepsilon\). Consider the list \(I^\pi = (l_2, l_1, m_3, m_4, l_4, m_1, m_2, l_3)\). Figure 2a, b show the Best Fit packing and the corresponding graph \(G_7\) before arrival of the last item. Note that \(I^\pi\) has two good order pairs (\(\{l_1,m_1\}\) and \(\{l_2,m_2\}\)) and, in accordance with Lemma 2, the packing has two LM-bins.
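Example 1 can be replayed mechanically; a minimal sketch (with the concrete choice \(\varepsilon = 1/100\), which is small enough here):

```python
from fractions import Fraction

def best_fit_bins(items):
    """Best Fit, returning the contents of each bin."""
    bins = []
    for x in items:
        feasible = [i for i, b in enumerate(bins) if sum(b) + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: sum(bins[i]))].append(x)
        else:
            bins.append([x])
    return bins

eps, half = Fraction(1, 100), Fraction(1, 2)
l = {i: half + i * eps for i in range(1, 5)}  # large items l_1, ..., l_4
m = {i: half - i * eps for i in range(1, 5)}  # medium items m_1, ..., m_4

seq = [l[2], l[1], m[3], m[4], l[4], m[1], m[2], l[3]]

# LM-pairs in good order: l_i arrives before m_i (here exactly i = 1, 2).
good = sum(1 for i in range(1, 5) if seq.index(l[i]) < seq.index(m[i]))

# The packing before the last item arrives (the graph G_7 in Fig. 2).
bins7 = best_fit_bins(seq[:7])
lm7 = sum(1 for b in bins7 if len(b) == 2 and max(b) > half)

# Lemma 2 on the full sequence: at least `good` LM-bins in the end.
lm_final = sum(1 for b in best_fit_bins(seq) if len(b) == 2 and max(b) > half)
```

The simulation confirms two good order pairs and two LM-bins in the packing of the first seven items.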
The proof of Lemma 2 essentially boils down to the following claim:
Claim 1
In each round t and in each connected component C of \(G_t\), the number of BF-edges in C is at least the number of OPT-edges in good order in C.
We first show how Lemma 2 follows from Claim 1. Then, we work towards the proof of Claim 1.
Proof of Lemma 2
Claim 1 implies that in \(G_n\), the total number of BF-edges (summed over all connected components) is at least the total number of OPT-edges in good order, which is exactly X. Therefore, the packing has at least X LM-bins. \(\square\)
Before proving Claim 1, we show the following property of \(G_t\).
Claim 2
Consider the graph \(G_t\) for some \(t \in [n]\). Let \(Q=(b_w, a_{w-1}, b_{w-1}, \ldots , a_1, b_1)\) with \(w \ge 1\) be a maximal alternating path such that \(\{a_j, b_j\}\) is an OPT-edge in good order and \(\{a_j, b_{j+1} \}\) is a BF-edge for any \(j \in [w-1]\) (i.e., a-items and b-items represent medium and large items, respectively). It holds that \(b_w \ge b_1\).
Proof
We show the claim by induction on w. Note that the items’ indices only reflect the position along the path, not the arrival order. For \(w=1\), we have \(Q=(b_w)=(b_1)\) and thus, the claim holds trivially.
Now, fix \(w \ge 2\) and suppose that the claim holds for all paths \(Q^{\prime}\) with \(w^{\prime} \le w-1\). We next prove \(b_w \ge b_1\). Let \(t^{\prime} \le t\) be the arrival time of the a-item \(a_d\) that arrived latest among all a-items in Q. We consider the graph \(G_{t^{\prime}-1}\), i.e., the graph immediately before the arrival of \(a_d\) and its incident edges. Note that in \(G_{t^{\prime}-1}\), all items \(a_i\) with \(i \in [w-1] {\setminus } \{d\}\) and \(b_i\) with \(i \in [w-1]\) are visible. Let \(Q^{\prime}=(b_w,\ldots ,a_{d+1},b_{d+1})\) and \(Q^{\prime\prime}=(b_d,\ldots , a_1, b_1)\) be the connected components of \(b_w\) and \(b_1\) in \(G_{t^{\prime}-1}\). As \(Q^{\prime}\) and \(Q^{\prime\prime}\) are maximal alternating paths shorter than Q, we obtain from the induction hypothesis \(b_w \ge b_{d+1}\) and \(b_d \ge b_1\).
Note that \(b_{d+1}\) and \(b_1\) were visible and packed into L-bins on arrival of \(a_d\). Further, \(a_d\) and \(b_1\) would fit together, as \(a_d + b_1 \le a_d + b_d \le 1\). However, Best Fit packed \(a_d\) with \(b_{d+1}\), implying \(b_{d+1} \ge b_1\). Combining the inequalities yields \(b_w \ge b_{d+1} \ge b_1\), concluding the proof. \(\square\)
Now, we are able to prove the remaining technical claim.
Proof of Claim 1
Note that the number of OPT-edges in good order can only increase on arrival of a medium item \(m_i\) where \(\{m_i, l_i\}\) is an LM-pair in good order. Therefore, it is sufficient to verify Claim 1 in rounds \(t_1< \cdots < t_j\) such that in round \(t_i\), item \(m_i\) arrives and \(l_i\) arrived previously.
Induction base. In round \(t_1,\) there is one OPT-edge \(\{m_1,l_1\}\) in good order. We need to show that there exists at least one BF-edge in \(G_{t_1}\), or, equivalently, at least one LM-bin in the packing. If the bin of \(l_1\) contains a medium item different from \(m_1\), we have identified one LM-bin. Otherwise, Best Fit packs \(m_1\) together with \(l_1\) or some other large item, again creating an LM-bin.
Induction hypothesis. Fix \(i \ge 2\) and assume that Claim 1 holds up to round \(t_{i-1}\).
Induction step. We only consider the connected component of \(m_i\), as by the induction hypothesis, the claim holds for all remaining connected components. If \(m_i\) is packed into an LM-bin, the number of BF-edges increases by one and the claim holds for round \(t_i\). Therefore, assume that \(m_i\) is packed by Best Fit into an M- or MM-bin. This means that in \(G_{t_i}\), vertex \(m_i\) is incident to an OPT-edge in good order, but not incident to any BF-edge. Let \(P=(m_i, l_i, \ldots ,v)\) be the maximal path starting from \(m_i\) alternating between OPT-edges and BF-edges.
Case 1: v is a medium item. For illustration, consider Fig. 2 with \(m_i = m_2\) and \(v = m_3\). Since P begins with an OPT-edge and ends with a BF-edge, the number of BF-edges in P equals the number of OPT-edges in P. The latter number is clearly at least the number of OPT-edges in good order in P.
Case 2: v is a large item. For illustration, consider Fig. 2 with \(m_i = m_1\) and \(v = l_4\). We consider two sub-cases. If P contains at least one OPT-edge which is not in good order, the claim follows by the same argument as in Case 1.
Now, suppose that all OPT-edges in P are in good order. Let \(P^{\prime}\) be the path obtained from P by removing the item \(m_i\). As \(P^{\prime}\) satisfies the premises of Claim 2, we obtain \(l_i \ge v\). This implies that \(m_i\) and v would fit together, as \(m_i + v \le m_i + l_i \le 1\). However, \(m_i\) is packed into an M- or MM-bin by assumption, although v was a feasible option on arrival of \(m_i\). As this contradicts the Best Fit rule, we conclude that Case 2 cannot occur. \(\square\)
Final Proofs
Finally, we prove the main result of this section.
Proof of Theorem 1
Let X be the number of good order pairs in \(I^\sigma\) and let Y be the number of LM-bins in the packing \({{\mathrm{BF}}}(I^\sigma )\). We have \(Y \ge X\) by Lemma 2. For the remaining large and medium items, Best Fit uses \((k-Y)\) L-bins and \(\lceil (k-Y)/2 \rceil\) MM-bins (including possibly one M-bin), respectively. Therefore,
$$\begin{aligned} {{\mathrm{BF}}}(I^\sigma ) = k + \left\lceil \frac{k-Y}{2} \right\rceil \le k + \left\lceil \frac{k-X}{2} \right\rceil = \frac{3k}{2} - \frac{X}{2} + \frac{\xi (X)}{2} , \qquad (1) \end{aligned}$$
where \(\xi (X) = (k-X) \bmod 2\). Using linearity and monotonicity of expectation, we obtain
$$\begin{aligned} {{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] \le \frac{3k}{2} - \frac{{{\mathrm{E}}}[X]}{2} + \frac{\Pr [\xi (X)=1]}{2} . \qquad (2) \end{aligned}$$
Since \(\sigma\) is uniformly distributed on \(\mathcal{S}_n\), each LM-pair arrives in good order with probability 1/2. Therefore, \({{\mathrm{E}}}[X]= k/2\) and \(\Pr [\xi (X)=1] = 1/2\). Hence,
$$\begin{aligned} {{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] \le \frac{3k}{2} - \frac{k}{4} + \frac{1}{4} = \frac{5}{4} {{\mathrm{OPT}}}(I) + \frac{1}{4} , \qquad (3) \end{aligned}$$
where we used \(k = {{\mathrm{OPT}}}(I)\). This concludes the proof. \(\square\)
To obtain the upper bound of 21/16 on the absolute random order ratio (Proposition 1), we analyze a few special cases more carefully.
Proof of Proposition 1
For \(k \ge 4\), the claim follows immediately from Equation (3):
$$\begin{aligned} {{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] \le \frac{5}{4} {{\mathrm{OPT}}}(I) + \frac{1}{4} \le \left( \frac{5}{4} + \frac{1}{4k} \right) {{\mathrm{OPT}}}(I) \le \frac{21}{16} {{\mathrm{OPT}}}(I). \end{aligned}$$
Since Best Fit is clearly optimal for \(k=1\), it remains to verify the cases \(k \in \{2,3\}\).
 \(k=2\):

It is easily verified that there are 16 out of \(4!=24\) permutations where Best Fit is optimal and that it opens at most 3 bins otherwise. Therefore,
$$\begin{aligned} {{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] = \frac{1}{4!} \cdot \left( 16 {{\mathrm{OPT}}}(I) + 8 \cdot \frac{3}{2} {{\mathrm{OPT}}}(I) \right) = \frac{7}{6} {{\mathrm{OPT}}}(I) < \frac{21}{16} {{\mathrm{OPT}}}(I). \end{aligned}$$
 \(k=3\):

When k is odd, there must be at least one LM-bin in the Best Fit packing: Suppose for contradiction that all M-items are packed in MM- or M-bins. As k is odd, there must be an item \(m_i\) packed in an M-bin. If \(l_i\) arrives before \(m_i\), item \(l_i\) is packed in an L-bin, as there is no LM-bin. Therefore, Best Fit packs \(m_i\) with \(l_i\) or some other L-item instead of opening a new bin. If \(l_i\) arrives after \(m_i\), Best Fit packs \(l_i\) with \(m_i\) or some other M-item. We have a contradiction in both cases. Therefore, for \(k=3\) we have at least one LM-bin, even if no LM-pair arrives in good order. Consider the proof of Theorem 1. Instead of \(Y \ge X\), we can use the stronger bound \(Y \ge X^{\prime}\) with \(X^{\prime} := \max \{1, X\}\) on the number of LM-bins. The new random variable satisfies \({{\mathrm{E}}}[X^{\prime}] = k/2 + 1/2^k\) and \(\Pr [\xi (X^{\prime}) = 1] = 1/2 - 1/2^k\). Adapting Equations (1) and (2) appropriately, we obtain
$$\begin{aligned} \frac{{{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )]}{{{\mathrm{OPT}}}(I)}&= \frac{1}{k} \cdot \left( \frac{3k}{2} - \frac{k/2 + 1/2^k}{2} + \frac{1/2 - 1/2^k}{2} \right) \\&= \frac{5}{4} + \frac{1}{4k} - \frac{1}{k 2^k} = \frac{31}{24} < \frac{21}{16} . \end{aligned}$$
\(\square\)
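Both special cases can be cross-checked by exhaustive enumeration. The sketch below uses hypothetical instances with the assumed LM-structure (sizes chosen so that, for \(k=2\), the items \(l_2\) and \(m_1\) do not fit together; the counts match the ones stated in the proof):

```python
from fractions import Fraction
from itertools import permutations

def best_fit(items):
    """Best Fit; returns the number of bins used."""
    bins = []
    for x in items:
        feasible = [i for i, load in enumerate(bins) if load + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += x
        else:
            bins.append(x)
    return len(bins)

def expected_bins(I):
    orders = list(permutations(I))
    return Fraction(sum(best_fit(list(o)) for o in orders), len(orders))

H = lambda c: Fraction(c, 100)

# k = 2: LM-pairs (0.55, 0.45) and (0.60, 0.40); note 0.60 + 0.45 > 1.
I2 = [H(55), H(45), H(60), H(40)]
optimal_orders = sum(best_fit(list(o)) == 2 for o in permutations(I2))
e2 = expected_bins(I2)  # (16 * 2 + 8 * 3) / 24 = 7/3

# k = 3: the proof gives E[BF(I^sigma)] <= (31/24) * OPT(I) = 31/8.
I3 = [H(55), H(60), H(65), H(45), H(40), H(35)]
e3 = expected_bins(I3)
```

On this \(k=2\) instance, exactly 16 of the 24 permutations lead to an optimal packing, reproducing the count used above.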
Lower Bounds
In this section, we present the improved lower bound on \(RR{_{\mathrm{BF}}^\infty}\) (Theorem 2) and the first lower bound on the absolute random order ratio \(RR_{\mathrm{BF}}\) (Theorem 3).
Asymptotic Random Order Ratio
Another model of probabilistic analysis is the i.i.d. model, where the input of the algorithm is a sequence of independent and identically distributed (i.i.d.) random variables. Here, the performance measure of algorithm \(\mathcal{A}\) is \({{{\mathrm{E}}}[\mathcal{A}(I_n(F))]} / {{{\mathrm{E}}}[{{\mathrm{OPT}}}(I_n(F))]}\), where \(I_n(F):=(X_1,\ldots ,X_n)\) is a list of n random variables drawn i.i.d. according to F. This model is in general weaker than the random order model, which is why lower bounds in the random order model can be obtained from the i.i.d. model. This is formalized in the following lemma.
Lemma 3
Consider any online bin packing algorithm \(\mathcal{A}\). Let F be a discrete distribution and \(I_n(F) = (X_1,\ldots ,X_n)\) be a list of i.i.d. samples. For \(n \rightarrow \infty\), there exists a list I of n items such that
$$\begin{aligned} \frac{{{\mathrm{E}}}[\mathcal{A}(I^\sigma )]}{{{\mathrm{OPT}}}(I)} \ge \frac{{{\mathrm{E}}}[\mathcal{A}(I_n(F))]}{{{\mathrm{E}}}[{{\mathrm{OPT}}}(I_n(F))]} - o(1). \end{aligned}$$
Moreover, if there exists a constant \(c > 0\) such that \(X_i \ge c\) for all \(i \in [n]\), we have \({{\mathrm{OPT}}}(I) \ge cn\).
This technique was already used in [28] to establish the lower bound of 1.08, although without a formal proof. Apparently, the only published proofs of this connection address bin covering [6, 13]. For completeness, we provide a constructive proof of Lemma 3 in Appendix C. The improved lower bound from Theorem 2 now follows by combining Lemma 3 with the next lemma.
Lemma 4
There exists a discrete distribution F such that for \(n \rightarrow \infty\), we have \({{\mathrm{E}}}[{{\mathrm{BF}}}(I_n(F))] > \frac{11}{10} {{\mathrm{E}}}[{{\mathrm{OPT}}}(I_n(F))]\) and each sample \(X_i\) satisfies \(X_i \ge 1/4\).
Proof
Let F be the discrete distribution that yields an item of size 1/4 with probability p and an item of size 1/3 with probability \(q := 1-p\). First, we analyze the optimal packing. Let \(N_{4}\) and \(N_{3}\) be the numbers of items of size 1/4 and 1/3 in \(I_n(F)\), respectively. We have
Now, we analyze the expected behavior of Best Fit for \(I_n(F)\). As the only possible item sizes are 1/4 and 1/3, we can consider each bin of load more than 3/4 as closed. Moreover, the number of possible loads for open bins is small, and Best Fit maintains at most two open bins at any time. Therefore, we can model the Best Fit packing by a Markov chain as follows. Let the nine states \(\mathsf{A},\mathsf{B},\ldots ,\mathsf{I}\) be defined as in Fig. 3b. The corresponding transition diagram is depicted in Fig. 3a. This Markov chain converges to the stationary distribution
where we defined \(\vartheta :=\frac{p^3}{1-q^3}\) and \(\uplambda := \vartheta q \left( 3 - q^2 \right) + \vartheta + 3\). A formal proof of this fact can be found in Appendix C.2.
Let \(V_{\mathsf{S}}(t)\) denote the number of visits to state \(\mathsf{S} \in \{ \mathsf{A}, \ldots , \mathsf{I} \}\) up to time t. By a basic result from the theory of ergodic Markov chains (see [30, Sect. 4.7]), it holds that \(\lim _{t \rightarrow \infty } \frac{1}{t} \cdot V_{\mathsf{S}}(t) = \omega _{\mathsf{S}}\). In other words, the proportion of time spent in state \(\mathsf{S}\) approaches its probability \(\omega _{\mathsf{S}}\) in the stationary distribution.
This fact can be used to bound the total number of opened bins over time. Note that Best Fit opens a new bin on the transitions \(\mathsf{A} \rightarrow \mathsf{B}\), \(\mathsf{A} \rightarrow \mathsf{C}\), and \(\mathsf{G} \rightarrow \mathsf{H}\) (see Fig. 3a).
Hence, \({{\mathrm{E}}}[{{\mathrm{BF}}}(I_n(F))] = {{\mathrm{E}}}[V_{\mathsf{A}}(n)] + q \cdot {{\mathrm{E}}}[V_{\mathsf{G}}(n)]\). Finally, setting \(p = 0.60\), we obtain
\(\square\)
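As an illustration, the construction of Lemma 4 can be simulated directly. The sketch below (an illustrative simulation, not part of the proof) samples items of size 1/4 and 1/3 with \(p = 0.60\), runs the standard Best Fit rule (pack into the fullest bin with enough room), and compares against the optimum, which for these two item sizes equals the size bound \(\lceil N_4/4 + N_3/3 \rceil\):

```python
import math
import random
from fractions import Fraction

QUARTER, THIRD = Fraction(1, 4), Fraction(1, 3)

def best_fit(items):
    """Best Fit: pack each item into the fullest bin that still has room.
    With sizes 1/4 and 1/3 only, a bin of load > 3/4 can never receive
    another item, so only the open bins (a constant number) are kept."""
    closed, open_bins = 0, []
    for x in items:
        feasible = [i for i, load in enumerate(open_bins) if load + x <= 1]
        if feasible:
            j = max(feasible, key=lambda i: open_bins[i])
            open_bins[j] += x
        else:
            open_bins.append(x)
            j = len(open_bins) - 1
        if open_bins[j] > Fraction(3, 4):  # closed: even a 1/4-item no longer fits
            closed += 1
            open_bins.pop(j)
    return closed + len(open_bins)

def opt(items):
    # With only the sizes 1/4 and 1/3, the size bound ceil(N4/4 + N3/3)
    # is always achievable, hence equals OPT.
    return math.ceil(sum(items))

random.seed(1)
p, n = 0.60, 200_000
items = [QUARTER if random.random() < p else THIRD for _ in range(n)]
ratio = best_fit(items) / opt(items)
# Lemma 4 gives a limiting ratio above 11/10; we assert a slightly
# weaker threshold to leave room for sampling noise.
assert ratio > 1.08
```

The lemma guarantees an expectation ratio above 11/10 in the limit; the assertion uses the weaker threshold 1.08 only to be robust against sampling fluctuations.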
Absolute Random Order Ratio
Theorem 3 follows from the following lemma.
Lemma 5
There exists a list I such that \({{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] = \frac{13}{10} {{\mathrm{OPT}}}(I)\).
Proof
Let \(\varepsilon > 0\) be sufficiently small and let \(I := (a_1,a_2,b_1,b_2,c)\), where \(a_1 = a_2 = \frac{1}{3} + 4\varepsilon\), \(b_1 = b_2 = \frac{1}{3} + 16\varepsilon\), and \(c = \frac{1}{3} - 8\varepsilon\).
An optimal packing of I has two bins \(\{a_1,a_2,c\}\) and \(\{b_1,b_2\}\), thus \({{\mathrm{OPT}}}(I)=2\). Subsequently, we argue that Best Fit needs two or three bins depending on the order of arrival.
Let E be the event that exactly one b-item arrives within the first two rounds. After the second item, the first bin is closed, as its load is at least \(\frac{1}{3} + 16\varepsilon + \frac{1}{3} - 8\varepsilon = \frac{2}{3} + 8 \varepsilon\). Among the remaining three items, there is a b-item of size \(\frac{1}{3} + 16 \varepsilon\) and at least one a-item of size \(\frac{1}{3} + 4\varepsilon\). This implies that a third bin must be opened for the last item. As E occurs in exactly \(2 \cdot 3 \cdot 2! \cdot 3! = 72\) permutations, we have \(\Pr [E] = \frac{72}{5!} = \frac{3}{5}\).
On the other hand, Best Fit needs only two bins if one of the events F and G, defined in the following, occurs. Let F be the event that both b-items arrive in the first two rounds. Then, the remaining three items fit into one additional bin. Moreover, let G be the event that the set of the first two items is a subset of \(\{a_1,a_2,c\}\). Then, the first bin has load at least \(\frac{2}{3} - 4 \varepsilon\), so no b-item can be packed there. Again, this ensures a packing into two bins.
By counting permutations, we obtain \(\Pr [F] = \frac{2! \cdot 3!}{5!} = \frac{1}{10}\) and \(\Pr [G] = \frac{3 \cdot 2! \cdot 3!}{5!} = \frac{3}{10}\).
As the events E, F, and G partition the probability space, we obtain
\(\square\)
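The five-item instance is small enough to verify exhaustively. The sketch below uses the item sizes from the proof (\(a_i = \frac{1}{3}+4\varepsilon\), \(b_i = \frac{1}{3}+16\varepsilon\), \(c = \frac{1}{3}-8\varepsilon\), as implied by the load computations above) and averages Best Fit over all \(5!\) arrival orders:

```python
from fractions import Fraction
from itertools import permutations

eps = Fraction(1, 1000)                 # any sufficiently small epsilon > 0
a = Fraction(1, 3) + 4 * eps
b = Fraction(1, 3) + 16 * eps
c = Fraction(1, 3) - 8 * eps
items = [a, a, b, b, c]                 # OPT = 2: bins {a, a, c} and {b, b}

def best_fit(seq):
    bins = []
    for x in seq:
        feasible = [i for i, load in enumerate(bins) if load + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += x
        else:
            bins.append(x)
    return len(bins)

counts = [best_fit(p) for p in permutations(items)]  # all 5! = 120 orders
assert set(counts) == {2, 3}
assert Fraction(sum(counts), len(counts)) == Fraction(13, 5)  # = (13/10) * OPT
```

Exact rational arithmetic avoids any floating-point ambiguity in the feasibility checks near load 1.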
The construction in the above proof was used in [25] to prove that Best Fit is 1.5-competitive under adversarial arrival order if all item sizes are close to 1/3. Interestingly, it provides a strong lower bound on the absolute random order ratio as well.
References
Al-Herz, A., Pothen, A.: A 2/3-approximation algorithm for vertex-weighted matching. Discret. Appl. Math. (2019) (in press)
Albers, S., Khan, A., Ladewig, L.: Best fit bin packing with random order revisited. In: 45th International Symposium on Mathematical Foundations of Computer Science (MFCS), pp. 7:1–7:15 (2020)
Balogh, J., Békési, J., Dósa, G., Epstein, L., Levin, A.: A new and improved algorithm for online bin packing. In: Proceedings of the 26th Annual European Symposium on Algorithms (ESA), LIPIcs, vol. 112, pp. 5:1–5:14 (2018)
Balogh, J., Békési, J., Dósa, G., Epstein, L., Levin, A.: A new lower bound for classic online bin packing. In: Approximation and Online Algorithms—17th International Workshop (WAOA), Lecture Notes in Computer Science, vol. 11926, pp. 18–28. Springer (2019)
Boyar, J., Dósa, G., Epstein, L.: On the absolute approximation ratio for first fit and related results. Discret. Appl. Math. 160(13–14), 1914–1923 (2012)
Christ, M.G., Favrholdt, L.M., Larsen, K.S.: Online bin covering: expectations vs. guarantees. Theor. Comput. Sci. 556, 71–84 (2014)
Christensen, H.I., Khan, A., Pokutta, S., Tetali, P.: Approximation and online algorithms for multidimensional bin packing: a survey. Comput. Sci. Rev. 24, 63–79 (2017)
Coffman Jr., E.G., Csirik, J., Galambos, G., Martello, S., Vigo, D.: Bin packing approximation algorithms: survey and classification. In: Handbook of Combinatorial Optimization, pp. 455–531. Springer, New York (2013)
Coffman, E.G., Jr., Csirik, J., Rónyai, L., Zsbán, A.: Random-order bin packing. Discret. Appl. Math. 156(14), 2810–2816 (2008)
Coffman, E.G., Jr., Lueker, G.S.: Probabilistic Analysis of Packing and Partitioning Algorithms. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, Hoboken (1991)
Dósa, G., Sgall, J.: First fit bin packing: a tight analysis. In: 30th International Symposium on Theoretical Aspects of Computer Science (STACS), LIPIcs, vol. 20, pp. 538–549 (2013)
Dósa, G., Sgall, J.: Optimal analysis of best fit bin packing. In: Proceedings of the 41st International Colloquium on Automata, Languages, and Programming (ICALP), pp. 429–441 (2014)
Fischer, C., Röglin, H.: Probabilistic analysis of the dual next-fit algorithm for bin covering. In: LATIN 2016: Theoretical Informatics—12th Latin American Symposium, pp. 469–482 (2016)
Fischer, C., Röglin, H.: Probabilistic analysis of online (class-constrained) bin packing and bin covering. In: LATIN 2018: Theoretical Informatics—13th Latin American Symposium, Lecture Notes in Computer Science, vol. 10807, pp. 461–474. Springer (2018)
Gamlath, B., Kapralov, M., Maggiori, A., Svensson, O., Wajc, D.: Online matching with general arrivals. In: 60th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 26–37 (2019)
Garey, M.R., Graham, R.L., Ullman, J.D.: Worst-case analysis of memory allocation algorithms. In: Proceedings of the 4th Annual ACM Symposium on Theory of Computing (STOC), pp. 143–150 (1972)
Garey, M.R., Johnson, D.S.: Strong NP-completeness results: motivation, examples, and implications. J. ACM 25(3), 499–508 (1978)
Graham, R.L.: Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math. 17(2), 416–429 (1969)
Gupta, A., Singla, S.: Random-order models. CoRR arXiv:2002.12159 (2020)
Hoberg, R., Rothvoss, T.: A logarithmic additive integrality gap for bin packing. In: Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2616–2625 (2017)
Huang, Z., Kang, N., Tang, Z.G., Wu, X., Zhang, Y., Zhu, X.: How to match when all vertices arrive online. In: Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pp. 17–29 (2018)
Huang, Z., Peng, B., Tang, Z.G., Tao, R., Wu, X., Zhang, Y.: Tight competitive ratios of classic matching algorithms in the fully online model. In: Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2875–2886 (2019)
Huang, Z., Tang, Z.G., Wu, X., Zhang, Y.: Online vertex-weighted bipartite matching: beating 1-1/e with random arrivals. ACM Trans. Algorithms 15(3), 38:1–38:15 (2019)
Johnson, D.S.: Fast algorithms for bin packing. J. Comput. Syst. Sci. 8(3), 272–314 (1974)
Johnson, D.S., Demers, A.J., Ullman, J.D., Garey, M.R., Graham, R.L.: Worst-case performance bounds for simple one-dimensional packing algorithms. SIAM J. Comput. 3(4), 299–325 (1974)
Karmarkar, N., Karp, R.M.: An efficient approximation scheme for the one-dimensional bin-packing problem. In: Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (FOCS), pp. 312–320 (1982)
Karp, R.M., Vazirani, U.V., Vazirani, V.V.: An optimal algorithm for online bipartite matching. In: Ortiz, H. (ed.) Proceedings of the 22nd Annual ACM Symposium on Theory of Computing (STOC), pp. 352–358 (1990)
Kenyon, C.: Best-fit bin-packing with random order. In: Proceedings of the 7th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 359–364 (1996)
Lee, C.C., Lee, D.T.: A simple on-line bin-packing algorithm. J. ACM 32(3), 562–572 (1985)
Levin, D.A., Peres, Y.: Markov Chains and Mixing Times, vol. 107. American Mathematical Society, Providence (2017)
Mahdian, M., Yan, Q.: Online bipartite matching with random arrivals: an approach based on strongly factor-revealing LPs. In: Proceedings of the 43rd ACM Symposium on Theory of Computing (STOC), pp. 597–606 (2011)
Mehta, A.: Online matching and ad allocation. Found. Trends Theor. Comput. Sci. 8(4), 265–368 (2013)
Murgolo, F.D.: Anomalous behavior in bin packing algorithms. Discret. Appl. Math. 21(3), 229–243 (1988)
Ramanan, P.V.: Average-case analysis of the smart next fit algorithm. Inf. Process. Lett. 31(5), 221–225 (1989)
Ramanan, P.V., Brown, D.J., Lee, C.C., Lee, D.T.: Online bin packing in linear time. J. Algorithms 10(3), 305–326 (1989)
Seiden, S.S.: On the online bin packing problem. J. ACM 49(5), 640–671 (2002)
Shor, P.W.: The average-case analysis of some on-line algorithms for bin packing. Combinatorica 6(2), 179–200 (1986)
Ullman, J.: The Performance of a Memory Allocation Algorithm, vol. 47. Department of Electrical Engineering, Computer Science Laboratory, Princeton University, Princeton (1971)
de la Vega, W.F., Lueker, G.S.: Bin packing can be solved within 1+epsilon in linear time. Combinatorica 1(4), 349–355 (1981)
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Work supported by the European Research Council, Grant Agreement No. 691672. A preliminary version of this paper received the best paper award at the 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020) and was published in the conference proceedings [2].
Appendices
Appendix A: Monotonicity
Proposition 3 follows by applying the following lemma iteratively. A technically similar proof appeared in [37], where Shor showed that the MBF algorithm is monotone under removal of items.
Lemma 6
Let \(I=(x_1,\ldots ,x_n)\) be any list of items larger than 1/3. Let \(I^{\prime} = (x_1^{\prime},\ldots ,x_n^{\prime})\) with \(x_i^{\prime} > x_i\) for a single \(i \in [n]\) and \(x_j^{\prime} = x_j\) for all \(j \ne i\). We have \({{\mathrm{BF}}}(I) \le {{\mathrm{BF}}}(I^{\prime})\).
Proof
All bins in any packing of I or \(I^{\prime}\) contain at most two items. We call two 1-bins of \({{\mathrm{BF}}}(I)\) and \({{\mathrm{BF}}}(I^{\prime})\) pairwise-identical if they contain items of the same size. Moreover, we call any two 2-bins of \({{\mathrm{BF}}}(I)\) and \({{\mathrm{BF}}}(I^{\prime})\) pairwise-closed, as neither of the two bins can receive a further item.
For ease of notation, let \(I_t = I(t)\) and \(I_t^{\prime} = I^{\prime}(t)\). We show that at any time t, the packings \({{\mathrm{BF}}}(I_t)\) and \({{\mathrm{BF}}}(I_t^{\prime})\) are related in one of three ways (see Fig. 4).
\((*1)\): All bins are pairwise-identical or pairwise-closed.
\((*2)\): All bins are pairwise-identical or pairwise-closed, except for two 1-bins \(B=\{b\}\) and \(B^{\prime}=\{b^{\prime}\}\) in \({{\mathrm{BF}}}(I_t)\) and \({{\mathrm{BF}}}(I_t^{\prime})\), respectively, where \(b < b^{\prime}\).
\((*3)\): All bins are pairwise-identical or pairwise-closed, except for a 2-bin \(C = \{c_1, c_2\}\) in \({{\mathrm{BF}}}(I_t)\) which does not exist in \({{\mathrm{BF}}}(I_t^{\prime})\), and two 1-bins \(B_1^{\prime} = \{b_1^{\prime}\}\), \(B_2^{\prime}=\{b_2^{\prime}\}\) in \({{\mathrm{BF}}}(I_t^{\prime})\) which do not exist in \({{\mathrm{BF}}}(I_t)\).
Note that in all three cases, \({{\mathrm{BF}}}(I_t) \le {{\mathrm{BF}}}(I_t^{\prime})\). As this property is maintained until \(t=n\), the lemma follows. We now show the claim by induction over the rounds.
Before round i, both lists contain items of identical sizes; thus, the packings are clearly related by \((*1)\). In round i, the packings may diverge for the first time, since \(x_i < x_i^{\prime}\). Here, three cases can occur.
(a) Both items go into new bins. Then, \((*2)\) holds.
(b) Both items go into identical 1-bins. Then, \((*1)\) holds.
(c) Best Fit packs \(x_i\) into an existing 1-bin \(B_1\) in \({{\mathrm{BF}}}(I_i)\), while it opens a new bin \(B_2^{\prime}\) for \(x_i^{\prime}\) in \({{\mathrm{BF}}}(I_i^{\prime})\). Before packing \(x_i\) into \(B_1\), there was a 1-bin \(B_1^{\prime}\) in \({{\mathrm{BF}}}(I_{i}^{\prime})\) such that \(B_1\) and \(B_1^{\prime}\) were pairwise-identical. By packing \(x_i\), bin \(B_1\) becomes a 2-bin. Therefore, \((*3)\) holds with \(C = B_1\) as the 2-bin and \(B_1^{\prime}\), \(B_2^{\prime}\) as the 1-bins.
Note that these three cases are exhaustive: Best Fit packs either both items into new bins (a), both items into existing 1-bins (b), or \(x_i\) into an existing 1-bin and \(x_i^{\prime}\) into a new bin (c). In case (b), the 1-bins must be pairwise-identical, as this holds for all bins up to time i. The inverse situation of (c) cannot occur since \(x_i < x_i^{\prime}\).
For the induction step, we consider round \(t \ge i+1\) and suppose that \({{\mathrm{BF}}}(I_{t-1})\) and \({{\mathrm{BF}}}(I_{t-1}^{\prime})\) are related either by \((*1)\), \((*2)\), or \((*3)\). Note that in round t, the current item has equal size \(x_t = x_t^{\prime}\) in both lists again.
Case 1 If \((*1)\) holds at time \(t-1\), then \((*1)\) is maintained in round t.
Case 2 Suppose \((*2)\) holds at time \(t-1\). If \(x_t^{\prime}\) is packed into an existing 1-bin in \({{\mathrm{BF}}}(I_t^{\prime})\), then Best Fit does not open a new bin for \(x_t\) in \({{\mathrm{BF}}}(I_t)\) either. Depending on the chosen bins, either \((*2)\) is maintained or \((*1)\) holds.
Now, assume that \(x_t^{\prime}\) is packed into a new bin in \({{\mathrm{BF}}}(I_t^{\prime})\). Then either a new bin is opened in \({{\mathrm{BF}}}(I_t)\) as well, or \({{\mathrm{BF}}}(I_t)\) packs \(x_t\) into bin B. In the first case, \((*2)\) is maintained; in the second case, \((*3)\) holds. Note that \({{\mathrm{BF}}}(I_t)\) cannot pack \(x_t\) into any existing 1-bin other than B, as otherwise \({{\mathrm{BF}}}(I_t^{\prime})\) would pack \(x_t^{\prime}\) into the corresponding identical bin instead of opening a new bin.
Case 3 Suppose \((*3)\) holds at time \(t-1\). If Best Fit packs \(x_t\) into an existing 1-bin, then it packs \(x_t^{\prime}\) into an existing 1-bin as well. Hence, in both packings a 1-bin becomes a 2-bin and \((*3)\) is maintained.
If Best Fit packs \(x_t\) into a new bin, we know that none of the existing 1-bins in \({{\mathrm{BF}}}(I_t^{\prime})\), except for possibly \(B_1^{\prime}\) or \(B_2^{\prime}\), are suitable for \(x_t^{\prime}\). Therefore, Best Fit either opens a new bin for \(x_t^{\prime}\) as well (then, \((*3)\) is maintained), or it packs \(x_t^{\prime}\) into \(B^{\prime} \in \{B_1^{\prime}, B_2^{\prime}\}\). Suppose that \(B^{\prime}=B_1^{\prime}\). Then, all bins in the packings are pairwise-identical or pairwise-closed, except for \(B_2^{\prime} = \{b_2^{\prime}\}\) in \({{\mathrm{BF}}}(I_t^{\prime})\) and the new bin \(\{x_t\}\) in \({{\mathrm{BF}}}(I_t)\). Hence, \((*2)\) holds, provided that \(b_2^{\prime} > x_t\). To see this, observe that \(b_1^{\prime} + b_2^{\prime} > 1\), since \(b_1^{\prime}\) and \(b_2^{\prime}\) were packed into separate bins previously. Moreover, since Best Fit packed \(x_t^{\prime}\) with \(b_1^{\prime}\), we have \(x_t + b_1^{\prime} \le 1\). Therefore, \(b_2^{\prime} > 1 - b_1^{\prime} \ge 1 - (1 - x_t) = x_t\). The case \(B^{\prime}=B_2^{\prime}\) is analogous. \(\square\)
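Lemma 6 (and hence Proposition 3) can be sanity-checked by random search. The sketch below (illustrative only, not part of the proof) enlarges a single item in random instances of \(\nicefrac{1}{3}\)-large items and confirms that the Best Fit bin count never decreases:

```python
import random

def best_fit(items):
    """Standard Best Fit: fullest feasible bin, else open a new one."""
    bins = []
    for x in items:
        feasible = [i for i, load in enumerate(bins) if load + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += x
        else:
            bins.append(x)
    return len(bins)

random.seed(7)
for _ in range(2000):
    n = random.randint(1, 12)
    # all items strictly larger than 1/3, as Lemma 6 requires
    items = [random.uniform(1 / 3 + 1e-9, 1.0) for _ in range(n)]
    i = random.randrange(n)
    bigger = items.copy()
    bigger[i] = random.uniform(items[i], 1.0)   # enlarge a single item
    assert best_fit(items) <= best_fit(bigger)  # monotonicity (Lemma 6)
```

Note that no such check can succeed for general instances: as discussed in the main part, Best Fit is not monotone when items of size at most 1/3 are allowed.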
Appendix B: Lower Bound for \(\nicefrac{1}{3}\)-Large Items
Here, we prove the existence of a list I with \(\nicefrac{1}{3}\)-large items and \({{\mathrm{E}}}[{{\mathrm{BF}}}(I^\sigma )] > \frac{6}{5} {{\mathrm{OPT}}}(I)\).
Proof of Proposition 2
We construct a list of \(k=3\) LM-pairs. For sufficiently small \(\varepsilon >0\) and \(i \in [k]\), define \(l_i = \frac{1}{2} + i \varepsilon\) and \(m_i = \frac{1}{2} - i \varepsilon\). This way, \(l_1< l_2 < l_3\) and \(m_1> m_2 > m_3\). Clearly, \({{\mathrm{OPT}}}(I)=3\). We show below that Best Fit uses 4 instead of 3 bins in at least 440 of the 720 permutations. Therefore,
We call the event that Best Fit packs two items different from \(\{l_i, m_i\}\) into the same bin a non-optimal match. This event occurs if Best Fit packs either \(l_i\) and \(m_j\) for \(1 \le i < j \le 3\), or two medium items \(m_i\) and \(m_j\), \(i \ne j\), into the same bin. It is easy to see that any packing with a non-optimal match needs at least 4 bins. However, Best Fit never uses more than 4 bins.
In the following, we partition the set of all permutations with a non-optimal match according to the first time at which such a match occurs. Note that the earliest possibility is upon arrival of the second item. Moreover, if no non-optimal match has occurred before arrival of the fifth item, the fifth and sixth items form an LM-pair, and therefore no non-optimal match occurs at all.
Let \(a_1, \ldots , a_4\) denote the first four items in a fixed permutation.

\({\textit{Case 1: First non-optimal match after the second item.}}\) We have either \(a_1, a_2 \in \{l_i, m_j \}\) with \(i < j\), or \(a_1, a_2 \in \{m_i, m_j\}\) with \(i \ne j\). In both cases, we can choose the set \(\{a_1, a_2\}\) in three ways, arrange \(a_1\) and \(a_2\) in 2! ways, and arrange the remaining four items in 4! ways. The total number of such permutations is
$$2 \cdot 3 \cdot 2! \cdot 4! = 288 .$$

\({\textit{Case 2: First non-optimal match after the third item.}}\) Before \(a_3\) arrives, \(a_1\) and \(a_2\) must be packed into separate bins; then \(a_3\) is matched non-optimally. This happens in the following cases:
Case 2a: \(a_1, a_2 \in \{ l_1, l_2 \}\) and \(a_3 = m_3\).
Case 2b: \(a_1, a_2 \in \{l_1, l_3 \}\) and \(a_3 = m_2\).
Case 2c: \(a_1, a_2 \in \{l_2, m_1\}\) and \(a_3 = m_3\).
Case 2d: \(a_1, a_2 \in \{l_3, m_1\}\) and \(a_3 = m_2\).
Case 2e: \(a_1, a_2 \in \{m_2, l_3 \}\) and \(a_3 \in \{m_1, l_1\}\).
For each of the cases 2a–2d, we have \(2! \cdot 3!\) permutations, while in case 2e, we can additionally choose \(a_3\) in one of two ways. Therefore, the total number of permutations for case 2 is
$$4 \cdot 2! \cdot 3! + 2 \cdot 2! \cdot 3! = 72 .$$

\({\textit{Case 3: First non-optimal match after the fourth item.}}\) Here, we must have an optimal match either after the second item (Case 3a) or after the third item (Case 3b).
Case 3a: We have \(a_1, a_2 \in \{l_i, m_i\}\) for some \(i \in [3]\). Now, \(\{a_3,a_4\}\) can contain either the remaining two medium items, or the smaller of the remaining two large items together with the smaller of the remaining two medium items. The fifth and sixth items can be arranged in 2! ways. The total number of such permutations is
$$3 \cdot 2! \cdot 2 \cdot 2! \cdot 2! = 48 .$$

Case 3b: We have an optimal match after the third item followed by a non-optimal match after the fourth item. This happens in the following cases:
\(a_1, a_2 \in \{ l_3, m_1 \}\), \(a_3 = m_3\), \(a_4 = m_2\).
\(a_1, a_2 \in \{ l_1, l_3 \}\), \(a_3 = m_3\), \(a_4 = m_2\).
\(a_1, a_2 \in \{ l_2, m_1 \}\), \(a_3 \in \{m_2, l_1 \}\), \(a_4 = m_3\).
\(a_1, a_2 \in \{ l_1, l_2 \}\), \(a_3 \in \{m_1, m_2 \}\), \(a_4 = m_3\).
\(a_1, a_2 \in \{ l_3, m_2 \}\), \(a_3 = m_3\), \(a_4 \in \{m_1, l_1\}\).
In each of the first two cases, we get \(2! \cdot 2!\) permutations. In each of the remaining three cases, we have \(2! \cdot 2 \cdot 2!\) permutations, since we can choose one additional item among two elements. The total number of permutations in case 3b is thus
$$2 \cdot 2! \cdot 2! + 3 \cdot 2! \cdot 2 \cdot 2! = 32 .$$
In total, we obtain \(288 + 72 + 48 + 32 = 440\) permutations with a non-optimal match.
\(\square\)
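Since the instance has only \(6! = 720\) arrival orders, the count of 440 can be confirmed by exhaustive enumeration; a sketch with an arbitrary fixed \(\varepsilon = 1/100\):

```python
from fractions import Fraction
from itertools import permutations

eps = Fraction(1, 100)
items = [Fraction(1, 2) + i * eps for i in (1, 2, 3)]   # l_1 < l_2 < l_3
items += [Fraction(1, 2) - i * eps for i in (1, 2, 3)]  # m_1 > m_2 > m_3

def best_fit(seq):
    bins = []
    for x in seq:
        feasible = [i for i, load in enumerate(bins) if load + x <= 1]
        if feasible:
            bins[max(feasible, key=lambda i: bins[i])] += x
        else:
            bins.append(x)
    return len(bins)

counts = [best_fit(p) for p in permutations(items)]     # all 6! = 720 orders
assert set(counts) == {3, 4} and counts.count(4) == 440
# E[BF(I^sigma)] = (3*280 + 4*440)/720 = 65/18 > (6/5) * OPT with OPT = 3
assert Fraction(sum(counts), len(counts)) > Fraction(6, 5) * 3
```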
Appendix C: Lower Bound on the Asymptotic Random Order Ratio
Appendix C.1: Proof of Lemma 3
Let \(\mathcal{I} = \{ I \mid \Pr [I_n(F)=I] > 0 \}\) be the set of possible outcomes of \(I_n(F)\). We say that two lists \(I_1, I_2 \in \mathcal{I}\) are similar (\(I_1 \sim I_2\)) if there is a permutation \(\pi \in \mathcal{S}_n\) such that \(I_1 = I_2^\pi\). Note that \(\sim\) defines an equivalence relation on \(\mathcal{I}\). Let \(\mathcal{H}\) be a complete set of representatives of \(\sim\). This way, \(\mathcal{I} = \biguplus _{H \in \mathcal{H}} \{ H^\pi \mid \pi \in \mathcal{S}_n \}\).
We will use the following two technical claims which we will prove later.
Claim 3
Let \(ALG \in \{\mathcal{A}, {{\mathrm{OPT}}}\}\). For each \(H \in \mathcal{H}\), there exists \(\uplambda _H > 0\) such that
Claim 4
For any two nonnegative sequences \((a_i)_{i \in [m]}\) and \((b_i)_{i \in [m]}\) with \(b_i > 0\), we have
$$\frac{\sum _{i=1}^{m} a_i}{\sum _{i=1}^{m} b_i} \le \max _{j \in [m]} \frac{a_j}{b_j} .$$
Proof of Lemma 3
Using Claims 3 and 4, Lemma 3 follows from the following reasoning. Let
Now, it holds that
The second claim follows immediately from \({{\mathrm{OPT}}}(I) \ge \sum _{i=1}^{n} X_i \ge cn\). \(\square\)
It remains to show Claims 3 and 4.
Proof of Claim 3
We first show that all lists of the same equivalence class have the same probability to be an outcome of \(I_n(F)=(X_1,\ldots ,X_n)\).
Let \(I_1=(a_1,\ldots ,a_n)\) and \(I_2=(b_1,\ldots ,b_n)\) with \(I_1 \sim I_2\). Note that both lists contain the same multiset of items. Let m be the number of different item sizes and let \(z_i\) be the multiplicity of size \(y_i\) for \(i \in [m]\) in \(I_1\) (and thus in \(I_2\)). As the random variables \(X_i\) are drawn i.i.d., we have
Now, consider any equivalence class [H] for \(H \in \mathcal{H}\). All lists \(I \in [H]\) have the same probability \(\Pr [I_n(F) = I] =: \uplambda _H\) to be an outcome of \(I_n(F)\). Further, we have \([H] = \{ H^\pi \mid \pi \in \mathcal{S}_n \}\) and thus
\(\square\)
Claim 4 is easy to see:
Proof of Claim 4
Define \(M := \max _{j \in [m]} (a_j / b_j)\). We have
$$\sum _{i=1}^{m} a_i \le \sum _{i=1}^{m} M b_i = M \sum _{i=1}^{m} b_i ,$$
which proves the claim.
\(\square\)
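The mediant-style bound of Claim 4 can be spot-checked numerically; a randomized sketch over small rational sequences (illustrative only):

```python
from fractions import Fraction
from random import Random

rng = Random(3)
for _ in range(1000):
    m = rng.randint(1, 8)
    a = [Fraction(rng.randint(0, 20)) for _ in range(m)]
    b = [Fraction(rng.randint(1, 20)) for _ in range(m)]  # positive denominators
    M = max(x / y for x, y in zip(a, b))
    # a_j <= M * b_j for every j, hence sum(a) <= M * sum(b)
    assert sum(a) <= M * sum(b)
```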
Appendix C.2: Markov Chain from Fig. 3
Lemma 7
The Markov chain from Fig. 3 converges to the stationary distribution \(\omega = (\omega _{\mathsf{A}},\ldots ,\omega _{\mathsf{I}})\) with
where \(\vartheta =\frac{p^3}{1-q^3}\) and \(\uplambda = \vartheta q \left( 3 - q^2 \right) + \vartheta + 3\).
Proof
As the Markov chain from Fig. 3 is irreducible and aperiodic, it converges to a unique stationary distribution. Let \(\omega =(\omega_{\mathsf{A}},\ldots ,\omega _{\mathsf{I}})\) be this distribution, then we have the following system of equations.
We define \(\vartheta :=\frac{p^3}{1-q^3}\) and \(\uplambda := \vartheta q \left( 3 - q^2 \right) + \vartheta + 3\) and claim that
satisfies (Q1) to (Q10). In fact, the validity of equations (Q2) to (Q9) can be seen immediately. To verify (Q1) and (Q10), we first observe that for any \(i \ge 0\) and \(1 \le j \le 3\), it holds that
Now, we prove (Q1). We have
which implies (Q1). Equation (Q10) follows from the following calculation.
\(\square\)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Albers, S., Khan, A., Ladewig, L.: Best Fit Bin Packing with Random Order Revisited. Algorithmica 83, 2833–2858 (2021). https://doi.org/10.1007/s00453-021-00844-5
Keywords
 Online bin packing
 Random arrival order
 Probabilistic analysis