Best Fit Bin Packing with Random Order Revisited

Best Fit is a well known online algorithm for the bin packing problem, where a collection of one-dimensional items has to be packed into a minimum number of unit-sized bins. In a seminal work, Kenyon [SODA 1996] introduced the (asymptotic) random order ratio as an alternative performance measure for online algorithms. Here, an adversary specifies the items, but the order of arrival is drawn uniformly at random. Kenyon’s result establishes lower and upper bounds of 1.08 and 1.5, respectively, for the random order ratio of Best Fit. Although this type of analysis model became increasingly popular in the field of online algorithms, no progress has been made for the Best Fit algorithm after the result of Kenyon. We study the random order ratio of Best Fit and tighten the long-standing gap by establishing an improved lower bound of 1.10. For the case where all items are larger than 1/3, we show that the random order ratio converges quickly to 1.25. It is the existence of such large items that crucially determines the performance of Best Fit in the general case. Moreover, this case is closely related to the classical maximum-cardinality matching problem in the fully online model. As a side product, we show that Best Fit satisfies a monotonicity property on such instances, unlike in the general case. In addition, we initiate the study of the absolute random order ratio for this problem. In contrast to asymptotic ratios, absolute ratios must hold even for instances that can be packed into a small number of bins. We show that the absolute random order ratio of Best Fit is at least 1.3. For the case where all items are larger than 1/3, we derive upper and lower bounds of 21/16 and 1.2, respectively.


Introduction
One of the fundamental problems in combinatorial optimization is bin packing. Given a list I = (x_1, …, x_n) of n items with sizes from (0, 1] and an infinite number of unit-sized bins, the goal is to pack all items into the minimum number of bins. Formally, a packing is an assignment of items to bins such that for any bin, the total size of the assigned items is at most 1. While an offline algorithm has complete information about the items in advance, in the online variant, items are revealed one by one. An online algorithm must pack x_i without knowing the items following x_i and without modifying the packing of previous items.
Bin packing was first mentioned by Ullman [38]. As the problem is strongly NP-complete [17], research mainly focuses on efficient approximation algorithms. The offline problem is well understood and even admits approximation schemes [20,26,39]. The online variant is still a very active field in the community [7], as the asymptotic approximation ratio of the best online algorithm is still unknown [3,4]. The first approximation algorithms for the problem, First Fit and Best Fit, were analyzed in [38] and in a subsequent work by Garey et al. [16]. Johnson introduced the Next Fit algorithm shortly afterwards [24]. All of these algorithms work in the online setting and attract by their simplicity. Suppose that x_i is the current item to pack. Next Fit packs x_i into the bin opened most recently if it fits, and into a new bin otherwise; First Fit packs x_i into the first bin (in the order of opening) with enough space; Best Fit packs x_i into a bin of maximum load where it fits, opening a new bin only if necessary. Another important branch of online algorithms is based on the Harmonic algorithm [29]. This approach has been heavily tuned and generalized in a sequence of papers [3,35,36].
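The three rules can be sketched as follows; this is a minimal illustration in Python (function names and the sample list are ours), using exact rational arithmetic to avoid floating-point ties.

```python
from fractions import Fraction as F

def next_fit(items):
    # Only the most recently opened bin stays available.
    bins, load = 0, F(1)  # sentinel load: the first item always opens a bin
    for x in items:
        if load + x <= 1:
            load += x
        else:
            bins, load = bins + 1, x
    return bins

def first_fit(items):
    loads = []  # bins in order of opening
    for x in items:
        for i, l in enumerate(loads):
            if l + x <= 1:
                loads[i] = l + x
                break
        else:
            loads.append(x)
    return len(loads)

def best_fit(items):
    loads = []
    for x in items:
        feasible = [i for i, l in enumerate(loads) if l + x <= 1]
        if feasible:
            loads[max(feasible, key=lambda i: loads[i])] += x  # fullest feasible bin
        else:
            loads.append(x)
    return len(loads)

items = [F(s, 100) for s in (20, 55, 40, 70, 10, 30)]
print(next_fit(items), first_fit(items), best_fit(items))  # → 4 3 3
```

On this small list, Next Fit pays for never revisiting earlier bins, while First Fit and Best Fit coincide.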
To measure the performance of an algorithm, different metrics exist. For an algorithm A, let A(I) and OPT(I) denote the number of bins used by A and an optimal offline algorithm, respectively, to pack the items in I. The most common metric for bin packing algorithms is the asymptotic (approximation) ratio, defined as

R^∞_A = lim sup_{k→∞} sup { A(I)/OPT(I) | OPT(I) = k }.

Note that R^∞_A focuses on instances where OPT(I) is large. This avoids anomalies that typically occur on lists which can be packed optimally into few bins. However, many bin packing algorithms are also studied in terms of the stronger absolute (approximation) ratio

R_A = sup_I A(I)/OPT(I),

where the supremum is taken over the set of all item lists. Here, the approximation ratio must hold for each possible input. An online algorithm with (absolute or asymptotic) ratio α is also called α-competitive. Table 1 shows the asymptotic and absolute approximation ratios of the three heuristics Best Fit, First Fit, and Next Fit. Interestingly, for these algorithms both metrics coincide. While the asymptotic ratios of Best Fit and Next Fit were established already in early work [25], the absolute ratios have been settled rather recently [11,12].
Note that the above performance measures are clearly worst-case orientated. An adversary can choose items and present them in an order that forces the algorithm into its worst possible behavior. In the case of Best Fit, hardness examples are typically based on lists where small items occur before large items [16]. In contrast, it is known that Best Fit performs significantly better if items appear in non-increasing order [25]. For real-world instances, it seems overly pessimistic to assume adversarial order of input. Moreover, sometimes worst-case ratios hide interesting properties of algorithms that occur in average cases. This led to the development of alternative measures.
A natural approach that goes beyond worst-case analysis was introduced by Kenyon [28] in 1996. In the model of random order arrivals, the adversary can still specify the items, but the arrival order is permuted randomly. The performance measure described in [28] is based on the asymptotic ratio, but can be applied to absolute ratios likewise. In the resulting performance metrics, an algorithm must satisfy its performance guarantee in expectation over all permutations. We define

RR^∞_A = lim sup_{OPT(I)→∞} E[A(I^σ)]/OPT(I)  and  RR_A = sup_I E[A(I^σ)]/OPT(I)

as the asymptotic random order ratio and the absolute random order ratio of algorithm A, respectively. Here, σ is drawn uniformly at random from S_n, the set of permutations of n elements, and I^σ = (x_{σ(1)}, …, x_{σ(n)}) is the permuted list.

Related Work
The following literature review only covers results that are most relevant to our work. We refer the reader to the article [8] on (online) bin packing. For further problems studied in the random order model, see [19].

Bin packing. Kenyon introduced the notion of the asymptotic random order ratio RR^∞_A for online bin packing algorithms in [28]. For the Best Fit algorithm, Kenyon proves an upper bound of 1.5 on RR^∞_BF, demonstrating that random order significantly improves upon R^∞_BF = 1.7. However, it is conjectured in [8,28] that the actual random order ratio is close to 1.15. The proof of the upper bound crucially relies on the following scaling property: With high probability, the first t items of a random permutation can be packed optimally into (t/n)·OPT(I) + o(n) bins. On the other side, Kenyon proves that RR^∞_BF ≥ 1.08. This lower bound is obtained from the weaker i.i.d.-model, where item sizes are drawn independently and identically distributed according to a fixed probability distribution.
Coffman et al. [9] analyzed Next Fit in the random order model and showed that RR^∞_NF = 2, matching its asymptotic approximation ratio R^∞_NF = 2 (see Table 1). Fischer and Röglin [14] obtained analogous results for Worst Fit [24] and Smart Next Fit [34]. Therefore, all three algorithms fail to perform better in the random order model than in the adversarial model.
A natural property of bin packing algorithms is monotonicity, which holds if an algorithm never uses fewer bins to pack I′ than for I, where I′ is obtained from I by increasing item sizes. Murgolo [33] showed that Next Fit is monotone, while Best Fit and First Fit are not monotone in general. The concept of monotonicity also arises in related optimization problems, such as scheduling [18] and bin covering [14].
Bin covering. The dual problem of bin packing is bin covering, where the goal is to cover as many bins as possible. A bin is covered if it receives items of total size at least 1. Here, a well-studied and natural algorithm is Dual Next Fit (DNF). In the adversarial setting, DNF has asymptotic ratio R^∞_DNF = 1/2, which is best possible for any online algorithm [6]. Under random arrival order, Christ et al. [6] showed that RR^∞_DNF ≤ 4/5. This upper bound was later improved by Fischer and Röglin [13] to RR^∞_DNF ≤ 2/3. The same group of authors further showed that RR^∞_DNF ≥ 0.501, i.e., DNF performs strictly better under random order than in the adversarial setting [14].
Matching. Online matching can be seen as the key problem in the field of online algorithms [32]. Inspired by the seminal work of Karp et al. [27], who introduced the online bipartite matching problem with one-sided arrivals, the problem has been studied in many generalizations. Extensions include fully online models [15,21,22], vertex-weighted versions [1,23] and, most relevant to our work, random arrival order [23,31].

Our Results
While several natural algorithms fail to perform better in the random order model, Best Fit emerges as a strong candidate in this model. The existing gap between 1.08 and 1.5 clearly leaves room for improvement; closing (or even narrowing) this gap has been reported as a challenging and interesting open problem in several papers [6,9,19]. To the best of our knowledge, our work provides the first new results on the problem since the seminal work by Kenyon. Below, we describe our results in detail. In the following theorems, the expectation is over the permutation drawn uniformly at random.
If all items are strictly larger than 1/3, the objective is to maximize the number of bins containing two items. This problem is closely related to finding a maximum-cardinality matching in a vertex-weighted graph; our setting corresponds to the fully online model studied in [1] under random order arrival. This special case also arises in the analysis from [28], where it is sufficient to argue that BF(I) ≤ (3/2)·OPT(I) + 1 under adversarial order. We show that Best Fit performs significantly better under random arrival order. The proof of Theorem 1 is developed in Sect. 3 and based on several pillars. First, we show that Best Fit is monotone in this case (Proposition 3), unlike in the general case [33]. This property can be used to restrict the analysis to instances with a well-structured optimal packing. The main technical ingredient is introduced in Sect. 3.3 with Lemma 2 as the key lemma. Here, we show that Best Fit maintains some parts of the optimal packing, depending on certain structures of the input sequence. We identify these structures and show that they occur with constant probability for a random permutation. It seems likely that this property can be used in a similar form to improve the bound RR^∞_BF ≤ 1.5 for the general case: Under adversarial order, much hardness comes from relatively large items of size more than 1/3; in fact, if all items have size at most 1/3, an easy argument shows 4/3-competitiveness even for adversarial arrival order [25].
Moreover, it is natural to ask for the performance in terms of absolute random order ratio. It is a surprising and rather recent result that for Best Fit, absolute and asymptotic ratios coincide. The result of [28] has vast additive terms and it seems that new techniques are required for insights into the absolute random order ratio. In Sect. 3.4, we investigate the absolute random order ratio for items larger than 1/3 and obtain the following result.

Proposition 1 For any list I of items larger than 1/3, we have E[BF(I^σ)] ≤ (21/16)·OPT(I).
The upper bound of 21/16 is complemented by the following lower bound.

Proposition 2 There is a list I of items larger than 1/3 with E[BF(I^σ)] > (6/5)·OPT(I).
The proof of Proposition 2 is given in Appendix B.
We also make progress on the hardness side in the general case, which is presented in Sect. 4. First, we show that the asymptotic random order ratio is larger than 1.10, improving the previous lower bound of 1.08 from [28].

Theorem 2 The asymptotic random order ratio of Best Fit is RR^∞_BF ≥ 1.10.
As it is typically challenging to obtain lower bounds in the random order model, we exploit the connection to the i.i.d.-model. Here, items are drawn independently and identically distributed according to a fixed probability distribution. By defining an appropriate distribution, the problem can be analyzed using Markov chain techniques. Moreover, we present the first lower bound on the absolute random order ratio.

Theorem 3 The absolute random order ratio of Best Fit is RR_BF ≥ 1.3.

Interestingly, our lower bound on the absolute random order ratio is notably larger than in the asymptotic case (see [28] and Theorem 2). This suggests either
- a significant discrepancy between RR_BF and RR^∞_BF, which is in contrast to the adversarial setting (R_BF = R^∞_BF, see Table 1), or
- a disproof of the conjecture RR^∞_BF ≈ 1.15 mentioned in [8,28].

Notation
We consider a list I = (x_1, …, x_n) of n items throughout the paper. Due to the online setting, I is revealed in rounds 1, …, n. In round t, item x_t arrives and, in total, the prefix list I(t) := (x_1, …, x_t) is revealed to the algorithm. The items in I(t) are called the visible items of round t. We use the symbol x_t both for the item itself and for its size, depending on the context. We call an item small if its size is at most 1/3, medium if its size lies in (1/3, 1/2], and large if its size exceeds 1/2. Bins contain items and therefore can be represented as sets. As a bin usually can receive further items in later rounds, the following terms always refer to a fixed round. We define the load of a bin B as ∑_{x_i ∈ B} x_i. Sometimes, we classify bins by their internal structure. We say B is of configuration LM (or B is an LM-bin) if it contains one large and one medium item. The configurations L, MM, etc. are defined analogously. Moreover, we call B a k-bin if it contains exactly k items. If a bin cannot receive further items in the future, it is called closed; otherwise, it is called open.
The number of bins which Best Fit uses to pack a list I is denoted by BF(I) . We slightly abuse the notation and refer to the corresponding packing by BF(I) as well whenever the exact meaning is clear from the context. Similarly, we denote by OPT(I) the number of bins and the corresponding packing of an optimal offline solution.
Finally, for any natural number n, we define [n] := {1, …, n}. Let S_n be the set of permutations of [n]. If not stated otherwise, σ refers to a permutation drawn uniformly at random from S_n.

Upper Bound for 1/3-Large Items
In this section, we consider the case where I contains no small items, i.e., where all items are 1/3-large (larger than 1/3). We develop the technical foundations in Sects. 3.1 to 3.3. The final proofs of Theorem 1 and Proposition 1 are presented in Sect. 3.4.

Monotonicity
We first define the notion of monotone algorithms.

Definition 1
We call an algorithm monotone if increasing the size of one or more items cannot decrease the number of bins used by the algorithm.
One might suspect that any reasonable algorithm is monotone. While this property holds for an optimal offline algorithm and for some online algorithms such as Next Fit [10], Best Fit is not monotone in general [33]. As a counterexample, consider the lists I = (0.36, 0.65, 0.34, 0.38, 0.28, 0.35, 0.62) and I′ = (0.36, 0.65, 0.36, 0.38, 0.28, 0.35, 0.62), where I′ is obtained from I by increasing the size of the third item from 0.34 to 0.36. Before arrival of the fifth item, BF(I(4)) uses the two bins {0.36, 0.38} and {0.65, 0.34}, while BF(I′(4)) uses the three bins {0.36, 0.36}, {0.65}, and {0.38}. Now, the last three items fill up the existing bins in BF(I′(4)) exactly. In contrast, these items open two further bins in the packing of BF(I(4)). Therefore, BF(I) = 4 > 3 = BF(I′). However, we can show that Best Fit is monotone for the case of 1/3-large items. Interestingly, 1/3 seems to be the threshold for the monotonicity of Best Fit: As the counterexample above shows, it is sufficient to have one item x ∈ (1/4, 1/3] to force Best Fit into anomalous behavior. In any case, we have the following proposition.
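This counterexample can be checked mechanically. The following sketch uses a self-contained Best Fit routine with exact rationals; the three completing sizes 0.28, 0.35, 0.62 are our reconstruction, chosen so that they fill the three bins of BF(I′(4)) exactly as described in the text.

```python
from fractions import Fraction as F

def best_fit(items):
    loads = []
    for x in items:
        feasible = [i for i, l in enumerate(loads) if l + x <= 1]
        if feasible:
            loads[max(feasible, key=lambda i: loads[i])] += x  # fullest feasible bin
        else:
            loads.append(x)
    return len(loads)

# I2 increases the third item of I from 0.34 to 0.36; the last three items
# (0.28, 0.35, 0.62, our reconstruction) fill the bins of BF(I2(4)) exactly.
I  = [F(s, 100) for s in (36, 65, 34, 38, 28, 35, 62)]
I2 = [F(s, 100) for s in (36, 65, 36, 38, 28, 35, 62)]
print(best_fit(I), best_fit(I2))  # → 4 3
```

Increasing a single item thus saves a bin, confirming non-monotonicity.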

Proposition 3 Given a list I of items larger than 1/3 and a list I′ obtained from I by increasing the sizes of one or more items, we have BF(I) ≤ BF(I′).
We provide the proof of Proposition 3 in Appendix A. Enabled by the monotonicity, we can reduce an instance of 1 ∕3-large items to an instance of easier structure. This construction is described in the following.

Simplifying the Instance
Let I be a list of items larger than 1/3. Note that both the optimal packing and the Best Fit packing may contain bins of the configurations L, M, LM, and MM. However, we can assume a simpler structure without substantial implications on the competitiveness of Best Fit.

Lemma 1 Let I be any list that can be packed optimally into OPT(I) LM-bins. If Best Fit has (asymptotic or absolute) approximation ratio α for every such list, then it has (asymptotic or absolute) approximation ratio α for any list of items larger than 1/3 as well.
Proof Let I 0 be a list of items larger than 1/3 and let a, b, c, and d ≤ 1 be the number of bins in OPT(I 0 ) with configurations L, LM, MM, and M, respectively (see Fig. 1a). In several steps, we eliminate L-, MM-, and M-bins from OPT(I 0 ) while making the instance only harder for Best Fit.
First, we obtain I_1 from I_0 by replacing all items of size exactly 1/2 by items of size 1/2 − ε. By choosing ε > 0 small enough, it is ensured that Best Fit packs all items into the same bins as before the modification. Further, the modification does not decrease the number of bins in an optimal packing, so we have BF(I_0) = BF(I_1) and OPT(I_0) = OPT(I_1). Now, we obtain I_2 from I_1 by increasing item sizes: We replace each of the a + d items packed in 1-bins in OPT(I_1) by large items of size 1. Moreover, any 2-bin (MM or LM) in OPT(I_1) contains at least one item smaller than 1/2. These items are enlarged such that they fill their respective bin completely. Therefore, OPT(I_2) has a + d L-bins and b + c LM-bins (see Fig. 1b). We have OPT(I_2) = OPT(I_1) and, by Proposition 3, BF(I_2) ≥ BF(I_1).
Finally, we obtain I 3 from I 2 by deleting the a + d items of size 1. As size-1 items are packed separately in any feasible packing, OPT(I 3 ) = OPT(I 2 ) − (a + d) and BF(I 3 ) = BF(I 2 ) − (a + d).
Note that OPT(I_3) contains only LM-bins (see Fig. 1c) and, by assumption, Best Fit has (asymptotic or absolute) approximation ratio α for such lists. Therefore, there are a factor α ≥ 1 and an additive term β such that

BF(I_0) = BF(I_1) ≤ BF(I_2) = BF(I_3) + (a + d) ≤ α·OPT(I_3) + β + (a + d) = α·(OPT(I_0) − (a + d)) + β + (a + d) ≤ α·OPT(I_0) + β. ◻

By Lemma 1, we can impose the following constraints on I without loss of generality.
Assumption. For the remainder of the section, we assume that the optimal packing of I has k = OPT(I) LM-bins. For i ∈ [k] , let l i and m i denote the large item and the medium item in the i-th bin, respectively. We call {l i , m i } an LM-pair.

Good Order Pairs
If the adversary could control the order of items, he would send all medium items first, followed by all large items. This way, Best Fit opens k/2 MM-bins and k L-bins, and therefore uses (3/2)·OPT(I) bins. In a random permutation, however, we can identify structures with a positive impact on the Best Fit packing. This is formalized in the following random event.
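A minimal sketch of this adversarial order, with k = 4 LM-pairs whose sizes are our own choice (l_i = 1/2 + iε and m_i = 1 − l_i, so the optimal packing uses exactly k bins):

```python
from fractions import Fraction as F

def best_fit_loads(items):
    loads = []
    for x in items:
        feasible = [i for i, l in enumerate(loads) if l + x <= 1]
        if feasible:
            loads[max(feasible, key=lambda i: loads[i])] += x  # fullest feasible bin
        else:
            loads.append(x)
    return loads

k, eps = 4, F(1, 100)
large  = [F(1, 2) + i * eps for i in range(1, k + 1)]
medium = [1 - l for l in large]            # l_i + m_i = 1, so OPT(I) = k
loads = best_fit_loads(medium + large)     # adversarial order: mediums first
print(len(loads))  # → 6, i.e. 3k/2 bins: k/2 MM-bins plus k L-bins
```

The mediums pair up into k/2 MM-bins whose residual gaps are too small for any large item, so every large item opens its own bin.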
Definition 2 Consider a fixed permutation σ ∈ S_n. We say that the LM-pair {l_i, m_i} arrives in good order (or is a good order pair) if l_i arrives before m_i in σ.
Note that in the adversarial setting, no LM-pair arrives in good order, while in a random permutation, this holds for any LM-pair independently with probability 1/2. The next lemma is central for the proof of Theorem 1. It shows that the number of LM-pairs in good order bounds the number of LM-bins in the final Best Fit packing from below.

Lemma 2
Let σ ∈ S_n be any permutation and let X be the number of LM-pairs arriving in good order in I^σ. The packing BF(I^σ) has at least X LM-bins.
To prove Lemma 2, we model the Best Fit packing by the following bipartite graph: for t ∈ [n], let G_t = (M_t ∪ L_t, E_t^BF ∪ E_t^OPT), where M_t and L_t are the sets of medium and large items in I^σ(t), respectively. The edge sets represent the LM-matchings in the Best Fit packing and in the optimal packing at time t, i.e., E_t^BF contains an edge {m, l} whenever Best Fit currently packs the medium item m and the large item l into the same bin, and E_t^OPT contains an edge {m_i, l_i} for every LM-pair whose items are both visible. We distinguish OPT-edges in good and bad order, according to the corresponding LM-pair. Note that G_t is not necessarily connected and may contain parallel edges. We illustrate the graph representation by a small example.

The proof of Lemma 2 essentially boils down to the following claim:

Claim 1 In each round t and in each connected component C of G t , the number of BF-edges in C is at least the number of OPT-edges in good order in C.
We first show how Lemma 2 follows from Claim 1. Then, we work towards the proof of Claim 1.

Proof of Lemma 2
Claim 1 implies that in G_n, the total number of BF-edges (summed over all connected components) is at least the total number of OPT-edges in good order, i.e., at least X. Therefore, the packing has at least X LM-bins. ◻

Before proving Claim 1, we show the following property of G_t.

Claim 2
Consider the graph G_t for some t ∈ [n]. Let Q = (b_w, a_{w−1}, b_{w−1}, …, a_1, b_1) with w ≥ 1 be a maximal alternating path such that {a_j, b_j} is an OPT-edge in good order and {a_j, b_{j+1}} is a BF-edge for each j ∈ [w − 1] (i.e., a-items and b-items represent medium and large items, respectively). It holds that b_w ≥ b_1.
Proof We show the claim by induction on w. Note that the items' indices only reflect the position along the path, not the arrival order. For w = 1, we have Q = (b_w) = (b_1) and thus, the claim holds trivially. Now, fix w ≥ 2 and suppose that the claim holds for all paths Q′ with w′ ≤ w − 1. We next prove b_w ≥ b_1. Let t′ ≤ t be the arrival time of the a-item a_d that arrived latest among all a-items in Q. We consider the graph G_{t′−1}, i.e., the graph immediately before the arrival of a_d and its incident edges. Note that in G_{t′−1}, all items a_i with i ∈ [w − 1]∖{d} and b_i with i ∈ [w] are visible. Let Q′ = (b_w, …, a_{d+1}, b_{d+1}) and Q″ = (b_d, …, a_1, b_1) be the connected components of b_w and b_1 in G_{t′−1}, respectively. As Q′ and Q″ are maximal alternating paths shorter than Q, we obtain from the induction hypothesis b_w ≥ b_{d+1} and b_d ≥ b_1.

Now, we are able to prove the remaining technical claim.

Proof of Claim 1
Note that the number of OPT-edges in good order can only increase on arrival of a medium item m_i where {l_i, m_i} is an LM-pair in good order. Therefore, it is sufficient to verify Claim 1 in rounds t_1 < ⋯ < t_j such that in round t_i, item m_i arrives and l_i arrived previously.

Induction base. In round t_1, there is one OPT-edge {m_1, l_1} in good order. We need to show that there exists at least one BF-edge in G_{t_1} or, equivalently, at least one LM-bin in the packing. If the bin of l_1 contains a medium item different from m_1, we have identified one LM-bin. Otherwise, Best Fit packs m_1 together with l_1 or some other large item, again creating an LM-bin.
Induction hypothesis. Fix i ≥ 2 and assume that Claim 1 holds up to round t_{i−1}.

Induction step. We only consider the connected component of m_i, as by the induction hypothesis, the claim holds for all remaining connected components. If m_i is packed into an LM-bin, the number of BF-edges increases by one and the claim holds for round t_i. Therefore, assume that m_i is packed by Best Fit into an M- or MM-bin. This means that in G_{t_i}, vertex m_i is incident to an OPT-edge in good order, but not incident to any BF-edge. Let P = (m_i, l_i, …, v) be the maximal path starting from m_i that alternates between OPT-edges and BF-edges.
Case 1: v is a medium item. For illustration, consider Fig. 2 with m_i = m_2 and v = m_3. Since P begins with an OPT-edge and ends with a BF-edge, the number of BF-edges in P equals the number of OPT-edges in P. The latter number is clearly at least the number of OPT-edges in good order in P.
Case 2: v is a large item. For illustration, consider Fig. 2 with m_i = m_1 and v = l_4. We consider two subcases. If P contains at least one OPT-edge which is not in good order, the claim follows by the same argument as in Case 1. Now, suppose that all OPT-edges in P are in good order. Let P′ be the path obtained from P by removing the item m_i. As P′ satisfies the premises of Claim 2, we obtain l_i ≥ v. This implies that m_i and v would fit together, as m_i + v ≤ m_i + l_i ≤ 1. However, m_i is packed into an M- or MM-bin by assumption, although v was a feasible option on arrival of m_i. As this contradicts the Best Fit rule, we conclude that Case 2 cannot happen. ◻
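As a sanity check of Lemma 2, the following sketch (instance construction and seed are our choices) packs random permutations of a list whose optimal packing consists of k LM-pairs and verifies that the number of LM-bins in the Best Fit packing never falls below the number of good order pairs:

```python
import random
from fractions import Fraction as F

def best_fit_bins(items):
    bins = []
    for x in items:
        feasible = [b for b in bins if sum(b) + x <= 1]
        if feasible:
            max(feasible, key=sum).append(x)  # fullest feasible bin
        else:
            bins.append([x])
    return bins

k, eps = 6, F(1, 1000)
pairs = [(F(1, 2) + i * eps, F(1, 2) - i * eps) for i in range(1, k + 1)]
items = [x for p in pairs for x in p]  # l_i + m_i = 1, so OPT = k LM-bins

rng = random.Random(0)
for _ in range(200):
    perm = items[:]
    rng.shuffle(perm)
    pos = {x: t for t, x in enumerate(perm)}          # all sizes are distinct
    good = sum(pos[l] < pos[m] for l, m in pairs)     # good order pairs X
    lm = sum(len(b) == 2 and max(b) > F(1, 2) for b in best_fit_bins(perm))
    assert lm >= good                                  # invariant of Lemma 2
print("Lemma 2 invariant held for 200 random permutations")
```

A 2-bin containing an item larger than 1/2 is necessarily an LM-bin here, since two large items never fit together.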

Final Proofs
Finally, we prove the main result of this section. To obtain the upper bound of 21/16 on the absolute random order ratio (Proposition 1), we analyze a few special cases more carefully.

Lower Bounds
In this section, we present the improved lower bound on RR ∞ BF (Theorem 2) and the first lower bound on the absolute random order ratio RR BF (Theorem 3).

Asymptotic Random Order Ratio
Another model of probabilistic analysis is the i.i.d.-model, where the input of the algorithm is a sequence of independent and identically distributed (i.i.d.) random variables. Here, the performance measure of algorithm A is E[A(I_n(F))]/E[OPT(I_n(F))], where I_n(F) := (X_1, …, X_n) is a list of n random variables drawn i.i.d. according to F. This model is in general weaker than the random order model, which is why lower bounds in the random order model can be obtained from the i.i.d.-model. This is formalized in the following lemma.

Lemma 3 For every distribution F and every algorithm A, we have RR^∞_A ≥ lim sup_{n→∞} E[A(I_n(F))]/E[OPT(I_n(F))].

This technique has already been used in [28] to establish the lower bound of 1.08, however, without a formal proof. Apparently, the only published proofs of this connection address bin covering [6,13]. We provide a constructive proof of Lemma 3 in Appendix C for completeness. The improved lower bound from Theorem 2 now follows by combining Lemma 3 with the next lemma.

Lemma 4
There exists a discrete distribution F such that for n → ∞, we have E[BF(I_n(F))] > (11/10)·E[OPT(I_n(F))], and each sample X_i satisfies X_i ≥ 1/4.
Proof Let F be the discrete distribution which gives an item of size 1/4 with probability p and an item of size 1/3 with probability q := 1 − p. First, we analyze the optimal packing. Let N_4 and N_3 be the number of items of size 1/4 and 1/3 in I_n(F), respectively. Since bins containing four items of size 1/4 or three items of size 1/3 are packed perfectly, we have OPT(I_n(F)) = N_4/4 + N_3/3 + O(1) and thus E[OPT(I_n(F))] = (p/4 + q/3)·n + O(1). Now, we analyze the expected behavior of Best Fit for I_n(F). As the only possible item sizes are 1/4 and 1/3, we can consider each bin of load more than 3/4 as closed. Moreover, the number of possible loads for open bins is small, and Best Fit maintains at most two open bins at any time. Therefore, we can model the Best Fit packing by a Markov chain as follows. Let the nine states A, B, …, I be defined as in Fig. 3b. The corresponding transition diagram is depicted in Fig. 3. This Markov chain converges to a stationary distribution π, where we define γ := p³/(1 − q³) and λ as the normalizing constant. A formal proof of this fact can be found in Appendix C.2. Let V_S(t) denote the number of visits to state S ∈ {A, …, I} up to time t. By a basic result from the theory of ergodic Markov chains (see [30, Sect. 4.7]), it holds that lim_{t→∞} (1/t)·V_S(t) = π_S. In other words, the proportion of time spent in state S approaches its probability in the stationary distribution.
This fact can be used to bound the total number of opened bins over time. Note that Best Fit opens a new bin on the transitions A → B , A → C , and G → H (see Fig. 3a).
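The chain's prediction can be cross-checked by direct simulation. The sketch below (the parameters n = 3000, p = 1/2, and the seed are arbitrary choices of ours) compares Best Fit with a near-optimal packing into homogeneous bins, which is optimal up to an additive constant:

```python
import math, random
from fractions import Fraction as F

def best_fit(items):
    loads = []
    for x in items:
        feasible = [i for i, l in enumerate(loads) if l + x <= 1]
        if feasible:
            loads[max(feasible, key=lambda i: loads[i])] += x  # fullest feasible bin
        else:
            loads.append(x)
    return len(loads)

rng = random.Random(0)
n, p = 3000, 0.5  # p is not optimized for the lemma's bound
items = [F(1, 4) if rng.random() < p else F(1, 3) for _ in range(n)]
n4 = items.count(F(1, 4))
# Homogeneous bins (four quarters, or three thirds) are perfect packings,
# so this is within an additive constant of OPT.
opt = math.ceil(n4 / 4) + math.ceil((n - n4) / 3)
ratio = best_fit(items) / opt
print(round(ratio, 3))
```

Varying p lets one explore the distribution behind Lemma 4; the proven statement is that some choice of p forces a ratio above 11/10 as n → ∞.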

Absolute Random Order Ratio
Theorem 3 follows from the following lemma.

Lemma 5 There exists a list I such that E[BF(I^σ)] = (13/10)·OPT(I).
Proof Let ε > 0 be sufficiently small and let I := (a_1, a_2, b_1, b_2, c) with a_1 = a_2 = 1/3 + 4ε, b_1 = b_2 = 1/3 + 16ε, and c = 1/3 − 8ε. An optimal packing of I has the two bins {a_1, a_2, c} and {b_1, b_2}, thus OPT(I) = 2. Subsequently, we argue that Best Fit needs two or three bins, depending on the order of arrival.
Let E be the event that exactly one b-item arrives within the first two rounds. After the second item, the first bin is closed, as its load is at least 2/3 + 8ε. Among the remaining three items, there is a b-item of size 1/3 + 16ε and at least one a-item of size 1/3 + 4ε. This implies that a third bin needs to be opened for the last item. As there are exactly 2 · 3 · 2! · 3! = 72 permutations where E happens, we have Pr[E] = 72/5! = 3/5. On the other side, Best Fit needs only two bins if one of the events F and G, defined in the following, happens. Let F be the event that both b-items arrive in the first two rounds. Then, the remaining three items fit into one additional bin. Moreover, let G be the event that the set of the first two items is a subset of {a_1, a_2, c}. Then, the first bin has load at least 2/3 − 4ε, thus no b-item can be packed there. Again, this ensures a packing into two bins.

We have Pr[F] = (2! · 3!)/5! = 1/10 and Pr[G] = (3 · 2 · 3!)/5! = 3/10. As the events E, F, and G partition the probability space, we obtain E[BF(I^σ)] = 3 · Pr[E] + 2 · (Pr[F] + Pr[G]) = 3 · (3/5) + 2 · (2/5) = 13/5 = (13/10) · OPT(I).

◻
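The case analysis can be double-checked by exhaustive enumeration. The item sizes below are our reconstruction, consistent with all constraints used in the proof (a = 1/3 + 4ε, b = 1/3 + 16ε, c = 1/3 − 8ε with ε = 1/100):

```python
from fractions import Fraction as F
from itertools import permutations

def best_fit(items):
    loads = []
    for x in items:
        feasible = [i for i, l in enumerate(loads) if l + x <= 1]
        if feasible:
            loads[max(feasible, key=lambda i: loads[i])] += x  # fullest feasible bin
        else:
            loads.append(x)
    return len(loads)

eps = F(1, 100)
a, b, c = F(1, 3) + 4 * eps, F(1, 3) + 16 * eps, F(1, 3) - 8 * eps
I = (a, a, b, b, c)  # OPT(I) = 2: bins {a, a, c} and {b, b}

total = sum(best_fit(perm) for perm in permutations(I))
print(F(total, 120))  # → 13/5, i.e. E[BF(I^σ)] = (13/10) * OPT(I)
```

Enumerating all 5! = 120 arrival orders reproduces the expectation 13/5 exactly.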
The construction from the above proof is used in [25] to prove that Best Fit is 1.5-competitive under adversarial arrival order if all item sizes are close to 1/3. Interestingly, it gives a strong lower bound on the absolute random order ratio as well.

Appendix A: Monotonicity
Proposition 3 follows by applying the following lemma iteratively. A technically similar proof appeared in [37], where Shor showed that the MBF algorithm is monotone under removal of items.

Lemma 6
Let I = (x_1, …, x_n) be any list of items larger than 1/3, and let I′ be obtained from I by increasing the size of exactly one item. Then, BF(I) ≤ BF(I′).

Proof All bins in any packing of I or I′ contain at most two items. We call two 1-bins of BF(I) and BF(I′) pairwise-identical if they contain items of the same size. Moreover, we call any two 2-bins of BF(I) and BF(I′) pairwise-closed, as neither of the two bins can receive a further item. For ease of notation, let I_t = I(t) and I′_t = I′(t). We show that at any time t, the packings BF(I_t) and BF(I′_t) are related in one of three ways (see Fig. 4).

Appendix B: Lower Bound for 1/3-Large Items

Here, we prove the existence of a list I with 1/3-large items and E[BF(I^σ)] > (6/5)·OPT(I).

Proof of Proposition 2
We construct a list I of k = 3 LM-pairs. For sufficiently small ε > 0 and i ∈ [k], define l_i = 1/2 + iε and m_i = 1/2 − iε. This way, l_1 < l_2 < l_3 and m_1 > m_2 > m_3. Clearly, OPT(I) = 3. We can show that Best Fit uses 4 instead of 3 bins in at least 440 of the 6! = 720 permutations. Therefore, E[BF(I^σ)] ≥ (3 · 280 + 4 · 440)/720 = 2600/720 > (6/5)·OPT(I). We call the event that Best Fit packs two items different from {l_i, m_i} into the same bin a non-optimal match. This event occurs if Best Fit packs either l_i and m_j for 1 ≤ i < j ≤ 3, or two medium items m_i and m_j, into the same bin. It is easy to see that any packing with a non-optimal match needs at least 4 bins. However, Best Fit never uses more than 4 bins.
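The claimed count can be confirmed by brute force over all 6! = 720 arrival orders (ε = 1/100 and the Best Fit routine are our choices):

```python
from fractions import Fraction as F
from itertools import permutations

def best_fit(items):
    loads = []
    for x in items:
        feasible = [i for i, l in enumerate(loads) if l + x <= 1]
        if feasible:
            loads[max(feasible, key=lambda i: loads[i])] += x  # fullest feasible bin
        else:
            loads.append(x)
    return len(loads)

eps = F(1, 100)
# l_i = 1/2 + i*eps and m_i = 1/2 - i*eps for i = 1, 2, 3; OPT(I) = 3
I = [F(1, 2) + i * eps for i in (1, 2, 3)] + [F(1, 2) - i * eps for i in (1, 2, 3)]

counts = [best_fit(p) for p in permutations(I)]
bad = sum(c == 4 for c in counts)
print(bad, F(sum(counts), 720))  # permutations needing 4 bins, and E[BF(I^σ)]
```

Since l_i + m_j ≤ 1 holds exactly when i ≤ j, the pair-fit relations are the same for every sufficiently small ε, so the enumeration does not depend on the particular value 1/100.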
In the following, we partition the set of all permutations with a non-optimal match according to the first time where this happens. Note that the first possibility is after arrival of the second item. Moreover, if no non-optimal match happened before arrival of the fifth item, the fifth and sixth item build an LM-pair and therefore, no non-optimal match occurs at all.
Let a 1 , … , a 4 denote the first four items in a fixed permutation.

Case 1: First non-optimal match after the second item
We have either a_1, a_2 ∈ {l_i, m_j} with i < j, or a_1, a_2 ∈ {m_i, m_j} with i ≠ j. In both cases, we can choose the set {a_1, a_2} in three ways, arrange a_1 and a_2 in 2! ways, and arrange the remaining items in 4! ways. The total number of such permutations is 2 · 3 · 2! · 4! = 288.

Case 3b
We have an optimal match after the third item, followed by a non-optimal match after the fourth item. This happens in the following cases. In each of the first two cases, we get 2! · 2! permutations. In each of the remaining three cases, we have 2! · 2 · 2! permutations, since we can choose one additional item among two elements. The total number of permutations in case 3b is thus 2 · 2! · 2! + 3 · 2! · 2 · 2! = 32.

Appendix C: Proof of Lemma 3

Let 𝕀 = {I | Pr[I_n(F) = I] > 0} be the set of possible outcomes of I_n(F). We say that two lists I_1, I_2 ∈ 𝕀 are similar (I_1 ∼ I_2) if there is a permutation σ ∈ S_n such that I_1 = I_2^σ. Note that ∼ defines an equivalence relation on 𝕀. Let ℋ be a complete set of representatives of ∼. This way, 𝕀 = ⨄_{H ∈ ℋ} {H^σ | σ ∈ S_n}. We will use the following two technical claims, which we will prove later.

Appendix C.2: The Stationary Distribution

We claim that π = (1/λ) · (1, p, q + pq, p², 2pq + p²q, q² + 2pq², γ, γq, γq²), where γ := p³/(1 − q³) and λ is the normalizing constant, satisfies (Q1) to (Q10). In fact, the validity of equations (Q2) to (Q9) can be seen immediately. To verify (Q1) and (Q10), we first establish an auxiliary identity that holds for any i ≥ 0 and 1 ≤ j ≤ 3; equations (Q1) and (Q10) then follow by direct calculation.