Improved Approximation Algorithm for k-level Uncapacitated Facility Location Problem (with Penalties)
Abstract
We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that, for any constant k, in polynomial time delivers solutions of cost at most α_k times OPT, where α_k is an increasing function of k with \(\lim _{k\to \infty } \alpha _{k} = 3\). Our algorithm rounds a fractional solution to an extended LP formulation of the problem. The rounding builds upon the technique of iteratively rounding fractional solutions on trees (Garg, Konjevod, and Ravi SODA'98) originally used for the group Steiner tree problem. We improve the approximation ratio for k-level UFL for all k ≥ 3; in particular, we obtain ratios equal to 2.02, 2.14, and 2.24 for k = 3, 4, and 5.
Second, we give a simple interpretation of the randomization process (Li ICALP'2011) for 1-level UFL in terms of solving an auxiliary (factor-revealing) LP. Armed with this simple viewpoint, we exercise the randomization on our algorithm for k-level UFL. We further improve the approximation ratio for all k ≥ 3, obtaining 1.97, 2.09, and 2.19 for k = 3, 4, and 5.
Third, we extend our algorithm to the k-level UFL with penalties (k-level UFLWP), in which the setting is the same as in k-level UFL except that the planner has the option to pay a penalty instead of connecting chosen clients.
Keywords
Facility location · Approximation algorithms

1 Introduction
In the uncapacitated facility location (UFL) problem the goal is to open facilities in a subset of given locations and connect each client to an open facility so as to minimize the sum of opening costs and connection costs. In the penalty avoiding (prize collecting) variant of the problem, a fixed penalty can be paid instead of connecting a client.
In the k-level facility location problem (k-level UFL) we have a set C of clients and a set \(F = \bigcup _{l = 1}^{k} F_{l}\) of facilities (locations where facilities may potentially be opened). Facilities are of k different types (levels), e.g., for k = 3 one may think of these facilities as shops, warehouses, and factories. Each set F_l contains the facilities on level l. Each facility i has an opening cost f_i, and for each i, j ∈ C∪F there is a distance c_{i,j} ≥ 0 which satisfies the triangle inequality. The task is to connect each client to an open facility at each level, i.e., each client j needs to be connected by a path (j, i_1, i_2, ⋯, i_{k−1}, i_k), where i_l is an open facility at level l. We aim at minimizing the total cost of opening facilities (at all levels) plus the total connection cost, i.e., the sum of the lengths of the clients' paths.
In the k-level uncapacitated facility location problem with penalties (k-level UFLWP), the setting is the same as in k-level UFL except that each client j can either be connected by a path or be rejected, in which case the penalty p_j must be paid (p_j can be thought of as the loss of profit). The goal is to minimize the sum of the total cost of opening facilities (at all levels), the total connection cost, and the total penalty cost. In the uniform version of the problem all penalties are the same, i.e., for any two clients j_1, j_2 ∈ C we have \(p_{j_{1}} = p_{j_{2}}\).
1.1 Related Work and Our Contribution
The studied k-level UFL generalizes the standard 1-level UFL, for which Guha and Khuller [13] showed a 1.463-hardness of approximation. The hardness of the multilevel variant was recently improved by Krishnaswamy and Sviridenko [15], who showed 1.539-hardness for two levels (k = 2) and 1.61-hardness for general k. This demonstrates that multilevel facility location is strictly harder to approximate than the single-level variant, for which Li [16] presented the currently best known 1.488-approximation algorithm, obtained by a nontrivial randomization of a certain scaling parameter in the LP-rounding algorithm of Chudak and Shmoys combined with the primal-dual algorithm of Jain et al.
The first constant-factor approximation algorithm for k = 2 is due to Shmoys, Tardos, and Aardal [18], who gave a 3.16-approximation algorithm. For general k, the first constant-factor approximation algorithm was the 3-approximation algorithm of Aardal, Chudak, and Shmoys [1].
As it was naturally expected that the problem is easier for a smaller number of levels, Ageev, Ye, and Zhang [2] gave an algorithm which reduces an instance of the k-level problem to a pair of instances of the (k−1)-level problem and of the single-level problem. By this reduction they obtained a 2.43-approximation for k = 2 and a 2.85-approximation for k = 3. This was later improved by Zhang [22], who obtained a 1.77-approximation for k = 2, a 2.53-approximation^{1} for k = 3, and a 2.81-approximation for k = 4. Byrka and Aardal [5] then improved the approximation ratio for k = 3 to 2.492.
1-level UFL with penalties was first introduced by Charikar et al. [9], who gave a 3-approximation algorithm based on a primal-dual method. Later, Jain et al. [14] observed that their greedy algorithm for UFL could be adapted to UFLWP with approximation ratio 2. Xu and Xu [20, 21] proposed a 2.736-approximation algorithm based on LP-rounding and a combinatorial 1.853-approximation algorithm combining local search with the primal-dual method. Later, Geunes et al. [12] presented an algorithmic framework which extends any LP-based α-approximation algorithm for UFL to a (1−e^{−1/α})^{−1}-approximation algorithm for UFL with penalties. As a result, they gave a 2.056-approximation algorithm for this problem. Recently, Li et al. [17] extended the LP-rounding algorithm of Byrka and Aardal [5] and the analysis of Li [16] to UFLWP, giving the currently best 1.5148-approximation algorithm.
For multilevel UFLWP, Bumb [4] gave a 6-approximation algorithm by extending the primal-dual algorithm for multilevel UFL. Asadi et al. [3] presented an LP-rounding based 4-approximation algorithm by converting the LP-based algorithm for UFLWP of Xu and Xu [20] to k levels.
Zhang [22] predicted the existence of an algorithm for k-level UFL that for any fixed k has approximation ratio strictly smaller than 3. In this paper we give such an algorithm, which is a natural generalization of LP-rounding algorithms for 1-level UFL. We further improve the ratios by extending the randomization process proposed by Li [16] for 1-level UFL to k-level UFL. Our new LP-rounding algorithm improves the currently best known approximation ratio for k-level UFL for every k > 2. The ratios we obtain for k ≤ 10 are summarized in the following table.
In addition, we show that our algorithm can be naturally generalized to obtain an improved approximation algorithm for k-level UFLWP.
1.2 The Main Idea Behind Our Algorithm
The 3-approximation algorithm of Aardal, Chudak, and Shmoys rounds a fractional solution to the standard path LP-relaxation of the studied problem by clustering clients around so-called cluster centers. Each cluster center gets a direct connection, while all the other clients only get a 3-hop connection via their centers. In the single-level UFL problem, Chudak and Shmoys observed that by randomly opening facilities one may obtain an improved algorithm, using the fact that each client, with at least some fixed probability, gets an open facility within a 1-hop path distance. While in the single-level problem independently sampling facilities to open is sufficient, the multilevel variant requires coordinating the process of opening facilities across levels.
The key idea behind our solution relies on the observation that the optimal integral solution has the form of a forest, while the fractional solution to the standard LP-relaxation may not have this structure. We start by modifying the instance, and hence the LP, so that we enforce the forest structure also for the fractional solution of the relaxation. Having the hierarchical structure of the trees, we then use the technique of Garg, Konjevod, and Ravi [11] to first round the top of the tree, and then only consider the descendant edges if the parent edge is selected. This approach naturally leads to sampling trees (not opening lower-level facilities if their parent facilities are closed), but to eventually apply the technique to a location problem, we need to make it compatible with clustering. To this end we must ensure that all cluster centers get a direct 1-hop path service. This we obtain by a specific modification of the rounding algorithm, which ensures opening exactly one direct path for each cluster center, while preserving the necessary randomness for all the other clients. This is only possible because cluster centers do not share top-level facilities, and when rounding a single tree we only care about at most one cluster center. In Section 3.2 we propose a token-passing based rounding procedure which has exactly the desired properties.
The key idea behind importing the randomization process to k-level UFL relies on the observation that algorithms whose performance can be analysed with a linear function of certain instance parameters, like the Chudak and Shmoys algorithm [10] for UFL, can easily be combined and analysed with a natural factor-revealing LP. This simplifies the argument of Shi Li [16] for his 1.488-approximation algorithm for UFL, as an explicit distribution for the parameters, obtained in [16] by a linear program, is not needed in our factor-revealing LP. With this tool one can easily randomize the scaling factor in LP-rounding algorithms for various variants of the UFL problem.
2 Extended LP Formulation for k-level UFL
To describe our new LP we first describe a process of splitting vertices of the input graph into a number of (polynomially many for fixed k) copies of each potential facility location.
Graph modification.
Our idea is to have a graph in which each facility t on level j may only be connected to a single facility on level j + 1. Since we do not know a priori to which facility on level j + 1 facility t is connected in the optimal solution, we will introduce multiple copies of t, one for each possible parent on level j + 1.
To be more precise, we let F′ denote the original set of facilities, and we construct a new set of facilities denoted by F. Nothing changes for facilities in the set \(F^{\prime }_{k}\), so \(F_{k} = F^{\prime }_{k}\). For each facility \(t \in F^{\prime }_{k-1}\) we have \(|F_{k}|\) copies, each connected with a different facility in the set F_k, so the cardinality of the set F_{k−1} is equal to \(|F_{k}| \cdot |F^{\prime }_{k-1}|\). In general, for each i = 1, 2, …, k−1 the set F_i has \(|F_{i+1}|\) copies of each element of the set \(F^{\prime }_{i}\), and each copy is connected with a different element of the set F_{i+1}, so \(|F_{i}| = |F_{i+1}| \cdot |F^{\prime }_{i}|\). Observe that the copies so created of facilities at level l are in one-to-one correspondence with paths (i_l, i_{l+1}, …, i_k) of original facilities on levels l, l+1, …, k. We will use such paths on the original facilities as names for the facilities in the extended instance.
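The splitting can be sketched as follows (a minimal sketch under the assumption that each level is given as a list of facility names; the function name `split_levels` and the toy instance are ours, not from the paper):

```python
def split_levels(levels):
    """levels[l] lists the original facilities F'_{l+1} (levels 1..k).
    Returns the extended instance: each new facility on level l is named by
    the path (i_l, i_{l+1}, ..., i_k) of original facilities it hangs under,
    so it has exactly one possible parent on level l + 1."""
    k = len(levels)
    new_levels = [None] * k
    new_levels[k - 1] = [(i,) for i in levels[k - 1]]   # F_k = F'_k
    for l in range(k - 2, -1, -1):
        # one copy of every original facility per parent path on level l + 1
        new_levels[l] = [(i,) + parent
                        for i in levels[l]
                        for parent in new_levels[l + 1]]
    return new_levels

# k = 3: shops, warehouses, factories
ext = split_levels([["s1", "s2"], ["w1"], ["f1", "f2"]])
# |F_2| = |F'_2| * |F_3| = 1 * 2 and |F_1| = |F'_1| * |F_2| = 2 * 2
```

Each tuple is exactly the path-name described above, so the parent of a copy is obtained by dropping its first coordinate.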
Connection and Service Cost.
P_C is the set of paths (in the graph described above) which start in some client and end in a facility at level k. P_j is the set of facilities at level j in the extended instance, or alternatively the set of paths on original facilities which start in a facility at level j and end in a facility at level k. Now we define the cost c_p of a path p. For p = (c, i_1, i_2, ⋯, i_k) ∈ P_C we have \(c_{p} = c_{c, i_{1}} + c_{i_{1}, i_{2}} + \ldots + c_{i_{k-1}, i_{k}}\), and for p = (i_j, i_{j+1}, ⋯, i_k) ∈ P_j we have \(c_{p} = f_{i_{j}}\). So if p ∈ P_C then c_p is a service cost (i.e., the length of path p), and if p ∈ P_j then c_p is the cost of opening the first facility on this path. We set \(P = P_{C} \cup \bigcup _{j = 1}^{k} P_{j}\).
2.1 The LP
The natural interpretation of the above LP is as follows. Inequality (1) states that each client is assigned to at least one path. Inequality (2) encodes that the opening of a lower-level facility implies the opening of its unique higher-level facility. The most complicated inequality (3), for a client j ∈ C and a facility i_l ∈ F_l, imposes that the opening of i_l must be at least the total usage of i_l by client j.
Lemma 1
Let x and (v, y, w) be optimal solutions to the above primal and dual linear programs, respectively. For any p ∈ P_C, if x_p > 0, then c_p ≤ v_j, where j is the client connected by the path p.
Proof
Let x^{∗} be an optimal fractional solution to LP (1)–(5). Let \(P^{j} = \{p \in P_{C} : j \in p \wedge x^{*}_{p} > 0\}\) denote the set of paths beginning in client j which are in the support of the solution x^{∗}. Define \(d^{av}(j) = C_{j}^{*} = {\sum }_{p \in P^{j}} c_{p} x_{p}^{*}\), \(d^{max}(j) = \max _{p\in P^{j} : x^{*}_{p} > 0} c_{p} \leq v_{j}\), and \(F_{j}^{*} = v^{*}_{j} - C_{j}^{*}\). Naturally, \(F^{*} = {\sum }_{j \in C} F_{j}^{*}\) and \(C^{*} = {\sum }_{j \in C} C_{j}^{*}\).
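For concreteness, the quantities d^{av}(j) and d^{max}(j) can be computed from the support of a single client as in the following sketch (a hypothetical helper with our own naming, not part of the paper's pseudocode):

```python
def client_stats(support):
    """support: list of (c_p, x_p) pairs over the paths p in P^j of client j,
    with the weights x_p summing to 1 by constraint (1).
    Returns d_av(j) (weighted average cost) and d_max(j) (max support cost)."""
    d_av = sum(c * w for c, w in support)
    d_max = max(c for c, w in support if w > 0)
    return d_av, d_max

# a client served half-half by a path of length 2 and a path of length 4
d_av, d_max = client_stats([(2.0, 0.5), (4.0, 0.5)])
# d_av = 3.0, d_max = 4.0
```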
3 Algorithm and Analysis for klevel UFL
The algorithm starts by solving the above-described extended LP which, in contrast to the LP used in [1], enforces the fractional solution to have a forest-like structure. Step 3 can be interpreted as an adaptation of (by now standard) LP-rounding techniques used for (single-level) facility location. Step 4 is an almost direct application of a method from [11]. The final connection Step 5 is straightforward: the algorithm simply connects each client via a shortest path of open facilities.
For clarity of presentation, we first describe only the algorithm without scaling, which achieves a slightly weaker approximation ratio. We now present Steps 3 and 4 in more detail.
3.1 Clustering
As in LP-rounding algorithms for UFL, we will partition clients into disjoint clusters and for each cluster select a single client, which will be called the center of this cluster.
Recall that the solution x^{∗} we obtain by solving LP (1)–(5) gives us (possibly fractional) weights on paths. Paths p ∈ P_C we interpret as connections from clients to open facilities, while the other (shorter) paths from P∖P_C encode the (fractional) opening of facilities, which has the structure of a forest (i.e., every facility from a lower level is assigned to only a single facility at a higher level).
Observe that if two client paths p_1, p_2 ∈ P_C share at least one facility, then they must also end in the same facility at the highest level k. For a client j and a k-th level facility i we say that j is fractionally connected to i in x^{∗} if and only if there exists a path p ∈ P_C of the form (j, …, i) with x_p > 0. Two clients are called neighbors if they are fractionally connected to the same k-th level facility.

Repeat the following steps until all clients are clustered:
 1.
select an unclustered client j that minimizes d^{av}(j) + d^{max}(j),
 2.
create a new cluster containing j and all its yet unclustered neighbors,
 3.
call j the center of the new cluster.
The procedure is known (see, e.g., [10]) to provide a good clustering, i.e., no two cluster centers are neighbors and the distance from each client to his cluster center is bounded.
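The clustering loop can be sketched as follows (a sketch assuming the neighbor relation and the values d^{av}, d^{max} have been precomputed from x^{∗}; all names and the toy instance are ours):

```python
def cluster(clients, d_av, d_max, neighbors):
    """Greedy clustering: repeatedly pick the unclustered client j minimizing
    d_av[j] + d_max[j] and group j with all its still-unclustered neighbors;
    j becomes the center of the new cluster."""
    unclustered = set(clients)
    clusters = {}                              # center -> set of members
    while unclustered:
        j = min(unclustered, key=lambda c: d_av[c] + d_max[c])
        members = {j} | (neighbors[j] & unclustered)
        clusters[j] = members
        unclustered -= members
    return clusters

# toy instance: clients 0 and 1 are neighbors, client 2 is isolated
clusters = cluster([0, 1, 2],
                   {0: 1.0, 1: 2.0, 2: 5.0},   # d_av
                   {0: 1.0, 1: 2.0, 2: 5.0},   # d_max
                   {0: {1}, 1: {0}, 2: set()})
# -> two clusters, with centers 0 and 2; no two centers are neighbors
```

Since a chosen center removes all its unclustered neighbors from consideration and the neighbor relation is symmetric, no later center can be a neighbor of an earlier one, which is exactly the property quoted above.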
3.2 Randomized Facility Opening
We will now give details on how the algorithm decides which facilities to open. Recall that the facility-opening part of the fractional solution can be interpreted as a set of trees rooted in top-level facilities and having leaves at level-1 facilities.
We will start by describing how a single tree is rounded. For clarity of presentation we change the notation: we denote the set of vertices (facilities) of such a tree by V, and we use x_v to denote the fractional opening of v ∈ V in the initial fractional solution x^{∗}. We also use y_v to denote how much a cluster center uses v. Note that for each of the trees of the fractional solution there is at most one cluster center client j using this tree. If the tree we are currently rounding is not used by any cluster center, then we set all y_v = 0. If cluster center j uses the tree, then for each facility v in the tree we set \(y_{v} = {\sum }_{p \in P^{j} : v \in p} x_{p}\), i.e., y_v is the sum, over the connection paths p of j crossing v, of the extent to which the fractional solution uses this path.
Observe that these quantities satisfy the following properties, where C(v) denotes the set of sons of v and p(v) the parent of v:
 1.
if v is not a leaf, then \(y_{v} = {\sum }_{u \in C(v)} y_{u}\);
 2.
if v is not the root node, then x _{ v }≤x _{ p(v)};
 3.
for all v∈V we have x _{ v } ≥ y _{ v }.
The following randomized procedure will be used to round both the fractional x into an integral \(\hat {x}\) and the fractional y into an integral \(\hat {y}\). The procedure visits each node of the tree at most once. For certain nodes it will be run in the 'with a token' mode, and for the others it will be run 'without a token'. It is initiated in the root node and recursively executes itself on a subset of lower-level nodes. Initially \(\hat {x}_{v}\) and \(\hat {y}_{v}\) are set to 0 for all nodes v, and unless indicated otherwise a node does not have a token.
Now we briefly describe what the algorithm does in a node v. If v has a token, then we set \(\hat {x}_{v} = \hat {y}_{v} = 1\), choose one son (each son u is chosen with probability \(\frac {y_{u}}{y_{v}}\)), give him the token, and make a recursive call on each son. If v does not have a token, then with probability \(\frac {x_{v} - y_{v}}{x_{pred} - y_{v}}\) (where x_{pred} is 1 if v is the root and x_{p(v)} otherwise) we set \(\hat {x}_{v}=1\) and make a recursive call on each son. If v is a leaf, there is no son to pass the token to and no recursive calls are made. We execute the above procedure on the root of the tree, possibly assigning the token to the root node just before the execution. Observe that an execution of the procedure ROUND(v) on the root of a tree brings the token to a single leaf of the tree if and only if it starts with the token at the root node. In the case of a token, the \(\hat {y}_{v}\) variables record the path of the token, and hence form a single path from the root to a leaf.
Consider a procedure that first with probability y _{ r } gives the token to the root r of the tree and then executes ROUND(r). We will argue that this procedure preserves marginals when used to round x into \(\hat {x}\) and y into \(\hat {y}\).
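The procedure just described, including the initial token placement, can be sketched as follows (a Python sketch with our own naming and data layout; `round_tree` plays the role of ROUND(v), and the three-node tree at the bottom is a made-up example used only to check the marginal preservation empirically):

```python
import random

def round_tree(v, x, y, children, has_token, x_hat, y_hat, x_pred=1.0):
    """ROUND(v): x[v] is the fractional opening of v, y[v] the usage of v
    by the (at most one) cluster center of this tree."""
    if has_token:
        x_hat[v] = y_hat[v] = 1            # the token path is opened for sure
        if children[v]:
            # pass the token to one son u, chosen with probability y[u]/y[v]
            u = random.choices(children[v],
                               weights=[y[w] for w in children[v]])[0]
            for w in children[v]:
                round_tree(w, x, y, children, w == u, x_hat, y_hat, x[v])
    else:
        # open v with probability (x_v - y_v)/(x_pred - y_v)
        p = (x[v] - y[v]) / (x_pred - y[v]) if x_pred > y[v] else 0.0
        if random.random() < p:
            x_hat[v] = 1
            for w in children[v]:
                round_tree(w, x, y, children, False, x_hat, y_hat, x[v])

def round_root(r, x, y, children):
    """Give the token to the root with probability y[r], then run ROUND(r)."""
    x_hat = {v: 0 for v in x}
    y_hat = {v: 0 for v in x}
    round_tree(r, x, y, children, random.random() < y[r], x_hat, y_hat)
    return x_hat, y_hat

# empirical check of marginal preservation on a hypothetical 3-node tree
random.seed(1)
x = {"r": 0.8, "a": 0.5, "b": 0.3}         # fractional openings
y = {"r": 0.5, "a": 0.5, "b": 0.0}         # center's usage: y_r = y_a + y_b
children = {"r": ["a", "b"], "a": [], "b": []}
freq = {v: 0 for v in x}
for _ in range(20000):
    x_hat, _ = round_root("r", x, y, children)
    for v in freq:
        freq[v] += x_hat[v]
marg = {v: freq[v] / 20000 for v in freq}  # empirically close to x
```

The empirical marginals `marg` should be close to the fractional openings x, which is exactly the statement of Lemma 3 below.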
Lemma 2
\(E[\hat {y}_{v}] = y_{v}\) for all v∈V.
Proof
The proof is by induction on the distance of v from the root r. For the root, \(Pr[\hat {y}_{r} = 1] = y_{r}\) by the initial placement of the token. For a non-root node v, the token can reach v only through its parent p(v), and conditioned on the token being at p(v), it is passed to v with probability \(\frac {y_{v}}{y_{p(v)}}\). Hence \(E[\hat {y}_{v}] = Pr[\hat {y}_{v} = 1] = y_{p(v)} \cdot \frac {y_{v}}{y_{p(v)}} = y_{v}\). □
Lemma 3
\(E[\hat {x}_{v}] = x_{v}\) for all v∈V.
Proof
By Lemma 2 it is now sufficient to show that \(E[\hat {x}_{v} - \hat {y}_{v}] = x_{v} - y_{v}\) for all v∈V. Observe that \(\hat {x}_{v} - \hat {y}_{v}\) is always either 0 or 1, hence \(E[\hat {x}_{v} - \hat {y}_{v}] = Pr[\hat {x}_{v}=1, \hat {y}_{v}=0]\).
The proof is again by induction on the distance of v from the root node r. Clearly, \(E[\hat {x}_{r} - \hat {y}_{r}] = Pr[\hat {x}_{r}=1, \hat {y}_{r}=0] = Pr[\hat {x}_{r}=1 \mid \hat {y}_{r}=0 ] \cdot Pr[\hat {y_{r}}=0] = \frac {x_{r} - y_{r}}{1 - y_{r}} \cdot (1 - y_{r}) = x_{r} - y_{r}\).
□
 1.
For each cluster center j, put a single token on the root node of one of the trees he is using in the fractional solution. Each tree is selected with probability equal to the fractional connection of j to this tree.
 2.
Execute the ROUND(.) procedure on the root of each tree.
By the construction of the rounding procedure, every cluster center, having placed his token on a tree, will have one of his paths opened, so that he can connect directly via this path. Moreover, by Lemma 2 the probability of opening a particular connection path p∈P^{j} for him (as indicated by the variables \(\hat {y}\)) is exactly equal to the weight x_p the fractional solution assigns to this path. Hence, his expected connection cost is exactly his fractional connection cost.
Bounding the expected connection cost of the other (non-center) clients is slightly more involved and will be discussed in the following section.
3.3 Analysis
Let us first comment on the running time of the algorithm. The algorithm first solves a linear program of size O(n^{k}), where n is the maximal number of facilities on a single level. For fixed k this is of polynomial size, hence the LP may be solved directly, e.g., by the ellipsoid algorithm. The rounding of facility openings is done by traversing trees whose total size is again bounded by O(n^{k}). Finally, each client can try each of his at most O(n^{k}) possible connecting paths and see which of them is the closest open one.
Every client j will find an open connecting path to connect with, since he is part of a cluster, and the client j′ who is the center of this cluster certainly has a good open connecting path. Client j may simply use (the facility part of) the path of cluster center j′, which by the triangle inequality will cost him at most the distance \(c_{j,j^{\prime }}\) more than it costs j′.
In fact, a slightly stronger bound on the expected length of the connection path of j is easy to derive. We use the following bound, which is analogous to the Chudak and Shmoys [10] argument for UFL.
Lemma 4
Again as in the work of Chudak and Shmoys [10], the crux of our improvement lies in the fact that, with a certain probability, the quite expensive 3-hop path guaranteed by the above lemma will not be necessary, because j will happen to have a shorter direct connection. The main part of the analysis, which now follows, is to evaluate the probability of this lucky event.
We will use the following technical lemma.
Lemma 5
Let d ∈ [0, 1] and let x_1, x_2, ⋯, x_n ≥ 0 with \({\sum }_{i = 1}^{n} x_{i} = c\). Then \({\prod }_{i = 1}^{n} (1 - x_{i} + x_{i}d) \leq \left (1 - \frac {c}{n} + \frac {cd}{n}\right )^{n}\).
Proof
We will show that \({\prod }_{i = 1}^{n} (1 - x_{i} + x_{i}d) \leq (1 - \frac {c}{n} + \frac {cd}{n})^{n}\) by induction on n.
Basis: for n = 1 the statement holds trivially, as x_1 = c when n = 1.
Inductive step: show that if for all l < n we have \({\prod }_{i = 1}^{l} (1 - x_{i} + x_{i}d) \leq \left (1 - \frac {c^{\prime }}{l} + \frac {c^{\prime } d}{l}\right )^{l}\) whenever \(c^{\prime } = {\sum }_{i = 1}^{l} x_{i}\), then \({\prod }_{i = 1}^{n} (1 - x_{i} + x_{i}d) \leq \left (1 - \frac {c}{n} + \frac {cd}{n}\right )^{n}\).
Consider the product \({\prod }_{i = 1}^{n} (1 - x_{i} + x_{i}d)\), and suppose that there exists a subset L ⊂ X = {x_1, x_2, ⋯, x_n} with l elements (l ≤ n−1) and an x_i ∈ L with x_i ≠ c′/l, where \(c^{\prime } = {\sum }_{x_{i} \in L} x_{i}\). We can replace the partial product \({\prod }_{x_{i} \in L} (1 - x_{i} + x_{i}d)\) with \((1 - \frac {c^{\prime }}{l} + \frac {c^{\prime }d}{l})^{l}\) by the induction hypothesis, as l < n. The remaining factors (1−x_i+x_i d), for all x_i ∈ X∖L, do not change. So the value of the whole product can only grow or stay the same.
We repeatedly apply such replacements until each element of X is equal to \(\frac {c}{n}\). In the final situation there is no partial product whose value can be increased, so the value of the product is maximal. Therefore, the inequality \({\prod }_{i = 1}^{n} (1 - x_{i} + x_{i}d) \leq \left (1 - \frac {c}{n} + \frac {cd}{n}\right )^{n}\) holds. □
Suppose a (non-center) client is connected with a flow of value z to a tree in the fractional solution, and suppose further that this flow saturates all the fractional openings in this tree. Then the following function f_k(z) gives a lower bound on the probability that at least one path of this tree is opened by the rounding routine. The function f_k(z) is defined recursively. For k = 1 it is just the fractional opening, i.e., f_1(z) = z. For k ≥ 2 it is \(f_{k}(z) = z \cdot \min _{z}(1 - {\prod }_{i = 1}^{n}(1 - f_{k-1}(\frac {z_{i}}{z})))\), where the minimum is taken over the splits of z into z_1, …, z_n.^{2} It is the product of the probability of opening the root node and the (recursively bounded) probability that at least one of the subtrees has an open path, conditioned on the root being open.
The following lemma displays the structure of f _{ k }(.).
Lemma 6
Inequality f _{ k } (x) ≥ x⋅(1−c) implies f _{ k + 1} (x) ≥ x ⋅ (1−e ^{ c−1 } ).
Proof
Note that f_1(x) ≥ x and \(f_{2}(x) \geq x (1 - \frac {1}{e})\). We now show the induction step. Suppose that f_k(x) ≥ x⋅(1−c). Then \(f_{k+1}(x) = x \cdot (1 - \max _{x}{\prod }_{i = 1}^{n}(1 - f_{k}(\frac {x_{i}}{x}))) \geq x \cdot (1 - \max _{x}{\prod }_{i = 1}^{n}(1 - \frac {x_{i}}{x} + \frac {x_{i}}{x} \cdot c)) = x(1 - (1 - \frac {1}{n} + \frac {c}{n})^{n}) \geq x (1 - e^{c - 1})\). The equality follows from Lemma 5 applied to the vector \(\overline {x}\) defined by \(\overline {x_{i}} = \frac {x_{i}}{x}\), and the final inequality from \((1 + \frac {c-1}{n})^{n} \leq e^{c-1}\). □
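Iterating Lemma 6 starting from f_1(x) ≥ x gives the sequence c_1 = 0, c_{k+1} = e^{c_k − 1}, with f_k(x) ≥ x(1 − c_k). A minimal numeric sketch of this recursion (the function name is ours):

```python
import math

def c_seq(kmax):
    """c_1 = 0 and c_{k+1} = e^{c_k - 1}; by Lemma 6, f_k(x) >= x*(1 - c_k).
    The sequence increases towards the fixed point c = 1 of c -> e^{c - 1},
    so the per-tree guarantee weakens as the number of levels k grows."""
    cs = [0.0]                             # c_1
    for _ in range(kmax - 1):
        cs.append(math.exp(cs[-1] - 1.0))
    return cs

cs = c_seq(10)
# cs[1] = 1/e ~ 0.368, cs[2] ~ 0.531, ..., increasing towards 1
```

The convergence of c_k to 1 is the reason the guarantee of Theorem 1 below tends to 3 as k grows, matching the limit stated in the abstract.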
Fortunately enough, we may inductively prove the following lemma, which states that the worst case for our analysis is when the tree capacity is saturated by the connectivity flow of a client.
Lemma 7
If 1 ≥ x ≥ z ≥ 0, then f_k(x, z) ≥ f_k(z), where f_k(x, z) denotes the analogous lower bound on the success probability for a tree whose fractional root opening is x while the client's flow into the tree is z.
To prove Lemma 7, we show the following result first.
Lemma 8
Suppose 1 ≥ a > 0, 1 ≥ x_i > 0, c > 0 and \({\sum }_{i}{x_{i}}=c\). Then \(\max _{x}{\prod }_{i=1}^{n}(1 - ax_{i}) = {\prod }_{i=1}^{n}(1 - a\frac {c}{n})\). That is, \({\prod }_{i=1}^{n}(1 - ax_{i})\) attains its maximum value when \(x_{i} = \frac {c}{n}\) for all i.
Proof
Basis: for n = 1 we have x_1 = c, so the equality holds.
Inductive step: show that if for all l < n, \(\max _{x}{\prod }_{i=1}^{l}(1 - ax_{i}) = {\prod }_{i=1}^{l}(1 - a\frac {c^{\prime }}{l})\) where \(c^{\prime }={\sum }_{i=1}^{l}{x_{i}}\), then \(\max _{x}{\prod }_{i=1}^{n}(1 - ax_{i}) = {\prod }_{i=1}^{n}(1 - a\frac {c}{n})\) where \(c={\sum }_{i=1}^{n}{x_{i}}\).
Suppose that \({\prod }_{i=1}^{n}(1 - ax_{i})\) attains its maximum value at (β_1, β_2, ⋯, β_n). If there exists a subset L ⊂ {1, 2, ⋯, n} with l elements (l ≤ n−1) and an i ∈ L with β_i ≠ c′/l (where \(c^{\prime } = {\sum }_{i \in L} \beta _{i}\)), then we can replace the partial product \({\prod }_{i \in L} (1 - a\beta _{i})\) with \((1 - a \frac {c^{\prime }}{l})^{l}\) by the induction hypothesis, as l < n. The remaining factors (1−aβ_i), for all i ∉ L, do not change. So the value of the whole product remains the same (it cannot increase, as it was already maximal).
We repeatedly apply such replacements until each element is equal to \(\frac {c}{n}\). So the statement holds. □
Proof of Lemma 7
Consider now a single client j who is fractionally connected to a number of trees, with the total weight of his connection paths equal to γ (one may think of him as sending a total flow of value γ through these trees, from leaves to roots). To bound the probability that at least one of these paths gets opened by the rounding procedure, we introduce the function F_k(γ) defined as follows: \(F_{k}(\gamma ) = 1 - \max _{\gamma } {\prod }_{i = 1}^{n}(1 - f_{k}(x_{i}))\), where the maximum is taken over the splits of γ into x_1, …, x_n. This function is one minus the largest possible probability that no tree gives a route from a root to a leaf, using the previously defined function f_k(.) to express the success probability on a single tree.
Now we can give an analogue of Lemma 6 but for F _{ k }(γ).
Lemma 9
Inequality F_k(γ) ≥ 1−e^{(c−1)γ} implies \(F_{k+1}(\gamma ) \geq 1 - e^{(e^{c-1} - 1)\gamma }\).
Proof
Suppose that f_k(x) ≥ x(1−c). Note that \(F_{k}(\gamma ) = 1 - \max _{\gamma } {\prod }_{i = 1}^{n}(1 - f_{k}(x_{i})) \geq 1 - \max _{\gamma } {\prod }_{i = 1}^{n}(1 - x_{i} + x_{i}c) = 1 - (1 - \frac {\gamma }{n} + \frac {\gamma }{n}c)^{n} \geq 1 - e^{(c-1)\gamma }\), where the equality is based on Lemma 5. The key observation is that in this last step there is no constraint on the positive constant c: we can replace it with any other positive constant and the bound remains true. Using Lemma 6 we know that f_{k+1}(x) ≥ x(1−e^{c−1}). The only difference in the way we evaluate F_{k+1}(γ) is the replacement of the constant c by the constant e^{c−1}, so the bound for F_k(γ) implies the bound for F_{k+1}(γ), and hence the lemma holds. □
We are now ready to combine our arguments into a bound on the expected total cost of the algorithm.
Theorem 1
The expected total cost of the algorithm is at most (3−2F_k(1)) ⋅ OPT.
Proof
Note first that by Lemma 3, the probability of opening of each single facility equals its fractional opening, and hence the expected facility opening cost is exactly the fractional opening cost F ^{∗}.
Consider a client j∈C which is a cluster center. He randomly chooses one of the paths from the set P^{j}. The expected connection cost for client j is \(E[C_{j}] = d^{av}(j) = {\sum }_{p \in P^{j}} c_{p}x_{p} = C_{j}^{*}\). Suppose now that j∈C is not a cluster center. As discussed above, the chance that at least one path from P^{j} is open is at least F_k(1). Suppose that at least one path from P^{j} is open. Each path from this set is opened with proportional probability, so the expected length of the chosen path equals d^{av}(j). If there is no open path in the set P^{j}, client j will use a path \(p^{\prime } \in P^{j^{\prime }}\) chosen by his cluster center j′∈C, but j has to pay extra for the distance to the center j′. In this case, by Lemma 4, we have E[C_j] ≤ 2d^{max}(j)+d^{av}(j).
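A sketch of the final combining step (using d^{max}(j) ≤ v_j from Lemma 1 and the fact that the dual optimum \({\sum }_{j \in C} v_{j}\) equals F^{∗} + C^{∗}):

```latex
\begin{align*}
E[C_j] &\le F_k(1)\, d^{av}(j)
          + \bigl(1 - F_k(1)\bigr)\bigl(d^{av}(j) + 2\, d^{max}(j)\bigr) \\
       &=  d^{av}(j) + 2\bigl(1 - F_k(1)\bigr)\, d^{max}(j)
        \le C_j^{*} + 2\bigl(1 - F_k(1)\bigr)\, v_j .
\end{align*}
```

Summing over all clients and adding the expected opening cost F^{∗} (Lemma 3) gives at most \(F^{*} + C^{*} + 2(1 - F_{k}(1))(F^{*} + C^{*}) = (3 - 2F_{k}(1)) \cdot LP^{*} \leq (3 - 2F_{k}(1)) \cdot OPT\).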
4 How to Apply Scaling – General Idea
By means of scaling up the facility opening variables before rounding, just as in the case of 1-level UFL, we gain on the connection cost in two ways. First, the probability that j connects to one of his fractional facilities via a shorter 1-hop path increases, decreasing the usage of the longer backup paths. Second, in the process of clustering, clients may ignore the furthest of their fractionally used facilities. This has the effect of filtering the solution and reducing the lengths of the 3-hop connections. In fact, if the scaling factor is sufficiently large, which is the case in our application, we eventually do not need the dual program to upper bound the length of a fractional connection by a dual variable. All this is well studied for UFL (see, e.g., [8]), but would require a few pages to present in full detail.
All we need in order to use the techniques from UFL is to bound the probability of opening a connection to specific groups of facilities as a function of the scaling parameter γ. The probability of connecting j to one of his close facilities (total opening equal to 1 after scaling) will be at least F_k(1). The probability of connecting j to either a close or a distant facility (total opening equal to γ after scaling) will be at least F_k(γ). The probability of using the backup 3-hop path via the cluster center will be at most 1−F_k(γ). To obtain the approximation ratios claimed in the table in Section 1.1, it remains to plug these numbers into the analysis in [8] and, for each value of k, to find the optimal value of the scaling parameter γ. A complete description of the algorithm for k-level UFL using randomized scaling will be given in a later section. Before we dive into the full-detail picture, we first discuss the simpler case of UFL in the following section.
5 Randomized Scaling for UFL – an Overview
Chudak and Shmoys [10] gave a randomized rounding algorithm for UFL based on this relaxation. Later, Byrka and Aardal [5] considered a variant of this algorithm in which the facility opening variables are initially scaled up by a factor γ. They showed that for γ ≥ γ_0 ≈ 1.67 the algorithm returns a solution with cost at most γ times the fractional facility opening cost plus 1+2e^{−γ} times the fractional connection cost. This algorithm, combined with the (1.11, 1.78)-approximation algorithm of Jain, Mahdian and Saberi [14] (the JMS algorithm for short), easily gives a 1.5-approximation algorithm for UFL. More recently, Li [16] showed that by randomly choosing the scaling parameter γ from a certain probability distribution one obtains an improved 1.488-approximation algorithm. A natural question is what improvement this technique gives in the k-level variant.
In what follows we present our simple interpretation and sketch the analysis of the randomization by Li. We argue that a certain factor-revealing LP provides a valid upper bound on the obtained approximation ratio. The appropriate probability distribution for the scaling parameter (engineered and discussed in detail in [16]) may in fact be read directly from the dual of our LP. While we do not claim to gain any deeper understanding of the randomization process itself, the simpler formalism we propose is important for applying the randomization to the more complicated algorithm for k-level UFL, which we describe next.
5.1 Notation
Let \(\mathcal {F}_{j}\) denote the set of facilities with which client j∈C is fractionally connected, i.e., the facilities i with \(x^{*}_{ij} > 0\) in the optimal LP solution (x ^{∗},y ^{∗}). Since facilities can be split before rounding in uncapacitated facility location problems, to simplify the presentation we will assume that \(\mathcal {F}_{j}\) contains many facilities, each with a very small fractional opening \(y^{*}_{i}\). This will enable us to split \(\mathcal {F}_{j}\) into subsets of any desired total fractional opening.
Definition 1 (Definition 15 from [16])
Given a UFL instance and its optimal fractional solution (x ^{∗},y ^{∗}), the characteristic function h _{ j }:[0,1]→R of a client j∈C is defined as follows. Let i _{1},i _{2},⋯ ,i _{ m } denote the facilities in \(\mathcal {F}_{j}\), in nondecreasing order of distances to j. Then h _{ j }(p)=d(i _{ t },j), where t is the minimum number such that \({\sum }_{s=1}^{t}y^{*}_{i_{s}} \geq p\). Furthermore, define \(h(p) = {\sum }_{j \in C} h_{j}(p)\) as the characteristic function for the entire fractional solution.
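A minimal sketch of the characteristic function in code; the distances and fractional openings below are illustrative toy values, not data from the paper.

```python
def characteristic(distances, openings, p):
    """h_j(p): the distance d(i_t, j) where i_t is the first facility,
    in nondecreasing order of distance, at which the cumulative
    fractional opening reaches p."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    cum = 0.0
    for i in order:
        cum += openings[i]
        if cum >= p:
            return distances[i]
    raise ValueError("total fractional opening is smaller than p")

# toy fractional solution: three facilities fractionally serving client j
dists = [1.0, 2.0, 5.0]
ys = [0.4, 0.4, 0.2]

assert characteristic(dists, ys, 0.3) == 1.0   # first facility already covers p = 0.3
assert characteristic(dists, ys, 0.5) == 2.0   # need the second facility to reach p = 0.5
assert characteristic(dists, ys, 0.95) == 5.0
```

Summing `characteristic` over all clients gives h(p), the characteristic function of the entire fractional solution.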
Definition 2
Volume of a set F ^{ ′ }⊆F, denoted by v o l(F ^{ ′ }) is the sum of facility openings in this set, i.e., \(vol(F^{\prime }) = {\sum }_{i \in F^{\prime }} y^{*}_{i}\).
For l = 1,2,…,n−1 define \(\gamma _{l} = 1 + 2 \cdot \frac {n - l}{n}\), which will form the support of the probability distribution of the scaling parameter γ. Suppose that all facilities are sorted in nondecreasing order of distances from a client j∈C. Scale up all y ^{∗} variables by γ _{ l } and divide the set of facilities \(\mathcal {F}_{j}\) into two disjoint subsets: the close facilities of client j, \(\mathcal {F}_{j}^{C_{l}}\), such that \(vol(\mathcal {F}_{j}^{C_{l}}) = 1\); and the distant facilities \(\mathcal {F}_{j}^{D_{l}} = \mathcal {F}_{j} \setminus \mathcal {F}_{j}^{C_{l}}\). Note that \(vol(\mathcal {F}_{j}^{D_{l}}) = \gamma _{l} - 1\). Observe that \(\frac {1}{\gamma _{k}} < \frac {1}{\gamma _{l}} \Rightarrow \mathcal {F}_{j}^{C_{k}} \subset \mathcal {F}_{j}^{C_{l}}\) and \(\mathcal {F}_{j}^{C_{l}} \setminus \mathcal {F}_{j}^{C_{k}} \neq \emptyset \). We now split \(\mathcal {F}_{j}\) into disjoint subsets \(\mathcal {F}_{j}^{l}\). Define \(\mathcal {F}_{j}^{C_{0}} = \emptyset \) and \(\mathcal {F}_{j}^{l} = \mathcal {F}_{j}^{C_{l}} \setminus \mathcal {F}_{j}^{C_{l-1}}\) for l = 1,2,…,n. The average distance from j to facilities in \(\mathcal {F}_{j}^{l}\) is \(c_{l}(j) = {\int }_{1/\gamma _{l-1}}^{1/\gamma _{l}} h_{j}(p)~dp\) for l > 1 and \(c_{1}(j) = {\int }_{0}^{1/\gamma _{1}} h_{j}(p)~dp\). Note that c _{ l }(j) ≤ c _{ l + 1}(j) and \(D_{\max }^{l}(j) \leq c_{l+1}(j)\), where \(D_{\max }^{l}(j) = \max _{i \in \mathcal {F}_{j}^{l}}{c_{ij}}\).
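The support {γ _{ l }} and the close/distant split can be sketched as follows; this assumes, as in the text, that facilities have been split so finely that a volume-1 prefix can be cut off (the small tolerance stands in for exact splitting).

```python
def gammas(n):
    """Support of the scaling-parameter distribution: gamma_l = 1 + 2(n - l)/n
    for l = 1, ..., n - 1 (a decreasing sequence in (1, 3))."""
    return [1 + 2 * (n - l) / n for l in range(1, n)]

def close_distant(openings_sorted, gamma):
    """Split facilities (already sorted by distance to client j) into the
    close facilities, of total scaled volume 1, and the remaining distant
    ones; the scaled volume of facility i is gamma * y_i."""
    close, vol = [], 0.0
    for i, y in enumerate(openings_sorted):
        if vol + gamma * y > 1 + 1e-9:
            break
        close.append(i)
        vol += gamma * y
    return close, list(range(len(close), len(openings_sorted)))

gs = gammas(10)
assert abs(gs[0] - 2.8) < 1e-12 and abs(gs[-1] - 1.2) < 1e-12
assert all(gs[i] > gs[i + 1] for i in range(len(gs) - 1))  # decreasing in l

# gamma = 2 doubles every opening, so 5 of the 10 volume-0.1 facilities are close
close, distant = close_distant([0.1] * 10, 2.0)
assert len(close) == 5 and len(distant) == 5
```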
Since the studied algorithm with scaling parameter γ = γ _{ k } opens each facility i with probability \(\gamma _{k} \cdot y_{i}^{*}\), and there is no positive correlation between facility openings in different locations, the probability that at least one facility from the set \(\mathcal {F}_{j}^{l}\) is open is at least \(1 - e^{-\gamma _{k} \cdot vol(\mathcal {F}_{j}^{l})}\).
Crucial to the analysis is the length of a connection via the cluster center j ^{ ′ } for client j when no facility in \(\mathcal {F}_{j}\) is open. Consider the algorithm with a fixed scaling factor γ = γ _{ k }, an arbitrary client j and its cluster center j ^{ ′ }. Li gave the following upper bound on the expected distance from j to an open facility around its cluster center j ^{ ′ }.
Lemma 10 (Lemma 14 from [16])
If no facility in \(\mathcal {F}_{j}\) is opened, the expected distance to the open facility around j ^{ ′ } is at most \(\gamma _{k} D_{av}(j) + (3 - \gamma _{k})D_{\max }^{k}(j)\) , where \( D_{av}(j) = {\sum }_{i \in \mathcal {F}_{j}} c_{ij}x_{ij}^{*}\).
Corollary 1
Corollary 2
For a fixed scaling parameter γ = γ _{ k }, the expected connection cost of client j satisfies \(E[C_{j}] \leq {\sum }_{l = 1}^{n} c_{l}(j) \cdot p_{l} + e^{-\gamma _{k}} \cdot (\gamma _{k} D_{av}(j) + (3 - \gamma _{k})D_{\max }^{k}(j))\), with \(p_{1} = 1 - e^{-\gamma _{k}/\gamma _{1}}\) and \(p_{l} = e^{-\gamma _{k}/\gamma _{l-1}} - e^{-\gamma _{k}/\gamma _{l}}\) for l > 1.
Proof
p _{ l } is the probability of the following event: no facility is opened within distance \(D_{\max }^{l-1}(j)\) and at least one facility is opened in \(\mathcal {F}_{j}^{l}\). We will show that an upper bound for E[C _{ j }] is obtained by setting \(p_{1} = 1 - e^{-\frac {\gamma _{k}}{\gamma _{1}}}\) and \(p_{l} = e^{-\frac {\gamma _{k}}{\gamma _{l-1}}} - e^{-\frac {\gamma _{k}}{\gamma _{l}}}\) for all l > 1.
The probability of the following event (denoted by E _{ l }) is at least \(1 - e^{-\frac {\gamma _{k}}{\gamma _{l}}}\): at least one facility is opened within distance \(D_{\max }^{l}(j)\) [10]. Similarly, the probability that at least one facility is opened within distance \(D_{\max }^{l-1}(j)\) is at least \(1 - e^{-\frac {\gamma _{k}}{\gamma _{l-1}}}\).
Recall that we have c _{ l }(j)≤c _{ l + 1}(j) and \(c_{n}(j) \leq \gamma _{k} D_{av}(j) + (3 - \gamma _{k})D_{\max }^{k}(j)\) (otherwise, we would never use the 3-hop path).
To find \(\max _{p} \{{\sum }_{l = 1}^{n} c_{l}(j) \cdot p_{l} + e^{-\gamma _{k}} \cdot (\gamma _{k} D_{av}(j) + (3 - \gamma _{k})D_{\max }^{k}(j))\}\), we first make the probability of using the 3-hop path as large as possible. Thus we set the probability of event E _{ n } to \(1 - e^{-\frac {\gamma _{k}}{\gamma _{n}}} = 1 - e^{-\gamma _{k}}\), which maximizes the probability of using the 3-hop path. Next, we make p _{ n } as large as possible by setting the probability of event E _{ n−1} to \(1 - e^{-\frac {\gamma _{k}}{\gamma _{n-1}}}\), so that \(p_{n} = 1 - e^{-\frac {\gamma _{k}}{\gamma _{n}}} - (1 - e^{-\frac {\gamma _{k}}{\gamma _{n-1}}}) = e^{-\frac {\gamma _{k}}{\gamma _{n-1}}} - e^{-\frac {\gamma _{k}}{\gamma _{n}}}\). By induction, the worst case is as follows: \(p_{l} = e^{-\frac {\gamma _{k}}{\gamma _{l-1}}} - e^{-\frac {\gamma _{k}}{\gamma _{l}}}\) for all l > 1 and \(p_{1} = 1 - e^{-\frac {\gamma _{k}}{\gamma _{1}}}\). □
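The telescoping structure of this worst case can be checked numerically. The sketch below uses only the formulas from the proof, with γ _{ n } = 1 appended to the support and illustrative values of n and γ _{ k }.

```python
import math

def worst_case_probs(gamma_k, gammas_ext):
    """p_1 = 1 - e^{-gamma_k/gamma_1} and
    p_l = e^{-gamma_k/gamma_{l-1}} - e^{-gamma_k/gamma_l} for l > 1;
    gammas_ext lists gamma_1, ..., gamma_{n-1} and ends with gamma_n = 1."""
    ps = [1 - math.exp(-gamma_k / gammas_ext[0])]
    for prev, cur in zip(gammas_ext, gammas_ext[1:]):
        ps.append(math.exp(-gamma_k / prev) - math.exp(-gamma_k / cur))
    return ps

n = 10
gs = [1 + 2 * (n - l) / n for l in range(1, n)] + [1.0]
gk = gs[4]  # run the algorithm with gamma = gamma_5 = 2.0
ps = worst_case_probs(gk, gs)

assert all(p >= 0 for p in ps)  # a valid probability vector
# the sum telescopes to 1 - e^{-gamma_k}: the leftover e^{-gamma_k}
# is exactly the probability of falling back on the 3-hop path
assert abs(sum(ps) + math.exp(-gk) - 1) < 1e-12
```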
5.2 Factor Revealing LP
6 Randomized Scaling for k-level UFL
To obtain an improved approximation ratio we run algorithm A(γ) for several values of γ and select the cheapest solution. The factor revealing LP in Section 5.2 gives an upper bound on the approximation ratio. Since the number of levels influences the connection probabilities, the values of \({p_{l}^{i}}\) need to be defined more carefully than for UFL. In particular, we now have \({p_{1}^{i}} = F_{k}(\frac{\gamma_{i}}{\gamma_{1}})\) and \({p_{l}^{i}} = F_{k}(\frac{\gamma_{i}}{\gamma_{l}}) - F_{k}(\frac{\gamma_{i}}{\gamma_{l-1}})\) for l > 1.
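A hedged sketch of this computation: F _{ k } denotes the path-opening probability function from the earlier analysis, which is not reproduced in this excerpt, so for illustration we plug in F _{1}(x) = 1 − e ^{−x }, the corresponding function for k = 1.

```python
import math

def p_table(F, gammas_ext):
    """p_1^i = F(gamma_i/gamma_1) and
    p_l^i = F(gamma_i/gamma_l) - F(gamma_i/gamma_{l-1}) for l > 1,
    for every gamma_i in the support; gammas_ext ends with gamma_n = 1."""
    table = []
    for gi in gammas_ext:
        row = [F(gi / gammas_ext[0])]
        for prev, cur in zip(gammas_ext, gammas_ext[1:]):
            row.append(F(gi / cur) - F(gi / prev))
        table.append(row)
    return table

F1 = lambda x: 1 - math.exp(-x)  # stand-in for F_k, exact for k = 1
n = 6
gs = [1 + 2 * (n - l) / n for l in range(1, n)] + [1.0]
tab = p_table(F1, gs)

assert all(p >= 0 for row in tab for p in row)  # F nondecreasing => valid entries
# each row telescopes to F(gamma_i), the probability of any direct connection
assert abs(sum(tab[0]) - F1(gs[0])) < 1e-12
```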
Table 1 Comparison of ratios

k                              1     2     3     4     5     6     7     8     9     10
previous best                  1.49  1.77  2.50  2.81  3     3     3     3     3     3
our alg. (no scaling)          1.74  2.07  2.26  2.38  2.47  2.53  2.59  2.63  2.66  2.69
our alg. (with scaling)        1.58  1.85  2.02  2.14  2.24  2.31  2.37  2.42  2.46  2.50
our alg. (with randomization)  1.52  1.79  1.97  2.09  2.19  2.27  2.33  2.39  2.43  2.47
7 k-Level UFL with Penalties
We now consider the variant of the problem in which the planner may decide to pay a penalty (per client) instead of connecting a certain group of clients. First we give a simple argument that the problem with uniform penalties (where the penalty for not connecting j is the same for all j∈C) reduces to the problem without penalties. Next we give an algorithm for the general penalty case and show that our bound on the connection plus penalty cost of a client only improves if the algorithm decides to pay the penalty, effectively showing that the same approximation ratio as in the non-penalty case is achievable.
7.1 k-level UFL with Uniform Penalties Reduced to k-level UFL
The difficulty of k-level UFLWP lies in the extra choice available to each client, namely the penalty. We will show how to handle penalties by converting an instance of k-level UFLWP into an appropriate instance of k-level UFL. We first consider the easy case of uniform penalties.
Lemma 11
There is an approximation preserving reduction from k-level UFL with uniform penalties to k-level UFL.
Proof
We can encode the penalty of client j∈C as a group of collocated facilities (one at each level) at distance p _{ j } from client j, with opening cost zero. The distance from client j to the penalty-facilities of client j ^{ ′ } is \(c_{j, j^{\prime }} + p_{j^{\prime }}\). Note that \(p_{j^{\prime }} = p_{j}\) since penalties are uniform. We can then run any approximation algorithm for k-level UFL on the modified instance. If in the obtained solution client j is connected to a penalty-facility of client j ^{ ′ }, we can switch j to its own penalty-facility without increasing the cost of the solution. We thus obtain a solution in which each client is either connected to regular facilities or to its dedicated penalty-facilities. The latter event we interpret (in the solution to the original instance) as the client paying its penalty. □
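The reduction in the proof can be sketched as instance surgery. The representation below (a distance function plus tuples naming the penalty-facilities) is our own illustration, not the paper's notation.

```python
def reduce_uniform_penalties(clients, dist, penalty, k):
    """For each client j, add k collocated zero-cost 'penalty facilities'
    (one per level) at distance `penalty` from j. Returns the extended
    distance function and the new facilities, per the reduction above."""
    pen_fac = {j: [("pen", j, lvl) for lvl in range(1, k + 1)] for j in clients}

    def ext_dist(a, b):
        # distance from any point x to j's penalty facilities is
        # c(x, j) + p_j, which equals p_j when x is j itself
        for x, y in ((a, b), (b, a)):
            if isinstance(y, tuple) and y[0] == "pen":
                j = y[1]
                return penalty if x == j else dist(x, j) + penalty
        return dist(a, b)

    return ext_dist, pen_fac

# toy metric: two clients at distance 3, uniform penalty 5, two levels
d = lambda a, b: 0.0 if a == b else 3.0
ed, pf = reduce_uniform_penalties(["j1", "j2"], d, penalty=5.0, k=2)
assert ed("j1", pf["j1"][0]) == 5.0  # own penalty-facility at distance p_j
assert ed("j1", pf["j2"][0]) == 8.0  # c(j1, j2) + p_{j2}
```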
Lemma 11 implies that for k-level uncapacitated facility location with uniform penalties we obtain the following approximation ratios. The algorithms for k = 1 and k = 2 are described in [16] and [22], respectively; those for k > 2 are described in this article.
k      1      2     3     4     5     6     7     8     9     10
ratio  1.488  1.77  1.97  2.09  2.19  2.27  2.33  2.39  2.43  2.47
Note that the reduction above does not work for the non-uniform case, because then the distance from client j to the penalty-facility of client j ^{ ′ } could be smaller than p _{ j }. Nevertheless we will show that the LP-rounding algorithms in this paper can easily be extended to the non-uniform penalty variant.
7.2 Algorithm for k-level UFLWP (Non-uniform Penalties)
 1.
solve the LP relaxation of k-level UFLWP to obtain an optimal fractional solution \((x^{*}, g^{*});\)
 2.
scale up facility opening and client rejecting variables by γ _{ l }, then recompute values of \(x^{*}_{p}\) for p∈P _{ C } to obtain a minimum cost solution \((\bar {x}, \bar {g});\)
 3.
divide clients into two groups \(C_{\gamma _{l}} = \{ j \in C  \gamma _{l} \cdot (1  g_{j}^{*}) \geq 1 \}\) and \(\bar {C}_{\gamma _{l}} = C \setminus C_{\gamma _{l}};\)
 4.
cluster clients in \(C_{\gamma _{l}}\);
 5.
round facility opening (tree by tree);
 6.
connect each client j with a closest open connection path unless rejecting it is a cheaper option.
Our final algorithm is as follows: run algorithm A(γ _{ l }) for each l = 1,2,…,n−1 and select a solution with the smallest cost.
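The outer loop is a simple best-of-many selection. In the sketch below, A is a placeholder for the rounding algorithm A(γ) and is assumed to return a (cost, solution) pair; the quadratic toy cost is purely illustrative.

```python
def best_over_gammas(A, n):
    """Run A(gamma_l) for l = 1, ..., n - 1 and keep the cheapest solution."""
    best = None
    for l in range(1, n):
        gamma = 1 + 2 * (n - l) / n
        cand = A(gamma)  # (cost, solution)
        if best is None or cand[0] < best[0]:
            best = cand
    return best

# toy stand-in for A: cost is quadratic in gamma with its minimum at 1.7
cost, sol = best_over_gammas(lambda g: ((g - 1.7) ** 2, g), n=100)
assert abs(sol - 1.7) < 0.02  # the selected gamma_l is the closest grid point
```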
Clustering follows the rules of [10] and is described in Section 3.1. Rounding on a tree is described in Section 3.2. From now on we consider only the scaled-up instance \((\bar {x}, \bar {g})\).
7.3 Analysis
The high level idea is that we can analyze an instance of k-level UFLWP as a corresponding instance of k-level UFL, by showing that the worst case approximation ratio is attained by clients in the set C _{ γ } and that the penalty of a client j∈C _{ γ } can be treated as a "penalty-facility" in the analysis. That is, we can handle penalties by solving an equivalent k-level UFL instance without penalties. Consequently, for k-level UFLWP we obtain the same ratios as for k-level UFL, shown in Table 1.
Complete Solution and "One-level" Description.
It is standard in uncapacitated facility location problems to split facilities to obtain a so-called complete solution, in which no facility is used by a client to a smaller extent than it is opened (see [19] for details). For our algorithm, to keep the forest structure of the fractional solution, we must slice whole trees instead of splitting individual facilities, obtaining the following.
Lemma 12
Each solution of our linear program for k-level UFLWP can be transformed to an equivalent complete solution.
Proof
Suppose there is a client j∈C with positive flow x _{ jp } to a path p in tree T which is smaller than the path opening x _{ p }. We then replace T by two copies T ^{ ′ } and T ^{ ′ ′ }. In T ^{ ′ } the opening of this "problematic" path equals the flow x _{ jp }; in T ^{ ′ ′ } it equals the opening in T decreased by x _{ jp }. In general, each facility in T ^{ ′ } (resp. T ^{ ′ ′ }) has its opening from T multiplied by \(\frac {x_{jp}}{x_{p}}\) (resp. \(\frac {x_{p} - x_{jp}}{x_{p}}\)). The total flow from client j (and from the other clients, which are now connected to both trees) stays the same as before replacing T by T ^{ ′ } and T ^{ ′ ′ }. All clients then "recompute" their connection values: we sort all paths in order of increasing connection cost for client j and connect j to them (in that order) as much as possible, until j has total flow one or it becomes cheaper to pay the penalty than to connect to any further open path. The key fact is that the expected connection and penalty cost of each client remains unchanged by these operations.
In the process of copying and replacing trees we add at most |C| new trees, because each client has at most one "problematic" (non-saturating) path. □
For clarity, the following analysis uses a "one-level" description of the instance and the fractional solution, despite its k-level structure; the number of levels will only influence the probabilities of opening particular paths in our algorithm.
Consider the set S _{ j } of paths which start at client j and end at the root of a single tree T. Instead of treating the paths in S _{ j } separately, we can treat them as a single path p _{ T } whose fractional opening is \(x_{p_{T}} = {\sum }_{p \in S_{j}} \bar {x}_{p}\) and whose (expected) cost is \(c_{p_{T}} = \frac {{\sum }_{p \in S_{j}} c_{p} \bar {x}_{p}}{x_{p_{T}}}\). Observe that the distance function \(c_{p_{T}}\) satisfies the triangle inequality. From now on we think only about clients, facilities (on level k), and the (unique) paths between them. Accordingly, we now encode the fractional solution as \((\bar {x}, \bar {y}, \bar {g})\), denoting the fractional connectivity, opening, and penalty components.
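The aggregation of S _{ j } into the single path p _{ T } can be sketched as follows (the path costs and openings are toy numbers):

```python
def aggregate_paths(paths):
    """Collapse the set S_j of client-to-root paths of one tree into a
    single path p_T: its opening is the total fractional opening and its
    cost is the opening-weighted average cost, as in the one-level view."""
    x_pT = sum(x for _, x in paths)
    c_pT = sum(c * x for c, x in paths) / x_pT
    return x_pT, c_pT

# three fractional (cost, opening) paths from client j through one tree
x_pT, c_pT = aggregate_paths([(4.0, 0.25), (6.0, 0.25), (10.0, 0.5)])
assert x_pT == 1.0
assert c_pT == 7.5  # (4*0.25 + 6*0.25 + 10*0.5) / 1.0
```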
Penalty discussion
Lemma 13
The worst case approximation ratio is attained by clients from the set C _{ γ }.
Proof
The first inequality upper bounds E[C _{ j }+P _{ j }] similarly to Corollary 1, except that when no facility in F _{ j } is open we need to consider rejecting j or connecting it to an open facility via its cluster center. The second inequality holds because \(p_{j} \leq \gamma \left (1 - g_{j}^{*}\right ) D_{av}(j) + \left (3 - \gamma \left (1 - g_{j}^{*}\right )\right ) D_{\max }^{C}(j)\); otherwise, j could connect to a facility within that distance instead of paying the penalty.
Note that each client \(j \in \bar {C}_{\gamma }\) can treat its penalty as a close (and distant) facility, so we can set \(p_{j} = D_{\max }^{C}(j) = D_{av}^{D}(j)\). Moreover, \(D_{\max }^{C}(j) \leq \gamma \left (1 - g_{j}^{*}\right ) D_{av}(j) + \left (3 - \gamma \left (1 - g_{j}^{*}\right )\right ) D_{\max }^{C}(j)\), since \(0 < \gamma \left (1 - g_{j}^{*}\right ) < 1\) by the definition of \(\bar {C}_{\gamma }\).
Therefore, the worst case approximation ratio is attained by clients from the set C _{ γ }. □
Lemma 14
For each client j∈C _{ γ } we can treat its penalty as a facility.
Proof
If j is a cluster center, then at least one (real) facility is open in its set of close facilities; thus its connection and penalty cost are independent of the value of \(g_{j}^{*}\). If j is not a cluster center and we treat its penalty as a facility, no other client j ^{ ′ } will consider using this fake facility, because j ^{ ′ } only looks at the facilities fractionally serving it and the facilities serving the center of the cluster containing j ^{ ′ }. □
Acknowledgments
We thank Karen Aardal for insightful discussions on the k-level UFL problem. We also thank Thomas Rothvoss for teaching us the "rounding on trees" technique from [11]. Research supported by FNP HOMING PLUS/2010-1/3 grant, MNiSW grant number N N206 368839, 2010-2013, and by NCN 2012/07/N/ST6/03068 grant.
Open Access
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
References
1. Aardal, K., Chudak, F., Shmoys, D.: A 3-approximation algorithm for the k-level uncapacitated facility location problem. Inf. Process. Lett. 72(5-6), 161–167 (1999)
2. Ageev, A., Ye, Y., Zhang, J.: Improved combinatorial approximation algorithms for the k-level facility location problem. In: ICALP, pp. 145–156 (2003)
3. Asadi, M., Niknafs, A., Ghodsi, M.: An approximation algorithm for the k-level uncapacitated facility location problem with penalties. In: CSICC, CCIS 6, pp. 41–49 (2008)
4. Bumb, A.: Approximation algorithms for facility location problems. PhD thesis, University of Twente (2002)
5. Byrka, J., Aardal, K.: An optimal bifactor approximation algorithm for the metric uncapacitated facility location problem. SIAM J. Comput. 39(6), 2212–2231 (2010)
6. Byrka, J., Li, S., Rybicki, B.: Improved approximation algorithm for k-level UFL with penalties, a simplistic view on randomizing the scaling parameter (2013)
7. Byrka, J., Rybicki, B.: Improved LP-rounding approximation algorithm for k-level uncapacitated facility location. In: ICALP (1), pp. 157–169 (2012)
8. Byrka, J., Ghodsi, M., Srinivasan, A.: LP-rounding algorithms for facility-location problems. CoRR abs/1007.3611 (2010)
9. Charikar, M., Khuller, S., Mount, D., Narasimhan, G.: Algorithms for facility location problems with outliers. In: SODA, pp. 642–651 (2001)
10. Chudak, F., Shmoys, D.: Improved approximation algorithms for the uncapacitated facility location problem. SIAM J. Comput. 33(1), 1–25 (2003)
11. Garg, N., Konjevod, G., Ravi, R.: A polylogarithmic approximation algorithm for the group Steiner tree problem. In: SODA, pp. 253–259 (1998)
12. Geunes, J., Levi, R., Romeijn, H., Shmoys, D.: Approximation algorithms for supply chain planning and logistics problems with market choice. Math. Program. 130(1), 85–106 (2011)
13. Guha, S., Khuller, S.: Greedy strikes back: improved facility location algorithms. J. Algorithms 31(1), 228–248 (1999)
14. Jain, K., Mahdian, M., Markakis, E., Saberi, A., Vazirani, V.: Greedy facility location algorithms analyzed using dual fitting with factor-revealing LP. J. ACM 50(6), 795–824 (2003)
15. Krishnaswamy, R., Sviridenko, M.: Inapproximability of the multilevel uncapacitated facility location problem. In: SODA, pp. 718–734 (2012)
16. Li, S.: A 1.488 approximation algorithm for the uncapacitated facility location problem. Inf. Comput. 222, 45–58 (2013)
17. Li, Y., Du, D., Xiu, N., Xu, D.: Improved approximation algorithm for the facility location problems with linear/submodular penalties. In: COCOON, pp. 292–303 (2013)
18. Shmoys, D., Tardos, E., Aardal, K.: Approximation algorithms for facility location problems (extended abstract). In: STOC, pp. 265–274 (1997)
19. Sviridenko, M.: An improved approximation algorithm for the metric uncapacitated facility location problem. In: IPCO, pp. 240–257 (2002)
20. Xu, G., Xu, J.: An LP rounding algorithm for approximating uncapacitated facility location problem with penalties. Inf. Process. Lett. 94(3), 119–123 (2005)
21. Xu, G., Xu, J.: An improved approximation algorithm for uncapacitated facility location problem with penalties. J. Comb. Optim. 17(4), 424–436 (2009)
22. Zhang, J.: Approximating the two-level facility location problem via a quasi-greedy approach. Math. Program. 108(1), 159–176 (2006)