Approximation Algorithms for Maximum Independent Set of Pseudo-Disks
Abstract
We present approximation algorithms for maximum independent set of pseudodisks in the plane, both in the weighted and unweighted cases. For the unweighted case, we prove that a local-search algorithm yields a PTAS. For the weighted case, we suggest a novel rounding scheme based on an LP relaxation of the problem, which leads to a constant-factor approximation.
Most previous algorithms for maximum independent set (in geometric settings) relied on packing arguments that are not applicable in this case. As such, the analysis of both algorithms requires some new combinatorial ideas, which we believe to be of independent interest.
Keywords
Fréchet distance · Approximation algorithms · Realistic input models
1 Introduction
Let F={f _{1},…,f _{ n }} be a set of n objects in the plane, with weights w _{1},w _{2},…,w _{ n }>0, respectively. In this paper, we are interested in the problem of finding an independent set of maximum weight. Here a set of objects is independent if no pair of objects intersect.
A natural approach to this problem is to build an intersection graph G=(V,E), where the objects form the vertices, two objects are connected by an edge if they intersect, and weights are associated with the vertices. We want the maximum independent set in G. This is of course an NP-hard problem, and it is known that no approximation factor within |V|^{1−ε} is possible, for any ε>0, if NP≠ZPP [27]. In fact, even if the maximum degree of the graph is bounded by 3, no PTAS is possible in this case [11].
In geometric settings, better results are possible. If the objects are fat (e.g., disks and squares), PTASs are known. One approach [15, 22] relies on a hierarchical spatial subdivision, such as a quadtree, combined with dynamic programming techniques [7]; it works even in the weighted case. Another approach [15] relies on a recursive application of a nontrivial generalization of the planar separator theorem [30, 38]; this approach is limited to the unweighted case. If the objects are not fat, only weaker results are known. For the problem of finding a maximum independent set of unweighted axis-parallel rectangles, an O(log log n)-approximation algorithm was recently given by Chalermsook and Chuzhoy [14]. For line segments, a roughly \(O(\sqrt{\mathrm{Opt}})\)-approximation is known [2]; recently, Fox and Pach [24] have improved the approximation factor to n^{ε}, not only for line segments but also for curves, where each pair intersects a constant number of times.
In this paper we are interested in the problem of finding a large independent set in a set of weighted or unweighted pseudodisks. A set of objects is a collection of pseudodisks if the boundaries of every pair of them intersect at most twice. This case is especially intriguing because previous techniques seem powerless: it is unclear how one can adapt the quadtree approach [15, 22] or the generalized separator approach [15] to pseudodisks.
Even a constant-factor approximation in the unweighted case is not easy. Consider the most obvious greedy strategy for disks (or fat objects): select the object f _{ i }∈F of the smallest radius, remove all objects that intersect f _{ i } from F, and repeat. This is already sufficient to yield a constant-factor approximation by a simple packing argument [21, 32]. However, even this simplest algorithm breaks down for pseudodisks: as pseudodisks are defined “topologically”, how would one define the “smallest” pseudodisk in a collection?
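The greedy strategy just described can be sketched in a few lines for plain disks. The helper names below are ours, and disks are represented as (x, y, r) triples for illustration; note how the code leans on the radius, which has no analogue for pseudodisks:

```python
def disks_intersect(d1, d2):
    """Two disks (x, y, r) intersect iff the distance between their
    centers is at most the sum of their radii."""
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2

def greedy_independent_disks(disks):
    """Repeatedly pick the smallest-radius remaining disk and discard
    everything that intersects it (equivalently: scan disks in order
    of increasing radius, keeping a disk if it conflicts with nothing
    kept so far)."""
    chosen = []
    for d in sorted(disks, key=lambda d: d[2]):  # ascending radius
        if all(not disks_intersect(d, c) for c in chosen):
            chosen.append(d)
    return chosen
```

The packing argument mentioned in the text shows that each chosen disk can "block" only O(1) disks of the optimal solution that are at least as large.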
Independent Set via Local Search
Nevertheless, we are able to prove that a different strategy can yield a constant-factor approximation for unweighted pseudodisks: local search. In the general setting, local search was used to get (roughly) a Δ/4 approximation to independent set, where Δ is the maximum degree in the graph; see [26] for a survey. In the geometric setting, Agarwal and Mustafa [2, Lemma 4.2] proved that a local-search algorithm gives a constant-factor approximation for the special case of pseudodisks that are rectangles; their proof does not immediately work for arbitrary pseudodisks. Our proof provides a generalization of their lemma.
In fact, we are able to do more: we show that local search can actually yield a PTAS for unweighted pseudodisks! As a byproduct, this gives a new PTAS for the special case of disks and squares. Though the local-search algorithm is slower than the quadtree-based PTAS in these special cases [15], it has the advantage that it only requires the intersection graph as input, not its geometric realization; previously, an algorithm with this property was only known in further special cases, such as unit disks [36]. Our result uses the planar separator theorem, but in contrast to the separator-based method in [15], a standard version of the theorem suffices, and it is needed only in the analysis, not in the algorithm itself.
Planar graphs are special cases of disk intersection graphs, and so our result applies. Of course, PTASs for planar graphs have been around for quite some time [9, 30], but the fact that a simple localsearch algorithm already yields a PTAS for planar graphs is apparently not well known, if it was known at all.
We can further show that the same localsearch algorithm gives a PTAS for independent set for fat objects in any fixed dimension, reproving known results in [15, 22].
This strategy, unfortunately, works only in the unweighted case.
Independent Set via LP
It is easy to extract a large independent set from a sparse unweighted graph. For example, we can greedily order the vertices from lowest to highest degree, and pick them one by one into the independent set if none of their neighbors was already picked. Let d _{ G } be the average degree in G. Then a constant fraction of the vertices have degree O(d _{ G }), and the selection of such a vertex eliminates only O(d _{ G }) candidates. Thus, this yields an independent set of size Ω(n/d _{ G }). Alternatively, for better constants, we can order the vertices by a random permutation and do the same. Clearly, the probability that a vertex v is included in the independent set is at least 1/(d(v)+1). An easy calculation leads to Turán’s theorem, which states that any graph G has an independent set of size ≥n/(d _{ G }+1) [6].
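The random-permutation variant can be sketched as follows (the function name is ours; the graph is given as an edge list over vertices 0,…,n−1):

```python
import random

def greedy_permutation_independent_set(n, edges, seed=0):
    """Scan the vertices in a random order; keep a vertex if none of
    its neighbors has been kept already.  A vertex v is certainly kept
    whenever it precedes all its neighbors in the permutation, so it
    survives with probability at least 1/(d(v)+1), giving an
    independent set of expected size >= n/(d_G + 1) (Turan's bound)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = list(range(n))
    random.Random(seed).shuffle(order)
    kept = set()
    for v in order:
        if not (adj[v] & kept):
            kept.add(v)
    return kept
```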
Now, our intersection graph G may not be sparse. We would like to “sparsify” it, so that the new intersection graph is sparse and the number of vertices is close to the size of the optimal solution. Interestingly, we show that this can be done by solving the LP relaxation of the independent set problem. The relaxation provides us with a fractional solution, where every object f _{ i } has a value x _{ i }∈[0,1] associated with it. Rounding this fractional solution into a feasible solution is not a trivial task, as no such scheme exists in the general case. Our basic approach is somewhat similar to the local-ratio technique [10], but, more precisely, it is a variant of the contention resolution scheme of Chekuri et al. [18]. To this end, we prove a technical lemma (see Lemma 4.1) showing that the total sum of the terms x _{ i } x _{ j }, over pairs f _{ i }, f _{ j } that intersect, is bounded by the boundary complexity of the union of ∑_{ i } x _{ i } objects of F, where ∑_{ i } x _{ i } is the size of the fractional solution. The proof contains a nice application of the standard Clarkson technique [19].
This lemma implies that if we pick each f _{ i } into our random set of objects with probability x _{ i }, then the resulting intersection graph is sparse on average. This by itself is sufficient to get a constant-factor approximation for the unweighted case. For the weighted case, we follow a greedy approach: we examine the objects in a certain order (based on a quantity we call “resistance”), and choose an object with probability around x _{ i }, on condition that it does not intersect any previously chosen object. We argue, for our particular order, that each object is indeed chosen with probability Ω(x _{ i }). This leads to a constant-factor approximation for weighted pseudodisks.
Interestingly, our rounding scheme applies in more general settings, where one tries to find an independent set that maximizes a submodular target function. See Sect. 4.6 for details.
Linear Union Complexity
Our LP analysis works more generally for any class of objects with linear union complexity. We assume that the boundary of the union of any k of these objects has at most ϱk vertices, for some fixed ϱ. For pseudodisks, the boundary of the union is made out of at most 6n−12 arcs, implying ϱ=6 in this case [28].
A family F of simply connected regions bounded by simple closed curves in general position in the plane is k-admissible (with k even) if, for any pair f _{ i },f _{ j }∈F, we have: (i) f _{ i }∖f _{ j } and f _{ j }∖f _{ i } are connected, and (ii) their boundaries intersect at most k times. Whitesides and Zhao [40] showed that the union of n such objects has at most 3kn−6 arcs; that is, ϱ=3k. So, our LP analysis applies to this class of objects as well. For more results on union complexity, see the survey by Agarwal et al. [5].
Our local-search PTAS works more generally for unweighted admissible regions in the plane. For an arbitrary class of unweighted objects with linear union complexity in the plane, local search still yields a constant-factor approximation.
Rectangles
LP relaxation has been used before, notably in Chalermsook and Chuzhoy’s recent breakthrough for axis-parallel rectangles [14], but their analysis is quite complicated. Although rectangles do not have linear union complexity in general, we observe in Sect. 5 that a variant of our approach yields a readily accessible proof of a sublogarithmic O(log n/log log n) approximation factor for rectangles, even in the weighted case, where previously only a logarithmic approximation was known [4, 12, 16] (Chalermsook and Chuzhoy’s result is better but currently applies only to unweighted rectangles).
Discussion
Local search and LP relaxation are of course staples in the design of approximation algorithms, but are not seen as often in computational geometry. Our main contribution lies in the fusion of these approaches with combinatorial geometric techniques.
In a sense, one can view our results as complementary to the known results on approximate geometric set cover by Brönnimann and Goodrich [13] and Clarkson and Varadarajan [20]. They consider the problem of finding the minimum number of objects in F that cover a given point set. Their results imply a constant-factor approximation for families of objects with linear union complexity, for instance. One version of their approach is indeed based on LP relaxation [23, 31]. The “dual” hitting set problem is to find the minimum number of points that pierce a given set of objects. Brönnimann and Goodrich’s result, combined with a recent result of Pyrga and Ray [37], also implies a constant-factor approximation for pseudodisks for this piercing problem. The piercing problem is actually the dual of the independent set problem (this time, we are referring to linear programming duality). We remark, however, that the rounding schemes for set cover and piercing are based on a different combinatorial technique, namely ε-nets, which is not sufficient to deal with independent set (one obvious difference is that independent set is a maximization problem).
In Theorem 4.6, we point out a combinatorial consequence of our LP analysis: for any collection of unweighted pseudodisks, the ratio of the size of the minimum piercing set to the size of maximum independent set is at most a constant. (It is easy to see that the ratio is always at least 1; for disks or fat objects, it is not difficult to obtain a constant upper bound by packing arguments.) This result is of independent interest; for example, getting tight bounds on the ratio for axisparallel rectangles is a longstanding open problem.
In an interesting independent development, Mustafa and Ray [35] have recently applied the local search paradigm to obtain a PTAS for the geometric set cover problem for (unweighted) pseudodisks and admissible regions.
2 Preliminaries
In the following, we have a set F of n objects in the plane, such that the union complexity of any subset X⊆F is bounded by ϱ|X|, where ϱ is a constant. Here, the union complexity of X is the number of arcs on the boundary of the union of the objects of X. Let \(\mathcal{A}(\mathsf{F})\) denote the arrangement of F, and let \(\mathcal{V}\) denote the set of vertices of \(\mathcal{A}(\mathsf{F})\).
In the following, we assume that deciding if two objects intersect takes constant time.
3 Approximation by Local Search: Unweighted Case
3.1 The Algorithm
In the unweighted case, we may assume that no object is fully contained in another.
We say that a subset L of F is b-locally optimal if L is an independent set and one cannot obtain a larger independent set from L by deleting at most b objects from L and inserting at most b+1 objects of F.
Our algorithm for the unweighted case simply returns a b-locally optimal solution for a suitable constant b, by performing a local search. We start with L←∅. For every subset X⊆F∖L of size at most b+1, we verify that X by itself is independent, and, furthermore, that the set Y⊆L of objects intersecting the objects of X is of size at most |X|−1. If so, we set L←(L∖Y)∪X. Every such exchange increases the size of L by at least one, and as such it can happen at most n times. Naively, there are \(\binom{n}{b+1}\) subsets X to consider, and for each such subset X it takes O(nb) time to compute Y. Therefore, the running time is bounded by O(n ^{ b+3}). (The running time can probably be improved with a more careful implementation.)
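A direct (and deliberately naive) transcription of this exchange loop might look as follows; the function name is ours, and `intersects` is an abstract predicate supplied by the caller, so the code needs only the intersection graph, not the geometry:

```python
from itertools import combinations

def b_local_search(objects, intersects, b):
    """Return a b-locally optimal independent set: try every candidate
    set X of at most b+1 pairwise-independent objects outside L, and
    swap it in whenever the set Y of objects of L conflicting with X
    has size at most |X|-1.  Each swap grows L, so the loop ends."""
    L = set()
    improved = True
    while improved:
        improved = False
        outside = [o for o in objects if o not in L]
        for size in range(1, b + 2):
            for X in combinations(outside, size):
                if any(intersects(f, g) for f, g in combinations(X, 2)):
                    continue  # X itself is not independent
                Y = {g for g in L if any(intersects(f, g) for f in X)}
                if len(Y) <= len(X) - 1:
                    L = (L - Y) | set(X)
                    improved = True
                    break
            if improved:
                break
    return L
```

For instance, with closed intervals as the objects and overlap as the predicate, `b_local_search([(0, 10), (0, 3), (4, 7), (8, 11)], lambda a, b: a[0] <= b[1] and b[0] <= a[1], 1)` first picks the long interval and then swaps it out for the three disjoint short ones.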
3.2 Analysis
We present two alternative ways to analyze this algorithm. The first approach uses only the fact that the union complexity is low. The second approach is more direct, and uses the property that the regions are admissible.
3.2.1 Analysis Using Union Complexity
The following lemma by Afshani and Chan [1], which was originally intended for different purposes, will turn out to be useful here (the proof exploits linearity of planar graphs and the Clarkson technique [19]):
Lemma 3.1
Suppose we have n disjoint simply connected regions in the plane and a collection of disjoint curves, where each curve intersects at most k regions. Call two curves equivalent if they intersect precisely the same subset of regions. Then the number of equivalence classes is at most c _{0} nk ^{2}, for some constant c _{0}.
Let \(\mathsf{O}\) be an optimal solution, and let L be a b-locally optimal solution. We will upper bound \(\vert\mathsf{O}\vert\) in terms of \(\vert\mathsf{L}\vert\).
Let \(\mathsf{O}'\) denote the set of objects in \(\mathsf{O}\) that intersect at least b+1 objects of L, and let \(\mathsf{O}''\) be the set of remaining objects in \(\mathsf{O}\).
On the other hand, by applying Lemma 3.1 with L as the regions and the boundaries of the objects of \(\mathsf{O}''\) as the curves, the objects in \(\mathsf{O}''\) form at most c _{0} b ^{2}|L| equivalence classes. Each equivalence class contains at most b objects: otherwise we would be able to remove b objects from L and insert b+1 objects from this equivalence class, obtaining an independent set larger than L. This would contradict the b-local optimality of L. Thus, \(\vert\mathsf{O}''\vert \le c_0 b^3 \vert\mathsf{L}\vert\).
Theorem 3.2
Given a set of n unweighted objects in the plane with linear union complexity, for a sufficiently large constant b, any blocally optimal independent set has size \(\varOmega(\operatorname{opt})\), where \(\operatorname{opt}\) is the size of the maximum independent set of the objects.
3.2.2 Better Analysis for Admissible Regions
A set of regions F is admissible if, for any two regions f,f′∈F, both f∖f′ and f′∖f are simply connected (i.e., connected and without holes). Note that we do not care how many times the boundaries of the two regions intersect; furthermore, by this definition, no region is contained inside another.
Lemma 3.3
Let F be a set of admissible regions, consider an independent set of regions I⊆F, and let f∈F∖I be a region. Then the core region f∖I=f∖⋃_{ g∈I } g is nonempty and simply connected.
Proof
It is easy to verify that for the regions of I to split f into two connected components, they must intersect, which contradicts their disjointness. □
Lemma 3.4
Let X,Y⊆F be two independent sets of regions. Then the intersection graph G of X∪Y is planar.
Proof
For every pair of intersecting regions f∈X and g∈Y, place a point p _{ f,g } at an intersection point of the boundaries of f and g (such a point exists, since neither region contains the other). For every region f∈X, we place a vertex v _{ f } inside f, and connect it to all the points p _{ f,g } placed on its boundary, by curves that are contained in f, and are interior disjoint. Similarly, for every region g∈Y, we place a vertex v _{ g } inside g, and connect it to all the points p _{ f,g } placed on its boundary, by curves that are contained in g, and are interior disjoint. Clearly, together, these vertices and curves form a planar drawing of G. □
We need the following version of the planar separator theorem. Below, for a set of vertices U in a graph G, let Γ(U) denote the set of neighbors of U, and let \(\overline{\varGamma} ({U} ) =\varGamma ({U} ) \cup U\).
Lemma 3.5
[25]
There are constants c _{1}, c _{2} and c _{3}, such that for any planar graph G=(V,E) with n vertices, and a parameter r, one can find a set X⊆V of size at most \(c_{1}n/\sqrt{r}\), and a partition of V∖X into n/r sets V _{1},…,V _{ n/r }, satisfying: (i) |V _{ i }|≤c _{2} r, (ii) Γ(V _{ i })∩V _{ j }=∅, for i≠j, and (iii) \(\vert {\varGamma ({V_{i}} )\cap X} \vert \le c_{3}\sqrt{r} \).
Theorem 3.6
Given a set of n unweighted admissible regions in the plane, any b-locally optimal independent set has size \(\geq(1-O(1/\sqrt{b}))\operatorname{opt}\), where \(\operatorname{opt}\) is the size of the maximum independent set of the objects. In particular, one can compute an independent set of size \(\geq(1-\varepsilon)\operatorname{opt}\), in time \(n^{O(1/\varepsilon^{2})}\).
3.2.3 Analysis for Fat Objects in Any Fixed Dimension
We show that the same algorithm gives a PTAS for the case when the objects in F are fat. This result in fact holds in any fixed dimension d. For our purposes, we use the following definition of fatness: the objects in F are fat if for every axisaligned hypercube B of side length r, we can find a constant number c of points such that every object that intersects B and has diameter at least r contains one of the chosen points.
Smith and Wormald [38] proved a family of geometric separator theorems, one version of which will be useful for us and is stated below (see also [15]):
Lemma 3.7
[38]
Given a collection of n fat objects in a fixed dimension d with constant maximum depth, there exists an axisaligned hypercube B such that at most 2n/3 objects are inside B, at most 2n/3 objects are outside B, and at most O(n ^{1−1/d }) objects intersect the boundary of B.
We need the following extension of Smith and Wormald’s separator theorem to multiple clusters (whose proof is similar to the extension of the standard planar separator theorem in [25]):
Lemma 3.8
There are constants c _{1}, c _{2}, c _{3} and c _{4}, such that for any intersection graph G=(V,E) of n fat objects in a fixed dimension d with constant maximum depth, and a parameter r, one can find a set X⊆V of size at most c _{1} n/r ^{1/d }, and a partition of V∖X into n/r sets V _{1},…,V _{ n/r }, satisfying: (i) |V _{ i }|≤c _{2} r, (ii) Γ(V _{ i })∩V _{ j }=∅, for i≠j, (iii) ∑_{ i }|Γ(V _{ i })∩X|≤c _{3} n/r ^{1/d }, and (iv) |Γ(V _{ i })∩X|≤c _{4} r.
Proof
Assume that all objects are unmarked initially. We describe a recursive procedure for a given set S of objects. If |S|≤c _{2} r, then S is a “leaf” subset and we stop the recursion. Otherwise, we apply Lemma 3.7. Let S′ and S″ be the subsets of all objects inside and outside the separator hypercube B, respectively. Let \(\widehat{S}\) be the subset of all objects intersecting the boundary of B. We mark the objects in \(\widehat{S}\) and recursively run the procedure on the subset \(S'\cup\widehat{S}\) and on the subset \(S''\cup\widehat{S}\).
Note that some objects may be marked more than once. Let X be the set of all objects that have been marked at least once. For each leaf subset S _{ i }, generate a subset V _{ i } of all unmarked objects in S _{ i }. Property (i) is obvious. Properties (ii) and (iv) hold, because the unmarked objects in each leaf subset S _{ i } can only intersect objects within S _{ i } and cannot intersect unmarked objects in other S _{ j }’s.
Let \(\mathsf{O}\) be the optimal solution and L be a b-locally optimal solution. Consider the bipartite intersection graph G of \(\mathsf{O}\cup\mathsf{L}\), which has maximum depth 2. We proceed as in the proof from Sect. 3.2.2, using Lemma 3.8 instead of Lemma 3.5. Note that (iii)–(iv) are weaker properties but are sufficient for the same proof to go through. The only differences are that square roots are replaced by dth roots, and we now set r=b/(c _{2}+c _{4}), so that \(\vert {\overline{\varGamma}({V_{i}} )} \vert \leq c_{2}r + c_{4}r \le b\). We conclude:
Theorem 3.9
Given a set of n fat objects in a fixed dimension d, any b-locally optimal independent set has size \(\geq(1-O(1/b^{1/d}))\operatorname{opt}\), where \(\operatorname{opt}\) is the size of the maximum independent set of the objects. In particular, one can compute an independent set of size \(\geq(1-\varepsilon)\operatorname{opt}\), in time \(n^{O(1/\varepsilon^{d})}\).
4 Approximation by LP Relaxation: Weighted Case
4.1 The Algorithm
In the following, x _{ i } will refer to the value assigned to this variable by the solution of the LP. Similarly, Opt=∑_{ i } w _{ i } x _{ i } will denote the weight of the relaxed optimal solution, which is at least the weight \(\operatorname{opt}\) of the optimal integral solution.
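For concreteness, the LP relaxation referred to above can be written in the following standard form; this is a sketch consistent with the surrounding discussion, under the assumption of one constraint per intersecting pair of objects (the paper's Eq. (1) may instead impose one constraint per vertex of the arrangement):

```latex
\begin{align*}
\max\quad & \sum_{i=1}^{n} w_i x_i\\
\text{s.t.}\quad & x_i + x_j \le 1
  && \text{for all pairs } f_i, f_j \in \mathsf{F}
     \text{ with } f_i \cap f_j \neq \emptyset,\\
& 0 \le x_i \le 1 && \text{for } i = 1, \dots, n.
\end{align*}
```

Any integral independent set yields a feasible 0/1 assignment, so the optimum of this LP is indeed at least \(\operatorname{opt}\).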
We will assume, for the time being, that no two objects of F fully contain each other.
The algorithm starts with an empty candidate set C and an empty independent set I, and scans the objects according to the permutation in reverse order. At the ith stage, the algorithm first decides whether to put the object π _{ n−i } into C, by flipping a coin that comes up heads with probability x(π _{ n−i })/τ, where x(π _{ n−i }) is the value assigned by the LP to the object π _{ n−i } and τ is a parameter to be determined shortly. If π _{ n−i } is put into C, then we further check whether π _{ n−i } intersects any of the objects already added to the independent set I. If it does not intersect any object in I, the algorithm adds π _{ n−i } to I and continues to the next iteration.
At the end of the execution, the set I is returned as the desired independent set.
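The scanning-and-rounding step above can be sketched as follows; the function name is ours, and the choice of the permutation (by resistance) and of τ is specified in the analysis, so both are taken here as inputs:

```python
import random

def round_lp_solution(objs, x, order, intersects, tau, seed=0):
    """Contention-resolution rounding sketch: scan the objects in the
    given order, in reverse; admit object i into the candidate set C
    with probability x[i]/tau, then keep it in the independent set I
    only if it conflicts with nothing kept so far.  Returns the list
    of chosen indices."""
    rng = random.Random(seed)
    I = []
    for i in reversed(order):
        if rng.random() < x[i] / tau:      # i enters the candidate set C
            if all(not intersects(objs[i], objs[j]) for j in I):
                I.append(i)                # i survives into I
    return I
```

With x(π _{ i })/τ = 1 for every object, the sketch degenerates to a deterministic reverse-order greedy, which is convenient for sanity checks.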
4.2 Analysis
Let F be a set of n objects in the plane, and let u(m) be the maximum union complexity of any m≤n objects of F. Furthermore, we assume that u(⋅) is a monotone increasing function which is well behaved; namely, u(n)/n is a nondecreasing function, and there exists a constant c, such that u(xr)≤c u(r), for any r and 1≤x≤2. In the following, a vertex p of \(\mathcal{A}(\mathsf{F})\) formed by the intersection of the boundaries of the ith and jth objects is denoted by (p,i,j).
The key to our analysis lies in the following inequality, which we prove by adapting the Clarkson technique [19].
Lemma 4.1
Let H be any subset of F. Then \(\sum x_i x_j = O(\mathsf{u}(X))\), where the sum is over all vertices (p,i,j) of the arrangement \(\mathcal{A}(\mathsf{H})\), and \(X = \sum_{f_i \in \mathsf{H}} x_i\).
Proof
Lemma 4.2
For any i, the resistance of the ith object π _{ i } (as defined by Eq. (2)) is O(u(n)/n).
Proof
Lemma 4.3
For a sufficiently large constant c, setting τ=c⋅u(n)/n, the algorithm in Sect. 4.1 outputs in expectation an independent set of weight Ω((n/u(n))Opt).
Proof
4.3 Remarks
Variant
In the conference version of this paper [17], we proposed a different variant of the algorithm: instead of ordering the objects by increasing resistance, we ordered them by decreasing weight. An advantage of the resistance-based algorithm is that it is oblivious to (i.e., does not look at) the input weights. This feature is shared, for example, by Varadarajan’s recent algorithm for weighted geometric set cover via “quasi-random sampling” [39]. Another advantage of the resistance-based algorithm is its extendibility to other settings; see Sect. 4.6.
Derandomization
The variance of the weight of the returned independent set I could be high, but fortunately the algorithm can be derandomized by the standard method of conditional probabilities/expectations [34]. To this end, observe that the above analysis provides us with a constructive way to estimate the weight of the generated solution. It is now straightforward to decide, for each region, whether or not to include it in the generated solution, using conditional probabilities. Indeed, for each object we compute the conditional expected weight of the solution if the object is included and if it is not, and pick the option with the higher value.
Coping with Object Containment
Time to Solve the LP
This LP is a packing LP with O(n ^{2}) inequalities and n variables. As such, it can be (1+ε)-approximated in O(n ^{3}+ε ^{−2} n ^{2}logn)=O(n ^{3}) time by a randomized algorithm that succeeds with high probability [29]. For our purposes, it is sufficient to set ε to be a sufficiently small constant, say ε=10^{−4}.
We have thus proved:
Theorem 4.4
Given a set of n weighted objects in the plane with union complexity O(u(n)), one can compute an independent set of total weight \(\varOmega( (n / \mathsf{u}({n} )) \operatorname{opt})\), where \(\operatorname{opt}\) is the maximum weight over all independent sets of the objects. The running time of the randomized algorithm is O(n ^{3}), and polynomial for the deterministic version.
The running time of the deterministic algorithm of Theorem 4.4 is dominated by the time it takes to deterministically solve (approximately) the LP. One can use the ellipsoid algorithm to this end, but faster algorithms are known; see [29] and references therein.
Corollary 4.5
Given a set of n weighted pseudodisks in the plane, one can compute, in O(n ^{3}) time, a constantfactor approximation to the maximumweight independent set of pseudodisks.
Theorem 4.4 can be applied to cases where the union complexity is low. Even in the case of fat objects, where PTASs are known [15, 22], the above approach is still interesting as it can be extended to more general settings, as noted in Sect. 4.6.
4.4 A Combinatorial Result: Piercing Number
In the unweighted case, we obtain the following result as a byproduct:
Theorem 4.6
Given a set of n pseudodisks in the plane, let \(\operatorname{opt}\) be the size of the maximum independent set and let \(\operatorname{opt}'\) be the size of the minimum set of points that pierce all the pseudodisks. Then \(\operatorname{opt}=\varOmega(\operatorname{opt}')\).
Proof
By the preceding analysis, we have \(\operatorname{opt}=\varOmega(\mathrm{Opt})\), i.e., the integrality gap of our LP is a constant. (Here, all the weights are equal to 1.)
To conclude, observe that the two LPs are precisely the dual of each other, and so Opt=Opt′. □
4.5 A Discrete Version of the Independent Set Problem
We now show that our algorithm can be extended to solve a variant of the independent set problem where we are given not only a set F of n weighted objects but also a set P of m points. The goal is to select a maximum-weight subset \(\mathsf{F}'\subseteq\mathsf{F}\) such that each point p∈P is contained in at most one object of \(\mathsf{F}'\). (The original problem corresponds to the case where P is the entire plane.) Unlike in the original independent set problem, it is not clear if local search yields a good approximation here, even in the unweighted case.
We can use the same LP as in Sect. 4.1 to solve this problem, except that we now have a constraint for each p∈P instead of each vertex of \(\mathcal{A}(\mathsf{F})\). In the rest of the algorithm and analysis, we just reinterpret “f _{ i }∩f _{ j }≠∅” to mean “f _{ i }∩f _{ j }∩P≠∅”.
Lemma 4.1 is now replaced by the following.
Lemma 4.7
Let H be any subset of F. Then \(\sum x_i x_j = O(\mathsf{u}(X))\), where the sum is over all pairs f _{ i },f _{ j }∈H with f _{ i }∩f _{ j }∩P≠∅, and \(X = \sum_{f_i \in \mathsf{H}} x_i\).
Proof
Consider a random sample R of H, where each object f _{ i } is picked with probability x _{ i }/2. Let \(\varXi\) be the set of cells in the vertical decomposition of the complement of the union of R. For a cell \(\varDelta\in\varXi\), let \(x_{\varDelta}= \sum_{f_{i} \in \mathsf{H}, f_{i} \cap\mathrm{int}(\varDelta) \ne\emptyset} x_{i}\) be the total energy of the objects of H that intersect the interior of Δ. A minor modification of the analysis of Clarkson [19] implies that, for any constant c, the expected value of \(\sum_{\varDelta\in\varXi} x_{\varDelta}^{c}\) is \(O(\mathsf{u}(X))\).
The above proof is inspired by a proof from [3]. There is an alternative argument based on shallow cuttings, but the known proof for the existence of such cuttings requires a more complicated sampling analysis [33].
The rest of the analysis then goes through unchanged. We therefore obtain an O(1)-approximation algorithm for the discrete independent set problem for unweighted or weighted pseudodisks in the plane.
4.6 Contention Resolution and Submodular Functions
The algorithm of Theorem 4.4 can be interpreted as a contention resolution scheme; see Chekuri et al. [18] for details. The basic idea is that, given a feasible fractional solution x∈[0,1]^{ n }, a contention resolution scheme scales down every coordinate of x (by some constant b) such that, given a random sample C of the objects according to x (i.e., the ith object f _{ i } is picked with probability bx _{ i }), the contention resolution scheme computes (in our case) an independent set I such that Pr[f _{ i }∈I∣f _{ i }∈C]≥c, for some positive constant c. The proof of Lemma 4.3 implies exactly this property in our case.
As such, we can apply the results of Chekuri et al. [18] to our setting. In particular, they show that one can obtain a constant-factor approximation to the optimal solution when considering independence constraints and a submodular target function. Intuitively, submodularity captures the diminishing-returns nature of many optimization problems. Formally, a function g:2^{ F }→ℝ is submodular if g(X∪Y)+g(X∩Y)≤g(X)+g(Y), for any X,Y⊆F.
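For small ground sets, the submodularity condition just stated can be verified by brute force; the following helper (our own, purely illustrative) checks the inequality over all pairs of subsets:

```python
from itertools import combinations

def is_submodular(ground, g):
    """Brute-force check of g(X | Y) + g(X & Y) <= g(X) + g(Y) over
    all pairs of subsets of a (small) ground set.  A tiny tolerance
    absorbs floating-point noise."""
    subsets = []
    for k in range(len(ground) + 1):
        subsets.extend(frozenset(c) for c in combinations(ground, k))
    return all(g(X | Y) + g(X & Y) <= g(X) + g(Y) + 1e-12
               for X in subsets for Y in subsets)
```

For example, a coverage function (size of the union of the areas covered by the chosen antennas) passes this check, while the supermodular function |S|^2 fails it.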
As a concrete example, consider a situation where each object in F represents the coverage area of a single antenna. If a point is contained inside such an object, it is fully serviced. However, even if it is not contained in an object, it might get some reduced coverage from the closest object in the chosen set. In particular, let ν(r) be a coverage function specifying the amount of coverage a point gets if it is at distance r from the closest object in the current set I; we assume that ν(⋅) is a monotone decreasing function. Because of interference between antennas, we require that the regions these antennas represent do not intersect (i.e., the set of antennas chosen needs to be an independent set).
Lemma 4.8
Proof
The proof is not hard and is included for the sake of completeness. For a point p∈P, it is sufficient to prove that the function ν(d _{ p }(H)) is submodular, as α _{ P }(H) is just the sum of these functions, and a sum of submodular functions is submodular.

If ℓ _{ f }≤ℓ _{ y }≤ℓ _{ x }, then Eq. (3) becomes ν(ℓ _{ f })−ν(ℓ _{ x })≥ν(ℓ _{ f })−ν(ℓ _{ y }), which holds since ν is decreasing and ℓ _{ y }≤ℓ _{ x }.

If ℓ _{ y }≤ℓ _{ f }≤ℓ _{ x }, then Eq. (3) becomes ν(ℓ _{ f })−ν(ℓ _{ x })≥ν(ℓ _{ y })−ν(ℓ _{ y })=0, which is equivalent to ν(ℓ _{ f })≥ν(ℓ _{ x }). This in turn holds by the decreasing monotonicity of ν.

If ℓ _{ y }≤ℓ _{ x }≤ℓ _{ f }, then Eq. (3) becomes 0=ν(ℓ _{ x })−ν(ℓ _{ x })≥ν(ℓ _{ y })−ν(ℓ _{ y })=0.
To apply the framework of Chekuri et al. [18], we need the following ingredients:
(A) The target function is indeed submodular and can be computed efficiently. This is Lemma 4.8.
(B) There is an LP that solves the fractional problem (and whose polytope contains the optimal integral solution). This is just the original LP; see Eq. (1).
(C) Our rounding (i.e., contention resolution) scheme is still applicable in this case. This follows by Lemma 4.3.
We thus get the following.
Problem 4.9
(Partial coverage.) Given a set P of points in the plane, a set F of unweighted objects, and a monotone decreasing coverage function ν(⋅), compute an independent set I⊆F maximizing the total coverage α _{ P }(I).
Theorem 4.10
Given a set of n points in the plane and a set of m unweighted objects in the plane with union complexity O(u(n)), one can compute, in polynomial time, an independent set that provides an Ω(n/u(n))-approximation to the optimal solution of the partial coverage problem.
Observe that the above algorithm applies to any pricing function that is submodular. In particular, one can easily encode weights for the ranges, or other similar considerations, into this function.
5 Weighted Rectangles
5.1 The Algorithm
For the (original) independent set problem in the case of weighted axis-aligned rectangles, we can solve the same LP, where the set P of points contains both the intersection points and the corners of the given rectangles.
Define two subgraphs G _{1} and G _{2} of the intersection graph: if the boundaries of f _{ i } and f _{ j } intersect zero or two times, put the edge f _{ i } f _{ j } in G _{1}; if the boundaries intersect four times (i.e., the two rectangles cross), put f _{ i } f _{ j } in G _{2}.
We first extract an independent set I of G _{1} using the algorithm of Theorem 4.4.
It is well known (e.g., see [8]) that G _{2} forms a perfect graph (specifically, a comparability graph), so we can find a Δ-coloring of the rectangles of I in G _{2}, where Δ denotes the maximum clique size, i.e., the maximum depth of the arrangement of the rectangles of I. Let I′ be the color subclass of I of the largest total weight. Clearly, the objects in I′ are independent, and we output this set.
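To make the coloring step concrete, here is an illustrative Python sketch (a toy, not the paper's implementation; the rectangle representation is an assumption). Pairwise-crossing axis-aligned rectangles are linearly ordered by strict x-interval containment, so coloring each rectangle by the length of the longest chain of crossing rectangles ending at it (Mirsky's theorem) uses exactly Δ colors, where Δ is the longest such chain:

```python
def crosses_below(a, b):
    # a "crosses below" b: a's x-interval strictly inside b's, and b's
    # y-interval strictly inside a's, so their boundaries meet four times.
    (ax1, ax2, ay1, ay2), (bx1, bx2, by1, by2) = a, b
    return bx1 < ax1 and ax2 < bx2 and ay1 < by1 and by2 < ay2

def mirsky_coloring(rects):
    # Rectangles are (x1, x2, y1, y2). Process in order of increasing x-width
    # so that every rectangle crossing below i is colored before i.
    order = sorted(range(len(rects)), key=lambda i: rects[i][1] - rects[i][0])
    color = {}
    for i in order:
        below = [color[j] for j in order
                 if j in color and crosses_below(rects[j], rects[i])]
        color[i] = 1 + max(below, default=0)  # height in the partial order
    return color

# Three pairwise-crossing rectangles (nested "plus" shapes): three colors.
rects = [(4, 6, 0, 10), (3, 7, 1, 9), (2, 8, 2, 8)]
print(mirsky_coloring(rects))  # → {0: 1, 1: 2, 2: 3}
```

Any two crossing rectangles are comparable in this order and therefore receive distinct colors, so each color class is an independent set of G _{2}.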
5.2 Analysis
Applying the same analysis as before, we conclude that the expected total weight of I (i.e., the independent set in G _{1}) computed by the algorithm is Ω(Opt).
To analyze I′, we need a new lemma that bounds the maximum depth Δ:
Lemma 5.1
Δ=O(log n/log log n) with probability at least 1−1/n.
Proof
(Sketch; the full argument is a standard concentration bound.) The depth of I at any fixed point is at most the depth of the random sample used in the rounding, which is a sum of independent indicator variables whose total expectation is O(1) by the LP constraints of Eq. (1). By a Chernoff bound, this depth exceeds c log n/log log n with probability at most 1/n ^{3}, for a suitable constant c, and a union bound over the O(n ^{2}) vertices of the arrangement of the rectangles implies the claim. □
5.3 Remarks
Derandomization
The randomized rounding above can be derandomized by standard techniques, such as the method of conditional expectations, at a polynomial overhead in the running time.
We have thus proved:
Theorem 5.2
Given a set of n weighted axis-aligned boxes in the plane, one can compute in polynomial time an independent set of total weight \(\varOmega(\log\log n/\log n)\cdot\operatorname{opt}\), where \(\operatorname{opt}\) is the maximum weight over all independent sets of the objects.
Higher Dimensions
By a standard divide-and-conquer method [4], we get an approximation factor of O(log^{ d−1} n/log log n) for weighted axis-aligned boxes in any constant dimension d.
Acknowledgements
We thank Esther Ezra for discussions on the discrete version of the independent set problem considered in Sect. 4.3. The somewhat cleaner presentation in the paper, compared to the preliminary version [17], was suggested by Chandra Chekuri. The results of Sect. 4.6 were inspired by discussions with Chandra Chekuri. We also thank the anonymous referees for their comments.
References
 1. Afshani, P., Chan, T.M.: Dynamic connectivity for axis-parallel rectangles. In: Proc. 14th European Sympos. Algorithms. Lecture Notes Comput. Sci., vol. 4168, pp. 16–27 (2006)
 2. Agarwal, P.K., Mustafa, N.H.: Independent set of intersection graphs of convex objects in 2D. Comput. Geom. Theory Appl. 34(2), 83–95 (2006)
 3. Agarwal, P.K., Aronov, B., Chan, T.M., Sharir, M.: On levels in arrangements of lines, segments, planes, and triangles. Discrete Comput. Geom. 19, 315–331 (1998)
 4. Agarwal, P.K., van Kreveld, M., Suri, S.: Label placement by maximum independent set in rectangles. Comput. Geom. Theory Appl. 11, 209–218 (1998)
 5. Agarwal, P.K., Pach, J., Sharir, M.: State of the union—of geometric objects. In: Goodman, J.E., Pach, J., Pollack, R. (eds.) Surveys in Discrete and Computational Geometry Twenty Years Later. Contemporary Mathematics, vol. 453, pp. 9–48. AMS, Providence (2008)
 6. Alon, N., Spencer, J.H.: The Probabilistic Method, 2nd edn. Wiley-Interscience, New York (2000)
 7. Arora, S.: Polynomial time approximation schemes for Euclidean TSP and other geometric problems. J. Assoc. Comput. Mach. 45(5), 753–782 (1998)
 8. Asplund, E., Grünbaum, B.: On a coloring problem. Math. Scand. 8, 181–188 (1960)
 9. Baker, B.S.: Approximation algorithms for NP-complete problems on planar graphs. J. Assoc. Comput. Mach. 41, 153–180 (1994)
 10. Bar-Yehuda, R., Bendel, K., Freund, A., Rawitz, D.: Local ratio: a unified framework for approximation algorithms. In memoriam: Shimon Even 1935–2004. ACM Comput. Surv. 36, 422–463 (2004)
 11. Berman, P., Fujito, T.: On approximation properties of the independent set problem for low degree graphs. Theory Comput. Syst. 32(2), 115–132 (1999)
 12. Berman, P., DasGupta, B., Muthukrishnan, S., Ramaswami, S.: Efficient approximation algorithms for tiling and packing problems with rectangles. J. Algorithms 41, 443–470 (2001)
 13. Brönnimann, H., Goodrich, M.T.: Almost optimal set covers in finite VC-dimension. Discrete Comput. Geom. 14, 263–279 (1995)
 14. Chalermsook, P., Chuzhoy, J.: Maximum independent set of rectangles. In: Proc. 20th ACM-SIAM Sympos. Discrete Algorithms, pp. 892–901 (2009)
 15. Chan, T.M.: Polynomial-time approximation schemes for packing and piercing fat objects. J. Algorithms 46(2), 178–189 (2003)
 16. Chan, T.M.: A note on maximum independent sets in rectangle intersection graphs. Inf. Process. Lett. 89, 19–23 (2004)
 17. Chan, T.M., Har-Peled, S.: Approximation algorithms for maximum independent set of pseudodisks. In: Proc. 25th Annu. ACM Sympos. Comput. Geom., pp. 333–340 (2009). cs.uiuc.edu/~sariel/papers/08/w_indep
 18. Chekuri, C., Vondrák, J., Zenklusen, R.: Submodular function maximization via the multilinear relaxation and contention resolution schemes. In: Proc. 43rd Annu. ACM Sympos. Theory Comput. (2011, to appear)
 19. Clarkson, K.L., Shor, P.W.: Applications of random sampling in computational geometry, II. Discrete Comput. Geom. 4, 387–421 (1989)
 20. Clarkson, K.L., Varadarajan, K.R.: Improved approximation algorithms for geometric set cover. Discrete Comput. Geom. 37(1), 43–58 (2007)
 21. Efrat, A., Katz, M.J., Nielsen, F., Sharir, M.: Dynamic data structures for fat objects and their applications. Comput. Geom. Theory Appl. 15, 215–227 (2000)
 22. Erlebach, T., Jansen, K., Seidel, E.: Polynomial-time approximation schemes for geometric intersection graphs. SIAM J. Comput. 34(6), 1302–1323 (2005)
 23. Even, G., Rawitz, D., Shahar, S.: Hitting sets when the VC-dimension is small. Inf. Process. Lett. 95(2), 358–362 (2005)
 24. Fox, J., Pach, J.: Computing the independence number of intersection graphs. In: Proc. 22nd ACM-SIAM Sympos. Discrete Algorithms, pp. 1161–1165 (2011)
 25. Frederickson, G.N.: Fast algorithms for shortest paths in planar graphs, with applications. SIAM J. Comput. 16(6), 1004–1022 (1987)
 26. Halldórsson, M.M.: Approximations of independent sets in graphs. In: Proc. 2nd Intl. Workshop Approx. Algs. Combin. Opt. Problems, pp. 1–13 (1998)
 27. Håstad, J.: Clique is hard to approximate within n ^{1−ε}. Acta Math. 182, 105–142 (1999)
 28. Kedem, K., Livne, R., Pach, J., Sharir, M.: On the union of Jordan regions and collision-free translational motion amidst polygonal obstacles. Discrete Comput. Geom. 1, 59–71 (1986)
 29. Koufogiannakis, C., Young, N.E.: Beating simplex for fractional packing and covering linear programs. In: Proc. 48th Annu. IEEE Sympos. Found. Comput. Sci., pp. 494–506 (2007)
 30. Lipton, R.J., Tarjan, R.E.: A separator theorem for planar graphs. SIAM J. Appl. Math. 36, 177–189 (1979)
 31. Long, P.M.: Using the pseudo-dimension to analyze approximation algorithms for integer programming. In: Proc. 7th Workshop Algorithms Data Struct. Lecture Notes Comput. Sci., vol. 2125, pp. 26–37 (2001)
 32. Marathe, M.V., Breu, H., Hunt, H.B. III, Ravi, S.S., Rosenkrantz, D.J.: Simple heuristics for unit disk graphs. Networks 25, 59–68 (1995)
 33. Matoušek, J.: Reporting points in halfspaces. Comput. Geom. Theory Appl. 2(3), 169–186 (1992)
 34. Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press, New York (1995)
 35. Mustafa, N.H., Ray, S.: PTAS for geometric hitting set problems via local search. In: Proc. 25th Annu. ACM Sympos. Comput. Geom., pp. 17–22 (2009)
 36. Nieberg, T., Hurink, J., Kern, W.: A robust PTAS for maximum weight independent set in unit disk graphs. In: Proc. 30th Int. Workshop Graph-Theoretic Concepts in Comput. Sci. Lecture Notes Comput. Sci., vol. 3353, pp. 214–221 (2005)
 37. Pyrga, E., Ray, S.: New existence proofs for ε-nets. In: Proc. 24th ACM Sympos. Comput. Geom., pp. 199–207 (2008)
 38. Smith, W.D., Wormald, N.C.: Geometric separator theorems and applications. In: Proc. 39th Annu. IEEE Sympos. Found. Comput. Sci., pp. 232–243 (1998)
 39. Varadarajan, K.: Weighted geometric set cover via quasi-uniform sampling. In: Proc. 42nd Annu. ACM Sympos. Theory Comput., pp. 641–648 (2010)
 40. Whitesides, S., Zhao, R.: k-admissible collections of Jordan curves and offsets of circular arc figures. Technical Report SOCS 90.08, McGill Univ., Montreal, Quebec (1990)