The Online Broadcast Range-Assignment Problem

Let $P=\{p_0,\ldots,p_{n-1}\}$ be a set of points in $\mathbb{R}^d$, modeling devices in a wireless network. A range assignment assigns a range $r(p_i)$ to each point $p_i\in P$, thus inducing a directed communication graph $G_r$ in which there is a directed edge $(p_i,p_j)$ iff $\textrm{dist}(p_i, p_j) \leq r(p_i)$, where $\textrm{dist}(p_i,p_j)$ denotes the distance between $p_i$ and $p_j$. The range-assignment problem is to assign the transmission ranges such that $G_r$ has a certain desirable property, while minimizing the cost of the assignment; here the cost is given by $\sum_{p_i\in P} r(p_i)^{\alpha}$, for some constant $\alpha>1$ called the distance-power gradient. We introduce the online version of the range-assignment problem, where the points $p_j$ arrive one by one, and the range assignment has to be updated at each arrival. Following the standard in online algorithms, resources given out cannot be taken away -- in our case this means that the transmission ranges will never decrease. The property we want to maintain is that $G_r$ has a broadcast tree rooted at the first point $p_0$. Our results include the following. - For $d=1$, a 1-competitive algorithm does not exist. In particular, for $\alpha=2$ any online algorithm has competitive ratio at least 1.57. - For $d=1$ and $d=2$, we analyze two natural strategies: Upon the arrival of a new point $p_j$, Nearest-Neighbor increases the range of the nearest point to cover $p_j$ and Cheapest Increase increases the range of the point for which the resulting cost increase to be able to reach $p_j$ is minimal. - We generalize the problem to arbitrary metric spaces, where we present an $O(\log n)$-competitive algorithm.


Introduction
Consider a collection of wireless devices, each with its own transmission range. The transmission ranges induce a directed communication network, in which each device $p_i$ can directly send a message to any device $p_j$ within its transmission range. If $p_j$ is not within range, a message from $p_i$ can still reach $p_j$ if there is a path from $p_i$ to $p_j$ in the communication network. The energy consumption of a device depends on its transmission range: the larger the range, the more power is needed. This leads to the range-assignment problem: assign transmission ranges to the devices such that the resulting network has some desired connectivity property, while minimizing the total power consumption.
Mathematically, we can model the problem as follows. Let $P = \{p_0, \ldots, p_{n-1}\}$ be a set of $n$ points in $\mathbb{R}^d$. For an assignment $r : P \to \mathbb{R}_{\geq 0}$, let $G_r$ be the directed graph on the vertex set $P$ obtained by putting a directed edge from a vertex $p_i$ to a vertex $p_j$ iff $\mathrm{dist}(p_i, p_j) \leq r(p_i)$, where $\mathrm{dist}(p_i, p_j)$ denotes the distance between $p_i$ and $p_j$. We call $G_r$ the communication graph on $P$ induced by the range assignment $r$. The cost of a range assignment $r$ is defined as $\mathrm{cost}_\alpha(r) := \sum_{p_i \in P} r(p_i)^\alpha$, where $\alpha \geq 1$ is called the distance-power gradient. In practice, $\alpha$ typically varies from 1 to 6 [15]. We then want to find a range assignment that minimizes the cost while ensuring that $G_r$ has some desired property. Properties that have been investigated in this context include strong connectivity [9,14], $h$-hop strong connectivity [8,10,14], broadcast capability (here $G_r$ must contain a broadcast tree, that is, an arborescence, rooted at the source point $p_0$), and $h$-hop broadcast capability [2,13]; see the survey by Clementi et al. [6] for an overview of the various range-assignment problems. Most previous work considered the Euclidean setting. There has been some work on arbitrary metric spaces for the strong-connectivity version [4,12]. (Note that while the 2-dimensional version seems the most relevant setting, the distances may not be Euclidean due to obstacles that reduce the strength of a device's signal.)
In this paper we focus on the broadcast version of the range-assignment problem. This version can be solved optimally in a trivial manner when $\alpha = 1$, by setting $r(p_0) := \max_{0 \leq i < n} \mathrm{dist}(p_0, p_i)$ and $r(p_i) := 0$ for $i > 0$. Clementi et al. [7] gave a polynomial-time algorithm for the 1-dimensional problem when $\alpha \geq 2$. Moreover, Clementi et al. [5] showed that the problem is NP-hard for any $\alpha > 1$ and any $d \geq 2$. Clementi et al. [7], Clementi et al. [5], and Wan et al. [17] also showed that the problem can be approximated within a factor $c \cdot 2^\alpha$ for any $\alpha \geq 2$ and a certain constant $c$. Furthermore, Clementi et al. [5] showed that for any $d \geq 2$ and any $\alpha \geq d$, there is a function $f : \mathbb{N} \times \mathbb{R} \to \mathbb{R}$ such that the problem can be approximated within a factor $f(d, \alpha)$ in $d$-dimensional Euclidean space. Fuchs [11] showed that for $d = 2$ the problem remains NP-hard even for so-called well-spread instances, for any $\alpha > 1$. For dimension $d \geq 3$, he also showed that the problem is NP-hard to approximate within a factor of 51/50 when $\alpha > 1$; this result also holds for well-spread instances when $\alpha > d$.
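The model above can be made concrete with a short sketch. The function and variable names below are our own; the code checks, for a given assignment, the two quantities the problem is about: the cost $\mathrm{cost}_\alpha(r)$ and whether $G_r$ contains a broadcast tree rooted at the source (equivalently, whether every point is reachable from it).

```python
import math
from collections import deque

def dist(p, q):
    """Euclidean distance between two points given as coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def cost(points, ranges, alpha):
    """cost_alpha(r) = sum of r(p)^alpha over all points."""
    return sum(ranges[p] ** alpha for p in points)

def has_broadcast_tree(points, ranges):
    """BFS in the communication graph G_r: edge (p, q) exists iff
    dist(p, q) <= r(p).  Broadcast is possible iff every point is
    reachable from the source points[0]."""
    reached = {points[0]}
    queue = deque([points[0]])
    while queue:
        p = queue.popleft()
        for q in points:
            if q not in reached and dist(p, q) <= ranges[p]:
                reached.add(q)
                queue.append(q)
    return len(reached) == len(points)
```

For instance, on three collinear points at distance 1 apart, giving the first two points range 1 each yields a broadcast tree of cost $2$ for $\alpha = 2$, while giving only the source range 1 does not.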
Our contribution. We study the online version of the broadcast range-assignment problem. Here the points $p_0, p_1, \ldots, p_{n-1}$ arrive one by one, and the goal is to maintain a range assignment $r$ such that $G_r$ contains a broadcast tree on the currently inserted points, rooted at the first point $p_0$. Of course one could simply recompute the assignment from scratch, but in online algorithms one requires that resources that have been given out cannot be taken back. For the range-assignment problem this means that we are not allowed to decrease the range of any point. In fact, our algorithms have the useful property that upon the arrival of each point they change the current range assignment only minimally: either they do not change it at all (this happens when the newly arrived point is already within range of an existing point), or they increase the range of only a single point. Our goal is to obtain algorithms with a good competitive ratio. As far as we know, the range-assignment problem has not been studied from the perspective of online algorithms.
We first prove a lower bound on the competitive ratio achievable by any online algorithm: even in $\mathbb{R}^1$ there is a constant $c_\alpha > 1$ (which depends on the distance-power gradient $\alpha$) such that no online algorithm can be $c_\alpha$-competitive. For $\alpha = 2$, we have $c_\alpha > 1.57$.
We then investigate the following two natural online algorithms for the broadcast range-assignment problem. Suppose the point $p_j$ arrives. Our algorithms all set $r(p_j) := 0$ and, as mentioned, they do not change any of the ranges $r(p_0), \ldots, r(p_{j-1})$ if $\mathrm{dist}(p_i, p_j) \leq r(p_i)$ for some $0 \leq i < j$. When $p_j$ is not within range of an already inserted point, the algorithms increase the range of one point, as follows. Let $\mathrm{nn}(p_j)$ denote the nearest neighbor of $p_j$ in the set $\{p_0, \ldots, p_{j-1}\}$, with ties broken arbitrarily.
Nearest-Neighbor (nn for short) increases the range of $\mathrm{nn}(p_j)$ to $\mathrm{dist}(p_j, \mathrm{nn}(p_j))$. Cheapest Increase (ci for short) increases the range of $p_{i^*}$ to $\mathrm{dist}(p_{i^*}, p_j)$, where $p_{i^*}$ is a point minimizing the cost increase of the assignment, which is $\mathrm{dist}(p_{i^*}, p_j)^\alpha - r(p_{i^*})^\alpha$, where $r(p_{i^*})$ denotes the current range of $p_{i^*}$.
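The two update rules can be sketched as follows. This is a minimal illustration, not the paper's implementation; points are coordinate tuples and the helper names (`covered`, `insert_nn`, `insert_ci`) are our own.

```python
import math

def covered(p, points, ranges):
    """Is the new point p already inside some ball B(q, r(q))?"""
    return any(math.dist(q, p) <= ranges[q] for q in points)

def insert_nn(p, points, ranges):
    """Nearest-Neighbor: if p is uncovered, grow the range of its nearest
    already-inserted point just far enough to reach p."""
    if not covered(p, points, ranges):
        q = min(points, key=lambda q: math.dist(q, p))
        ranges[q] = max(ranges[q], math.dist(q, p))
    points.append(p)
    ranges[p] = 0.0

def insert_ci(p, points, ranges, alpha):
    """Cheapest Increase: grow the range of the point q minimizing the cost
    increase dist(q, p)^alpha - r(q)^alpha needed to reach p."""
    if not covered(p, points, ranges):
        q = min(points, key=lambda q: math.dist(q, p) ** alpha - ranges[q] ** alpha)
        ranges[q] = math.dist(q, p)
    points.append(p)
    ranges[p] = 0.0
```

Note that when $p$ is uncovered, every candidate satisfies $\mathrm{dist}(q, p) > r(q)$, so ci's update indeed increases the chosen range.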
Table 1: Overview of results on nn and ci, listing for each dimension and distance-power gradient the lower bound for nn and the upper bound for nn and ci.
The results are summarized in Table 1. Note that the lower bounds hold only for nn, while the upper bounds hold for both nn and ci; the exception is the third row, which is for 2-nn (see below). The lower bound of $6(1 + 0.52^\alpha)$ mentioned in the table (the exact bound is $6(1 + c^\alpha)$ for a constant $c \approx 0.52$) applies to all $\alpha > 1$, and thus implies the given lower bound for $\alpha = 2$. Recall that for $d = 1$ and $\alpha = 2$ we also have a universal lower bound of 1.57 that holds for any online algorithm and, hence, also for ci. The exact value of $\alpha^*$ is $\alpha^* = \arg\min_\alpha F^*_\alpha$, where $F^*_\alpha$ is the quantity defined in Section 3. As can be seen in the table, nn is $O(1)$-competitive for $\alpha = 2$, but the competitive ratio is quite large, namely 322. We therefore also analyze the following variant of nn, which (if $p_j$ is not yet within range of an existing point) proceeds as follows: 2-Nearest-Neighbor (2-nn for short) increases the range of $\mathrm{nn}(p_j)$ to $2 \cdot \mathrm{dist}(p_j, \mathrm{nn}(p_j))$.
We prove that the competitive ratio of 2-nn is at most 36 for $\alpha = 2$. Thus, while still rather large, the competitive ratio is a lot smaller than what we were able to prove for nn. It is interesting to note that both nn and 2-nn make decisions that are independent of $\alpha$. Hence, nn obtains a solution that is simultaneously competitive for all $\alpha \geq 2$.
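The 2-nn rule differs from nn only in the overshoot factor; a sketch (again with our own helper names, points as coordinate tuples):

```python
import math

def insert_2nn(p, points, ranges):
    """2-Nearest-Neighbor: like nn, but overshoot by a factor 2, so points
    arriving near p later are often already covered."""
    if not any(math.dist(q, p) <= ranges[q] for q in points):
        q = min(points, key=lambda q: math.dist(q, p))
        ranges[q] = max(ranges[q], 2 * math.dist(q, p))
    points.append(p)
    ranges[p] = 0.0
```

For example, inserting the 1-D points $1$ and then $2$ after a source at $0$: the first arrival grows the source's range to $2$, after which the second arrival is covered for free.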
As a final contribution we generalize the broadcast problem to points in arbitrary metric spaces. Since, to the best of our knowledge, this version has not been studied before, we present an approximation algorithm for the offline setting; its approximation ratio is $5^\alpha$. In this offline setting the algorithm must be what Boyar et al. [3] call an incremental algorithm: an algorithm that, even though it may know the future, maintains a feasible solution at any time. For the online setting (where the future is unknown) we obtain an $O(4^\alpha \log n)$-competitive algorithm.
Notation. We let $P := p_0, \ldots, p_{n-1}$ denote the input sequence, where we assume without loss of generality that $p_i$ is inserted at time $i$ and that all $p_i$ are distinct. Define $P_i := p_0, \ldots, p_i$, and denote the range of a point $p_i \in P_j$ just after the insertion of the point $p_j$ by $r_j(p_i)$. Thus in the online version we require that $r_j(p_i) \leq r_{j+1}(p_i)$. For an algorithm alg we use $\mathrm{cost}_\alpha(\textsc{alg}(P))$ to denote the cost incurred by alg on input $P$ for distance-power gradient $\alpha$. Finally, we denote the ball of radius $\rho$ centered at a point $p$ by $B(p, \rho)$; note that in $\mathbb{R}^1$ this is an interval of length $2\rho$ and in $\mathbb{R}^2$ it is a disk of radius $\rho$.

2 Online range-assignment in $\mathbb{R}^1$

In this section we prove that no online algorithm can have a competitive ratio arbitrarily close to 1, even in $\mathbb{R}^1$. We also prove bounds on the competitive ratio of nn and ci in $\mathbb{R}^1$.
A universal lower bound. To prove the lower bound we consider an arbitrary online algorithm alg. Our adversary first presents the points $p_0 = 0$, $p_1 = x$, and $p_2 := \delta_\alpha \cdot x$.
Depending on the range assignment alg has made so far, the adversary either ends the instance or presents a fourth point $p_3 = -\delta_\alpha \cdot x$. By picking a suitable value for $\delta_\alpha$ and making $x$ sufficiently large, we obtain a lower bound. This is made precise in the following theorem.
Theorem 2.1. For any distance-power gradient $\alpha > 1$, there is a constant $c_\alpha > 1$ such that any online algorithm for the range-assignment problem in $\mathbb{R}^1$ has a competitive ratio of at least $c_\alpha$. For $\alpha = 2$ this constant is $c_2 \approx 1.58$.
Proof. Let $\alpha > 1$ and let alg be an algorithm with competitive ratio $c \geq 1$, i.e., there is a constant $a$ such that the cost of alg is upper bounded by $c \cdot \mathrm{OPT} + a$. We also define
$$c_\alpha := \max_{\delta > 1} \min\left\{ \frac{\delta^\alpha}{1 + (\delta-1)^\alpha},\; \frac{\delta^\alpha + (\delta-1)^\alpha}{\delta^\alpha},\; \frac{1 + (\delta+1)^\alpha}{\delta^\alpha} \right\}$$
and $\delta_\alpha := \arg\max_{\delta > 1}$ of the same minimum; the three ratios correspond to the cases analyzed below. We show that $c \geq c_\alpha$ by constructing two families of instances, consisting of three and four points respectively, and parametrized by a real number $x \geq 1$: the family $F_1$ of instances $p_0 = 0$, $p_1 = x$, $p_2 = \delta_\alpha \cdot x$, and the family $F_2$ of instances $p_0 = 0$, $p_1 = x$, $p_2 = \delta_\alpha \cdot x$, $p_3 = -\delta_\alpha \cdot x$. Note that there is a one-to-one correspondence between the instances in the two families: each instance of $F_1$ is the beginning of exactly one instance of $F_2$, and each instance of $F_2$ starts like exactly one instance of $F_1$.
For any $x$, depending on what alg does after $p_2$ is inserted, we choose an instance from either $F_1$ or $F_2$, using the following rule: if after $p_2$ is inserted alg has a disk of radius at least $\delta_\alpha \cdot x$, we choose $F_1$; otherwise we choose $F_2$. In the former case, alg pays at least $\delta_\alpha^\alpha \cdot x^\alpha$, while the optimal solution is to place a disk of radius $x$ at $p_0$ and a disk of radius $(\delta_\alpha - 1) \cdot x$ at $p_1$, paying $x^\alpha + (\delta_\alpha - 1)^\alpha \cdot x^\alpha$. Since the competitive ratio of alg is $c$, we have
$$\delta_\alpha^\alpha \cdot x^\alpha \leq c \cdot \big(1 + (\delta_\alpha - 1)^\alpha\big) x^\alpha + a, \quad\text{hence}\quad c \geq \frac{\delta_\alpha^\alpha}{1 + (\delta_\alpha - 1)^\alpha} - \frac{a}{\big(1 + (\delta_\alpha - 1)^\alpha\big) x^\alpha}.$$
Since the second term can be made arbitrarily small by choosing $x$ large enough, $c$ must be at least $\frac{\delta_\alpha^\alpha}{1 + (\delta_\alpha - 1)^\alpha}$. In the latter case, alg has one disk of radius at least $x$ and one of radius at least $(\delta_\alpha - 1) \cdot x$ before $p_3$ is inserted. We split this case into two subcases: in the first, alg increases the radius of the disk at $p_0$, and in the second, alg increases the radius of the disk at $p_1$. The cost alg has to pay after $p_3$ has been inserted is at least $\delta_\alpha^\alpha \cdot x^\alpha + (\delta_\alpha - 1)^\alpha \cdot x^\alpha$ in the first subcase, and at least $x^\alpha + (\delta_\alpha + 1)^\alpha \cdot x^\alpha$ in the second, whereas the optimal solution in both subcases is to place a single disk of radius $\delta_\alpha \cdot x$ at $p_0$, paying $\delta_\alpha^\alpha \cdot x^\alpha$. Since the competitive ratio of alg is $c$, we have $\delta_\alpha^\alpha x^\alpha + (\delta_\alpha - 1)^\alpha x^\alpha \leq c \cdot \delta_\alpha^\alpha x^\alpha + a$ in the first subcase, and $x^\alpha + (\delta_\alpha + 1)^\alpha x^\alpha \leq c \cdot \delta_\alpha^\alpha x^\alpha + a$ in the second. Since, in both subcases, the additive term can be made arbitrarily small by choosing $x$ large enough, $c$ must be at least $\frac{\delta_\alpha^\alpha + (\delta_\alpha - 1)^\alpha}{\delta_\alpha^\alpha}$ in the first subcase, and at least $\frac{1 + (\delta_\alpha + 1)^\alpha}{\delta_\alpha^\alpha}$ in the second; otherwise there is an infinite family of instances contradicting the competitive ratio.
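The three case ratios derived above can be combined numerically. The expressions below are reconstructed from the case analysis (an assumption on our part), and the grid search is only a sketch:

```python
def case_ratio(alpha, delta):
    """Minimum of the three case ratios the adversary can force (reconstructed)."""
    r1 = delta ** alpha / (1 + (delta - 1) ** alpha)               # family F1
    r2 = (delta ** alpha + (delta - 1) ** alpha) / delta ** alpha  # F2, grow disk at p0
    r3 = (1 + (delta + 1) ** alpha) / delta ** alpha               # F2, grow disk at p1
    return min(r1, r2, r3)

# c_alpha = max over delta > 1; crude grid search for alpha = 2
c2 = max(case_ratio(2.0, 1 + k / 2000) for k in range(1, 12000))
print(round(c2, 3))   # about 1.576, i.e. c_2 ~ 1.58
```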
Therefore, the competitive ratio of alg must be at least the minimum over these cases, which is exactly $c_\alpha$. Even though it is not clear how to compute the value of $c_\alpha$ for an arbitrary fixed $\alpha > 1$, it is easy to see that it is strictly bigger than 1. For $\alpha = 2$ we have $c_2 \approx 1.58$, which is achieved for $\delta_2 \approx 4.15$.
Bounds for nn and ci. We now prove bounds on the competitive ratio of the algorithms nn and ci explained in the introduction.
Theorem 2.2. Consider the range-assignment problem in $\mathbb{R}^1$ with distance-power gradient $\alpha$.
(i) For any α > 1, the competitive ratio of ci is at most 2.
(ii) For any α > 1, the competitive ratio of nn is exactly 2.
Proof. We first prove the upper bounds. Assume without loss of generality that $p_0 = 0$. We first show that both nn and ci perform optimally for $\alpha > 1$ on any sequence $p_0, p_1, \ldots, p_{n-1}$ with $p_j \geq 0$ for all $1 \leq j < n$.
Claim. Suppose $p_0 = 0$ and $p_j \geq 0$ for all $1 \leq j < n$. Then nn and ci are optimal.
Proof. We first observe that for any point $p_j$ the following holds for the graph $G_{r_j}$ that we have after the insertion of $p_j$: for any point $p_i$ with $0 < i \leq j$ there is a path from the source $p_0$ to $p_i$ that only uses edges directed from left to right, that is, edges $(p_s, p_t)$ with $p_s < p_t$. Indeed, if the path uses an edge $(p_s, p_t)$ with $p_s > p_t$, then the subpath from $p_0$ to $p_s$ must contain an edge $(p_{s'}, p_{t'})$ with $p_{s'} \leq p_t \leq p_{t'}$, and then we can go directly from $p_{s'}$ to $p_t$. This observation implies that there exists an optimal strategy Opt such that the balls $B(p_i, r_j(p_i))$ of the currently inserted points never extend beyond the currently rightmost point, a property which holds for nn and ci as well. (Intuitively, the part of $B(p_i, r_j(p_i))$ to the right of the rightmost point is currently useless, and the part of $B(p_i, r_j(p_i))$ to the left of $p_i$ is not needed because we never need edges going to the left. Hence, we can decrease $r_j(p_i)$ until the right endpoint of $B(p_i, r_j(p_i))$ coincides with the currently rightmost point, and increase the range of $p_i$ later, as needed.) Now imagine running nn, ci, and Opt simultaneously on $P$. We claim that nn and ci do exactly the same, and that their cost increase after the insertion of any point $p_j$ is at most the cost increase of Opt. To see this, let $p_{j'}$ be the rightmost point just before inserting $p_j$. If $p_j < p_{j'}$ then nn and ci do not increase any range (since $p_{j'}$ is reachable from $p_0$, the point $p_j$ must already be reachable as well), and so the cost increase is zero. If $p_j > p_{j'}$ then nn and ci both increase the range of $p_{j'}$ from 0 to $p_j - p_{j'}$. For nn this is clear. For ci it follows from the fact that $\alpha > 1$. Indeed, increasing the range of some $p_i < p_{j'}$ gives a cost increase $(r_{j-1}(p_i) + x + (p_j - p_{j'}))^\alpha - (r_{j-1}(p_i))^\alpha$ for some $x \geq 0$. This is more than $(p_j - p_{j'})^\alpha$, since we must have $r_{j-1}(p_i) + x > 0$.
By a similar reasoning, and using that the balls of Opt do not extend beyond $p_{j'}$, we conclude that the cost increase of Opt cannot be smaller than $(p_j - p_{j'})^\alpha$. Hence, nn and ci are optimal on a sequence of non-negative points.
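The claim can be illustrated with a short sketch. Assuming the frontier argument above (interior arrivals are always covered, and nn only pays when the rightmost point advances), nn's cost on a nonnegative sequence is simply the sum of the frontier increments raised to the power $\alpha$; the function name is our own.

```python
def nn_cost_nonneg(seq, alpha):
    """Cost of nn on a 1-D sequence of nonnegative points with seq[0] = 0.
    By the argument above, arrivals left of the current rightmost point are
    already covered, so nn pays exactly gap^alpha whenever the rightmost
    point advances by gap; this matches the (incremental) optimum."""
    frontier, total = 0.0, 0.0
    for x in seq[1:]:
        if x > frontier:
            total += (x - frontier) ** alpha
            frontier = x
    return total

print(nn_cost_nonneg([0, 3, 1, 5, 4], 2))   # 3^2 + 2^2 = 13.0
```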
Next, we prove that optimality for non-negative points gives a competitive ratio of at most 2 for any input sequence $P$. Let $P^+$ and $P^-$ denote the subsequences of $P$ consisting of the non-negative and the non-positive points, respectively. Note that the source point $p_0 = 0$ is included in both subsequences. We claim that $\mathrm{cost}_\alpha(\textsc{Opt}(P)) \geq \mathrm{cost}_\alpha(\textsc{Opt}(P^+))$. Indeed, we can modify the optimal solution for $P$ into a valid solution for $P^+$ whose cost is at most $\mathrm{cost}_\alpha(\textsc{Opt}(P))$, as follows: whenever the range of a point $p_i \notin P^+$ is increased to reach a point $p_j \in P^+$, we instead increase the range of $p_0$ by the same amount. A similar argument gives $\mathrm{cost}_\alpha(\textsc{Opt}(P)) \geq \mathrm{cost}_\alpha(\textsc{Opt}(P^-))$.
Imagine running nn simultaneously on $P$, on $P^+$, and on $P^-$. We claim that the increase of $\mathrm{cost}_\alpha(\textsc{nn}(P))$ upon the arrival of a new point $p_j$ is at most the increase of $\mathrm{cost}_\alpha(\textsc{nn}(P^+))$ if $p_j > 0$, and at most the increase of $\mathrm{cost}_\alpha(\textsc{nn}(P^-))$ if $p_j < 0$. To see this, assume without loss of generality that $p_j > 0$ and suppose the increase of $\mathrm{cost}_\alpha(\textsc{nn}(P))$ is non-zero. Then $p_j$ lies to the right of the currently rightmost point $p_i$. Both $\textsc{nn}(P)$ and $\textsc{nn}(P^+)$ then increase the range of $p_i$ and pay the same cost. The only exception is when $i = 0$, that is, when $p_j$ is the first point with $p_j > 0$. In this case $\textsc{nn}(P)$ may pay less than $\textsc{nn}(P^+)$, since $\textsc{nn}(P)$ could already have increased the range of $p_0$ due to arrivals of points to the left of $p_0$.
A similar argument works for ci. Indeed, $\textsc{ci}(P^+)$ and $\textsc{ci}(P^-)$ never extend a ball beyond the currently rightmost and leftmost point, respectively. Hence, when the new point $p_j$ lies, say, to the right of the currently rightmost point $p_i$, then $\textsc{ci}(P^+)$ would pay $\mathrm{dist}(p_i, p_j)^\alpha$. Since $\textsc{ci}(P)$ also has the option to increase the range of $p_i$, it will never pay more.
It remains to prove the lower bound for part (ii) of the theorem. Assume for a contradiction that there is a constant $a$ such that for all inputs $P$ we have $\mathrm{cost}_\alpha(\textsc{nn}(P)) \leq (2 - \varepsilon) \cdot \mathrm{cost}_\alpha(\textsc{Opt}(P)) + a$. Consider the input $p_0 = 0$, $p_1 = \delta x$, $p_2 = x$, and $p_3 = -x$, for some $\delta \in (0, 1]$ and $x > 0$ to be determined later. The optimal solution has $r_3(p_0) = x$ and $r_3(p_1) = r_3(p_2) = r_3(p_3) = 0$, while nn has $r_3(p_0) = x$, $r_3(p_1) = (1 - \delta)x$, and $r_3(p_2) = r_3(p_3) = 0$. Hence, the ratio that nn achieves on this instance is
$$\frac{x^\alpha + ((1-\delta)x)^\alpha}{x^\alpha} = 1 + (1-\delta)^\alpha,$$
which is larger than $2 - \varepsilon$ (up to the additive term, which is negligible for large $x$) when we pick $\delta$ sufficiently small and $x$ sufficiently large.
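The lower-bound instance is easy to replay. A sketch (the simulator name is ours) that runs nn on $p_0 = 0$, $p_1 = \delta x$, $p_2 = x$, $p_3 = -x$ and compares against the optimal cost $x^\alpha$:

```python
def nn_cost_1d(seq, alpha=2.0):
    """Run nn on a 1-D sequence (source first) and return its total cost."""
    ranges = [0.0] * len(seq)
    for j in range(1, len(seq)):
        # grow a range only if the new point is not covered yet
        if all(abs(seq[i] - seq[j]) > ranges[i] for i in range(j)):
            i = min(range(j), key=lambda i: abs(seq[i] - seq[j]))
            ranges[i] = abs(seq[i] - seq[j])
    return sum(r ** alpha for r in ranges)

delta, x = 0.01, 1.0
ratio = nn_cost_1d([0.0, delta * x, x, -x]) / x ** 2  # OPT only grows p0 to x
print(ratio)   # 1 + (1 - delta)^2 = 1.9801, tending to 2 as delta -> 0
```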

3 Online range-assignment in $\mathbb{R}^2$

Bounds on the competitive ratio of nn and ci when α > 2
As before, let $p_0, \ldots, p_{n-1}$ be the sequence of inserted points, with $p_0$ being the source point. Consider a fixed point $p_i$ and a disk $D$ centered at $p_i$; the disk $D$ need not have radius equal to the range of $p_i$. Define $S(p_i, D) := \{p_j : j \geq i \text{ and } p_j \in D\}$ to be the set containing $p_i$ plus all points arriving after $p_i$ that lie in $D$. For a point $p_j$, define $\mathrm{cost}_\alpha(\textsc{nn}, p_j)$ to be the cost incurred by nn when $p_j$ is inserted; in other words, $\mathrm{cost}_\alpha(\textsc{nn}, p_j) := 0$ when $p_j$ falls into an existing disk $B(p_i, r_{j-1}(p_i))$, and $\mathrm{cost}_\alpha(\textsc{nn}, p_j) := (r_j(p_k))^\alpha - (r_{j-1}(p_k))^\alpha$ otherwise, where $p_k := \mathrm{nn}(p_j)$. Define $\mathrm{cost}_\alpha(\textsc{ci}, p_j)$ similarly for ci. Finally, for $p_j \in S(p_i, D)$ define $F_\alpha(p_j) := \mathrm{dist}(p_j, \mathrm{nn}_D(p_j))^\alpha$, where $\mathrm{nn}_D(p_j)$ denotes the nearest neighbor of $p_j$ among the points of $S(p_i, D)$ inserted before $p_j$ (including $p_i$). The next lemma shows that we can use the function $F_\alpha$ to upper bound the cost of nn and ci. We later apply this lemma to all disks in an optimal solution to bound the competitive ratio. Note that $\mathrm{cost}_\alpha(\textsc{nn}, p_j) \leq F_\alpha(p_j)$. Indeed, nn either pays zero (when $p_j$ already lies inside a disk) or it expands the disk of $p_j$'s nearest neighbor (which may or may not lie in $D$), which costs at most $F_\alpha(p_j)$. Similarly $\mathrm{cost}_\alpha(\textsc{ci}, p_j) \leq F_\alpha(p_j)$. Hence we have:
Lemma 3.1. $\sum_{p_j \in S(p_i, D)} \mathrm{cost}_\alpha(\textsc{nn}, p_j) \leq \sum_{p_j \in S(p_i, D)} F_\alpha(p_j)$, and the same holds for ci.
Lemma 3.1 suggests the following strategy to bound the competitive ratio of nn (and ci).
Consider, for each point $p_i$, the final disk $D$ placed at $p_i$ in an optimal solution, and let $\rho$ be its radius. The cost of this disk is $\rho^\alpha$. We charge the cost of the disks placed by nn (or ci) at points $p_j$ inside $D$ (this cost can be bounded using the function $F_\alpha$, by Lemma 3.1) to the cost of $D$. This motivates the following definition:
$$F^*_\alpha := \max \frac{1}{\rho^\alpha} \sum_{p_j \in S(D)} F_\alpha(p_j),$$
where the maximum is over all possible input instances $P$, all points $p_i \in P$, all disks $D$ of radius $\rho$ centered at $p_i$, and all subsets $S(D) \subseteq S(p_i, D) \setminus \{p_i\}$. The value $F^*_\alpha$ bounds the maximum total charge to any disk $D$ in the optimal solution, relative to $D$'s cost $\rho^\alpha$. The next lemma shows that for $\alpha > 2$, the value $F^*_\alpha$ is bounded by a constant (depending on $\alpha$).
Lemma 3.2. We have $F^*_\alpha \leq \frac{\alpha(2^\alpha - 3)}{2^{\alpha-1} - \alpha}$ for any $\alpha > 2$.
The formal proof of the lemma is quite technical, so before diving into it we sketch the intuition, also showing why the condition $\alpha > 2$ is needed. The quantity $F^*_\alpha$ can be thought of in the following way. Consider a disk $D$ of radius $\rho$ centered at $p_i$, and imagine the points in $S(D)$ arriving one by one. (The points in $S(p_i, D) \setminus S(D)$ are irrelevant.) Whenever a new point $p_j$ arrives, $F^*_\alpha$ increases by $\mathrm{dist}(p_j, \mathrm{nn}(p_j))^\alpha$, where $\mathrm{nn}(p_j)$ is $p_j$'s nearest neighbor among the already arrived points from $S(D)$, including $p_i$. Since the distances to the nearest neighbor decrease as more points arrive (more precisely, we cannot have many points whose nearest neighbor is at a relatively large distance), the hope is that the sum of these distances to the power $\alpha$ converges, and this is indeed what we can prove for $\alpha > 2$. For $\alpha = 2$ it does not converge, as shown by the following example.
Let $D$ be a unit disk centered at $p_i$, and consider the inscribed square $\sigma$ of $D$. Note that the radius of $\sigma$ (the distance from its center to its vertices) is 1. We insert a set $S(D)$ of $n - 1$ points in rounds, as follows. In the first round we partition $\sigma$ into four squares of radius 1/2, and we insert a point in the center of each of them. These four points all have $p_i$ as nearest neighbor, and $F^*_\alpha$ increases by $4 \cdot (1/2)^\alpha$. We recurse in each of the four squares. Thus in the $k$-th round we have $4^{k-1}$ squares of radius $(1/2)^{k-1}$, each of which is partitioned into four squares of radius $(1/2)^k$, and we place a point inside each such subsquare. This increases $F^*_\alpha$ by $4^k \cdot (1/2)^{k\alpha} = (2^{2-\alpha})^k$. Note that $2^{2-\alpha} = 1$ for $\alpha = 2$, giving $F^*_2 = \Omega(\log n)$, while for $\alpha > 2$ the total cost converges. Also note that the example only gives a lower bound on $F^*_2$; it does not show that nn has unbounded competitive ratio for $\alpha = 2$. The reason is that nn actually pays less than $F^*_2$, since most points $p_j$ are already within range of an existing point upon insertion, and so nn does not have to pay $\mathrm{dist}(p_j, \mathrm{nn}(p_j))^\alpha$. Indeed, in the next section we prove, using a different argument, that nn is $O(1)$-competitive even for $\alpha = 2$.
We now present the proof of Lemma 3.2.
Proof of Lemma 3.2. Let $p'$ be a point and let $D$ be any disk centered at $p'$. For simplicity, we rescale $D$ to be a unit disk and relabel the points in $S(D)$ as $p_0, \ldots, p_k$, without changing the ordering and with $p_0$ being the center of $D$. We show that $\sum_{i=1}^{k} \mathrm{dist}(p_i, \mathrm{nn}(p_i))^\alpha \leq \frac{\alpha(2^\alpha - 3)}{2^{\alpha-1} - \alpha}$. To that purpose we create a potential function $\Phi : \{0, \ldots, k\} \to \mathbb{R}$, with $\Phi(i)$ being the potential after $p_i$ is inserted, with the following properties: (i) $\Phi(0) = \frac{\alpha(2^\alpha - 3)}{2^{\alpha-1} - \alpha}$; (ii) $\Phi(i) > 0$ for all $i$; and (iii) upon insertion of a point $p_i$, the decrease in potential is at least $\mathrm{dist}(p_i, \mathrm{nn}(p_i))^\alpha$. If such a potential function exists, we then indeed have $\sum_{i=1}^{k} \mathrm{dist}(p_i, \mathrm{nn}(p_i))^\alpha \leq \Phi(0) - \Phi(k) < \Phi(0)$.
Let $p_i$ be the last point inserted. For any point $q$ in the plane, let $\mathrm{nn}_i(q)$ be the point closest to $q$ among $p_0, \ldots, p_i$. We define the potential $\varphi(q, i)$ at $q$ at time $i$ as follows:
$$\varphi(q, i) := \begin{cases} c_\alpha \, \mathrm{dist}(q, \mathrm{nn}_i(q))^{\alpha-2} & \text{if } \mathrm{dist}(q, p_0) \leq 1; \\ c_\alpha \big( \mathrm{dist}(q, \partial D_2)^{\alpha-2} - \mathrm{dist}(q, \partial D)^{\alpha-2} \big) & \text{if } 1 < \mathrm{dist}(q, p_0) \leq \frac{3}{2} \text{ and } \mathrm{dist}(q, \mathrm{nn}_i(q)) \geq \mathrm{dist}(q, \partial D_2); \\ c_\alpha \big( \mathrm{dist}(q, \mathrm{nn}_i(q))^{\alpha-2} - \mathrm{dist}(q, \partial D)^{\alpha-2} \big) & \text{if } 1 < \mathrm{dist}(q, p_0) \leq \frac{3}{2} \text{ and } \mathrm{dist}(q, \mathrm{nn}_i(q)) < \mathrm{dist}(q, \partial D_2); \\ 0 & \text{otherwise}; \end{cases}$$
where $D_2$ is the disk of radius 2 centered at $p_0$ and $c_\alpha = \frac{\alpha(\alpha-1)2^{\alpha-2}}{\pi(2^{\alpha-1} - \alpha)}$ is a constant depending only on $\alpha$. See Figure 1 for an illustration of the cases.
Figure 1: Outside the grey region the function $\varphi$ is always 0. When $q$ is inside $D$, the function $\varphi(q, i)$ is simply $c_\alpha \, \mathrm{dist}(q, \mathrm{nn}_i(q))^{\alpha-2}$. Finally, when $q$ is in the grey area but not in $D$, that is, $1 < \mathrm{dist}(q, p_0) \leq 1.5$, we have $\varphi(q, i) = c_\alpha \big( \min\{\mathrm{dist}(q, \mathrm{nn}_i(q)), \mathrm{dist}(q, \partial D_2)\}^{\alpha-2} - \mathrm{dist}(q, \partial D)^{\alpha-2} \big)$.
We finally define the potential function at time $i$ as $\Phi(i) := \int_{\mathbb{R}^2} \varphi(q, i) \, dq$. This potential function can be interpreted as a volume in $\mathbb{R}^3$, where we assume without loss of generality that the center of $D$ lies at the origin and the points $p_0, \ldots, p_k$ all lie in the plane $z = 0$. The volume then consists of the following. Over $D$, it is the volume between the plane $z = 0$ and the lower envelope of a set of "paraboloids", one for each point $p_j \in \{p_0, \ldots, p_i\}$, defined by $\mathrm{Par}_\alpha(p_j) = \{(x, y, z) \mid z = c_\alpha \, \mathrm{dist}((x, y), p_j)^{\alpha-2}\}$. Outside of $D$, on the other hand, the volume is the volume under the lower envelope of $\mathrm{Par}_\alpha(p)$ for the points $p \in \{p_0, \ldots, p_i\} \cup \partial D_2$ and above the paraboloids $\mathrm{Par}_\alpha(p)$ for the points $p \in \partial D$. See Figure 2 for an illustration.
Figure 2: Cross-section of the volume $V_i$ in gray. For clarity, we do not draw the paraboloids of points outside the cross-section.
Figure 3: The volume we use as a lower bound on the decrease of potential upon insertion of $p_i$. On the left, a cross-section of the volume, where $d^* = \mathrm{dist}(p_i, p_j)$ and $p_j$ is the nearest point to $p_i$ with $j < i$. On the right, the volume in 3 dimensions. All the surfaces shown are paraboloids $\mathrm{Par}_\alpha(p)$ for some $p$.
We now need to show that this potential function has the claimed properties. It is easy to see that $\Phi(i) > 0$ for each $i = 0, \ldots, k$. Next we show that the decrease of potential is at least as large as the cost. Let $p_i$ be the point inserted and let $p_j$, with $j < i$, be a nearest neighbor of $p_i$, with $d^* := \mathrm{dist}(p_i, p_j)$. Upon insertion of $p_i$, we add a paraboloid whose height at a point $q \in D$ is $c_\alpha \, \mathrm{dist}(q, p_i)^{\alpha-2}$; the decrease of potential is the part of the volume $V_i$ cut off by this surface. Let us consider the volume $V^*_i$ that lies above the new paraboloid of $p_i$ and below the paraboloids of points at distance $d^*$ from $p_i$, restricted to the vertical cylinder of radius $d^*/2$ around $p_i$. See Figure 3 for an illustration of the volume $V^*_i$. Next we argue that $V^*_i \subseteq V_i$ by showing that the upper boundary of $V^*_i$ lies below the upper boundary of $V_i$ and the lower boundary of $V^*_i$ lies above the lower boundary of $V_i$. Since the upper boundary of $V^*_i$ is defined by paraboloids at distance $d^*$, and since $d^*$ is the distance to the closest point $p_j$, the volume $V^*_i$ lies below the upper boundary of $V_i$. On the other hand, since the lower boundary of $V^*_i$ is defined by the same paraboloids, even if $p_i$ is close to the boundary of $D$, the volume $V^*_i$ lies above the lower boundary of $V_i$. Therefore $V^*_i \subseteq V_i$. We now compute the volume of $V^*_i$. We do this by fixing a radius $\rho$, computing the area of the largest cylinder of radius $\rho$ centered around the vertical axis through $p_i$ and inscribed in $V^*_i$, and integrating that value from 0 to $d^*/2$. For $0 \leq \rho \leq d^*/2$, the area of the cylinder is $2\pi\rho \, h(\rho)$, where $h(\rho)$ is the height of the tallest cylinder of radius $\rho$ inscribed in $V^*_i$. It remains to compute $h(\rho)$. This is given by the difference in height between the two paraboloids (one at $p_i$ and one at a point at distance $d^*$ from $p_i$), i.e., $h(\rho) = c_\alpha \big( (d^* - \rho)^{\alpha-2} - \rho^{\alpha-2} \big)$. We can integrate $\rho (d^* - \rho)^{\alpha-2}$ by parts:
$$\int_0^{d^*/2} \rho \, (d^* - \rho)^{\alpha-2} \, d\rho = \frac{(d^*)^\alpha - (d^*/2)^\alpha}{\alpha(\alpha-1)} - \frac{(d^*/2)^\alpha}{\alpha - 1}.$$
It gives us the following:
$$\mathrm{vol}(V^*_i) = 2\pi c_\alpha \int_0^{d^*/2} \rho \big( (d^* - \rho)^{\alpha-2} - \rho^{\alpha-2} \big) \, d\rho = (d^*)^\alpha,$$
which is exactly the cost of inserting $p_i$; therefore the decrease of potential is indeed at least the cost. It remains to show that $\Phi(0) = \frac{\alpha(2^\alpha - 3)}{2^{\alpha-1} - \alpha}$. To that purpose, let $V_0$ be the volume representing $\Phi(0)$,
defined as depicted in Figure 4. We use the same technique as above to compute its volume, again using integration by parts, and obtain $\Phi(0) = \mathrm{vol}(V_0) = \frac{\alpha(2^\alpha - 3)}{2^{\alpha-1} - \alpha}$, as required.
Using Lemma 3.2 we can prove a bound on the competitive ratio of nn and ci.
Theorem 3.3. For any $\alpha > 2$, the competitive ratio of nn and ci in $\mathbb{R}^2$ is at most $\min\{F^*_\beta \mid 2 < \beta \leq \alpha\}$. Hence, for $\alpha \leq \alpha^*$, where $\alpha^* = \arg\min_\alpha F^*_\alpha \approx 4.3$, the competitive ratio is at most $\frac{\alpha(2^\alpha - 3)}{2^{\alpha-1} - \alpha}$, and for $\alpha > \alpha^*$ it is at most 12.94.
Proof. Consider a sequence $p_0, p_1, \ldots, p_{n-1}$ of points in the plane. Let $D_j$ be the disk centered at $p_j$ in an optimal solution after the last point $p_{n-1}$ has been handled, and let $\rho_j$ be its radius. Thus the cost of the optimal solution is $\mathrm{OPT} := \sum_j \rho_j^\alpha$. To bound the cost of nn on the same sequence, we charge the cost of inserting $p_i$, with $0 < i \leq n-1$, to a disk $D_j$ such that $j < i$ and $p_i \in D_j$. Such a disk $D_j$ exists: after $p_i$'s insertion, $p_i$ is contained in a disk of an existing point $p_j$, and so $p_i$ is also contained in $D_j$, the final disk of $p_j$. (If there are several such points, we take an arbitrary one.) Let $S(D_j)$ be the set of points that charge disk $D_j$. Note that $\{p_1, \ldots, p_{n-1}\} = \bigcup_{j=0}^{n-2} S(D_j)$. Hence, using Lemmas 3.1 and 3.2, for any $2 < \beta \leq \alpha$, the total cost of nn (and similarly of ci) is at most $F^*_\beta \cdot \mathrm{OPT}$.
The next theorem gives a lower bound on the competitive ratio of nn.
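The constants in the theorem can be checked numerically. A sketch, assuming the closed form of the Lemma 3.2 bound reconstructed above:

```python
def f_bound(beta):
    """Reconstructed upper bound on F*_beta from Lemma 3.2 (beta > 2)."""
    return beta * (2 ** beta - 3) / (2 ** (beta - 1) - beta)

# crude grid search for the minimizer alpha* of the bound
grid = [2.01 + k * 0.001 for k in range(6000)]
best = min(grid, key=f_bound)
print(best, f_bound(best))   # minimizer near 4.3, minimum value near 12.94
```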

Figure 5: Lower bound on the competitive ratio of nn. The light gray disk (of radius 1) represents the optimal solution, on the boundary of which points $p_7, \ldots, p_{18}$ are placed. Points $p_1, \ldots, p_6$ are placed on the boundary of a disk of radius $\varepsilon$ (in dark gray). The algorithm nn is forced to place first one disk of radius $\varepsilon$ around $p_0$, then six disks of radius $1 - \varepsilon$ around $p_1, \ldots, p_6$, and finally six disks of radius roughly 0.5 around $p_7, \ldots, p_{12}$.

Bounds on the competitive ratio of nn and 2-nn when α = 2
Above we proved upper bounds for nn and ci for $\alpha > 2$, and we gave a lower bound for nn for any $\alpha > 1$. We now study nn and 2-nn for the case $\alpha = 2$. Unfortunately, the arguments below do not apply to ci.
An upper bound on the competitive ratio of 2-nn for $\alpha = 2$. Let $P := p_0, p_1, \ldots, p_{n-1}$ be the input instance. Recall that $\mathrm{nn}(p_i)$ is the closest point to $p_i$ among $p_0, \ldots, p_{i-1}$. Upon insertion of a point $p_i$, if $p_i$ is not covered by the current set of balls $B(p_{i'}, r_{i-1}(p_{i'}))$ with $i' < i$, then 2-nn increases the range of $\mathrm{nn}(p_i)$ to $2 \cdot \mathrm{dist}(p_i, \mathrm{nn}(p_i))$; otherwise it does nothing. Suppose that upon the insertion of some point $p_i$ we increase the range of $\mathrm{nn}(p_i)$. We then define $D^*_i$ as the disk centered at $p_i$ (not at $\mathrm{nn}(p_i)$) of radius $d_i/2$, where $d_i := \mathrm{dist}(p_i, \mathrm{nn}(p_i))$. We call $D^*_i$ the charging disk of $p_i$. Note that the charging disk is a tool in the proof; it is not a disk used by the algorithm. If 2-nn did nothing upon insertion of $p_i$ because $p_i$ was already covered by a disk, we define $D^*_i := \emptyset$.
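The charging disks can be generated by a short simulation (the helper name is ours; points are coordinate tuples). On concrete inputs, the disks produced this way come out pairwise disjoint, which is exactly what Lemma 3.5 below establishes in general:

```python
import math

def charging_disks_2nn(points):
    """Simulate 2-nn in the plane; whenever it grows a range, record the
    charging disk D*_i = (center p_i, radius dist(p_i, nn(p_i)) / 2)."""
    ranges = [0.0]
    disks = []
    for i in range(1, len(points)):
        d = [math.dist(points[j], points[i]) for j in range(i)]
        if all(d[j] > ranges[j] for j in range(i)):   # p_i not covered
            j = min(range(i), key=d.__getitem__)      # nn(p_i)
            ranges[j] = max(ranges[j], 2 * d[j])
            disks.append((points[i], d[j] / 2))
        ranges.append(0.0)
    return disks

pts = [(float(x), float(y)) for x in range(5) for y in range(5)]
disks = charging_disks_2nn(pts)
disjoint = all(math.dist(c1, c2) >= r1 + r2
               for k, (c1, r1) in enumerate(disks)
               for c2, r2 in disks[k + 1:])
print(disjoint)   # True
```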
Lemma 3.5. For every pair of charging disks $D^*_i$ and $D^*_j$ with $j \neq i$, we have $D^*_i \cap D^*_j = \emptyset$.
Proof. Without loss of generality assume that $i < j$. Suppose for a contradiction that $D^*_i \cap D^*_j \neq \emptyset$. Let $p_{i'} := \mathrm{nn}(p_i)$ and $p_{j'} := \mathrm{nn}(p_j)$, and let $d_i := \mathrm{dist}(p_i, p_{i'})$ and $d_j := \mathrm{dist}(p_j, p_{j'})$. Since $i' < i < j$, we have $\mathrm{dist}(p_j, p_{i'}) > 2d_i$; otherwise $p_j$ would lie inside the disk of $p_{i'}$ when $p_j$ is inserted and we would have $D^*_j = \emptyset$. On the other hand, since $D^*_i$ and $D^*_j$ intersect we have $\mathrm{dist}(p_i, p_j) < (d_i + d_j)/2$; combined with $d_j \leq \mathrm{dist}(p_j, p_i)$, which is true because we assumed $i < j$, this implies $d_i > \mathrm{dist}(p_i, p_j)$. But then $\mathrm{dist}(p_j, p_{i'}) \leq d_i + \mathrm{dist}(p_i, p_j) < 2d_i$, a contradiction.
Lemma 3.6. For any points $p_i$ and $p_j$ with $i < j$, let $D_{\mathrm{OPT}_j}(p_i)$ be the disk centered at $p_i$ after $p_j$ is inserted in an optimal solution, and let $\rho_j(p_i)$ be its radius. Furthermore, let $D^{1.5}_{\mathrm{OPT}_j}(p_i)$ be the disk centered at $p_i$ of radius $1.5 \cdot \rho_j(p_i)$. Then, for every point $p_k$, there is a point $p_i$ such that the charging disk $D^*_k$ is contained in $D^{1.5}_{\mathrm{OPT}_k}(p_i)$.
Using these two lemmas, we can conclude the following.
Theorem 3.7. For $\alpha = 2$, the competitive ratio of 2-nn in $\mathbb{R}^2$ is at most 36.
Proof. Recall that the charging disk $D^*_i$ has radius $\mathrm{dist}(p_i, \mathrm{nn}(p_i))/2$. Thus the cost incurred by 2-nn upon the insertion of $p_i$ is at most $(2 \cdot \mathrm{dist}(p_i, \mathrm{nn}(p_i)))^2 = 16 \cdot \mathrm{radius}(D^*_i)^2$. By Lemma 3.5, the disks $D^*_i$ are pairwise disjoint. Let $\mathcal{D}_{\mathrm{OPT}}$ denote the set of disks in an optimal solution, and let OPT be its cost. Then by Lemma 3.6 and the disjointness of the charging disks we have $\sum_i \mathrm{radius}(D^*_i)^2 \leq \sum_{D \in \mathcal{D}_{\mathrm{OPT}}} (1.5 \cdot \mathrm{radius}(D))^2 = 2.25 \cdot \mathrm{OPT}$. Hence the total cost incurred by 2-nn is bounded by $16 \cdot 2.25 \cdot \mathrm{OPT} = 36 \cdot \mathrm{OPT}$.
Upper bound on the competitive ratio of nn for $\alpha = 2$. We now prove an upper bound on the competitive ratio of nn using a similar strategy as for 2-nn. The proof again uses charging disks; the main difference is how the charging disks are defined. Suppose that nn increases the range of $\mathrm{nn}(p_i)$ upon the insertion of $p_i$. Then the charging disk $D^*_i$ is the disk of radius $\gamma \cdot d_i$ centered at the midpoint of the segment $p_i \, \mathrm{nn}(p_i)$, where $d_i := \mathrm{dist}(p_i, \mathrm{nn}(p_i))$ and $\gamma$ is a constant to be determined later. If nn did nothing upon insertion of $p_i$, we define $D^*_i := \emptyset$. We now show that the charging disks are disjoint if we pick $\gamma$ suitably.
Proof. Let $p_i$ and $p_j$ be two points with charging disks $D^*_i$ and $D^*_j$. Let $p_{i'} := \mathrm{nn}(p_i)$ and $p_{j'} := \mathrm{nn}(p_j)$. Let also $D_i$, respectively $D_{i'}$, be the disk of radius $\mathrm{dist}(p_i, p_{i'})$ centered at $p_i$, respectively $p_{i'}$. We define $D_j$ and $D_{j'}$ similarly. Let also $m_i$ and $m_j$ be the midpoints between $p_i$ and $p_{i'}$, and between $p_j$ and $p_{j'}$, respectively. We assume without loss of generality that $\mathrm{dist}(p_i, p_{i'}) = 1 \geq \mathrm{dist}(p_j, p_{j'})$. We distinguish two cases.
First, suppose that $p_i' = p_j'$. Then $i > j$, since otherwise $p_j \in D_i'$ when $p_j$ is inserted and $D_j^* = \emptyset$. Moreover, let $H$ be the halfplane defined by the bisector of $p_i'$ $(= p_j')$ and $p_j$, with $p_j \in H$. Then $p_i \notin H$, since otherwise $\mathrm{nn}(p_i)$ would be $p_j$ and not $p_i'$. This implies that the angle between $p_i' p_i$ and $p_i' p_j$ is at least $\pi/3$ (see Figure 6). Let the two half-lines starting at $p_i'$ that make an angle of $\pi/6$ with $p_i' p_j$ define a wedge $w$.
If $\gamma$ is such that $D_j^*$ is contained in the wedge $w$, then the disks $D_i^*$ and $D_j^*$ are disjoint. This is the case when the right triangle with hypotenuse $1/2$ and angle $\pi/6$ has its short side of length at least $\gamma$. Using trigonometry (see Figure 7), we get the condition $\gamma \leq \sin(\pi/6)/2 = 1/4$, which always holds since $\gamma < (3-\sqrt{7})/4 < 1/4$.

Figure 6 The gray area depicts where $p_i$ can be. Recall that $\mathrm{dist}(p_j, p_j') \leq \mathrm{dist}(p_i, p_i')$.
We now deal with the case $p_i' \neq p_j'$. Suppose for a contradiction that $D_i^*$ and $D_j^*$ intersect. Consider the interior of $D_i \cap D_i'$. We claim that if $p_j$ is in that region, then $D_i^*$ and $D_j^*$ do not intersect. Suppose $p_j$ is in the interior of $D_i \cap D_i'$. If $j \geq i$, then $p_j \in D_i'$ and $D_j^* = \emptyset$, which is a contradiction. If, on the other hand, $j < i$, then when $p_i$ is inserted we have that $\mathrm{nn}(p_i)$ is $p_j$ and not $p_i'$, since $p_j$ is in $D_i$, which is a contradiction. Therefore, from now on, we can assume that $p_j \notin \mathrm{Int}(D_i \cap D_i')$. Note that this implies $\mathrm{dist}(p_j, m_i) \geq 1/2$. Therefore, if $\mathrm{dist}(p_j, m_j) < 1/2 - 2\gamma$, then we have $\mathrm{dist}(m_i, m_j) > 2\gamma$, and thus $D_i^*$ and $D_j^*$ can never intersect, because the radii of the charging disks are at most $\gamma$. Moreover, we claim that $p_j$ has to be in the disk $D_{m_i}$ of radius $2\gamma + 1/2$ centered at $m_i$ for the disks $D_i^*$ and $D_j^*$ to intersect. Suppose this is not the case. Then $\mathrm{dist}(p_j, m_i) > 2\gamma + 1/2$, which implies that $\mathrm{dist}(m_i, m_j) \geq \mathrm{dist}(p_j, m_i) - \mathrm{dist}(p_j, m_j) > 2\gamma$, and then the disks $D_i^*$ and $D_j^*$ are disjoint. Figure 8 shows the region $A := D_{m_i} \setminus \mathrm{Int}(D_i \cap D_i')$ where $p_j$ has to be in order for the disks to intersect.
Using trigonometry, we compute the maximum distance between $p_j$ and either $p_i$ or $p_i'$, depending on which is closer, that is, $\max_{p_j \in A} \min(\mathrm{dist}(p_i, p_j), \mathrm{dist}(p_i', p_j))$. See Figure 9.

Figure 8 The region $A := D_{m_i} \setminus \mathrm{Int}(D_i \cap D_i')$ in gray depicts where $p_j$ needs to be for the disks to intersect.
Figure 9 The point $q$ of the triangle is defined as the intersection of $D_{m_i}$ and $D_i$. We want to compute $x$.
Using the law of cosines, we have $\left(2\gamma + \tfrac{1}{2}\right)^2 = 1 + \tfrac{1}{4} - \cos(\varphi)$, which is equivalent to $4\gamma^2 + 2\gamma = 1 - \cos(\varphi)$, and $x^2 = 2 - 2\cos(\varphi)$. Combining the two equations, we get $x = 2\sqrt{\gamma(2\gamma+1)}$. Since $\gamma < \frac{3-\sqrt{7}}{4}$, we have $2\sqrt{\gamma(2\gamma+1)} < 1 - 4\gamma$. Thus $x < \mathrm{dist}(p_j, p_j')$. Consequently, we have that $p_j$ is closer to either $p_i$ or $p_i'$ than it is to $p_j'$. We show that both these options lead to a contradiction.
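As a quick numeric sanity check of the inequality above (not part of the original argument): the condition $2\sqrt{\gamma(2\gamma+1)} < 1-4\gamma$ is equivalent to $8\gamma^2 - 12\gamma + 1 > 0$, whose relevant root is $(3-\sqrt{7})/4$, so the bound holds strictly below the threshold and with equality at it.

```python
import math

# Check x = 2*sqrt(gamma*(2*gamma+1)) < 1 - 4*gamma for gamma below the
# threshold (3 - sqrt(7))/4, and equality exactly at the threshold,
# since 8g^2 - 12g + 1 = 0 has roots (3 +- sqrt(7))/4.
threshold = (3 - math.sqrt(7)) / 4
for g in (0.01, 0.05, 0.99 * threshold):
    assert 2 * math.sqrt(g * (2 * g + 1)) < 1 - 4 * g
assert abs(2 * math.sqrt(threshold * (2 * threshold + 1)) - (1 - 4 * threshold)) < 1e-9
```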
Let us first assume that $p_j$ is in the right crescent, so that $p_j$ is closer to $p_i$ than to $p_j'$. If $i < j$, then $\mathrm{dist}(p_j, p_i) \leq x < \mathrm{dist}(p_j, p_j')$, which is a contradiction. Otherwise, if $j < i$, then when $p_i$ is inserted, $p_j$ is closer to $p_i$ than $p_i'$ is, which is also a contradiction.
Then, assume $p_j$ is in the left crescent, so that $p_j$ is closer to $p_i'$ than to $p_j'$. This implies $j \leq i'$, since otherwise $p_i'$ would be closer to $p_j$ than $p_j'$ when $p_j$ is inserted, which is a contradiction. Note that if $j = i'$, then there are only three points involved, but all the following arguments hold in the same way. We hence have that $j' < j \leq i' < i$. If $p_i \in D_j'$, then $D_i^* = \emptyset$, leading to a contradiction. Therefore we have that $\mathrm{dist}(p_i, p_j') > \mathrm{dist}(p_j, p_j') \geq 1 - 4\gamma$. We now compute $\mathrm{dist}(p_i, m_j)$ using Apollonius's theorem on the triangle $\triangle p_i p_j p_j'$.

Otherwise, we raise the dual variable $y_i$ until for some $j < i$ and range $r$ we have $\sum_{k \in S_{j,r}:\, k \leq i} y_k = r^\alpha$; we then set $p_j$'s new range to $r_i(p_j) := \gamma r$ for one such $j$. In both cases, we only set $p_j$'s range; the other ranges remain unchanged. Note that in the event that multiple sets centered at different points become tight simultaneously, we update the range of only one of them.
Analysis. We begin our analysis of the algorithm by showing the feasibility of the constructed dual solution $y$ and of the corresponding range assignment. For each point $p_i$, the algorithm stops raising $y_i$ once some set $S_{j,r}$ containing $p_i$ is tight, and then updates $p_j$'s radius to $\gamma r > r$. This guarantees that no dual constraint is violated and that $p_i$ is covered by $p_j$.
Next we analyze the cost of this algorithm. We use the shorthand $r_j$ for the final range $r_{n-1}(p_j)$ of the point $p_j$. First, we argue that it suffices to bound the cost of the points whose ranges are large enough. Let $H = \{0 \leq i \leq n-2 : r_i \geq \max_{0 \leq j \leq n-2} r_j / n\}$. Then the cost of the algorithm is
$\sum_{i=0}^{n-2} r_i^\alpha \leq \sum_{i \in H} r_i^\alpha + n \cdot \Big(\max_j r_j / n\Big)^\alpha = \sum_{i \in H} r_i^\alpha + \frac{1}{n^{\alpha-1}} \max_j r_j^\alpha \leq \Big(1 + \frac{1}{n^{\alpha-1}}\Big) \sum_{i \in H} r_i^\alpha \leq 2 \sum_{i \in H} r_i^\alpha,$
where the second-to-last inequality holds because $\sum_{i \in H} r_i^\alpha \geq \max_j r_j^\alpha$, and the last because $\alpha > 1$. In the remainder of this section we will show that
$\sum_{i \in H} r_i^\alpha = O(\gamma^\alpha \log n) \cdot \sum_k y_k. \quad (3)$
The theorem then follows from the Weak Duality Theorem of Linear Programming, which states that the value of any feasible solution to the primal (minimization) problem is always greater than or equal to the value of any feasible solution to its associated dual problem. For $0 \leq i \leq n-2$, our algorithm sets the final range $r_i$ of point $p_i$ such that $r_i = \gamma r$ for some $r \in R$ with $\sum_{k \in S_{i,r}} y_k = r^\alpha$. Thus, we get
$\sum_{i \in H} r_i^\alpha = \gamma^\alpha \sum_{i \in H} \sum_{k \in S_{i, r_i/\gamma}} y_k = \gamma^\alpha \sum_k y_k \cdot |\{i \in H : k \in S_{i, r_i/\gamma}\}|,$
where the last equality follows by interchanging the sums. Thus, to prove Inequality (3) it suffices to prove the following lemma.

Lemma 4.1. For every $j$, we have $|\{i \in H : j \in S_{i, r_i/\gamma}\}| = O(\log n)$.

Proof. Define $H_j = \{i \in H : j \in S_{i, r_i/\gamma}\}$. We will show that for every $i, i' \in H_j$, either $r_i > \frac{\gamma-1}{2} r_{i'}$ or $r_{i'} > \frac{\gamma-1}{2} r_i$. This implies that the $t$-th smallest range (among the points in $H_j$) is at least $((\gamma-1)/2)^t$ times the smallest range (among those points).

Suppose $i, i' \in H_j$. Let $p_k$ be the last-arriving point that causes our algorithm to update $r_i$, and $p_{k'}$ be the last-arriving point that causes our algorithm to update $r_{i'}$. Since the arrival of any point causes at most one point's range to be updated, we have that $p_k \neq p_{k'}$. Suppose that $p_{k'}$ arrived before $p_k$. By construction of $r_i$, we have $\mathrm{dist}(p_k, p_i) = r_i/\gamma$. Moreover, since $i, i' \in H_j$, we have $\mathrm{dist}(p_i, p_j) \leq r_i/\gamma$ and $\mathrm{dist}(p_{i'}, p_j) \leq r_{i'}/\gamma$. Therefore, by the triangle inequality,
$\mathrm{dist}(p_{i'}, p_k) \leq \mathrm{dist}(p_{i'}, p_j) + \mathrm{dist}(p_j, p_i) + \mathrm{dist}(p_i, p_k) \leq 2\,\frac{r_i}{\gamma} + \frac{r_{i'}}{\gamma}.$
Since $p_{k'}$ arrived before $p_k$, and $p_{k'}$ caused our algorithm to update $r_{i'}$, the point $p_k$ must have been uncovered when it arrived, and so $\mathrm{dist}(p_{i'}, p_k) > r_{i'}$. Therefore, we get
$r_{i'} < 2\,\frac{r_i}{\gamma} + \frac{r_{i'}}{\gamma},$
and so $r_i > \frac{\gamma-1}{2} r_{i'}$, as desired. In the case that $p_k$ arrived before $p_{k'}$, a similar argument yields $r_{i'} > \frac{\gamma-1}{2} r_i$. By setting $\gamma = 4$ we obtain the following theorem.

Theorem 4.2. For any distance-power gradient $\alpha > 1$, there is an $O(4^\alpha \log n)$-competitive algorithm for the online range-assignment problem in general metric spaces.
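The primal-dual scheme of this section can be sketched compactly in Python. This is our own rendering under two assumptions not prescribed by the paper: the candidate radii $R$ are taken to be pairwise distances, and duals are raised discretely by picking the constraint with minimum slack among those containing the new point.

```python
def online_broadcast(points, dist, alpha=2.0, gamma=4.0):
    """Primal-dual sketch for online broadcast range assignment in a
    general metric. `points` is the arrival order, `dist` a metric.
    Returns the final ranges (which never decrease)."""
    n = len(points)
    r = [0.0] * n  # current transmission ranges
    y = [0.0] * n  # dual variables, one per arrived point
    for i in range(1, n):
        # If some earlier point already reaches p_i, do nothing.
        if any(dist(points[j], points[i]) <= r[j] for j in range(i)):
            continue
        # Raise y_i until some constraint sum_{k in S_{j,rad}} y_k <= rad^alpha
        # becomes tight; only constraints whose set contains p_i bind y_i.
        best = None  # (slack, j, rad)
        for j in range(i):
            for k in range(i + 1):
                rad = dist(points[j], points[k])
                if rad < dist(points[j], points[i]):
                    continue  # S_{j,rad} would not contain p_i
                S = [m for m in range(i + 1) if dist(points[j], points[m]) <= rad]
                slack = rad ** alpha - sum(y[m] for m in S)
                if best is None or slack < best[0]:
                    best = (slack, j, rad)
        slack, j, rad = best
        y[i] = slack                   # the (j, rad) constraint is now tight
        r[j] = max(r[j], gamma * rad)  # update the range of p_j only
    return r
```

On the one-dimensional instance $0, 1, 3, 10$ with $\gamma = 4$, the second point makes $p_0$'s range $4$, the third point is then already covered, and the fourth tightens a constraint around $p_2$.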

5 An offline algorithm for general metric spaces

In the offline setting, we are given the entire sequence of points $p_0, \ldots, p_{n-1}$ in advance, and the goal is to assign ranges $r_0, \ldots, r_{n-2}$ to the points $p_0, \ldots, p_{n-2}$ so that for every $1 \leq i \leq n-1$ there exists $j < i$ such that $\mathrm{dist}(p_i, p_j) \leq r_j$. We can formulate the problem in this way because we know all points beforehand and are interested only in the cost of the final assignment: we may immediately assign each point its final range, and we need not specify a separate range for every point at each time step. The stated condition on the assignment ensures that after inserting each $p_j$, we have a broadcast tree on $p_0, \ldots, p_j$. Thus we require the algorithm to be what Boyar et al. [3] call an incremental algorithm, namely an algorithm that maintains a feasible solution at any time (even though, unlike an online algorithm, it may know the future). We emphasise that this is different from the static broadcast range-assignment problem studied previously. To avoid confusion with the usual offline broadcast range-assignment problem, we call this the Priority Broadcast Range Assignment problem. Below we give a $5^\alpha$-approximation algorithm for the offline version of the problem, based on the LP formulated in Section 4.1.
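The priority condition above is easy to check directly. The following small Python helper (our own naming, not from the paper) verifies that a given assignment is feasible in this sense:

```python
def is_priority_feasible(points, ranges, dist):
    """Return True iff every point except the root p_0 is within the range
    of some earlier-arriving point, i.e. for all 1 <= i <= n-1 there is a
    j < i with dist(p_i, p_j) <= r_j, so that a broadcast tree rooted at
    p_0 exists after every prefix of insertions."""
    return all(
        any(dist(points[j], points[i]) <= ranges[j] for j in range(i))
        for i in range(1, len(points))
    )
```

For instance, on the line with points $0, 2, 3$, the assignment $r_0 = 2, r_1 = 1$ is feasible, while $r_0 = 1, r_1 = 1$ is not, since $p_1$ is out of reach of $p_0$.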
The basic idea of the approximation algorithm is as follows. We start with a maximal feasible dual solution $y$, i.e., one in which increasing any $y_j$ would violate some dual constraint. Since $y$ is maximal, for every $j$ there exists a tight set $S_{i,r}$ containing $p_j$. Thus, the tight sets form a feasible set cover. Let $\mathcal{S}$ be a subset of the tight sets that is a minimal feasible set cover. As observed above, for every $i$ there is at most one set $S_{i,r} \in \mathcal{S}$. Thus, $\mathcal{S}$ corresponds to a feasible range assignment. Let $r_i$ be the radius assigned to $p_i$.
We now modify the range assignment $r$ to obtain a range assignment $r'$ such that $\sum_i (r_i')^\alpha \leq 5^\alpha \sum_k y_k$. Since $y$ is a feasible dual solution, weak LP duality then implies that $r'$ is a $5^\alpha$-approximation. Say that $i$ conflicts with $j$ if there exists a point $p_k \in S_{j,r_j} \cap S_{i,r_i}$ such that $y_k > 0$. Order the indices in decreasing order of $r_i$, breaking ties arbitrarily, and write $i \prec j$ if $i$ comes before $j$ in this ordering. We use the following algorithm to construct $r'$.
While the requirement that we cannot decrease the range of any point in the online setting is perhaps not necessary in practice, our algorithms have the additional benefit that they modify the range of at most one point per arrival. They can thus also be seen as a first step in studying a more general version, where we are allowed to modify (increase or decrease) the ranges of, say, two points per arrival. In general, it is interesting to study trade-offs between the number of modifications and the competitive ratio. Studying deletions is then also of interest.

Lemma 3.1. Let $p_i$ be any input point and $D$ a disk centered at $p_i$. Then for any subset $S(D) \subseteq S(p_i, D) \setminus \{p_i\}$ we have
$\sum_{p_j \in S(D)} \mathrm{cost}_\alpha(\mathrm{nn}, p_j) \leq \sum_{p_j \in S(D)} F_\alpha(p_j)$ and $\sum_{p_j \in S(D)} \mathrm{cost}_\alpha(\mathrm{ci}, p_j) \leq \sum_{p_j \in S(D)} F_\alpha(p_j)$.
Since $\max_{i \in H_j} r_i / \min_{i \in H_j} r_i \leq n$, this means that $|H_j| = O(\log_{(\gamma-1)/2} n) = O(\log n)$.