1 Introduction

Consider a collection of wireless devices, each with its own transmission range. The transmission ranges induce a directed communication network, where each device \(p_i\) can directly send a message to any device \(p_j\) in its transmission range. If \(p_j\) is not within range, a message from \(p_i\) can still reach \(p_j\) if there is a path from \(p_i\) to \(p_j\) in the communication network. The energy consumption of a device depends on its transmission range: the greater the range, the more power is needed. This leads to the range-assignment problem: assign transmission ranges to the devices such that the resulting network has some desired connectivity property, while minimizing the total power consumption.

Mathematically we can model the problem as follows. Let \(P=\{p_0,\ldots ,p_{n-1} \}\) be a set of n points in \({\mathbb R}^d\). For an assignment \(r:P\rightarrow {\mathbb R}_{\geqslant 0}\), let \(\mathcal {G}_r\) be the directed graph on the vertex set \(P\) obtained by putting a directed edge from a vertex \(p_i\) to a vertex \(p_j\) iff \({{\,\textrm{dist}\,}}(p_i,p_j) \leqslant r(p_i)\), where \({{\,\textrm{dist}\,}}(p_i,p_j)\) denotes the distance between \(p_i\) and \(p_j\). We call \(\mathcal {G}_r\) the communication graph on \(P\) induced by the range assignment r. The cost of a range assignment r is defined as \({{\,\textrm{cost}\,}}_\alpha (r):= \sum _{p_i\in P} r(p_i)^\alpha \), where \(\alpha \geqslant 1\) is called the distance-power gradient. In practice, \(\alpha \) typically varies from 1 to 6 [15]. We then want to find a range assignment that minimizes the cost while ensuring that \(\mathcal {G}_r\) has some desired property. Properties that have been investigated in this context include strong connectivity [9, 14], h-hop strong connectivity [8, 10, 14], broadcast capability—here \(\mathcal {G}_r\) must contain a broadcast tree (that is, an arborescence) rooted at the source point \(p_0\)—and h-hop broadcast capability [2, 13]; see the survey by Clementi et al. [6] for an overview of the various range-assignment problems. Most previous work considered the Euclidean setting. There has been some work on arbitrary metric spaces for the strong connectivity version [4, 12]. (Note that while the 2-dimensional version seems the most relevant setting, the distances may not be Euclidean due to obstacles that reduce the strength of the signal of a device.)
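
To make the model concrete, here is a minimal Python sketch (ours, for illustration only; the names `points`, `ranges`, `cost`, and `has_broadcast_tree` are not from the paper) of the cost function and of the broadcast property of \(\mathcal {G}_r\):

```python
import math

def cost(ranges, alpha):
    # cost_alpha(r) = sum over all points of r(p_i)^alpha
    return sum(r ** alpha for r in ranges)

def has_broadcast_tree(points, ranges):
    # G_r has the directed edge (p_i, p_j) iff dist(p_i, p_j) <= r(p_i).
    # A broadcast tree rooted at p_0 exists iff every point is reachable
    # from p_0 in G_r, which we test with a simple graph search.
    n, seen, stack = len(points), {0}, [0]
    while stack:
        i = stack.pop()
        for j in range(n):
            if j not in seen and math.dist(points[i], points[j]) <= ranges[i]:
                seen.add(j)
                stack.append(j)
    return len(seen) == n
```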

In this paper we focus on the broadcast version of the range-assignment problem. This version can be solved optimally in a trivial manner when \(\alpha =1\), by setting \(r(p_0):= \max _{0\leqslant i < n} {{\,\textrm{dist}\,}}(p_0,p_i)\) and \(r(p_i):=0\) for \(i>0\). Clementi et al. [7] gave a polynomial-time algorithm for the 1-dimensional problem when \(\alpha \geqslant 2\). Moreover, Clementi et al. [5] showed that the problem is NP-hard for any \(\alpha > 1\) and any \(d\geqslant 2\). Clementi et al. [7], Clementi et al. [5], and Wan et al. [17] also showed that the problem can be approximated within a factor \(c\cdot 2^\alpha \) for any \(\alpha \geqslant 2\) and for a certain constant c. Furthermore, Clementi et al. [5] showed that for any \(d\geqslant 2\) and any \(\alpha \geqslant d\), there is a function \(f: {\mathbb N}\times {\mathbb R}\rightarrow {\mathbb R}\) such that the problem can be approximated within a factor \(f(d,\alpha )\) in d-dimensional Euclidean space. Fuchs [11] showed that for \(d=2\), the problem remains NP-hard for any \(\alpha >1\) even for so-called well-spread instances. For dimension \(d\geqslant 3\), he also showed that the problem is NP-hard to approximate within a factor of 51/50 when \(\alpha > 1\); the result also holds for well-spread instances when \(\alpha > d\).

Our contribution. We study the online version of the broadcast range-assignment problem. Here the points \(p_0,p_1,\ldots ,p_{n-1}\) arrive one by one, and the goal is to maintain a range assignment r such that \(\mathcal {G}_r\) contains a broadcast tree on the currently inserted points, rooted at the first point \(p_0\). Of course one could simply recompute an optimal assignment from scratch after each arrival, but in the online setting resources that have been given out cannot be taken back. For the range-assignment problem this means that we are not allowed to decrease the range of any point. In fact, our algorithms have the useful property that upon the arrival of each point, we change the current range assignment only minimally: either we do not change it at all—this happens when the newly arrived point is already within range of an existing point—or we increase the range of only a single point. Our goal is to obtain algorithms with a good competitive ratio. As far as we know, the range-assignment problem has not been studied from the perspective of online algorithms.

We first prove a lower bound on the competitive ratio achievable by any online algorithm: even in \({\mathbb R}^1\) there is a constant \(c_{\alpha }>1\) (which depends on the distance-power gradient \(\alpha \)) such that no online algorithm can be \(c_{\alpha }\)-competitive. For \(\alpha =2\), we have \(c_{\alpha }>1.57\).

We then investigate the following two natural online algorithms for the broadcast range-assignment problem. Suppose the point \(p_j\) arrives. Our algorithms all set \(r(p_j):=0\) and, as mentioned, they do not change any of the ranges \(r(p_0),\ldots ,r(p_{j-1})\) if \(|p_i p_j| \leqslant r(p_i)\) for some \(0\leqslant i<j\). When \(p_j\) is not within range of an already inserted point, the algorithms increase the range of one point, as follows (a code sketch of both rules is given after the list). Let \({{\,\textrm{nn}\,}}(p_j)\) denote the nearest neighbor of \(p_j\) in the set \(\{ p_0,\ldots ,p_{j-1} \}\), with ties broken arbitrarily.

  • Nearest-Neighbor (nn) increases the range of \({{\,\textrm{nn}\,}}(p_j)\) to \({{\,\textrm{dist}\,}}(p_j,{{\,\textrm{nn}\,}}(p_j))\).

  • Cheapest Increase (ci) increases the range of \(p_{i^*}\) to \({{\,\textrm{dist}\,}}(p_{i^*},p_j)\), where \(p_{i^*}\) is a point minimizing the cost increase of the assignment, which is \({{\,\textrm{dist}\,}}(p_{i^*},p_j)^{\alpha }-r(p_{i^*})^{\alpha }\) where \(r(p_{i^*})\) denotes the current range of \(p_{i^*}\).
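
The following Python sketch (ours; a simplified illustration under the assumption that points are given as coordinate tuples) implements one insertion step of both rules and returns the resulting cost increase:

```python
import math

def online_insert(points, ranges, p_new, alpha, rule="nn"):
    # One step of the online algorithm: points/ranges describe
    # p_0, ..., p_{j-1}; p_new is the newly arriving point p_j.
    points.append(p_new)
    ranges.append(0.0)                       # r(p_j) := 0
    old = range(len(points) - 1)
    # If p_j is already within range of some p_i, change nothing.
    if any(math.dist(points[i], p_new) <= ranges[i] for i in old):
        return 0.0
    if rule == "nn":    # expand the range of p_j's nearest neighbor
        i_star = min(old, key=lambda i: math.dist(points[i], p_new))
    else:               # "ci": minimize dist(p_i, p_j)^alpha - r(p_i)^alpha
        i_star = min(old, key=lambda i:
                     math.dist(points[i], p_new) ** alpha - ranges[i] ** alpha)
    d = math.dist(points[i_star], p_new)
    increase = d ** alpha - ranges[i_star] ** alpha
    ranges[i_star] = d                       # ranges never decrease
    return increase
```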

Table 1 Overview of results on the competitive ratios of nn and ci

The results are summarized in Table 1. Note that the lower bounds hold only for nn, while the upper bounds hold for both nn and ci; the exception is the third row, which is for 2-nn (see below). The lower bound of \(6(1+0.52^\alpha )\) mentioned in the table—the exact bound is \(6(1+(\frac{\sqrt{6}-\sqrt{2}}{2})^\alpha )\)—applies to all \(\alpha >1\), and thus implies the given lower bound for \(\alpha =2\). Recall that for \(d=1\) and \(\alpha =2\), we also have a universal lower bound of 1.57 that holds for any online algorithm and, hence, also for ci. The exact value of \(\alpha ^*\) is \(\alpha ^* = \mathop {\mathrm {arg\,min}}\limits _{\alpha >2} F^*_\alpha \), where \(F^*_\alpha =\alpha \frac{2^\alpha -3}{2^{\alpha -1}-\alpha }\).

As can be seen in the table, nn is O(1)-competitive for \(\alpha =2\), but the competitive ratio is quite large, namely 322. We therefore also analyze the following variant of \(\text {nn} \), which (if \(p_j\) is not yet within range of an existing point) proceeds as follows:

  • 2-Nearest-Neighbor (2-nn) increases the range of \({{\,\textrm{nn}\,}}(p_j)\) to \(2\cdot {{\,\textrm{dist}\,}}(p_j,{{\,\textrm{nn}\,}}(p_j))\).

We prove that the competitive ratio of 2-nn is at most 36 for \(\alpha =2\). Thus, while still rather large, the competitive ratio is a lot smaller than what we were able to prove for nn. It is interesting to note that both nn and 2-nn make decisions that are independent of \(\alpha \). Hence, nn obtains a solution that is simultaneously competitive for all \(\alpha \geqslant 2\).
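
For completeness, a stand-alone sketch of the variant in the same style as above (ours, for illustration; note that the decision which point to expand is again independent of \(\alpha \)):

```python
import math

def two_nn_insert(points, ranges, p_new, alpha):
    # Variant of the nn rule: the nearest neighbor's range is set to
    # TWICE the distance to the newly arrived point.
    points.append(p_new)
    ranges.append(0.0)
    old = range(len(points) - 1)
    if any(math.dist(points[i], p_new) <= ranges[i] for i in old):
        return 0.0
    i_star = min(old, key=lambda i: math.dist(points[i], p_new))
    d = math.dist(points[i_star], p_new)
    increase = (2 * d) ** alpha - ranges[i_star] ** alpha
    ranges[i_star] = 2 * d
    return increase
```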

As a final contribution we generalize the broadcast problem to points in arbitrary metric spaces. Since, to the best of our knowledge, this version has not been studied before, we present an approximation algorithm for the offline setting; its approximation ratio is \(5^{\alpha }\). In this offline setting the algorithm must be what Boyar et al. [3] call an incremental algorithm: an algorithm that, even though it may know the future, maintains a feasible solution at any time. For the online setting (where the future is unknown) we obtain an \(O(4^{\alpha }\log n)\)-competitive algorithm.

Notation. We let \(P:= p_0,\ldots ,p_{n-1}\) denote the input sequence, where we assume without loss of generality that \(p_i\) is inserted at time i and that all \(p_i\) are distinct. Define \(P_i:= p_0,\ldots ,p_i\), and denote the range of a point \(p_i\in P_j\) just after the insertion of the point \(p_j\) by \(r_j(p_i)\). Thus in the online version we require that \(r_j(p_i) \leqslant r_{j+1} (p_i)\). For an algorithm alg we use \({{\,\textrm{cost}\,}}_{\alpha }(\text {alg} (P))\) to denote the cost incurred by alg on input \(P\) for distance-power gradient \(\alpha \). Finally we denote the ball of radius \(\rho \) centered at a point p by \(B(p,\rho )\); note that in \({\mathbb R}^1\) this is an interval of length \(2\rho \) and in \({\mathbb R}^2\) it is a disk of radius \(\rho \).

2 Online Range-Assignment in \({\mathbb R}^1\)

In this section we prove that no online algorithm can have a competitive ratio arbitrarily close to 1, even in \({\mathbb R}^1\). We also prove bounds on the competitive ratio of nn and ci in \({\mathbb R}^1\).

2.1 A Universal Lower Bound

To prove the lower bound we consider an arbitrary online algorithm alg. Our adversary first presents the points \(p_0=0\), \(p_1=x\), and \(p_2=\delta _\alpha \cdot x\). Depending on the range assignment alg has made so far, the adversary either ends the instance or presents a fourth point \(p_3=-\delta _\alpha \cdot x\). By picking a suitable value for \(\delta _\alpha \) and making x sufficiently large, we obtain the desired lower bound. This is made precise in the following theorem.

Theorem 1

For any distance-power gradient \(\alpha >1\), there is a constant \(c_\alpha >1\) such that any online algorithm for the range assignment problem in \({\mathbb R}^1\) has a competitive ratio of at least \(c_\alpha \). For \(\alpha =2\) this constant is \(c_2 \approx 1.58\).

Proof

Let \(\alpha > 1\) and let alg be an algorithm with competitive ratio \(c \geqslant 1\), i.e., there is a constant a such that the cost of alg is upper bounded by \(c\cdot {{\,\mathrm{{{\,\textrm{OPT}\,}}}\,}}+ a\). We also define

$$\begin{aligned} c_\alpha :&= \max _{\delta>1} \min \left( \frac{\delta ^\alpha }{1+(\delta -1)^\alpha }, \frac{\delta ^\alpha + (\delta -1)^\alpha }{\delta ^\alpha }, \frac{1 + (\delta +1)^\alpha }{\delta ^\alpha } \right) , \\ \text{ and } \quad \delta _\alpha :&= \mathop {\mathrm {arg\,max}}\limits _{\delta >1} \min \left( \frac{\delta ^\alpha }{1+(\delta -1)^\alpha }, \frac{\delta ^\alpha + (\delta -1)^\alpha }{\delta ^\alpha }, \frac{1 + (\delta +1)^\alpha }{\delta ^\alpha } \right) . \end{aligned}$$

(The reasons behind the various terms in these definitions will become apparent in the rest of the proof.) We show that \(c \geqslant c_\alpha \) by constructing the following families of instances consisting of, respectively, three and four points, and parametrized by the real number \(x\geqslant 1 \):

$$\begin{aligned}&\mathcal {F} _1:=\{ \{p_0=0, p_1=x, p_2=\delta _\alpha \cdot x \} \} \\ \text{ and } \quad&\mathcal {F} _2:=\{ \{p_0=0, p_1=x, p_2=\delta _\alpha \cdot x, p_3=-\delta _\alpha \cdot x \} \}. \end{aligned}$$

See Fig. 1 for an illustration.

Fig. 1 The lower-bound construction in \({\mathbb R}^1\)

Note that there is a one-to-one correspondence between the instances in the two families: each instance of \(\mathcal {F} _1\) is the beginning of exactly one instance of \(\mathcal {F} _2\), and each instance of \(\mathcal {F} _2\) extends exactly one instance of \(\mathcal {F} _1\).

For any x, depending on what alg does after \(p_2\) is inserted, we choose an instance from either the family \(\mathcal {F} _1\) or the family \(\mathcal {F} _2\) using the following rule: if after \(p_2\) is inserted, alg has a disk of radius at least \(\delta _\alpha \cdot x\), we choose \(\mathcal {F} _1\), otherwise we choose \(\mathcal {F} _2\). In the former case, alg pays at least \(\delta _\alpha ^\alpha \cdot x^\alpha \) while the optimal solution would be to place a disk of radius x at \(p_0\) and a disk of radius \((\delta _\alpha - 1) \cdot x \) at \(p_1\) and pay \( x ^\alpha + (\delta _\alpha - 1)^\alpha \cdot x ^\alpha \). Since the competitive ratio of alg is c, we have that \(\delta _\alpha ^\alpha \cdot x^\alpha \leqslant c\cdot x ^\alpha ( 1+ (\delta _\alpha - 1)^\alpha ) + a\) and hence

$$\begin{aligned} c \geqslant \frac{ \delta _\alpha ^\alpha }{ 1 + (\delta _\alpha - 1)^\alpha } -\frac{ a }{ x ^\alpha ( 1 + (\delta _\alpha - 1)^\alpha ) }. \end{aligned}$$

Since the second term can be made arbitrarily small by choosing x large enough, c must be at least \(\frac{ \delta _\alpha ^\alpha }{1 + (\delta _\alpha - 1)^\alpha }\).

In the latter case, alg has one disk of radius at least x and one of radius at least \((\delta _\alpha - 1) \cdot x \) before \(p_3\) is inserted. We split this case into two subcases: in the first one, alg increases the radius of the disk at \(p_0\), and in the second one, alg increases the radius of the disk at \(p_1\). (Increasing the radius of the disk at \(p_2\) to reach \(p_3\) costs at least \((2\delta _\alpha )^\alpha \cdot x^\alpha \geqslant (\delta _\alpha +1)^\alpha \cdot x^\alpha \), so it is dominated by the second subcase.) The cost alg has to pay after \(p_3\) has been inserted is at least either \(\delta _\alpha ^\alpha \cdot x^\alpha + (\delta _\alpha -1 )^\alpha \cdot x^\alpha \) in the first subcase, or \(x^\alpha + (\delta _\alpha +1 )^\alpha \cdot x^\alpha \) in the second, whereas the optimal solution for both subcases would be to place only one disk of radius \(\delta _\alpha \cdot x\) at \(p_0\) and pay \(\delta _\alpha ^\alpha \cdot x^\alpha \). Since the competitive ratio of alg is c, we have that \(\delta _\alpha ^\alpha \cdot x^\alpha + (\delta _\alpha -1 )^\alpha \cdot x^\alpha \leqslant c \cdot \delta _\alpha ^\alpha \cdot x^\alpha + a\) for the first subcase and hence

$$\begin{aligned} c \geqslant \frac{ \delta _\alpha ^\alpha + (\delta _\alpha -1 )^\alpha }{ \delta _\alpha ^\alpha } -\frac{ a }{ \delta _\alpha ^\alpha \cdot x^\alpha }; \end{aligned}$$

and that \(x^\alpha + (\delta _\alpha +1 )^\alpha \cdot x^\alpha \leqslant c\delta _\alpha ^\alpha \cdot x^\alpha + a\) for the second subcase and hence

$$\begin{aligned} c \geqslant \frac{ 1 + (\delta _\alpha +1 )^\alpha }{ \delta _\alpha ^\alpha } -\frac{ a }{ \delta _\alpha ^\alpha \cdot x^\alpha }. \end{aligned}$$

Since, in both subcases, the second term can be made arbitrarily small by choosing x large enough, c must be at least \(\frac{ \delta _\alpha ^\alpha + (\delta _\alpha -1 )^\alpha }{ \delta _\alpha ^\alpha }\) in the first subcase and at least \(\frac{ 1 + (\delta _\alpha +1 )^\alpha }{ \delta _\alpha ^\alpha }\) in the second; otherwise there would be an infinite family of instances contradicting the assumed competitive ratio.

Therefore, the competitive ratio of alg must be at least the minimum of the three bounds derived above, which is exactly \(c_\alpha \) by the choice of \(\delta _\alpha \). Even though it is not clear how to compute the value of \(c_\alpha \) in closed form for arbitrary \(\alpha >1\), it is easy to see that it is strictly larger than 1. If \(\alpha = 2\), we have (using WolframAlpha)

$$\begin{aligned} c_2 :&= \max _{\delta >1} \min \left( \frac{\delta ^2}{1+(\delta -1)^2}, \frac{\delta ^2 + (\delta -1)^2}{\delta ^2}, \frac{1 + (\delta +1)^2}{\delta ^2} \right) \\&= \frac{1}{12}\left( 4 + \root 3 \of {496-24\sqrt{183}} + 2 \root 3 \of {62 + 3 \sqrt{183}} \right) \\&\approx 1.58 \end{aligned}$$

which is achieved for

$$\begin{aligned} \delta _2 :&= \mathop {\mathrm {arg\,max}}\limits _{\delta >1} \min \left( \frac{\delta ^2}{1+(\delta -1)^2}, \frac{\delta ^2 + (\delta -1)^2}{\delta ^2}, \frac{1 + (\delta +1)^2}{\delta ^2} \right) \\&= \frac{1}{3} \left( 5 + \root 3 \of {62-3\sqrt{183}} + \root 3 \of {62 + 3 \sqrt{183}} \right) \\&\approx 4.15. \\ \end{aligned}$$

\(\square \)
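
The closed-form values above can be checked numerically; the following stand-alone Python sketch (ours, not part of the proof) maximizes the minimum of the three case bounds over a grid of values of \(\delta \):

```python
def bound(delta, alpha):
    # Minimum of the three case bounds from the proof of Theorem 1.
    return min(delta ** alpha / (1 + (delta - 1) ** alpha),
               (delta ** alpha + (delta - 1) ** alpha) / delta ** alpha,
               (1 + (delta + 1) ** alpha) / delta ** alpha)

grid = (1 + k / 10000 for k in range(1, 100000))   # delta ranging over (1, 11)
c2, d2 = max((bound(d, 2), d) for d in grid)
print(round(c2, 4), round(d2, 4))  # ≈ 1.5764 and ≈ 4.1527, matching c_2 and delta_2
```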

2.2 Bounds for nn and ci

We now prove bounds on the competitive ratio of the algorithms nn and ci explained in the introduction.

Theorem 2

Consider the range-assignment problem in \({\mathbb R}^1\) with distance-power gradient \(\alpha \).

  (i) For any \(\alpha >1\), the competitive ratio of ci is at most 2.

  (ii) For any \(\alpha >1\), the competitive ratio of nn is exactly 2.

Proof

We first prove the upper bounds. Assume without loss of generality that \(p_0=0\). We start by showing that both nn and ci perform optimally for \(\alpha >1\) on any sequence \(p_0,p_1,\ldots ,p_{n-1}\) with \(p_j\geqslant 0\) for all \(1\leqslant j<n\).

Claim

Suppose \(p_0=0\) and \(p_j\geqslant 0\) for all \(1\leqslant j<n\). Then nn and ci are optimal.

Proof of claim

We first observe that for any point \(p_j\) the following holds for the graph \(\mathcal {G}_{r_j}\) that we have after the insertion of \(p_j\): for any point \(p_i\) with \(0<i\leqslant j\) there is a path from the source \(p_0\) to \(p_i\) that only uses edges directed from left to right, that is, edges \((p_{i'},p_{i''})\) with \(p_{i'}< p_{i''}\). Indeed, if the path uses an edge \((p_{i'},p_{i''})\) with \(p_{i'}> p_{i''}\) then the subpath from \(p_0\) to \(p_{i'}\) must contain an edge \((p_s,p_t)\) with \(p_s\leqslant p_{i''}\leqslant p_t\), and then we can go directly from \(p_s\) to \(p_{i''}\). This observation implies that there exists an optimal strategy Opt such that the balls \(B(p_i,r_j(p_i))\) of the currently inserted points never extend beyond the currently rightmost point, a property which holds for nn and ci as well. (Intuitively, the part of \(B(p_i,r_j(p_i))\) to the right of the rightmost point is currently useless, and the part of \(B(p_i,r_j(p_i))\) to the left of \(p_i\) is not needed because we never need edges going to the left. Hence, we decrease \(r_j(p_i)\) until the right endpoint of \(B(p_i,r_j(p_i))\) coincides with the currently rightmost point, and increase the range of \(p_i\) later, as needed.)

Now imagine running nn, ci, and Opt simultaneously on \(P\). We claim that nn and ci do exactly the same, and that their cost increase after the insertion of any point \(p_j\) is at most the cost increase of Opt. To see this, let \(p_{j'}\) be the rightmost point just before inserting \(p_j\). If \(p_j < p_{j'}\) then nn and ci do not increase any range—since \(p_{j'}\) is reachable from \(p_0\), the point \(p_j\) must already be reachable as well—and so the cost increase is zero. If \(p_j>p_{j'}\) then nn and ci both increase the range of \(p_{j'}\) from 0 to \(p_j-p_{j'}\). For nn this is clear. For ci it follows from the fact that \(\alpha >1\). Indeed, increasing the range of some \(p_i<p_{j'}\) gives a cost increase \((r_{j-1}(p_i)+x+(p_j-p_{j'}))^\alpha - (r_{j-1}(p_i))^\alpha \), for some \(x\geqslant 0\). This is more than \((p_j-p_{j'})^\alpha \), since we must have \(r_{j-1}(p_i) +x>0\). By a similar reasoning, and using that the balls of Opt do not extend beyond \(p_{j'}\), we conclude that the cost increase of Opt cannot be smaller than \((p_j-p_{j'})^\alpha \). Hence, nn and ci are optimal on a sequence of non-negative points. \(\square \)

Next, we prove that the optimality for non-negative points gives a competitive ratio of at most 2 for any input sequence \(P\). Let \(P^+\) and \(P^-\) denote the subsequences of \(P\) consisting of the non-negative and the non-positive points, respectively. Note that the source point \(p_0=0\) is included in both subsequences. We claim that \({{\,\textrm{cost}\,}}_{\alpha }(\text {Opt} (P)) \geqslant {{\,\textrm{cost}\,}}_{\alpha }(\text {Opt} (P^+))\). Indeed, we can modify the optimal solution for \(P\) to a valid solution for \(P^+\) whose cost is at most \({{\,\textrm{cost}\,}}_{\alpha }(\text {Opt} (P))\), as follows: whenever the range of a point \(p_i \not \in P^+\) is increased to reach a point \(p_j\in P^+\), we instead increase the range of \(p_0\) by the same amount. A similar argument gives \({{\,\textrm{cost}\,}}_{\alpha }(\text {Opt} (P)) \geqslant {{\,\textrm{cost}\,}}_{\alpha }(\text {Opt} (P^-))\).

We now argue that \({{\,\textrm{cost}\,}}_\alpha (\text {nn} (P)) \leqslant {{\,\textrm{cost}\,}}_{\alpha }(\text {nn} (P^+))+ {{\,\textrm{cost}\,}}_\alpha (\text {nn} (P^-))\) and, similarly, that \({{\,\textrm{cost}\,}}_\alpha (\text {ci} (P)) \leqslant {{\,\textrm{cost}\,}}_{\alpha }(\text {ci} (P^+))+ {{\,\textrm{cost}\,}}_\alpha (\text {ci} (P^-))\).

Imagine running \(\text {nn} \) simultaneously on \(P\), on \(P^+\) and on \(P^-\). We claim that the increase of \({{\,\textrm{cost}\,}}_\alpha (\text {nn} (P))\) upon the arrival of a new point \(p_j\) is at most the increase of \({{\,\textrm{cost}\,}}_{\alpha }(\text {nn} (P^+))\) if \(p_j>0\), and at most the increase of \({{\,\textrm{cost}\,}}_{\alpha }(\text {nn} (P^-))\) if \(p_j<0\). To see this, assume without loss of generality that \(p_j>0\) and suppose the increase of \({{\,\textrm{cost}\,}}_\alpha (\text {nn} (P))\) is non-zero. Then \(p_j\) lies to the right of the currently rightmost point, \(p_{i}\). Both \(\text {nn} (P)\) and \(\text {nn} (P^+)\) then increase the range of \(p_i\), and pay the same cost. The only exception is when \(i=0\), that is, \(p_j\) is the first point with \(p_j>0\). In this case \(\text {nn} (P)\) may pay less than \(\text {nn} (P^+)\), since \(\text {nn} (P)\) could already have increased the range of \(p_0\) due to arrivals of points to the left of \(p_0\).

A similar argument works for ci. Indeed, \(\text {ci} (P^+)\) and \(\text {ci} (P^-)\) never extend a ball beyond the currently rightmost and leftmost point, respectively. Hence, when the new point \(p_j\) lies, say, to the right of the currently rightmost point \(p_i\), then \(\text {ci} (P^+)\) would pay \(({{\,\textrm{dist}\,}}(p_i,p_j))^\alpha \). Since \(\text {ci} (P)\) also has the option to increase the range of \(p_i\), it will never pay more.

Hence, for nn—a similar computation holds for ci—we get

$$\begin{aligned} {{\,\textrm{cost}\,}}_\alpha (\text {nn} (P)) \leqslant {{\,\textrm{cost}\,}}_{\alpha }(\text {nn} (P^+))+ {{\,\textrm{cost}\,}}_\alpha (\text {nn} (P^-)) \leqslant 2\cdot {{\,\mathrm{{{\,\textrm{OPT}\,}}}\,}}(P). \end{aligned}$$

It remains to prove the lower bound for part (ii) of the theorem. Assume for a contradiction that there are constants \(\varepsilon >0\) and a such that for all inputs \(P\) we have \({{\,\textrm{cost}\,}}_{\alpha }(\text {nn} (P)) \leqslant (2-\varepsilon ) \cdot {{\,\textrm{cost}\,}}_{\alpha }(\text {Opt} (P))+a\). Consider the input \(p_0=0\), \(p_1=\delta x\), \(p_2=x\), and \(p_3=-x\), for some \(\delta \in (0,1]\) and \(x>0\) to be determined later. The optimal solution has \(r_3(p_0)=x\) and \(r_3(p_1)=r_3(p_2)=r_3(p_3)=0\), while nn has \(r_3(p_0)=x\), \(r_3(p_1)=(1-\delta )x\), and \(r_3(p_2)=r_3(p_3)=0\). Hence, any constant c for which the assumed bound holds on this instance must satisfy

$$\begin{aligned} c \geqslant \frac{((1-\delta )^\alpha +1) x^\alpha - a}{x^\alpha } = (1-\delta )^\alpha +1 - \frac{a}{x^\alpha }, \end{aligned}$$

which is larger than \(2-\varepsilon \) when we pick \(\delta \) sufficiently small and x sufficiently large, a contradiction. \(\quad \square \)
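
The lower-bound instance is easy to simulate; a minimal stand-alone sketch (ours) with \(\alpha =2\), \(\delta =0.01\), and \(x=1\):

```python
def nn_cost_1d(points, alpha):
    # Simulate nn on a 1-D instance; points[0] is the source p_0.
    pts, radii, total = [points[0]], [0.0], 0.0
    for p in points[1:]:
        if any(abs(q - p) <= r for q, r in zip(pts, radii)):
            pts.append(p); radii.append(0.0)
            continue                          # already covered: no cost
        i = min(range(len(pts)), key=lambda k: abs(pts[k] - p))
        total += abs(pts[i] - p) ** alpha - radii[i] ** alpha
        radii[i] = abs(pts[i] - p)
        pts.append(p); radii.append(0.0)
    return total

delta = 0.01
print(nn_cost_1d([0.0, delta, 1.0, -1.0], 2))  # ≈ 1.98, whereas OPT pays 1
```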

3 Online Range-Assignment in \({\mathbb R}^2\)

3.1 Bounds on the Competitive Ratio of nn and ci when \(\alpha >2\)

As before, let \(p_0,\ldots ,p_{n-1}\) be the sequence of inserted points, with \(p_0\) being the source point. Consider some point \(p_i\), and some arbitrary disk D centered at \(p_i\)—the disk D need not have radius equal to the range of \(p_i\). Define \(S(p_i,D):= \{ p_j: j\geqslant i \text{ and } p_j\in D\}\) to be the set containing \(p_i\) plus all points arriving after \(p_i\) that lie in D. For a point \(p_j\), define \({{\,\textrm{cost}\,}}_{\alpha }(\text {nn},p_j)\) to be the cost incurred by nn when \(p_j\) is inserted; in other words, \({{\,\textrm{cost}\,}}_{\alpha }(\text {nn},p_j):= 0\) when \(p_j\) falls into an existing disk \(B(p_i,r_{j-1}(p_i))\), and \({{\,\textrm{cost}\,}}_{\alpha }(\text {nn},p_j):=(r_{j}(p_k))^\alpha - (r_{j-1}(p_k))^\alpha \) otherwise, where \(p_k:= {{\,\textrm{nn}\,}}(p_j)\). Define \({{\,\textrm{cost}\,}}_{\alpha }(\text {ci},p_j)\) similarly for ci. Finally, for \(p_j\in S(p_i,D)\) define

$$\begin{aligned} F_{\alpha } (p_j; p_i, D):= \min \{ {{\,\textrm{dist}\,}}(p_j,p_k)^{\alpha } \mid p_k \in S(p_i,D) \text{ and } k < j \}. \end{aligned}$$

(If there is no \(p_k\in S(p_i,D)\) with \(k<j\) then the minimum is \(+\infty \) by definition.) The next lemma shows that we can use the function \(F_{\alpha }\) to upper bound the cost of nn and ci. We later apply this lemma to all disks in an optimal solution to bound the competitive ratio. Note that for any disk D centered at \(p_i\) and any \(p_j \in S(p_i,D)\) with \(j>i\) we have \({{\,\textrm{cost}\,}}_\alpha (\text {nn},p_j) \leqslant F_\alpha (p_j; p_i,D)\). Indeed, nn either pays zero (when \(p_j\) already lies inside a disk) or it expands the disk of \(p_j\)’s nearest neighbor (which may or may not lie in D), which costs at most \(F_\alpha (p_j; p_i,D)\). Similarly \({{\,\textrm{cost}\,}}_\alpha (\text {ci},p_j) \leqslant F_\alpha (p_j; p_i,D)\). Hence we have:

Lemma 1

Let \(p_i\) be any input point and D a disk centered at \(p_i\). Then for any subset \(S(D)\subseteq S(p_i,D)\setminus \{p_i\}\) we have:

$$\begin{aligned} \sum _{p_j\in S(D) } {{\,\textrm{cost}\,}}_{\alpha }(\text {nn},p_j) \leqslant \sum _{p_j\in S(D) } F_{\alpha }(p_j; p_i,D) \end{aligned}$$

and

$$\begin{aligned} \sum _{p_j\in S(D) } {{\,\textrm{cost}\,}}_{\alpha }(\text {ci},p_j) \leqslant \sum _{p_j\in S(D) } F_{\alpha }(p_j; p_i, D). \end{aligned}$$

Lemma 1 suggests the following strategy to bound the competitive ratio of nn (and ci). Consider, for each point \(p_i\), the final disk D placed at \(p_i\) by an optimal offline algorithm, and let \(\rho \) be its radius. The cost of this disk is \(\rho ^\alpha \). We charge the cost of the disks placed by nn (or ci) at points \(p_j\) inside D—this cost can be bounded using the function \(F_{\alpha }\), by Lemma 1—to the cost of D. This motivates the following definition:

$$\begin{aligned} F^*_\alpha := \max \frac{1}{\rho ^\alpha }\sum _{p_j\in S(D) } F_{\alpha }(p_j; p_i,D), \end{aligned}$$

where the maximum is over all possible input instances \(P\), all points \(p_i\in P\), all disks D of radius \(\rho \) centered at \(p_i\), and all subsets \(S(D)\subseteq S(p_i,D){\setminus } \{p_i\}\). The value \(F^*_\alpha \) bounds the maximum total charge to any disk D in the optimal solution, relative to D’s cost \(\rho ^\alpha \). The next lemma shows that for \(\alpha >2\), the value \(F^*_{\alpha }\) is bounded by a constant (depending on \(\alpha \)).

Lemma 2

We have that \(F^*_\alpha \leqslant \alpha \frac{2^\alpha -3}{2^{\alpha -1}-\alpha }\) for any \(\alpha > 2\).

The formal proof of the lemma is quite technical, so before diving into it we sketch the intuition, which also shows why the condition \(\alpha >2\) is needed. The quantity \(F^*_\alpha \) can be thought of in the following way. Consider a disk D of radius \(\rho \) centered at \(p_i\), and imagine the points in \(S(D)\) arriving one by one. (The points in \(S(p_i,D)\setminus S(D)\) are irrelevant.) Whenever a new point \(p_j\) arrives, the sum \(\sum _{p_j\in S(D)} F_{\alpha }(p_j; p_i,D)\) increases by \({{\,\textrm{dist}\,}}(p_j,{{\,\textrm{nn}\,}}(p_j))^\alpha \), where \({{\,\textrm{nn}\,}}(p_j)\) is \(p_j\)’s nearest neighbor among the already arrived points from \(S(D)\) including \(p_i\). Since the distances to the nearest neighbor decrease as more points arrive in D—more precisely, we cannot have many points whose nearest neighbor is at a relatively large distance—the hope is that the sum of these distances to the power \(\alpha \) converges, and this is indeed what we can prove for \(\alpha >2\). For \(\alpha =2\) it does not converge, as shown by the following example, illustrated in Fig. 2.

Fig. 2 Example showing that \(F_{\alpha }\) does not converge for \(\alpha =2\)

Let D be a unit disk centered at \(p_i\), and consider the inscribed square \(\sigma \) of D. Note that the radius of \(\sigma \)—the distance from its center to its vertices—is 1. We insert a set \(S(D)\) of \(n-1\) points in rounds, as follows. In the first round we partition \(\sigma \) into four squares of radius 1/2, and we insert a point in the center of each of them. These four points all have \(p_i\) as nearest neighbor, and the sum increases by \(4\cdot (1/2)^\alpha =(1/2)^{\alpha -2}\). We recurse in each of the four squares. Thus in the k-th round, we have \(4^{k-1}\) squares of radius \((1/2)^{k-1}\), each of which is partitioned into four squares of radius \((1/2)^k\), and we place a point in the center of each such subsquare. This increases the sum by \(4^k\cdot (1/2^{k})^\alpha =(1/2^{\alpha -2})^k\). The total cost is \(\sum _{k=1}^t (1/2^{\alpha -2})^k\), where \(t:=\Theta (\log n)\) is the number of rounds.

Note that \(1/2^{\alpha -2}=1\) for \(\alpha =2\), giving \(F^*_2=\Omega (\log n)\), while for \(\alpha >2\) the total cost converges. Also note that the example only gives a lower bound on \(F^*_2\); it does not show that nn has an unbounded competitive ratio for \(\alpha =2\). The reason is that nn actually pays less than \(F^*_2\), since in the example most points \(p_j\) are already within range of an existing point upon insertion, and so we do not have to pay \({{\,\textrm{dist}\,}}(p_j,{{\,\textrm{nn}\,}}(p_j))^\alpha \). Indeed, in the next section we prove, using a different argument, that nn is O(1)-competitive even for \(\alpha =2\).
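
A small numerical illustration of this dichotomy (our sketch): the k-th round contributes \((1/2^{\alpha -2})^k\), so the partial sums grow linearly in the number of rounds t for \(\alpha =2\) but converge for \(\alpha >2\):

```python
t = 30  # number of rounds of the recursive-square construction
for alpha in (2.0, 2.5, 3.0):
    s = sum((1 / 2 ** (alpha - 2)) ** k for k in range(1, t + 1))
    print(alpha, round(s, 3))  # 30.0 for alpha = 2; ≈ 2.414 and ≈ 1.0 for alpha > 2
```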

We now present the proof of Lemma 2.

Proof of Lemma 2

Let \(p_\ell \) be a point, let D be any disk centered at \(p_\ell \), and let \(\partial D\) denote the boundary of D. For the sake of simplicity, we rescale D to be a unit disk and relabel the points in \(S(D)\) as \(p_0,\ldots ,p_k\), without changing the ordering and where \(p_0\) (formerly known as \(p_{\ell }\)) is the center of D. From now on, to simplify the notation, we will use \(F_{\alpha }(p_i)\) as a shorthand for \(F_{\alpha }(p_i; p_{\ell }, D)\). We show that \( \sum _{i=1}^{k}F_{\alpha } (p_i) \leqslant \alpha \frac{2^\alpha -3}{2^{\alpha -1}-\alpha }\). To this end we create a potential function \(\Phi :\{0,\ldots ,k \} \rightarrow {\mathbb R}\), with \(\Phi (i)\) being the potential just after \(p_i\) is inserted, with the following properties:

  • \(\Phi (0)=\alpha \frac{2^\alpha -3}{2^{\alpha -1}-\alpha }\),

  • \(\Phi (i)>0\) for any \(i=0,\ldots ,k\),

  • \(\Phi (i-1)-\Phi (i) \geqslant F_\alpha (p_i)\) for any \(i=1,\ldots ,k\).

If such a potential function exists, we then indeed have \(F^*_{\alpha } \leqslant \alpha \frac{2^\alpha -3}{2^{\alpha -1}-\alpha }\).

We now define \(\Phi (i)\). For any point q in the plane, let \({{\,\textrm{nn}\,}}_i(q)\) be its closest point among \(p_0,\ldots ,p_i\). We define the potential \(\phi (q,i)\) at q at time i as follows:

$$\begin{aligned} \phi (q,i) := \left\{ \begin{array}{l} c_\alpha {{\,\textrm{dist}\,}}(q,{{\,\textrm{nn}\,}}_i(q))^{\alpha -2}\\ \qquad \text{ if } q \in D \text{, } \text{ that } \text{ is, } \text{ if } {{\,\textrm{dist}\,}}(q,p_0) \leqslant 1; \\ c_\alpha ( {{\,\textrm{dist}\,}}(q,{{\,\textrm{nn}\,}}_i(q))^{\alpha -2} - {{\,\textrm{dist}\,}}(q,\partial D )^{\alpha -2} )\\ \qquad \text{ if } 1< {{\,\textrm{dist}\,}}(q,p_0) \leqslant \frac{3}{2}, \text{ and } {{\,\textrm{dist}\,}}(q,{{\,\textrm{nn}\,}}_i(q))< {{\,\textrm{dist}\,}}(q,\partial D_2); \\ c_\alpha ( {{\,\textrm{dist}\,}}(q,\partial D_2)^{\alpha -2} - {{\,\textrm{dist}\,}}(q,\partial D )^{\alpha -2} )\\ \qquad \text{ if } 1 < {{\,\textrm{dist}\,}}(q,p_0) \leqslant \frac{3}{2}, \text{ and } {{\,\textrm{dist}\,}}(q,{{\,\textrm{nn}\,}}_i(q)) \geqslant {{\,\textrm{dist}\,}}(q,\partial D_2); \\ 0 \quad \ \ \text{ otherwise; } \end{array} \right. \end{aligned}$$

where \(D_2\) is the disk of radius 2 centered at \(p_0\) and \(c_\alpha = \frac{\alpha (\alpha -1) 2^{\alpha -2}}{\pi (2^{\alpha -1} -\alpha )}\) is a constant depending only on \(\alpha \). See Fig. 3 for an illustration of the cases.

Fig. 3 Outside the grey region the function \(\phi \) is always 0. When q is inside D, the function \(\phi (q,i)\) is simply \(c_\alpha {{\,\textrm{dist}\,}}(q,{{\,\textrm{nn}\,}}_i(q))^{\alpha -2}\). Finally, when q is in the grey area but not in D, that is \(1< {{\,\textrm{dist}\,}}(q,p_0) \leqslant 1.5\), we have that \(\phi (q,i)= c_\alpha \left( \min \{ {{\,\textrm{dist}\,}}(q,{{\,\textrm{nn}\,}}_i(q)),{{\,\textrm{dist}\,}}(q,\partial D_2) \}^{\alpha -2} - {{\,\textrm{dist}\,}}(q,\partial D )^{\alpha -2} \right) \)

We finally define the potential function at time i as

$$\begin{aligned} \Phi (i):= \iint _{{\mathbb R}^2} \phi (q,i) dq. \end{aligned}$$

This potential function can be interpreted as the volume of a certain region \(V_i\) in \({\mathbb R}^3\), where we assume without loss of generality that the center of D lies at the origin and the points \(p_0,\ldots ,p_k\) all lie in the plane \(z=0\). The region \(V_i\) consists of the following. Over D it is the region above the plane \(z=0\) and below the lower envelope of a set of “paraboloids”, one for each point \(p_j\in \{ p_0,\ldots , p_i \}\), defined by \({{\,\textrm{Par}\,}}_\alpha (p_j)= \{ (x,y,z) \mid z = c_\alpha {{\,\textrm{dist}\,}}((x,y),p_j)^{\alpha -2} \}\). Outside of D, on the other hand, it is the region below the lower envelope of \({{\,\textrm{Par}\,}}_\alpha (p)\) for each point \(p\in \{ p_0,\ldots , p_i\} \cup \partial D_2\) and above the paraboloids \({{\,\textrm{Par}\,}}_\alpha (p)\) for each point \(p\in \partial D\). Thus the difference is that outside D, the points \(p\in \partial D_2\) also define paraboloids below which \(V_i\) must stay and, in addition, \(V_i\) is bounded from below by the paraboloids defined by points \(p\in \partial D\). See Fig. 4 for an illustration.

Fig. 4 Cross section of the region \(V_i\) in gray. For clarity, we do not draw the paraboloids of points outside the cross section

We now need to show that this potential function has the claimed properties. It is easy to see that \(\Phi (i)>0\) for each \(i=0,\ldots ,k\). Next we show that \(\Phi (i-1)-\Phi (i) \geqslant F_{\alpha }(p_i)\) for all i.

Let \(p_j\), with \(j<i\), be a nearest neighbor of \(p_i\) and let \(d^*:={{\,\textrm{dist}\,}}(p_i,p_j)\). Upon insertion of \(p_i\), we add the paraboloid \({{\,\textrm{Par}\,}}_\alpha (p_i)\), defined at a point q by \(c_\alpha {{\,\textrm{dist}\,}}(q,p_i)^{\alpha -2}\). The decrease of potential, \(\Phi (i-1)-\Phi (i)\), is then the volume of the part of \(V_{i-1}\) that this new paraboloid cuts off. Let us consider the region

$$\begin{aligned} V_i^*:=\{ q=(x,y,z)\mid&{{\,\textrm{dist}\,}}((x,y,0),p_i) \leqslant d^* \\&\text{ and } c_\alpha {{\,\textrm{dist}\,}}(q,p_i)^{\alpha -2} \leqslant z \leqslant c_\alpha (d^*-{{\,\textrm{dist}\,}}(q,p_i))^{\alpha -2} \}. \end{aligned}$$

See Fig. 5 for an illustration of the region \(V_i^*\).

Fig. 5 The region \(V_i^*\) we use as a lower bound on the decrease of potential upon insertion of \(p_i\). On the left, we have a cross section of the region, where \(d^*={{\,\textrm{dist}\,}}(p_i,p_j)\) and \(p_j\) is the nearest point to \(p_i\) with \(j<i\). On the right, we have the region in 3 dimensions. All the surfaces shown are paraboloids \({{\,\textrm{Par}\,}}_\alpha (p)\) for some point p

Next we argue that \(V_i^*\) is contained in the part of \(V_{i-1}\) that is cut off, that is, \(V_i^* \subseteq V_{i-1}\setminus V_i\). The upper boundary of \(V_i^*\) is defined by paraboloids of points at distance \(d^*\) from \(p_i\); since \(d^*\) is the distance from \(p_i\) to its closest earlier point, every point defining the upper boundary of \(V_{i-1}\) is at distance at least \(d^*\) from \(p_i\), and so \(V_i^*\) lies below the upper boundary of \(V_{i-1}\). On the other hand, the lower boundary of \(V_i^*\) is the paraboloid of \(p_i\) itself, so \(V_i^*\) stays above the lower boundary of \(V_{i-1}\), even if \(p_i\) is close to the boundary of D. Finally, \(V_i^*\) lies above \({{\,\textrm{Par}\,}}_\alpha (p_i)\) and is therefore disjoint from the interior of \(V_i\). Therefore \(V_i^* \subseteq V_{i-1}\setminus V_i\).

We now compute the volume of \(V_i^*\). We do this by fixing a radius \(\rho \), computing the lateral area of the tallest cylinder of radius \(\rho \) that is centered around the vertical axis through \(p_i\) and inscribed in \(V_i^*\), and integrating that value from 0 to \(d^*/2\). For a given \(0 \leqslant \rho \leqslant d^*/2\), the lateral area of the cylinder is \(2\pi \rho h(\rho )\), where \(h(\rho )\) is the height of the tallest cylinder of radius \(\rho \) inscribed in \(V_i^*\). It remains to compute \(h(\rho )\). This is given by the difference in height between the two paraboloids (the one centered at \(p_i\) and one centered at a point at distance \(d^*\) from \(p_i\)), that is, \(h(\rho )= c_\alpha ((d^*-\rho )^{\alpha -2} - \rho ^{\alpha -2})\). Thus,

$$\begin{aligned} {{\,\textrm{Vol}\,}}(V_i^*)&= \int _0^{d^*/2} 2\pi \rho c_\alpha ((d^*-\rho )^{\alpha -2} - \rho ^{\alpha -2}) d\rho \\&= 2\pi c_\alpha \int _0^{d^*/2} \rho (d^*-\rho )^{\alpha -2} - \rho ^{\alpha -1} d\rho . \end{aligned}$$

We can integrate \(\rho (d^*-\rho )^{\alpha -2}\) by parts:

$$\begin{aligned} \int _0^{d^*/2} \rho (d^*-\rho )^{\alpha -2} d\rho&= \rho \frac{-(d^*-\rho )^{\alpha -1}}{\alpha -1} \Big |_0^{d^*/2} - \int _0^{d^*/2} 1\frac{-(d^*-\rho )^{\alpha -1}}{\alpha -1} d\rho \\&= - \rho \frac{(d^*-\rho )^{\alpha -1}}{\alpha -1} - \frac{(d^*-\rho )^{\alpha }}{\alpha (\alpha -1)} \Big |_0^{d^*/2}. \end{aligned}$$

It gives us the following:

$$\begin{aligned} {{\,\textrm{Vol}\,}}(V_i^*)&= 2\pi c_\alpha \left[ - \rho \frac{(d^*-\rho )^{\alpha -1}}{\alpha -1} - \frac{(d^*-\rho )^{\alpha }}{\alpha (\alpha -1)} - \frac{\rho ^\alpha }{\alpha } \right] \Big |_0^{d^*/2} \\&= 2\pi c_\alpha \left[ - \frac{d^*}{2} \frac{(d^*/2)^{\alpha -1}}{\alpha -1} - \frac{(d^*/2)^{\alpha }}{\alpha (\alpha -1)} - \frac{(d^*/2)^\alpha }{\alpha } + \frac{(d^*)^{\alpha }}{\alpha (\alpha -1)} \right] \\&= 2\pi c_\alpha \left[ - \frac{(d^*)^{\alpha }}{2^{\alpha }(\alpha -1)} - \frac{(d^*)^{\alpha }}{2^{\alpha }\alpha (\alpha -1)} - \frac{(d^*)^{\alpha }}{2^{\alpha }\alpha } + \frac{(d^*)^{\alpha }}{\alpha (\alpha -1)} \right] \\&= \frac{\pi }{2^{\alpha -1}\alpha (\alpha -1)} c_\alpha (d^*)^{\alpha } \left[ - \alpha -1 -(\alpha -1) +2^\alpha \right] \\&= \frac{\pi (2^{\alpha -1} - \alpha )}{2^{\alpha -2}\alpha (\alpha -1)} c_\alpha (d^*)^{\alpha }. \end{aligned}$$

Recall that \(c_\alpha =\frac{\alpha (\alpha -1) 2^{\alpha -2}}{\pi (2^{\alpha -1} -\alpha )}\). We thus get

$$\begin{aligned} {{\,\textrm{Vol}\,}}(V_i^*)&= (d^*)^{\alpha }. \end{aligned}$$

Recall that \(d^*:={{\,\textrm{dist}\,}}(p_i,p_j)\), where \(p_j\) is a nearest neighbor to \(p_i\) with \(p_j\in D\) and \(j<i\). Hence, \((d^*)^{\alpha } = F_{\alpha }(p_i)\) and so

$$\begin{aligned} \Phi (i-1)-\Phi (i) = {{\,\textrm{Vol}\,}}(V_{i-1}\setminus V_i) \geqslant {{\,\textrm{Vol}\,}}(V^*_i) = (d^*)^{\alpha } = F_{\alpha }(p_i), \end{aligned}$$

as required.

It remains to show that \(\Phi (0)=\alpha \frac{2^\alpha -3}{2^{\alpha -1}-\alpha }\). To this end, let \(V_0\) be the region representing \(\Phi (0)\), defined as

$$\begin{aligned} V_0 :=&\{ (x,y,z) \mid (x,y) \in D \text{ and } 0\leqslant z \leqslant c_\alpha {{\,\textrm{dist}\,}}((x,y),p_0)^{\alpha -2} \} \\ \cup&\{ (x,y,z) \mid 1< {{\,\textrm{dist}\,}}((x,y),p_0) \leqslant \frac{3}{2} \\&\text{ and } c_\alpha ({{\,\textrm{dist}\,}}((x,y),p_0)-1)^{\alpha -2} \leqslant z \leqslant c_\alpha (2-{{\,\textrm{dist}\,}}((x,y),p_0))^{\alpha -2} \} \end{aligned}$$

as depicted in Fig. 6.

Fig. 6 The region \(V_0\)

We use the same technique as above to compute

$${{\,\textrm{Vol}\,}}(V_0)=\int _0^{3/2} 2\pi \rho \cdot h(\rho )d\rho . $$

We have \(h(\rho )= c_\alpha \rho ^{\alpha -2}\) when \(\rho \leqslant 1\) and \(h(\rho )= c_\alpha [(2-\rho )^{\alpha -2} - (\rho -1)^{\alpha -2}]\) when \(1 < \rho \leqslant 3/2\). We therefore get

$$\begin{aligned} {{\,\textrm{Vol}\,}}(V_0)&= 2\pi c_\alpha \left( \int _0^{1} \rho ^{\alpha -1} d\rho + \int _1^{3/2} \rho (2-\rho )^{\alpha -2} - \rho (\rho -1)^{\alpha -2} d\rho \right) . \end{aligned}$$

We again use integration by parts.

$$\begin{aligned} \int _1^{3/2} \rho (\rho -1)^{\alpha -2} d\rho&= \rho \frac{(\rho -1)^{\alpha -1}}{\alpha -1} \Big |_1^{3/2} - \int _1^{3/2} 1\frac{(\rho -1)^{\alpha -1}}{\alpha -1} d\rho \\&= \left( \rho \frac{(\rho -1)^{\alpha -1}}{\alpha -1} - \frac{(\rho -1)^{\alpha }}{\alpha (\alpha -1)} \right) \Big |_1^{3/2} \\&= \frac{3}{2} \frac{(1/2)^{\alpha -1}}{\alpha -1} - \frac{(1/2)^{\alpha }}{\alpha (\alpha -1)} \\&= \frac{3}{2^\alpha (\alpha -1)} - \frac{1}{2^{\alpha }\alpha (\alpha -1)} \\&= \frac{3\alpha - 1}{2^\alpha \alpha (\alpha -1)} \end{aligned}$$

and

$$\begin{aligned} \int _1^{3/2} \rho (2-\rho )^{\alpha -2} d\rho&= \rho \frac{-(2-\rho )^{\alpha -1}}{\alpha -1} \Big |_1^{3/2} + \int _1^{3/2} 1\frac{(2-\rho )^{\alpha -1}}{\alpha -1} d\rho \\&= \left( -\rho \frac{(2-\rho )^{\alpha -1}}{\alpha -1} - \frac{(2-\rho )^{\alpha }}{\alpha (\alpha -1)} \right) \Big |_1^{3/2} \\&= - \frac{3}{2} \frac{(1/2)^{\alpha -1}}{\alpha -1} - \frac{(1/2)^{\alpha }}{\alpha (\alpha -1)} + \frac{1}{\alpha -1} + \frac{1}{\alpha (\alpha -1)} \\&= - \frac{3}{2^\alpha (\alpha -1)} - \frac{1}{2^{\alpha }\alpha (\alpha -1)} + \frac{1}{\alpha -1} + \frac{1}{\alpha (\alpha -1)} \\&= \frac{-3\alpha -1 + 2^\alpha \alpha + 2^ \alpha }{2^\alpha \alpha (\alpha -1)}. \end{aligned}$$

This together with \(\int _0^{1} \rho ^{\alpha -1} d\rho = 1/\alpha \) gives us the following:

$$\begin{aligned} {{\,\textrm{Vol}\,}}(V_0)&= 2\pi c_\alpha \left( \frac{1}{\alpha } + \frac{-3\alpha -1 + 2^\alpha \alpha + 2^ \alpha }{2^\alpha \alpha (\alpha -1)} - \frac{3\alpha - 1}{2^\alpha \alpha (\alpha -1)} \right) \\&= \frac{\pi c_\alpha }{2^{\alpha -1} \alpha (\alpha -1)} \left( 2^\alpha (\alpha -1) -3\alpha - 1 + 2^\alpha (\alpha + 1) - 3\alpha + 1 \right) \\&= \frac{\pi c_\alpha }{2^{\alpha -1} \alpha (\alpha -1)} \left( 2^{\alpha } (\alpha -1+\alpha +1) - 6 \alpha \right) = \frac{\pi ( 2^{\alpha } \alpha - 3\alpha ) c_\alpha }{2^{\alpha -2} \alpha (\alpha -1)}. \end{aligned}$$

Again, with \(c_\alpha =\frac{\alpha (\alpha -1) 2^{\alpha -2}}{\pi (2^{\alpha -1} -\alpha )}\), we obtain

$$\begin{aligned} {{\,\textrm{Vol}\,}}(V_0)&= \frac{\pi \alpha ( 2^{\alpha } - 3 )}{2^{\alpha -2} \alpha (\alpha -1)} \cdot \dfrac{\alpha (\alpha -1) 2^{\alpha -2}}{\pi (2^{\alpha -1} -\alpha )} \\&= \alpha \frac{ 2^{\alpha } - 3 }{ 2^{\alpha -1} -\alpha }, \end{aligned}$$

concluding the proof. \(\square \)
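
Both volume computations can be verified numerically via the shell formula \({{\,\textrm{Vol}\,}}=\int 2\pi \rho \, h(\rho )\, d\rho \); a stand-alone Python sketch (ours; the choices \(\alpha =3\) and \(d^*=0.8\) are arbitrary):

```python
import math

def shell_volume(h, a, b, n=20000):
    # Midpoint rule for Vol = integral of 2*pi*rho*h(rho) over [a, b].
    w = (b - a) / n
    return sum(2 * math.pi * (a + (k + 0.5) * w) * h(a + (k + 0.5) * w)
               for k in range(n)) * w

alpha, d = 3.0, 0.8
ca = alpha * (alpha - 1) * 2 ** (alpha - 2) / (math.pi * (2 ** (alpha - 1) - alpha))

# Vol(V_i^*) should equal (d*)^alpha = 0.512:
print(shell_volume(lambda r: ca * ((d - r) ** (alpha - 2) - r ** (alpha - 2)),
                   0, d / 2), d ** alpha)

# Vol(V_0) = Phi(0) should equal alpha*(2^alpha - 3)/(2^(alpha-1) - alpha) = 15:
h0 = lambda r: ca * (r ** (alpha - 2) if r <= 1
                     else (2 - r) ** (alpha - 2) - (r - 1) ** (alpha - 2))
print(shell_volume(h0, 0, 1.5),
      alpha * (2 ** alpha - 3) / (2 ** (alpha - 1) - alpha))
```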

Using Lemma 2 we can prove a bound on the competitive ratio of nn and ci.

Theorem 3

Let \(F^*_\alpha := \alpha \frac{2^\alpha -3}{2^{\alpha -1}-\alpha }\). For any \(\alpha >2\), the competitive ratio of nn and ci in \({\mathbb R}^2\) is at most \(\min \{ F^*_\beta \mid 2<\beta \leqslant \alpha \}\). Hence, for \(\alpha \leqslant \alpha ^*\), where \(\alpha ^* = \mathop {\mathrm {arg\,min}}\limits _{\alpha >2} F^*_\alpha \approx 4.3\), the competitive ratio is at most \(\alpha \frac{2^\alpha -3}{2^{\alpha -1}-\alpha }\), and for \(\alpha >\alpha ^*\) it is at most 12.94.

Proof

Consider a sequence \(p_0,p_1,\ldots ,p_{n-1}\) of points in the plane. Let \(D_j\) be the disk centered at \(p_j\) in an optimal solution, after the last point \(p_{n-1}\) has been handled, and let \(\rho _j\) be its radius. Thus the cost of the optimal solution is \({{\,\mathrm{{{\,\textrm{OPT}\,}}}\,}}:= \sum _{j=0}^{n-1} \rho _j^{\alpha }\). To bound the cost of nn on the same sequence, we charge the cost of inserting \(p_i\), with \(0<i\leqslant n-1\), to a disk \(D_j\) such that \(j<i\) and \(p_i\in D_j\). Such a disk \(D_j\) exists: after \(p_i\)’s insertion, \(p_i\) is contained in a disk of an existing point \(p_j\), and so \(p_i\) is also contained in \(D_j\), the final disk of \(p_j\). (If there is more than one such disk, we take an arbitrary one.) Let \(S(D_j)\) be the set of points that charge disk \(D_j\). Note that \(\{p_1,\ldots ,p_{n-1}\} = \bigcup _{j=0}^{n-2} S(D_j)\). Hence, using Lemmas 1 and 2, for any \(2 < \beta \leqslant \alpha \), for nn (and similarly for ci) we get:

$$\begin{aligned} \begin{array}{llll} {{\,\textrm{cost}\,}}_\alpha (\text {nn}) &{} = &{} \sum _{i=0}^{n-2}\sum _{p_j\in S(D_i)} {{\,\textrm{cost}\,}}_{\alpha }(\text {nn},p_j) &{} \\ &{} = &{} \sum _{i=0}^{n-2} \rho _i^\alpha \sum _{p_j\in S(D_i)} \frac{{{\,\textrm{cost}\,}}_{\alpha }(\text {nn},p_j)}{\rho _i^\alpha } &{} \\ &{} \leqslant &{} \sum _{i=0}^{n-2} \rho _i^\alpha \sum _{p_j\in S(D_i)} \frac{{{\,\textrm{dist}\,}}(p_j,{{\,\textrm{nn}\,}}(p_j))^\alpha }{\rho _i^\alpha } &{} \\ &{} \leqslant &{} \sum _{i=0}^{n-2} \rho _i^\alpha \sum _{p_j\in S(D_i)} \frac{{{\,\textrm{dist}\,}}(p_j,{{\,\textrm{nn}\,}}(p_j))^\beta }{\rho _i^\beta } &{} \text{ because } {{\,\textrm{dist}\,}}(p_j,{{\,\textrm{nn}\,}}(p_j)) \leqslant \rho _i \\ &{} \leqslant &{} \sum _{i=0}^{n-2} \rho _i^\alpha \sum _{p_j\in S(D_i)} \frac{F_{\beta }(p_j; p_i, D_i)}{\rho _i^\beta } &{} \text{ by } \text{ the } \text{ definition } \text{ of } F_{\beta } \\ &{} \leqslant &{} \sum _{i=0}^{n-2} \rho _i^\alpha F_{\beta }^* &{} \text{ by } \text{ the } \text{ definition } \text{ of } F_{\beta }^* \\ &{} \leqslant &{} \beta \frac{2^\beta -3}{2^{\beta -1}-\beta } \sum _{i=0}^{n-2} \rho _i^\alpha &{} \text{ by } \text{ Lemma } \text{2 } \\ &{} = &{} \beta \frac{2^\beta -3}{2^{\beta -1}-\beta } {{\,\mathrm{{{\,\textrm{OPT}\,}}}\,}}{}. &{} \end{array} \end{aligned}$$

\(\square \)
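
The constants in Theorem 3 are easy to reproduce numerically; a stand-alone sketch (ours):

```python
def F_star(beta):
    # F*_beta = beta (2^beta - 3) / (2^(beta-1) - beta), defined for beta > 2
    return beta * (2 ** beta - 3) / (2 ** (beta - 1) - beta)

val, arg = min((F_star(2 + k / 10000), 2 + k / 10000) for k in range(1, 80000))
print(round(val, 2), round(arg, 2))  # ≈ 12.94, attained near alpha* ≈ 4.3
```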

The next theorem gives a lower bound on the competitive ratio of nn.

Theorem 4

For any \(\alpha >1\), nn has a competitive ratio of at least \(6 (1 + (\frac{\sqrt{6}-\sqrt{2}}{2})^\alpha ) \approx 6 (1+ 0.52^\alpha )\) in the plane. In particular, for \(\alpha =2\), we get a lower bound of 7.6 on the competitive ratio.

Proof

Let \(p_0\) be the source, placed at the origin. The following construction is depicted in Fig. 7. We place \(p_1,\ldots ,p_{18}\) in a disk of radius 1 around \(p_0\) as explained next, such that a possible solution is to place that single disk and pay 1. For simplicity, in the rest of the proof we use polar coordinates. Let \(\varepsilon >0\) be a small positive number. Let then \(p_1=(\varepsilon , 0)\), \(p_2=(\varepsilon , \pi /3)\),..., and \(p_6=(\varepsilon , 5\pi /3)\) be the next six points. nn places a disk of radius \(\varepsilon \) on \(p_0\). Let further \(p_7=(1,0)\), \(p_8=(1,\pi /3)\),..., and \(p_{12}=(1,5\pi /3)\) be the next six points. Here nn places six disks of radius \(1-\varepsilon \) centered around \(p_1,\ldots ,p_6\), paying \(6(1-\varepsilon )^\alpha \). Finally, let \(p_{13}=(1,\pi /6-\varepsilon )\), \(p_{14}=(1,3\pi /6-\varepsilon )\),..., and \(p_{18}=(1,11\pi /6-\varepsilon )\) be the last six points. nn is now forced to place six disks of radius almost equal to the side length of a regular 12-gon inscribed in a unit circle, that is, \(2\sin (\pi /12) -\delta \) for some \(\delta >0\) that tends to 0 as \(\varepsilon \) tends to 0.

Fig. 7 Lower bound on the competitive ratio of nn. The light gray disk (of radius 1) represents the optimal solution, on the boundary of which the points \(p_7,\ldots ,p_{18}\) are placed. The points \(p_1,\ldots ,p_6\) are placed on the boundary of a disk of radius \(\varepsilon \) (in dark gray). The algorithm nn is forced to place one first disk of radius \(\varepsilon \) around \(p_0\), then six disks of radius \(1-\varepsilon \) around \(p_1,\ldots ,p_6\), and finally six disks of radius roughly 0.5 around \(p_7,\ldots ,p_{12}\)

Thus, for any \(\varepsilon >0\), there is an instance on which a solution of cost 1 exists, whereas nn is forced to pay \(\varepsilon ^\alpha + 6 (1-\varepsilon )^\alpha + 6 (2\sin (\pi /12) -\delta )^\alpha \), where \(\delta > 0\) tends to 0 as \(\varepsilon \) tends to 0. Letting \(\varepsilon \) tend to 0, the cost of nn therefore tends to \(6 (1 + (2 \sin (\pi /12))^\alpha ) = 6 (1+ (2 \frac{\sqrt{6}-\sqrt{2}}{4})^\alpha ) = 6 (1 + (\frac{\sqrt{6}-\sqrt{2}}{2})^\alpha ) \approx 6 (1+ 0.52^\alpha ) \), whereas \({{\,\mathrm{{{\,\textrm{OPT}\,}}}\,}}\leqslant 1\).

Since the construction can be scaled by an arbitrary factor, there is no constant a such that \({{\,\textrm{cost}\,}}_\alpha (\text {nn}) \leqslant c \cdot {{\,\textrm{OPT}\,}}+ a\) for any \(c < 6 (1 + (\frac{\sqrt{6}-\sqrt{2}}{2})^\alpha )\). \(\square \)
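
The construction can also be verified by simulation; a stand-alone Python sketch (ours) that builds the 19 points in polar coordinates and runs nn with \(\alpha =2\):

```python
import math

def nn_cost(points, alpha):
    # Simulate nn in the plane; points[0] is the source p_0.
    pts, radii, total = [points[0]], [0.0], 0.0
    for p in points[1:]:
        if any(math.dist(q, p) <= r for q, r in zip(pts, radii)):
            pts.append(p); radii.append(0.0)
            continue                          # already covered: no cost
        i = min(range(len(pts)), key=lambda k: math.dist(pts[k], p))
        total += math.dist(pts[i], p) ** alpha - radii[i] ** alpha
        radii[i] = math.dist(pts[i], p)
        pts.append(p); radii.append(0.0)
    return total

eps = 1e-4
polar = lambda r, a: (r * math.cos(a), r * math.sin(a))
ring = lambda r, off: [polar(r, off + k * math.pi / 3) for k in range(6)]
pts = [(0.0, 0.0)] + ring(eps, 0) + ring(1, 0) + ring(1, math.pi / 6 - eps)
print(nn_cost(pts, 2))  # ≈ 7.6, while a single unit disk at p_0 costs 1
```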

3.2 Bounds on the Competitive Ratio of nn and 2-nn when \(\alpha =2\)

Above we proved upper bounds for nn and ci for \(\alpha >2\), and we gave a lower bound for nn for any \(\alpha >1\). We now study nn and 2-nn for the case \(\alpha =2\). Unfortunately, the arguments below do not apply to ci.

An upper bound on the competitive ratio of 2-nn for \(\alpha =2\). Let \(P:= p_0,p_1,\ldots ,p_{n-1}\) be the input instance. Recall that \({{\,\textrm{nn}\,}}(p_i)\) is the closest point to \(p_i\) among \(p_0,\ldots ,p_{i-1}\). Upon insertion of a point \(p_i\), if \(p_i\) is not covered by the current set of balls \(B(p_{i'},r_{i-1}(p_{i'}))\) with \(i'<i\), then 2-nn increases the range of \({{\,\textrm{nn}\,}}(p_i)\) to \(2\cdot {{\,\textrm{dist}\,}}(p_i,{{\,\textrm{nn}\,}}(p_i))\), and otherwise it does nothing. Suppose that upon the insertion of some point \(p_i\), we increase the range of \({{\,\textrm{nn}\,}}(p_i)\). We now define \(D^*_i\) as the disk centered at \(p_i\) (not at \({{\,\textrm{nn}\,}}(p_i)\)) and of radius \(d_i/2\), where \(d_i:={{\,\textrm{dist}\,}}(p_i,{{\,\textrm{nn}\,}}(p_i))\). We call \(D^*_i\) the charging disk of \(p_i\). Note that the charging disk is a tool in the proof; it is not a disk used by the algorithm. If 2-nn did nothing upon insertion of \(p_i\) because \(p_i\) was already covered by a disk, we define \(D_i^*:=\emptyset \).

Lemma 3

For every pair of charging disks \(D^*_i\) and \(D^*_j\) with \(j\ne i\), we have \(D^*_i \cap D^*_j = \emptyset \).

Proof

Without loss of generality we assume that \(i<j\). Suppose for a contradiction that \(D^*_i \cap D^*_j\ne \emptyset \). Let \(p_{i'}:={{\,\textrm{nn}\,}}(p_i)\) and \(p_{j'}:= {{\,\textrm{nn}\,}}(p_j)\), and let \(d_i:= {{\,\textrm{dist}\,}}(p_i,p_{i'})\) and \(d_j:= {{\,\textrm{dist}\,}}(p_j,p_{j'})\). Since \(i'<i<j\), we have \({{\,\textrm{dist}\,}}(p_j,p_{i'}) > 2d_i\), otherwise \(p_j\) lies inside the disk of \(p_{i'}\) when \(p_j\) is inserted and we would have \(D_j^* = \emptyset \). On the other hand, \(d_i/2+d_j/2\geqslant {{\,\textrm{dist}\,}}(p_i,p_j)\) because \(D^*_i \cap D^*_j\ne \emptyset \). Since \(d_j \leqslant {{\,\textrm{dist}\,}}(p_i,p_j)\), which is true because we assumed \(i<j\), this implies \(d_i\geqslant {{\,\textrm{dist}\,}}(p_i,p_j)\). But then \({{\,\textrm{dist}\,}}(p_j,p_{i'}) \leqslant d_i + {{\,\textrm{dist}\,}}(p_i,p_j) \leqslant 2d_i\), a contradiction. \(\square \)

Lemma 4

For any points \(p_i\) and \(p_j\) with \(i<j\), let \(D_j^{{{\,\textrm{OPT}\,}}}(p_i)\) be the disk centered at \(p_i\) after \(p_j\) is inserted in an optimal solution and let \(\rho _{j}(p_i)\) be its radius. Furthermore, let \(D_{j}^{1.5 {{\,\textrm{OPT}\,}}}(p_i)\) be the disk centered at \(p_i\) of radius \(1.5 \cdot \rho _{j}(p_i)\). Then, for every point \(p_k\), there is a point \(p_i\) such that the charging disk \(D^*_k \) is contained in \(D_{k}^{1.5 {{\,\textrm{OPT}\,}}}(p_i)\).

Proof

Let \(p_i\) be such that \(p_k\) is contained in \(D^{{{\,\textrm{OPT}\,}}}_k(p_i)\); such a point exists because the optimal solution covers \(p_k\) upon its insertion. Upon insertion of \(p_k\), we create the charging disk \(D^*_k\) of radius \(\frac{1}{2} {{\,\textrm{dist}\,}}(p_k,{{\,\textrm{nn}\,}}(p_k)) \leqslant \frac{1}{2} {{\,\textrm{dist}\,}}(p_i,p_k)\) centered at \(p_k\). Therefore, the point of \(D^*_k\) furthest from \(p_i\) is at distance at most \(\frac{3}{2} {{\,\textrm{dist}\,}}(p_i,p_k) \leqslant \frac{3}{2} \rho _k(p_i)\). Thus \(D^*_k \subset D_{k}^{1.5 {{\,\textrm{OPT}\,}}}(p_i)\). \(\square \)

Using these two lemmas, we can conclude the following.

Theorem 5

In \({\mathbb R}^2\) the strategy 2-nn is 36-competitive for \(\alpha =2\).

Proof

Recall that the charging disk \(D^*_i\) has radius \({{\,\textrm{dist}\,}}(p_i,{{\,\textrm{nn}\,}}(p_i))/2\). Thus the cost incurred by 2-nn upon the insertion of \(p_i\) is at most \((2\cdot {{\,\textrm{dist}\,}}(p_i,{{\,\textrm{nn}\,}}(p_i)))^2 = 16\cdot {{\,\textrm{radius}\,}}(D^*_i)^2\). By Lemma 3, the disks \(D^*_i\) are pairwise disjoint. Let \(\mathcal {D}_{{{\,\mathrm{{{\,\textrm{OPT}\,}}}\,}}}\) denote the set of disks in an optimal solution, and let \({{\,\textrm{OPT}\,}}\) be its cost. Then by Lemma 4 we have \(\sum _{i=1}^{n-1} {{\,\textrm{radius}\,}}(D^*_i)^2 \leqslant \sum _{D \in \mathcal {D}_{{{\,\mathrm{{{\,\textrm{OPT}\,}}}\,}}}} ((3/2) \cdot {{\,\textrm{radius}\,}}(D))^2 = \frac{9}{4} {{\,\textrm{OPT}\,}}\). Hence the total cost incurred by 2-nn is bounded by \(16 \cdot \sum _{i=1}^{n-1} {{\,\textrm{radius}\,}}(D^*_i)^2 \leqslant 36 {{\,\mathrm{{{\,\textrm{OPT}\,}}}\,}}\). \(\square \)

Upper bound on the competitive ratio of nn for \(\alpha =2\). We now prove an upper bound on the competitive ratio of nn using a strategy similar to the one for 2-nn. The proof again uses charging disks; the main difference is how the charging disks are defined.

Suppose that nn increases the range of \({{\,\textrm{nn}\,}}(p_i)\) upon the insertion of \(p_i\). Then the charging disk \(D^*_i\) is the disk of radius \(\gamma \cdot d_i\) that is centered on the midpoint of the segment \(p_i {{\,\textrm{nn}\,}}(p_i)\), where \(d_i:={{\,\textrm{dist}\,}}(p_i,{{\,\textrm{nn}\,}}(p_i))\) and \(\gamma \) is a constant to be determined later. If nn did nothing upon insertion of \(p_i\), we define \(D_i^*:=\emptyset \). We now show that the charging disks are disjoint if we pick \(\gamma \) suitably.

Lemma 5

Let \(\gamma < \frac{3-\sqrt{7}}{4}\). Then for every pair \(D^*_i,D^*_{j}\) of charging disks with \(i\ne j\), we have \(D^*_i \cap D^*_{j} = \emptyset \).

Fig. 8 The gray area depicts where \(p_i\) can be. Recall that \({{\,\textrm{dist}\,}}(p_j,p_{j'})\leqslant {{\,\textrm{dist}\,}}(p_i,p_{i'})\)

Proof

Let \(p_i\) and \(p_j\) be two points with charging disks \(D_i^*\) and \(D_j^*\). Let \(p_{i'}:={{\,\textrm{nn}\,}}(p_i)\) and \(p_{j'}:={{\,\textrm{nn}\,}}(p_j)\). Let also \(D_{i'}\), respectively \(D_{i}\), be the disk of radius \({{\,\textrm{dist}\,}}(p_i,p_{i'})\) centered on \(p_{i'}\), respectively \(p_{i}\). We define \(D_{j'}\) and \(D_{j}\) similarly. Let also \(m_i\) and \(m_j\) be the midpoints between \(p_i\) and \(p_{i'}\), and \(p_j\) and \(p_{j'}\) respectively. We assume without loss of generality that \({{\,\textrm{dist}\,}}(p_{i'},p_i)=1 \geqslant {{\,\textrm{dist}\,}}(p_{j'},p_j)\). We distinguish two cases.

  • First, if \(i'=j'\), then \(i>j\), since otherwise \(p_j\in D_{i'}\) when \(p_j\) is inserted and \(D_j^*=\emptyset \). Moreover, let H be the halfplane defined by the bisector of \(p_{i'}=p_{j'}\) and \(p_j\) with \(p_j\in H\). Then \(p_i \notin H\), since otherwise \({{\,\textrm{nn}\,}}(p_i)\) would be \(p_j\) rather than \(p_{i'}\). This implies that the angle between \(p_{i'}p_i\) and \(p_{i'}p_j\) is at least \(\pi /3\) (see Fig. 8). Let the two half-lines starting at \(p_{i'}\) that make an angle of \(\pi /6\) with \(p_{i'}p_j\) define a wedge w. If \(\gamma \) is such that \(D_j^*\) is contained in the wedge w (and, symmetrically, \(D_i^*\) is contained in the corresponding wedge around \(p_{i'}p_i\)), the disks \(D_i^*\) and \(D_j^*\) are disjoint. That is the case when the right triangle with hypotenuse 1/2 and angle \(\pi /6\) has its short side of length at least \(\gamma \). Using trigonometry (see Fig. 9), we get \(\gamma \leqslant \sin (\pi /6)/2=1/4\), which is always the case since \(\gamma < \frac{3-\sqrt{7}}{4}\).

  • We now deal with the case \(i'\ne j'\). Suppose for a contradiction that \(D_i^*\) and \(D_j^*\) intersect. Consider the interior of \(D_{i'}\cap D_{i}\). We claim that \(p_j\) cannot lie in that region. Indeed, suppose \(p_j\) is in the interior of \(D_{i'}\cap D_{i}\). If \(j \geqslant i\), then \(p_j\in D_{i'}\) and \(D_j^*=\emptyset \), which is a contradiction. If, on the other hand, \(j < i\), then when \(p_i\) is inserted, \(p_j\) is closer to \(p_i\) than \(p_{i'}\) is, since \(p_j\) is in \(D_{i}\); this contradicts \({{\,\textrm{nn}\,}}(p_i)=p_{i'}\). Therefore, from now on, we can assume that \(p_j \notin {{\,\textrm{Int}\,}}(D_{i'} \cap D_i)\). Note that this implies \({{\,\textrm{dist}\,}}(p_j,m_i)\geqslant 1/2\). Therefore, if \({{\,\textrm{dist}\,}}(p_j,m_j)< 1/2 - 2\gamma \), then we have that \({{\,\textrm{dist}\,}}(m_i,m_j) > 2\gamma \) and thus \(D_{i}^*\) and \(D_{j}^*\) can never intersect, because the radius of \(D_i^*\) is \(\gamma {{\,\textrm{dist}\,}}(p_{i'},p_i)=\gamma \) and the radius of \(D_j^*\) is \(\gamma {{\,\textrm{dist}\,}}(p_{j'},p_j)\leqslant \gamma \). Hence \({{\,\textrm{dist}\,}}(p_j,m_j)\geqslant 1/2 - 2\gamma \), which implies \({{\,\textrm{dist}\,}}(p_j,p_{j'})\geqslant 1 - 4 \gamma \). Moreover, we claim that \(p_j\) has to be in the disk \(D_{m_i}\) of radius \(2\gamma +1/2\) centered on \(m_i\) for the disks \(D_i^*\) and \(D_j^*\) to intersect. Suppose this is not the case. Then \({{\,\textrm{dist}\,}}(p_j,m_i)>2\gamma +1/2\), which implies that \({{\,\textrm{dist}\,}}(m_i,m_j)\geqslant {{\,\textrm{dist}\,}}(p_j,m_i) - {{\,\textrm{dist}\,}}(p_j,m_j) > 2\gamma \), and then the disks \(D_i^*\) and \(D_j^*\) are disjoint. Figure 10 shows the region \(A:= D_{m_i} {\setminus } {{\,\textrm{Int}\,}}(D_i \cap D_{i'})\) where \(p_j\) has to be in order to have the disks intersect. We are now interested in the maximum distance between \(p_j\) and either \(p_{i'}\) or \(p_i\), depending on which is closer, that is, in

    $$\begin{aligned} \max _{p_j\in A} \min \left( {{\,\textrm{dist}\,}}(p_{i'}, p_j), {{\,\textrm{dist}\,}}(p_i, p_j) \right) . \end{aligned}$$

    See Fig. 11. Using Apollonius’s theorem, we obtain \(x^2 + 1^2 = 2((2\gamma + 1/2)^2 + (1/2)^2)\), hence \(x= 2 \sqrt{\gamma (2\gamma +1)}\). Since \(\gamma < \frac{3-\sqrt{7}}{4}\), we have \(2 \sqrt{\gamma (2\gamma +1)} < 1 - 4\gamma \) (with equality at \(\gamma = \frac{3-\sqrt{7}}{4}\)), and thus \(x < {{\,\textrm{dist}\,}}(p_j,p_{j'})\). Consequently, \(p_j\) is closer to either \(p_{i'}\) or \(p_i\) than it is to \(p_{j'}\). We show that both options lead to a contradiction. First assume that \(p_j\) is in the right crescent, so \(p_j\) is closer to \(p_i\) than to \(p_{j'}\). If \(i<j\), then \({{\,\textrm{dist}\,}}(p_j,p_i) \leqslant x < {{\,\textrm{dist}\,}}(p_j,p_{j'})\), contradicting \({{\,\textrm{nn}\,}}(p_j)=p_{j'}\). If, on the other hand, \(j<i\), then at the time \(p_i\) is inserted, \(p_j\) is closer to \(p_i\) than \(p_{i'}\) is, contradicting \({{\,\textrm{nn}\,}}(p_i)=p_{i'}\). Next assume that \(p_j\) is in the left crescent, so \(p_j\) is closer to \(p_{i'}\) than to \(p_{j'}\). This implies \(j\leqslant i'\), since otherwise \(p_{i'}\) would be closer to \(p_j\) than \(p_{j'}\) when \(p_j\) is inserted, a contradiction. Note that if \(j = i'\), then there are only three points, but all the following arguments still hold. We hence have \(j'<j\leqslant i'<i\). If \(p_{i}\in D_{j'}\), then \(D_i^*=\emptyset \), a contradiction. Therefore \({{\,\textrm{dist}\,}}(p_i, p_{j'})> {{\,\textrm{dist}\,}}(p_j,p_{j'}) \geqslant 1-4\gamma \). We now bound \({{\,\textrm{dist}\,}}(p_i, m_j)\) using Apollonius’s theorem on the triangle \(\Delta p_ip_jp_{j'}\) and the median \(p_i m_j\):

    $$\begin{aligned} {{\,\textrm{dist}\,}}(p_i,p_j)^2 + {{\,\textrm{dist}\,}}(p_i&,p_{j'})^2 = 2({{\,\textrm{dist}\,}}(p_i,m_j)^2 + {{\,\textrm{dist}\,}}(p_j,m_j)^2) \\ \text{ hence } \qquad 2{{\,\textrm{dist}\,}}(p_i,m_j)^2&= {{\,\textrm{dist}\,}}(p_i,p_j)^2 + {{\,\textrm{dist}\,}}(p_i,p_{j'})^2 - 2 {{\,\textrm{dist}\,}}(p_j,m_j)^2 \\&> 1 + (1-4\gamma )^2 - 2 \left( \frac{1}{2} \right) ^2 \\&= 16\gamma ^2 - 8\gamma + \frac{3}{2} \end{aligned}$$

    whose infimum over \(\gamma \in \left( 0,\frac{3-\sqrt{7}}{4} \right) \) is attained in the limit \(\gamma \rightarrow \frac{3-\sqrt{7}}{4}\), since the expression is decreasing on this interval. We therefore have \(2{{\,\textrm{dist}\,}}(p_i,m_j)^2 > \frac{23-8\sqrt{7}}{2}\), and so \({{\,\textrm{dist}\,}}(p_i,m_j) > \frac{4-\sqrt{7}}{2}\). This implies \({{\,\textrm{dist}\,}}(m_i,m_j) \geqslant {{\,\textrm{dist}\,}}(p_i,m_j) - {{\,\textrm{dist}\,}}(p_i,m_i) > \frac{4-\sqrt{7}}{2} - \frac{1}{2} = \frac{3-\sqrt{7}}{2} > 2\gamma \). Thus \(D_i^* \cap D_j^* =\emptyset \), which is a contradiction.

This concludes the proof of the lemma. \(\square \)
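
The closed-form computations in the proof above are easy to check symbolically. The following sympy sketch (our addition, not part of the original argument) verifies the Apollonius identity for x, the equality \(2\sqrt{\gamma (2\gamma +1)} = 1-4\gamma \) at the threshold \(\gamma = \frac{3-\sqrt{7}}{4}\), and the value \(\frac{23-8\sqrt{7}}{2}\) of the infimum.

```python
import sympy as sp

g = sp.symbols('gamma', positive=True)
g0 = (3 - sp.sqrt(7)) / 4                                    # threshold for gamma

# Apollonius on Fig. 11: x^2 + 1 = 2((2g + 1/2)^2 + (1/2)^2), so x^2 = 4g(2g+1).
x2 = 2 * ((2 * g + sp.Rational(1, 2))**2 + sp.Rational(1, 4)) - 1
print(sp.simplify(x2 - 4 * g * (2 * g + 1)))                 # 0

# At the threshold, 2*sqrt(g(2g+1)) equals 1 - 4g; for smaller g the
# left-hand side is smaller, giving the strict inequality used above.
print(sp.simplify(4 * g0 * (2 * g0 + 1) - (1 - 4 * g0)**2))  # 0

# Value of 16g^2 - 8g + 3/2 at the threshold: (23 - 8*sqrt(7))/2.
expr = 16 * g**2 - 8 * g + sp.Rational(3, 2)
print(sp.simplify(expr.subs(g, g0) - (23 - 8 * sp.sqrt(7)) / 2))  # 0
```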

Fig. 9: If \(\gamma \leqslant x = \sin (\pi /6)/2=1/4\), the disk centered at the midpoint is always contained in the wedge of angle \(\pi /3\)

Fig. 10: The region \(A:= D_{m_i} {\setminus } {{\,\textrm{Int}\,}}(D_i \cap D_{i'})\), shown in gray, is where \(p_j\) must lie for the disks to intersect

Fig. 11: The point q of the triangle is defined as the intersection of \(D_{m_i}\) and \(D_{i'}\); we want to compute x

We also need the following lemma, whose proof is similar to that of Lemma 4.

Lemma 6

For any points \(p_i\) and \(p_j\) with \(i<j\), let \(D_j^{{{\,\textrm{OPT}\,}}}(p_i)\) be the disk centered at \(p_i\) of radius \(\rho _{j}(p_i)\) after \(p_j\) is inserted in an optimal solution and let \(D_{j}^{(1.5+\gamma ) {{\,\textrm{OPT}\,}}}(p_i)\) be the disk centered at \(p_i\) of radius \((1.5+\gamma ) \rho _{j}(p_i)\). Then, for every point \(p_k\), there is a point \(p_i\) such that the disk \(D^*_k \) is contained in the disk \(D_{k}^{(1.5+\gamma ) {{\,\textrm{OPT}\,}}}(p_i)\).

Putting everything together we obtain the following theorem.

Theorem 6

In \({\mathbb R}^2\) the strategy nn is 322-competitive for \(\alpha =2\).

Proof

Recall that \({{\,\textrm{radius}\,}}(D^*_i) = \gamma \cdot {{\,\textrm{dist}\,}}(p_i,{{\,\textrm{nn}\,}}(p_i))\). Thus the cost incurred by nn upon the insertion of \(p_i\) is at most \({{\,\textrm{dist}\,}}(p_i,{{\,\textrm{nn}\,}}(p_i))^2 = ((1/\gamma ) \cdot {{\,\textrm{radius}\,}}(D^*_i))^2\). By Lemma 5, the disks \(D^*_i\) are pairwise disjoint, and by Lemma 6 each \(D^*_i\) is contained in one of the disks of an optimal solution scaled by a factor \(1.5+\gamma \). If \(\mathcal {D}_{{{\,\textrm{OPT}\,}}}\) denotes the set of disks used in an optimal solution, a volume argument therefore gives \(\sum _{i=1}^{n-1} {{\,\textrm{radius}\,}}(D^*_i)^2 \leqslant \sum _{D \in \mathcal {D}_{{{\,\textrm{OPT}\,}}}} ((1.5+\gamma ) \cdot {{\,\textrm{radius}\,}}(D))^2 = (1.5+\gamma )^2 \cdot {{\,\textrm{OPT}\,}}\), where \({{\,\textrm{OPT}\,}}\) is the cost of an optimal solution (recall that \(\alpha =2\)). Hence the total cost incurred by nn is at most \(\frac{1}{\gamma ^2} \sum _{i=1}^{n-1} {{\,\textrm{radius}\,}}(D^*_i)^2 \leqslant \frac{(1.5+\gamma )^2}{\gamma ^2} \cdot {{\,\textrm{OPT}\,}}\). This factor is decreasing in \(\gamma \) and tends to \(\frac{4^2}{(3-\sqrt{7})^2} \cdot \left( 1.5+\frac{3-\sqrt{7}}{4} \right) ^2 = 163 + 60 \sqrt{7} < 322\) as \(\gamma \rightarrow \frac{3-\sqrt{7}}{4}\). Hence, by choosing \(\gamma \) sufficiently close to \(\frac{3-\sqrt{7}}{4}\), the cost incurred by nn is at most \(322 \cdot {{\,\textrm{OPT}\,}}\). \(\square \)
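
As a quick numerical sanity check of this constant (our addition, not part of the proof), one can evaluate the factor at the threshold value of \(\gamma \):

```python
import math

g = (3 - math.sqrt(7)) / 4           # threshold value of gamma
print((1.5 + g) ** 2 / g ** 2)       # 321.748..., the limit of the factor
print(163 + 60 * math.sqrt(7))       # the same value in closed form, < 322
```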

4 Online Range-Assignment in General Metric Spaces

In this section we consider the problem in general metric spaces. We also consider the offline variant of the problem in the next section; here we focus on the online variant, for which we give an \(O(\log n)\)-competitive algorithm. The key insight behind our algorithms is to formulate the problem as a set-cover problem and to apply linear-programming techniques. As we will see, directly applying the online set cover algorithm of Alon et al.  [1] yields a competitive ratio much worse than \(O(\log n)\), so we need to exploit structural properties of the particular set cover instances arising from our problem.

4.1 A Set Cover Formulation and its LP

Let \(\mathcal {R}\) be the set of distances between pairs of points. Observe that we can restrict ourselves without loss of generality to only using ranges from \(\mathcal {R}\). This allows us to formulate the problem in terms of set cover: The elements are the points \(p_0, p_1, \ldots , p_{n-1}\), with \(p_0\) being the source point, and for each \(0 \leqslant i \leqslant n-2\) and \(r \in \mathcal {R}\) there is a set \(S_{i,r}:= \{p_j: j > i \text{ and } {{\,\textrm{dist}\,}}(p_i, p_j) \leqslant r\}\) with cost \(r^\alpha \). (Note that \(S_{i,r}\) is the set of points arriving after \(p_i\) that are within range r of \(p_i\)). In the following, we abuse notation and also write \(j \in S_{i,r}\) for points \(p_j \in S_{i,r}\). We also say that \(S_{i,r}\) is centered at \(p_i\).

Observe that a feasible range assignment corresponds to a feasible set cover. A set cover is minimally feasible if removing any set from it causes an element to be uncovered. Since a minimally feasible set cover picks at most one set \(S_{i,r}\) for each i, it corresponds to a feasible range assignment. (Note that applying the online set cover algorithm of Alon et al.  [1] only gives a competitive ratio of \(O(\log ^2 n/ \log \log n)\) as our set cover instance has \(n-1\) elements and \(|\mathcal {R}|(n-1)\) sets.)
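
To make the reduction concrete, the following Python sketch builds this set-cover instance from a distance matrix; the input format (a symmetric matrix dist with dist[i][j] equal to \({{\,\textrm{dist}\,}}(p_i,p_j)\)) and the function name are our own illustrative choices.

```python
def set_cover_instance(dist, alpha):
    """Build the set-cover instance described above (a sketch).
    Elements are the points 1, ..., n-1; the set S_{i,r} contains the
    later-arriving points within range r of p_i and has cost r**alpha."""
    n = len(dist)
    R = sorted({dist[a][b] for a in range(n) for b in range(a)})
    sets = {(i, r): {j for j in range(i + 1, n) if dist[i][j] <= r}
            for i in range(n - 1) for r in R}
    cost = {(i, r): r ** alpha for (i, r) in sets}
    return sets, cost
```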

We can now formulate our problem as an integer linear program. To this end we introduce, for each range \(r\in \mathcal {R}\) and each point \(p_i\), a variable \(x_{i,r}\), where \(x_{i,r}=1\) indicates that we choose the set \(S_{i,r}\) (or, in other words, that we assign range r to \(p_i\)) and \(x_{i,r}=0\) indicates that we do not choose \(S_{i,r}\). Allowing the \(x_{i,r}\) to take fractional values gives us the following relaxed LP.

$$\begin{aligned} \boxed { \begin{aligned} \text{ Minimize } \quad&\sum _{0 \leqslant i \leqslant n-2} \sum _{r \in \mathcal {R}} x_{i,r}\cdot r^\alpha \\ \text{ Subject } \text{ to }\quad&\sum _{i, r : j \in S_{i,r}} x_{i,r} \geqslant 1&\quad \text{ for } \text{ all } 1 \leqslant j \leqslant n-1\\&x_{i,r} \geqslant 0&\quad \text{ for } \text{ all } (i,r) \text{ with } 0 \leqslant i \leqslant n -2 \text{ and } r \in \mathcal {R}\end{aligned} } \end{aligned}$$
(1)

The dual LP corresponding to the LP above is as follows.

$$\begin{aligned} \boxed { \begin{aligned} \text{ Maximize } \quad&\sum _{1 \leqslant j \leqslant n-1} y_j \\ \text{ Subject } \text{ to }\quad&\sum _{j \in S_{i,r}} y_j \leqslant r^\alpha&\quad \text{ for } \text{ all } (i,r) \text{ with } 0 \leqslant i \leqslant n-2 \text{ and } r \in \mathcal {R}\\&y_j \geqslant 0&\quad \text{ for } \text{ all } 1 \leqslant j \leqslant n - 1 \end{aligned} } \end{aligned}$$
(2)

We say that the set \(S_{i,r}\) is tight if the corresponding dual constraint is tight, that is, if \(\sum _{j \in S_{i,r}} y_j = r^\alpha \).
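
Neither LP is needed algorithmically in this section, but as a concrete illustration, the relaxed LP (1) can be handed to an off-the-shelf solver. The sketch below uses scipy.optimize.linprog together with the set_cover_instance helper from above; negating the coverage constraints converts them to linprog’s \(A_{ub}\, x \leqslant b_{ub}\) convention.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp1(dist, alpha):
    """Solve the relaxed LP (1) (a sketch, reusing set_cover_instance)."""
    sets, cost = set_cover_instance(dist, alpha)
    keys = sorted(sets)
    c = np.array([cost[k] for k in keys])
    n = len(dist)
    # Coverage constraint for each element j: the sum of x over the sets
    # containing j is at least 1, i.e. -sum(...) <= -1.
    A_ub = np.zeros((n - 1, len(keys)))
    for col, key in enumerate(keys):
        for j in sets[key]:
            A_ub[j - 1, col] = -1.0
    res = linprog(c, A_ub=A_ub, b_ub=-np.ones(n - 1), bounds=(0, None))
    return res.fun, dict(zip(keys, res.x))
```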

4.2 The Online Algorithm and its Analysis

Recall that in the online version, we are given the source \(p_0\) and then the points \(p_1, \ldots , p_{n-1}\) arrive one by one. When a point \(p_i\) arrives, its distances to the source and to the previously arrived points are revealed.

The algorithm. Let \(\gamma > 1\) be a constant that we will set later. The basic idea of the algorithm is that when a point \(p_i\) arrives, we raise its associated dual variable \(y_i\) until some set \(S_{j,r}\) containing \(p_i\) is tight, and we then update the range of the point \(p_j\) to \(r_i(p_j):= \gamma \max \{r: \sum _{k \in S_{j,r}: k \leqslant i} y_k = r^\alpha \}\). In other words, the range of \(p_j\) becomes \(\gamma \) times the largest radius of the tight sets centered at \(p_j\).

Here is a more precise description of the algorithm. When \(p_i\) arrives, we initialize its dual variable \(y_i:= 0\). If \(p_i \in S_{j,r}\) for some \(j < i\) and range r with \(\sum _{k \in S_{j,r}: k \leqslant i} y_k = r^\alpha \), then we set \(r_i(p_j):= \gamma \max \{r: \sum _{k \in S_{j,r}: k \leqslant i} y_k = r^\alpha \}\) for one such j. (It can happen that some \(S_{j,r}\) is tight but \(r_{i-1}(p_j)\) is still smaller than r, because when multiple sets become tight at the same time, we only increase the range of one point.) Otherwise, we increase \(y_i\) until for some \(j < i\) and range r we have \(\sum _{k \in S_{j,r}: k \leqslant i} y_k = r^\alpha \); we then set \(p_j\)’s new range to \(r_{i}(p_j):= \gamma r\) for one such j. In both cases we only set \(p_j\)’s range; the other ranges remain unchanged. In particular, if multiple sets centered at different points become tight simultaneously, we only update the range of one of them.
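
The following Python sketch makes the update rule concrete. It brute-forces over all candidate sets at every arrival (a real implementation would maintain the loads \(\sum _k y_k\) incrementally), and the input format is the same illustrative distance matrix as before: although the full matrix is passed in, iteration i only reads the distances revealed up to that point.

```python
def online_range_assignment(dist, alpha=2.0, gamma=4.0):
    """Sketch of the primal-dual online algorithm described above.
    Returns the final ranges r and the dual variables y."""
    n = len(dist)
    y = [0.0] * n                          # dual variables (y[0] unused)
    r = [0.0] * n                          # ranges, never decreased
    for i in range(1, n):
        # Candidate ranges are the pairwise distances revealed so far.
        radii = {dist[a][b] for a in range(i + 1) for b in range(a)}
        # Slack of every set S_{j,rho} that contains the new point p_i.
        cands = []
        for j in range(i):
            for rho in radii:
                if dist[j][i] <= rho:
                    load = sum(y[k] for k in range(j + 1, i + 1)
                               if dist[j][k] <= rho)
                    cands.append((rho ** alpha - load, j, rho))
        # Raise y_i until one candidate set is tight (y_i = 0 if some
        # set containing p_i is tight already).
        slack, j, _ = min(cands)
        y[i] = max(0.0, slack)
        # New range: gamma times the largest tight radius centered at p_j.
        r[j] = max(r[j], gamma * max(rho for s, c, rho in cands
                                     if c == j and s <= y[i] + 1e-12))
    return r, y
```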

Analysis. We begin our analysis of the algorithm by showing the feasibility of the constructed dual solution y and the corresponding range assignment. For each point \(p_i\), the algorithm stops raising \(y_i\) once some set \(S_{j,r}\) containing \(p_i\) is tight and then updates \(p_j\)’s radius to be \(\gamma r > r\). This guarantees that no dual constraint is violated and that \(p_i\) is covered by \(p_j\).

Next we analyze the cost of this algorithm. We use the shorthand \(r_j\) for the final range \(r_{n-1}(p_j)\) of the point \(p_j\). First, we argue that it suffices to bound the cost of the points whose ranges are large enough. Let \(H = \{0 \leqslant i \leqslant n-2: r_i \geqslant \max _{0 \leqslant j \leqslant n-2} r_j/n\}\). Then, the cost of the algorithm is

$$\begin{aligned} \sum _i r_i^\alpha&= \sum _{i \in H} r_i^\alpha + \sum _{i \notin H} r_i^\alpha \\&\leqslant \sum _{i \in H} r_i^\alpha + n (\max _j r_j/n)^\alpha \\&\leqslant (1 + 1/n^{\alpha -1}) \sum _{i \in H} r_i^\alpha \\&\leqslant 2\sum _{i \in H} r_i^\alpha , \end{aligned}$$

where the second-to-last inequality holds because \(\sum _{i \in H} r_i^\alpha \geqslant \max _j r_j^\alpha \) (the index attaining the maximum belongs to H), and the last because \(\alpha > 1\). In the remainder of this section we will show that

$$\begin{aligned} \sum _{i \in H} r_i^\alpha \leqslant O(\log n) \cdot \sum _{1 \leqslant j \leqslant n-1} y_j. \end{aligned}$$
(3)

The theorem then follows from the Weak Duality Theorem of Linear Programming, which states that the value of any feasible solution to the primal (minimization) problem is always greater than or equal to the value of any feasible solution to its associated dual problem.

For every \(0 \leqslant i \leqslant n-2\) with \(r_i > 0\) (which holds in particular for every \(i \in H\)), our algorithm set the final range \(r_i\) of point \(p_i\) to \(r_i = \gamma r\) for some \(r \in \mathcal {R}\) with \(\sum _{k \in S_{i,r}} y_k = r^\alpha \). Thus, we get

$$\begin{aligned} \left( \frac{r_i}{\gamma }\right) ^\alpha = \sum _{j \in S_{i, r_i/\gamma }} y_j, \end{aligned}$$

and so

$$\begin{aligned} \sum _{i \in H} r_i^\alpha = \sum _{i \in H} \gamma ^\alpha \left( \sum _{j \in S_{i, r_i/\gamma }} y_j\right) = \gamma ^\alpha \sum _{1 \leqslant j \leqslant n-1} y_j \cdot \left| \{i \in H: j \in S_{i, r_i/\gamma }\} \right| , \end{aligned}$$

where the last equality follows by interchanging the sums. Thus, to prove Inequality (3) it suffices to prove the following lemma.

Lemma 7

For every \(1 \leqslant j \leqslant n-1\) and any fixed \(\gamma > 3\), we have \(|\{i \in H: j \in S_{i, r_i/\gamma }\}| = O(\log n)\).

Proof

Define \(H_j = \{i \in H: j \in S_{i, r_i/\gamma }\}\). We will show that for every \(i, i' \in H_j\), either \(r_i > \frac{\gamma - 1}{2} r_{i'}\) or \(r_{i'} > \frac{\gamma - 1}{2} r_i\). This implies that the t-th smallest range (among the points in \(H_j\)) is at least \(((\gamma -1)/2)^{t-1}\) times the smallest such range. Since \(\frac{\max _{i \in H_j} r_i}{\min _{i \in H_j} r_i} \leqslant n\) by the definition of H, and since \((\gamma -1)/2 > 1\), this means that \(|H_j| = O(\log _{(\gamma -1)/2} n) = O(\log n)\).

Let \(i, i' \in H_j\). Let \(p_k\) be the last-arriving point that causes our algorithm to update \(r_i\), and let \(p_{k'}\) be the last-arriving point that causes our algorithm to update \(r_{i'}\). Since the arrival of any point causes at most one point’s range to be updated, we have \(p_k \ne p_{k'}\). Assume first that \(p_k\) arrived before \(p_{k'}\). By construction of \(r_{i'}\), we have \({{\,\textrm{dist}\,}}(p_{k'},p_{i'}) \leqslant r_{i'}/\gamma \). Moreover, since \(i, i' \in H_j\), we have \({{\,\textrm{dist}\,}}(p_i, p_j) \leqslant r_i/\gamma \) and \({{\,\textrm{dist}\,}}(p_{i'}, p_j) \leqslant r_{i'}/\gamma \). Therefore, by the triangle inequality,

$$\begin{aligned} {{\,\textrm{dist}\,}}(p_i, p_{k'}) \leqslant {{\,\textrm{dist}\,}}(p_i, p_j) + {{\,\textrm{dist}\,}}(p_j,p_{i'}) + {{\,\textrm{dist}\,}}(p_{i'}, p_{k'}) \leqslant 2\frac{r_{i'}}{\gamma } + \frac{r_i}{\gamma }. \end{aligned}$$

Since \(p_{k}\) arrived before \(p_{k'}\) and \(p_{k'}\) caused our algorithm to update \(r_{i'}\), the point \(p_{k'}\) must have been uncovered when it arrived, and so \({{\,\textrm{dist}\,}}(p_i, p_{k'}) > r_i\). Therefore, we get

$$\begin{aligned} r_i < {{\,\textrm{dist}\,}}(p_i,p_{k'}) \leqslant 2\frac{r_{i'}}{\gamma } + \frac{r_i}{\gamma } \end{aligned}$$

and so \(r_{i'} > \frac{\gamma - 1}{2} r_i\) as desired. In the case that \(p_{k'}\) arrived before \(p_k\), a similar argument yields \(r_i > \frac{\gamma - 1}{2} r_{i'}\). \(\square \)

By setting \(\gamma = 4\) we obtain the following theorem.

Theorem 7

For any distance-power gradient \(\alpha >1\), there is an \(O(4^\alpha \log n)\)-competitive algorithm for the online range-assignment problem in general metric spaces.

5 An Offline Algorithm for General Metric Spaces

In the offline setting, we are given the entire sequence of points \(p_0, \ldots , p_{n-1}\) in advance, and the goal is to assign ranges \(r_0, \ldots , r_{n-1}\) to the points so that for every \(1 \leqslant i \leqslant n-1\), there exists \(j < i\) such that \({{\,\textrm{dist}\,}}(p_i, p_j) \leqslant r_j\). We can formulate the problem in this way because we know all points beforehand and are interested only in the cost of the final assignment: we may immediately assign each point its final range, and we need not specify a separate range for every point at each time step. The stated condition on the assignment ensures that after inserting each \(p_j\), we have a broadcast tree on \(p_0,\ldots ,p_j\). Thus we require the algorithm to be what Boyar et al.  [3] call an incremental algorithm: an algorithm that maintains a feasible solution at any time (even though, unlike an online algorithm, it may know the future). We emphasise that this is different from the static broadcast range-assignment problem studied previously. To avoid confusion with the usual offline broadcast range-assignment problem, we call this the Priority Broadcast Range Assignment problem. Below we give a \(5^\alpha \)-approximation algorithm for this problem, based on the LP formulated in Sect. 4.1.

The basic idea of the approximation algorithm is as follows. We start with a maximally feasible dual solution y, i.e. one in which increasing any \(y_j\) would violate some dual constraint. Since y is maximally feasible, for every j there exists a tight set \(S_{i,r}\) containing \(p_j\). Thus, the tight sets form a feasible set cover. Let \(\mathcal {S}\) be a subset of the tight sets that is a minimally feasible set cover. As observed above, for every i there is at most one set \(S_{i,r} \in \mathcal {S}\), so \(\mathcal {S}\) corresponds to a feasible range assignment. Let \(r_i\) be the radius assigned to \(p_i\).
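
This first phase is straightforward to implement. The sketch below (with the same illustrative distance-matrix input as in Sect. 4) raises the dual variables in arrival order and then prunes the tight sets to a minimally feasible cover.

```python
def offline_cover(dist, alpha):
    """First phase (a sketch): a maximally feasible dual solution y and
    a minimally feasible cover of tight sets, each set given as a pair
    (center i, radius rho)."""
    n = len(dist)
    R = sorted({dist[a][b] for a in range(n) for b in range(a)})

    def members(i, rho):
        return {k for k in range(i + 1, n) if dist[i][k] <= rho}

    def load(i, rho):
        return sum(y[k] for k in members(i, rho))

    y = [0.0] * n
    for j in range(1, n):
        # Raise y_j by the smallest slack over the sets containing p_j.
        y[j] = max(0.0, min(rho ** alpha - load(i, rho)
                            for i in range(j) for rho in R
                            if dist[i][j] <= rho))
    # The tight sets form a feasible cover; prune it until it is
    # minimally feasible (removing any further set uncovers a point).
    cover = [(i, rho) for i in range(n - 1) for rho in R
             if load(i, rho) >= rho ** alpha - 1e-9]
    universe = set(range(1, n))
    for s in list(cover):
        if universe <= set().union(*(members(*t) for t in cover if t != s)):
            cover.remove(s)
    return y, cover
```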

We now modify the range assignment r to obtain a range assignment \(r'\) such that

$$\begin{aligned} \sum _{0 \leqslant i \leqslant n-2} {r'}_i^\alpha \leqslant 5^\alpha \sum _{1 \leqslant j \leqslant n-1} y_j. \end{aligned}$$

Since y is a feasible dual solution, weak LP duality implies that \(r'\) is a \(5^\alpha \)-approximation.

Say that i conflicts with j if there exists a point \(p_k \in S_{j, r_j} \cap S_{i, r_i}\) with \(y_k > 0\). Order the indices of the sets in \(\mathcal {S}\) in decreasing order of \(r_i\), breaking ties arbitrarily, and write \(i \prec j\) if i comes before j in this ordering. We use the following algorithm to construct \(r'\).

Algorithm 1: Obtaining an approximate solution from \(\mathcal {S}\)
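
Algorithm 1 appears here only as a figure, so the following sketch is our reconstruction of it, consistent with the proofs of Lemmas 8 and 9 below: indices are processed in the order \(\prec \), a conflict-free set I is extracted greedily, every other index is clustered with the first member of I it conflicts with, and the earliest-arriving center of each cluster \(C_i\) receives the range \(5 r_i\).

```python
def algorithm1(dist, r, y):
    """Construct r' from the minimally feasible cover (a sketch).
    r[i] is the radius of the set centered at p_i in the cover
    (r[i] = 0 if the cover has no set centered at p_i)."""
    n = len(dist)
    S = {i: {k for k in range(i + 1, n) if dist[i][k] <= r[i]}
         for i in range(n) if r[i] > 0}

    def conflicts(a, b):
        return any(y[k] > 1e-12 for k in S[a] & S[b])

    I, cluster = [], {}
    for j in sorted(S, key=lambda i: -r[i]):   # the order given by "precedes"
        rep = next((i for i in I if conflicts(i, j)), None)
        if rep is None:
            I.append(j)
            rep = j
        cluster.setdefault(rep, []).append(j)  # j joins the cluster C_rep
    r_new = [0.0] * n
    for i in I:
        i_prime = min(cluster[i])              # earliest point in C_i
        r_new[i_prime] = 5 * r[i]
    return r_new
```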

Analysis. We begin by proving that \(r'\) is a feasible range assignment.

Lemma 8

For each \(j > 0\), there exists \(i < j\) such that \({{\,\textrm{dist}\,}}(p_j, p_i) \leqslant r'_i\).

Proof

Note that the sets \(\{C_i\}_{i \in I}\) partition the indices of the sets in \(\mathcal {S}\). Consider some set \(C_i\) and let \(p_{i'}\) be the earliest point in \(C_i\). It suffices to prove that \(S_{i', 5r_i} \supseteq \cup _{j \in C_i} S_{j,r_j}\). First observe that since \(p_{i'}\) is the earliest point in \(C_i\), it can potentially cover all the points covered by any other point \(p_j\) with \(j \in C_i\), i.e. \(S_{i', r} \supseteq \cup _{j \in C_i} S_{j,r_j}\) for large enough r. Next, we show that \(r = 5r_i\) suffices. Since i conflicts with every \(j \in C_i\), we have \({{\,\textrm{dist}\,}}(p_i,p_j) \leqslant r_i + r_j \leqslant 2r_i\), where we use that \(r_i = \max _{j \in C_i} r_j\). Hence every point in \(\cup _{j \in C_i} S_{j,r_j}\) is within distance \(2r_i + r_j \leqslant 3r_i\) of \(p_i\), and \({{\,\textrm{dist}\,}}(p_{i'}, p_i) \leqslant 2r_i\). Thus we get \(S_{i', 5r_i} \supseteq \cup _{j \in C_i} S_{j,r_j}\), as desired. \(\square \)

Lemma 9

\(\sum _i r'^\alpha _i \leqslant 5^\alpha \sum _{j > 0} y_j\).

Proof

We have \(\sum _i r'^\alpha _i = \sum _{i \in I} 5^\alpha r^\alpha _i\). Since the sets in \(\mathcal {S}\) are tight, we have

$$\begin{aligned} \sum _{i \in I} r^\alpha _i = \sum _{i \in I} \sum _{j \in S_{i,r_i}} y_j = \sum _j |\{i \in I : j \in S_{i,r_i}\}| y_j \leqslant \sum _j y_j \end{aligned}$$

where the last inequality follows from the fact that I is conflict-free: for every j with \(y_j > 0\), at most one \(i \in I\) satisfies \(j \in S_{i,r_i}\). \(\square \)
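
As a small end-to-end sanity check (our addition, wiring together the offline_cover and algorithm1 sketches from above), one can verify the conclusions of Lemmas 8 and 9 on random Euclidean instances:

```python
import math
import random

def check_offline(seed=0, n=8, alpha=2.0):
    random.seed(seed)
    pts = [(random.random(), random.random()) for _ in range(n)]
    dist = [[math.dist(p, q) for q in pts] for p in pts]
    y, cover = offline_cover(dist, alpha)
    r = [0.0] * n
    for i, rho in cover:
        r[i] = rho
    r_new = algorithm1(dist, r, y)
    # Lemma 8: every p_j with j >= 1 is covered by an earlier point.
    assert all(any(dist[i][j] <= r_new[i] + 1e-9 for i in range(j))
               for j in range(1, n))
    # Lemma 9: the cost is at most 5^alpha times the dual objective.
    assert sum(x ** alpha for x in r_new) <= 5 ** alpha * sum(y) + 1e-6
```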

Thus, we get the following theorem.

Theorem 8

There is a \(5^\alpha \)-approximation algorithm for the Priority Broadcast Range Assignment problem in general metric spaces for any \(\alpha > 1\).

6 Concluding Remarks

We introduced the online version of the broadcast range-assignment problem, and we analyzed the competitive ratio of two natural algorithms, nn and ci, in \({\mathbb R}^1\) and \({\mathbb R}^2\) as a function of the distance-power gradient \(\alpha \). While nn is O(1)-competitive in \({\mathbb R}^2\) for \(\alpha =2\), the best competitive ratio we can prove is quite large, namely 322. The variant 2-nn has a better ratio, namely 36, but this is still large. We conjecture that the actual competitive ratio of nn is much closer to the lower bound we proved, which is 7.61. We also conjecture that ci has a constant (and small) competitive ratio in \({\mathbb R}^2\). Another approach to obtaining better competitive ratios might be to develop more sophisticated algorithms. For the general (metric-space) version of the problem, the main question is whether an algorithm with a constant competitive ratio is possible.

While the requirement that we cannot decrease the range of any point in the online setting is perhaps not necessary in practice, our algorithms have the additional benefit that they modify the range of at most one point per insertion. Our setting can thus also be seen as a first step towards studying a more general version, where we are allowed to modify (increase or decrease) the ranges of, say, two points per insertion. In general, it is interesting to study trade-offs between the number of modifications and the competitive ratio. Studying deletions is then also of interest.