1 Introduction

Wireless networks are ubiquitous. We use 802.11 Wi-Fi networks at home and in the office, and elsewhere our mobile devices connect via, e.g., GSM, LTE, or Bluetooth. The bandwidth, latency, or error rate of a wireless transmission can be improved by carefully adjusting various parameters such as the modulation and coding scheme. However, apart from such point-to-point considerations, the network itself may also influence wireless communication.

In particular, in a network with more than two devices, concurrent transmissions may interfere. To prevent interference, one may (i) carefully schedule transmissions so that concurrent transmissions are separated in space or time. In addition, one may (ii) control the transmission power in order to reduce interference.

Increasing the transmission power of a sender will likely increase the probability that its packets are received, but it also increases the interference for other concurrent transmissions. Similarly, scheduling a transmission at a different time may improve this transmission but generate interference for the now concurrent transmissions. Since scheduling and transmission power decisions affect the whole network, their effects are difficult to understand in isolation.

There are several classic models and model variants to represent transmissions and interference in wireless networks. Here we just present two of the most common models; for a more comprehensive survey we recommend, e.g., [52]. A typical model to understand wireless networks is the so-called radio network model, e.g., [3].

Definition 1

(Radio Network Model). In the radio network model, the wireless network is modeled as a graph. The nodes of the graph are the wireless devices, either base stations or mobile nodes. There is an edge between two nodes if these nodes can communicate by wireless transmissions. In addition, edges also model interference. A node v can only receive a wireless transmission of a neighbor node $$u \in N(v)$$ if no other neighbor $$w \in N(v)$$ is transmitting concurrently.

The radio network model was very influential to understand wireless networks, but it sometimes falls short because it is a binary model: either a node has interference or it has no interference. This is often too simplistic. In the real world, a node v may receive a packet of neighbor u despite interference of neighbor w if v is closer to u than w. This cannot be modeled by an unweighted graph. Also power control is difficult to represent with a radio network model. An improved model to understand wireless networks is the so-called disk model, e.g., [11].

Definition 2

(Disk Model). In the geometric disk model, nodes are points in the Euclidean plane. A transmitting node u reaches all possible points within some radius r around u, where the radius may depend on the power that node u is using for the transmission. Again, a transmission will be successfully received if an intended receiver node v is inside the transmission disk of node u but not inside the transmission disk of another concurrent transmission.

Setting the radii correctly, the disk model may be accurate enough to model some wireless phenomena, but it is still “too binary” to model reality well. Maybe a transmission can withstand a single concurrent interfering transmission if it is reasonably far away. But can it withstand multiple concurrent transmissions? Interference of electromagnetic waves is additive, and to truly understand wireless transmission and interference our model must be additive as well. Moreover, electromagnetic waves get weaker with distance – physics tells us that a signal drops at least quadratically with distance. If a receiver is closer to a sender, it may withstand more interference.

About a decade ago, researchers studying wireless network algorithms started dropping the radio network, the disk graph, and various other binary models in favor of a model that seemed to represent reality better: the so-called physical model, e.g., [47].

2 Physical Model

Definition 3

(Physical Model, Signal-to-Interference Ratio). In the physical model, there is a gain between every pair of nodes u, v. The gain may be non-symmetric, i.e. the gain from u to v may be different from the gain from v to u. The gain describes how much the power of a transmission at u decreases on the way from u to v. If node u transmits with power $$p_u$$, node v will receive the signal with power $$S = p_u \cdot g(u,v)$$, where g(u,v) is the gain from u to v. Interfering transmissions behave exactly the same, so a concurrent interfering transmission of node w will arrive at node v with interference power $$p_w \cdot g(w,v)$$. Interference is additive, so all the interfering transmissions W accumulate to $$I = \sum _{w \in W}p_w \cdot g(w,v)$$. Whether or not node v can correctly receive u’s transmission depends on the signal-to-interference ratio S/I. If this ratio is at least some constant $$\beta$$, node v will receive the transmission correctly. The physical model is also known as the signal-to-interference-ratio model.

The ratio $$\beta$$ is hardware and coding dependent. For inexpensive hardware the signal should be stronger than the interference, i.e., $$\beta \ge 1$$. However, reasonably good hardware and/or coding may drive the value of $$\beta$$ below 1.

Sometimes, we add a constant ambient noise term N to the interference of concurrent transmissions; the reception test then becomes a signal-to-interference-plus-noise (SINR) test, i.e., we want $$\frac{S}{I+N} \ge \beta$$.
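To make the reception test concrete, the following is a minimal sketch of the SINR check; the function and parameter names, and the sample numbers, are ours, not part of the model.

```python
# Hedged sketch of the SINR reception test of Definition 3.

def receives(p_u, g_uv, interferers, beta, noise=0.0):
    """Does v decode u's transmission?

    p_u         power of the sender u
    g_uv        gain from u to v
    interferers list of (p_w, g_wv) for concurrent senders w
    beta        required ratio; noise = 0 gives the pure SIR test
    """
    S = p_u * g_uv                                    # received signal power
    I = sum(p_w * g_wv for p_w, g_wv in interferers)  # additive interference
    return S / (I + noise) >= beta

print(receives(1.0, 0.1, [(1.0, 0.02)], beta=2.0))  # S/I = 5 >= 2: True
print(receives(1.0, 0.1, [(1.0, 0.1)], beta=2.0))   # S/I = 1 <  2: False
```

Note that the additivity of interference is the one modeling decision the earlier graph-based models cannot express.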

Definition 4

(Geometric and General Physical Model). Sometimes, we add a geometric component to this physical model by assuming that the gain is determined by the geometric distance, i.e. $$g(u,v) = d(u,v)^{-\alpha }$$, where d(u,v) is the Euclidean distance between u and v, and $$\alpha$$ is the so-called path-loss exponent, typically $$\alpha \ge 2$$. We call this special case the geometric physical model. In practice, wireless effects such as shadowing or reflection at walls may make the gain non-geometric – if we have no restrictions on the gain function, the model is simply known as the general physical model.

Wireless networks offer a wide range of challenging algorithmic problems. One family of problems stands out, however: the so-called scheduling/capacity problem. Let us define this family of problems formally.

Definition 5

(Link). A wireless communication link l is defined by a sending node s and a receiving node r, i.e., $$l = (s,r)$$. The length of l is the distance d(s,r) from sender to receiver, which we shall overload with the notation l.

Definition 6

(Feasible Link Set, Link Scheduling). We use gain as introduced in Definition 3. A traffic demand is given by a set L of links. We want to choose a subset $$L' \subseteq L$$ such that all links in $$L'$$ are feasible, i.e., all links can be scheduled concurrently. A subset $$L'$$ is feasible if all links $$l \in L'$$ have a signal-to-interference ratio of at least $$\beta$$. More formally, for any $$l = (s,r) \in L'$$, we want $$S_l/I_l \ge \beta$$, where $$S_l = p_{s}\cdot g(s,r)$$ and $$I_l = \sum _{l' \in L' \setminus \{l\}}p_{s'} \cdot g(s',r)$$ with $$l' = (s',r')$$ being a link in $$L' \setminus \{l\}$$.
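A brute-force transcription of this feasibility test, here specialized to the geometric physical model with uniform power, might look as follows; this is an illustrative sketch, and the coordinates and names are ours.

```python
# Check Definition 6 directly: every link must meet the SIR threshold
# against the summed interference from all other senders (geometric
# gain d^-alpha, uniform power p).
import math

def feasible(links, beta=1.0, alpha=2.0, p=1.0):
    """links: list of ((sx, sy), (rx, ry)) sender/receiver point pairs."""
    for i, (s, r) in enumerate(links):
        S = p / math.dist(s, r) ** alpha
        I = sum(p / math.dist(s2, r) ** alpha
                for j, (s2, _) in enumerate(links) if j != i)
        if I > 0 and S / I < beta:
            return False
    return True

print(feasible([((0, 0), (1, 0)), ((100, 0), (101, 0))]))  # far apart: True
print(feasible([((0, 0), (1, 0)), ((0.5, 0), (1.5, 0))]))  # overlapping: False
```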

The link scheduling problem has three main dimensions:

• Gain: Geometric or general gain as discussed in Definition 4.

• Power control: We will introduce power control in Definition 12.

• Objective: Finding the largest possible (weighted) subset $$L'$$ is only one possible objective. Alternatively, we might want to partition all links L into as few subsets $$L_1,L_2,\ldots ,L_k$$ as possible, such that each subset is feasible, and then schedule the subsets sequentially. Finding a single subset $$L'$$ is known as the one-shot problem; finding a partition is simply known as the scheduling problem.

The capacity problem is a close relative of the link scheduling problem, inheriting these three dimensions.

Definition 7

(Wireless Capacity). The input to the wireless capacity problem is a set of nodes. On top of these nodes we need to specify a traffic pattern. We want to measure how much concurrent traffic is possible.

On top of the link scheduling dimensions mentioned above, wireless capacity has additional parameters:

• Traffic Pattern: Typical traffic patterns include, e.g., every node must send a packet to every other node, or every node must send to a random node. Other classic traffic patterns form trees, e.g., all nodes contribute to computing the average or median temperature at a specific node known as the sink.

• Node Distribution: Early capacity computations only worked if the nodes were distributed in some peculiar way, e.g. Poisson distributed nodes. Later, researchers studied best or worst case node distributions. Algorithmic analyses are able to handle arbitrary node distributions.

• Multi-Hop: Nodes may forward traffic for other nodes in a multi-hop fashion. Sometimes the routes are given, sometimes routing is part of the problem.

The scheduling/capacity problems measure how efficiently we can use a wireless network in the physical model. For wireless networks this is the core problem, as higher layers are generally not different from wireline networks.

Practically, link scheduling and wireless capacity will tell us how we should organize media access on the link layer, as it answers questions about optimal scheduling and power control. In wireless networks, link and network layers cannot be separated as nicely as in wireline networks as network issues will influence the link layer.

3 Link Scheduling

We first explore algorithms for the one-shot uniform-power geometric-gain link scheduling problem, as introduced in Definition 6.

Short links are naturally preferable for maximizing the number of links: their signal is still relatively strong at the receiver, making them more tolerant to interference. We therefore start by sorting the input $$L=\{l_1,l_2,\ldots , l_n\}$$ into a non-decreasing order of length.

Greedy algorithms often work well for subset maximization. In our context, this leads to the natural approach of Algorithm 1.
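Algorithm 1 is not reproduced here; the natural greedy it refers to can be sketched as follows, parameterized by any set-feasibility test such as the one of Definition 6 (all names are ours).

```python
# Sketch of the natural greedy: scan links by non-decreasing length
# and admit a link whenever the grown set remains feasible.

def greedy_one_shot(links, is_feasible):
    """links: pre-sorted by non-decreasing length;
    is_feasible: predicate on a candidate set of links."""
    chosen = []
    for l in links:
        if is_feasible(chosen + [l]):   # add l only if the set stays feasible
            chosen.append(l)
    return chosen
```

For instance, with a toy budget-style oracle, `greedy_one_shot([1, 2, 3, 4], lambda s: sum(s) <= 3)` returns `[1, 2]`.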

Unfortunately, Algorithm 1 is too greedy. Suppose the second shortest link $$l_2$$ is just barely feasible when joined with the shortest link $$l_1$$, i.e. the signal-to-interference ratio of $$l_2$$ is exactly $$\beta$$. Then we cannot add any other link without violating the signal-to-interference ratio of $$l_2$$. In contrast, if the other links are well separated in space, the optimal set may contain all of them.

We should therefore be slightly less “greedy”! Before we present a greedy-like algorithm that works, we first introduce a convenient way of quantifying the impact of interference.

Interference matters only in relation to the strength of the signal that is to be received. According to Definition 6, a transmission is received correctly if the strength of the interference is small enough relative to the strength of the signal. We ignore the effect of the ambient noise by setting $$N=0$$. Noise can be included, but it complicates the treatment.

Definition 8

(Affectance). Consider a link $$l = (s,r)$$ and an interfering link $$l' = (s',r')$$. The signal strength of l (as received at r) is $$p/l^\alpha = p/d(s,r)^\alpha$$, where p is the uniform power used by all senders. Similarly, the interference of $$l'$$ is $$p/d(s',r)^\alpha$$. Then, the relative interference of $$l'$$ on l is $$\beta \cdot \frac{p/d(s',r)^\alpha }{p/d(s,r)^\alpha }$$; the factor $$\beta$$ is chosen so that l is received correctly exactly when the total relative interference on it is at most 1. For technical reasons, to define affectance we cap this relative interference of a single link at 1:

$$\text{ affectance } a_{l' \rightarrow l} := \min \left( \beta \cdot \frac{d(s',r)^{-\alpha }}{l^{-\alpha }},1\right) .$$

For convenience, let $$a_{l \rightarrow l} = 0$$.
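A transcription of this definition, specialized to $$\beta = 1$$ so that the decoding-threshold factor drops out; the link representation and names are ours.

```python
# Affectance of Definition 8 for beta = 1, uniform power, geometric gain.
import math

def affectance(l_prime, l, alpha=2.0):
    """Capped relative interference of link l' on link l (beta = 1).

    Links are ((sx, sy), (rx, ry)) point pairs."""
    if l_prime == l:
        return 0.0                              # convention: a_{l -> l} = 0
    (s2, _r2), (s, r) = l_prime, l
    signal = math.dist(s, r) ** -alpha          # p / l^alpha, with p = 1
    interference = math.dist(s2, r) ** -alpha   # p / d(s', r)^alpha
    return min(interference / signal, 1.0)

# Interferer whose sender is two units from l's receiver, unit-length l:
print(affectance(((3, 0), (4, 0)), ((0, 0), (1, 0))))  # 0.25
```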

The key feature of affectance is that it is cumulative. The affectance of a set S of links on a given link l is the sum of the individual affectances.

Definition 9

(Set Affectance). Define

$$a_{S \rightarrow l} = \sum _{s \in S} a_{s \rightarrow l} \text { and } a_{l \rightarrow S} = \sum _{s \in S} a_{l \rightarrow s}.$$

Crucially, whether a link l is feasible concurrently with a set S of links is equivalent to the condition that $$a_{S \rightarrow l} \le 1$$.

Definition 10

(Symmetric Affectance). Let $$a(x,y) = a_{x \rightarrow y} + a_{y \rightarrow x}$$ be the symmetric version of affectance, and define $$a(S,l) = \sum _{s \in S} a(s,l)$$ as in Definition 9.

Besides avoiding being too greedy, we could also allow infeasible intermediate solutions. Algorithm 2 from [29] combines these two approaches, using a stricter-than-absolutely-necessary criterion to add a link, yet allowing the already added links to accumulate more than affectance 1. A key feature is to bound not only the total affectance on the incoming link from the previous links, but also its total affectance on the previous links (by the same amount). Afterwards, we eliminate those links that exceed their affectance budget.
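Algorithm 2 itself is not reproduced here; the following sketch captures the two passes just described, assuming a directed affectance function aff(x, y) for $$a_{x \rightarrow y}$$ with aff(l, l) = 0 (the thresholds are as stated above; all other names are ours).

```python
def less_greedy(links, aff):
    """One-shot selection in the spirit of Algorithm 2.

    links: sorted by non-decreasing length.
    aff:   directed affectance, aff(x, y) = a_{x -> y}, with aff(l, l) = 0.
    """
    R = []
    for l in links:
        # admit l only if its symmetric affectance with R is below 1/2
        if sum(aff(r, l) + aff(l, r) for r in R) < 0.5:
            R.append(l)
    # second pass: keep only links whose incoming affectance is feasible
    return [r for r in R if sum(aff(r2, r) for r2 in R) <= 1.0]
```

With a constant pairwise affectance of 0.1, the first pass stops admitting links once the accumulated symmetric affectance reaches the 1/2 threshold.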

We show here that the algorithm achieves a constant-factor approximation in the one-dimensional setting (when links are positioned on the line), utilizing some arguments of Kesselheim [41]. We first define some concepts.

Definition 11

(Bi-feasible). A set S of links is bi-feasible if it is feasible ($$a_{S \rightarrow l} \le 1$$ for each $$l \in S$$), and if $$a_{l \rightarrow S} \le 2$$, for each $$l \in S$$.

Lemma 1

Each feasible set S contains a bi-feasible subset of at least half the links.

Proof

We use that $$a_{S \rightarrow l} \le 1$$ for each $$l \in S$$. Hence,

$$|S| \ge \sum _{l \in S} a_{S \rightarrow l} = \sum _{l \in S} \sum _{l' \in S} a_{l' \rightarrow l} = \sum _{l \in S} a_{l \rightarrow S}.$$

In other words, averaged over the links $$l \in S$$, the “out-affectance” $$a_{l \rightarrow S}$$ is also at most 1. Since affectance is non-negative, fewer than half the links $$l \in S$$ can have affectance $$a_{l \rightarrow S}$$ more than twice this average.

Lemma 2

With uniform power and $$\beta \ge 1$$, two concurrently feasible links on a line cannot overlap.

Proof

Suppose there are links $$l = (s,r)$$ and $$l'=(s',r')$$ that overlap. There are two cases as to their configuration. In one case, sender $$s'$$ is located inside the link l. But then, $$l'$$ generates too much interference on l: with $$\beta \ge 1$$ we have $$a_{l' \rightarrow l} = 1$$, so l is infeasible. In the other case, the order of the nodes on the line is $$s, r', r, s'$$. Then, either s is closer to $$r'$$ than $$s'$$ is, or $$s'$$ is closer to r than s is; either way, at least one of the links is infeasible.

Lemma 3

Let m be a link and S be a bi-feasible set of links, each no shorter than m, with $$m \notin S$$. Then, $$a(S,m) \le 10$$.

Proof

Thanks to Lemma 2 we know that links in S do not overlap. Let l (r) be the link in S whose receiver is closest to m’s receiver on the left (right), respectively. Let $$S_l$$ ($$S_r$$) be the links of S to the left of l (right of r), respectively. Since l (r) is no smaller than m, and closer to each link in $$S_l$$ ($$S_r$$), it receives more affectance from links in $$S_l$$ ($$S_r$$) than m, respectively. Thus, $$a_{S_l \rightarrow m} \le a_{S_l \rightarrow l} \le 1$$ and $$a_{S_r \rightarrow m} \le a_{S_r \rightarrow r} \le 1$$. Since the affectance of single links is bounded by 1 (Definition 8), we get

$$a_{S \rightarrow m} = a_{l \rightarrow m} + a_{r \rightarrow m} + a_{S_l \rightarrow m} + a_{S_r \rightarrow m} \le 4.$$

Similarly, we can bound $$a_{m \rightarrow S} \le 6$$. Let x (y) be the link in S whose sender is closest to m’s sender on the left (right), respectively. Let $$S_x$$ ($$S_y$$) be the links of S to the left of x (right of y), respectively. Since x (y) is closer to each link in $$S_x$$ ($$S_y$$), it creates more affectance on links in $$S_x$$ ($$S_y$$) than m does, respectively. Thus, $$a_{m \rightarrow S_x} \le a_{x \rightarrow S_x} \le 2$$ and $$a_{m \rightarrow S_y} \le a_{y \rightarrow S_y} \le 2$$, since S is a bi-feasible set (Definition 11). As the affectance on a single link is bounded by 1 (Definition 8), we get

$$a_{m \rightarrow S} = a_{m \rightarrow x} + a_{m \rightarrow y} + a_{m \rightarrow S_x} + a_{m \rightarrow S_y} \le 1 + 1 + 2 + 2 \le 6.$$

Theorem 1

Algorithm 2 is a constant approximation algorithm for one-dimensional one-shot uniform-power geometric-gain link scheduling problem, independent of $$\alpha$$.

Proof

Assume that all links are of different length, with symmetry broken arbitrarily. First, let us compare the sizes of the sets R and X found by Algorithm 2 on a given instance. The selection criterion in line 3 measures the affectance between the new link and all links in set R so far. At the end of the loop, each link $$r \in R$$ has been symmetrically affected exactly once by every other link $$r' \in R$$, i.e.

$$\sum _{r \in R} a(R,r) = \sum _{r \in R} \sum _{r' \in R} a(r',r) < \frac{1}{2}|R|.$$

Thus, on average the value of a(R,r) is less than 1/2. At least half the items in a non-negative set have a value within twice the average. It follows that at least half the links $$r \in R$$ satisfy $$a_{R \rightarrow r} \le a(r,R) < 1$$; i.e. $$|X| \ge |R|/2$$.

We now compare R with a maximum cardinality feasible set OPT. As we observed in Lemma 1, there is a bi-feasible subset O of OPT of size at least |OPT|/2. Split O into two parts: $$O_1 = O \cap R$$, and $$O_2 = O \setminus R$$. Since $$O_1 \subseteq R$$ we have $$|O_1| \le |R|$$, but it remains to bound the size of $$O_2$$.

On each pair of links $$r \in R$$ and $$o \in O_2$$, define the weight function

$$w(r,o) = \begin{cases} a(r,o) & \text {if } o \text { is longer than } r\\ 0 & \text {otherwise} \end{cases}$$

The weight function w only counts the symmetric affectance between a shorter link $$r \in R$$ and a longer link $$o \in O_2$$.

Let us consider the point in time when Algorithm 2 decided not to include link $$o \in O_2$$ in the set R in line 3; it did so because $$a(R,o) \ge 1/2$$. Since R at that time contained only links shorter than o, we have (i) $$w(R,o) \ge 1/2$$. On the other hand, Lemma 3 implies that (ii) $$w(O_2,r) \le 10$$, for every $$r \in R$$. With (i) and (ii) we get

$$\frac{1}{2}|O_2| \le \sum _{o\in O_2} w(R,o) = \sum _{r\in R} \sum _{o \in O_2} w(r,o) = \sum _{r \in R} w(O_2,r) \le 10 |R| .$$

It follows that $$|O_2| \le 20 |R|$$. Since $$|O_1| \le |R|$$, we get that $$|OPT| \le 2|O| = 2(|O_1| + |O_2|) \le 2(|R| + 20|R|) = 42 |R| \le 84 |X|$$.

Some observations are in order. Note that the approximation ratio is completely independent of $$\alpha$$. This has not been observed before, but crucially needs the one-dimensional setting.

We also note that the performance analysis does not vitally utilize the definition of affectance; we only need a weak sense of monotonicity: $$a_{x \rightarrow z} \le a_{y \rightarrow z}$$ if x is further away from z than y is. Thus, the signal strength can be an arbitrary function of the distance and the transmitter, as long as it is monotone in the distance.

Several heuristic variations are possible without affecting the performance ratio. The affectance threshold “1/2” can be any positive constant less than 1. Also, the greedy set can be formed more gradually, e.g., by eliminating the highest affectance links first.

Moreover, similar algorithms exist for different variants of the problem, multiple dimensions and also arbitrary power link scheduling can be solved similarly, using slightly more geometry in the proofs. We will summarize the most important results in Sect. 5.

The parameter $$\beta$$ indicates how large the signal-to-interference-ratio must be for a signal to be decodable. This is a function of the technology used, both hardware (e.g. antenna design) and software (e.g. modulation, coding, error correction). One natural question is how much impact the value of $$\beta$$ has on link scheduling and wireless capacity. Increasing $$\beta$$ clearly makes decoding more challenging, but could there be some kind of threshold at which point the problem jumps from being very easy to very hard?

The answer is negative: Scaling $$\beta$$ by a constant factor can only lengthen the schedule by a constant factor.

Theorem 2

Let L be a set of links with affectance at most a, i.e., either $$a_{L \rightarrow l} \le a$$ or $$a(L,l) \le a$$ for $$l \in L$$. Then, for any $$b > 0$$, L can be partitioned into $$\lceil 2a/b \rceil ^2$$ sets, each with affectance at most b.

Proof

Let $$\rho = \lceil 2a/b\rceil$$. Process the links in L in an arbitrary order, assigning each link l to some set $$L_i$$, $$i\in \{1,2,\ldots , \rho \}$$, where l’s affectance from the previous links in $$L_i$$ is at most b/2. Such a set must exist, since otherwise the affectance on the link l is larger than $$\rho \cdot b/2 \ge a$$.

Now process each set $$L_i$$ in the opposite order, forming sets $$L_{i,j}$$, $$j=1,2, \ldots , \rho$$. The affectance on each link l is again at most b/2 from the earlier links, with the same argument. Since we processed the links in opposite order, the total affectance on link l is at most b in total.
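The two passes of this proof are constructive and can be sketched directly; aff(x, y) stands for the directed affectance $$a_{x \rightarrow y}$$, and all other names are ours.

```python
# Sketch of the partition in the proof of Theorem 2: a forward pass
# bounding affectance from earlier links by b/2, then a backward pass
# over each group with the same budget.
import math

def partition(links, aff, a, b):
    """Assumes the affectance on each link from all of `links` is <= a."""
    rho = math.ceil(2 * a / b)

    def place(sequence):
        groups = [[] for _ in range(rho)]
        for l in sequence:
            for g in groups:                          # pigeonhole: some group
                if sum(aff(x, l) for x in g) <= b / 2:  # has budget left
                    g.append(l)
                    break
        return [g for g in groups if g]

    result = []
    for g in place(links):                        # first pass: original order
        result.extend(place(list(reversed(g))))   # second pass: reversed
    return result
```

For instance, with a constant pairwise affectance of 0.3 and a = b = 1, four links are split into two groups of two.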

A linear bound of $$\lceil 2a/b\rceil$$ was given in [6] using linear algebra. The implication for changing $$\beta$$ applies when the noise term can be ignored. When noise is dominant, so that throughput must mostly be achieved by weak links, Theorem 2 still tells us that we can increase the requirements on the spatial separation of links in a solution while paying only a constant factor.

4 Power Control

One of the most versatile tools for increasing throughput in wireless networks is the use of power control. Power control is a double-edged sword though: Increasing the transmission power may make decoding easier at the intended receiver, but it also causes more interference for all other links.

Definition 12

(Power Assignment). There are three types of power assignment:

• Uniform power does not depend on the length of the link.

• Oblivious power only depends on the length of the link. This includes linear power $$l^\alpha$$ and mean power $$l^{\alpha /2}$$, for links of length l.

• Arbitrary power can depend on all other links that are simultaneously transmitting.

A tantalizing question is whether power control matters in a non-trivial way. How much gain is possible by using power control, as opposed to being limited to uniform power?

Theorem 3

Power control matters. Mean power can be arbitrarily more efficient than uniform or linear power.

Proof

Consider the following prototypical example, known as the exponential chain. Nodes are positioned on a line at locations $$2^0,2^1, \ldots , 2^n$$ from left to right. There are bi-directional links between all adjacent nodes; i.e., for each $$i=1, \ldots , n$$, there is a link $$(2^{i-1}, 2^i)$$ and the opposite link $$(2^i, 2^{i-1})$$. With uniform power, at most one node can transmit successfully to its left-hand neighbor: the left-most link will overpower any other transmission. Namely, if senders $$2^i$$ and $$2^j$$ transmit concurrently, where $$i < j$$, then the signal from $$2^j$$ at receiver $$2^{j-1}$$ is weaker than the interference from $$2^i$$, since $$d(2^i,2^{j-1}) < d(2^j,2^{j-1})$$; the longer link thus fails for any $$\beta \ge 1$$.

Another popular and useful power assignment strategy is linear power, where links of length $$l$$ transmit with power proportional to $$l^\alpha$$. This strategy has the benefit of being frugal, in that the received power of each link is the same. Perhaps surprisingly, linear power fails equally badly on the exponential chain. Namely, at most one node can transmit successfully to its right-hand neighbor, and the right-most link will overpower any other transmission.

On the other hand, mean power $$l^{\alpha /2}$$ for links of length $$l$$ works well here. The affectances from the other links form a geometric series, which converges to a constant. For instance, using that the power used on link i is $$P_i = d(2^i,2^{i-1})^{\alpha /2} = 2^{(i-1)\alpha /2}$$, the affectances on link i by longer links is

\begin{aligned} \sum _{j=i+1}^n \frac{P_j/d(2^j,2^{i-1})^\alpha }{P_i/d(2^i,2^{i-1})^\alpha } \le \sum _{j=i+1}^n \frac{2^{(i-1)\alpha /2}}{2^{(j-1)\alpha /2}} = \sum _{k=1}^{n-i} \left( 2^{-\alpha /2}\right) ^{k} < \frac{1}{1 - 2^{-\alpha /2}}, \end{aligned}

where the inequality uses $$d(2^j,2^{i-1}) \ge 2^{j-1}$$ and $$d(2^i,2^{i-1}) = 2^{i-1}$$.

We leave the case of affectances by shorter links as an exercise.

Thus, by Theorem 2, the set can be scheduled in a constant number of slots. We thus see here an example of a linear-factor improvement in throughput, obtained by using the right power assignment.
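The geometric-series bound in the proof is easy to sanity-check numerically; the following small sketch evaluates it for $$\alpha = 2$$ under the stated mean-power assignment (all variable names are ours).

```python
# Numerical sanity check of the geometric-series bound above, for
# alpha = 2 and mean power P_i = 2^{(i-1) alpha / 2}.
alpha = 2.0
n, i = 20, 3
P = lambda k: 2.0 ** ((k - 1) * alpha / 2)            # mean power of link k
signal = P(i) / (2.0 ** i - 2.0 ** (i - 1)) ** alpha  # d(2^i, 2^{i-1}) = 2^{i-1}
total = sum(P(j) / (2.0 ** j - 2.0 ** (i - 1)) ** alpha
            for j in range(i + 1, n + 1)) / signal
print(total < 1 / (1 - 2 ** (-alpha / 2)))  # affectance below the bound: True
```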

A natural question is whether oblivious power can be as powerful as arbitrary power. This has been answered negatively: for every oblivious power assignment, there is an instance with n links that is feasible under some power assignment, but of which only one link can be scheduled with the oblivious power [18]. There is a qualitative difference, though, in comparison to uniform power. In order to achieve these constructions, the lengths of the links must increase doubly exponentially [27, 33], whereas our earlier construction of Theorem 3 only involved a singly-exponential chain. We compare the relative power of these power assignments in Table 1.

4.1 A Measure of Interference Under Power Control

The advent of power control means that we cannot use affectance directly when reasoning about links or instances, since it depends directly on the power assignment. We introduce here a stand-in replacement that avoids any reference to power, but still provides a measure of feasibility like affectance does in fixed-power settings.

First, some additional notation. We assume a total order $$\prec$$ on the links such that if $$l$$ is shorter than $$l'$$, then $$l\prec l'$$. To simplify notation we write $$d_{ll'} = d(s, r')$$, for links $$l=(s,r), l'=(s',r')$$. We generalize affectance to involve arbitrary power assignment $$\mathcal{P}$$, defining $$a^\mathcal{P}_{l \rightarrow l'} = \min (1, \frac{P(l)/d_{ll'}^\alpha }{P(l')/(l')^\alpha })$$. We also combine it with set notation as before, and define $$a^\mathcal{P}(l,l') = a^\mathcal{P}_{l \rightarrow l'} + a^\mathcal{P}_{l' \rightarrow l}$$. A set S of links is bi-feasible under power assignment $$\mathcal{P}$$ if it is feasible and $$a^\mathcal{P}_{l \rightarrow S} \le 2$$ for each link $$l\in S$$.

Define the function W such that $$W(l,l') = \min \left\{ 1, \frac{l^{\alpha }}{\min (d_{ll'}, d_{l' l})^{\alpha }}\right\}$$ if $$l\prec l'$$, while $$W(l,l') = 0$$, otherwise. For set X and link $$l$$, define $$W(X,l) = \sum _{l' \in X} W(l',l)$$ and $$W(l,X) = \sum _{l' \in X} W(l,l')$$.
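A direct transcription of W for links given as point pairs in the plane; ties in length are treated as $$l \prec l'$$, and all names are ours.

```python
# The power-free interference measure W defined above.
import math

def W(l, l_prime, alpha=2.0):
    """W(l, l') for links ((sx, sy), (rx, ry)); 0 unless l precedes l'."""
    (s, r), (s2, r2) = l, l_prime
    if math.dist(s, r) > math.dist(s2, r2):
        return 0.0                          # W(l, l') = 0 unless l < l'
    d_ll2 = math.dist(s, r2)                # d_{ll'} = d(s, r')
    d_l2l = math.dist(s2, r)                # d_{l'l} = d(s', r)
    return min(1.0, math.dist(s, r) ** alpha / min(d_ll2, d_l2l) ** alpha)
```

For a unit link against a well-separated length-3 link the value is small, while in the reverse argument order it is 0 by definition.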

A key insight is that W lower bounds affectance under arbitrary power assignment. The term $$l^\alpha /d^\alpha _{ll'}$$ corresponds to the affectance of the shorter link $$l$$ on the longer link $$l'$$ using linear power, while $$l^\alpha /d^\alpha _{l' l}$$ matches the affectance of the longer link $$l'$$ on $$l$$ using uniform power. Both of these are minimal requirements for feasibility, modulo constant factors. We need the following bound, which follows from the classic inequality between the arithmetic and geometric means.

Observation 4

For any positive $$\gamma , x, y$$, it holds that $$\gamma x + \frac{1}{\gamma } y \ge 2 \sqrt{xy}$$.

Lemma 5

For any links $$l$$ and $$l'$$ and power assignment $$\mathcal{P}$$, $$W(l,l') + W(l',l) \le 3^{\alpha /2} a^\mathcal{P}(l,l')$$.

Proof

Assume without loss of generality that $$l\prec l'$$. Then, $$W(l',l)=0$$. Let $$d_{\min } = \min (d_{ll'}, d_{l' l})$$ and $$d_{\max } = \max (d_{ll'}, d_{l' l})$$. By the triangle inequality, $$d_{\max } \le d_{\min } + l+ l' \le 3 \cdot \max (l', d_{\min })$$. Since $$d_{ll'} d_{l' l} = d_{\min }d_{\max }$$,

$$\frac{l\cdot l'}{d_{l' l} d_{ll'}} \ge \frac{l}{d_{\min }} \cdot \frac{l'}{3 \max (l', d_{\min }) } \ge \frac{1}{3}W(l,l')^{1/\alpha } \min \left( 1,\frac{l}{d_{\min }}\right) = \frac{W(l,l')^{2/\alpha }}{3}.$$

If either affectance in $$a^\mathcal{P}(l,l')$$ is capped at 1, the claim is immediate, since $$W(l,l') \le 1 \le 3^{\alpha /2}$$. Otherwise, applying Observation 4 with $$\gamma = P_{l'}/P_{l}$$, $$x = \left( l/d_{l' l}\right) ^{\alpha }$$ and $$y = \left( l'/d_{ll'}\right) ^{\alpha }$$,

$$a^\mathcal{P}(l,l') = \frac{P_{l'}}{P_{l}} \left( \frac{l}{d_{l' l}}\right) ^{\alpha } + \frac{P_{l}}{P_{l'}} \left( \frac{l'}{d_{ll'}}\right) ^{\alpha } \ge \sqrt{\left( \frac{l}{d_{l' l}} \frac{l'}{d_{ll'}}\right) ^{\alpha }} \ge \left( \frac{1}{3}\right) ^{\alpha /2} W(l,l').$$

Close links cannot coexist in the same (highly) feasible set.

Lemma 6

For links $$l, l'$$ in a set that is strongly feasible under some power assignment $$\mathcal{P}$$, in the sense that each link receives affectance at most $$3^{-\alpha }$$, it holds that $$d(l, l') \ge \frac{1}{2} \min (d_{ll'}, d_{l' l})$$.

Proof

Assume without loss of generality that $$l\prec l'$$. Suppose the claim is false. Let $$d_{\min } = \min (d_{l, l'}, d_{l', l})$$ and $$d_{\max } = \max (d_{l, l'}, d_{l', l})$$. By the triangle inequality and the supposition,

\begin{aligned} d_{\min } \le l+d(l,l') < l+ \frac{1}{2} d_{\min }, \quad \text {and hence} \quad d_{\min } < 2 l. \end{aligned}
(1)

By the strong feasibility, $$3^{-2\alpha } \ge a^\mathcal{P}_{l \rightarrow l'} \cdot a^\mathcal{P}_{l' \rightarrow l} = \left( \frac{l\cdot l'}{d_{ll'} \cdot d_{l' l}} \right) ^\alpha$$. Thus,

$$d_{\max } \cdot d_{\min } = d_{ll'} \cdot d_{l' l} \ge 9 ll'.$$

Applying Inequality (1), we get that $$d_{\max } > \frac{9}{2} l'$$. But, by the triangle inequality, $$d_{\max } \le d(l, l') + l+ l' \le 3 l+ l' \le 4l'$$, which is a contradiction.

The following lemma is the counterpart of Lemma 3 for power control.

Lemma 7

Let X be bi-feasible under some power assignment $$\mathcal{P}$$ and let $$l$$ be a link (not necessarily in X). Then, $$W(l,X) = O(1)$$.

Proof

We may assume without loss of generality that $$l\prec l'$$ for all links $$l' \in X$$, since $$W(l,l') = 0$$ otherwise. Apply Theorem 2 to partition X into $$(2 \cdot 3^{\alpha })^2 = 4 \cdot 9^\alpha$$ sets $$T_i$$, each of which is strongly feasible in the sense that $$a^\mathcal{P}_{T_i \rightarrow v} \le 3^{-\alpha }$$, for each i and each link $$l_v \in T_i$$. We argue a bound for each $$T_i$$ separately and add the bounds up to obtain a bound on X.

Let $$T = T_i$$ be one of the strongly feasible subsets. Let $$d(l_a, l_b)$$ denote the shortest distance between a node on link $$l_a$$ and a node on link $$l_b$$.

Let $$l_x$$ be the link in T containing a node that is closest to a node on $$l$$, i.e.  $$d(l_x,l) = \min _{l' \in T} d(l', l)$$. Let $$l' \in T \setminus \{l_x\}$$. By the triangle inequality, $$d(l_x,l') \le d(l_x, l) + d(l, l') \le 2 d(l, l')$$. Using this and Lemma 6, $$\min (d_{l_x l'},d_{l' l_x}) \le 4 d(l,l') \le 4 \min (d_{ll'},d_{l' l})$$. Thus,

\begin{aligned} W(l,l')&= \min \left( 1, \frac{l^\alpha }{\min (d_{ll'},d_{l' l})^\alpha }\right) \le \min \left( 1, \frac{4^\alpha \min (l_x,l')^\alpha }{\min (d_{l_x l'},d_{l' l_x})^\alpha }\right) \nonumber \\&\le 4^\alpha (W(l_x, l') + W(l',l_x)) \le 3^{\alpha /2} 4^\alpha a^\mathcal{P}(l_x, l') , \end{aligned}
(2)

using the definition of W and Lemma 5. Summing over all $$l'$$ in T, we have

$$W(l,T) = W(l,l_x) + W(l,T \setminus \{l_x \}) \le 1 + 4^\alpha \cdot 3^{\alpha /2} a^\mathcal{P}(l_x,T) .$$

Finally, summing over the subsets $$T_i$$ of X yields

$$W(l,X) \le 4 \cdot 9^\alpha + 4^\alpha 3^{\alpha /2} a^\mathcal{P}(l_x,X) \le 4 \cdot 9^\alpha + 4^\alpha \cdot 3^{\alpha /2+1} ,$$

using the bi-feasibility of X.

4.2 Power Control Algorithm

A constant-approximation algorithm of Kesselheim [40] for the one-shot link scheduling problem with arbitrary power is given as Algorithm 3. The first part is equivalent to the first pass of Algorithm 2, but using the measure W instead of uniform-power affectance. The second pass assigns the links power in decreasing order of length, and is designed to give each link just a little more power than is needed to overcome the interference from the longer links. Note that if the noise N is zero, the first (longest) link can be assigned arbitrary power.
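The second pass just described can be sketched as follows, with the power rule $$P_v = 2\beta l_v^\alpha \cdot I^+_v$$ read off from the proof of Theorem 4; the representation, names, and the positive-noise default are ours.

```python
# Sketch of the power-assignment pass: process links in decreasing
# order of length, giving each link 2*beta*l^alpha times the noise
# plus the interference from the already-assigned (longer) links.
import math

def assign_powers(links, beta=1.0, alpha=2.0, noise=1.0):
    """links: ((sx, sy), (rx, ry)) pairs; returns {link: power}."""
    P = {}
    for s, r in sorted(links, key=lambda l: math.dist(*l), reverse=True):
        l = math.dist(s, r)
        # I^+_v: ambient noise plus interference from longer links
        upward = noise + sum(Pw / math.dist(s2, r) ** alpha
                             for (s2, r2), Pw in P.items())
        P[(s, r)] = 2 * beta * l ** alpha * upward
    return P
```

With a single unit-length link, noise 1, and beta = 1, the assigned power is exactly 2; additional shorter links receive a little extra power to compensate for the longer ones.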

Theorem 4

Let $$\tau$$ be as in Algorithm 3. If S is a set of links that satisfies, for each link $$l\in S$$, $$W(S,l) \le \tau$$, then S is feasible. Moreover, the set S computed by Algorithm 3 is feasible with the power assignment computed.

Proof

Let $$l_v$$ be a link in S. The total interference received by $$l_v$$ is $$I^-_v + I^+_v$$, where $$I^-_v = \sum _{l_w \in S, l_w \prec l_v} P_w/{d^\alpha _{wv}}$$ is the interference received from shorter links and $$I^+_v = N + \sum _{l_w \in S, l_v \prec l_w} P_w/d^\alpha _{wv}$$ is the ambient noise plus the interference received from longer links. Note that $$I^+_v = P_v / (2 \beta l_v^\alpha )$$, by the definition of $$P_v$$ (in line 7). So, the focus is on bounding $$I^-_v$$, the interference from shorter links.

We first expand $$I^-_v$$ using the assigned powers:

\begin{aligned} I^-_v = \sum _{\begin{array}{c} l_w \in S\\ l_w \prec l_v \end{array}} \frac{P_w}{d^\alpha _{wv}} = 2\beta \sum _{\begin{array}{c} l_w \in S\\ l_w \prec l_v \end{array}} \left( N l_w^\alpha \frac{1}{d^\alpha _{wv}} + \sum _{\begin{array}{c} l_u \in S\\ l_w \prec l_u \end{array}} \frac{1}{d^\alpha _{wv}} \left( P_u \frac{l_w^\alpha }{d^\alpha _{uw}}\right) \right) \ . \end{aligned}
(3)

The first term is bounded by $$2 \beta N \tau$$, by the condition in line 4 of Algorithm 3 that defines S. Let $$X_{uv} = \{l_w \in S : l_w \le \min (l_v,l_u)\}$$, for any link $$l_u \in S$$. By rearranging indices, we continue from (3) with

\begin{aligned} I^-_v \le 2\beta N \tau + 2 \beta \sum _{l_u \in S} \sum _{l_w \in X_{uv}} \frac{P_u l_w^\alpha }{d^\alpha _{wv} d^\alpha _{uw}}. \end{aligned}
(4)

Let $$l_u$$ be a link in S. Since $$W(l_w,l_v) \le W(S,l_v) \le \tau < 1$$, it holds that $$l_w/d_{wv} < 1$$, so $$l_w \le d_{wv}$$ and $$l_w^\alpha /d_{wv}^\alpha \le W(l_w, l_v)$$, for any link $$l_w \in S$$. Similarly, $$l_w \le d_{uw}$$ and $$l_w^\alpha /d_{uw}^\alpha \le W(l_w, l_u)$$.

We split the terms of the inner sum into two parts: $$M_1 = \{l_w \in X_{uv} | d_{uv} \le 3 d_{uw}\}$$ and $$M_2 = X_{uv} \setminus M_1$$. For each $$l_w \in M_1$$, using the definition of $$M_1$$ and the assumed bound on W,

\begin{aligned} \sum _{l_w \in M_1} \frac{l_w^\alpha }{d^\alpha _{wv} d^\alpha _{uw}} \le \frac{3^\alpha }{d^\alpha _{uv}} \sum _{l_w \in M_1} \frac{l_w^\alpha }{d^\alpha _{wv}} \le \frac{3^\alpha }{d^\alpha _{uv}} W(M_1,l_v) \le \frac{3^\alpha }{d^\alpha _{uv}} \tau . \end{aligned}
(5)

For each $$l_w \in M_2$$, we have by the triangle inequality that $$d_{uv} \le d_{uw} + l_w + d_{wv} \le d_{uw} + 2 d_{wv}$$. By the definition of $$M_2$$, $$d_{uv} > 3 d_{uw}$$, so $$d_{uv} < \frac{1}{3}d_{uv} + 2 d_{wv}$$, which gives $$d_{uv} < 3 d_{wv}$$. Hence, using the assumed bound on W,

\begin{aligned} \sum _{l_w \in M_2} \frac{l_w^\alpha }{d^\alpha _{wv} d^\alpha _{uw}} \le \frac{3^\alpha }{d^\alpha _{uv}} \sum _{l_w \in M_2} \frac{l_w^\alpha }{d^\alpha _{uw}} \le \frac{3^\alpha }{d^\alpha _{uv}} W(M_2,l_u) \le \frac{3^\alpha }{d^\alpha _{uv}} \tau . \end{aligned}
(6)

Applying Inequalities (5) and (6), along with the definition of $$P_v$$,

\begin{aligned} \sum _{\begin{array}{c} l_u \in S,\\ l_v \prec l_u \end{array}} \sum _{l_w \in X_{uv}} \frac{P_u l_w^\alpha }{d^\alpha _{wv} d^\alpha _{uw}} \le 2 \cdot 3^\alpha \tau \sum _{\begin{array}{c} l_u \in S,\\ l_v \prec l_u \end{array}} \frac{P_u}{d^\alpha _{uv}} \le 2 \cdot 3^\alpha \tau \frac{P_v}{2\beta l_v^\alpha } , \end{aligned}
(7)

and, using also the definition of $$I_v^-$$,

\begin{aligned} \sum _{\begin{array}{c} l_u \in S,\\ l_u \prec l_v \end{array}} \sum _{l_w \in X_{uv}} \frac{P_u l_w^\alpha }{d^\alpha _{wv}d^\alpha _{uw}} \le 2 \cdot 3^\alpha \tau \sum _{\begin{array}{c} l_u \in S,\\ l_u \prec l_v \end{array}} \frac{P_u}{d^\alpha _{uv}} = 2 \cdot 3^\alpha \tau \cdot I^-_v . \end{aligned}
(8)

Plugging (7) and (8) into Eq. (4) gives,

$$I^-_v \le 2\beta N\tau + 2 \cdot 3^\alpha \tau \cdot P_v / l_v^\alpha + 4 \beta \cdot 3^\alpha \tau \cdot I^-_v .$$

Solving for $$I^-_v$$, cancelling $$\tau$$ and using the bound $$2\beta N \le P_v/l_v^\alpha$$ (which follows from the definition of $$P_v$$),

\begin{aligned} I^-_v \le \frac{2\beta N + 2 \cdot 3^\alpha \cdot P_v / l_v^\alpha }{1/\tau - 4 \beta \cdot 3^\alpha } \le \frac{(1 + 2 \cdot 3^\alpha ) P_v / l_v^\alpha }{1/\tau - 4 \beta \cdot 3^\alpha } = \frac{1}{2\beta } \cdot \frac{P_v}{l_v^\alpha } , \end{aligned}
(9)

after plugging in the value of $$\tau$$. Thus, the total interference on $$l_v$$ is bounded by $$I^-_v + I^+_v \le \frac{1}{\beta }P_v / l_v^\alpha$$, implying the required SINR for $$l_v$$, as desired.
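The SINR condition established by the proof can also be verified directly for a computed power assignment. Below is a sketch under the same geometric-gain assumptions, with links given as sender/receiver points in the plane; the representation and names are ours, not from the survey.

```python
import math

def is_feasible(links, power, beta, alpha, noise):
    """Check that every link meets its SINR threshold beta (sketch).

    A set is feasible if, for each link v, the received signal
    P_v / l_v^alpha is at least beta times the ambient noise plus the
    total interference from all other links' senders.
    """
    def dist(p, q):  # Euclidean distance (geometric gain assumed)
        return math.hypot(p[0] - q[0], p[1] - q[1])

    for v, lv in enumerate(links):
        signal = power[v] / lv['len'] ** alpha
        interference = noise
        for w, lw in enumerate(links):
            if w != v:
                d_wv = dist(lw['sender'], lv['receiver'])
                interference += power[w] / d_wv ** alpha
        if signal < beta * interference:
            return False
    return True
```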

Observe that the constant-approximation bound now follows from exactly the same arguments as in Theorem 1, just using W and Lemma 7 instead of a and Lemma 3.

Theorem 5

Algorithm 3 is a constant-approximation algorithm for the one-shot arbitrary-power geometric-gain link scheduling problem.

5 Bibliography

Gupta and Kumar [25] proposed the geometric version of the SINR model, where the signal decays as a fixed polynomial of distance; it has since been the default model in analytic and simulation studies. They also initiated average-case analysis of network capacity, giving rise to a large body of “scaling law” results. Moscibroda and Wattenhofer [47] gave the first algorithmic (worst-case) analysis in the SINR model.

The first algorithmic result on link scheduling for arbitrary link sets was by Moscibroda et al. [49]. This result was soon superseded by the first approximation results [8, 22]. These early approaches involved (directly or indirectly) partitioning links into length groups, which results in performance guarantees that are at least logarithmic in $$\varDelta$$, the link diversity [9, 16, 22, 27]. NP-hardness was established in [22]. Constant approximations for the One-shot Link Scheduling problem were given for uniform power [21], linear power [19, 58], fixed power assignments [29], and arbitrary power control [40]. This was extended to distributed learning [4, 16], admission control in cognitive radio [30], link rates [41], multiple channels [7, 59], spectrum auctions [35, 36], changing spectrum availability [13], jamming [14], and MIMO [61]. Numerous heuristics are known, as well as exponential-time exact algorithms, e.g., [54].

Our treatment of uniform power is based on the algorithm of [29] and simplified arguments of [41]. Theorem 2 on signal-strengthening is due to [34]; an improved bound using linear algebra is given in [6]. Algorithm 3 for power control is due to [40, 41]; the proof given here holds in general metric spaces, and is significantly shorter than the one in [41].

A related problem is the scheduling problem, where we want to partition the given set of links into the fewest possible feasible sets. Early work on this problem includes [8, 12, 17]. Constant approximations for one-shot link scheduling immediately imply an $$O(\log n)$$-approximation for scheduling, where n is the number of links. Another approach is to solve links of similar lengths in groups, which results in an $$O(\log \varDelta )$$-approximation [20, 22, 27]. NP-completeness results have been given for different variants [22, 38, 44], but as of yet no APX-hardness or stronger lower bounds are known. The weighted version of One-shot Link Scheduling – where the links have positive weights and the objective is to find a maximum-weight feasible set – behaves computationally much like scheduling. Recently, an $$O(\log ^* \varDelta )$$-approximation algorithm was given for arbitrary-power scheduling and weighted One-shot Link Scheduling [32], by transforming the physical model into a conflict graph.
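The reduction from one-shot link scheduling to scheduling can be sketched as follows; `one_shot` is a hypothetical placeholder for any constant-approximation one-shot algorithm, and the $$O(\log n)$$ bound follows from the usual set-cover peeling argument, since each slot removes a constant fraction of the links schedulable in any single slot of an optimal solution.

```python
def schedule(links, one_shot):
    """Partition links into feasible slots using a one-shot oracle (sketch).

    one_shot(remaining) is assumed to return a non-empty feasible subset
    of `remaining` whose size is within a constant factor of the largest
    feasible subset of `remaining`.  Greedily peeling such subsets off
    yields an O(log n)-approximate schedule.
    """
    remaining = set(links)
    slots = []
    while remaining:
        chosen = one_shot(remaining)
        assert chosen and chosen <= remaining  # oracle must make progress
        slots.append(chosen)
        remaining -= chosen
    return slots
```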

A problem of fundamental importance to sensor networks is connectivity: how efficiently the nodes can be connected into a strongly connected structure. This issue affects the network layer as well as the link layer of the networking stack, as one must select the links of a spanning tree, choose their powers, and schedule them. The first analytic result in the SINR model showed that with the right power control, any set of nodes can be connected into a tree that can be scheduled in a polylogarithmic, $$O(\log ^4 n)$$, number of slots [47]. This was soon improved to $$O(\log ^2 n)$$ [46, 50] and later to $$O(\log n)$$ [31].

Other more complex problems studied include non-preemptive scheduling [20], joint power control, scheduling and routing [8], fixed-power multiflow [9], multi-path flow with general demand vectors [60], stochastic packet scheduling [42, 56], and joint power control, routing and throughput scheduling in multiple channels [2], to name a few. Many of these rely on (weighted) One-shot Link Scheduling as a building block.

Beyond the computational aspects covered in this survey, there are challenging issues that arise when trying to achieve communication in a distributed setting. There is also some deep work on geometric characterizations of the regions in which specific transmissions can be decoded under the physical model, e.g., [37].

6 Beyond the Physical Model

6.1 Realistic Signal Propagation

The assumption of geometric gain is mathematically pleasing, but it can be quite far from reality, even in relatively simple environments [24, 45, 53, 55]. On the other hand, the additivity of interference and the near-threshold nature of signal reception has been borne out in experiments [10, 45, 48, 55, 62].

Several proposals have been suggested for extending the standard physical model to capture the non-geometric aspects of signal propagation. The basic model allows the pathloss constant $$\alpha$$ to vary [25], giving a first-order approximation of the signal gain. Another, more general approach is to view the variation as deforming the plane into a general metric space [18, 29]. Much recent analytic work holds in arbitrary metric spaces [29, 41], while some requires them to have a certain “bounded independence” property [32].

One practical alternative is to use facts on the ground, in the form of signal-strength measurements, instead of the prescriptive distance-based formula [7, 24]. This might suggest the general physical model (Definition 4), but that runs into the computational intractability monster, since such a formulation can encode highly inapproximable problems like maximum independent set in graphs [21]. Instead, one can characterize the performance guarantee in terms of natural parameters of the gain matrix, such as its nearness to a metric [24]. Another such parameter is the weighted “inductive independence” [35], which has been shown to be of wide general utility [28].

In the world of stochastic analysis, the default assumption is to model the variations in signal propagation by a probability distribution [26]. A significant experimental literature lends support to stochastic models [51], especially log-normal distributions [62]. There is a need to better understand the impact of such stochastic assumptions on effective algorithms.
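As a concrete illustration of such stochastic models, the common log-distance path-loss model with log-normal shadowing can be sampled as follows. All parameter values here are illustrative defaults of our own choosing, not taken from the cited works.

```python
import math
import random

def received_power_dbm(tx_dbm, d, alpha=3.0, d0=1.0, pl0_db=40.0,
                       sigma_db=8.0, rng=random):
    """Received power under log-distance path loss with shadowing (sketch).

    Mean path loss grows as 10*alpha*log10(d/d0) dB beyond a reference
    distance d0; shadowing adds a zero-mean Gaussian in dB, i.e. a
    log-normal factor in linear scale.
    """
    path_loss_db = pl0_db + 10 * alpha * math.log10(d / d0)
    shadowing_db = rng.gauss(0.0, sigma_db)  # log-normal shadowing term
    return tx_dbm - path_loss_db - shadowing_db
```

Setting `sigma_db=0` recovers the deterministic geometric-gain model; larger values make reception near the threshold increasingly probabilistic.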

The temporal aspect of signal variability is another dimension. The dual graph model [43] extends the radio network model to a pair of graphs, one of which contains the unreliable links, which may or may not deliver a message, under adversarial control. Stochastic models usually assume independence across time. In such a setting, Dams et al. [15] showed that temporal variations that follow a Rayleigh distribution do not significantly affect the performance of link scheduling algorithms, incurring only an $$O(\log ^* n)$$-factor loss in performance.

In wireless communication, multiple-input multiple-output (MIMO) is a method for expanding the capacity of a radio link using multiple transmitting and receiving antennas [39]. MIMO is well established in practice, with several wireless standards supporting it. MIMO has also received a lot of attention in lower-layer signal processing research and from information theorists. Network-wide MIMO applications are known as multi-user MIMO (MU-MIMO) or cooperative MIMO (CO-MIMO). These are still at the research stage, and there are not many studies with an algorithmic flavor, but there are exceptions, e.g., [5].

Closely related to MIMO is beamforming using antenna arrays. Beamforming is a signal processing technique applied at either the transmitting or the receiving side. The idea is to carefully choose the phases of the signals at the various antennas in order to produce either constructive or destructive interference at different locations. Again, this is an active research area in information theory, less so in algorithms.
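The phase-steering idea can be illustrated with the array factor of a uniform linear array; the following sketch, with hypothetical parameter names, shows how choosing per-element phases makes the signals add constructively in one direction and cancel in others.

```python
import cmath
import math

def array_factor(n, spacing_wl, theta, steer):
    """Magnitude of the array factor of an n-element uniform linear array.

    spacing_wl is the element spacing in wavelengths; theta and steer are
    angles in radians.  Each element's phase is chosen so the signals add
    constructively in direction `steer`; in other directions the phasors
    may cancel (destructive interference).
    """
    k = 2 * math.pi  # wavenumber times one wavelength
    psi = k * spacing_wl * (math.sin(theta) - math.sin(steer))
    # Sum of n unit phasors with progressive phase shift psi.
    return abs(sum(cmath.exp(1j * i * psi) for i in range(n)))
```

In the steered direction all n phasors align, giving magnitude n; at suitable off-axis angles the phasors cancel completely.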

Network coding [1], on the other hand, only works in the presence of a network. While we assumed that concurrent wireless transmissions interfere, one may hope to make use of the additive nature of concurrent wireless signals. Consider three nodes u, v, w, with v sitting between u and w, where u and w want to exchange a message. In the first time slot, let u and w transmit their own messages concurrently. Node v may understand neither u’s nor w’s message because of interference, but v can simply retransmit the received additive signal in the second time slot. Now both u and w receive the additive signal, and since each knows its own original message, it can subtract that from the additive signal and consequently obtain the missing message. All this requires just two time slots. Network coding has been analyzed from an algorithmic perspective, e.g., [23], but it is rarely used in the wild.
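The exchange described above has a well-known digital analogue in which the relay forwards the bitwise XOR of the two messages instead of the analog sum. A minimal sketch (the function name and byte-string framing are ours; the two messages are assumed to have equal length):

```python
def relay_exchange(msg_u, msg_w):
    """Two-slot message exchange through a relay via XOR network coding.

    Slot 1: u and w each send their message (equal-length byte strings)
            to the relay v.
    Slot 2: v broadcasts the XOR of the two messages; u and w each
            recover the other's message by XORing with their own.
    """
    coded = bytes(a ^ b for a, b in zip(msg_u, msg_w))  # what v broadcasts
    recovered_at_u = bytes(a ^ b for a, b in zip(coded, msg_u))
    recovered_at_w = bytes(a ^ b for a, b in zip(coded, msg_w))
    return recovered_at_u, recovered_at_w
```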

7 Open Questions

The last section discussed new directions that all need to be addressed better algorithmically. We list here some of the most significant open questions regarding the physical model:

1. Is there a constant-approximation algorithm for the scheduling problem with uniform power? Only logarithmic factors are known, even in one dimension.

2. Are there approximations of one-shot link scheduling with small constant factors, e.g., a 2-approximation? Can the possibility of an approximation scheme be disproved?

3. How much can the capacity of practical wireless networks be improved with good scheduling algorithms? How much of the gain is due to power control, and how much can be achieved already with uniform power?

4. What kind of infrastructure and/or assumptions are sufficient or necessary to achieve efficient distributed algorithms for link scheduling?

5. How can one capture the unreliability or time-variability seen in most actual wireless networks, to make such realistic but non-deterministic models algorithmically tractable?