Autonomous Agents and Multi-Agent Systems

Volume 31, Issue 3, pp 469–492

Robust allocation of RF device capacity for distributed spectrum functions

  • Stephen F. Smith
  • Zachary B. Rubinstein
  • David Shur
  • John Chapin

DOI: 10.1007/s10458-016-9329-5

Cite this article as:
Smith, S.F., Rubinstein, Z.B., Shur, D. et al. Auton Agent Multi-Agent Syst (2017) 31: 469. doi:10.1007/s10458-016-9329-5


Abstract

Real-time awareness of radio spectrum use across frequency, geography and time is crucial to effective communications and information gathering in congested airway environments, yet acquiring this awareness presents a challenging sensing and data integration problem. A recent proposal has argued that real-time generation of spectrum usage maps might be possible through the use of existing radios in the area of interest, by exploiting their sensing capacity when they are not otherwise being used. In this paper, we adopt this approach and consider the task allocation problem that it presents. We focus specifically on the development of a network-level middleware for task management, which assigns resources to prospective mapping applications based on a distributed model of device availability, and allows mapping applications (and other related RF applications) to specify what is required without worrying about how it will be accomplished. A distributed, auction-based framework is specified for task assignment and coordination, and instantiated with a family of minimum set cover algorithms for addressing “coverage” tasks. An experimental analysis is performed to investigate and quantify two types of performance benefits: (1) the basic advantage gained by exploiting knowledge of device availability, and (2) the additional advantage gained by adding redundancy in subregions where the probability of availability of assigned devices is low. To assess the effectiveness of our minimum set cover algorithms, we compute optimal solutions to a static version of the real-time coverage problem and compare performance of the algorithms to these upper bound solutions.


Keywords: Distributed task allocation · Contract net protocol · Task allocation under uncertain resource availability · Market-based procedures

1 Introduction

Real-time awareness of radio spectrum use can be a valuable asset in many application settings, making communication in congested airways more effective and also facilitating information gathering activities. However, acquiring this awareness is currently a difficult problem. Either a sufficient number of radio frequency (RF) sensors must be appropriately prepositioned in the particular region of interest or a smaller number of devices must be periodically rotated over the region. A recent proposal being pursued by the DARPA RadioMap program advocates a different approach, recognizing the fact that in many application settings there is a wealth of RF sensing capacity already resident in a given target area of interest that can be exploited to this end [5]. In urban areas, for example, there are numerous mobile police, firefighting and municipality vehicle radios on the road. Likewise, during military operations, communication radios are used pervasively to coordinate activity. In most cases, these devices are not used continuously and could be exploited for RF map building purposes. The goal of the RadioMap program is to realize this potential.

In this paper, we consider the problem of tasking a (possibly changing) set of resident devices to perform RF mapping in a given area of interest. We assume that these devices are equipped with middleware to allow their use for RF map building (or related tasks such as spectrum coordination, surgical electronic attack, etc.) during periods when their “primary mission” is not active.1 In contrast to the classical tasking assumption made in wireless sensor networks (WSNs), devices are assumed to be power rich, and the principal tasking objective is maximizing achievement of secondary RF-mapping tasks rather than load balancing to maximize the power lifetime of the network.

While one might imagine that an application such as RF spectrum mapping could task specific radios directly, such precise tasking at the application level is problematic and could result in significant additional overhead due to the following realities:
  • Uncertain availability—Since the application is not the primary mission for a device and the primary mission can take control of the device at any point, a tasked device cannot be expected to be available on demand. The primary mission is not constrained to inform potential secondary devices of specific periods of availability, and there are also uncertainties associated with potential communication failures, unexpected device movement, consumption of node resources (e.g. battery) and loss of signal that additionally impact device availability. In fact, scenarios where a given node is available as little as 25 % of the time are very plausible.

  • Competing applications—In any given application setting, availability may be further constrained by the fact that multiple applications require use of the network, which introduces competition for the use of specific devices. While it could be possible to pre-allocate subsets of devices to specific applications, such a static allocation strategy is likely to be highly sub-optimal as circumstances such as device location evolve over time.

Both of these realities admit the non-trivial possibility that mapping requests will fail and have to be re-allocated (either to the same or to other comparable devices). To further complicate matters, the short lead times of expected mapping requests (e.g., a RF frequency map of a specific region every 10 s) effectively preclude the possibility of maintaining a model of current device availability at the application level.

Given these problem characteristics, we instead advocate a framework where applications specify the secondary RF tasks to be performed in a device independent manner, and an independent task management middleware assumes responsibility for allocating network resources to achieve these tasks. Specifically, we propose an auction-based task allocation framework, where individual devices bid on secondary tasks based on their expected availability, and these bids are composed to determine a set of task assignments that maximize the probability of successful execution. Auction-based task allocation procedures have proven effective in a variety of other multi-agent and multi-robot planning domains (e.g., [1, 3, 6, 12, 20, 21]), where communication bandwidth is limited and it is cost-prohibitive to maintain a centralized model of state. In the current context, an auction-based framework allows us to efficiently maintain and exploit a network-level model of device availability.

We focus specifically on RF coverage tasks and investigate the leverage that a market-based allocation framework can provide with respect to two important tasking capabilities:
  • Biased Allocation—By self-monitoring device usage and availability over time, each device can independently construct an availability profile that allows efficient quantification of the probability of the device being available at any point. Task allocation can then exploit this information to bias application task assignments and allocate devices that give the greatest probability of task success.

  • Allocation Redundancy—Availability profile information can also be used to identify the degree of uncertainty associated with various device tasks, and use this knowledge to allocate a proportionate amount of redundant RF sensing tasks. Allocation of redundant tasks can increase the overall probability of success.

The remainder of the paper is organized as follows. First, in Sect. 2, we review relevant prior work in market-based approaches to task allocation. In Sect. 3, we specify our overall auction-based framework for allocation of application-level tasks to known network resources. Next, in Sect. 4, we narrow focus to a class of coverage tasks, which includes the RF spectrum mapping task of interest. We instantiate the framework with a specific task allocation model and experimentally analyze the advantage provided by this tasking middleware approach over direct application-level task assignment. In Sect. 5, we expand the task allocation model to consider the possibility of allocating redundant tasks, and experimentally analyze how redundancy can lead to improved RF spectrum map building performance. How closely our task allocation models approach optimal performance is considered next in Sect. 6. A Mixed Integer Linear Program (MILP) for optimally solving a static version of the RadioMap problem is formulated, and these upper bound solutions are used to characterize the performance of our approach. Finally, in Sect. 7 we summarize our findings and discuss future research directions.

2 Related work

The use of auction and market-based mechanisms for task allocation descends from the Contract Net Protocol originally proposed in [22]. In this work, a contracting metaphor was used to specify a basic protocol for identifying, soliciting and engaging appropriate problem solving capabilities in the pursuit of solving a larger problem. Over the years, variants of this basic protocol have been effectively applied in many other multi-agent “task allocation” contexts, including manufacturing production scheduling [20], cargo movement [21], grid computing [4], robotic space exploration [12], multi-robot coordination [10] and disaster response planning [1].

Considering applications closer to the RadioMap task allocation problem, research in the mobile robots community has considered auction-based approaches to a range of coverage problems (e.g., surveying or surveilling a physical space, covering terrain, performing search and rescue) [6]. However, in these types of coverage problems, unlike the RadioMap setting, it is typically assumed that robot (sensor) movements are under the allocator’s control and that robots are unconditionally available for tasking. The focus is on optimizing robot movements to minimize time needed to achieve coverage, rather than maximizing the level of coverage that is achievable by a set of pre-positioned devices with limited availability.

Perhaps more relevant is prior work in task allocation for wireless sensor networks (WSNs) and extended sensor, actor/actuator networks (WSANs). This work has explored similar concepts of market-based task allocation middleware (e.g., [7, 13]). However, the typical assumption made in this work is that power is limited, and the overriding task allocation objective is to choose devices over time that maximize the lifetime of the network. Selected efforts have secondarily considered task completion time (e.g., [19]). In the context of RadioMap, alternatively, it is reasonable to assume that radios are linked to larger power sources (e.g., vehicles) or are otherwise easily recharged at regular intervals, and load balancing to conserve power is not a major allocation concern.

For extended WSAN networks, a family of distance-based service discovery algorithms based on the concept of an information mesh (iMesh) [16, 17, 18] has been proposed to allocate actuator tasks. These service discovery algorithms provide an efficient decentralized basis for determining the closest proximity actuator to a given request (hence minimizing travel and energy expenditure). However, like multi-robot coverage procedures, iMesh algorithms do not address the basic coverage objective of the RadioMap problem. Service discovery in the classical sense of maintaining an eligible set of taskable nodes (e.g., [23]) is certainly still relevant, but in this context can be treated as a separable capability that provides input to the task allocation process.2 Given a set of eligible nodes, the RadioMap task allocation challenge is to determine the best subset of nodes to cover the request at hand.

Perhaps even more importantly, task allocation research in wireless sensor networks has assumed that the network is available for tasking throughout its lifetime (i.e., without an alternative mission). The fact that RadioMap network devices are controlled by an independent primary mission makes device availability inherently uncertain and precludes the use of techniques for relocating sensors to improve communication and sensing capabilities (e.g., [15]). Below, we extend use of auction-based task allocation mechanisms to the problem of allocating RF devices with uncertain availability over time.

3 Basic approach

As indicated above, we take a distributed, market-based approach to solving the RadioMap task allocation problem. We generalize from the specific target of building real-time awareness of RF spectrum usage and imagine that a given network of resident devices could support a variety of secondary missions in addition to spectrum mapping that are initiated by a number of distinct application users. In this extended setting, a market-based allocation framework is attractive in several respects. First, it promotes efficient distributed development and maintenance of a model of device availability by individual devices themselves. Second, it centralizes task allocation decisions associated with a given application request and, in doing so, provides the best opportunity to optimize the probability of success. Third, it provides an independent arbiter in circumstances where multiple applications are competing for the same limited resources.

We assume an overall system design where the network task management function (or tasking manager) is organized in a distributed manner and embedded as middleware on network devices. An interface is defined to enable an application to submit specific jobs for execution. The term job is used to designate a higher level abstraction of a complete application function, such as building a spectrum map for a given area. We use the term task to represent a single atomic function that may be carried out by a device in support of a job. The tasking manager of the network node that receives a given input job request first decomposes the job into a set of atomic tasks, and then assigns each task to a specific device for execution. Successive application requests may be submitted to different network nodes, so that no single node becomes a bottleneck and there is no single point of failure. We assume that communication is peer-to-peer and make no additional assumptions about network structure (see [5] for details on maintaining a set of eligible nodes).

Task decomposition is accomplished by applying an appropriate task template, a hierarchical task network (HTN)-like description [11] of a certain type of RF application that captures its structure, constraints and interdependencies. Task templates also provide a framework for associating specialized, high-performance task allocation strategies with specific types of RF applications. In our analysis later in this paper, we will focus exclusively on allocation strategies for coverage tasks, but a more general implementation of the tasking manager would operate with a library of templates that can be reused and composed to expand the set of supported RF applications.

Once an input job has been received by a node and decomposed into a set of executable tasks, its tasking manager initiates a process of brokering these required tasks to other nodes in the network. A network node discovery service is consulted to determine a set of candidate nodes/devices with the required capabilities. The brokering process is then carried out for each constituent RF task via the following Announce/Bid/Assign coordination protocol:
  1. The task is announced to candidate “contractor” nodes by the “brokering” node.

  2. Potential contractors respond by issuing bids that indicate possible task execution options.

  3. Bid responses are synthesized by the brokering node and tasks are assigned.

Each node maintains a description of its task management state, which tracks the status of tasks that the node is currently brokering, in order to determine whether task requests can be satisfied, and if so, by when. Each node also maintains a description of its execution state, which tracks the tasks that the node has currently committed to and its overall availability. The bidding protocol does add some communication overhead to job execution, and this overhead must be balanced against the information benefit that is achieved during execution. For RF mapping tasks this overhead is negligible, particularly in relation to the volume of spectrum data that is returned.
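To make the protocol concrete, one Announce/Bid/Assign round can be sketched in code. The Python sketch below is illustrative only: the message fields mirror the scan-task announcement of Sect. 4.1, and the `make_bid` and `assign` callbacks (hypothetical names of ours) stand in for a node's bid computation and the brokering node's bid synthesis, respectively.

```python
from dataclasses import dataclass

@dataclass
class Announcement:
    capability: str    # required device capability, e.g. "scan"
    t1: int            # start of the requested interval
    t2: int            # end of the requested interval
    duration: int      # required scan duration d (<= t2 - t1 + 1)
    region: tuple      # (x1, y1, x2, y2): rectangular region to be covered

@dataclass
class Bid:
    node_id: str
    prob_avail: float  # bidder's availability estimate for its chosen subinterval
    region: tuple      # sub-rectangle of the request region the bidder can cover

def broker(announcement, candidate_nodes, make_bid, assign):
    """One Announce/Bid/Assign round for a single RF task.

    make_bid(node, announcement) returns a Bid, or None if the node declines;
    assign(bids) synthesizes the responses (e.g. via a set-cover heuristic)
    and returns the winning bids.
    """
    bids = []
    for node in candidate_nodes:          # 1. announce to candidate contractors
        bid = make_bid(node, announcement)
        if bid is not None:               # 2. contractors respond with bids
            bids.append(bid)
    return assign(bids)                   # 3. broker synthesizes and assigns
```

In a full tasking manager, `assign` would be instantiated with an allocation strategy such as the minimum set cover heuristic of Sect. 4.2.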

4 Managing uncertainty through biased allocation

To characterize the performance impact of basing tasking decisions on knowledge of the expected availability of different devices (as would be taken into account by the above market-based approach), we focus on a type of coverage problem that is representative of RadioMap’s target RF spectrum mapping application. The specific problem is to allocate RF scanning tasks to devices to achieve coverage of a specified spatial region. We instantiate our market-based framework with a basic, availability-sensitive tasking model and specify an allocation strategy that exploits this model. We then perform a comparative analysis of this biased allocation strategy and a second naive allocation strategy that simply assumes that devices are dedicated and available (as would likely be the strategy if tasking were being done at the application level).

4.1 Basic tasking model

To define a basic tasking model that considers the uncertainty associated with device availability, we make the following basic assumptions:
  • A node/device is modeled as a single resource that can be either busy or available (with some probability) over any requested interval. A node’s allocation status is considered to be busy over any interval where either the Tasking Manager has explicitly allocated the node to a task or the Tasking Manager otherwise knows with certainty that the node is in use by the primary mission, and free otherwise. Let \(Alloc_{n,i}\) designate the allocation status of node n at time instant i. We assume that the node task manager incrementally maintains this information \(Alloc_{n}\) as task allocation decisions are made.

  • A node’s availability state is represented as a discrete probability distribution over the tasking horizon h that is reflective of the usage patterns of the primary mission, e.g.,
    $$\begin{aligned} \langle ([t_{0}, t_{1}]\ avail \ 0.0)\ ([t_{1}+1, t_{2}]\ avail \ 0.9)\ ([t_{2}+1, h]\ avail \ 0.5) \rangle . \end{aligned}$$
    In the simplest case, this distribution can be abstracted into a single probability, e.g., \(\langle ([t_{0}, h]\ avail \ 0.8)\rangle \). Within a given probability distribution, any period \(([t_{i}, t_{j}]\ avail \ 0.0)\) is interpreted as a busy period, which implies \(Alloc_{n,k} = busy\) for \(k=i,\ldots , j\). Likewise, a period \(([t_{i},t_{j}]\ avail \ x)\), for \(x > 0\), implies that \(Alloc_{n,k} = free\) for \(k=i,\ldots ,j\), and the interval is considered available for allocation; in this case, the probability that the interval will actually be available is x. More precisely, a node’s availability state, \(Avail_n\), is represented as a dense, ordered sequence of one or more availability intervals \(AI_{n,1}, AI_{n,2}, \ldots \), where \(End (AI_{n,i}) + 1 = Start (AI_{n,i+1})\) and \(Prob_{Avail} (AI_{n,i})\) specifies the probability of availability over \(AI_{n,i}\)’s entire temporal extent. We assume that each node constructs and maintains this probability distribution from historical information.3
  • For a given scanning request interval \([t_{i}, t_{j}]\) the availability of a node n is defined as a function of \(Alloc_{n}\) and \(Avail_{n}\). Let \(AInts_{n,t_{i}, t_{j}}\) designate the set of consecutive intervals in \(Avail_{n}\) that intersect with the request interval. Then
    $$\begin{aligned} Prob_{Avail}(n, t_{i}, t_{j}) = \left\{ \begin{array}{ll} 0 &{}\quad \text {if } Alloc_{n,k} = busy \text { for some time point } k \in [t_{i}, t_{j}] \\ \prod \limits _{p \in AInts_{n,t_{i}, t_{j}}} Prob_{Avail} (AI_{n,p}) &{}\quad \text {otherwise} \end{array} \right. \end{aligned}$$
  • A node also has a current location, Loc(n), and a spatial range that it is capable of scanning, Range(n). In the context of the regional coverage task considered here, these parameters will determine the extent of the task that the node is capable of performing.

  • A Task Announcement that is sent out for bid by a tasking node has the form:
    $$\begin{aligned} Announce(scan, t_{1}, t_{2} , d, x_{1}, x_{2}, y_{1}, y_{2}), \end{aligned}$$
    • scan is the capability required of the device,

    • \([t_{1}, t_{2}]\) is the interval in which the scan is requested,

    • \(d \ \ (\le t_{2} - t_{1} + 1)\) is the required duration of the scan, and

    • points \(x_{1}, x_{2}, y_{1}, y_{2}\) delineate the rectangular region to be covered, i.e., between the min point \((x_{1}, y_{1})\) and the max point \((x_{2}, y_{2})\)

  • When a node receives a task announcement
    $$\begin{aligned} Announce(scan, t_{i}, t_{j} , d, x_{k}, x_{l}, y_{m}, y_{n}) \end{aligned}$$
    and the capabilities provided by a node n include scan, then the node’s Bid Response will be:
    $$\begin{aligned} \langle Prob_{Avail}(n, t_{k}, t_{l}), x_{p}, x_{q}, y_{r}, y_{s} \rangle , \end{aligned}$$
    where \([t_{k}, t_{l}]\) is an interval of duration d within the requested interval \([t_{i}, t_{j}]\) and points \( x_{p}, x_{q}, y_{r}, y_{s}\) indicate a rectangular sub-region of the request region that the node is able to cover.4 It is assumed that the node will return the subinterval of duration d that maximizes \(Prob_{Avail}\).
  • The contracting node will collect bids and allocate tasks to (1) maximize the portion of the requested task that is successfully executed, and (2) make efficient use of the nodes in the network (minimizing the number of nodes used).
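As a concrete rendering of the model above, the computation of \(Prob_{Avail}(n, t_{i}, t_{j})\) can be sketched in Python. This is a simplified illustration of ours, not the authors' implementation: time is discretized to integer instants, \(Alloc_n\) is a mapping from instant to status, and \(Avail_n\) is a dense list of (start, end, probability) triples.

```python
def prob_avail(alloc, avail_intervals, t_i, t_j):
    """Probability that a node is available over the request interval [t_i, t_j].

    alloc: dict mapping time instant k -> "busy" or "free" (the node's Alloc_n);
        instants absent from the dict are treated as free.
    avail_intervals: list of (start, end, p) triples covering the horizon
        densely, i.e. each interval's end + 1 is the next one's start (Avail_n).
    """
    # If any instant in the request interval is already allocated (or known
    # busy), the node cannot take the task at all.
    if any(alloc.get(k) == "busy" for k in range(t_i, t_j + 1)):
        return 0.0
    # Otherwise, multiply the availability probabilities of every interval
    # of Avail_n that intersects the request interval.
    p = 1.0
    for start, end, prob in avail_intervals:
        if start <= t_j and end >= t_i:   # interval intersects [t_i, t_j]
            p *= prob
    return p
```

For example, with \(Avail_n = \langle ([0,4]\ avail\ 1.0)\ ([5,9]\ avail\ 0.9) \rangle\) and no allocated instants, a request over [3, 7] intersects both intervals and yields 1.0 × 0.9 = 0.9.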

4.2 Task allocation strategies

We compare the performance of two allocation strategies, each a variant of the basic Minimum Set Cover heuristic originally presented in [2] (depicted in Fig. 1). This heuristic allocation scheme is proposed as a basic strategy for allocating devices to support coverage tasks, and is embedded in the general Coverage Task Template that is used to decompose coverage task requests (jobs) into constituent RF-scan tasks and to combine bids into a set of task assignments. This minimum set cover heuristic is attractive because it provides an efficient (greedy) allocation procedure with a guaranteed bound on performance quality (i.e., distance from the optimal solution). In fact, it can be shown that this heuristic achieves the best approximation bound possible for this NP-hard problem [2].
Fig. 1

Minimum set cover heuristic

We define two variants of this minimum set cover task allocation strategy for our analysis:
  • Naive Allocation—The Naive Allocation strategy applies the procedure of Fig. 1 directly, with geometric reasoning on rectangles used to implement \(|Additional-Cover_{bid} |\) and \(Complete-Coverage_{Bid-Set}\). More precisely, \(|Additional-Cover_{bid} |\) is defined as \({area_{bid} \over uncovered}\), where \(area_{bid}\) is the uncovered area that the node (bid) can cover and uncovered is the total area that remains uncovered. Essentially, the bid that covers the most additional uncovered area of the request region is selected for assignment on any given iteration (selecting randomly in the case of ties), and both uncovered and \(|Additional-Cover_{bid} |\) are then recomputed. The strategy terminates when either the entire request region is covered or the set of bids is exhausted (in which case the selected task assignments will provide only a partial cover). This strategy allocates nodes strictly on the basis of their coverage capabilities and does not consider node availability information. It is indicative of a strategy that one could expect to be applied if tasking is performed at the application level where there is no visibility of an individual node’s availability.

  • Biased Allocation—The Biased Allocation strategy adopts the same implementation of \(Additional-Cover_{bid}\) and \(Complete-Coverage_{Bid-Set}\), but argmax is defined as \(|Additional-Cover_{bid} | \times Prob_{Avail}(n_{bid}, t_{i}, t_{j})\). In other words, BiasedAllocation is a weighted minimum set cover formulation (weighted by each node’s probability of being available). This strategy represents our proposed tasking model, where availability information is monitored and modeled, and then exploited during task allocation.

4.3 Experimental design

To evaluate the performance of these coverage task allocation strategies, a number of problem scenarios were generated. Each generated scenario conformed to the following constraints:
  • The area of interest was defined to have a spatial extent of 6400 \(\times \) 6400 m, and a temporal extent of 10 s.

  • A specified number of taskable nodes were randomly distributed over the spatial extent, with node scanning range fixed at a 1500 m radius. This assumption is consistent with a tactical communication radio operating outdoors under typical assumptions.

  • Node availability probabilities were uniformly distributed over [0.0, 1.0] (so that 50 % of the nodes can be expected to be available at any given point).

  • One or more coverage jobs were randomly distributed over both the spatial and temporal extent of the area of interest.

Following the above scenario generation framework, a number of experiments were defined by varying scenarios along a specific dimension (either by the number of nodes, or by the number of requests). For each value of the varied dimension, 10 problem scenario instances were generated and solved using each task allocation strategy. The task assignments produced were then evaluated and an average score was derived for each strategy for each value of the varied dimension.

To evaluate a given task assignment, its execution was simulated. This was done in a simple way. For each assigned node, we sample from its availability distribution to determine whether its assigned task is successfully completed. We combine all results to determine how much of each coverage request (job) was achieved. This simulation procedure is executed 100 times and the results of these trials are averaged.

The score for a given coverage request r is
$$\begin{aligned} score(r) = {ac(r) \over ar(r)}, \end{aligned}$$
where ac(r) is the area of the region covered and ar(r) is the area of the region requested. The score for a given set Req of n requests, score(Req), is then simply the average of the individual request scores.
We also consider the case of prioritized requests, by assuming that earlier submitted requests are weighted higher. In the experiments assuming prioritized requests reported below, we simulate earlier arrival by simply appealing to the order in which a set of requests is processed. Specifically, the score for a given set Req of n prioritized requests is
$$\begin{aligned} pscore(Req) = \frac{\sum \nolimits _{i=1}^{n} score(r_{i}) \times (1 + n - i)}{n(n+1)/2}. \end{aligned}$$
In all cases, the maximum attainable score is 1.

4.4 Results

Single mapping request: As a first experiment we consider the scenario of a single request to construct a map of the entire region of interest, simulating the principal application of interest to the RadioMap program. We examine the basic tradeoff between allocating tasks to devices directly at the application level where there is no visibility of availability (Naive Allocation) or relying on a task management middleware to allocate tasks to devices, where models of device availability can be efficiently maintained and exploited (Biased Allocation). We assume that node availability over the horizon of interest is modeled as a single probability.5 We vary the number of nodes from 25 to 100 in increments of 25.

Figure 2a shows the comparative performance of the Biased Allocation and Naive Allocation strategies in this setting. As can be seen, there is significant advantage to biasing task allocation decisions on knowledge of node availability, and this advantage increases as the number of devices in the region of interest is increased. T-test results confirm significance at all node levels (with p values ranging from 0.002 to 0.000008). Task allocation without visibility of node availability results in significantly lower coverage performance, and this performance remains relatively flat as the number of nodes is increased.
Fig. 2

RF mapping with and without visibility of node availability. a Percentage of request covered. b Number of devices used

Figure 2b shows the average number of tasks assigned to accomplish the mapping task under each allocation strategy. With relatively few taskable devices, Biased Allocation results in the use of 1–2 more devices than the number used by Naive Allocation. This makes sense as the former is trading off extent of coverage with the probability of the device being available. As the number of nodes in the mapping region increases, it is more likely to find comparable coverage without sacrificing expected availability, hence the numbers of devices tasked by each strategy converge.

The effect of increased demand: As a second experiment, we consider the extent to which the load on the nodes in the network (i.e., number of requests) affects the leverage that knowledge of node availability provides. We fix the number of nodes/devices in the region of interest at 25, and again assume that node availability is modeled as a single probability. We assume that individual scanning requests are rectangular regions ranging between 200–800 m in both dimensions, and vary the number of requests from 25 to 100 in increments of 25.
Fig. 3

Task allocation performance under increasing network load. a Equally weighted requests. b Prioritized requests

Figure 3a shows the comparative performance of Biased Allocation and Naive Allocation in this setting. The Biased Allocation scheme again outperforms the Naive scheme, but the advantage decreases as the number of requests increases. T-test results confirm significance at all request levels (with p values ranging from 0.000000002 to 0.002). As the tasking capacity of nodes with high \(Prob_{Avail}\) is allocated, the leverage of availability information decreases, and as overall demand approaches the total tasking capacity of the nodes, the advantage is essentially neutralized. That is, as tasks are allocated, the time intervals \([t_{i}, t_{j}]\) with higher \(Prob_{Avail}(n,t_{i}, t_{j})\) are consumed, leaving only time slots with lower \(Prob_{Avail}\). As the number of requests increases, all time slots are allocated regardless of the allocation strategy. In this saturation case, there is no difference between the strategies, since all requests are considered of equal priority. If requests are prioritized (as shown in Fig. 3b), then the same trend is present but the advantage decreases at a slower rate.6

The effect of increased tasking capacity: For the third experiment, we consider how an increase in the tasking capacity of the network impacts the allocation strategy. We again assume that any given scanning request targets an area of between 200 and 800 m in each dimension, and that node availability is modeled as a single probability. We hold the number of requests fixed at 25 and vary the number of nodes/devices in the network from 25 to 100 in increments of 25.
Fig. 4

Task allocation performance as network size is increased. a Average percentage covered. b Average number of nodes allocated

Figure 4a shows the comparative performance of Biased Allocation and Naive Allocation in this setting. As the tasking capacity (number of nodes) in the network is increased, there is increasing opportunity to find taskable nodes with high expected availability, and with sufficient capacity the Biased Allocation scheme approaches complete coverage. Performance of the Naive Allocation scheme, alternatively, remains flat as the number of nodes in the network is increased. T-test results confirm significance at all node levels (with p values ranging from 0.00003 to 0.0000001).

Figure 4b shows the number of nodes allocated under each strategy as the number of nodes in the network was increased. At network sizes that are small relative to the number of requests, Biased Allocation often utilizes an extra node. But at larger network sizes, the difference in the number of nodes utilized by each allocation strategy decreases significantly.

5 Adding redundancy

It is clear from the results just presented that when allocating RF mapping tasks to devices with uncertain availability, use of knowledge about their expected availability can lead to better overall results. One drawback of the Biased Allocation strategy just analyzed, however, is that it is a single-pass coverage algorithm. Using its greedy heuristic, it selects the bid at each step that covers the largest remaining uncovered area (weighted by the \(prob_{avail}(n,t_{i}, t_{j})\) of the node n that submitted the bid). Since the algorithm is focused simply on producing a complete cover, it is quite possible that some sub-regions are covered by nodes with limited expected availability. If there are other nodes that have not yet been tasked, then it may be possible to increase the probability of successful execution by adding redundancy. In this section we explore this possibility.
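The greedy step just described can be sketched in a few lines (a minimal sketch, assuming bids arrive as a node → (footprint cells, availability) map over a hypothetical grid discretization; the actual bid protocol also carries temporal information, which is omitted here):

```python
def biased_greedy_cover(request_cells, bids):
    """Greedy biased allocation: repeatedly accept the bid that covers the
    largest remaining uncovered area, weighted by the bidding node's
    probability of availability. `bids` maps node -> (cells, prob_avail),
    where cells is a set of grid cells (a hypothetical discretization)."""
    uncovered = set(request_cells)
    accepted = []
    while uncovered:
        chosen = {n for n, _, _ in accepted}
        best = max(
            ((node, cells, p) for node, (cells, p) in bids.items()
             if node not in chosen),
            key=lambda b: b[2] * len(b[1] & uncovered),
            default=None,
        )
        if best is None or best[2] * len(best[1] & uncovered) == 0:
            break  # no remaining bid adds useful weighted coverage
        accepted.append(best)
        uncovered -= best[1]
    return accepted, uncovered
```

Note that a low-availability bid with a large footprint can still lose to a high-availability bid with a smaller one, which is exactly the bias being analyzed.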

5.1 Probability maps and redundant allocation

The use of probabilistic information about task execution uncertainty to strengthen plans was first explored in [14] in the context of constructing more robust disaster response plans. In this work, task failure worked against the accomplishment of goals (e.g., survey a damaged site, repair a broken power line) and the accumulation of utility/rewards, and a probabilistic analysis of a generated tasking plan was applied post-hoc to identify weak points and strengthen them through addition of redundant tasks. In our current context, the quality of the spectrum map that is returned by the network in response to a given application request is tied directly to the expected availability of the devices that have been assigned to carry it out.7 We can characterize the strength of a given cover by computing a Probability Map, which, for each overlapping subregion r, combines the \(Prob_{avail}(n,t_{i}, t_{j})\) of nodes n whose cover includes r to produce the probability \(Prob_{suc}(x,y)\) that any point (x, y) in the area of interest will be successfully mapped, and by extension, the \(Prob_{suc}(r)\) of each uniquely covered subregion r. More specifically, let N be the set of nodes contributing to the cover of a uniquely covered region r. Then,
$$\begin{aligned} Prob_{suc}(r)&= \sum _{n_i \in N} Prob_{avail}(n_i,\ldots ) - J_2 + J_3 - \cdots + (-1)^{|N|+1} J_{|N|}, \quad \text {where} \\ J_s&= \sum _{\{n_1,\ldots ,n_s\} \subseteq N} Prob_{avail}(n_1,\ldots ) \cdots Prob_{avail}(n_s,\ldots ) \end{aligned}$$
Here, each \(J_s\) is the sum of the joint probabilities of every subset of nodes in N of size s. Figure 5 graphically depicts a cover with overlapping subregions and the resulting probability map.
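Assuming independent node availabilities (as the formulation does), the inclusion–exclusion sum above can be computed directly; a minimal sketch:

```python
from itertools import combinations
from math import prod

def prob_suc(p_avail):
    """Union probability that at least one covering node is available,
    computed by inclusion-exclusion over node subsets (the J_s terms).
    `p_avail` is the list of Prob_avail values of the covering nodes."""
    total = 0.0
    for s in range(1, len(p_avail) + 1):
        # J_s: sum of joint probabilities over every subset of size s
        j_s = sum(prod(sub) for sub in combinations(p_avail, s))
        total += (-1) ** (s + 1) * j_s
    return total
```

For independent events this agrees with the complement form \(1 - \prod_i (1 - p_i)\), which is a useful cross-check.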
Fig. 5

A sample cover for a mapping task. a Overlay of accepted node bids. b Resulting probability map (each uniquely covered rectangle r has a specific \(Prob_{suc}(r)\))

We can use this characterization of the strength of a generated cover in conjunction with our Biased Allocation strategy to define an extended cover strengthening procedure. The basic iterative procedure is given in Fig. 6. It proceeds by first computing an initial cover of the requested region as before, and then performing some number of improvement passes. Three parameters are used to control the process:
  • n_weakest—To determine which subregions to improve on a given pass, Select-Weakest-Regions assigns each subregion r a rank of \((1 - Prob_{suc}(r)) \times {area_{r} \over uncovered}\) , and the \(n\_weakest\) top ranked subregions are selected and composed. The biased allocation procedure is then reinvoked to add redundant coverage to this composite region.

  • threshold—For some requests, there may be a tradeoff between achieving better coverage and expending additional network resources. The threshold parameter (\(0 \le threshold \le 1\)) specifies an upper bound on the \(Prob_{suc}(r)\) that is required for any region r. In the experiments considered below, we associate different threshold values with different classes of request priorities.

  • max_iteration—Finally, the maximum number of improvement passes is set at max_iteration.
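The control flow of the strengthening procedure, parameterized by these three values, might be sketched as follows (Select-Weakest-Regions scoring and the re-invocation of biased allocation are stubbed as caller-supplied functions; the ranking and bidding details are omitted):

```python
def strengthen_cover(cover, n_weakest, threshold, max_iteration,
                     rank_region, reallocate):
    """Iterative cover strengthening in the style of Fig. 6. `cover` maps
    each subregion to its Prob_suc; `rank_region` stands in for the
    Select-Weakest-Regions ranking and `reallocate` for re-invoking the
    biased allocation procedure on the composed weak regions."""
    for _ in range(max_iteration):
        weak = sorted((r for r in cover if cover[r] < threshold),
                      key=rank_region, reverse=True)[:n_weakest]
        if not weak:
            break  # every subregion already meets its threshold
        improved = reallocate(weak)  # redundant coverage for weak regions
        if not improved:
            break  # no untasked node can add useful coverage
        cover.update(improved)
    return cover
```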

Fig. 6

Cover strengthening procedure

5.2 Experimental analysis

To analyze the performance impact of providing some amount of redundant coverage, a set of comparative experiments was performed, using the same experiment design that was used earlier for the analysis of biased allocation (see Sect. 4.3). In this case, we focus on a comparison of the Cover Strengthening procedure specified in Fig. 6 and our basic Biased Allocation scheme.

Single mapping request: As an initial experiment, we consider the value of redundancy in support of a single mapping request. We first perform a preliminary analysis to determine a reasonable setting for \(n\_weakest\). We assume that only a single improvement pass is performed (i.e., max_iteration\(= 1\)), fix the number of nodes in the network at 50 and vary \(n\_weakest\) (the number of subregions selected to improve) in increments of 20. We compute the average performance over 10 randomly generated problem scenarios for each setting of \(n\_weakest\).

The results are shown in Fig. 7. First, we observe that biased allocation with redundancy clearly provides improvement in coverage over biased allocation alone (which corresponds to \(n\_weakest = 0\)). We also see that the most significant improvement comes by adding redundancy to subregions with the lowest probability of success (89 % of total gain is achieved at the setting \(n\_weakest = 20\)). As \(n\_weakest\) is increased, the remaining unallocated nodes become less likely to provide additional useful coverage (10 of 15 possible nodes are added by \(n\_weakest = 20\)), and redundancy has a diminishing effect on performance.
Fig. 7

Percentage of request covered and number of nodes allocated for increasing values of \(n\_weakest\), the number of subregions to improve (50 nodes total). a Coverage performance. b Number of nodes allocated

Fig. 8

Performance impact of redundancy on a single mapping request (\(n\_weakest = 20\)). a Coverage performance. b Number of nodes allocated

Following this observed behavior, we fix \(n\_weakest = 20\) and consider the performance benefit of redundancy as the number of nodes in the network is increased. We vary the number of nodes from 25 to 100 in increments of 25. To provide an additional comparison to the best achievable coverage, we also evaluate a third strategy called All that simply allocates all bids received to the mapping request. Note that for problems involving just a single request, All will produce the optimal coverage solution. As before, we compute the average performance over 10 randomly generated scenarios.

As can be seen by the results depicted in Fig. 8, the use of redundancy boosts the baseline performance of biased allocation to near optimal coverage (e.g., from 0.041 % deviation from the optimal to 0.001 % deviation in the case of 25 nodes; from 0.042 to 0.007 % deviation in the case of 100 nodes). T-test results confirm significance at all levels (with values ranging from 0.000009 to 0.005 as the number of nodes is increased). At all levels, this performance boost is achieved using just a small fraction of additional nodes (e.g., 1/4 in the case of 100 nodes).

Multiple mapping requests: As a second experiment, we assess the impact of redundancy as competing demand for network resources is increased. We fix the number of network devices at 25 (randomly distributed over the region of interest as usual). We consider 2 and 3 request problem scenarios that are designed to be non-overlapping but proximal (i.e., close enough to contend for network resources). We vary \(n\_weakest\) in increments of 10, and, once again assume that only a single improvement pass is performed.
Fig. 9

Performance impact of redundancy for competing mapping requests over increasing values of \(n\_weakest\). a Two requests. b Three requests

Table 1

Number of nodes allocated to each request

Redundancy (\(n\_weakest=20\))

2-Request scenario
   1st request
   2nd request

3-Request scenario
   1st request
   2nd request
   3rd request
The comparative results are shown in Fig. 9. The number of nodes assigned to each request is given in Table 1. It can be seen that a small amount of redundancy yields performance improvement over the baseline Biased Allocation strategy, with the advantage dissipating for later requests. At the same time, a large value for \(n\_weakest\) can result in less coverage than the baseline, due to over-allocation to earlier requests. Notice that the All strategy is clearly suboptimal in the multiple request case, and illustrates the extreme case of eager over-allocation. By allocating all bidding nodes to the first request it sees, fewer nodes are left with availability to support subsequent requests (see the relative imbalance in nodes allocated to each request in Table 1). Consequently, subsequent requests are not well covered, bringing overall performance down.

Multiple requests with targeted redundancy: As a final experiment, we consider a more structured approach to redundancy. We associate different performance thresholds with different classes of request. Specifically, we assume the existence of 3 priority classes—critical, non-critical and optional—and associate threshold values of 0.8, 0.5 and 0.0 respectively with these classes. We define a new scoring function for assessing the strength of a given cover in this setting:
$$\begin{aligned} Tscore(Req) = \frac{\sum _{i=1}^{|Req|} \begin{cases} score(r_{i}) &{} \text {if } score(r_{i}) > threshold(r_{i}) \\ 0 &{} \text {otherwise} \end{cases}}{|Req|} \end{aligned}$$
where Req is the set of input requests.
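Assuming each request carries its achieved score and its priority-class threshold, the scoring rule reduces to a thresholded average; a minimal sketch:

```python
def tscore(scored_requests):
    """Tscore over the input requests: each element is a
    (score, threshold) pair, and scores at or below the request's
    priority-class threshold contribute nothing to the average."""
    return sum(s for s, th in scored_requests if s > th) / len(scored_requests)
```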
We consider a 3-request scenario in which the 1st request is optional, the 2nd request is non-critical and the 3rd request is critical, and a 25-request scenario where every 5th request is critical, every 3rd request is non-critical and the rest are optional. In this experiment, we assume that improvement passes are repeatedly performed until no further improvement is possible.
Fig. 10

Performance impact of redundancy for prioritized mapping requests. a 3 requests (25 nodes). b 25 requests (75 nodes)

The overall performance boost observed for both scenarios is given in Fig. 10. Examining the class-specific performance achieved for the 25-request scenario (shown in Table 2), it is clear that use of different, priority-specific performance thresholds can lead to better distribution of redundancy among competing application requests.
Table 2

Coverage performance achieved for requests in each priority class

Priority class
6 Performance analysis

The results of the previous two sections indicate the inherent advantage of a distributed representation that allows incorporation of information about node availability into allocation decisions. Most basically, the results show that our allocation procedure, when biased by node availability information, produces more effective coverage of scanning requests over time than decisions based strictly on node location information (as would be the case if end customers of the network were directly responsible for node allocation). Further, the results show that information about node availability also provides an effective basis for adding sensing redundancy, and boosting the overall level of coverage that is obtained. At the same time, all of these results assume use of an underlying greedy minimum set cover heuristic to generate node coverage assignments. Although this heuristic was selected principally for its known theoretical properties as an approximation algorithm [2], the results of the previous sections give no indication of the performance of this heuristic algorithm in practice from an optimization perspective. In this section, we attempt to provide some insight into this question.

The constraints of the RadioMap application setting require an incremental, real-time allocation process. Requests are expected to arrive dynamically over time and typically with very little lead time (e.g., give me a scan in area x sometime within the next t seconds). Communication bandwidth is also tight, since communication impacts node availability, and hence it is not realistic for a single node to effectively maintain a centralized view of the overall (projected) capacity of the network over time. Recognizing these constraints, we consider a variation of the problem below in which all requests can be known in advance and the communication bandwidth exists to solve the problem centrally. We specify a Mixed Integer Linear Program (MILP) for optimally solving this static analog of the incremental, real-time problem to establish an upper bound on expected performance. We then compare the upper bound solutions obtained to those generated by our minimum set cover heuristic allocation strategy.

6.1 MILP formulation

To provide an upper bound solution, we focus exclusively on optimizing expected coverage of a set of m requests by n nodes (devices) over a time horizon of the next t time ticks. The region of interest is \(D \times D\) meters square. Each request j specifies an earliest start time \(est_{j}\), a latest finish time \(lft_{j}\), and the required scan duration \(d_j\) as before. A request j’s coverage requirement is expressed as a binary column vector \([R_j]_{D^2 \times 1}\), where \(R_j(h,1)\) indicates whether spatial location \((h \bmod D, \lfloor h/D \rfloor )\) is to be covered.

A node i has a probability of availability \(P_i \in [0,1]\) over the time horizon \(1, \ldots , t\) and a coverage footprint, which is similarly expressed as a binary column vector \([C_i]_{D^2 \times 1}\), where \(C_i(h,1)\) indicates whether spatial location \((h \bmod D, \lfloor h/D \rfloor )\) is within node i’s range.8
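For concreteness, the linear index h used by these column vectors maps back to grid coordinates; a small helper (the floor on the second coordinate is implicit in the text):

```python
def to_coords(h, D):
    """Map the linear index h of the D^2-element column vectors back to
    grid coordinates (h mod D, floor(h / D)) on the D x D grid."""
    return (h % D, h // D)
```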

Let \(X_{i,j,k}\) be a binary decision variable that designates whether node i starts to provide coverage for request j at time k. \(X_{i,j,k}\) is subject to the following constraints:
  • Single scan—A node i can service a request j at most once over the time horizon
    $$\begin{aligned} \sum _{k=1}^t X_{i,j,k} \le 1, \quad \forall i = 1, \ldots , n; \quad \forall j = 1, \ldots , m \end{aligned}$$
  • Request window—A node scan must occur within the request window
    $$\begin{aligned} \begin{aligned} \sum _{k=1}^{est_j -1} X_{i,j,k}&= 0, \quad \forall i = 1, \ldots , n; \quad \forall j = 1, \ldots , m \\ \sum _{k=lft_j - d_j +1}^t X_{i,j,k}&= 0, \quad \forall i = 1, \ldots , n; \quad \forall j = 1, \ldots , m \\ \end{aligned} \end{aligned}$$
  • Temporal separation—A node can only service a single request at a time. Let O designate the set of request pairs \(<p,q>\) for which \(\{st_p,\ldots ,ft_p \} \cap \{st_q,\ldots ,ft_q \} \ne \emptyset \). Then
    $$\begin{aligned} \begin{aligned} X_{i,p,v} + X_{i,q,w}&\le 1, \quad \forall <p,q> \, \in O, \\ \forall v,w&= 1, \ldots , t \, \text {such that} \, v + d_p \ge w, \quad \forall i = 1, \ldots , n \\ \end{aligned} \end{aligned}$$
  • Availability weighted coverage—Assume \(Y_{l,j} \in [0,1]\) is the probability of coverage at location \(l \in \{1,\ldots ,D^2\}\) for request j. To account for the fact that node scanning regions can spatially overlap and that overlaps will affect \(Y_{l,j}\), we first model node scanning overlaps. Let \(Q_s\) be the set of all possible node combinations of size s. We define a binary matrix \([I_{s}]_{D^2 \times |Q_s|}\) for each \(Q_s\), where \(|Q_s| = \frac{n!}{s! (n-s)!}\) and \(I_{s}(h,c)\) indicates whether the node set c overlaps in scanning coverage at location h.9 Next, we define a matrix of binary variables \([OC_s]_{|Q_s| \times m}\) for each \(Q_s\), where \(OC_s (c,j)\) indicates whether combination c is actually providing overlapping coverage for request j. For any node i in combination c, let variable \(x_{i,j} = \sum _{k=est_j}^{lft_j - d_j +1} X_{i,j,k}\). Then, the following constraints must hold \(\forall s = 2, \ldots ,n\) and \(\forall c \in Q_s\):
    $$\begin{aligned} \frac{\sum _{i \in c} x_{i,j}}{s} \ge OC_s (c,j) \quad \text {and} \quad OC_s (c,j) \ge \frac{\sum _{i \in c} x_{i,j} - (s - 1)}{s} \end{aligned}$$
    Equation 5 specifies a means of detecting whether all nodes in a given node combination c are simultaneously providing coverage to request j. For example, if combination c contains nodes 1 and 2 (i.e., \(c = \{1,2\}\)) then only if both \(x_{1,j}\) and \(x_{2,j}\) are 1 (i.e., only if they are both “ON”), will \(OC_{2}(c , j)\) be 1; else \(OC_{2}(c , j)\) will be 0. Now, assume that for any two vectors \(A = [a_x, a_y, a_z]\) and \(B = [b_x, b_y, b_z]\), the operation \(A.B = [a_xb_x, a_yb_y, a_zb_z]\). Then, following Eq. 1 for computing the union of n probabilities, we have the following coverage constraints:
    $$\begin{aligned} R_j.\left( [C]\, [S_j]_{n \times 1} - [J_j^2]_{D^2 \times 1} + [J_j^3]_{D^2 \times 1} - \cdots + (-1)^{n+1} [J_j^{n}]_{D^2 \times 1} \right) \ge R_j.[Y_j]_{D^2 \times 1} , \quad \forall j= 1,\ldots ,m, \end{aligned}$$
    $$\begin{aligned} \begin{aligned} S_j(i,1)&= \sum _{k=1}^t P_i X_{i,j,k} \, , \\ J_j^s(l,1)&= \sum _{c \in Q_s} OC_s(c,j)\, I_{s}(l,c)\, P_{n_1} \cdots P_{n_s} \, , \; n_i \in c \, , \; \text {and} \\ Y_j(l,1)&= Y_{l,j} \\ \end{aligned} \end{aligned}$$
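The effect of the two linearization inequalities on a binary \(OC_s(c,j)\) can be checked numerically; a minimal sketch of the value they force:

```python
def forced_oc(x_vals):
    """Value of OC_s(c, j) forced by the two linear inequalities of Eq. 5
    for binary x values: (sum - (s - 1)) / s <= OC <= sum / s. The lower
    bound exceeds 0 only when every x is 1, and the upper bound drops
    below 1 as soon as any x is 0, so a binary OC equals the logical AND."""
    s = len(x_vals)
    lower = (sum(x_vals) - (s - 1)) / s
    return 1 if lower > 0 else 0
```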
The overall objective is to minimize
$$\begin{aligned} \sum _{j=1}^m \frac{- R_j^T Y_j}{R_j^T R_j}. \end{aligned}$$
where \(R_j^T Y_j\) is the expected coverage of request j and \(R_j^T R_j\) corresponds to complete coverage of j.
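The full MILP is too large to reproduce executably here, but on toy instances the same objective can be checked by exhaustive search; a minimal sketch under strong simplifications (one shared time slot in place of the temporal constraints, and independence as assumed in the formulation):

```python
from itertools import product
from math import prod

def optimal_static_assignment(requests, nodes):
    """Brute-force optimum of the static coverage objective on toy
    instances: each node serves at most one request (-1 = unassigned; a
    single shared time slot stands in for the temporal constraints).
    `requests` is a list of cell sets; `nodes` a list of
    (footprint_cells, prob_avail). A cell's expected coverage is
    1 - prod(1 - p) over assigned nodes whose footprint contains it."""
    best_score, best_assign = -1.0, None
    for assign in product(range(-1, len(requests)), repeat=len(nodes)):
        score = 0.0
        for j, req in enumerate(requests):
            covered = sum(
                1 - prod(1 - p for (cells, p), a in zip(nodes, assign)
                         if a == j and cell in cells)
                for cell in req)
            score += covered / len(req)
        if score > best_score:
            best_score, best_assign = score, assign
    return best_score, best_assign
```

The enumeration is exponential in the number of nodes, which is precisely why a MILP solver is needed beyond toy sizes.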

6.2 Results

To analyze the effectiveness of our proposed task allocation procedure, we compare its performance to the upper bound solutions produced by a MILP solver on instances of the static approximation of the task allocation problem formulated above. For each problem instance generated, we assumed an overall region of interest of \(960 \times 960\) m, a homogeneous node coverage range of \(225 \times 225\) m, and a rectangular request size ranging from 30 to 120 m in each dimension. Request windows were generated randomly over a 10 s time horizon, with request durations ranging from 2 to 4 s. Sets of 9 problem instances each were generated with 10, 15, 20, and 25 nodes, distributed randomly over the \(960 \times 960\) grid. All problem instances contained 20 requests. For each problem size, 9 instances were obtained by generating three node configurations and three request configurations, and then taking the cross-product.10

Each problem set was solved by the three procedures: a MILP solver, the biased-allocation procedure introduced in Sect. 4.1, and the extended biased-allocation scheme with redundancy described in Sect. 5. The MILP solver was implemented using IBM CPLEX, and for this procedure each problem instance was given a 30 min solution time limit. The redundant allocation procedure was run with \(n\_weakest = 20\), \(threshold = 1.0\), and \(max\_iteration\) unbounded. Each generated solution was evaluated by computing score(R) in the same manner as before. For each solution, the number of nodes allocated and the computation time were also collected.
Fig. 11

Comparison to optimal on a static approximation of Radiomap problem. a Coverage performance. b Number of nodes allocated

Figure 11 shows a performance comparison of each method across the four problem sets. For both the 10 and 15 node problem sets, the results reported are the average values across all problem instances. In the 20 node case, 2 instances were not optimally solved by the MILP solver within the 30 min time limit and were excluded. For the 25 node problem set, 4 instances were not optimally solved within the time limit and were likewise excluded. With respect to coverage, the performance of the basic biased minimum-set-cover heuristic ranges from 91.5 % of optimal (10 nodes) to 85.5 % (25 nodes).11 Adding redundancy raises the coverage performance to between 92.2 % (10 nodes) and 94.4 % (25 nodes) of optimal. As would be expected, since the MILP formulation places no constraints on the number of nodes used, both heuristic procedures allocate significantly fewer nodes. To underscore the inappropriateness of the MILP solution to the actual dynamic Radiomap problem, its comparative solution times are given in Fig. 12. While both minimum set cover procedures solve all problem instances in milliseconds, the MILP solving time ranges from an average of 15 s (10 node problems) to 254 s (25 node problems).
Fig. 12

Solving times for instances of a static approximation of Radiomap problem

To characterize the effectiveness of the minimum-set-cover procedures in minimizing the number of nodes allocated, we perform a second experiment with a slightly modified version of the MILP solver. Specifically, for each problem instance, we take the final availability-weighted coverage value produced by each minimum set cover procedure for each request j, i.e.,
$$\begin{aligned} Prob_{Avail}(C_j,est_j,lft_j) \times \frac{ac(j)}{ar(j)}, \end{aligned}$$
where \(C_j\) is the set of nodes allocated to cover j, and \(\frac{ac(j)}{ar(j)}\) is the portion of j covered as before. These values are introduced into the MILP as level of coverage constraints, by substituting them into the right hand side of equation (6). The MILP objective is then redefined to minimize \(\sum _{i=1}^n \sum _{j=1}^m x_{i,j}\). Thus, the revised MILP computes the minimal number of nodes needed to achieve the level of coverage obtained by the minimum set cover procedure on a given problem instance.
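The revised objective can likewise be checked by exhaustive search on toy instances: find the smallest node set whose expected coverage meets the level achieved by the heuristic. A minimal sketch under the same single-slot simplification:

```python
from itertools import combinations
from math import prod

def min_nodes_for_coverage(req_cells, nodes, target):
    """Smallest node subset whose availability-weighted coverage of a
    request reaches `target` (e.g., the level achieved by the set cover
    heuristic). `nodes` is a list of (footprint_cells, prob_avail)."""
    for k in range(len(nodes) + 1):
        for subset in combinations(nodes, k):
            cov = sum(1 - prod(1 - p for cells, p in subset if c in cells)
                      for c in req_cells) / len(req_cells)
            if cov >= target - 1e-9:
                return k, subset
    return None  # target unreachable even with all nodes
```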
Fig. 13

Comparison of number of nodes allocated to achieve the same coverage performance. a Minimum required nodes (biased solution). b Maximum required nodes (redundancy added)

Figure 13 shows the average number of nodes allocated by the MILP solver over the four problem sets with the constraint that a specific level of coverage is required. Figure 13a compares the number of nodes allocated with that of the biased allocation procedure when the level of coverage achieved by this latter procedure is specified as the coverage constraint. Figure 13b provides an analogous comparison with the extended redundant allocation procedure. Using the number of nodes that could possibly cover a portion of a request j as a worst case baseline for computing the percentage deviation from the minimal number of nodes needed to cover a given request, the number of nodes allocated by the biased allocation procedure ranges over the four problem sets from 4.6 % deviation from optimal (at 10 nodes) to 0 % deviation (at 25 nodes). In this case, the biased allocation procedure is able to be more selective as the number of available nodes is increased and chooses nodes that provide broader coverage of their target requests. In the case of biased allocation with redundancy, the trend appears similar (i.e., at 25 nodes the difference in numbers of nodes allocated starts to close); however, the percentage deviation from optimal is greater. See Tables 3 and 4 for further details.
Table 3

Percent deviation from minimal number of nodes needed to achieve level of coverage (biased allocation)

Total nodes

Possible covering nodes

Used by MILP

Used by biased alloc.

% deviation from optimal
Table 4

Percent deviation from minimal number of nodes needed to achieve level of coverage (redundant allocation)

Total nodes

Possible covering nodes

Used by MILP

Used by biased alloc.

% deviation from optimal

7 Conclusions and future work

In this paper, we introduced and analyzed an approach to solving the problem of allocating RF spectrum mapping tasks to a set of devices with uncertain availability, due to the fact that devices must be used opportunistically and can only execute allocated tasks when they are not being used for their primary purpose. Because of this uncertain availability, and the fact that multiple mapping and related secondary applications may be competing for use of these devices, we advocated a distributed, market-based approach to task allocation that is encapsulated as a network tasking service and embedded on constituent network devices. One inherent advantage of this approach is that it allows knowledge of device availability to be efficiently acquired and used to bias the allocation process. A basic tasking model was developed for a representative class of coverage tasks and was used to analyze the performance leverage that device availability information can provide. Experimental results were presented showing first that (availability) biased allocation offers significant advantage over a naive allocation strategy that would likely be necessary if RF mapping applications were to attempt to directly task devices, and second, that further performance gains are possible by additionally using availability information to guide the introduction of redundant mapping tasks.

To analyze the optimizing performance of the basic tasking model developed, which relies on a previously proposed minimum set covering heuristic, MILPs for optimally solving two static versions of the RF spectrum mapping allocation problem were developed. Although not practically applicable to solving the actual dynamic, continuous problem, these procedures were used to generate upper bound solutions for comparison to the solutions generated by our heuristic tasking model. With respect to maximizing availability-weighted coverage of input requests, the baseline biased-allocation procedure was found to generate solutions within 85.5–91.5 % of the upper bound solutions, with performance degrading as the number of available nodes for a given request set was increased. Adding redundancy was found to boost these coverage levels to within 92.2–94.4 % of upper bound solutions, in this case with performance increasing as the number of nodes was increased. To examine performance from the standpoint of minimizing the number of nodes allocated, a second MILP was then specified to minimize this value subject to a level of coverage constraint. Problems were resolved using the levels of coverage achieved by both heuristic procedures. For the baseline biased-allocation procedure, the number of nodes allocated was seen to vary with problem size from 4.6 to 0.0 % deviation from the upper bound optimal. Higher percentage deviations were observed for the redundant allocation scheme, but the trend was similar. With an increasing number of nodes for a given number of requests, the heuristic procedures become increasingly more selective and approach (or equal) the optimal upper bound.

There are several aspects of our approach that are limiting and warrant further research:
  • 3D coverage—The tasking model presented in this paper utilized a simple model of regional coverage. However, RF spectrum mapping tasks also require coverage along the frequency dimension, and the development of a more realistic 3D tasking model where requests can additionally target specific frequency intervals is a short-term objective. We believe that with proper attention to normalization, this extension can be accomplished by computing volumes instead of areas in coverage calculations.

  • Multiplexing of primary and secondary tasks—The analysis performed in this paper assumed that secondary mapping tasks that are preempted at some point during their execution by the primary mission end in failure (reducing the quality of the resulting overall spectrum map). This assumption seems reasonable for basic radios, but for other higher-end devices it may be possible to multiplex primary and secondary tasks (giving strict priority to the primary mission). In circumstances where secondary application tasks can be suspended and resumed when the primary mission has to run, a tasking model that assumes that task duration will vary as a function of device availability may be more appropriate. A second objective of future work is to develop this model and analyze its performance characteristics.

  • Node usage dependencies—Another assumption made by the tasking model presented in this paper is that device availability is mutually independent among nodes. It is entirely possible, however, that the availability of different nodes may be linked in some usage scenarios (e.g, push-to talk radio communication). While we would expect such dependencies to be reflected implicitly in the state descriptions of individual nodes as they self-monitor their availability and learn availability patterns, additional performance improvement may be possible by explicitly taking knowledge of node usage dependencies into account during task allocation. A third objective of future research is to explore this hypothesis.

  • Reactive re-allocation of failed or preempted tasks—Finally, we have ignored a major further advantage of the distributed, auction-based approach to task allocation proposed in this paper: its ability to flexibly re-allocate failed tasks to other available nodes automatically in a manner transparent to the requesting application. The same allocation process can be re-triggered upon notification of any failed task if overall application response time constraints can still be met. A final direction of our future work aims to extend our tasking models to incorporate and exploit a reactive task re-allocation capability.


In fact, our larger effort within the DARPA RadioMap program is aimed at the development of this middleware.


The reader is referred to our related work [5] for details on how this is accomplished in the highly constrained and uncertain environment where RadioMap is targeted to operate.


Specification of techniques for obtaining these models is beyond the scope of this paper, but we believe that this is a tractable problem. Within a given node, periods of availability and unavailability can be estimated by periodically measuring the activity of the RF resources, calculating a time windowed average of activity level at some temporal granularity, and normalizing the resulting values for interpretation as a probability of availability over each window. Simple clustering of adjacent windows based on similarity could then be applied to produce a more task-oriented usage profile over time. It is reasonable to assume that construction and management of such availability profiles over the immediate past can be accomplished in real-time, and if RF activity patterns exhibit any degree of continuity, this basic model should enable reliable prediction of near-term future availability. However, the opportunity also exists to improve estimation of availability by discovering and exploiting knowledge of the actual activity patterns associated with the node’s primary mission. For example, a particular device may exhibit infrequent activity during certain periods of the day (e.g., over lunch hour). If additional feature data (e.g., time of day) is collected and integrated with availability profile data, then machine learning techniques such as [8] are immediately relevant to extracting such patterns to enhance the node’s availability model. As a first step, pattern extraction could be formulated as an offline learning process, with the resulting patterns then used to bias real-time prediction of the node’s future availability.
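A minimal sketch of the windowed-averaging step described here (the binary activity encoding and the window length are assumptions):

```python
def availability_profile(activity, window):
    """Time-windowed availability estimate from a binary RF-activity
    trace (1 = busy with the primary mission, 0 = idle): the idle
    fraction of each window, interpreted as Prob_avail for that period."""
    return [1 - sum(activity[i:i + window]) / window
            for i in range(0, len(activity) - window + 1, window)]
```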


In this experimental study, we reason geometrically using rectangles rather than circles to simplify the implementation.


In all experiments presented in this paper, we make this assumption. Although it is a simplification of the general model of availability defined in Sect. 4.1, characteristics of the Radiomap domain suggest that this simple model of node availability is a reasonable basic assumption. RF scans are carried out by custom hardware and are typically of relatively short time durations, of the order of 1–50 ms. Primary mission activity is most often dictated by time constants associated with human controlled activities, such as voice communication, data communication, and persistent RF emissions for disrupting such voice and data communication. These are typically of the order of seconds to 100’s of seconds. Given the extreme time scale differences associated with these activity patterns, a single, time-windowed average availability value that reflects the most recent RF activity associated with the primary mission should provide a good estimate of availability for much shorter duration RF scan tasks in the near future. Some error could creep in if the time instant of availability sampling happens to occur towards the end of the primary mission activity—we estimate this error to be 0.1–5 % overall, given the time differential in activities indicated above. On the other hand, there are also circumstances where a more fine-grained model of availability as a discrete probability distribution can make sense. If the node has visibility of a “primary mission plan” (or has extracted predictable primary mission usage patterns from historical data), then it has more precise knowledge of when it will and will not be available. Furthermore, some nodes may employ communication waveforms in which the primary mission activity is further scheduled according to some known activity schedule (e.g., time division multiple access or TDMA [9]).


Here we assume that requests are weighted by submission time, which determines the order in which task assignments are made.


Of course, there are other sources of task failure as well, but for our purposes here we assume that they are abstracted into \(Prob_{avail}\).


For simplicity, we represent probability of availability as a single value over time horizon, as was assumed in the experiments in Sects. 4 and 5. Note, however, that the formulation can be easily extended to a discrete distribution that varies over time.


In our implementation bounding overlapping rectangles are computed and used to minimize space requirements.


Larger problem sizes were also tried but these instances could not be reliably solved by our MILP implementation within the time limit set.


Here and below, percentage of optimal is computed as \(\frac{CoverageScore(Alg)}{ CoverageScore(MILP)} \times 100\), where Alg is the heuristic task allocation procedure.



This material is based upon work partially supported by DARPA under the RadioMap program, Contract No. FA8750-13-C-0014. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. The authors would like to thank Jayanth Mogali for his help in formulating and implementing the upper bound MILP solution used to analyze the performance of our heuristic solution in Sect. 6.

Copyright information

© The Author(s) 2016

Authors and Affiliations

  1. The Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
  2. Applied Communication Sciences, Basking Ridge, USA
  3. DOD Advanced Research Projects Agency, Arlington, USA
