In this section, we present our algorithms for the general BCLS problem. Previously, it was open whether the general BCLS can be solved in polynomial time. The main difficulty is that we do not know the order of the sensors in an optimal solution. Our main effort is devoted to resolving this difficulty, and we derive an \(O(n^2\log n)\) time algorithm for the general BCLS.
We first give our algorithm for the decision version (in Sect. 2.1), which is crucial for solving the general BCLS itself (in Sect. 2.2); we refer to the latter as the optimization version of the problem.
For each sensor \(s_i\in S\), we call the right (resp., left) endpoint of the covering interval of \(s_i\) the right (resp., left) extension of \(s_i\). Each of the right and left extensions of \(s_i\) is an extension of \(s_i\). Denote by \(p(x^{\prime })\) the point on the \(x\)-axis whose coordinate is \(x^{\prime }\), and denote by \(p^+(x^{\prime })\) (resp., \(p^-(x^{\prime })\)) a point to the right (resp., left) of \(p(x^{\prime })\) and infinitely close to \(p(x^{\prime })\). The concept of \(p^+(x^{\prime })\) and \(p^-(x^{\prime })\) is only used to explain the algorithms, and we never need to find such a point explicitly in the algorithm. Let \({\lambda }^*\) denote the maximum sensor moving distance in an optimal solution for the optimization version of the general BCLS problem. Note that we can easily determine whether \({\lambda }^*=0\), say, in \(O(n\log n)\) time. Henceforth, we assume \({\lambda }^*> 0\).
The Decision Version of the General BCLS
Given any value \({\lambda }\), the decision version is to determine whether there is a feasible solution in which the maximum sensor movement is at most \({\lambda }\). Clearly, there is a feasible solution if and only if \({\lambda }\ge {\lambda }^*\). We show that after \(O(n\log n)\) time preprocessing, for any \({\lambda }\), we can determine whether \({\lambda }\ge {\lambda }^*\) in \(O(n)\) time. We explore some properties of a feasible solution in Sect. 2.1.1, describe our decision algorithm in Sect. 2.1.2, argue its correctness in Sect. 2.1.3, and discuss its implementation in Sect. 2.1.4. In Sect. 2.1.5, we show that by extending the algorithm, we can also determine whether \({\lambda }>{\lambda }^*\) in the same time bound; this is particularly useful to our optimization algorithm in Sect. 2.2.
Preliminaries
By a sensor configuration, we refer to a specification of where each sensor \(s_i\in S\) is located. By this definition, the input is a configuration in which each sensor \(s_i\) is located at \(x_i\). The displacement of a sensor in a configuration \(C\) is the distance between the position of the sensor in \(C\) and its original position in the input. A configuration \(C\) is a feasible solution for the distance \({\lambda }\) if the sensors in \(C\) form a barrier coverage of \(B\) (i.e., the union of the covering intervals of the sensors in \(C\) contains \(B\)) and the displacement of each sensor is at most \({\lambda }\). In a feasible solution, a subset \(S^{\prime }\subseteq S\) is called a solution set if the sensors in \(S^{\prime }\) form a barrier coverage; of course, \(S\) itself is also a solution set. A feasible solution may have multiple solution sets. A sensor \(s_i\) in a solution set \(S^{\prime }\) is said to be critical with respect to \(S^{\prime }\) if \(s_i\) covers a point on \(B\) that is not covered by any other sensor in \(S^{\prime }\). If every sensor in \(S^{\prime }\) is critical, then \(S^{\prime }\) is called a critical set.
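These definitions translate directly into code. The following is a minimal Python sketch (the names `Sensor` and `is_feasible` are ours, introduced only for illustration) of the feasibility test for a configuration: every displacement is at most \({\lambda }\), and the union of the covering intervals contains \(B=[0,L]\).

```python
from typing import List, Tuple

Sensor = Tuple[float, float]  # (original position x_i, sensing range r_i)

def is_feasible(sensors: List[Sensor], positions: List[float],
                L: float, lam: float) -> bool:
    """Check that `positions` is a feasible solution for distance lam:
    each displacement is at most lam, and the covering intervals
    jointly contain the barrier B = [0, L]."""
    # Displacement constraint.
    if any(abs(y - x) > lam for (x, _), y in zip(sensors, positions)):
        return False
    # Sweep the covering intervals from left to right.
    intervals = sorted((y - r, y + r) for (_, r), y in zip(sensors, positions))
    reach = 0.0
    for left, right in intervals:
        if left > reach:        # a gap before the barrier is fully covered
            return False
        reach = max(reach, right)
        if reach >= L:
            return True
    return reach >= L
```

Per the overlap convention above, intervals touching at a single point count as overlapping, which is why the gap test uses a strict inequality.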
Given any value \({\lambda }\), if \({\lambda }\ge {\lambda }^*\), our decision algorithm will find a critical set and determine the order in which the sensors of the critical set will appear in a feasible solution for \({\lambda }\). For the purpose of giving some intuition and later showing the correctness of our algorithm, we first explore some properties of a critical set.
Consider a critical set \(S^c\). For each sensor \(s\in S^c\), we call the set of points on \(B\) that are covered by \(s\) but not covered by any other sensor in \(S^c\) the exclusive coverage of \(s\).
Observation 1
The exclusive coverage of each sensor in a critical set \(S^c\) is a continuous portion of the barrier \(B\).
Proof
Assume to the contrary that the exclusive coverage of a sensor \(s\in S^c\) is not a continuous portion of \(B\). Then there must be at least one sensor \(s^{\prime } \in S^c\) whose covering interval lies between two consecutive continuous portions of the exclusive coverage of \(s\). But then \(s^{\prime }\) would not be critical, since its covering interval is contained in that of \(s\). Hence, the observation holds. \(\square \)
For a critical set \(S^c\) in a feasible solution SOL, we define the cover order of the sensors in \(S^c\) as the order of these sensors in SOL such that their exclusive coverages are from left to right.
Observation 2
The cover order of the sensors of a critical set \(S^c\) in a feasible solution SOL is consistent with the left-to-right order of the positions of these sensors in SOL. Further, the cover order is also consistent with the order of the right (resp., left) extensions of these sensors in SOL.
Proof
Consider any two sensors \(s_i\) and \(s_j\) in \(S^c\) with ranges \(r_i\) and \(r_j\), respectively. Without loss of generality, assume \(s_i\) is to the left of \(s_j\) in the cover order, i.e., the exclusive coverage of \(s_i\) is to the left of that of \(s_j\) in SOL. Let \(y_i\) and \(y_j\) be the positions of \(s_i\) and \(s_j\) in SOL, respectively. To prove the observation, it suffices to show \(y_i<y_j\), \(y_i+r_i<y_j+r_j\), and \(y_i-r_i<y_j-r_j\).
Let \(p\) be a point in the exclusive coverage of \(s_j\). We also use \(p\) to denote its coordinate on the \(x\)-axis. Then \(p\) is not covered by \(s_i\), implying either \(p>y_i+r_i\) or \(p<y_i-r_i\). But, the latter case cannot hold (otherwise, the exclusive coverage of \(s_i\) would be to the right of that of \(s_j\)). Since \(p\) is covered by \(s_j\), we have \(p\le y_j+r_j\). Therefore, \(y_i+r_i<p\le y_j+r_j\). By using a symmetric argument, we can also prove \(y_i-r_i<y_j-r_j\) (we omit the details). Clearly, the two inequalities \(y_i+r_i<y_j+r_j\) and \(y_i-r_i<y_j-r_j\) imply \(y_i<y_j\). The observation thus holds.\(\square \)
An interval \(I\) of \(B\) is called a left-aligned interval if the left endpoint of \(I\) is at \(0\) (i.e., \(I\) is of the form \([0,x^{\prime }]\) or \([0,x^{\prime })\)). A set of sensors is said to be in attached positions if the union of their covering intervals is a continuous interval of the \(x\)-axis whose length is equal to the sum of the lengths of these covering intervals. Two intervals of the \(x\)-axis overlap if they intersect each other (even at only one point).
The Algorithm Description
Initially, we move all sensors of \(S\) to the right by the distance \({\lambda }\), i.e., for each \(1\le i\le n\), we move \(s_i\) to the position \(x_i^{\prime }=x_i+{\lambda }\). Let \(C_0\) denote the resulting configuration. Clearly, there is a feasible solution for \({\lambda }\) if and only if we can move the sensors in \(C_0\) to the left by at most \(2{\lambda }\) to form a coverage of \(B\). Thus, henceforth we only need to consider moving the sensors to the left. Recall that we have assumed that the extensions of any two distinct sensors are different; hence in \(C_0\), the extensions of all sensors are also different.
Our algorithm takes a greedy approach. It seeks to find sensors to cover \(B\) from left to right, in at most \(n\) steps. If \({\lambda }\ge {\lambda }^*\), the algorithm will end up with a critical set \(S^c\) of sensors along with the destinations for all these sensors. In theory, the other sensors in \(S\setminus S^c\) can be anywhere such that their displacements are at most \({\lambda }\); but in the solution found by our algorithm, they are at the same positions as in \(C_0\). If a sensor is at the same position as in \(C_0\), we say it stands still.
In step \(i\) (initially, \(i=1\)), using the configuration \(C_{i-1}\) produced in step \(i-1\) and based on certain criteria, we find a sensor \(s_{g(i)}\) and determine its destination \(y_{g(i)}\), where \(g(i)\) is the index of the sensor in \(S\) and \(y_{g(i)}\in [x^{\prime }_{g(i)}-2{\lambda },x^{\prime }_{g(i)}]\). We then move the sensor \(s_{g(i)}\) to \(y_{g(i)}\) to obtain a new configuration \(C_i\) from \(C_{i-1}\) (if \(y_{g(i)}=x^{\prime }_{g(i)}\), then we need not move \(s_{g(i)}\), and \(C_i\) is the same as \(C_{i-1}\)). Let \(R_i=y_{g(i)}+r_{g(i)}\) (i.e., the right extension of \(s_{g(i)}\) in \(C_i\)), and let \(R_0=0\). Let \(S_i=S_{i-1}\cup \{s_{g(i)}\}\) (\(S_0=\emptyset \) initially). We will show that the sensors in \(S_i\) together cover the left-aligned interval \([0,R_i]\). If \(R_i\ge L\), we have found a feasible solution with a critical set \(S^c=S_i\), and terminate the algorithm. Otherwise, we proceed to step \(i+1\). Further, it is possible that a desired sensor \(s_{g(i)}\) cannot be found, in which case we terminate the algorithm and report \({\lambda }<{\lambda }^*\). Below we give the details, and in particular, discuss how to determine the sensor \(s_{g(i)}\) in each step.
Before discussing the first step, we provide some intuition. Let \(S_l\) consist of the sensors whose right extensions are at most \(0\) in \(C_0\). We claim that since \(L>0\), no sensor in \(S_l\) can be in a critical set of a feasible solution if \({\lambda }^*\le {\lambda }\). Indeed, because all sensors have been moved to their rightmost possible positions in \(C_0\), if no sensor in \(S_l\) has a right extension at \(0\) in \(C_0\), then the claim trivially holds; otherwise, suppose \(s_t\) is such a sensor. Assume to the contrary that \(s_t\) is in a critical set \(S^c\). Then \(p(0)\) is the only point on \(B\) that can be covered by \(s_t\). Since \(L>0\), there must be another sensor in \(S^c\) that also covers \(p(0)\) (otherwise, no sensor in \(S^c\) would cover the point \(p^+(0)\)). Hence, \(s_t\) is not critical with respect to \(S^c\), a contradiction. The claim thus follows. Therefore, we need not consider the sensors in \(S_l\) since they do not help in forming a feasible solution.
In step 1, we determine the sensor \(s_{g(1)}\), as follows. Define \(S_{11}=\{s_j\ |\ x_j^{\prime }-r_j\le 0 < x_j^{\prime }+r_j\}\) (Fig. 1), i.e., \(S_{11}\) consists of all sensors covering the point \(p(0)\) in \(C_0\) except any sensor whose right extension is \(0\) (but if the left extension of a sensor is \(0\), the sensor is included in \(S_{11}\)). In other words, \(S_{11}\) consists of all sensors covering the point \(p^+(0)\) in \(C_0\). If \(S_{11}\ne \emptyset \), then we choose the sensor in \(S_{11}\) whose right extension is the largest as \(s_{g(1)}\) (e.g., \(s_i\) in Fig. 1), and let \(y_{g(1)}=x^{\prime }_{g(1)}\). Note that since the extensions of all sensors in \(C_0\) are different, the sensor \(s_{g(1)}\) is unique. If \(S_{11}= \emptyset \), then define \(S_{12}\) as the set of sensors whose left extensions are larger than \(0\) and at most \(2{\lambda }\) (e.g., Fig. 2). If \(S_{12}=\emptyset \), then we terminate the algorithm and report \({\lambda }<{\lambda }^*\). Otherwise, we choose the sensor in \(S_{12}\) whose right extension is the smallest as \(s_{g(1)}\) (e.g., \(s_i\) in Fig. 2), and let \(y_{g(1)}=r_{g(1)}\) (i.e., the left extension of \(s_{g(1)}\) is at \(0\) after it is moved to the destination \(y_{g(1)}\)).
If the algorithm is not terminated, then we move \(s_{g(1)}\) to \(y_{g(1)}\), yielding a new configuration \(C_1\). Let \(S_1=\{s_{g(1)}\}\), and \(R_1\) be the right extension of \(s_{g(1)}\) in \(C_1\). If \(R_1\ge L\), we have found a feasible solution \(C_1\) with the critical set \(S_1\), and terminate the algorithm. Otherwise, we proceed to step 2.
The general step is very similar to step 1. Consider step \(i\) for \(i>1\). We determine the sensor \(s_{g(i)}\), as follows. Let \(S_{i1}\) be the set of sensors covering the point \(p^+(R_{i-1})\) in the configuration \(C_{i-1}\). If \(S_{i1}\ne \emptyset \), we choose the sensor in \(S_{i1}\) with the largest right extension as \(s_{g(i)}\) and let \(y_{g(i)}=x^{\prime }_{g(i)}\). Otherwise, let \(S_{i2}\) be the set of sensors whose left extensions are larger than \(R_{i-1}\) and at most \(R_{i-1}+2{\lambda }\). If \(S_{i2}=\emptyset \), we terminate the algorithm and report \({\lambda }<{\lambda }^*\). Otherwise, we choose the sensor in \(S_{i2}\) with the smallest right extension as \(s_{g(i)}\) and let \(y_{g(i)}=R_{i-1}+r_{g(i)}\). If the algorithm is not terminated, we move \(s_{g(i)}\) to \(y_{g(i)}\) and obtain a new configuration \(C_i\). Let \(S_i=S_{i-1}\cup \{s_{g(i)}\}\). Let \(R_i\) be the right extension of \(s_{g(i)}\) in \(C_i\). If \(R_i\ge L\), we have found a feasible solution \(C_i\) with the critical set \(S_i\) and terminate the algorithm. Otherwise, we proceed to step \(i+1\). If the sensor \(s_{g(i)}\) is from \(S_{i1}\) (resp., \(S_{i2}\)), then we call it a Type I (resp., Type II) sensor.
Since there are \(n\) sensors in \(S\), the algorithm is terminated in at most \(n\) steps. This finishes the description of our algorithm.
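As a concrete sketch of the steps above (the function name `decide` is ours; the paper specifies the algorithm in prose), here is a straightforward \(O(n^2)\)-time Python version; faster implementations are the subject of Sect. 2.1.4. It returns the chosen sensors with their destinations in cover order when \({\lambda }\ge {\lambda }^*\), and `None` when it reports \({\lambda }<{\lambda }^*\).

```python
from typing import List, Optional, Tuple

def decide(sensors: List[Tuple[float, float]], L: float, lam: float
           ) -> Optional[List[Tuple[int, float]]]:
    """sensors[i] = (x_i, r_i).  Returns [(g(1), y_g(1)), (g(2), y_g(2)), ...]
    in cover order if lam >= lambda*, else None."""
    n = len(sensors)
    xp = [x + lam for x, _ in sensors]        # positions in C_0
    used = [False] * n
    R = 0.0                                   # R_0 = 0
    chosen: List[Tuple[int, float]] = []
    while R < L:
        # S_i1: sensors covering p^+(R) in C_{i-1}, i.e. left ext <= R < right
        # ext; pick the one with the largest right extension (Type I).
        best = None
        for j in range(n):
            x, r = xp[j], sensors[j][1]
            if not used[j] and x - r <= R < x + r:
                if best is None or x + r > xp[best] + sensors[best][1]:
                    best = j
        if best is not None:
            y = xp[best]                      # Type I sensor stands still
        else:
            # S_i2: left extension in (R, R + 2*lam];
            # pick the one with the smallest right extension (Type II).
            for j in range(n):
                x, r = xp[j], sensors[j][1]
                if not used[j] and R < x - r <= R + 2 * lam:
                    if best is None or x + r < xp[best] + sensors[best][1]:
                        best = j
            if best is None:
                return None                   # report lam < lambda*
            y = R + sensors[best][1]          # left extension moves to R
            xp[best] = y
        used[best] = True
        chosen.append((best, y))
        R = y + sensors[best][1]              # R_i: right extension of s_g(i)
    return chosen
```

Each iteration either terminates or adds one sensor whose right extension strictly exceeds \(R_{i-1}\), so the loop runs at most \(n\) times, as noted above.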
The Correctness of the Algorithm
Based on the description of our algorithm, we have the following lemma.
Lemma 1
At the end of step \(i\), suppose the algorithm produces the set \(S_i\) and the configuration \(C_i\); then \(S_i\) and \(C_i\) have the following properties.
- (a) \(S_i\) consists of sensors that are Type I or Type II.
- (b) For each sensor \(s_{g(j)}\in S_i\) with \(1\le j\le i\), if \(s_{g(j)}\) is of Type I, then it stands still (i.e., its position in \(C_i\) is the same as that in \(C_0\)); otherwise, its left extension is at \(R_{j-1}\), and \(s_{g(j)}\) and \(s_{g(j-1)}\) are in attached positions if \(j>1\).
- (c) The interval on \(B\) covered by the sensors in \(S_i\) is \([0,R_i]\).
- (d) For each \(1<j\le i\), the right extension of \(s_{g(j)}\) is larger than that of \(s_{g(j-1)}\).
- (e) For each \(1\le j\le i\), \(s_{g(j)}\) is the only sensor in \(S_i\) that covers the point \(p^+(R_{j-1})\) (with \(R_0=0\)).
Proof
The first three properties are trivially true according to the algorithm description.
For property (d), note that the right extension of \(s_{g(j)}\) (resp., \(s_{g(j-1)}\)) is \(R_j\) (resp., \(R_{j-1}\)). According to our algorithm, the sensor \(s_{g(j)}\) covers the point \(p^+(R_{j-1})\), implying that \(R_j>R_{j-1}\). Hence, property (d) holds.
For property (e), note that the sensor \(s_{g(j)}\) (which is determined in step \(j\)) always covers \(p^+(R_{j-1})\). Consider any other sensor \(s_{g(t)}\in S_i\). If \(t<j\), then the right extension of \(s_{g(t)}\) is at most \(R_{j-1}\), and thus \(s_{g(t)}\) cannot cover \(p^+(R_{j-1})\). If \(t>j\), then depending on whether \(s_{g(t)}\in S_{t1}\) or \(s_{g(t)}\in S_{t2}\), there are two cases. If \(s_{g(t)}\in S_{t2}\), then the left extension of \(s_{g(t)}\) is \(R_{t-1}\), which is larger than \(R_{j-1}\), and thus \(s_{g(t)}\) cannot cover \(p^+(R_{j-1})\) in \(C_i\). Otherwise (i.e., \(s_{g(t)}\in S_{t1}\)), \(s_{g(t)}\) stands still. Assume to the contrary that \(s_{g(t)}\) covers \(p^+(R_{j-1})\) in \(C_i\). Then \(s_{g(t)}\) must have been in \(S_{j1}\) in step \(j\) within the configuration \(C_{j-1}\), which implies \(S_{j1}\ne \emptyset \), \(s_{g(j)}\in S_{j1}\), and \(s_{g(j)}\) stands still. By property (d), \(t>j\) implies \(R_t>R_j\), i.e., the right extension of \(s_{g(j)}\) in \(C_0\) is smaller than that of \(s_{g(t)}\); but then the algorithm could not have chosen \(s_{g(j)}\) from \(S_{j1}\) in step \(j\), a contradiction. Therefore, \(s_{g(t)}\) cannot cover the point \(p^+(R_{j-1})\). Property (e) thus holds. \(\square \)
At its termination, our algorithm either reports \({\lambda }\ge {\lambda }^*\) or \({\lambda }<{\lambda }^*\). To argue the correctness of the algorithm, below we will show that if the algorithm reports \({\lambda }\ge {\lambda }^*\), then indeed there is a feasible solution for \({\lambda }\) and our algorithm finds one; otherwise, there is no feasible solution for \({\lambda }\).
Suppose in step \(i\), our algorithm reports \({\lambda }\ge {\lambda }^*\). Then according to the algorithm, it must be \(R_i\ge L\). By Lemma 1(c) and 1(e), \(C_i\) is a feasible solution and \(S_i\) is a critical set. Further, by Lemma 1(d) and Observation 2, the cover order of the sensors in \(S_i\) is \(s_{g(1)},s_{g(2)},\ldots ,s_{g(i)}\).
Next, we show that if the algorithm reports \({\lambda }<{\lambda }^*\), then certainly there is no feasible solution for \({\lambda }\). This is almost an immediate consequence of the following lemma.
Lemma 2
Suppose \(S_i^{\prime }\) is the set of sensors in the configuration \(C_i\) whose right extensions are at most \(R_i\). Then the interval \([0,R_i]\) is the largest possible left-aligned interval that can be covered by the sensors of \(S^{\prime }_i\) such that the displacement of each sensor in \(S^{\prime }_i\) is at most \({\lambda }\).
Proof
In this proof, when we say an interval is covered by the sensors of \(S_i^{\prime }\), we mean (without explicitly stating) that the displacement of each sensor in \(S_i^{\prime }\) is at most \({\lambda }\).
We first prove a key claim: If \(C\) is a configuration for the sensors of \(S^{\prime }_i\) such that a left-aligned interval \([0,x^{\prime }]\) is covered by the sensors of \(S^{\prime }_i\), then there always exists a configuration \(C^*\) for \(S^{\prime }_i\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors of \(S^{\prime }_i\) and for each \(1\le j\le i\), the position of the sensor \(s_{g(j)}\) in \(C^*\) is \(y_{g(j)}\), where \(g(j)\) and \(y_{g(j)}\) are the values computed by our algorithm.
Similar to our discussion in Sect. 2.1.1, the configuration \(C\) for \(S^{\prime }_i\) always has a critical set for covering the interval \([0,x^{\prime }]\). Let \(S_C\) be such a critical set of \(C\).
We prove the claim by induction. We first show the base case: Suppose there is a configuration \(C\) for the sensors of \(S^{\prime }_i\) in which a left-aligned interval \([0,x^{\prime }]\) is covered by the sensors of \(S^{\prime }_i\); then there is a configuration \(C_1^{\prime }\) for the sensors of \(S^{\prime }_i\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors of \(S^{\prime }_i\) and the position of the sensor \(s_{g(1)}\) in \(C_1^{\prime }\) is \(y_{g(1)}\).
Let \(t=g(1)\). If the position of \(s_t\) in \(C\) is \(y_t\), then we are done (with \(C_1^{\prime }= C\)). Otherwise, let \(y_t^{\prime }\) be the position of \(s_t\) in \(C\), with \(y_t^{\prime }\ne y_t\). Depending on \(s_t\in S_{11}\) or \(s_t\in S_{12}\), there are two cases.
- If \(s_t\in S_{11}\), then \(y_t=x^{\prime }_t\). Since \(y_t\) is the rightmost position to which the sensor \(s_t\) is allowed to move and \(y_t^{\prime }\ne y_t\), we have \(y_t^{\prime }<y_t\). Depending on whether \(s_t\) is in the critical set \(S_C\), there further are two subcases. If \(s_t\not \in S_C\), then by the definition of a critical set, the sensors in \(S_C\) form a coverage of \([0,x^{\prime }]\) regardless of where \(s_t\) is. If we move \(s_t\) to \(y_t\) (and other sensors keep the same positions as in \(C\)) to obtain a new configuration \(C_1^{\prime }\), then the sensors of \(S_i^{\prime }\) still form a coverage of \([0,x^{\prime }]\). If \(s_t\in S_C\), then because \(y_t>y_t^{\prime }\), if we move \(s_t\) from \(y_t^{\prime }\) to \(y_t\), \(s_t\) is moved to the right. Since \(s_t\in S_{11}\), when \(s_t\) is at \(y_t\), \(s_t\) still covers the point \(p(0)\). Thus, moving \(s_t\) from \(y_t^{\prime }\) to \(y_t\) does not cause \(s_t\) to cover a smaller sub-interval of \([0,x^{\prime }]\). Hence, by moving \(s_t\) to \(y_t\), we obtain a new configuration \(C_1^{\prime }\) in which the sensors of \(S_i^{\prime }\) still form a coverage of \([0,x^{\prime }]\).
- If \(s_t\in S_{12}\), then according to our algorithm, \(S_{11}=\emptyset \) in this case, and \(s_t\) is the sensor in \(S_{12}\) with the smallest right extension in \(C_0\). If \(s_t\not \in S_C\), then by the same argument as above, we can obtain a configuration \(C_1^{\prime }\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors of \(S^{\prime }_i\) and the position of the sensor \(s_t\) in \(C_1^{\prime }\) is \(y_t\). Below, we discuss the case when \(s_t\in S_C\). In \(S_C\), some sensors must cover the point \(p(0)\) in \(C\). Let \(S^{\prime }\) be the set of sensors in \(S_C\) that cover \(p(0)\) in \(C\). If \(s_t\in S^{\prime }\), then it is easy to see that \(y_t^{\prime }<y_t\) since \(y_t\) is the rightmost position for \(s_t\) to cover \(p(0)\). In this case, again, by the same argument as above, we can always move \(s_t\) to the right from \(y_t^{\prime }\) to \(y_t\) to obtain a configuration \(C_1^{\prime }\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors of \(S^{\prime }_i\). Otherwise (i.e., \(s_t\not \in S^{\prime }\)), we show below that we can always move \(s_t\) to \(y_t\) by switching the relative positions of \(s_t\) and some other sensors in \(S_C\). An easy observation is that each sensor in \(S^{\prime }\) must be in \(S_{12}\). Consider an arbitrary sensor \(s_h\in S^{\prime }\). Since \(s_t\) is the sensor in \(S_{12}\) with the smallest right extension in \(C_0\), the right extension of \(s_h\) is larger than that of \(s_t\) in \(C_0\). Depending on whether the covering intervals of \(s_t\) and \(s_h\) overlap in \(C\), there are two subcases. If the covering intervals of \(s_t\) and \(s_h\) overlap in \(C\), then let \([0,x^{\prime \prime }]\) be the left-aligned interval that is covered by \(s_t\) and \(s_h\) in \(C\) (Fig. 3). 
If we switch their relative positions by moving \(s_t\) to \(y_t\) and moving \(s_h\) to \(x^{\prime \prime }-r_h\) (i.e., the left extension of \(s_t\) is at \(0\) and the right extension of \(s_h\) is at \(x^{\prime \prime }\)), then these two sensors still cover \([0,x^{\prime \prime }]\) (Fig. 3), and thus the sensors in \(S_i^{\prime }\) still form a coverage of \([0,x^{\prime \prime }]\). Further, after the above switch operation, the displacements of these two sensors are no bigger than \({\lambda }\). To see this, clearly, the displacement of \(s_t\) is at most \({\lambda }\). For the sensor \(s_h\), it is easy to see that the switch operation moves \(s_h\) to the right. Since \(s_t\) covers \(p(x^{\prime \prime })\) in \(C\), \(x^{\prime \prime }\) is no larger than the right extension of \(s_t\) in \(C_0\), which is smaller than that of \(s_h\) in \(C_0\). Thus, \(x^{\prime \prime }\) is smaller than \(x_h^{\prime }+r_h\), implying that the position of \(s_h\) after the switch operation is still to the left of its position in \(C_0\). Hence, after the switch operation, the displacement of \(s_h\) is no bigger than \({\lambda }\). In summary, after the switch operation, we obtain a new configuration \(C_1^{\prime }\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors of \(S^{\prime }_i\) and the position of the sensor \(s_t\) in \(C_1^{\prime }\) is \(y_t\). If the covering intervals of \(s_t\) and \(s_h\) do not overlap in \(C\), then suppose the sensors in the critical set \(S_C\) between \(s_h\) and \(s_t\) are \(s_h,s_{f(1)},s_{f(2)},\ldots ,s_{f(m)},s_t\), in the cover order. Clearly, the covering intervals of any two consecutive sensors in this list overlap in \(C\). Below we show that we can switch the relative positions of \(s_t\) and \(s_{f(m)}\) such that we still form a coverage of \([0,x^{\prime }]\), and then we continue this switch procedure until \(s_t\) is switched with \(s_h\). 
Note that since \(S_{11}=\emptyset \), the right extension of \(s_{f(j)}\) for any \(1\le j\le m\) is larger than that of \(s_t\) in \(C_0\). Let \(x^{\prime \prime }_1\) be the maximum of \(0\) and the left extension of \(s_{f(m)}\) in \(C\), and \(x^{\prime \prime }_2\) be the minimum of \(x^{\prime }\) and the right extension of \(s_t\) in \(C\) (Fig. 4). Clearly, \(x^{\prime \prime }_1<x^{\prime \prime }_2\). Thus, the sub-interval of \([0,x^{\prime }]\) covered by \(s_t\) and \(s_{f(m)}\) in \(C\) is \([x^{\prime \prime }_1,x^{\prime \prime }_2]\). We perform a switch operation on \(s_t\) and \(s_{f(m)}\) by moving \(s_t\) to the left and moving \(s_{f(m)}\) to the right such that the left extension of \(s_t\) is at \(x_1^{\prime \prime }\) and the right extension of \(s_{f(m)}\) is at \(x_2^{\prime \prime }\) (Fig. 4). It is easy to see that after this switch operation, the sensors in \(S_C\) still form a coverage of \([0,x^{\prime }]\). Since the right extension of \(s_{f(m)}\) is larger than that of \(s_t\) in \(C_0\), by a similar argument as above, we can also prove that after this switch, the displacements of both \(s_t\) and \(s_{f(m)}\) are no bigger than \({\lambda }\). Then, we continue this switch process on \(s_t\) and \(s_{f(m-1)}, s_{f(m-2)}, \ldots \), until \(s_t\) is switched with \(s_h\), after which \(s_t\) is at \(y_t\), and we obtain a new configuration \(C_1^{\prime }\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors in \(S_C\subseteq S_i^{\prime }\) and the position of the sensor \(s_t\) in \(C_1^{\prime }\) is \(y_t\).
This completes the proof of the base case, i.e., there is always a configuration \(C_1^{\prime }\) in which the interval \([0,x^{\prime }]\) is covered by the sensors of \(S^{\prime }_i\) and the position of the sensor \(s_{g(1)}\) in \(C_1^{\prime }\) is \(y_{g(1)}\).
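The switch operations above admit a quick numeric sanity check. The following tiny Python sketch uses our own example values (not from the paper): \(s_t\) and \(s_{f(m)}\) jointly cover \([x^{\prime \prime }_1,x^{\prime \prime }_2]\); after the switch, the left extension of \(s_t\) is at \(x^{\prime \prime }_1\) and the right extension of \(s_{f(m)}\) is at \(x^{\prime \prime }_2\), so the union of their covering intervals is unchanged.

```python
def switch(r_t, r_f, x1, x2):
    """New positions of s_t and s_f(m) after the switch: the left
    extension of s_t moves to x1, the right extension of s_f(m) to x2."""
    return x1 + r_t, x2 - r_f

# Our example: s_t at 2.0 (range 1.0) covers [1, 3]; s_f(m) at 1.0
# (range 1.5) covers [-0.5, 2.5]; jointly they cover [x1'', x2''] = [0, 3].
y_t, y_f = switch(1.0, 1.5, 0.0, 3.0)
print((y_t - 1.0, y_t + 1.0))   # s_t now covers [0.0, 2.0]
print((y_f - 1.5, y_f + 1.5))   # s_f(m) now covers [0.0, 3.0]
```

Here \(s_t\) moves left by \(1.0\) and \(s_{f(m)}\) moves right by \(0.5\); the proof argues that both displacements remain at most \({\lambda }\) because the right extension of \(s_{f(m)}\) exceeds that of \(s_t\) in \(C_0\).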
We assume inductively that the claim holds for \(k-1\), where \(2\le k\le i\), i.e., there is a configuration \(C_{k-1}^{\prime }\) in which the interval \([0,x^{\prime }]\) is covered by the sensors of \(S^{\prime }_i\) and, for each \(1\le j\le k-1\), the position of the sensor \(s_{g(j)}\) in \(C_{k-1}^{\prime }\) is \(y_{g(j)}\). In the following, we show that the claim holds for \(k\), i.e., there is a configuration \(C_{k}^{\prime }\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors of \(S^{\prime }_i\) and, for each \(1\le j\le k\), the position of the sensor \(s_{g(j)}\) in \(C_{k}^{\prime }\) is \(y_{g(j)}\). The proof is quite similar to that for the base case, so we only discuss it briefly below.
Let \(t=g(k)\). If the position of \(s_t\) in \(C_{k-1}^{\prime }\) is \(y_t\), then we are done (with \(C_k^{\prime }= C^{\prime }_{k-1}\)). Otherwise, let \(y_t^{\prime }\) be the position of \(s_t\) in \(C_{k-1}^{\prime }\), with \(y_t^{\prime }\ne y_t\). Depending on \(s_t\in S_{k1}\) or \(s_t\in S_{k2}\), there are two cases.
- If \(s_t\in S_{k1}\), then \(y_t=x^{\prime }_t\). Since \(y_t\) is the rightmost position to which \(s_t\) is allowed to move and \(y_t^{\prime }\ne y_t\), we have \(y_t^{\prime }<y_t\). Depending on whether \(s_t\) is in the critical set \(S_C\), there further are two subcases. If \(s_t\not \in S_C\), then the sensors in \(S_C\) always form a coverage of \([0,x^{\prime }]\) regardless of where \(s_t\) is. Thus, if we move \(s_t\) to \(y_t\), we obtain a new configuration \(C_k^{\prime }\) from \(C_{k-1}^{\prime }\) in which the sensors of \(S_i^{\prime }\) still form a coverage of \([0,x^{\prime }]\) and the position of the sensor \(s_{g(j)}\) for each \(1\le j\le k\) in \(C_{k}^{\prime }\) is \(y_{g(j)}\). If \(s_t\in S_C\), then since \(y_t>y_t^{\prime }\), if we move \(s_t\) from \(y_t^{\prime }\) to \(y_t\), \(s_t\) is moved to the right. By Lemma 1(c), the interval \([0,R_{k-1}]\) is covered by the sensors of \(S_{k-1} =\{s_{g(1)},s_{g(2)},\ldots , s_{g(k-1)}\}\) in \(C_{k-1}^{\prime }\) (since they are in positions \(y_{g(1)},y_{g(2)},\ldots , y_{g(k-1)}\), respectively). When \(s_t\) is at \(y_t\), \(s_t\) still covers the point \(p^+(R_{k-1})\). Thus, after moving \(s_t\) to \(y_t\), we obtain a new configuration \(C_k^{\prime }\) from \(C_{k-1}^{\prime }\) in which the sensors of \(S_i^{\prime }\) still form a coverage of \([0,x^{\prime }]\).
- If \(s_t\in S_{k2}\), then \(S_{k1}=\emptyset \) in this case, and \(s_t\) is the sensor in \(S_{k2}\) with the smallest right extension. If \(s_t\not \in S_C\), then by the same argument as above, we can obtain a configuration \(C_k^{\prime }\) from \(C_{k-1}^{\prime }\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors of \(S^{\prime }_i\) and the position of the sensor \(s_{g(j)}\) for each \(1\le j\le k\) in \(C_{k}^{\prime }\) is \(y_{g(j)}\). Below, we discuss the case when \(s_t\in S_C\). In \(S_C\), some sensors must cover the point \(p^+(R_{k-1})\) in \(C\). Let \(S^{\prime }\) be the set of sensors in \(S_C\) that cover \(p^+(R_{k-1})\) in \(C\). If \(s_t\in S^{\prime }\), then \(y_t^{\prime }<y_t\) since \(y_t\) is the rightmost position for \(s_t\) to cover \(p^+(R_{k-1})\). In this case, again, by the same argument as above, we can move \(s_t\) to the right from \(y_t^{\prime }\) to \(y_t\) to obtain a configuration \(C_k^{\prime }\) from \(C_{k-1}^{\prime }\) in which the interval \([0,x^{\prime }]\) is still covered by the sensors of \(S^{\prime }_i\). Otherwise (i.e., \(s_t\not \in S^{\prime }\)), consider a sensor \(s_h\) in \(S^{\prime }\). Let the sensors in \(S_C\) between \(s_h\) and \(s_t\) in the cover order be \(s_h,s_{f(1)},s_{f(2)},\ldots ,s_{f(m)},s_t\) (this sequence may contain only \(s_h\) and \(s_t\)). Note that for each \(1\le j\le k-1\), the sensor \(s_{g(j)}\) is not in this sequence. Then by using a similar sequence of switch operations as for the base case, we can obtain a new configuration \(C_k^{\prime }\) from \(C_{k-1}^{\prime }\) such that the sensors of \(S^{\prime }_i\) still form a coverage of \([0,x^{\prime }]\). Again, the position of the sensor \(s_{g(j)}\) for each \(1\le j\le k\) in \(C_{k}^{\prime }\) is \(y_{g(j)}\).
This proves that the claim holds for \(k\). Therefore, the claim is true. The lemma can then be easily proved by using this claim, as follows.
Suppose the largest left-aligned interval that can be covered by the sensors of \(S_i^{\prime }\) is \([0,x^{\prime }]\). Then by the above claim, there always exists a configuration \(C^*\) for \(S^{\prime }_i\) in which the interval \([0,x^{\prime }]\) is also covered by the sensors of \(S^{\prime }_i\) and for each \(1\le j\le i\), the position of the sensor \(s_{g(j)}\) in \(C^*\) is \(y_{g(j)}\). Recall that \(S_i=\{s_{g(1)},s_{g(2)},\ldots ,s_{g(i)}\}\). Then for each sensor \(s_t\in S_i^{\prime }\setminus S_i\), the rightmost point that can be covered by \(s_t\) is \(x^{\prime }_t+r_t\). Recall that in the configuration \(C_i\), for each \(1\le j\le i\), the position of the sensor \(s_{g(j)}\) is \(y_{g(j)}\), and for each sensor \(s_t\in S_i^{\prime }\setminus S_i\), the position of \(s_t\) is \(x_t^{\prime }\). Further, by the definition of \(S_i^{\prime }\), the right extensions of all sensors in \(S_i^{\prime }\) are at most \(R_i\) in \(C_i\). Therefore, the right extensions of all sensors in \(S_i^{\prime }\) are also at most \(R_i\) in \(C^*\), implying that \(x^{\prime }\le R_i\). On the other hand, by Lemma 1(c), the sensors of \(S_i\) form a coverage of \([0,R_i]\) in \(C^*\). Thus, \([0,x^{\prime }] =[0,R_i]\), and the lemma follows. \(\square \)
Finally, we prove the correctness of our algorithm based on Lemma 2. Suppose our algorithm reports \({\lambda }<{\lambda }^*\) in step \(i\). Then according to the algorithm, \(R_{i-1}<L\) and both \(S_{i1}\) and \(S_{i2}\) are \(\emptyset \). Let \(S_{i-1}^{\prime }\) be the set of sensors whose right extensions are at most \(R_{i-1}\) in \(C_{i-1}\). Since both \(S_{i1}\) and \(S_{i2}\) are \(\emptyset \), no sensor in \(S\setminus S^{\prime }_{i-1}\) can cover \(p^+(R_{i-1})\) or any point to its left. By Lemma 2, \([0,R_{i-1}]\) is the largest left-aligned interval that can be covered by the sensors of \(S_{i-1}^{\prime }\). Hence, the sensors in \(S\) cannot cover the interval \([0,p^+(R_{i-1})]\). Since \(R_{i-1}<L\), we have \([0,p^+(R_{i-1})]\subseteq [0,L]\); thus the sensors of \(S\) cannot cover \(B=[0,L]\). In other words, there is no feasible solution for the distance \({\lambda }\). This establishes the correctness of our algorithm.
The Algorithm Implementation
For the implementation of the algorithm, we first discuss a straightforward approach that runs in \(O(n\log n)\) time. Later, we give another approach which, after \(O(n\log n)\) time preprocessing, can determine whether \({\lambda }^*\le {\lambda }\) in \(O(n)\) time for any given \({\lambda }\). Although the second approach does not change the overall running time of our decision algorithm, it does help our optimization algorithm in Sect. 2.2 to run faster.
In the beginning, we sort the \(2n\) extensions of all sensors by the \(x\)-coordinate, and move each sensor \(s_i\in S\) to \(x_i^{\prime }\) to produce the initial configuration \(C_0\). During the algorithm, for each step \(i\), we maintain the two sets of sensors \(S_{i1}\) and \(S_{i2}\) defined earlier. To this end, we sweep along the \(x\)-axis, maintaining \(S_{i1}\) and \(S_{i2}\) with two sweeping points \(p_1\) and \(p_2\), respectively. Specifically, the point \(p_1\) follows the positions \(R_0\) (\(=0\)), \(R_1,R_2,\ldots \), and \(p_2\) follows the positions \(R_0+2{\lambda },R_1+2{\lambda },R_2+2{\lambda },\ldots \). Thus, \(p_2\) always stays at distance \(2{\lambda }\) to the right of \(p_1\). To maintain the set \(S_{i1}\), when the sweeping point \(p_1\) encounters the left extension of a sensor, we insert the sensor into \(S_{i1}\); when \(p_1\) encounters the right extension of a sensor, we delete the sensor from \(S_{i1}\). In this way, when the sweeping point \(p_1\) is at \(R_{i-1}\), the set \(S_{i1}\) is ready. Maintaining \(S_{i2}\) is slightly more subtle. Whenever the sweeping point \(p_2\) encounters the left extension of a sensor, we insert the sensor into \(S_{i2}\). The subtle part is the deletion operation. By the definition of \(S_{i2}\), if the left extension of a sensor is less than or equal to \(R_{i-1}\), then the sensor should not be in \(S_{i2}\). Since the first sweeping point \(p_1\) is eventually at \(R_{i-1}\) in step \(i\), whenever a sensor is inserted into the first set \(S_{i1}\), we delete that sensor from \(S_{i2}\). Thus, a deletion on \(S_{i2}\) happens only when the same sensor is inserted into \(S_{i1}\). In addition, we need a search operation on \(S_{i1}\) for finding the sensor in \(S_{i1}\) with the largest right extension, and a search operation on \(S_{i2}\) for finding the sensor in \(S_{i2}\) with the smallest right extension.
It is easy to see that there are \(O(n)\) insertions and deletions in the entire algorithm. Further, the search operations on both \(S_{i1}\) and \(S_{i2}\) depend on the right extensions of the sensors. By representing each of these two sets with a balanced binary search tree in which the right extensions of the sensors are used as keys, the algorithm runs in \(O(n\log n)\) time.
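To illustrate the sweep, the sketch below simulates it directly: at each step it recomputes \(S_{i1}\) and \(S_{i2}\) from their definitions instead of maintaining them in balanced binary search trees, so it runs in \(O(n^2)\) time rather than \(O(n\log n)\). It assumes sensors are given as (coordinate, radius) pairs with the barrier \(B=[0,L]\); the function name and this representation are our own choices for illustration.

```python
def decide_sweep(sensors, L, lam):
    """Return True iff the barrier [0, L] can be covered when every
    sensor in `sensors` (a list of (x, r) pairs) moves at most lam."""
    # Configuration C_0: every sensor moved to x_i' = x_i + lam.
    ext = [[x + lam - r, x + lam + r] for x, r in sensors]
    p1 = 0.0                                  # sweeping point, at R_{i-1}
    while p1 < L:
        # S_i1: sensors covering the point p^+(p1) in the current config.
        Si1 = [e for e in ext if e[0] <= p1 < e[1]]
        if Si1:
            # s_g(i) is the one with the largest right extension; it
            # stays where it is, and p1 advances to R_i.
            p1 = max(e[1] for e in Si1)
            continue
        # S_i2: sensors with left extension in (p1, p1 + 2*lam].
        Si2 = [e for e in ext if p1 < e[0] <= p1 + 2 * lam]
        if not Si2:
            return False                      # lam < lam*
        # s_g(i): smallest right extension in S_i2; move it left into
        # attached position so its left extension sits at p1.
        e = min(Si2, key=lambda t: t[1])
        width = e[1] - e[0]                   # 2 * r of this sensor
        e[0], e[1] = p1, p1 + width
        p1 = e[1]                             # new R_i
    return True
```

Since the answer is monotone in \({\lambda }\), this predicate can later be combined with a search over candidate values of \({\lambda }\).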
In the sequel, we give the second approach which, after \(O(n\log n)\) time preprocessing, can determine whether \({\lambda }^*\le {\lambda }\) in \(O(n)\) time for any given \({\lambda }\).
In the preprocessing, we compute two sorted lists \(S_L\) and \(S_R\), where \(S_L\) contains all sensors sorted by the increasing values of their left extensions and \(S_R\) contains all sensors sorted by the increasing values of their right extensions. Consider any value \({\lambda }\). Later in the algorithm, for each step \(i\), our algorithm will determine the sensor \(s_{g(i)}\) by scanning the two lists. We will show that when the algorithm finishes, each sensor in \(S_L\) is scanned at most once and each sensor in \(S_R\) is scanned at most three times, and therefore, the algorithm runs in \(O(n)\) time.
Initially, we move each sensor \(s_i\in S\) to \(x_i^{\prime }\) to produce the initial configuration \(C_0\). During the algorithm, we sweep along the \(x\)-axis using a sweeping point \(p_1\), which follows the positions \(R_0\) (\(=0\)), \(R_1,R_2,\ldots \). With a slight abuse of notation, we also let \(p_1\) denote the coordinate of its current position. Initially, \(p_1=0\).
Consider a general step \(i\), in which we need to determine the sensor \(s_{g(i)}\). At the beginning of this step, \(p_1\) is at the position \(R_{i-1}\). We scan the list \(S_L\) from the beginning until the left extension of the next sensor is strictly to the right of \(p_1\). Each scanned sensor \(s_j\) whose right extension is strictly to the right of \(p_1\) is in \(S_{i1}\), by the definition of \(S_{i1}\). Thus, the above scanning procedure determines \(S_{i1}\), after which we can easily find the sensor \(s_{g(i)}\) in \(S_{i1}\) if \(S_{i1}\ne \emptyset \). In fact, we can compute \(s_{g(i)}\) directly during the scanning procedure. In addition, every scanned sensor is removed from \(S_L\) (and thus will never be scanned again later in the algorithm). If \(S_{i1}\ne \emptyset \), then \(s_{g(i)}\) is determined and we move the sweeping point \(p_1\) to the right extension of \(s_{g(i)}\) (i.e., \(p_1=R_i\)). If \(p_1\ge L\), we terminate the algorithm and report \({\lambda }^*\le {\lambda }\); otherwise, we continue on to step \(i+1\). Below, we discuss the case \(S_{i1}=\emptyset \).
If \(S_{i1}=\emptyset \) and \(S_{i2}\ne \emptyset \), then \(s_{g(i)}\) is the sensor in \(S_{i2}\) with the smallest right extension. Specifically, among the sensors (if any) whose left extensions are larger than \(p_1\) (\(=R_{i-1}\)) and at most \(p_1+2{\lambda }\), \(s_{g(i)}\) is the one with the smallest right extension. To find \(s_{g(i)}\), we scan the list \(S_R\) from the beginning until we find the first sensor \(s\) whose left extension is larger than \(p_1\) and at most \(p_1+2{\lambda }\) (Fig. 5). If no such sensor \(s\) exists, then \(S_{i2}=\emptyset \), and we terminate the algorithm and report \({\lambda }^*>{\lambda }\). Below, we assume we have found such a sensor \(s\). Since the sensors in \(S_R\) are sorted by their right extensions, \(s_{g(i)}\) is exactly the sensor \(s\). Further, unlike the scanning on \(S_L\), where each scanned sensor is removed immediately, a scanned sensor in \(S_R\) is removed only if its right extension is to the left of \(p_1\) (Fig. 5). That is, while searching for the above sensor \(s\) in \(S_R\), we remove from \(S_R\) those sensors whose right extensions are to the left of \(p_1\). It is easy to see that the removed sensors (if any) are consecutive from the beginning of \(S_R\). Let \(S_R\) be the list after all removals. If \(s_{g(i)}\) (\(=s\)) is not the first sensor in \(S_R\), then for any sensor \(s_j\) in \(S_R\) before \(s_{g(i)}\), the left extension of \(s_j\) must be larger than \(p_1+2{\lambda }\); we call the sensors in \(S_R\) before \(s_{g(i)}\) the redundant sensors for the step \(i\) (Fig. 5). Later we will show that these sensors cannot be redundant again in the rest of the algorithm. In summary, each sensor scanned in the original \(S_R\) in this step is either removed, a redundant sensor, or \(s_{g(i)}\).
Finally, we move \(s_{g(i)}\) to the left such that its left extension is at \(p_1\), and then we move \(p_1\) to the right extension of \(s_{g(i)}\) (i.e., \(p_1=R_i\)). If \(p_1\ge L\), we terminate the algorithm and report \({\lambda }^*\le {\lambda }\); otherwise, we continue on to the next step \(i+1\).
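The scanning procedure above can be sketched as follows. As before, sensors are assumed to be (coordinate, radius) pairs with barrier \([0,L]\), and the sorting of \(S_L\) and \(S_R\) stands in for the \(O(n\log n)\) preprocessing; after that, the two scan pointers advance monotonically, up to the redundant rescans bounded by Lemma 3 below. The function name and representation are our own.

```python
def decide_scan(sensors, L, lam):
    """Decision via the two sorted lists S_L (by left extension) and
    S_R (by right extension); True iff lam >= lam*."""
    n = len(sensors)
    left = [x + lam - r for x, r in sensors]    # extensions in C_0
    right = [x + lam + r for x, r in sensors]
    S_L = sorted(range(n), key=lambda i: left[i])
    S_R = sorted(range(n), key=lambda i: right[i])
    moved = [False] * n     # Type II sensors already moved left
    li = ri = 0             # scan positions in S_L and S_R
    p1 = 0.0                # sweeping point, starts at R_0 = 0
    while p1 < L:
        # Scan S_L: among sensors with left extension <= p1, those with
        # right extension > p1 form S_i1; each leaves S_L after one scan.
        best = p1
        while li < n and left[S_L[li]] <= p1:
            j = S_L[li]
            li += 1
            if not moved[j]:
                best = max(best, right[j])
        if best > p1:
            p1 = best                       # Type I step: sensor stays put
            continue
        # Remove from the front of S_R sensors lying entirely left of p1.
        while ri < n and (moved[S_R[ri]] or right[S_R[ri]] <= p1):
            ri += 1
        # Scan S_R for the first sensor with left extension in
        # (p1, p1 + 2*lam]; sensors skipped over are "redundant".
        s, k = None, ri
        while k < n:
            j = S_R[k]
            if not moved[j] and p1 < left[j] <= p1 + 2 * lam:
                s = j
                break
            k += 1
        if s is None:
            return False                    # S_i1 = S_i2 = empty: lam < lam*
        moved[s] = True                     # Type II: attach s at p1
        p1 += 2 * sensors[s][1]             # new R_i
    return True
```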
To analyze the algorithm, observe that each sensor in the list \(S_L\) is scanned at most once. For the list \(S_R\), this may not be the case, as the redundant sensors may be scanned again later in the algorithm. However, the following lemma shows that this is not an issue.
Lemma 3
If a sensor \(s_j\) is a redundant sensor for the step \(i\), then it will not be a redundant sensor again later in the algorithm.
Proof
Consider the moment right after the step \(i\). The sweeping point \(p_1\) is at the right extension of \(s_{g(i)}\). To prove the lemma, since \(p_1\) always moves to the right, by the definition of the redundant sensors, it is sufficient to show that the left extension of \(s_j\) is at most \(p_1+2{\lambda }\), as follows.
Consider the moment in the beginning of the step \(i\) (the sensor \(s_{g(i)}\) has not been moved to the left). Since \(s_j\) is a redundant sensor for the step \(i\), the sensor \(s_{g(i)}\) is from \(S_{i2}\) and the left extension of \(s_{g(i)}\) is at most \(R_{i-1}+2{\lambda }\). Thus, the right extension of \(s_{g(i)}\) is at most \(R_{i-1}+2r_{g(i)}+2{\lambda }\). Recall that the right extension of \(s_j\) is less than that of \(s_{g(i)}\) (since \(s_j\) is before \(s_{g(i)}\) in \(S_R\)). Therefore, the right extension of \(s_j\) is at most \(R_{i-1}+2r_{g(i)}+2{\lambda }\). Now consider the moment right after the step \(i\). The sweeping point \(p_1\) is at the position \(R_{i-1}+2r_{g(i)}\). Hence, the right extension of \(s_j\) is at most \(p_1+2{\lambda }\), which implies that the left extension of \(s_j\) is at most \(p_1+2{\lambda }\). The lemma thus follows. \(\square \)
The preceding lemma implies that any sensor can be a redundant sensor in at most one step. Therefore, when a sensor in \(S_R\) is removed, it has been scanned at most twice before: once as a redundant sensor and once when it is found as \(s_{g(i)}\). Thus, each sensor in \(S_R\) is scanned at most three times. Hence, after the two lists \(S_L\) and \(S_R\) are obtained, the running time of the algorithm is \(O(n)\).
Theorem 1
After \(O(n\log n)\) time preprocessing, for any \({\lambda }\), we can determine whether \({\lambda }^*\le {\lambda }\) in \(O(n)\) time; further, if \({\lambda }^*\le {\lambda }\), we can compute a feasible solution in \(O(n)\) time.
Another Decision Version
Our optimization algorithm in Sect. 2.2 also needs to determine whether \({\lambda }^*\) is strictly less than \({\lambda }\) (i.e., \({\lambda }^*<{\lambda }\)) for any \({\lambda }\). By modifying our algorithm for Theorem 1, we have the following result.
Theorem 2
After \(O(n\log n)\) time preprocessing, for any value \({\lambda }\), we can determine whether \({\lambda }^*<{\lambda }\) in \(O(n)\) time.
Proof
We first apply the algorithm for Theorem 1 on the value \({\lambda }\). If the algorithm reports \({\lambda }^*>{\lambda }\), then we know \({\lambda }^*<{\lambda }\) is false. Otherwise, we have \({\lambda }^*\le {\lambda }\). In the following, we modify the algorithm for Theorem 1 to determine whether \({\lambda }^*<{\lambda }\), i.e., \({\lambda }^*\) is strictly smaller than \({\lambda }\). Note that this is equivalent to deciding whether \({\lambda }^*\le {\lambda }-\varepsilon \) for some arbitrarily small constant \(\varepsilon >0\). Of course, we cannot enumerate all such small values \(\varepsilon \). Instead, we add a new mechanism to the algorithm for Theorem 1 such that the resulting displacement of each sensor is strictly smaller than \({\lambda }\).
At the start of the algorithm, we move all sensors to the right by a distance \({\lambda }\) to obtain the configuration \(C_0\). But, the displacement of each sensor should be strictly less than \({\lambda }\). To ensure this, later in the algorithm, if the destination of a sensor \(s_i\) is set as \(y_i=x_i^{\prime }\), then we adjust this destination of \(s_i\) by moving it to the left slightly such that \(s_i\)’s displacement is strictly less than \({\lambda }\).
Consider a general step \(i\) of the algorithm. We define the set \(S_{i1}\) in the same way as before, i.e., it consists of all sensors covering the point \(p^+(R_{i-1})\) in \(C_{i-1}\). If \(S_{i1}\ne \emptyset \), then the algorithm is the same as before. In this case, the sensor \(s_{g(i)}\) chosen in this step has a displacement of exactly \({\lambda }\), which is actually “illegal” since the displacement of each sensor should be strictly less than \({\lambda }\). We will address this issue later. However, if \(S_{i1}=\emptyset \), then the set \(S_{i2}\) is defined slightly differently from before. Here, since \(S_{i1}=\emptyset \), we have to use a sensor to the right of \(R_{i-1}\) in \(C_{i-1}\) to cover \(p^+(R_{i-1})\). Since the displacement of each sensor should be strictly less than \({\lambda }\), we do not allow any sensor to move to the left by exactly the distance \(2{\lambda }\). To reflect this difference, we define \(S_{i2}\) as the set of sensors in \(C_{i-1}\) each of which has its left extension larger than \(R_{i-1}\) and strictly smaller than \(R_{i-1}+2{\lambda }\) (previously, it was “at most”). In this way, if we move a sensor in \(S_{i2}\) to the left to cover \(p^+(R_{i-1})\), then the displacement of that sensor is strictly less than \({\lambda }\). The rest of the algorithm is the same as before. We define the Type I and Type II sensors in the same way as before.
If the algorithm terminates without finding a feasible solution, then it must be \({\lambda }^*\ge {\lambda }\); otherwise, the algorithm finds a “feasible” solution SOL with a critical set \(S^c=\{s_{g(1)},s_{g(2)},\ldots ,s_{g(m)}\}\). But, this does not necessarily mean \({\lambda }^*<{\lambda }\) since in SOL, the displacements of some sensors in \(S^c\) may be exactly \({\lambda }\). Specifically, all Type I sensors in \(S^c\) are in the same positions as they are in \(C_0\) and thus their displacements are exactly \({\lambda }\). In contrast, during the algorithm, the Type II sensors in \(S^c\) have been moved strictly to the left with respect to their positions in \(C_0\); further, due to our new definition of the set \(S_{i2}\), the displacements of all Type II sensors are strictly less than \({\lambda }\). Therefore, if there is no Type I sensor in \(S^c\), then the displacement of each sensor in \(S^c\) is strictly less than \({\lambda }\) and thus we have \({\lambda }^*<{\lambda }\). Below we assume \(S^c\) contains at least one Type I sensor. To make sure that \({\lambda }^*<{\lambda }\) holds, we need to find a real feasible solution in which the displacement of each sensor in \(S\) is strictly less than \({\lambda }\). On the other hand, to make sure that \({\lambda }^*\ge {\lambda }\) holds, we must show that there is no real feasible solution. For this, we apply the following algorithmic procedure.
We seek to adjust the solution SOL to produce a real feasible solution. According to our algorithm, for each sensor \(s_i\in S^c\), if it is a Type I sensor, then \(y_i=x_i^{\prime }\) and thus its displacement is exactly \({\lambda }\); otherwise, its displacement is less than \({\lambda }\). The purpose of our adjustment of SOL is to move all Type I sensors slightly to the left so that (1) their displacements are strictly less than \({\lambda }\), and (2) we can still form a coverage of \(B\). In certain cases, we may need to use some sensors in \(S\setminus S^c\) as well. Also, we may end up with the conclusion that no real feasible solution exists.
According to our algorithm, after finding the last sensor \(s_{g(m)}\) in \(S^c\), we have \(R_m\ge L\). If \(R_m>L\), then we can always adjust SOL to obtain a real feasible solution by shifting each sensor in \(S^c\) to the left by a very small value \(\varepsilon \) such that (1) the resulting displacement of each sensor in \(S^c\) is less than \({\lambda }\), and (2) the sensors of \(S^c\) still form a coverage of \(B\). Note that there always exists such a small value \(\varepsilon \) such that the above adjustment is possible. Therefore, if \(R_m>L\), then we have \({\lambda }^*<{\lambda }\).
If \(R_m=L\), however, then the above strategy does not work. There are two cases. If there is a sensor \(s_t\in S\setminus S^c\) such that \(x_t\in (L-{\lambda }-r_t,L+{\lambda }+r_t)\), then we can also obtain a real feasible solution by shifting the sensors of \(S^c\) slightly to the left as above and using the sensor \(s_t\) to cover the remaining part of \(B\) around \(L\) that is no longer covered by the shifted sensors of \(S^c\); thus we also have \({\lambda }^*<{\lambda }\). Otherwise, we claim that it must be \({\lambda }^*\ge {\lambda }\). Below we prove this claim.
Consider the rightmost Type I sensor \(s_i\) in \(S^c\). Suppose \(s_i=s_{g(j)}\), i.e., \(s_i\) is determined in step \(j\). Thus, \(s_i\) is at \(x_i^{\prime }\) in SOL. Let \(\varepsilon >0\) be an arbitrarily small value (we will determine below how small it should be). Since we have assumed that the extensions of all sensors are different, the value \(\varepsilon \) can be made small enough such that by moving \(s_i\) to \(x_i^{\prime }-\varepsilon \) in \(C_0\), the relative order of the extensions of all sensors remains the same as before. Further, according to our algorithm above, the value \(\varepsilon \) can also be small enough such that the behavior of the algorithm is the same as before, i.e., the algorithm finds the same critical set \(S^c\) with the same cover order as before. It is easy to see that such a small value \(\varepsilon \) always exists. Note that our task here is to prove our claim \({\lambda }^*\ge {\lambda }\) is true, and knowing that such a value \(\varepsilon \) exists is sufficient for our purpose and we need not actually find such a value \(\varepsilon \) in our algorithm.
Now, in step \(j\), the new value \(R_j\), which is the right extension of \(s_i\), is \(\varepsilon \) smaller than its value before since \(s_i\) was at \(x^{\prime }_i\) in \(C_0\). Because \(s_i\) is the rightmost Type I sensor in \(S^c\), after step \(j\), all sensors in \(S^c\) determined after \(s_i\) (if any) are of Type II and thus are moved to the left such that they are all in attached positions along with \(s_i\), which implies that the right extension of the last sensor \(s_{g(m)}\) in \(S^c\) is also \(\varepsilon \) smaller than its previous value (which was \(L\)). Hence, after step \(m\), the sensors in \(S_m\) cover \([0,L-\varepsilon ]\). As discussed above, if \(\varepsilon \) is made small enough, the behavior of the algorithm is the same as before. By a similar analysis, we can also establish a result similar to Lemma 2. Namely, \([0,L-\varepsilon ]\) is the largest left-aligned interval that can be covered by the sensors in \(S_m^{\prime }\) in this setting (here, \(S_m^{\prime }\) is the set of sensors whose right extensions are at most \(L-\varepsilon \) in the configuration after step \(m\)). We omit the detailed analysis for this, which is very similar to that for Lemma 2. Note that \(S^c=S_m\). Since there is no sensor \(s_t\in S\setminus S^c\) such that \(x_t\in (L-{\lambda }-r_t,L+{\lambda }+r_t)\), the interval \((L-\varepsilon ,L]\) cannot be fully covered by the sensors in \(S\). The above discussion implies that if we do not allow the displacement of \(s_i\) to be larger than \({\lambda }-\varepsilon \), then there would be no feasible solution even if we allow the displacements of some other sensors (i.e., those Type I sensors in \(S^c\) before \(s_i\), if any) to be larger than \({\lambda }-\varepsilon \) (but at most \({\lambda }\)). Thus, \({\lambda }^*\le {\lambda }-\varepsilon \) cannot be true. That is, \({\lambda }^*>{\lambda }-\varepsilon \) holds.
Further, it is easy to see that, by a similar argument, for any fixed value \(\varepsilon ^{\prime }>0\) with \(\varepsilon ^{\prime }<\varepsilon \), we also have \({\lambda }^*>{\lambda }-\varepsilon ^{\prime }\). Hence, we obtain \({\lambda }^*\ge {\lambda }\).
This finishes the discussion on how to determine whether \({\lambda }^*<{\lambda }\). It is easy to see that the above algorithm can also be implemented in \(O(n)\) time for each value \({\lambda }\), after \(O(n\log n)\) time preprocessing. The theorem thus follows. \(\square \)
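The modified test can be sketched in the same style as before (sensors as (coordinate, radius) pairs, barrier \([0,L]\)); for brevity this sketch recomputes the sets in each step rather than using the linear-time scan, and the names are our own. It answers whether \({\lambda }^*<{\lambda }\): the bound defining \(S_{i2}\) is strict, and when \(R_m=L\) it looks for a spare sensor near \(L\).

```python
def decide_strict(sensors, L, lam):
    """Return True iff lam* < lam, following the modified algorithm:
    Type II sensors must have left extension strictly below p1 + 2*lam,
    and a final adjustment check is applied when R_m = L."""
    n = len(sensors)
    ext = [[x + lam - r, x + lam + r] for x, r in sensors]   # C_0
    chosen, has_type1 = set(), False
    p1 = 0.0
    while p1 < L:
        Si1 = [i for i in range(n)
               if i not in chosen and ext[i][0] <= p1 < ext[i][1]]
        if Si1:
            g = max(Si1, key=lambda i: ext[i][1])
            chosen.add(g)
            has_type1 = True            # its displacement is exactly lam
            p1 = ext[g][1]
            continue
        # Strict bound: left extension < p1 + 2*lam (not "at most").
        Si2 = [i for i in range(n)
               if i not in chosen and p1 < ext[i][0] < p1 + 2 * lam]
        if not Si2:
            return False                # lam* >= lam
        g = min(Si2, key=lambda i: ext[i][1])
        chosen.add(g)                   # Type II: displacement < lam
        ext[g] = [p1, p1 + 2 * sensors[g][1]]
        p1 = ext[g][1]
    if not has_type1 or p1 > L:
        return True                     # shifting by a tiny eps succeeds
    # R_m = L: feasible strictly below lam iff some spare sensor can
    # cover a neighborhood of L.
    return any(L - lam - r < x < L + lam + r
               for i, (x, r) in enumerate(sensors) if i not in chosen)
```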
Theorems 1 and 2 together lead to the following corollary.
Corollary 1
After \(O(n\log n)\) time preprocessing, for any value \({\lambda }\), we can determine whether \({\lambda }^*={\lambda }\) in \(O(n)\) time.
The Optimization Version of the General BCLS
In this section, we discuss the optimization version of the general BCLS problem. We show that it is solvable in \(O(n^2\log n)\) time, thus settling the open problem in [8].
It should be pointed out that if we could determine a set \(\varLambda \) of candidate values such that \({\lambda }^*\in \varLambda \), then we could use our decision algorithms given in Sect. 2.1 to find \({\lambda }^*\) in \(\varLambda \). We will use this approach in Sect. 3 for the uniform case. However, so far it is not clear to us how to determine such a set \(\varLambda \) for the general case. Below, we use a different approach.
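To make the candidate-set idea concrete: if a sorted set \(\varLambda \) containing \({\lambda }^*\) were available, then \({\lambda }^*\) would be the smallest value at which the decision procedure answers yes, found by binary search. The sketch below assumes only a monotone decision oracle (True iff \({\lambda }\ge {\lambda }^*\)); the names are our own.

```python
def smallest_feasible(candidates, decide):
    """Binary search a sorted candidate list for the smallest value at
    which the monotone predicate `decide` is True.  With the O(n)-time
    decision procedure this costs O(n log |candidates|) oracle time.
    Assumes decide(candidates[-1]) is True."""
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(candidates[mid]):
            hi = mid            # candidates[mid] is feasible; go left
        else:
            lo = mid + 1        # infeasible; lam* is to the right
    return candidates[lo]
```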
One main difficulty for solving the problem is that we do not know the order of the sensors in the optimal solution. Our strategy is to determine a critical set of sensors and their cover order in a feasible solution for the (unknown) optimal value \({\lambda }^*\). The idea is somewhat similar to parametric search [6, 14] and here we “parameterize” our algorithm for Theorem 1. But, unlike the typical parametric search [6, 14], our approach does not involve any parallel scheme and is practical. We first give an overview of this algorithm. In the following discussion, the “decision algorithm” refers to our algorithm for Theorem 1 unless otherwise stated.
Recall that given any value \({\lambda }\), step \(i\) of our decision algorithm determines the sensor \(s_{g(i)}\) and obtains the set \(S_i=\{s_{g(1)},s_{g(2)},\ldots ,s_{g(i)}\}\), in this order, which we also call the cover order of the sensors in \(S_i\). In our optimization algorithm, we often use \({\lambda }\) as a variable. Thus, \(S_i({\lambda })\) (resp., \(R_i({\lambda })\), \(s_{g(i)}({\lambda })\), and \(C_i({\lambda })\)) refers to the corresponding \(S_i\) (resp., \(R_i\), \(s_{g(i)}\), and \(C_i\)) obtained by running our decision algorithm on the specific value \({\lambda }\). Denote by \(C_I\) the configuration of the input.
Our optimization algorithm takes at most \(n\) steps. Initially, let \(S_0({\lambda }^*)=\emptyset , R_0({\lambda }^*)=0, {\lambda }^1_0=0\), and \({\lambda }^2_0=+\infty \). For each \(i\ge 1\), step \(i\) receives an interval \(({\lambda }^1_{i-1},{\lambda }^2_{i-1})\) and a sensor set \(S_{i-1} ({\lambda }^*)\), with the following algorithm invariants:
- \({\lambda }^*\in ({\lambda }^1_{i-1},{\lambda }^2_{i-1})\).
- For any value \({\lambda }\in ({\lambda }^1_{i-1},{\lambda }^2_{i-1})\), \(S_{i-1}({\lambda })=S_{i-1} ({\lambda }^*)\) and their cover orders are the same.
Step \(i\) either finds the value \({\lambda }^*\) or determines a sensor \(s_{g(i)}({\lambda }^*)\). The interval \(({\lambda }^1_{i-1},{\lambda }^2_{i-1})\) will shrink to a new interval \(({\lambda }^1_{i},{\lambda }^2_{i})\subseteq ({\lambda }^1_{i-1},{\lambda }^2_{i-1})\) and we also obtain the set \(S_i({\lambda }^*)=S_{i-1}({\lambda }^*)\cup \{s_{g(i)}({\lambda }^*)\}\). All these can be done in \(O(n\log n)\) time. The details of the algorithm are given below.
Consider a general step \(i\) for \(i\ge 1\), in which we have the interval \(({\lambda }^1_{i-1},{\lambda }^2_{i-1})\) and the set \(S_{i-1}({\lambda }^*)\). While discussing the algorithm, we will also prove inductively the following lemma about the function \(R_{i}({\lambda })\) with variable \({\lambda }\in ({\lambda }_{i}^1,{\lambda }_{i}^2)\).
Lemma 4
For any step \(i\) with \(i\ge 0\), if the algorithm does not stop after the step, then the following hold:
- (a) The function \(R_{i}({\lambda })\) for \({\lambda }\in ({\lambda }_{i}^1,{\lambda }_{i}^2)\) is a line segment of slope 1 or 0.
- (b) We can compute the function \(R_{i}({\lambda })\) for \({\lambda }\in ({\lambda }_{i}^1,{\lambda }_{i}^2)\) explicitly in \(O(n)\) time.
- (c) \(R_{i}({\lambda })<L\) for any \({\lambda }\in ({\lambda }_{i}^1,{\lambda }_{i}^2)\).
In the base case for \(i=0\), the statement of Lemma 4 obviously holds. We assume the lemma statement holds for \(i-1\), in particular, the function \(R_{i-1}({\lambda })\) for \({\lambda }\in ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\) is already known. We will show that after step \(i\) the lemma statement holds for \(i\), and thus the lemma will be proved.
Again, in step \(i\), we need to determine the sensor \(s_{g(i)}({\lambda }^*)\) and let \(S_i({\lambda }^*)=S_{i-1}({\lambda }^*)\cup \{s_{g(i)}({\lambda }^*)\}\). We will also obtain an interval \(({\lambda }_i^1,{\lambda }_i^2)\) such that \({\lambda }^*\in ({\lambda }_i^1,{\lambda }_i^2)\subseteq ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\) and for any \({\lambda }\in ({\lambda }_i^1,{\lambda }_i^2)\), \(S_i({\lambda }) = S_i({\lambda }^*)\) holds (with the same cover order).
To find the sensor \(s_{g(i)}({\lambda }^*)\), we first determine the set \(S_{i1}({\lambda }^*)\). Recall that \(S_{i1}({\lambda }^*)\) consists of all sensors covering the point \(p^+(R_{i-1}({\lambda }^*))\) in the configuration \(C_{i-1}({\lambda }^*)\). For each sensor in \(S\setminus S_{i-1}({\lambda }^*)\), its position in the configuration \(C_{i-1}({\lambda })\) with respect to \({\lambda }\in ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\) is a function of slope \(1\). As \({\lambda }\) increases in \(({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\), by our assumption that Lemma 4(a) holds for \(i-1\), the function \(R_{i-1}({\lambda })\) is a line segment of slope \(1\) or \(0\). If \(R_{i-1}({\lambda })\) has slope \(1\), then the relative position of \(R_{i-1}({\lambda })\) in \(C_{i-1}({\lambda })\) does not change, and thus the set \(S_{i1}({\lambda })\) does not change; if \(R_{i-1}({\lambda })\) has slope \(0\), then the relative position of \(R_{i-1}({\lambda })\) in \(C_{i-1}({\lambda })\) moves monotonically to the left. Hence, there are \(O(n)\) values of \({\lambda }\) in \(({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\) that can incur a change to the set \(S_{i1}({\lambda })\), and each such value corresponds to a sensor extension (e.g., Fig. 6); further, these values can be determined in \(O(n\log n)\) time by a simple sweeping process (we omit the details). Let \(\varLambda _{i1}\) be the set of all these \({\lambda }\) values. Let \(\varLambda _{i1}\) also contain both \({\lambda }_{i-1}^1\) and \({\lambda }_{i-1}^2\); thus, \({\lambda }_{i-1}^1\) and \({\lambda }_{i-1}^2\) are the smallest and largest values in \(\varLambda _{i1}\), respectively. We sort the values in \(\varLambda _{i1}\). For any two consecutive values \({\lambda }_1<{\lambda }_2\) in the sorted \(\varLambda _{i1}\), the set \(S_{i1}({\lambda })\) is the same for all \({\lambda }\in ({\lambda }_1,{\lambda }_2)\).
By using binary search on the sorted \(\varLambda _{i1}\) and our decision algorithm in Theorem 1, we determine (in \(O(n\log n)\) time) the two consecutive values \({\lambda }_1\) and \({\lambda }_2\) in \(\varLambda _{i1}\) such that \({\lambda }_1<{\lambda }^*\le {\lambda }_2\). Further, by Corollary 1, we determine whether \({\lambda }^*={\lambda }_2\). If \({\lambda }^*={\lambda }_2\), then we terminate the algorithm. Otherwise, based on our discussion above, \(S_{i1}({\lambda }^*)= S_{i1}({\lambda })\) for any \({\lambda }\in ({\lambda }_1,{\lambda }_2)\). Thus, to compute \(S_{i1}({\lambda }^*)\), we can pick an arbitrary \({\lambda }\) in \(({\lambda }_1,{\lambda }_2)\) and find \(S_{i1}({\lambda })\) in the same way as in our decision algorithm. Hence, \(S_{i1}({\lambda }^*)\) can be easily found in \(O(n\log n)\) time. Note that \({\lambda }^*\in ({\lambda }_1,{\lambda }_2) \subseteq ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\). Depending on whether \(S_{i1}({\lambda }^*)\ne \emptyset \), there are two cases.
- If \(S_{i1}({\lambda }^*)\ne \emptyset \), then \(s_{g(i)}({\lambda }^*)\) is the sensor in \(S_{i1}({\lambda }^*)\) with the largest right extension. An obvious observation is that for any \({\lambda }\in ({\lambda }_1,{\lambda }_2)\), the sensor in \(S_{i1}({\lambda })\) with the largest right extension is the same, which can be easily found. We let \({\lambda }^1_i={\lambda }_1\) and \({\lambda }^2_i={\lambda }_2\). Let \(S_i({\lambda }^*)=S_{i-1}({\lambda }^*)\cup \{s_{g(i)}({\lambda }^*)\}\). The algorithm invariants hold. Further, as \({\lambda }\) increases in \(({\lambda }^1_i,{\lambda }^2_i)\), the right extension of \(s_{g(i)}({\lambda })\), which is \(R_i({\lambda })\), increases by the same amount. That is, the function \(R_i({\lambda })\) on \(({\lambda }^1_i,{\lambda }^2_i)\) is a line segment of slope \(1\). Therefore, we can compute \(R_i({\lambda })\) on \(({\lambda }^1_i,{\lambda }^2_i)\) explicitly in constant time. This also shows Lemma 4(a) and (b) hold for \(i\).
- If \(S_{i1}({\lambda }^*)=\emptyset \), then we need to compute \(S_{i2}({\lambda }^*)\). For any \({\lambda }\in ({\lambda }_1,{\lambda }_2)\), the set \(S_{i2}({\lambda })\) consists of all sensors whose left extensions are larger than \(R_{i-1}({\lambda })\) and at most \(R_{i-1}({\lambda })+2{\lambda }\) in the configuration \(C_{i-1}({\lambda })\). Recall that the function \(R_{i-1}({\lambda })\) on \(({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\) is linear with slope \(1\) or \(0\). Due to \(({\lambda }_1,{\lambda }_2)\subseteq ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\), the linear function \(R_{i-1}({\lambda })+2{\lambda }\) on \(({\lambda }_1,{\lambda }_2)\) has slope \(3\) or \(2\). Again, as \({\lambda }\) increases, the position of each sensor in \(S\setminus S_{i-1}({\lambda }^*)\) in \(C_{i-1}({\lambda })\) is a linear function of slope \(1\). Hence, as \({\lambda }\) increases, the relative position of \(R_{i-1}({\lambda })+2{\lambda }\) in \(C_{i-1}({\lambda })\) moves to the right, while the relative position of \(R_{i-1}({\lambda })\) in \(C_{i-1}({\lambda })\) either does not change or moves to the left. Therefore, there are \(O(n)\) values of \({\lambda }\) in \(({\lambda }_1,{\lambda }_2)\) each of which incurs some change to the set \(S_{i2}({\lambda })\), and each such value corresponds to the left extension of a sensor (e.g., Fig. 7). Further, these values can be determined in \(O(n\log n)\) time by a sweeping process (we omit the details). (In fact, as \({\lambda }\) increases, the set \(S_{i2}({\lambda })\) grows monotonically.) Let \(\varLambda _{i2}\) denote the set of these \({\lambda }\) values, and let \(\varLambda _{i2}\) also contain both \({\lambda }_1\) and \({\lambda }_2\). Again, \(|\varLambda _{i2}|=O(n)\). We sort the values in \(\varLambda _{i2}\).
Using binary search on the sorted \(\varLambda _{i2}\) and our decision algorithm in Theorem 1, we determine (in \(O(n\log n)\) time) the two consecutive values \({\lambda }^{\prime }_1\) and \({\lambda }^{\prime }_2\) in \(\varLambda _{i2}\) such that \({\lambda }^{\prime }_1<{\lambda }^*\le {\lambda }^{\prime }_2\). Further, by Corollary 1, we determine whether \({\lambda }^*={\lambda }^{\prime }_2\). If \({\lambda }^*={\lambda }^{\prime }_2\), then we are done. Otherwise, \(S_{i2}({\lambda }^*)= S_{i2}({\lambda })\) for any \({\lambda }\in ({\lambda }^{\prime }_1,{\lambda }^{\prime }_2)\), which can be easily found. Note that \({\lambda }^*\in ({\lambda }^{\prime }_1,{\lambda }^{\prime }_2)\subseteq ({\lambda }_1,{\lambda }_2)\). The above obtains the set \(S_{i2}({\lambda }^*)\). We claim that \(S_{i2}({\lambda }^*)\ne \emptyset \). Indeed, due to our assumption that Lemma 4 holds for \(i-1\), we have \(R_{i-1}({\lambda })<L\) for \({\lambda }\in ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\). Suppose to the contrary that \(S_{i2}({\lambda }^*)= \emptyset \). Then, the sensor \(s_{g(i)}({\lambda }^*)\) does not exist, which implies that \(S_{i-1}({\lambda }^*)\) is the critical set for covering the barrier \(B\) in an optimal solution. By our algorithm invariants, \({\lambda }^*\in ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\) and \(S_{i-1}({\lambda }^*)\) is the same as \(S_{i-1}({\lambda })\) for any \({\lambda }\in ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\). Due to \(R_{i-1}({\lambda })<L\) for \({\lambda }\in ({\lambda }_{i-1}^1,{\lambda }_{i-1}^2)\), the sensors in \(S_{i-1}({\lambda }^*)\) cannot cover the entire barrier \(B\), contradicting the fact that \(S_{i-1}({\lambda }^*)\) is the critical set in the optimal solution. Hence, \(S_{i2}({\lambda }^*)\ne \emptyset \), and \(s_{g(i)}({\lambda }^*)\) is the sensor in \(S_{i2}({\lambda }^*)\) with the smallest right extension.
As before, the sensor in \(S_{i2}({\lambda })\) with the smallest right extension is the same for any \({\lambda }\in ({\lambda }^{\prime }_1,{\lambda }^{\prime }_2)\). Thus, \(s_{g(i)}({\lambda }^*)\) can be easily determined. We let \({\lambda }^1_i={\lambda }^{\prime }_1\) and \({\lambda }^2_i={\lambda }^{\prime }_2\). Let \(S_i({\lambda }^*)=S_{i-1}({\lambda }^*)\cup \{s_{g(i)}({\lambda }^*)\}\). The algorithm invariants hold. Further, we examine the function \(R_i({\lambda })\), i.e., the right extension of \(s_{g(i)}({\lambda })\) in the configuration \(C_i({\lambda })\), as \({\lambda }\) increases in \(({\lambda }^1_i,{\lambda }^2_i)\). Since \(s_{g(i-1)}({\lambda }^*)\) and \(s_{g(i)}({\lambda }^*)\) are always in attached positions in this case, for any \({\lambda }\in ({\lambda }^1_i,{\lambda }^2_i)\), we have \(R_i({\lambda })=R_{i-1}({\lambda })+2r_{g(i)}\). Thus, the function \(R_i({\lambda })\) is a vertical shift of \(R_{i-1}({\lambda })\) by the distance \(2r_{g(i)}\). Because we already know the function \(R_{i-1}({\lambda })\) explicitly for \({\lambda }\in ({\lambda }^1_i,{\lambda }^2_i)\) (a line segment of slope \(1\) or \(0\)), the function \(R_i({\lambda })\), which is also a line segment of slope \(1\) or \(0\), can be computed in constant time. Note that this shows that Lemma 4(a) and (b) hold for \(i\).
If the algorithm does not stop, the above determines an interval \(({\lambda }^1_i,{\lambda }^2_i)\) such that the algorithm invariants and Lemma 4(a) and (b) hold on the interval. Below, we do further processing such that Lemma 4(c) also holds.
Because the function \(R_i({\lambda })\) on \(({\lambda }^1_i,{\lambda }^2_i)\) is a line segment of slope \(1\) or \(0\), there are three cases depending on how \(R_i({\lambda })\) compares with \(L\): (1) \(R_i({\lambda })<L\) for any \({\lambda }\in ({\lambda }^1_i,{\lambda }^2_i)\), (2) \(R_i({\lambda })>L\) for any \({\lambda }\in ({\lambda }^1_i,{\lambda }^2_i)\), and (3) there exists \({\lambda }^{\prime }\in ({\lambda }^1_i,{\lambda }^2_i)\) such that \(R_i({\lambda }^{\prime })=L\).
1. For Case (1), we proceed to the next step, along with the interval \(({\lambda }^1_i,{\lambda }^2_i)\). Clearly, the algorithm invariants hold and Lemma 4(c) holds for \(i\).
2. For Case (2), the next lemma shows that it cannot happen, because \({\lambda }^*\in ({\lambda }^1_i,{\lambda }^2_i)\).
Lemma 5
It is not possible that \(R_i({\lambda })>L\) for any \({\lambda }\in ({\lambda }^1_i,{\lambda }^2_i)\).
Proof
Assume to the contrary that \(R_i({\lambda })>L\) for any \({\lambda }\in ({\lambda }^1_i,{\lambda }^2_i)\). Since \({\lambda }^*\in ({\lambda }^1_i,{\lambda }^2_i)\), let \({\lambda }^{\prime \prime }\) be any value in \(({\lambda }^1_i,{\lambda }^*)\). Due to \({\lambda }^{\prime \prime }\in ({\lambda }^1_i,{\lambda }^2_i)\), we have \(R_i({\lambda }^{\prime \prime })>L\). But this would imply that we have found a feasible solution where the displacement of each sensor is at most \({\lambda }^{\prime \prime }\), which is smaller than \({\lambda }^*\), a contradiction. \(\square \)
3. For Case (3), note that the slope of \(R_i({\lambda })\) on \(({\lambda }^1_i,{\lambda }^2_i)\) cannot be \(0\). To see this, suppose to the contrary that the slope of \(R_i({\lambda })\) on \(({\lambda }^1_i,{\lambda }^2_i)\) is \(0\). Then, \(R_i({\lambda })=L\) for any \({\lambda }\in ({\lambda }^1_i,{\lambda }^2_i)\). Since \({\lambda }^*\in ({\lambda }^1_i,{\lambda }^2_i)\), for any \({\lambda }^{\prime }\in ({\lambda }^1_i,{\lambda }^*)\), \(R_i({\lambda }^{\prime })=L\), which means that there is a feasible solution where the displacement of each sensor is at most \({\lambda }^{\prime }<{\lambda }^*\), a contradiction. Hence, \(R_i({\lambda })\) on \(({\lambda }^1_i,{\lambda }^2_i)\) is a line segment of slope \(1\), and thus we can determine in constant time the unique value \({\lambda }^{\prime }\in ({\lambda }^1_i,{\lambda }^2_i)\) such that \(R_i({\lambda }^{\prime })=L\). Clearly, \({\lambda }^*\le {\lambda }^{\prime }\). By Corollary 1, we determine whether \({\lambda }^*={\lambda }^{\prime }\). If \({\lambda }^*={\lambda }^{\prime }\), then we terminate the algorithm; otherwise, we have \({\lambda }^*\in ({\lambda }^1_i,{\lambda }^{\prime })\) and update \({\lambda }^2_i\) to \({\lambda }^{\prime }\). We proceed to the next step, along with the interval \(({\lambda }^1_i,{\lambda }^2_i)\). Again, the algorithm invariants hold and Lemma 4(c) holds for \(i\).
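The constant-time computation in Case (3) amounts to intersecting a slope-\(1\) line with the horizontal line at \(L\). A minimal Python sketch, again representing \(R_i\) as a (slope, intercept) pair (our own convention, for illustration):

```python
def case3_crossing(R_i, L, lam1, lam2):
    """R_i is a line (slope, intercept) on the open interval (lam1, lam2);
    in Case (3) its slope must be 1, as argued above.  Return the unique
    lam' in (lam1, lam2) with R_i(lam') == L, or None if the crossing
    lies outside the interval (then Case (1) or (2) applies instead)."""
    slope, b = R_i
    assert slope == 1, "in Case (3) the slope of R_i cannot be 0"
    lam = L - b                       # solve lam + b == L
    return lam if lam1 < lam < lam2 else None
```

For example, with \(R_i({\lambda })={\lambda }+2\) on \((2,4)\) and \(L=5\), the crossing is \({\lambda }^{\prime }=3\).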
This finishes the discussion of step \(i\) of our algorithm. The running time of step \(i\) is \(O(n\log n)\). Note that in each case where we proceed to the next step, the statement of Lemma 4 holds for \(i\), and thus Lemma 4 has been proved.
In the following lemma, we show that the algorithm must stop within at most \(n\) steps.
Lemma 6
The algorithm finds \({\lambda }^*\) in at most \(n\) steps.
Proof
Suppose that, when we run our decision algorithm with \({\lambda }={\lambda }^*\), the critical set is \(S_k({\lambda }^*)\) for some \(k\). Since there are \(n\) sensors in the input, we have \(1\le k\le n\).
We claim that our algorithm finds \({\lambda }^*\) in at most \(k\) steps. Suppose to the contrary that the algorithm does not find \({\lambda }^*\) in the first \(k\) steps. In other words, the algorithm does not stop after step \(k\). By the algorithm invariants, after step \(k\), we have an interval \(({\lambda }_k^1,{\lambda }_k^2)\) such that \({\lambda }^* \in ({\lambda }_k^1,{\lambda }_k^2)\) and \(S_k({\lambda })=S_k({\lambda }^*)\) for any \({\lambda } \in ({\lambda }_k^1,{\lambda }_k^2)\). Further, by Lemma 4, \(R_k({\lambda })<L\) for any \({\lambda } \in ({\lambda }_k^1,{\lambda }_k^2)\), which means that the sensors in \(S_k({\lambda }^*)\) cannot cover the entire barrier \(B\) for any \({\lambda } \in ({\lambda }_k^1,{\lambda }_k^2)\), contradicting the fact that \(S_k({\lambda }^*)\) is the critical set for the decision algorithm when \({\lambda }={\lambda }^*\).
Therefore, our algorithm finds \({\lambda }^*\) in at most \(k\) steps. The lemma thus follows. \(\square \)
After \({\lambda }^*\) is found, by applying our decision algorithm with \({\lambda }={\lambda }^*\), we produce an optimal solution in which the displacement of every sensor is at most \({\lambda }^*\). Since each step takes \(O(n\log n)\) time, the total running time of the algorithm is \(O(n^2\log n)\).
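The overall control flow of the optimization algorithm can be summarized by the following loop; `run_step` is a hypothetical stand-in for one step \(i\) as described above, and the interface is ours, for illustration only.

```python
def find_lambda_star(run_step, n):
    """Outer loop of the optimization algorithm.  run_step(i, interval)
    models step i: it returns ('found', lam_star) if the algorithm
    terminates in this step, or ('continue', new_interval) with the
    narrowed interval (lam_i^1, lam_i^2) otherwise.  By Lemma 6 the
    loop performs at most n steps."""
    interval = (0.0, float('inf'))        # before step 1
    for i in range(1, n + 1):
        status, value = run_step(i, interval)
        if status == 'found':
            return value
        interval = value
    raise RuntimeError("Lemma 6 guarantees termination within n steps")
```

Each invocation of `run_step` corresponds to \(O(n\log n)\) work, giving the \(O(n^2\log n)\) total bound.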
Theorem 3
The general BCLS problem is solvable in \(O(n^2\log n)\) time.
We conclude with a technical remark. The typical parametric search [6, 14] usually returns an interval containing the optimal value and then uses an additional step to find the optimal value. In contrast, our algorithm is guaranteed to find the optimal value \({\lambda }^*\) directly. This is due to the mechanism in our algorithm that requires \(R_i({\lambda })<L\) for any \({\lambda }\in ({\lambda }^1_i,{\lambda }^2_i)\) after each step \(i\) if the algorithm has not terminated. This mechanism plays the role of the additional step used in typical parametric search.