Abstract
We consider the online smoothing problem, in which a tracker is required to maintain distance no more than Δ≥0 from a time-varying signal f while minimizing its own movement. The problem is determined by a metric space (X,d) with an associated cost function c:ℝ→ℝ. Given a signal f_1, f_2, … ∈ X, the tracker is responsible for producing a sequence a_1, a_2, … of elements of X that meet the proximity constraint d(f_i, a_i) ≤ Δ. To complicate matters, the tracker is online (the value a_i may only depend on f_1, …, f_i) and wishes to minimize the cost of its travels, ∑c(d(a_i, a_{i+1})). We evaluate such tracking algorithms competitively, comparing this cost with the cost achieved by an optimal adversary apprised of the entire signal in advance.
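For concreteness, the two quantities defined above (the proximity constraint and the total movement cost) can be evaluated in a few lines; a minimal sketch on the line ℤ with cost function c(x)=x^p (the function names are illustrative, not from the paper):

```python
# Evaluator for the online smoothing objective described above, on Z with
# cost function c(x) = x**p. Names are illustrative.
def tracking_cost(track, p):
    """Total movement cost sum_i c(d(a_i, a_{i+1}))."""
    return sum(abs(b - a) ** p for a, b in zip(track, track[1:]))

def feasible(signal, track, delta):
    """Proximity constraint d(f_i, a_i) <= delta at every step."""
    return all(abs(f - a) <= delta for f, a in zip(signal, track))
```

A competitive analysis then compares `tracking_cost` of an online tracker with that of an offline optimum on the same feasible signal.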
The problem was originally proposed by Yi and Zhang (In: Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1098–1107. ACM Press, New York, 2009), who considered the natural circumstance where the metric spaces are taken to be ℤ^k with the ℓ_2 metric and the cost function is equal to 1 unless the distance is zero (thus the tracker pays a fixed cost for any nonzero motion).

We begin by studying arbitrary metric spaces with the “pay if you move” cost function of Yi and Zhang (2009) described above, and describe a natural randomized algorithm that achieves an O(log b_Δ) competitive ratio, where b_Δ = max_{x∈X} |B_Δ(x)| is the maximum number of points appearing in any ball of radius Δ. We show that this bound is tight.

We then focus on the metric space ℤ with natural families of monotone cost functions c(x)=x ^{p} for some p≥0. We consider both the expansive case (p≥1) and the contractive case (p<1), and show that the natural lazy algorithm performs well in the expansive case. In the contractive case, we introduce and analyze a novel deterministic algorithm that achieves a constant competitive ratio depending only on p. Finally, we observe that by slightly relaxing the guarantee provided by the tracker, one can obtain natural analogues of these algorithms that work in continuous metric spaces.
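The "lazy" rule named above is the natural one-dimensional strategy: stay put until the proximity constraint breaks, then move just far enough to restore it. A minimal sketch on ℤ (the interface and names are illustrative, not taken from the paper):

```python
# Sketch of the natural "lazy" tracker on Z: move only when the proximity
# constraint |f - a| <= delta is violated, and then move minimally, i.e.
# to distance exactly delta from the current signal value.
def lazy_tracker(signal, delta, start=0):
    a = start
    positions = []
    for f in signal:
        if abs(f - a) > delta:
            a = f - delta if f > a else f + delta  # minimal corrective move
        positions.append(a)
    return positions

track = lazy_tracker([0, 3, 7, -2, 10], delta=2)
```

Every position it produces satisfies the constraint by construction; the analyses below bound how much extra movement this laziness can cost relative to the offline optimum.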
References
Bienkowski, M., Schmid, S.: Online function tracking with generalized penalties. In: Proceedings of the 12th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT), pp. 359–370 (2010)
Cormode, G., Garofalakis, M.: Sketching streams through the net: Distributed approximate query tracking. In: Proceedings of the 31st Int. Conference on Very Large Data Bases (VLDB), pp. 13–24 (2005)
Cormode, G., Garofalakis, M., Muthukrishnan, S., Rastogi, R.: Holistic aggregates in a networked world: Distributed tracking of approximate quantiles. In: Proceedings of ACM Special Interest Group on Management of Data (SIGMOD), pp. 25–36. ACM Press, New York (2005)
Cormode, G., Muthukrishnan, S., Yi, K.: Algorithms for distributed functional monitoring. ACM Trans. Algorithms 7(2), 1–21 (2011)
Davis, S., Edmonds, J., Impagliazzo, R.: Online algorithms to minimize resource reallocations and network communication. In: Proceedings of the 9th Int. Workshop on Approximation Algorithm for Combinatorial Optimization (APPROX), pp. 104–115 (2006)
Fiat, A., Karp, R., Luby, M., McGeoch, L., Sleator, D., Young, N.: Competitive paging algorithms. J. Algorithms 12(4), 658–699 (1991)
Keralapura, R., Cormode, G., Ramamirtham, J.: Communication-efficient distributed monitoring of thresholded counts. In: Proceedings of ACM Special Interest Group on Management of Data (SIGMOD), pp. 289–300. ACM Press, New York (2006)
Krumke, S.: Online optimization. OptALI Summer School Auckland (2011)
Smith, W.L.: Renewal theory and its ramifications. J. R. Stat. Soc., Ser. B, Stat. Methodol. 20(2), 243–302 (1958)
Yi, K., Zhang, Q.: Multidimensional online tracking. In: Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1098–1107. ACM Press, New York (2009)
Appendices
Appendix A: Another Proof of the Logarithmic Lower Bound on the Competitive Ratios of Randomized Algorithms in Euclidean Space
We consider the one-dimensional Euclidean metric space ℤ. We first build a high-level picture of the proof. The adversary reports a sequence of random signal values according to a given distribution, which can be partitioned into log(2Δ) phases P_1, P_2, …, P_{log(2Δ)}, such that each P_i lasts for i time steps and contains signal values f_1, …, f_i (note that the subscripts on “f_i” and “P_i” are the same). Furthermore, the signal sequence is constructed so that, for i>1, \(\bigcap_{j=1}^{i}B_{\Delta}(f_{j})\) is exactly half of \(\bigcap_{j=1}^{i-1}B_{\Delta}(f_{j})\). The optimal algorithm can serve the whole sequence with one node, which is the sole element of \(\bigcap_{j=1}^{\log(2\Delta)}B_{\Delta}(f_{j})\). However, in each phase, any deterministic algorithm suffers a fault with probability at least 1/2. So the expected number of faults over all the phases is at least (log(2Δ))/2, and the expected competitive ratio of any deterministic algorithm under this distribution is at least (log(2Δ))/2 = Θ(log b_Δ). Therefore, by Yao’s principle, no randomized algorithm can obtain a competitive ratio better than Ω(log b_Δ) against an oblivious adversary.
Now we present the input distribution used by the adversary in greater detail. In phase P_1, the adversary reports f_1 to be a random value in ℤ. Without loss of generality, assume f_1=0. In phase P_2, the adversary reports 0, and then flips a coin to report Δ or −Δ, each with probability 1/2. In general, in any P_i with i∈{2,…,log(2Δ)}, the input sequence f_1,…,f_{i−1} is repeated, and then another coin is flipped to report a value drawn uniformly from \(\lbrace \mathrm{median}(\bigcap_{j=1}^{i-1}B_{\Delta}(f_{j})) - \Delta,\ \mathrm{median}(\bigcap_{j=1}^{i-1}B_{\Delta}(f_{j})) + \Delta\rbrace\).
The last step is to show that the probability of a fault in P_i is at least 1/2. Indeed, conditioned on the event that the last tracking value chosen by the algorithm in the previous phase satisfies all of the first i−1 signal values in P_i, the probability of a fault when the last signal value is reported is exactly 1/2.
Since the probability of a fault in each phase is at least 1/2, we have shown the logarithmic lower bound for randomized algorithms against an oblivious adversary.
Note that before the final step of each phase, the adversary may not necessarily duplicate the complete process in the previous phase. As long as the region that the tracker might inhabit before the final input of the phase is reported is the same as at the end of the previous round, our lower bound on the probability of a fault during this phase will not change. So the only signal values from the previous round that really need to be repeated are the two values which finally determined the lower and upper limits of that region.
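The construction above can be sketched in code (a simplified rendering: Δ is taken to be a power of two purely so the halving is exact, and the replay of earlier phases is represented by the growing `history` list):

```python
import random

# Sketch of the adversary's input distribution on Z. Each phase replays the
# previous signal values and then reports median(region) +/- Delta, halving
# the feasible region (the intersection of all balls B_Delta(f_j) so far).
DELTA = 64  # power of two, so each halving is exact

def adversarial_signal(delta, rng):
    """Yield the signal of each phase; the feasible region halves per phase."""
    history = [0]                 # phase P_1: report f_1 = 0 (w.l.o.g.)
    lo, hi = -delta, delta        # feasible region = B_delta(0)
    yield list(history)
    while hi - lo >= 2:           # roughly log(2*delta) phases in total
        mid = (lo + hi) // 2      # median of the current feasible region
        nxt = rng.choice([mid - delta, mid + delta])  # the coin flip
        history.append(nxt)
        lo, hi = max(lo, nxt - delta), min(hi, nxt + delta)
        yield list(history)

phases = list(adversarial_signal(DELTA, random.Random(0)))
# After the final phase the feasible region has shrunk to O(1) points, so one
# well-chosen tracker position serves the entire sequence at constant cost.
```

Any deterministic tracker faults with probability at least 1/2 on the final report of each phase, matching the argument above.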
Appendix B: The Lazy Algorithm when p<1
We also perform an amortized analysis of the lazy algorithm in this case. We use the same notation as Sect. 3.1. Define the tracker’s potential at time i to be Φ_i = a_i − o_i. The total amortized cost is \(\sum_{i=1}^{n}{\hat{c}_{i}}=\sum_{i=1}^{n}{c_{i}}+\varPhi_{n}-\varPhi_{0}\). Because we know 0 ≤ Φ_i ≤ 2Δ, we have \(\sum_{i=1}^{n}{\hat{c}_{i}}-\sum_{i=1}^{n}{c_{i}} \leq 2\Delta\). So we consider the same cases as Sect. 3.1 (the reader may refer to the same figures) and prove that in each case \(\hat{c}_{i} \leq O(\Delta^{1-p})\cdot\delta_{o}^{p}\).
 Case 1a :

\(\hat{c}_{i}=c_{i}+\varPhi_{i}-\varPhi_{i-1}=\delta_{a}^{p}+\varepsilon_{i}-\varepsilon_{i-1}=\varepsilon_{i}-\varepsilon_{i-1} \leq\varepsilon_{i} = \varepsilon_{i}^{1-p}\cdot\varepsilon_{i}^{p}\le(2\Delta)^{1-p}\cdot\varepsilon_{i}^{p} \leq(2\Delta)^{1-p}\cdot\delta_{o}^{p}\).
 Case 1b :

\(\hat{c}_{i}=\delta_{a}^{p}+\varepsilon_{i}-\varepsilon_{i-1}=\varepsilon_{i}-\varepsilon_{i-1}=\delta_{o}=\delta_{o}^{1-p}\cdot\delta_{o}^{p}\le(2\Delta)^{1-p}\cdot\delta_{o}^{p}\).
 Case 1c :

\(\hat{c}_{i}=\delta_{a}^{p}+\varepsilon_{i}-\varepsilon_{i-1}=\varepsilon_{i}-\varepsilon_{i-1}\le0 \le\delta_{o}^{p}\).
 Case 2a :

\(\hat{c}_{i}=\delta_{a}^{p}+\varepsilon_{i}-\varepsilon_{i-1}\le\delta_{a}^{p}+\varepsilon_{i} \le\delta_{o}^{p}+(2\Delta)^{1-p}\cdot\delta_{o}^{p} = ((2\Delta)^{1-p}+1)\cdot\delta_{o}^{p}\).
 Case 2b :

\(\hat{c}_{i}=\delta_{a}^{p}+\varepsilon_{i}-\varepsilon_{i-1}\). Here, assume δ_a ≥ 1. If δ_a ≥ δ_o, then \(\delta_{a}^{p}-\delta_{o}^{p} \leq \delta_{a}-\delta_{o}\). So \(\delta_{a}^{p}+\varepsilon_{i}-\varepsilon_{i-1} =\delta_{a}^{p}+\delta_{o}-\delta_{a} \leq\delta_{o}^{p}\). If δ_a < δ_o, then \(\delta_{a}^{p}+\varepsilon_{i}-\varepsilon_{i-1} =\delta_{a}^{p}+\delta_{o}-\delta_{a}\leq\delta_{a}^{p} + \min\{\delta_{o},\varepsilon_{i}\} \leq\delta_{o}^{p} + (2\Delta)^{1-p}\cdot\delta_{o}^{p} =((2\Delta)^{1-p}+1)\cdot\delta_{o}^{p}\).
 Case 2c :

Same analysis as Case 2b.
 Case 2d :

When δ_a ≥ 1, \(\delta_{a}^{p} \le\delta_{a}\), so \(\hat{c}_{i}=\delta_{a}^{p}+\varepsilon_{i}-\varepsilon_{i-1} \le0 \le \delta_{o}^{p}\).
The reader may have noticed that Cases 2b, 2c and 2d neglect the possibility that δ_a may be less than 1. What happens if δ_a < 1 in Cases 2b, 2c and 2d? We know that δ_a < 1 only if Δ is not an integer and a_{i−1} = f_0 or a_{i−1} = f_t − Δ for some t < i. If a_{i−1} = f_0, the total cost up to time i is at most \(\delta_{a}^{p}\) (< 1) more than the optimal cost, which can be ignored. So we only need to consider the situation when a_{i−1} = f_t − Δ for t < i. We analyze the total amortized cost starting at time t until the current time i. We know that a_t = a_{t+1} = ⋯ = a_{i−1} and \(\sum_{j=1}^{i-t} |o_{i-j}-o_{i-j+1}|^{p} \geq |o_{i-1}-o_{i}|^{p} + |o_{t}-o_{i-1}|^{p}\) since p < 1. Let \(\delta_{o}'\) and ε_t denote o_t − o_{i−1} and o_t − a_t respectively. In Fig. 6, we use the left rectangle to represent B_Δ(f_t), the middle rectangle to represent B_Δ(f_{i−1}), and the right rectangle to represent B_Δ(f_i).
We give an example to show that our bound is tight. Suppose:
It follows that ∀t, a _{ t }=t. There is a better tracker b that behaves as follows:
At time t=m(2Δ+1) for some m∈ℤ^{+}, the lazy algorithm incurs a cost of m(2Δ+1). Meanwhile, the superior tracker incurs a cost of m(2Δ+1)^{p}. Therefore, the competitive ratio for the lazy algorithm is Θ(Δ^{1−p}).
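A small simulation illustrates the Θ(Δ^{1−p}) gap. The concrete signal used here, a unit step to the right each time, is an assumption standing in for the elided example, and the "jumping" tracker plays the role of the better tracker b:

```python
# Illustration of the Theta(Delta^(1-p)) gap for the lazy algorithm with
# cost c(x) = x**p, p < 1. The signal f_t = t is an assumed stand-in for
# the paper's example; the jumping tracker plays the role of b.
DELTA = 50
P = 0.5
STEPS = 20 * (2 * DELTA + 1)

signal = list(range(STEPS))  # f_t = t: drift right one unit per step

# Lazy tracker: minimal corrective moves, so (after an initial slack of
# DELTA steps) it pays cost 1**P = 1 at every single step.
a, lazy_cost = 0, 0.0
for f in signal:
    if abs(f - a) > DELTA:
        new_a = f - DELTA if f > a else f + DELTA
        lazy_cost += abs(new_a - a) ** P
        a = new_a

# Jumping tracker b: when violated, jump past the signal to f + DELTA,
# buying 2*DELTA + 1 quiet steps per jump of size 2*DELTA + 1.
b, jump_cost = 0, 0.0
for f in signal:
    if abs(f - b) > DELTA:
        new_b = f + DELTA
        jump_cost += abs(new_b - b) ** P
        b = new_b

ratio = lazy_cost / jump_cost  # roughly (2*DELTA + 1)**(1 - P)
print(f"lazy={lazy_cost:.1f} jumper={jump_cost:.1f} ratio={ratio:.1f}")
```

The measured ratio tracks (2Δ+1)^{1−p}, matching the Θ(Δ^{1−p}) bound above.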
Chen, S., Russell, A.: Online Metric Tracking and Smoothing. Algorithmica 68, 133–151 (2014). https://doi.org/10.1007/s00453-012-9669-8