1 Introduction

Assume there is a provider offering a service which cannot be realized with a traditional cloud solution due to the high amount of induced network traffic. Instead of one centralized instance, the provider chooses to maintain multiple copies of the service which are located close to the end users such that data only has to be sent over a short distance. However, since the different instances might still have to exchange some data for coordination and since maintaining an instance comes with a cost, the number of such instances should be limited. In summary, given a network topology, the provider wants to distribute k copies of the service onto nodes of the topology such that they are close to the end user devices, guaranteeing short response times. Naturally, the user devices appear at different positions in the network over time and the future is unknown to the provider. Each request for the service is answered by the nearest copy. To account for the shifting user base, the provider can move a copy of the service. However, each time a copy is moved, the respective service has to be stopped, migrated, and started again, which renders the copy inaccessible while it is being moved. Therefore, the provider wants to avoid moving a copy over long distances in order to guarantee high availability.

In a previous work [12], a subset of the authors considered the scenario with only one mobile resource. The scenario described above was modeled based on the classical Page Migration problem [8]: A single resource can be moved between two points a and b for costs Dd(a,b), where d(a,b) is the distance between a and b and D ≥ 1 is a constant. In every round a request appears at some point r, and if the current position of the resource is p, it is served for costs d(p,r). The analysis is conducted in the standard framework of online algorithms and competitive analysis. This problem was extended to the Mobile Server problem in [12], which puts a limit on how much the resource (called server) can move in each time step reflecting the idea that the resource can only be moved locally to avoid congestion and have the service available again shortly after the decision to move it.

In our work, we extend the Mobile Server problem to multiple resources: We consider k identical servers located in the Euclidean space (of arbitrary dimension). We use the Euclidean space as an abstraction from concrete topologies and as an idealization of a fine grained network, where each router or base station is a candidate for holding a service instance. Each of the servers may move a distance of at most ms per time step. In each time step, a request appears which has to be served by one of the servers by the end of the time step. The cost function is the same as in the Page Migration problem described above, i.e., the cost for serving a request is equal to the distance to the nearest server and moving a server induces cost equal to the distance times some constant D. We evaluate our algorithms using competitive analysis, where the costs of an algorithm on an instance is compared to the optimal cost on the same instance. Formally, let CAlg(I) be the costs of an algorithm Alg on an input I and COpt(I) the minimal possible cost on I. Algorithm Alg is c-competitive, if CAlg(I) ≤ cCOpt(I) + a for all instances I, where a is a constant independent of I. If a = 0, Alg is strictly c-competitive. Our goal is to state strictly competitive online algorithms where the competitive ratio should not depend on the length of the input sequence.
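The competitiveness condition can be stated as a simple predicate. The following Python sketch is illustrative only; the function name and the per-instance cost values are hypothetical and not derived from any concrete instance of the problem.

```python
def is_c_competitive(cost_alg, cost_opt, c, a=0.0):
    """Check C_Alg(I) <= c * C_Opt(I) + a over a family of instances.

    cost_alg, cost_opt: per-instance costs of the algorithm / the optimum.
    a = 0 corresponds to strict c-competitiveness.
    """
    return all(ca <= c * co + a for ca, co in zip(cost_alg, cost_opt))

# Hypothetical per-instance costs.
alg = [10.0, 42.0, 7.0]
opt = [4.0, 20.0, 3.5]

print(is_c_competitive(alg, opt, c=2.5))  # True: every instance within factor 2.5
print(is_c_competitive(alg, opt, c=2.0))  # False: the first instance violates the bound
```

Note that the definition quantifies over all (infinitely many) instances with a single additive constant a; the sketch can only check a finite family.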

1.1 Related Work

Besides being a direct extension of the Mobile Server problem [12], our work builds on and is related to results surrounding the k-Server and Page Migration problems. These problems have been examined in many variants, and especially for the k-Server problem there are many algorithms for special metrics. In this overview we focus only on the results most relevant to our problem, which are mostly algorithms with an (asymptotically) optimal competitive ratio.

In the classical k-Server problem as introduced by Manasse et al. [18], k identical servers are located in a metric space and requests are answered by moving at least one of the servers to the point of the request. The associated costs are equal to the total distance moved. Manasse et al. showed that no online algorithm can be better than k-competitive on any metric with at least k + 1 points. They stated as the k-Server Conjecture that there is a k-competitive online algorithm for every metric space. Further, they showed the conjecture to hold for k = 2 and k = n − 1, where n is the number of points in the metric space.

Since its introduction, many algorithms have been designed for special cases of the problem. Most notable is the Double-Coverage algorithm [11], which is k-competitive on trees. For general metrics, the best known result is the Work-Function algorithm, which is shown to be (2k − 1)-competitive [16]. Although this algorithm seems generally inefficient with respect to runtime and memory, there have been studies showing that an efficient implementation of this algorithm is indeed possible [19, 20]. It was also shown that the algorithm has an optimal competitive ratio of k on line and star metrics, as well as on metrics with k + 2 points [5]. Recently, an alternative upper bound of n − 1 was shown for the algorithm [22], which improves the results for metrics with fewer than 2k points.

The study of randomized online algorithms was initiated by Fiat et al. [13], who gave a \(\log (k)\)-competitive algorithm for the complete graph. It is conjectured that this factor can be obtained for all metrics; however, the question is still open. For general metrics, the first algorithm with a polylogarithmic competitive ratio was an \(\mathcal {O}(\log ^{3} n\cdot \log ^{2} k)\)-competitive algorithm by Bansal et al. [3]. This was recently improved by Bubeck et al. [10], who gave an \(\mathcal {O}(\log ^{2} k)\)-competitive algorithm for HSTs, which can be turned into an \(\mathcal {O}(\log ^{9}(k)\cdot \log \log (k))\)-competitive one for general metrics by a dynamic embedding of general metrics into HSTs [17].

Regarding the Page Migration problem [8] (also known as the File Migration problem), most results focus on online algorithms which handle only a single page. Contrary to the k-Server problem, the design of such algorithms is not trivial for the Page Migration problem. To the best of our knowledge, the current best results are a 4-competitive deterministic algorithm by Bienkowski et al. [7] and a collection of randomized algorithms with a competitive ratio of at most 3 by Westbrook [21]. The most relevant results for our problem are two constructions by Bartal et al. [4], who give both a deterministic and a randomized algorithm which transform a given algorithm for the k-Server problem into a deterministic / randomized algorithm for the k-Page Migration problem. If the given k-Server algorithm is c-competitive, the deterministic algorithm is \(\mathcal {O}(c^{2})\)-competitive and the randomized algorithm is \(\mathcal {O}(c)\)-competitive. We use the resulting algorithms as a black box in our constructions.

For our problem, we consider locality of requests, a variant in which consecutive requests appear in close proximity to each other. Similar variants have also been considered for traditional problems. The idea behind these variants is that in practice, requests to memory by a program often abide by certain locality properties. Making these properties part of the model has the potential to lower the achievable competitive ratio and to bring the results for theory and practice closer together. Popular models benefiting from such locality are the List Update problem [1, 2] and the Paging problem [9, 14], which can be regarded as a special case of the k-Server problem.

1.2 Our Results & Outline of the Paper

In [12] it was already shown that no online algorithm for our problem can be competitive, even on the real line and with just k = 1 server. As a consequence, we employ the following methods to derive bounds independent of the number of requests: On the one hand, we apply resource augmentation as in [12]: i.e., we allow the online algorithm to use a maximum movement distance of (1 + δ)ms. Unlike in the case k = 1, this alone is not enough to obtain algorithms with a competitive ratio independent of time. Therefore, on the other hand, we restrict the adversary to instances with locality of requests: i.e., we introduce a parameter mc by which we can define families of instances classified by the maximum distance between two consecutive requests. We show that, for k ≥ 2, both methods are needed to yield competitive bounds independent of the length of the instance. For k = 1, it was shown in [12] that locality of requests can improve the competitiveness, but is not necessary to achieve a constant upper bound.

The parameters mc and ms have a crucial impact on the resulting competitiveness and thus separate simple from hard instances. We are able to show that these parameters seem to naturally describe the problem, since we can prove a lower bound of \({\varOmega }(\frac {m_{c}}{m_{s}})\). For fast moving resources (mc < (1 + δ)ms), our algorithm has an almost optimal competitive ratio when given an optimal k-Page Migration algorithm. For the case of slow moving resources (mc ≥ (1 + δ)ms), we can achieve bounds independent of the length of the input stream. In detail, we obtain a bound of \(\mathcal {O}(\frac {1}{\delta ^{4}}\cdot k^{2}\cdot \frac {m_{c}}{ m_{s}} +\frac {1}{\delta ^{3}}\cdot k^{2} \cdot \frac {m_{c}}{ m_{s}}\cdot c(\mathcal {K}))\), where \(c(\mathcal {K})\) is the competitiveness of a given k-Page Migration algorithm. For the case D = 1, which we call the unweighted problem, the k-Page Migration algorithm can be replaced by a k-Server algorithm. Our results for the Euclidean space of arbitrary dimension are listed in Table 1. Note that the parameter ε is indirectly given as the relative difference between mc and ms. If mc < ms, then in the first row we have ε > δ. Alternatively, if δ = 0, this case still yields an almost optimal upper bound up to a factor of 1/ε.

Finally, we construct an algorithm for the line based on the Double Coverage (DC) algorithm for the k-Server problem to demonstrate how direct implementations of algorithms, as opposed to the general simulation technique, can help to reduce the resulting competitive ratio.

Table 1 An overview of the results, using the best known deterministic algorithms for k-Server / k-Page Migration

The paper is structured as follows: A formal definition of our model can be found in Section 2. All relevant lower bounds are established in Section 3, showing that resource augmentation and locality of requests are necessary to obtain competitive algorithms. In terms of upper bounds, we first give an algorithm for the unweighted problem in Section 4. The analysis for instances with mc < (1 + δ)ms consists of a simple potential function argument found in Section 4.1. The analysis of the other case is much more challenging and is conducted in Section 4.2. The weighted case (D > 1) is discussed in Section 5. While the basic approach stays the same, we need to modify the movement of the online algorithm due to the higher movement costs. We show how the algorithm can be adapted and present the resulting competitive ratio following a similar structure as in the unweighted case. Finally, the adaption of the DC algorithm for the line is presented and analyzed in Section 6.

2 Model & Notation

In this section we formally describe the model and some common notation used throughout the paper.

Time is considered discrete and divided into time steps 1,…,n. An input to the k-Mobile Server problem is given by a sequence of requests r1,…,rn, where rt occurs in time step t and is represented by a point in the Euclidean space of arbitrary dimension. We are given k servers a1,…,ak controlled by our online algorithm. At each point in time, each server occupies exactly one point in the Euclidean space. We denote by \(a_{i}^{(t)}\) the position of server ai at the end of time step t, and by d(a,b) the Euclidean distance between two points a and b. For the distance between two servers \(a_{i}^{(t)}\) and \(a_{j}^{(t)}\) in the same time step t, we also use the notation dt(ai,aj). We may also leave out the time t entirely if it is clear from the context.

In each time step t, the current request rt is revealed to the online algorithm. The algorithm may then move each server, such that \(d(a_{i}^{(t-1)},a_{i}^{(t)})\leq m_{s}\) for all servers ai. The movement incurs cost of \(D\cdot {\sum }_{i=1}^{k}d(a_{i}^{(t-1)},a_{i}^{(t)})\) for a constant D ≥ 1. The request rt is then served by a closest server \(a_{i}^{(t)}\), which incurs cost of \(d(a_{i}^{(t)},\ r_{t})\). Note that the variables indexed with the time t represent the configuration at the end of the time step t.

In our model, we consider locality of requests dictated by a parameter mc limiting the distance between consecutive requests, i.e., d(rt, rt+1) ≤ mc. We will often refer to the distance which objects move within one time step as their speed. We also consider a resource augmentation setting, where the maximum distance an online algorithm may move a server is in fact (1 + δ)ms for some δ ∈ (0,1). The cost of our online algorithm is denoted by CAlg. We compare the costs of an online algorithm to an offline optimum, whose servers are denoted by o1,…,ok and whose cost is COpt.
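The per-step cost accounting of the model can be summarized in a short sketch. Points are represented as coordinate tuples; the function name and the small floating-point tolerance are our own choices for illustration.

```python
import math

def step_cost(servers, new_positions, request, D=1.0, ms=1.0, delta=0.0):
    """Cost of one time step of the k-Mobile Server problem:
    D times the total movement distance plus the distance from the
    request to the nearest server after moving.  A ValueError signals
    a move beyond the allowed (1 + delta) * ms."""
    move = 0.0
    for old, new in zip(servers, new_positions):
        d = math.dist(old, new)
        if d > (1 + delta) * ms + 1e-9:   # small tolerance for float error
            raise ValueError("server exceeds maximum movement distance")
        move += d
    serve = min(math.dist(p, request) for p in new_positions)
    return D * move + serve

# Two servers on the real line (1-D points), request at 3.0:
# total movement 1 + 1 = 2, serving distance 1, so the cost is 3.
print(step_cost([(0.0,), (5.0,)], [(1.0,), (4.0,)], (3.0,)))
```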

3 Lower Bounds

In this section, we will prove lower bounds for the competitive ratio of our problem. They show the importance both of the resource augmentation and the locality of requests introduced above. All our lower bounds already hold on the line (and therefore in arbitrary dimensions, too). Since our model is an extension of the k-Page Migration problem, Ω(k) is a lower bound for deterministic algorithms inherited from that problem (which itself inherits the bound from the k-Server problem, see [4, 18]). Even when mc is restricted, the lower bound instance can simply be scaled down such that the distance limits are not relevant for the instance. We derive new bounds which hold even for randomized algorithms against oblivious adversaries (and therefore for deterministic algorithms as well).

We start by discussing the model without any restriction on the distance between the requests in two consecutive time steps, i.e., the parameter mc is unbounded. We also consider the case that there is no resource augmentation: i.e., the maximum movement distance of the online algorithm and of the offline solution are the same. The following lower bound, originally formulated for k = 1, carries over from [12]:

Theorem 1

Every randomized online algorithm for the Mobile Server problem (with k = 1) has a competitive ratio of \({\varOmega }(\frac {\sqrt {n}}{D})\) against an oblivious adversary, where n is the length of the input sequence.

For more than one server, we obtain an additional bound which cannot be resolved with the help of resource augmentation. In the proofs of the following lower bounds, we use Yao’s principle: we construct a probability distribution over inputs and show a lower bound for general deterministic online algorithms (on the same sequence). According to the principle, the resulting competitive ratio then applies to randomized online algorithms against oblivious adversaries (who may generate sequences adapted to the concrete algorithm).

Theorem 2

For k ≥ 2, every randomized online algorithm for the k-Mobile Server problem has a competitive ratio of at least \({\varOmega }(\frac {n}{Dk^{2}})\) against an oblivious adversary, where n is the length of the input sequence.

Proof

We divide the line to the right of the starting point into 4(k − 1) segments of size xms each. The segments are divided into k − 1 groups of size 4. Each group has two inner and two outer segments, where the outer segments neighbor segments of other groups. The adversary now chooses in each group one of the two inner segments uniformly at random. We refer to the middle point in each of the chosen segments as Z1,…,Zk− 1. During the first phase, 4kx requests appear at the starting point, and the adversary moves one server to Z1,…,Zk− 1 each, the last server remains at the starting point. The moving costs for the adversary are \(\mathcal {O}(Dk^{2}x\cdot m_{s})\) (Fig. 1).

Fig. 1

The line as used in the proof of Theorem 2. The circles indicate a possible configuration of the servers of the online algorithm and the optimal solution at the beginning of the second phase. The adversary has successfully chosen two segments which the online algorithm does not occupy

Fig. 2

The line as used in the proof of Theorem 3. The circles indicate a possible configuration of the servers of the online algorithm and the optimal solution at the beginning of the second phase. The adversary has successfully chosen two segments which the online algorithm does not occupy

In the second phase, on each point Z1,…,Zk− 1, \(\frac {x}{4}\) requests appear in order of distance to the starting point. If, at the first time a request appears on Zi, the online algorithm does not have a server in the corresponding segment, then the costs for serving requests for the online algorithm are at least \({\sum }_{i=1}^{\frac {x}{4}}(\frac {x}{2}m_{s}-i\cdot m_{s})={\varOmega }(x^{2} m_{s})\). Now we iterate over the groups of segments: Consider the group which contains Z1. At the time of the first request on Z1, the online algorithm covers either both, one, or no inner segment of that group. In case of only one covered segment, Z1 lies in the other inner segment with probability 1/2. Consider a server in one of the inner segments: This server cannot move into a neighboring group within x/4 time steps. Hence, we can regard the servers which cover inner segments as “used up” for the following groups, and we may apply the argument inductively. Let a, b, and c be the number of groups in which the online algorithm covers both, one, and no inner segment of that group, respectively. We have a + b + c = k − 1 and 2a + b ≤ k, and the expected number of segments for which the online algorithm incurs costs of Ω(x2ms) is at least \(c+\frac {1}{2}b\). It is easy to see that this number is in Ω(k): If \(c\leq \frac {k}{4}-1\), then \(a+b\geq \frac {3}{4}k\) and hence \(b\geq \frac {k}{4}\).

For the ratio we compare the costs and get \(\frac {{\varOmega }(k x^{2} m_{s})}{\mathcal {O}(Dk^{2}x\cdot m_{s})}={\varOmega }(\frac {x}{Dk})={\varOmega }(\frac {n}{Dk^{2}})\). □

Note that the dependence on n does not disappear in this bound, even if the online algorithm may move its servers a distance of (1 + δ)ms in each time step: By reducing the number of requests on each point to \(\frac {x}{4(1+\delta )}\), the bound gets a term of \(\frac {1}{1+\delta }\). This is not sufficient, since we want δ to be independent of n and especially also be smaller than 1.

Since we often consider input sequences for problems such as ours to be potentially infinite, we deem competitive ratios dependent on the input length undesirable. Hence, as a consequence of the bounds shown so far, we apply two modifications to our model which help us to achieve a competitive ratio independent of the length of the input sequence. We use the concept of resource augmentation just as in [12] to allow the online algorithm to utilize a maximum movement distance of (1 + δ)ms for some δ ∈ (0,1) as opposed to the distance ms used by the optimal offline solution. This measure alone does not address the bound from Theorem 2 (the ratio shrinks, but still depends on n). Hence, we introduce the locality of requests: We restrict the distance between two consecutive requests to a maximum distance of mc. Note that only restricting the distance between consecutive requests does not remove the dependence on n either, as was shown in [12]. The following theorem can be obtained in a similar way as Theorem 2:

Theorem 3

For k ≥ 2, every randomized online algorithm for the k-Mobile Server problem, where the distance between consecutive requests is bounded by mc, has a competitive ratio of at least \({\varOmega }(\frac {m_{c}}{m_{s}})\) against an oblivious adversary.

Proof

We use a similar construction as in the proof of Theorem 2, but now divide the line to the right of the starting point into 5(k − 1) segments of size xms each. The segments are divided into k − 1 groups of 5. Each group has three inner and two outer segments, where the outer segments neighbor segments of other groups. The adversary now chooses in each group one of the two inner segments, which neighbor an outer segment uniformly at random. We refer to the middle point in each of the chosen segments as Z1,…,Zk− 1. During the first phase, 5kx requests appear at the starting point, and the adversary moves one server to Z1,…,Zk− 1 each, the last server remains at the starting point. The moving costs for the adversary are \(\mathcal {O}(Dk^{2}x\cdot m_{s})\) (Fig. 2).

In the second phase, on each point Z1,…,Zk− 1, \(\frac {x}{4}\) requests appear in order of distance to the starting point, with intermediate requests placed such that consecutive requests have a distance of mc; e.g., between Z1 and Z2 there will be \(\frac {d(Z_{1},Z_{2})}{m_{c}}\) requests. The distance between two points Zi and Zi+ 1 is at most 8xms, hence the number of requests between them is at most \(8x\frac {m_{s}}{m_{c}}\). The total cost for the requests in this phase for the offline solution is therefore at most \((k-1)\cdot 8xm_{s}\cdot 8x\frac {m_{s}}{m_{c}}=\mathcal {O}(k\frac {x^{2} {m_{s}^{2}}}{m_{c}})\).

The costs of the online algorithm can be bounded similarly as in the previous theorem: As there are two potential choices for each Zi, each chosen with probability \(\frac {1}{2}\), Ω(k) of the chosen segments are initially uncovered when the first request occurs in the respective group of segments. Consider a server of the online algorithm which initially covers one of the candidate segments for Zi but is then moved to cover a different candidate segment. Since there is at least one segment in between, the travel distance is at least xms. In order to cover the chosen segment, the online server cannot start moving until the first request occurs in that segment. From this point on, there are at most \(8x\frac {m_{s}}{m_{c}}\) time steps until the first request arrives in the next segment. This means that for \(\frac {m_{s}}{m_{c}}\leq \frac {1}{8}\), the online server cannot cover requests in two candidate segments, and the requests of an uncovered segment are served over a distance of at least \(\frac {x}{2}m_{s}\). The costs for the online algorithm are therefore Ω(kx2ms).

For the ratio we compare the costs and get \(\frac {{\varOmega }(k x^{2} m_{s})}{\mathcal {O}(Dk^{2}x\cdot m_{s} + k\frac {x^{2} {m_{s}^{2}}}{m_{c}})}={\varOmega }(\frac {m_{c}}{m_{s}})\) for sufficiently large x. □
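To illustrate the asymptotics of the last ratio, one can plug concrete numbers into the two cost bounds from the proof. The constants hidden by the Ω / \(\mathcal {O}\) notation are set to 1 here, so the absolute values are meaningless; only the trend towards mc/ms matters.

```python
def ratio_lower_bound(x, k, D, ms, mc):
    """Evaluate the cost ratio from the proof of Theorem 3
    (constants hidden in the asymptotic notation set to 1)."""
    online = k * x**2 * ms                              # Omega(k x^2 m_s)
    offline = D * k**2 * x * ms + k * x**2 * ms**2 / mc  # O(D k^2 x m_s + k x^2 m_s^2 / m_c)
    return online / offline

k, D, ms, mc = 4, 2.0, 1.0, 16.0
for x in (10, 100, 10_000):
    print(ratio_lower_bound(x, k, D, ms, mc))
# As x grows, the ratio tends to mc / ms = 16.
```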

From [12], we get a lower bound of Ω(1/δ) for k = 1 in the case mc ≥ ms. This result can be extended to larger k as well, using the k-dimensional space and adapting the technique accordingly.

4 An Algorithm for the Unweighted Problem

In this section we consider the unweighted problem (D = 1). Our algorithm does the following: We essentially follow a simulated k-Server algorithm around, but always move a closest server greedily towards the request. After formally introducing our algorithm, we briefly argue why both of these ideas need to be part of it in order to achieve a competitive ratio independent of the input length.

We use the following notation in this section: Denote by a1,…,ak the servers of the online algorithm, by c1,…,ck the servers of the simulated k-Server algorithm, and by o1,…,ok the servers of the optimal solution. For an offline server oi, we denote by \({o_{i}^{a}}\) the closest server of the online algorithm to oi (this might be the same server for multiple offline servers). Furthermore, we denote by a, c and o the closest server to the request of the algorithm, the k-Server algorithm, and the optimal solution, respectively. Ties for the closest servers can be broken arbitrarily. For a fixed time step t, we add a prime to a variable to denote its state at the end of the current time step, e.g., \(a_{1}=a_{1}^{(t-1)}\) is the position of the server at the beginning of the time step and \(a_{1}^{\prime }=a_{1}^{(t)}\) is the position at the end of the current step.

Our algorithm Unweighted-Mobile Servers (UMS) works as follows: Take any k-Server algorithm \(\mathcal {K}\) with bounded competitiveness in the Euclidean space. Upon receiving the next request \(r^{\prime }\), simulate the next step of \(\mathcal {K}\). Calculate a minimum weight matching (with the distances as weights) between the servers a1,…,ak of the online algorithm and the servers \(c_{1}^{\prime },\ldots ,c_{k}^{\prime }\) of \(\mathcal {K}\). There must be a server ci for which \(c_{i}^{\prime }=r^{\prime }\). If the server matched to \(c_{i}^{\prime }\) can reach \(r^{\prime }\) in this turn, move all servers towards their counterparts in the matching with the maximum possible speed of (1 + δ)ms. Otherwise, select a server \(\tilde {a}\) which is closest to \(r^{\prime }\) and move it to \(r^{\prime }\) with speed at most \((1+\frac {\delta }{2})m_{s}\). All other servers move towards their counterparts in the matching with speed (1 + δ)ms.
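A single step of UMS can be sketched as follows. The helper names are our own, the minimum-weight matching is computed by brute force over permutations (adequate only for small k), and the new positions c_new of the simulated servers are assumed to be given, with one of them on the request (which a k-Server algorithm guarantees).

```python
import math
from itertools import permutations

def move_towards(p, q, limit):
    """Move point p towards point q by a distance of at most `limit`."""
    d = math.dist(p, q)
    if d <= limit:
        return q
    f = limit / d
    return tuple(pi + f * (qi - pi) for pi, qi in zip(p, q))

def min_matching(a, c):
    """Minimum-weight matching between point lists a and c by brute
    force; returns perm with perm[i] = index in c matched to a[i]."""
    k = len(a)
    return min(permutations(range(k)),
               key=lambda p: sum(math.dist(a[i], c[p[i]]) for i in range(k)))

def ums_step(a, c_new, request, ms=1.0, delta=0.5):
    """One step of UMS: a are the online servers, c_new the simulated
    servers after their move (one of them sits on the request)."""
    perm = min_matching(a, c_new)
    i_req = next(i for i, c in enumerate(c_new) if c == request)
    j = perm.index(i_req)                  # online server matched to the request
    full, greedy = (1 + delta) * ms, (1 + delta / 2) * ms
    if math.dist(a[j], request) <= full:   # matched server can reach the request:
        return [move_towards(a[i], c_new[perm[i]], full) for i in range(len(a))]
    # otherwise: a closest server moves greedily, the rest follow the matching
    closest = min(range(len(a)), key=lambda i: math.dist(a[i], request))
    return [move_towards(a[i], request, greedy) if i == closest
            else move_towards(a[i], c_new[perm[i]], full)
            for i in range(len(a))]

a_new = ums_step([(0.0, 0.0), (10.0, 0.0)],
                 [(4.0, 0.0), (10.0, 0.0)], (4.0, 0.0))
print(a_new[0])  # the closest server has moved greedily towards the request
```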

In the second case of the algorithm, the server which moves greedily towards the request moves a distance of at most \((1+\frac {\delta }{2})m_{s}\). This guarantees that we always move more towards the simulated algorithm than away from it, and hence, in a sense, always catch up to it. We briefly discuss why both the greedy move and the movement towards the simulated servers are necessary for a bounded competitiveness. For the classical k-Server problem, a simple greedy algorithm, which always moves the closest server onto the request, has an unbounded competitive ratio. We can show that a simple algorithm which just tries to imitate any k-Server algorithm as closely as possible is not successful either. Intuitively, the simulated algorithm can move many servers towards the request within one time step and serve the following sequence with them, while the online algorithm needs multiple time steps to get the corresponding servers into position due to the speed limitation. At the same time, the closest server of the online algorithm to the request might be matched to a simulated server further away from the request, and hence the algorithm would move that server away from the request.

Simple algorithm: Let \(\mathcal {K}\) be any given k-Server algorithm. The k-Mobile Server algorithm does the following: Simulate \(\mathcal {K}\). Compute a minimum weight matching (with the distances as weights) between the own servers and the servers of \(\mathcal {K}\). Move every server towards the matched server at maximum speed.

Theorem 4

For k ≥ 2, there are competitive k-Server algorithms such that the simple algorithm for the k-Mobile Server problem does not achieve a competitive ratio independent of n.

Proof

Consider the following instance: All servers and the request start at the same point on the real line. The request moves x times to the right by a distance of ms each. It then moves \(y<\frac {x}{4}\) steps to the left again and remains at that point for the remaining x − 2y time steps.

An optimal solution may be to just follow the request around with a single server, which induces cost (x + y)ms. Assume the k-Server algorithm does the following: As long as the request moves to the right, it is served by a single server; the requests after that are served by a second server (this k-Server algorithm would be at most 2-competitive on this instance). As a result, the online algorithm will move one server to the rightmost point in the sequence and then begin to move a second server towards the request. When the request has reached its final position, the second server of the online algorithm has moved a distance of yms to the right, and hence it takes x − 3y more time steps for it to come closer than a distance of yms to the request. The server of the online algorithm which followed the request initially to the rightmost point is now at a distance of yms to the request. It follows that the costs of the online algorithm are at least xms + (x − 3y)yms. By setting \(y={\varTheta }(\sqrt {x})\), the competitive ratio becomes as large as \({\varOmega }(\sqrt {n})\). □
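The two cost expressions at the end of the proof can be evaluated for concrete x and y = Θ(√x) to see the unbounded growth of the ratio (constants as in the proof, ms = 1; only the trend matters).

```python
import math

def costs(x, y, ms=1.0):
    """Cost bounds from the proof of Theorem 4."""
    opt = (x + y) * ms                    # follow the request with one server
    online = x * ms + (x - 3 * y) * y * ms
    return online, opt

for x in (100, 10_000, 1_000_000):
    y = int(math.sqrt(x))                 # y = Theta(sqrt(x))
    online, opt = costs(x, y)
    print(online / opt)
# The ratio grows like sqrt(x), i.e., Omega(sqrt(n)).
```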

The remainder of this section is devoted to the analysis of the competitive ratio of the UMS algorithm. In Section 4.1, we first consider the case that the distance between consecutive requests mc is smaller than the movement speed of the algorithm’s servers. This case is easier than the case of slower servers since we can always guarantee that the online algorithm has one server on the position of the request. In the other case (mc ≥ (1 + δ)ms), described in Section 4.2, we need to extend our analysis to incorporate situations in which our online algorithm has no server near the request although the optimal offline solution might have such a server.

4.1 Fast Resource Movement

We first deal with the case that mc ≤ (1 − ε) ⋅ ms for some ε ∈ (0,1). We show that we can achieve a result independent of the input length, even without resource augmentation. At the end of this section, we briefly discuss how to extend the result to incorporate resource augmentation: i.e., if the online algorithm has a maximum movement distance of (1 + δ)ms, we handle all cases with mc ≤ (1 + δ − ε) ⋅ ms.

Theorem 5

If mc ≤ (1 − ε) ⋅ ms for some ε ∈ (0,1), the algorithm UMS is \({2}/{\varepsilon }\cdot c(\mathcal {K})\)-competitive, where \(c(\mathcal {K})\) is the competitive ratio of the simulated k-server algorithm \(\mathcal {K}\).

Proof

We assume the servers adapt their ordering a1,…,ak according to the minimum matching in each time step. Based on the matching, we define the potential \(\psi :=\frac {2}{\varepsilon }\cdot {\sum }_{i=1}^{k}d(a_{i},c_{i})\). Note that the algorithm reaches the point of the request in each time step, and hence only pays for the movement of its servers, i.e., \(C_{Alg}={\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\). We assume that c1 is on the request after the current time step, i.e., \(c_{1}^{\prime }=r^{\prime }\).

First, consider the case that a1 can reach \(r^{\prime }\) in this time step. Since each server moves directly towards their counterpart in the matching, we have

$$\begin{array}{rcl} {\varDelta}\psi & = & \frac{2}{\varepsilon}\cdot{\sum}_{i=1}^{k}d(a_{i}^{\prime},c_{i}^{\prime}) - \frac{2}{\varepsilon}\cdot{\sum}_{i=1}^{k}d(a_{i},c_{i}) \\ & \leq & \frac{2}{\varepsilon}\cdot{\sum}_{i=1}^{k} d(c_{i},c_{i}^{\prime}) - \frac{2}{\varepsilon}\cdot{\sum}_{i=1}^{k} d(a_{i},a_{i}^{\prime}) \\ & = & \frac{2}{\varepsilon}\cdot C_{\mathcal{K}} - \frac{2}{\varepsilon}\cdot C_{Alg}. \end{array}$$

Now assume that a1 cannot reach \(r^{\prime }\) in this time step. The server moves at full speed and hence \(d(a_{1}^{\prime },c_{1}^{\prime })-d(a_{1},c_{1}^{\prime })=-m_{s}\). Now, let a2 be the server which is within distance mc of \(r^{\prime }\) and performs the greedy move onto \(r^{\prime }\), possibly away from \(c_{2}^{\prime }\). It holds that \(d(a_{2}^{\prime },c_{2}^{\prime })-d(a_{2},c_{2}^{\prime })\leq m_{c}\). In total, we get

$$\begin{array}{rcl} {\varDelta}\psi & \leq & \frac{2}{\varepsilon}({\sum}_{i=1}^{k}d(a_{i}^{\prime},c_{i}^{\prime}) - {\sum}_{i=1}^{k}d(a_{i},c_{i}^{\prime})) + \frac{2}{\varepsilon}{\sum}_{i=1}^{k}d(c_{i},c_{i}^{\prime}) \\ & \leq & \frac{2}{\varepsilon}(d(a_{1}^{\prime},c_{1}^{\prime})-d(a_{1},c_{1}^{\prime}) + d(a_{2}^{\prime},c_{2}^{\prime})-d(a_{2},c_{2}^{\prime})) \\ & & - \frac{2}{\varepsilon}{\sum}_{i=3}^{k}d(a_{i},a_{i}^{\prime}) + \frac{2}{\varepsilon}{\sum}_{i=1}^{k}d(c_{i},c_{i}^{\prime}) \\ & \leq & -2m_{s} - \frac{2}{\varepsilon}{\sum}_{i=3}^{k}d(a_{i},a_{i}^{\prime}) + \frac{2}{\varepsilon}\cdot C_{\mathcal{K}} \\ & \leq & - {\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) + \frac{2}{\varepsilon}\cdot C_{\mathcal{K}}. \end{array}$$ □
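The inequality for the first case (all servers move straight towards their matched counterparts) can be sanity-checked numerically on random configurations. This is only a plausibility check of the triangle-inequality argument, not a proof; all names and parameter values below are arbitrary choices of ours.

```python
import math
import random

def check_potential_step(k=5, eps=0.5, speed=1.0, trials=200):
    """Numerically verify Delta(psi) <= (2/eps) * (C_K - C_Alg) when every
    online server a_i moves straight towards its matched simulated server
    c_i', as in the first case of the proof of Theorem 5."""
    rnd = random.Random(0)
    for _ in range(trials):
        a = [(rnd.uniform(0, 10), rnd.uniform(0, 10)) for _ in range(k)]
        c = [(rnd.uniform(0, 10), rnd.uniform(0, 10)) for _ in range(k)]
        cn = [(cx + rnd.uniform(-1, 1), cy + rnd.uniform(-1, 1)) for cx, cy in c]
        a_new, c_alg = [], 0.0
        for p, q in zip(a, cn):
            d = math.dist(p, q)
            m = min(speed, d)              # move at most `speed` towards q
            f = 0.0 if d == 0 else m / d
            a_new.append((p[0] + f * (q[0] - p[0]), p[1] + f * (q[1] - p[1])))
            c_alg += m                     # movement cost of the online servers
        c_k = sum(math.dist(p, q) for p, q in zip(c, cn))   # cost of K's move
        psi_old = (2 / eps) * sum(math.dist(p, q) for p, q in zip(a, c))
        psi_new = (2 / eps) * sum(math.dist(p, q) for p, q in zip(a_new, cn))
        assert psi_new - psi_old <= (2 / eps) * (c_k - c_alg) + 1e-9
    return True

print(check_potential_step())  # True
```

The check fixes the pairing by index; recomputing a minimum matching afterwards could only decrease the new potential further, so the inequality is conservative.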

We can extend this bound to the resource augmentation scenario, in which the online algorithm may move its servers a maximum distance of (1 + δ) ⋅ ms per time step. Relaxing the condition accordingly to mc ≤ (1 + δε) ⋅ ms, we get the following result:

Corollary 1

If mc ≤ (1 + δε) ⋅ ms for some ε ∈ (0,1), the algorithm UMS is \(\frac {2\cdot (1+\delta )}{\varepsilon }\cdot c(\mathcal {K})\)-competitive, where \(c(\mathcal {K})\) is the competitive ratio of the simulated k-server algorithm \(\mathcal {K}\).

The proof works the same as above by replacing occurrences of ms by (1 + δ)ms and changing the potential to \(\frac {2\cdot (1+\delta )}{\varepsilon }{\sum }_{i=1}^{k}d(a_{i},c_{i})\).

At first glance, the result seems to become weaker with increasing δ if ε stays the same. The reason is that for a fixed ε, the relative difference ((1 + δ)ms − mc)/ms between mc and (1 + δ)ms actually decreases: i.e., relatively speaking, mc gets closer to (1 + δ)ms. It can be seen that if instead we fix the value of mc and increase δ, the value of ε increases by the same amount and hence the competitive ratio tends towards \(2\cdot c(\mathcal {K})\).

4.2 Slow Resource Movement

This section considers the case mc ≥ (1 + δ)ms and is structured as follows: To support our potential argument, we first introduce a transformation of the simulated k-Server algorithm which ensures that the simulated servers always stay close to the request. We then introduce an abstraction of the offline solution, reducing it to the positioning of a single server \(\hat {o}\) which acts as a reference point for a new potential function. The server \(\hat {o}\) approximates the optimal positioning of the servers while at the same time obeying certain movement restrictions necessary for our analysis. Finally, we complete the analysis by combining the newly derived potential function with the methods from the previous section.

4.2.1 The k-Server Projection

Our goal is to transform a k-Page Migration algorithm \(\mathcal {K}\) into a k-Page Migration algorithm \(\hat {\mathcal {K}}\) which serves the requests of a k-Mobile Server instance such that all servers keep relatively close to the current request r. We formulate the projection for general D ≥ 1 as we will also use it in the next section. Note that using a k-Server algorithm for \(\mathcal {K}\) also yields a k-Server algorithm for \(\hat {\mathcal {K}}\), i.e., there will always be a server at the point of the request. For the case mc ≥ (1 + δ)ms, we want our algorithm to use this projection as a simulated algorithm as opposed to a regular k-Server algorithm, hence we must ensure that this projection is computable online with the information available to our online algorithm. The servers of \(\mathcal {K}\) are denoted as c1,…,ck and the servers of \(\hat {\mathcal {K}}\) as \(\hat {c}_{1},\ldots , \hat {c}_{k}\).

We define two circles around the request r: The inner circle inner(r) has a radius of 16kD ⋅ mc and the outer circle outer(r) has a radius of (32kD + 1) ⋅ mc. We will maintain \(\hat {c}_{i}\in outer(r)\) for all i for the entirety of the execution. The time is divided into phases, where the phase starting at time t with the request at point rt ends on the smallest \(t^{\prime }>t\) such that \(d(r_{t},r_{t^{\prime }})\geq 16kD\cdot m_{c}\). During a phase the simulated servers move to preserve the following: If \(c_{i}\in inner(r)\), then \(\hat {c}_{i}=c_{i}\). At the end of the phase the servers move such that additionally the following holds: If \(c_{i}\notin inner(r)\), then \(\hat {c}_{i}\) is on the boundary of inner(r) such that \(d(c_{i},\hat {c}_{i})\) is minimized. It is obvious that the definition of the algorithm guarantees \(\hat {c}_{i}\in outer(r)\) for all i at each point in time. Intuitively, the upper bound of \(\mathcal {O}(k)\) for the factor between the new and previous algorithms stems from instances where the optimal algorithm only has to send one server along with the request, while the transformed algorithm always keeps all k servers nearby.
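The end-of-phase rule can be sketched as follows. This is a minimal illustration assuming the Euclidean plane; the function name and tuple representation are ours, not part of the paper's formalism. For \(c_{i}\notin inner(r)\), the boundary point of inner(r) closest to \(c_{i}\) is the radial projection of \(c_{i}\) towards r.

```python
from math import dist

def project_simulated_server(c, r, inner_radius):
    """End-of-phase rule of the projection: if the simulated server c lies
    in inner(r), keep it; otherwise move it to the boundary point of
    inner(r) closest to c, i.e. the radial projection of c towards r."""
    d = dist(c, r)
    if d <= inner_radius:
        return c
    scale = inner_radius / d
    return (r[0] + scale * (c[0] - r[0]), r[1] + scale * (c[1] - r[1]))
```

A server at distance 10 from the request with inner radius 4 (hypothetical numbers, e.g. 16kD ⋅ mc = 4) is pulled to distance exactly 4 along the connecting line; a server already inside the circle is left untouched.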

Proposition 1

For the servers \(\hat {c}_{1},\ldots , \hat {c}_{k}\) of \(\hat {\mathcal {K}}\) it holds \(d(\hat {c}_{i},r)\leq (32 k D + 1)\cdot m_{c}\) during the whole execution. The costs of \(\hat {\mathcal {K}}\) are at most \(\mathcal {O}(k)\) times the costs of \(\mathcal {K}\).

Proof

We define the following potential: \(\phi =D\cdot {\sum }_{i=1}^{k}d(c_{i},\hat {c}_{i})\). During a phase, the potential decreases every time \(\hat {c}_{i}\) moves towards ci by D times the distance \(\hat {c}_{i}\) moves. Each time ci moves, ϕ increases by at most D times the distance that ci moves. Let \(c_{i}\) be a closest server of \(\mathcal {K}\) to r. If \(c_{i}\in inner(r)\), then \(\hat {c}_{i}=c_{i}\) and hence the serving costs of the two algorithms are the same. Otherwise, \(c_{i}\notin inner(r)\) and \(\hat {c_{i}}\in outer(r)\), and hence the serving costs differ by at most a factor of 3.

We show that during each phase, \(\mathcal {K}\) has costs of Ω(1) ⋅ kD² ⋅ mc. Consider the movement of the request from its starting point r to the final point \(r^{\prime }\). We know that \(d(r,r^{\prime })\geq 16 k D\cdot m_{c}\). Imagine drawing a straight line between r and \(r^{\prime }\) and separating it into segments of length mc by hyperplanes orthogonal to the line. There are now at least 16kD such segments. Every server of \(\mathcal {K}\) has two segments adjacent to its own. Denote the segments which do not contain a server of \(\mathcal {K}\) and are not adjacent to a segment containing such a server as unoccupied segments. Since there are 16kD segments in total and k servers of \(\mathcal {K}\), there are at least (16D − 3)k ≥ 13kD unoccupied segments at the beginning of a phase. Since the maximum movement distance of r is mc, there is at least one request per segment.

The k servers of \(\mathcal {K}\) divide the unoccupied segments into at most k + 1 groups of consecutive segments. We now analyze the cost of a group of size x. We only consider one half of the group and argue that the other half incurs at least the same cost. Requests in the given x/2 segments can be served in the following way: An adjacent server moves into the first y segments and then serves the remaining x/2 − y segments over the distance. The costs incurred are at least \(y\cdot Dm_{c} + {\sum }_{i=1}^{x/2-y} i\cdot m_{c} \geq y\cdot Dm_{c} + \frac {(x/2-y)^{2}}{2}\cdot m_{c}\). This term is minimized by setting \(y=\frac {x}{2}-D\), which implies costs of at least \(\frac {x}{2}Dm_{c} - \frac {D^{2}}{2}m_{c}\). No matter how the 13kD unoccupied segments are divided into k + 1 groups, this gives total costs of at least Ω(1) ⋅ kD² ⋅ mc.
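The trade-off in this estimate can be replayed numerically. The snippet below is only an illustration with hypothetical values (x = 20, D = 2, mc = 1); it checks that the discrete optimum indeed sits at y = x/2 − D and respects the closed-form lower bound.

```python
def group_half_cost(x, y, D, mc):
    """Cost of serving one half (x/2 segments) of a group of unoccupied
    segments: an adjacent server moves into the first y segments
    (movement cost y*D*mc) and serves the remaining x/2 - y segments
    over the distance (segment i is served at cost about i*mc)."""
    return y * D * mc + sum(i * mc for i in range(1, x // 2 - y + 1))

# Hypothetical numbers: x = 20, D = 2, mc = 1. The discrete optimum is
# attained at y = x/2 - D = 8, and the closed-form lower bound
# (x/2)*D*mc - (D^2/2)*mc = 18 indeed lies below it.
costs = [group_half_cost(20, y, 2, 1) for y in range(11)]
assert min(costs) == costs[8] == 19
assert min(costs) >= 10 * 2 * 1 - (2 ** 2 // 2) * 1
```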

We can now bound the costs at the end of the phase: The argument for \(c_{i}\in inner(r)\) is the same as before. Otherwise, ϕ increases by at most \(D\cdot d(\hat {c}_{i},\hat {c}_{i}^{\prime })\leq 32 k D^{2}\cdot m_{c}\). This yields \(C_{\hat {\mathcal {K}}}\leq \mathcal {O}(k)\cdot C_{\mathcal {K}}\). □

4.2.2 The Offline Helper

We define a new offline server \(\hat {o}\), which approximates the position of the optimal server \(o^{*}\) while managing the role changes of \(o^{*}\) in a smooth manner. By \(\hat {a}\), we denote the server of the online algorithm with minimal distance to \(\hat {o}\). For a formal description of the behavior, we need the following definitions:

  • The inner circle innert(oi) contains all points p with \(d_{t}(o_{i},p)\leq \frac {\delta ^{2}}{48960k}\cdot d_{t}(o_{i},{o_{i}^{a}})\).

  • The outer circle outert(oi) contains all points p with \(d_{t}(o_{i},p)\leq \frac {\delta }{48}\cdot d_{t}(o_{i},{o_{i}^{a}})\).

Recall that \({o_{i}^{a}}\) is the server of the online algorithm closest to oi. Abusing notation, we also refer to innert(oi) and outert(oi) as distances equal to the radius defined above.
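As a sanity check on these radii, note that the ratio outer/inner equals 1020k/δ for every distance; this is exactly the catch-up factor that reappears in the speed bound \((2+\frac {1020k}{\delta })\cdot m_{c}\) of Proposition 2 below. The sketch uses exact rational arithmetic; the helper names are ours.

```python
from fractions import Fraction

def inner_radius(d, delta, k):
    """Radius of inner_t(o_i), where d = d_t(o_i, o_i^a)."""
    return delta ** 2 / (48960 * k) * d

def outer_radius(d, delta, k):
    """Radius of outer_t(o_i)."""
    return delta / 48 * d

# outer/inner = (delta/48) / (delta^2/(48960 k)) = 1020 k / delta,
# independently of the distance d.
delta, k = Fraction(1, 2), 3
assert outer_radius(1, delta, k) / inner_radius(1, delta, k) == 1020 * k / delta
```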

This section is devoted to proving the following:

Proposition 2

There exists a virtual server \(\hat {o}\) which moves at a speed of at most \((2+\frac {1020k}{\delta })\cdot m_{c}\) per time step, for which \(d(\hat {a},\hat {o}) \leq 2\cdot d(o^{*},o^{*a}) + d(a^{*},r)\) at all times, and for which the following conditions hold as long as \(d_{t}(o^{*},o^{*a}) \geq 2\cdot 51483\frac {km_{c}}{\delta ^{2}}\):

  1. If \(r\in inner(o^{*})\) at the end of the current time step, \(\hat {o}\) moves at a maximum speed of \((1+\frac {\delta }{8})m_{s}\), i.e., \(r_{t}\in inner_{t}(o^{*}) \Rightarrow d(\hat {o}^{(t-1)},\ \hat {o}^{(t)})\leq (1+\frac {\delta }{8})m_{s}\).

  2. If \(r\in inner(o^{*})\) at the end of the current time step, then \(\hat {o}\in outer(o^{*})\) at the end of the current time step, i.e., \(r_{t}\in inner_{t}(o^{*}) \Rightarrow \hat {o}^{(t)}\in outer_{t}(o^{*})\).

Before formally describing the algorithm for \(\hat {o}\), we give some intuition about the movement pattern and why it is useful for the analysis: In essence, \(\hat {o}\) follows the point of the request r, but slows down whenever the request is near a server of the optimal solution (within its inner radius). It will then not be able to match the exact position of r, but stays within the outer radius of the same server. Due to its higher speed, \(\hat {o}\) can quickly catch up to r once the request leaves the area near the optimal server. This helps us in the analysis, since the online algorithm closes the distance towards \(\hat {o}\) whenever the request is near an optimal server. We choose the potential to be a function of the distance to \(\hat {o}\) and hence can pay for the potentially high costs the algorithm has in such a step. When the request is not near an optimal server, however, we can afford to pay into the potential since the optimal solution has high costs in such a step.

In the following, we show that it is possible to define a movement pattern for \(\hat {o}\) in a way, such that invariants 1 and 2 of Proposition 2 hold as long as \(d(o^{*},o^{*a}) \geq 51483\frac {km_{c}}{\delta ^{2}}\). Otherwise, \(\hat {o}\) will simply follow r and restore the properties once \(d(o^{*},o^{*a}) \geq 2\cdot 51483\frac {km_{c}}{\delta ^{2}}\). In order to describe the movement in detail, we introduce the concept of transitions.

In the input sequence and a given optimal solution, we define a transition between two steps t1 < t2 if there are oi, oj such that \(o_{i} = o^{*}\) and \(r\in inner_{t_{1}}(o_{i})\) at time step t1, and \(o_{j} = o^{*}\) and \(r\in inner_{t_{2}}(o_{j})\) at time step t2. In between these two time steps, \(r\notin inner(o^{*})\). For such a transition, we define the transition time as t∗ := t2 − t1. If \(t^{*}>inner_{t_{1}}(o^{*})/m_{c} + 2\), we call this a long transition. Otherwise, we call it a short transition. We say that oi passes the request after t1 and oj receives the request in t2. The concept is illustrated in Fig. 3.
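The long/short classification depends only on the transition time and the inner radius at t1; a minimal sketch (our own helper, with the radius passed as a plain number):

```python
def classify_transition(t1, t2, inner_radius_t1, mc):
    """A transition between steps t1 < t2 is long iff its transition time
    t* = t2 - t1 exceeds inner_{t1}(o*)/mc + 2; otherwise it is short."""
    return "long" if t2 - t1 > inner_radius_t1 / mc + 2 else "short"
```

With an inner radius of 4 and mc = 1 (hypothetical values), a transition taking 10 steps is long, while one taking 6 steps is short.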

Fig. 3

Example for a transition from oi to oj. By definition, r crosses the border of inner(oi) after time step t1 (oi passes r after t1). The transition stops at step t2 when r has entered \(inner_{t_{2}}(o_{j})\) (oj receives r in t2). Note that the position of \(o_{j}\) and the radius of its inner circle may change from t1 to t2. The distance moved by r is at most \((t_{2}-t_{1})\cdot m_{c}\). The dotted line represents the estimation of \(d_{t_{1}}(o_{i},o_{j})\) used in Lemma 2

The behavior of \(\hat {o}\) can be computed as follows:

  1. During a long transition between time steps t1 and t2, move with speed \(d(\hat {o}^{(t-1)},\ \hat {o}^{(t)})\leq (2+\frac {1020k}{\delta })\cdot m_{c}\) towards rt during steps t1 + 1 to t2 − 2. In the last two steps t2 − 1 and t2, move such that \(\hat {o}^{(t_{2}-1)}=r_{t_{2}}\) at time t2 − 1 and do not move in t2 at all. Informally, \(\hat {o}\) moves one step ahead of r such that \(\hat {o}=r\) after the transition, as soon as \(r\in inner(o^{*})\).

  2. For a sequence of short transitions starting with \(o^{*} = o_{i}\) in t1, determine which of the following events terminating the current sequence occurs first:

    (a) A long transition from a server \(o_{\ell }\) to oj between time t2 and t3 occurs. In this case, \(\hat {o}\) simply moves towards \(o_{\ell }^{(t)}\) in each step t with speed at most \((1+\frac {\delta }{8})m_{s}\) until t2.

    (b) A short transition from a server \(o_{\ell }\) to oj between time t2 and t3 occurs, where at one point prior in the sequence \(d(o_{j},o^{*}) > outer(o^{*})/3\). If \(\hat {o}\) can move straight towards the final position of oj in t3 with speed \((1+\frac {\delta }{8})m_{s}\) without ever leaving \(outer(o_{\ell })\), then do that. Otherwise move towards a point p with \(d(p,o_{\ell })=\frac {2\delta }{145}\cdot d(o_{\ell },o_{\ell }^{a})\); among those candidates, p minimizes \(d(p,o_{j}^{(t_{3})})\). When this point is reached, keep the invariant \(d(\hat {o},o_{\ell })=\frac {2\delta }{145}\cdot d(o_{\ell },o_{\ell }^{a})\) whenever the final position of oj is not within \(\frac {2\delta }{145}\cdot d(o_{\ell },o_{\ell }^{a})\) around \(o_{\ell }\). The position of \(\hat {o}\) on the circle around \(o_{\ell }\) should be the one closest to oj’s final position. When \(o_{j}^{(t_{3})}\) is inside the circle, the position of \(\hat {o}\) should be equal to \(o_{j}^{(t_{3})}\).

  3. If \(d_{t_{1}}(o^{*},o^{*a}) < 51483\frac {km_{c}}{\delta ^{2}}\), treat the time until \(d_{t_{2}}(o^{*},o^{*a}) \geq 2\cdot 51483\frac {km_{c}}{\delta ^{2}}\) as a long transition between t1 and t2: i.e., move towards r with speed \((2+\frac {1020k}{\delta })\cdot m_{c}\) and skip one step ahead of r during the last 2 time steps. (Steps 1 and 2 are not executed during this time.)

Note that the server \(\hat {o}\) is a purely analytical tool and hence the behavior as described above does not have to be computable online.

Our goal is to show that all invariants described in Proposition 2 hold inductively over all transitions. We divide the entire timeline into sequences, where each sequence starts with both r and \(\hat {o}\) being in \(inner(o^{*})\). A sequence ends when one of the events stated in step 2 of the algorithm completes. The following lemma states that the initial condition is restored after every long transition.

Lemma 1

If \(\hat {o}\in outer_{t_{1}}(o^{*})\) at the beginning of a long transition between t1 and t2, then \(\hat {o}\in inner_{t_{2}}(o^{*})\) at the end of the transition.

Proof

During the transition time t∗ := t2 − t1, r moves a distance of at most t∗ ⋅ mc. At the beginning, \(\hat {o}\in outer_{t_{1}}(o^{*})\) and \(r\in inner_{t_{1}}(o^{*})\), hence \(d_{t_{1}}(\hat {o},r) \leq d_{t_{1}}(\hat {o},o^{*}) + d_{t_{1}}(o^{*},r)\leq inner_{t_{1}}(o^{*}) + outer_{t_{1}}(o^{*})\). During the first \(\lceil inner_{t_{1}}(o^{*})/m_{c}\rceil \) time steps, \(\hat {o}\) can reduce its distance to r by

$$\begin{array}{rcl} \frac{inner_{t_{1}}(o^{*})}{m_{c}}\cdot (1+\frac{1020k}{\delta})\cdot m_{c} & = & inner_{t_{1}}(o^{*}) + \frac{1020k}{\delta}\cdot inner_{t_{1}}(o^{*}) \\ & = & inner_{t_{1}}(o^{*}) + outer_{t_{1}}(o^{*}) \end{array}$$

and therefore reaches r (the speed of \(\hat {o}\) is an additional mc higher which accounts for the movement of r). Since \(t^{*}>inner_{t_{1}}(o^{*})/m_{c} + 2\), there are at least 2 time steps remaining where \(\hat {o}\) can move to the final position of r. □

Our next goal is to analyze a sequence of short transitions. During these transitions, r moves faster than \(\hat {o}\) and hence the distance of \(\hat {o}\) to \(o^{*}\) increases due to the role changes after the transitions. The next lemma establishes an upper bound on this increase. Since we use the lemma in another context as well, its formulation is slightly more general.

Lemma 2

Every short transition between oi in step t1 and oj in step t2 can increase or reduce the distance of some server s, which moves at speed at most (1 + δ)ms, to \(o^{*}\) by at most \(\min \limits \{6.001\cdot \frac {\delta ^{2}}{48960k}\cdot d_{t_{1}}(o^{*},o^{*a}) + 8.001m_{c} \ ,\ 6.002\cdot \frac {\delta ^{2}}{48960k}\cdot d_{t_{2}}(o^{*},o^{*a}) + 8.002m_{c}\}\).

Proof

We consider a short transition from offline server oi to oj in between time steps t1 and t2. By definition, \(t^{*}=t_{2}-t_{1}\leq {inner_{t_{1}}(o_{i})}/{m_{c}} +2\).

We show that since oi and oj must be relatively close together, their distances to the closest server of the online algorithm must be similar. We first upper bound the distance between oi and oj in step t1: The request travels a distance of at most t∗ ⋅ mc between the two. During this time, oj could have moved a distance of at most t∗ ⋅ ms, and the inner radius could have changed by at most \(t^{*}\cdot \frac {\delta }{16}m_{s}\). Since after the t∗ time steps r enters the inner circle of oj, we can use the above information to trace the distance between the two servers and the radius of oj’s inner circle back to time step t1 (see Fig. 3).

With this knowledge, we get

$$\begin{array}{rrcl} & d_{t_{1}}(o_{j},{o_{j}^{a}}) & \geq & d_{t_{1}}(o_{i},{o_{i}^{a}}) - d_{t_{1}}(o_{i},o_{j}) \\ & & \geq & d_{t_{1}}(o_{i},{o_{i}^{a}}) - t^{*}\cdot (m_{c}+m_{s}+\frac{\delta}{16}m_{s}) \\ & & & - inner_{t_{1}}(o_{i}) - inner_{t_{1}}(o_{j}) \\ & & \geq & d_{t_{1}}(o_{i},{o_{i}^{a}}) - 3\cdot inner_{t_{1}}(o_{i}) - inner_{t_{1}}(o_{j}) -4m_{c} \\ & & \geq & (1-3\cdot \frac{\delta^{2}}{48960k})\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) - \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) -4m_{c} \\ \Leftrightarrow & d_{t_{1}}(o_{j},{o_{j}^{a}}) & \geq & \frac{1-3\cdot \frac{\delta^{2}}{48960k}}{1+ \frac{\delta^{2}}{48960k}}\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) -\frac{4}{1+ \frac{\delta^{2}}{48960k}}\cdot m_{c} \\ \Rightarrow & d_{t_{1}}(o_{j},{o_{j}^{a}}) & \geq & (1-\frac{4}{48960k+1})\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) - 4m_{c}. \end{array}$$

In reverse, we can bound

$$\begin{array}{rrcl} & d_{t_{1}}(o_{i},{o_{i}^{a}}) & \geq & d_{t_{1}}(o_{j},{o_{j}^{a}}) - d_{t_{1}}(o_{i},o_{j}) \\ & & \geq & d_{t_{1}}(o_{j},{o_{j}^{a}}) - 3\cdot inner_{t_{1}}(o_{i}) - inner_{t_{1}}(o_{j}) -4m_{c} \\ & & \geq & (1-\frac{\delta^{2}}{48960k})\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) - \frac{3\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) -4m_{c} \\ \Leftrightarrow & d_{t_{1}}(o_{i},{o_{i}^{a}}) & \geq & \frac{1-\frac{\delta^{2}}{48960k}}{1+ \frac{3\delta^{2}}{48960k}}\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) -\frac{4}{1+ \frac{3\delta^{2}}{48960k}}\cdot m_{c} \\ \Rightarrow & d_{t_{1}}(o_{i},{o_{i}^{a}}) & \geq & (1-\frac{4}{48960k})\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) - 4m_{c}. \end{array}$$

Since s can move away from oj during the transition and oj itself moves at speed at most ms, we get

$$\begin{array}{rcl} d_{t_{2}}(s,o_{j}) & \leq & d_{t_{1}}(s,o_{j}) + t^{*}\cdot(2+\delta)m_{s} \\ & \leq & d_{t_{1}}(s,o_{i}) + d_{t_{1}}(o_{i},o_{j})+ t^{*}\cdot(2+\delta)m_{s} \\ & \leq & d_{t_{1}}(s,o_{i}) + t^{*}\cdot (m_{c}+m_{s}+\frac{\delta}{16}m_{s}) + inner_{t_{1}}(o_{i}) \\ & & + inner_{t_{1}}(o_{j}) + t^{*}\cdot(2+\delta)m_{s} \\ & \leq & d_{t_{1}}(s,o_{i}) + 5\cdot inner_{t_{1}}(o_{i}) + inner_{t_{1}}(o_{j}) + 8m_{c} \\ & \leq & d_{t_{1}}(s,o_{i}) + 5\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) + \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) +8m_{c}. \end{array}$$

To derive the first bound, we get

$$\begin{array}{rcl} d_{t_{2}}(s,o_{j}) & \leq & d_{t_{1}}(s,o_{i}) + 5\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) + \frac{\delta^{2}}{48960k} \cdot \frac{1}{1-\frac{4}{48960k}}\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) \\ & & + (8 +\frac{\delta^{2}}{48960k}\cdot \frac{4}{1-\frac{4}{48960k}} )\cdot m_{c} \\ & \leq & d_{t_{1}}(s,o_{i}) + 6.001\frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) + 8.001m_{c}. \end{array}$$

For the second bound, we continue with

$$\begin{array}{rcl} d_{t_{2}}(s,o_{j}) & \leq & d_{t_{1}}(s,o_{i}) + 5\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) + \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) +8m_{c} \\ & \leq & d_{t_{1}}(s,o_{i}) + (1+\frac{5}{1-\frac{4}{48960k+1}})\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) \\ & & +(8+5\cdot \frac{\delta^{2}}{48960k}\cdot\frac{4}{1-\frac{4}{48960k+1}})m_{c} \\ & \leq & d_{t_{1}}(s,o_{i}) + 6.001\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) + 8.001\cdot m_{c}. \end{array}$$

Next we bound the change in \(d(o_{j},{o_{j}^{a}})\) during the transition:

$$\begin{array}{rrcl} & d_{t_{1}}(o_{j},{o_{j}^{a}}) & \leq & d_{t_{2}}(o_{j},{o_{j}^{a}}) + t^{*}\cdot (2+\delta)m_{s} \\ & & \leq & d_{t_{2}}(o_{j},{o_{j}^{a}}) + 2\cdot inner_{t_{1}}(o_{i}) + 4m_{c} \\ & & \leq & d_{t_{2}}(o_{j},{o_{j}^{a}}) + 2\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{i},{o_{i}^{a}}) +4m_{c} \\ & & \leq & d_{t_{2}}(o_{j},{o_{j}^{a}}) + 2.001\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o_{j},{o_{j}^{a}}) + 4.001m_{c} \\ \Leftrightarrow & d_{t_{1}}(o_{j},{o_{j}^{a}}) & \leq & \frac{1}{1 - 2.001\cdot \frac{\delta^{2}}{48960k}} \cdot d_{t_{2}}(o_{j},{o_{j}^{a}}) + 4.002m_{c}. \end{array}$$

This gives us

$$\begin{array}{rcl} d_{t_{2}}(s,o_{j}) & \leq & d_{t_{1}}(s,o_{i}) + \frac{1}{1 - 2.001\cdot \frac{\delta^{2}}{48960k}}\cdot 6.001\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{2}}(o_{j},{o_{j}^{a}}) \\ & & + (8.001+ 6.001\cdot \frac{\delta^{2}}{48960k}\cdot 4.002)\cdot m_{c} \\ & \leq & d_{t_{1}}(s,o_{i}) + 6.002\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{2}}(o_{j},{o_{j}^{a}}) + 8.002m_{c}. \end{array}$$

For the bound of decreasing the distance, the same proof can be applied: Start with \(d_{t_{2}}(s,o_{j}) \geq d_{t_{1}}(s,o_{j}) - t^{*}\cdot (2+\delta )m_{s} \geq d_{t_{1}}(s,o_{i}) - d_{t_{1}}(o_{i},o_{j})- t^{*}\cdot (2+\delta )m_{s}\) and use the same estimations as before from there. □
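The absorption of constants in the last steps of this proof (6.001 into 6.002, and the additive 8.001 into 8.002) can be verified exactly with rational arithmetic; the worst case is δ = 1 and k = 1, where δ²/(48960k) is largest. A small sketch:

```python
from fractions import Fraction

# eps' = delta^2/(48960 k) is largest at delta = 1, k = 1.
eps = Fraction(1, 48960)

# Coefficient absorption: 6.001/(1 - 2.001*eps') <= 6.002.
coeff = Fraction(6001, 1000) / (1 - Fraction(2001, 1000) * eps)
assert coeff <= Fraction(6002, 1000)

# Additive absorption: 8.001 + 6.001*eps'*4.002 <= 8.002.
const = Fraction(8001, 1000) + Fraction(6001, 1000) * eps * Fraction(4002, 1000)
assert const <= Fraction(8002, 1000)
```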

We want to show that \(\hat {o}\in inner(o^{*})\) holds after a sequence of short transitions is terminated by one of the conditions described in step 2 of the algorithm. During the sequence, we must also show that \(\hat {o}\in outer(o^{*})\). The main idea for the following lemma is that \(o_{\ell }\) by definition never leaves \(outer(o^{*})/3\), and hence following it keeps \(\hat {o}\) inside \(outer(o^{*})\).

Lemma 3

Consider a sequence of short transitions which is terminated by a long transition. If \(\hat {o}\in inner(o^{*})\) at the beginning of the sequence, then \(\hat {o}\in inner(o^{*})\) after the long transition. During the sequence of short transitions, \(\hat {o}\in outer(o^{*})\).

Proof

As in step 2 of the algorithm, we assume the sequence starts at time t1 with \(o^{*} = o_{i}\), and terminates with a long transition from \(o_{\ell }\) to oj between time steps t2 and t3. \(\hat {o}\) selects the server \(o_{\ell }\) which passes r on to oj over the long transition and follows it. Since \(d(o^{*},o_{\ell })\leq outer(o^{*})/3\) for the duration of the sequence, we have \(\hat {o}\in outer(o_{\ell })\) at the beginning of the sequence and therefore \(\hat {o}\in outer(o_{\ell })\) holds for the entire duration. At the beginning, with

$$\begin{array}{rrcl} & d_{t_{1}}(o^{*},o^{*a}) & \leq & d_{t_{1}}(o^{*},o_{\ell}^{a}) \\ & & \leq & d_{t_{1}}(o^{*},o_{\ell}) + d_{t_{1}}(o_{\ell},o_{\ell}^{a}) \\ & & \leq & \frac{\delta}{144} \cdot d_{t_{1}}(o^{*},o^{*a}) + d_{t_{1}}(o_{\ell},o_{\ell}^{a}) \\ \Leftrightarrow & d_{t_{1}}(o^{*},o^{*a}) & \leq & \frac{1}{1-\frac{\delta}{144}}\cdot d_{t_{1}}(o_{\ell},o_{\ell}^{a}) \end{array}$$

we get

$$\begin{array}{rcl} d_{t_{1}}(\hat{o},o_{\ell}) & \leq & d_{t_{1}}(\hat{o},o^{*}) + d_{t_{1}}(o^{*},o_{\ell}) \\ & \leq & (\frac{\delta^{2}}{48960k} + \frac{\delta}{144})\cdot d_{t_{1}}(o^{*},o^{*a}) \\ & \leq & \frac{1}{1-\frac{\delta}{144}}\cdot(\frac{\delta^{2}}{48960k} + \frac{\delta}{144})\cdot d_{t_{1}}(o_{\ell},o_{\ell}^{a}) \\ & \leq & 0.01\cdot\delta\cdot d_{t_{1}}(o_{\ell},o_{\ell}^{a}). \end{array}$$

Furthermore, since \(\hat {o}\) at least holds its relative distance to \(o_{\ell }\), we have during any step t of the sequence

$$\begin{array}{rcl} d_{t}(\hat{o},o^{*}) & \leq & d_{t}(\hat{o},o_{\ell}) + d_{t}(o_{\ell},o^{*}) \\ & \leq & \frac{d_{t_{1}}(\hat{o},o_{\ell})}{d_{t_{1}}(o_{\ell},o_{\ell}^{a})}\cdot d_{t}(o_{\ell},o_{\ell}^{a}) + d_{t}(o_{\ell},o^{*}) \\ & \leq & 0.01\cdot\delta\cdot d_{t}(o_{\ell},o_{\ell}^{a}) + \frac{\delta}{144}\cdot d_{t}(o^{*},o^{*a}) \\ & \leq & 0.01\cdot\delta\cdot d_{t}(o_{\ell},o^{*a}) + \frac{\delta}{144}\cdot d_{t}(o^{*},o^{*a}) \\ & \leq & 0.01\cdot\delta\cdot (d_{t}(o_{\ell},o^{*}) + d_{t}(o^{*},o^{*a})) + \frac{\delta}{144}\cdot d_{t}(o^{*},o^{*a}) \\ & \leq & 0.01\cdot\delta\cdot (\frac{\delta}{144}\cdot d_{t}(o^{*},o^{*a}) + d_{t}(o^{*},o^{*a})) + \frac{\delta}{144}\cdot d_{t}(o^{*},o^{*a}) \\ & \leq & \frac{\delta}{48}\cdot d_{t}(o^{*},o^{*a}) \end{array}$$

and therefore \(\hat {o}\in outer_{t}(o^{*})\) during the whole sequence. By Lemma 1, we have \(\hat {o}\in inner(o^{*})\) after the long transition. □
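The two constant estimates used in this proof (the 0.01δ bound and the final δ/48 bound) can be re-checked exactly; both are tightest for δ = 1 and k = 1. A small sketch in exact arithmetic:

```python
from fractions import Fraction

delta, k = Fraction(1), 1  # worst case: both estimates are tightest here

# d_{t1}(o_hat, o_l) <= 0.01*delta * d_{t1}(o_l, o_l^a):
lhs1 = (delta ** 2 / (48960 * k) + delta / 144) / (1 - delta / 144)
assert lhs1 <= delta / 100

# Final chain: 0.01*delta*(1 + delta/144) + delta/144 <= delta/48,
# i.e. o_hat stays inside outer(o*).
lhs2 = delta / 100 * (1 + delta / 144) + delta / 144
assert lhs2 <= delta / 48
```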

With the help of Lemma 2, we show that during the sequence of transitions, \(\hat {o}\) does not lose too much distance to \(o^{*}\), while oj, since at one point \(d(o_{j},o^{*}) > outer(o^{*})/3\), takes enough time to get into position for a short transition such that \(\hat {o}\) can reach the final position of oj in time.

Lemma 4

Consider a sequence of short transitions which is terminated by a short transition from \(o_{\ell }\) to oj, where at one point prior in the sequence \(d(o_{j},o^{*}) > outer(o^{*})/3\). If \(\hat {o}\in inner(o^{*})\) at the beginning of the sequence and \(d(o^{*},o^{*a}) \geq 51483\frac {km_{c}}{\delta ^{2}}\) at all times, then \(\hat {o}\in inner(o^{*})\) after the transition to oj. During the sequence, \(\hat {o}\in outer(o^{*})\).

Proof

We assume the sequence starts at time t1 with \(o^{*} = o_{i}\), and terminates with a short transition from \(o_{\ell }\) to oj between time steps t2 and t3.

We first consider the case that \(\hat {o}\) would leave \(outer(o_{\ell })\) if it moved directly towards its target point. First, we need to show that \(d(\hat {o},o_{\ell })=\frac {2\delta }{145}\cdot d(o_{\ell },o_{\ell }^{a})\) is reached before this happens. At the beginning, it holds

$$\begin{array}{rcl} d_{t_{1}}(\hat{o},o_{\ell}) & \leq & d_{t_{1}}(\hat{o},o^{*}) + d_{t_{1}}(o^{*},o_{\ell}) \\ & \leq & (\frac{\delta^{2}}{48960k} + \frac{\delta}{144})\cdot d_{t_{1}}(o^{*},o^{*a}). \end{array}$$

With

$$\begin{array}{rrcl} & d(o^{*},o^{*a}) & \leq & d(o^{*},o_{\ell}) + d(o_{\ell},o_{\ell}^{a}) \\ & & \leq & \frac{\delta}{144}\cdot d(o^{*},o^{*a})+ d(o_{\ell},o_{\ell}^{a}) \\ \Leftrightarrow & (1-\frac{\delta}{144})\cdot d(o^{*},o^{*a}) & \leq & d(o_{\ell},o_{\ell}^{a}) \end{array}$$

we get \(d_{t_{1}}(\hat {o},o_{\ell }) \leq \frac {1}{1-\frac {\delta }{144}}\cdot (\frac {\delta ^{2}}{48960k} + \frac {\delta }{144})\cdot d_{t_{1}}(o_{\ell },o_{\ell }^{a}) <\frac {2\delta }{145}\cdot d_{t_{1}}(o_{\ell },o_{\ell }^{a})\).

Now assume \(d(\hat {o},o_{\ell }) \leq \frac {2\delta }{145}\cdot d(o_{\ell },o_{\ell }^{a})\). Then

$$\begin{array}{rcl} d(\hat{o},o^{*}) & \leq & d(\hat{o},o_{\ell}) + d(o_{\ell},o^{*}) \\ & \leq & \frac{2\delta}{145}\cdot d(o_{\ell},o_{\ell}^{a}) + \frac{\delta}{144}\cdot d(o^{*},o^{*a}) \\ & \leq & \frac{2\delta}{145}\cdot (d(o_{\ell},o^{*}) + d(o^{*},o^{*a})) + \frac{\delta}{144}\cdot d(o^{*},o^{*a}) \\ & \leq & \frac{2\delta}{145}\cdot (1+\frac{\delta}{144})\cdot d(o^{*},o^{*a})+ \frac{\delta}{144}\cdot d(o^{*},o^{*a}) \\ & \leq & \frac{\delta}{48}\cdot d(o^{*},o^{*a}), \end{array}$$

meaning \(\hat {o}\in outer(o^{*})\) for the duration of the sequence. By contraposition, it also follows that \(d(\hat {o},o_{\ell })=\frac {2\delta }{145}\cdot d(o_{\ell },o_{\ell }^{a})\) is reached before \(\hat {o}\notin outer(o^{*})\) can occur.

Note that \(\hat {o}\) can maintain the point at the fixed distance to \(o_{\ell }\) which is closest to the final position of oj: Imagine the radius \(\frac {2\delta }{145}\cdot d(o_{\ell },o_{\ell }^{a})\) stays fixed and only \(o_{\ell }\) moves, by at most ms. Then the point at the fixed radius closest to \(o_{j}^{(t_{3})}\) only changes by at most ms. Afterwards, the radius changes by at most \(3m_{s}\cdot \frac {2\delta }{145}<\frac {\delta }{20}m_{s}\) and hence the movement speed of \((1+\frac {\delta }{8})m_{s}\) is sufficient.

We now need to verify that at the final time step t3, \(d_{t_{3}}(o_{j},o_{\ell }) \leq \frac {2\delta }{145}\cdot d_{t_{3}}(o_{\ell },o_{\ell }^{a})\). Applying Lemma 2 with \(s = o_{\ell }\), we can bound

$$\begin{array}{rcl} d_{t_{3}}(o_{j},o_{\ell}) & \leq & 6.002\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{3}}(o^{*},o^{*a}) + 8.002m_{c} \\ & \leq & \frac{1}{1-\frac{\delta}{144}}\cdot (\frac{6.002\delta^{2}}{48960k} + \frac{8.002\delta^{2}}{43170}) \cdot d_{t_{3}}(o_{\ell},o_{\ell}^{a}) \\ & < & \frac{2\delta}{145}\cdot d_{t_{3}}(o_{\ell},o_{\ell}^{a}). \end{array}$$

Now assume that \(\hat {o}\) can move straight towards the final position of oj with speed \((1+\frac {\delta }{8})m_{s}\) without ever leaving \(outer(o_{\ell })\). In this case, we compute a path which constitutes an upper bound on the distance \(\hat {o}\) has to traverse, using the following definition:

Definition 1 (Transition Path)

Assume \(o_{m} = o^{*}\) at time step t and \(o_{n} = o^{*}\) at some later time step \(t^{\prime }\). Consider the path constructed as follows. Start at the position of om in time step t. Let ti be the last time step before \(t^{\prime }\) in which \(o_{m} = o^{*}\) and \(r\in inner(o_{m})\). The first part of the path goes from om’s position at time step t to om’s position in time step ti. Afterwards, a short transition from om to some other server ox occurs between time steps ti and tj, and our path goes from om in ti to ox in tj. Continue the procedure recursively until on in time step \(t^{\prime }\) is reached. We call the constructed path a (t,t′)-transition path. See Fig. 4 for an illustration of one of the recursion steps.

Fig. 4

The construction of a transition path. The transition path is marked by black points, while the movement of om is depicted by dashed arrows. The movement of r is marked by the gray arrows. Starting at the position of om at t, the last time step ti is identified at which \(o_{m} = o^{*}\) and \(r\in inner(o_{m})\). Note that the role of \(o^{*}\) might change multiple times between t and ti

Now consider the (t1,t3)-transition path. The distance traveled by \(\hat {o}\) is bounded by the distance of \(\hat {o}\) to \(o^{*}\) at time t1 plus the length of the transition path. The former has a length of \(d_{t_{1}}(\hat {o},o^{*})\leq \frac {\delta ^{2}}{48960k}\cdot d_{t_{1}}(o^{*},o^{*a})\).

To upper bound the length of the (t1,t3)-transition path, we divide its edges (excluding the first edge) into two types: The first type connects the same offline server in different time steps. If the total time is \(\hat {t}=t_{3}-t_{1}\), the maximum distance induced by these edges is \(\hat {t}\cdot m_{s}\).

The second type of edge connects different offline servers and represents a short transition. By construction, there are at most k such edges. With the help of Lemma 2, we may upper bound the length of such an edge by \(6.001\cdot \frac {\delta ^{2}}{48960k}\cdot d_{t^{\prime }}(o^{*},o^{*a}) + 8.001m_{c}\), where \(t^{\prime }\) is the time the transition begins (in the lemma, set s to a static server at the position of the server which passes the request at time \(t^{\prime }\)).

The distance \(d(o^{*},o^{*a})\) can change in two ways over time: through the movement of the servers or through a role change of \(o^{*}\), where it suffices to consider only those short transitions included in our constructed path. Let \(t^{\prime }_{1},\ldots , t^{\prime }_{k}\) be the points in time where the short transitions inducing the second type of edges begin. We can upper bound their total length by \( {\sum }_{i=1}^{k} (6.001\cdot \frac {\delta ^{2}}{48960k}\cdot d_{t^{\prime }_{i}}(o^{*},o^{*a}) + 8.001m_{c}) \). Assuming the highest possible value for each of the \(d_{t^{\prime }_{i}}(o^{*},o^{*a})\), the first transition starts from at most the original distance plus the total movement during the sequence, i.e., \(d_{t_{1}}(o^{*},o^{*a}) + \hat {t}\cdot (2+\delta )m_{s}\). The transitions after that build inductively on the resulting lengths. Define \(T_{0}:=d_{t_{1}}(o^{*},o^{*a}) + \hat {t}\cdot (2+\delta ) m_{s}\). The first edge length is then upper bounded by \(A_{1}:=\frac {6.001\delta ^{2}}{48960k}\cdot T_{0} + 8.001 m_{c}\), and the resulting value for \(d(o^{*},o^{*a})\) is T1 := T0 + A1. In general, \(A_{i}:=\frac {6.001\delta ^{2}}{48960k}\cdot T_{i-1} + 8.001 m_{c}\) and \(T_{i}:=T_{i-1}+A_{i}=T_{0}+{\sum }_{j=1}^{i}A_{j}\). We can bound the total increase by

$$\begin{array}{rrcl} & {\sum}_{i=1}^{k}A_{i} & = & {\sum}_{i=1}^{k}\left( \frac{6.001\delta^{2}}{48960k}\cdot (T_{0}+{\sum}_{j=1}^{i-1}A_{j}) + 8.001 m_{c}\right) \\ & & \leq & k\cdot\frac{6.001\delta^{2}}{48960k}\cdot (T_{0} +{\sum}_{j=1}^{k}A_{j})+ k\cdot 8.001m_{c} \\ \Leftrightarrow & (1-\frac{6.001\delta^{2}}{48960})\cdot {\sum}_{i=1}^{k}A_{i} & \leq & \frac{6.001\delta^{2}}{48960}\cdot T_{0} + 8.001 k m_{c} \\ \Rightarrow & {\sum}_{i=1}^{k}A_{i} & \leq & 0.0002\delta^{2}\cdot T_{0} + 8.002 k m_{c}. \end{array}$$

The total path length may hence be bounded by \(\hat {t}\cdot m_{s} + 0.0002\delta ^{2}\cdot (d_{t_{1}}(o^{*},o^{*a}) + \hat {t}\cdot (2+\delta )m_{s}) + 8.002km_{c} + \frac {\delta ^{2}}{48960k}\cdot d_{t_{1}}(o^{*},o^{*a})\).
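The recurrence and its closed-form bound can be sanity-checked numerically. The following Python sketch (the names `delta`, `k`, `m_c` and `T0` mirror the symbols above; the sample values are arbitrary) iterates \(A_{i}=\frac{6.001\delta^{2}}{48960k}T_{i-1}+8.001m_{c}\) and compares \({\sum}_{i=1}^{k}A_{i}\) against the bound \(0.0002\delta^{2}T_{0}+8.002km_{c}\):

```python
def total_increase(delta, k, m_c, T0):
    """Iterate A_i = (6.001*delta^2/(48960*k)) * T_{i-1} + 8.001*m_c,
    T_i = T_{i-1} + A_i, and return the sum of the k edge lengths A_i."""
    c = 6.001 * delta**2 / (48960 * k)
    T = T0
    total = 0.0
    for _ in range(k):
        A = c * T + 8.001 * m_c
        total += A
        T += A
    return total

# Check the claimed closed-form bound for a few sample parameters.
for delta in (0.1, 0.5, 1.0):
    for k in (1, 5, 20):
        m_c, T0 = 1.0, 1000.0
        lhs = total_increase(delta, k, m_c, T0)
        rhs = 0.0002 * delta**2 * T0 + 8.002 * k * m_c
        assert lhs <= rhs, (delta, k, lhs, rhs)
```

This only illustrates the inequality for sample parameters with δ ≤ 1; the proof above covers the general case.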

For comparison, we lower bound the time it takes oj to move into position such that a short transition can occur. Take the last time step t where \(o_{j}\notin outer_{t}(o^{*})/3\), i.e., \(d_{t}(o_{j},o^{*})>\frac {\delta }{144}\cdot d_{t}(o^{*},o^{*a})\). We may assume that t = t1; otherwise the travel time for oj only increases. For a short transition to oj between time steps t2 and t3 to occur, we need \(r\in inner_{t_{2}}(o_{\ell })\), \(r\in inner_{t_{3}}(o_{j})\) and \(t^{*}:=t_{3}-t_{2}\leq inner_{t_{2}}(o_{\ell })/m_{c} + 2\). We have \(d_{t_{2}}(o_{j},o_{\ell })\leq t^{*}\cdot (m_{c}+m_{s}+\frac {\delta }{16}m_{s}) + inner_{t_{2}}(o_{\ell }) + inner_{t_{2}}(o_{j})\) (see Fig. 3 and the proof of Lemma 2).

With \(d_{t_{2}}(o_{j},{o_{j}^{a}}) \leq d_{t_{2}}(o_{j},o_{\ell }) + d_{t_{2}}(o_{\ell },o_{\ell }^{a})\) we get

$$\begin{array}{rrcl} & d_{t_{2}}(o_{j},o_{\ell}) & \leq & inner_{t_{2}}(o_{j}) + inner_{t_{2}}(o_{\ell}) + t^{*}\cdot 2m_{c} \\ & & \leq & \frac{\delta^{2}}{48960k}\cdot d_{t_{2}}(o_{j},{o_{j}^{a}}) + 3\cdot inner_{t_{2}}(o_{\ell}) + 4m_{c} \\ & & \leq & \frac{4\cdot\delta^{2}}{48960k}\cdot d_{t_{2}}(o_{\ell},o_{\ell}^{a}) + \frac{\delta^{2}}{48960k}\cdot d_{t_{2}}(o_{j},o_{\ell}) +4m_{c} \\ \Leftrightarrow & (1-\frac{\delta^{2}}{48960k})\cdot d_{t_{2}}(o_{j},o_{\ell}) & \leq & \frac{4\cdot \delta^{2}}{48960k}\cdot d_{t_{2}}(o_{\ell},o_{\ell}^{a}) + 4m_{c} \\ \Rightarrow & d_{t_{2}}(o_{j},o_{\ell}) & \leq & \frac{4.001\cdot \delta^{2}}{48960k}\cdot d_{t_{2}}(o_{\ell},o_{\ell}^{a}) + 4.001m_{c}. \end{array}$$

Comparing the distances at t1 and t2, we conclude that \(d_{t_{1}}(o_{j},o^{*}) - d_{t_{2}}(o_{j},o^{*})\geq \frac {\delta }{144}\cdot d_{t_{1}}(o^{*},o^{*a}) - 4.001\cdot \frac {\delta ^{2}}{48960k}\cdot d_{t_{2}}(o^{*},o^{*a}) - 4.001m_{c}\).

In order to lower bound the number of time steps \(\hat {t}:=t_{2}-t_{1}\) needed for bridging that distance, we first examine the change in \(d(o^{*},o^{*a})\). Recall that \(o^{*}=o_{i}\) at t1 and \(o^{*}=o_{\ell }\) at t2. We can represent the movement of \(o^{*}\) by the (t1,t2)-transition path. The distance \(d(o^{*},o^{*a})\) can change in two ways over time: It changes due to the movement of the servers or due to a role change of \(o^{*}\), where it suffices to consider only those short transitions included in our constructed path. If the short transitions begin at time steps \(t^{\prime }_{1},\ldots , t^{\prime }_{k}\), we get an upper bound similar to before:

$$\begin{array}{rcl} d_{t_{2}}(o^{*},o^{*a}) & \leq & d_{t_{1}}(o^{*},o^{*a}) + \hat{t}\cdot (2+\delta)m_{s} \\ & & + {\sum}_{i=1}^{k} (6.001\cdot \frac{\delta^{2}}{48960k}\cdot d_{t^{\prime}_{i}}(o^{*},o^{*a}) + 8.001m_{c}) \\ & \leq & d_{t_{1}}(o^{*},o^{*a}) + \hat{t}\cdot (2+\delta)m_{s} \\ & & + 0.0002\delta^{2}\cdot (d_{t_{1}}(o^{*},o^{*a}) + \hat{t}\cdot (2+\delta)m_{s}) + 8.002km_{c} \end{array}$$

Continuing from above, we have

$$\begin{array}{rcl} d_{t_{1}}(o_{j},o^{*}) - d_{t_{2}}(o_{j},o^{*}) & \geq & \frac{\delta}{144}\cdot d_{t_{1}}(o^{*},o^{*a}) - 4.001\cdot (\frac{\delta^{2}}{48960k}d_{t_{2}}(o^{*},o^{*a}) +m_{c}) \\ & \geq & \frac{\delta}{144}\cdot d_{t_{1}}(o^{*},o^{*a}) - \frac{4.001\delta^{2}}{48960k}\cdot (1.0002\cdot (d_{t_{1}}(o^{*},o^{*a}) \\ & & + \hat{t}\cdot (2+\delta)m_{s}) + 8.002km_{c}) - 4.001m_{c}. \end{array}$$

Now we consider the ways in which \(d(o_{j},o^{*})\) shrinks: The first is the movement of oj and \(o^{*}\), reducing the distance by at most 2ms per time step; i.e., if the entire sequence lasts \(\hat {t}\) steps, the maximum reduction is \(\hat {t}\cdot 2m_{s}\). The other way is by a role change of \(o^{*}\). Note that above, we only accounted for the change of the distance \(d(o^{*},o^{*a})\) due to the role change, and not for the change of \(d(o_{j},o^{*})\). Lemma 2 gives us that the distance of oj to any server decreases by at most \(6.001\cdot \frac {\delta ^{2}}{48960k}\cdot d_{t}(o^{*},o^{*a}) + 8.001m_{c}\). This decrease is maximized in the same way as above, i.e., by \(0.0002\delta ^{2}\cdot (d_{t_{1}}(o^{*},o^{*a}) + \hat {t}\cdot 2m_{s}) + 8.002 km_{c}\).

We can now lower bound the number of time steps it takes to complete the sequence: It is bounded by the minimum time \(\hat {t}\), such that

$$\begin{array}{ccl} & & \hat{t}\cdot 2m_{s} + 0.0002\delta^{2}\cdot (d_{t_{1}}(o^{*},o^{*a}) + \hat{t}\cdot 2m_{s}) + 8.002km_{c} \\ & \geq & \frac{\delta}{144}\cdot d_{t_{1}}(o^{*},o^{*a}) - \frac{4.001\delta^{2}}{48960k}\cdot (d_{t_{1}}(o^{*},o^{*a}) + \hat{t}\cdot (2+\delta)m_{s} \\ & & + 1.0002\cdot (d_{t_{1}}(o^{*},o^{*a}) + \hat{t}\cdot (2+\delta)m_{s}) + 8.002km_{c}) - 4.001m_{c} \\ \Leftrightarrow & & \hat{t}\cdot 2.0004m_{s} + \frac{4.001\delta^{2}}{48960k}\cdot 2.0002\cdot \hat{t}\cdot (2+\delta)m_{s} \\ & \geq & \frac{\delta}{144}\cdot d_{t_{1}}(o^{*},o^{*a}) - \frac{4.001\delta^{2}}{48960k}\cdot 2.0002\cdot d_{t_{1}}(o^{*},o^{*a}) - 0.0002\delta^{2}\cdot d_{t_{1}}(o^{*},o^{*a}) \\ & & - 8.002km_{c} - \frac{4.001\delta^{2}}{48960k}\cdot 8.002km_{c} - 4.001m_{c} \\ \Rightarrow & & 2.0009 \cdot \hat{t}\cdot m_{s} \geq 0.0065\delta \cdot d_{t_{1}}(o^{*},o^{*a}) -12.0047km_{c}. \end{array}$$

To finish the proof, we show that \(\hat {o}\) has enough time to reach its destination by comparing the lower bound of the time oj takes to move into position to the upper bound of the travel path of \(\hat {o}\):

$$\begin{array}{rcl} & & \hat{t}\cdot(1+\frac{\delta}{8})\cdot m_{s} \\ & \geq & \hat{t}\cdot m_{s} + 0.0002\delta^{2}\cdot (d_{t_{1}}(o^{*},o^{*a}) + \hat{t}(2+\delta)\cdot m_{s}) + 8.002km_{c} \\&& + \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o^{*},o^{*a}) \\ \Leftrightarrow & & \hat{t}\cdot(1+\frac{\delta}{8})\cdot m_{s} - (1+0.0006\delta^{2})\cdot \hat{t}\cdot m_{s} \\ & \geq & (0.0002\delta^{2}+ \frac{\delta^{2}}{48960k})\cdot d_{t_{1}}(o^{*},o^{*a})+ 8.002km_{c} \\ \Leftarrow & & \hat{t}\cdot (\frac{\delta}{8} - 0.0006\delta^{2})m_{s} \\ & \geq & (0.0002\delta^{2}+ \frac{\delta^{2}}{48960k})\cdot d_{t_{1}}(o^{*},o^{*a})+ 8.002km_{c} \\ \Leftarrow & & \frac{1}{2.0009 \cdot m_{s}}\cdot (0.0065\delta \cdot d_{t_{1}}(o^{*},o^{*a}) -12.0047km_{c}) \cdot (\frac{\delta}{8} - 0.0006\delta^{2})m_{s} \\ & \geq & (0.0002\delta^{2}+ \frac{\delta^{2}}{48960k})\cdot d_{t_{1}}(o^{*},o^{*a})+ 8.002km_{c} \\ \Leftarrow & & 0.0004\delta^{2}\cdot d_{t_{1}}(o^{*},o^{*a}) - 0.75\delta km_{c} \\ & \geq & (0.0002\delta^{2}+ \frac{\delta^{2}}{48960k})\cdot d_{t_{1}}(o^{*},o^{*a})+ 8.002km_{c} \\ \Leftarrow & & 0.00017\delta^{2}\cdot d_{t_{1}}(o^{*},o^{*a}) \geq (8.002 + 0.75\delta)km_{c} \\ \Leftarrow & & d_{t_{1}}(o^{*},o^{*a}) \geq 51483k\frac{m_{c}}{\delta^{2}} \end{array}$$

Our analysis of the movement pattern of \(\hat {o}\) leads directly to the following lemma, in which we mostly need to argue that either \(\hat {o}\in outer(o^{*})\) or \(\hat {o}=r\).

Lemma 5

During the execution of the algorithm, \(d(\hat {a},\hat {o}) \leq 2\cdot d(o^{*},o^{*a}) + d(a^{*},r)\) as long as the algorithm is in step 1 or 2.

Proof

We argue that \(\hat {o}\in outer(o^{*})\) or \(\hat {o}=r\). We have demonstrated that during a sequence of short transitions, \(\hat {o}\) never leaves \(outer(o^{*})\). It remains to show that the statement holds during a long transition. We observe \(\hat {o}\) during the transition time \(t^{*}=t_{2}-t_{1}\). Before the first step, \(r\in inner_{t_{1}}(o^{*})\) and \(\hat {o}\in outer_{t_{1}}(o^{*})\). In the proof of Lemma 1, we have already shown that \(\hat {o}\) catches up to r within the time \(t^{*}\) for \(t^{*} \geq inner_{t_{1}}(o^{*})/m_{c}\). Expressed in distance, \(\hat {o}\) catches up to r when r is a distance of \(inner_{t_{1}}(o^{*})\) outside the inner circle of \(o^{*}\). We show that at this time, r is still in \(outer(o^{*})\): Let \(\hat {t}=\left \lceil inner_{t_{1}}(o^{*})/m_{c}\right \rceil \). We have

$$\begin{array}{rcl} d_{t_{1}+\hat{t}}(r,o^{*}) & \leq & d_{t_{1}}(r,o^{*}) + \hat{t}\cdot 2m_{c} \\ & \leq & \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o^{*},o^{*a}) + 2\cdot inner_{t_{1}}(o^{*}) + 2m_{c} \\ & \leq & 3\cdot \frac{\delta^{2}}{48960k}\cdot d_{t_{1}}(o^{*},o^{*a}) + 2m_{c}. \end{array}$$

With

$$\begin{array}{rrcl} & d_{t_{1}+\hat{t}}(o^{*},o^{*a}) & \geq & d_{t_{1}}(o^{*},o^{*a}) - \hat{t}\cdot (2+\delta)m_{s} \\ & & \geq & d_{t_{1}}(o^{*},o^{*a}) - 2\cdot inner_{t_{1}}(o^{*}) - 2m_{c} \\ \Leftrightarrow & d_{t_{1}+\hat{t}}(o^{*},o^{*a}) + 2m_{c} & \geq & (1-\frac{2\delta^{2}}{48960k})\cdot d_{t_{1}}(o^{*},o^{*a}) \\ \Rightarrow & 2\cdot d_{t_{1}+\hat{t}}(o^{*},o^{*a}) + 4m_{c} & \geq & d_{t_{1}}(o^{*},o^{*a}) \end{array}$$

we get

$$\begin{array}{rcl} d_{t_{1}+\hat{t}}(r,o^{*}) & \leq & \frac{6\delta^{2}}{48960k}\cdot d_{t_{1}+\hat{t}}(o^{*},o^{*a}) + 3m_{c} \\ & \leq & \frac{\delta}{48}\cdot d_{t_{1}+\hat{t}}(o^{*},o^{*a}) \end{array}$$

as long as \(d_{t_{1}+\hat {t}}(o^{*},o^{*a})>145m_{c}\).

This implies that at all times, either \(\hat {o}\in outer(o^{*})\) or \(r=\hat {o}\).

We now turn to the claim of the lemma. If \(\hat {o}\in outer(o^{*})\), then \(d(\hat {a},\hat {o}) \leq d(o^{*a},\hat {o}) \leq 2\cdot d(o^{*},o^{*a})\). If \(\hat {o}=r\), then \(\hat {a}=a^{*}\) and therefore \(d(\hat {a},\hat {o})=d(a^{*},r)\). □

So far we have shown that all claims of Proposition 2 hold as long as the algorithm is not in step 3. It remains to analyze step 3 of the algorithm, using similar arguments as for analyzing the long transitions earlier.

Lemma 6

After the execution of step 3 it holds \(\hat {o}=r\). Furthermore, \(d(\hat {a},\hat {o}) \leq 2\cdot d(o^{*},o^{*a}) + d(a^{*},r)\) during step 3 of the algorithm.

Proof

We define time steps t1 and t2 such that they encompass step 3 of the algorithm: i.e., t1 and t2 are chosen minimal such that \(d_{t_{1}}(o^{*},o^{*a}) < 51483\frac {m_{c}}{\delta ^{2}}\) and \(d_{t_{2}}(o^{*},o^{*a}) \geq 2\cdot 51483\frac {m_{c}}{\delta ^{2}}\). Since \(d(o^{*},o^{*a})\) changes by at most (2 + δ)ms ≤ 2mc in each time step, \(t_{2}-t_{1}\geq 25741.5\cdot \frac {1}{\delta ^{2}}\).

If at time t1 the procedure is in a long transition, the algorithm already follows r and can continue as usual (the result for the long transition holds independently of \(d(o^{*},o^{*a})\)). Otherwise, we have \(\hat {o},r\in outer(o^{*})\). Hence \(d_{t_{1}}(\hat {o},r)\leq \frac {\delta }{24}\cdot d_{t_{1}}(o^{*},o^{*a}) \leq \frac {51483}{24}\cdot \frac {m_{c}}{\delta }\). The server \(\hat {o}\) closes the distance to r by at least \((1+\frac {1020k}{\delta })\cdot m_{c}\) per time step. Clearly, \((t_{2}-t_{1})\cdot (1+\frac {1020k}{\delta })\cdot m_{c} > \frac {51483}{24}\cdot \frac {m_{c}}{\delta }\) and therefore \(\hat {o}=r\) at time t2.

The second claim, \(d(\hat {a},\hat {o}) \leq 2\cdot d(o^{*},o^{*a}) + d(a^{*},r)\), can be shown in the same way as in the previous lemma, where it is clear that r is reached before \(d(o^{*},o^{*a})\) falls below 145mc. □
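The counting argument in the proof above is easy to check numerically: with at least \(25741.5/\delta^{2}\) steps and a gain of at least \((1+\frac{1020k}{\delta})m_{c}\) per step, the total gain exceeds the initial gap of \(\frac{51483}{24}\cdot\frac{m_{c}}{\delta}\) for any δ ∈ (0,1] and k ≥ 1. A quick Python sketch over sample parameters:

```python
# Total distance hat{o} gains on r during step 3 versus the initial gap.
for delta in (0.05, 0.5, 1.0):
    for k in (1, 2, 10):
        m_c = 1.0
        steps = 25741.5 / delta**2                 # lower bound on t2 - t1
        gain_per_step = (1 + 1020 * k / delta) * m_c
        initial_gap = 51483 / 24 * m_c / delta
        assert steps * gain_per_step > initial_gap
```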

4.2.3 Algorithm Analysis

We now turn our attention back to the analysis of the UMS algorithm. In the following, we assume \(\mathcal {K}\) to be a k-Server algorithm as described in Section 4.2.1. We use a potential composed of two major parts which balance the main ideas of our algorithm against each other: ϕ will measure the costs of the greedy strategy, while ψ will cover the matching to the simulated k-Server algorithm.

Let \(\hat {o}\) be an offline server which fulfills the invariants stated in Proposition 2. Recall that \(\hat {a}\) denotes the currently closest server of the online algorithm to \(\hat {o}\). The first part of the potential is then defined as

$$\phi:=\left\{\begin{array}{ll} 4\cdot d(\hat{a},\hat{o}) & \text{ if } d(\hat{a},\hat{o})\leq 107548\cdot \frac{k m_{c}}{\delta^{2}} \\ 4\cdot \frac{1}{\delta m_{s}}d(\hat{a},\hat{o})^{2} - A & \text{ if } 107548\cdot \frac{k m_{c}}{\delta^{2}}<d(\hat{a},\hat{o}) \end{array}\right.$$

with \(A:=4\cdot (\frac {1}{\delta m_{s}}(107548\frac {k m_{c}}{\delta ^{2}})^{2} - 107548\frac {k m_{c}}{\delta ^{2}})\).

For the second part, we set

$$\psi:=Y\cdot\frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}d(a_{i},c_{i})$$

where the online servers ai are always sorted such that they represent a minimum weight matching to the simulated servers ci. We choose \(Y={\varTheta }(\frac {k}{\delta ^{2}})\) to be sufficiently large.
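The minimum weight matching underlying ψ can be computed by brute force for the small k used in examples. The following Python sketch works in the Euclidean plane with hypothetical server and page positions; `Y`, `m_c`, `delta` and `m_s` are arbitrary sample values, not the constants fixed by the analysis:

```python
from itertools import permutations
from math import dist

def min_weight_matching_cost(servers, pages):
    """Brute-force minimum weight perfect matching between two equal-size
    point sets (feasible for small k; the weights are the distances)."""
    return min(
        sum(dist(a, c) for a, c in zip(servers, perm))
        for perm in permutations(pages)
    )

def psi(servers, pages, Y, m_c, delta, m_s):
    # psi = Y * m_c / (delta * m_s) * (weight of the minimum matching)
    return Y * m_c / (delta * m_s) * min_weight_matching_cost(servers, pages)

a = [(0.0, 0.0), (4.0, 0.0)]
c = [(4.0, 3.0), (0.0, 1.0)]
# Matching a1 <-> (0,1), a2 <-> (4,3) has weight 1 + 3 = 4.
print(psi(a, c, Y=8, m_c=1.0, delta=0.5, m_s=1.0))  # prints 64.0
```

A real implementation would use a polynomial-time assignment algorithm instead of enumerating all k! permutations; the brute force only serves to make the definition concrete.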

If we understand ϕ as a function in \(d(\hat {a},\hat {o})\), then we can rewrite it as

$$\phi(d(\hat{a},\hat{o})) = \max\{4\cdot d(\hat{a},\hat{o}), 4\cdot\frac{1}{\delta m_{s}}d(\hat{a},\hat{o})^{2} - A\}.$$

Hence, when estimating the potential difference \({\varDelta }\phi =\phi (d(\hat {a}^{\prime },\hat {o}^{\prime }))-\phi (d(\hat {a},\hat {o}))\), we can upper bound it by replacing the term \(\phi (d(\hat {a},\hat {o}))\) with the case identical to \(\phi (d(\hat {a}^{\prime },\hat {o}^{\prime }))\). This mostly reduces estimating Δϕ to bounding the difference \(d(\hat {a}^{\prime },\hat {o}^{\prime })-d(\hat {a},\hat {o})\).
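The equivalence of the piecewise definition of ϕ and its max form can be checked numerically: A is chosen exactly so that the two branches meet at the breakpoint, and for d ≥ 0 the quadratic branch lies below the linear one before the breakpoint and above it after. A small Python sketch with arbitrary sample parameters:

```python
from math import isclose

def make_phi(k, m_c, delta, m_s):
    """Return the piecewise and the max form of phi, plus the breakpoint x0."""
    x0 = 107548 * k * m_c / delta**2                      # breakpoint
    A = 4 * (x0**2 / (delta * m_s) - x0)                  # makes branches meet at x0
    def phi_piecewise(x):
        return 4 * x if x <= x0 else 4 * x**2 / (delta * m_s) - A
    def phi_max(x):
        return max(4 * x, 4 * x**2 / (delta * m_s) - A)
    return phi_piecewise, phi_max, x0

pw, mx, x0 = make_phi(k=2, m_c=1.5, delta=0.5, m_s=1.0)
for x in (0.0, x0 / 2, x0, 2 * x0, 10 * x0):
    assert isclose(pw(x), mx(x), rel_tol=1e-6, abs_tol=1e-6)
```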

For some of our estimations we use a slightly altered result from [12] (simply replace δ by \(\frac {\delta }{2}\)). Note that while this lemma might hold for some other metrics as well, it explicitly requires the Euclidean space in the proof provided in [12].

Lemma 7

Let s be some server with \(d(s^{\prime },r^{\prime })\leq \frac {\sqrt \delta }{4}\cdot d(a_{i}^{\prime },r^{\prime })\). If ai moves towards \(r^{\prime }\) a distance of \(d(a_{i},a_{i}^{\prime })\), then \(d(a_{i},s^{\prime })-d(a_{i}^{\prime },s^{\prime })\geq \frac {1+\frac {1}{4}\delta }{1+\frac {1}{2}\delta }d(a_{i},a_{i}^{\prime })\).

We start the analysis by bounding the second potential difference Δψ. The bounds can be obtained by similar arguments as in the proof of Theorem 5.

Lemma 8

\({\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} - {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\).

Proof

Assume that the server moved towards the request is a1. Every other server ai moves towards its counterpart ci, hence

\(\begin {array}{rcl} {\varDelta }\psi & \leq & Y\cdot \frac {m_{c}}{\delta m_{s}}\sum \limits _{i=1}^{k}(d(a_{i}^{\prime },c_{i}^{\prime })-d(a_{i},c_{i})) \\ & \leq & Y\cdot \frac {m_{c}}{\delta m_{s}}\left (d(a_{1}^{\prime },c_{1}^{\prime })-d(a_{1},c_{1}) + \sum \limits _{i=2}^{k}(d(c_{i},c_{i}^{\prime })-d(a_{i},a_{i}^{\prime }))\right ). \end {array}\)

Now, if \(\mathcal {K}\) serves the request with c1, i.e., \(c_{1}^{\prime }=r^{\prime }\), then

$$ {\varDelta}\psi \leq Y\cdot\frac{m_{c}}{\delta m_{s}} \sum\limits_{i=1}^{k}(d(c_{i},c_{i}^{\prime})-d(a_{i},a_{i}^{\prime})). $$

Otherwise, \(\mathcal {K}\) serves the request with another server (assume c2). Since a2 was not chosen to move towards the request, it moves the full distance of (1 + δ)ms and hence \( \begin {array}{rcl} {\varDelta }\psi & \leq & Y\cdot \frac {m_{c}}{\delta m_{s}}\left (d(a_{1},a_{1}^{\prime }) + d(c_{1},c_{1}^{\prime }) + d(c_{2},c_{2}^{\prime })-d(a_{2},a_{2}^{\prime })\right .\\ & & \left .+ \sum \limits _{i=3}^{k}(d(c_{i},c_{i}^{\prime })-d(a_{i},a_{i}^{\prime }))\right )\\ & \leq & Y\cdot \frac {m_{c}}{\delta m_{s}}\left (\sum \limits _{i=1}^{k}d(c_{i},c_{i}^{\prime }) - \frac {\delta }{2}m_{s} - \sum \limits _{i=3}^{k}d(a_{i},a_{i}^{\prime })\right ). \end {array} \) The lemma follows by setting Y ≥ 8, as \(d(a_{1},a_{1}^{\prime }) + d(a_{2},a_{2}^{\prime })\leq 4 m_{s}\). □

Lemma 9

If \(d(a^{*'},r^{\prime })>0\), then \({\varDelta }\psi \leq Y\frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} - \sum \limits _{i=1}^{k}d(a_{i},a_{i}^{\prime }) - \frac {Y-4}{2}m_{c}\).

Proof

We assume the server moved towards the request is a1. Since \(d(a^{*'},r^{\prime })>0\), we have \(d(a_{1},a_{1}^{\prime })=(1+\frac {\delta }{2})m_{s}\). If r is served by c1, then

$$\begin{array}{rcl} {\varDelta}\psi & = & Y\cdot\frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}(d(a_{i}^{\prime},c_{i}^{\prime}) - d(a_{i},c_{i})) \\ & \leq & Y\cdot\frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}(d(c_{i},c_{i}^{\prime}) - d(a_{i},a_{i}^{\prime})) \\ & \leq & Y\cdot\frac{m_{c}}{\delta m_{s}}C_{\mathcal{K}} - \frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}d(a_{i},a_{i}^{\prime}) - (Y-1)\cdot\frac{m_{c}}{\delta m_{s}}(1+\frac{\delta}{2})m_{s}. \end{array}$$

If r is served by a different server of \(\mathcal {K}\) (assume c2), then

$$\begin{array}{rcl} {\varDelta}\psi & = & Y\cdot\frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}(d(a_{i}^{\prime},c_{i}^{\prime}) - d(a_{i},c_{i})) \\ & \leq & Y\cdot\frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}d(c_{i},c_{i}^{\prime}) - \frac{m_{c}}{\delta m_{s}}\sum\limits_{i=3}^{k}d(a_{i},a_{i}^{\prime}) - Y\cdot\frac{m_{c}}{\delta m_{s}}\cdot\frac{\delta m_{s}}{2} \\ & \leq & Y\cdot\frac{m_{c}}{\delta m_{s}}C_{\mathcal{K}} - \sum\limits_{i=1}^{k}d(a_{i},a_{i}^{\prime}) - \frac{Y-4}{2}m_{c}. \end{array}$$

This term is larger than the former one for sufficiently large Y. □

Now consider the case that \(r^{\prime }\notin inner(o^{*'})\). We have

$$\begin{array}{rcl} d(a^{*'},r^{\prime}) & \leq & d(o^{*a^{\prime}},r^{\prime}) \\ & \leq & d(o^{*'},o^{*a^{\prime}}) + d(o^{*'},r^{\prime}) \\ & \leq & (\frac{48960k}{\delta^{2}}+1)\cdot d(o^{*'},r^{\prime}). \end{array}$$

The movement costs are canceled by Δψ as in Lemma 8. It only remains to bound the possible increase of ϕ. We use \(d(\hat {a}^{\prime },\hat {o}^{\prime }) - d(\hat {a},\hat {o}) \leq (3+\frac {1020k}{\delta })\cdot m_{c}\).

Lemma 10

If \(r^{\prime }\notin inner(o^{*'})\), then \({\varDelta }\phi \leq \mathcal {O}(\frac {k^{2}}{\delta ^{4}}\frac {m_{c}}{m_{s}})\cdot C_{Opt}\).

Proof

1. \(d(\hat {a}^{\prime },\hat {o}^{\prime })\leq 107548\cdot \frac {k m_{c}}{\delta ^{2}}\): \({\varDelta }\phi \leq 4\cdot d(\hat {a}^{\prime },\hat {o}^{\prime })\leq 8\cdot d(o^{*'},o^{*a^{\prime }}) + 4\cdot d(a^{*'},r^{\prime }) \leq (12\cdot \frac {48960k}{\delta ^{2}} + 4)\cdot d(o^{*'},r^{\prime })\).

2. \(107548\cdot \frac {k m_{c}}{\delta ^{2}}<d(\hat {a}^{\prime },\hat {o}^{\prime })\): \(\begin {array}{rcl}{\varDelta }\phi & \leq & \frac {4}{\delta m_{s}}(d(\hat {a}^{\prime },\hat {o}^{\prime })^{2}-d(\hat {a},\hat {o})^{2}) \\ & \leq & \frac {4}{\delta m_{s}}(d(\hat {a}^{\prime },\hat {o}^{\prime })^{2}-(d(\hat {a}^{\prime },\hat {o}^{\prime })-(3+\frac {1020k}{\delta })\cdot m_{c})^{2}) \\ & \leq & \mathcal {O}(\frac {k}{\delta })\cdot \frac {m_{c}}{\delta m_{s}}d(\hat {a}^{\prime },\hat {o}^{\prime }) \\ & \leq & \mathcal {O}(\frac {k^{2}}{\delta ^{3}})\cdot \frac {m_{c}}{\delta m_{s}}d(o^{*'},r^{\prime }). \end {array}\)

In all of the above, the competitive ratio is bounded by

$$\mathcal{O}(\frac{k^{2}}{\delta^{3}})\cdot \frac{m_{c}}{\delta m_{s}} + Y\cdot\frac{m_{c}}{\delta m_{s}}\cdot c(\mathcal{K}).$$

Finally, we consider the case \(r^{\prime }\in inner(o^{*'})\). When \(d(a^{*},r^{\prime })> 102970\frac {k m_{c}}{\delta ^{2}}\), we use Lemma 7 to obtain the following:

Lemma 11

If \(d(a^{*},r^{\prime })> 102970\frac {k m_{c}}{\delta ^{2}}\) and \(r^{\prime }\in inner(o^{*'})\), then \(d(\hat {a}^{\prime },\hat {o}^{\prime })-d(\hat {a},\hat {o})\leq -\frac {\delta }{8}m_{s}\).

Proof

By our construction of the simulated k-Server algorithm, we have \(d(c_{i}^{\prime },r^{\prime }) \leq 9km_{c} \leq \frac {\delta ^{2}}{9724}\cdot d(a^{*'},r^{\prime })\) for all i. Furthermore,

$$\begin{array}{rrcl} & d(o^{*'},o^{*a^{\prime}}) & \leq & d(o^{*'},a^{*'}) \\ & & \leq & d(o^{*'},r^{\prime}) + d(r^{\prime},a^{*'}) \\ \Leftrightarrow & (1-\frac{\delta^{2}}{48960k})\cdot d(o^{*'},o^{*a^{\prime}}) & \leq & d(r^{\prime},a^{*'}). \end{array}$$

Hence

$$\begin{array}{rcl} d(c_{i}^{\prime},\hat{o}^{\prime}) & \leq & d(c_{i}^{\prime},r^{\prime}) + d(r^{\prime},o^{*'}) + d(o^{*'},\hat{o}^{\prime}) \\ & \leq & \frac{\delta^{2}}{9724}\cdot d(a^{*'},r^{\prime}) + (\frac{\delta}{48} + \frac{\delta^{2}}{48960k})\cdot d(o^{*'},o^{*a^{\prime}}) \\ & \leq & 0.021\delta\cdot d(a^{*'},r^{\prime}) \\ & \leq & 0.021\delta\cdot d(a_{i}^{\prime},r^{\prime}) \end{array}$$

and with Lemma 7, we get \(d(a_{i}^{\prime },\hat {o}^{\prime })-d(a_{i},\hat {o}^{\prime })\leq -\frac {1+\frac {1}{4}\delta }{1+\frac {1}{2}\delta }d(a_{i},a_{i}^{\prime })\) for all i.

In order to bound the movement of \(\hat {o}\), we need to show that \(d(o^{*},o^{*a}) \geq 2\cdot 51483\frac {km_{c}}{\delta ^{2}}\).

We use

$$\begin{array}{rrcl} & d(a^{*},r^{\prime}) & \leq & m_{c} + d(a^{*'},r^{\prime}) \\ & & \leq & m_{c} + d(o^{*a^{\prime}},r^{\prime}) \\ & & \leq & m_{c} + (1 + \frac{\delta^{2}}{48960k}) \cdot d(o^{*'},o^{*a^{\prime}}) \\ \Leftrightarrow & \frac{1}{1 + \frac{\delta^{2}}{48960k}}(d(a^{*},r^{\prime})-m_{c}) & \leq & d(o^{*'},o^{*a^{\prime}}). \end{array}$$

The bound follows from \(d(a^{*},r^{\prime })> 102970\frac {k m_{c}}{\delta ^{2}}\).

From Proposition 2 we get \(d(\hat {o},\hat {o}^{\prime })\leq (1+\frac {\delta }{8})m_{s}\) and therefore

$$\begin{array}{rcl} d(\hat{a}^{\prime},\hat{o}^{\prime}) - d(\hat{a},\hat{o}) & \leq & -\frac{1+\frac{1}{4}\delta}{1+\frac{1}{2}\delta}d(a_{i},a_{i}^{\prime}) + d(\hat{o},\hat{o}^{\prime}) \\ & \leq & -(1+\frac{\delta}{4})m_{s} + (1+\frac{\delta}{8})m_{s} \\ & \leq & -\frac{\delta}{8}m_{s} \end{array}$$

where i is chosen such that ai is closest to \(\hat {o}^{\prime }\). □

With this lemma, ϕ can be used to cancel the costs of the algorithm in case of a high distance to r.

Lemma 12

If \(r^{\prime }\in inner(o^{*'})\), then \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} + 2\cdot d(o^{*'},r^{\prime })\).

Proof

1. \(d(\hat {a}^{\prime },\hat {o}^{\prime })\leq 107548\cdot \frac {k m_{c}}{\delta ^{2}}\): We use

    $$\begin{array}{rcl} d(a^{*'},r^{\prime}) & \leq & d(\hat{a}^{\prime},r^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + d(\hat{o}^{\prime},r^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + 2\cdot\frac{\delta}{48}\cdot d(o^{*'},o^{*a^{\prime}}) \\ & \leq & (1 + \frac{2\delta}{47})\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) \end{array}$$

    to get \(C_{Alg} + {\varDelta }\phi \leq 6\cdot d(\hat {a}^{\prime },\hat {o}^{\prime }) + {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\). Furthermore, \({\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}C_{\mathcal {K}} - \sum \limits _{i=1}^{k}d(a_{i},a_{i}^{\prime }) - \frac {Y-4}{2}m_{c}\) due to Lemma 9. In total, \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}}\) with \(Y\geq {\varOmega }(\frac {k }{\delta ^{2}})\).

2. \(107548\cdot \frac {k m_{c}}{\delta ^{2}}<d(\hat {a}^{\prime },\hat {o}^{\prime })\): We show that the condition of Lemma 11 applies:

    $$\begin{array}{rrcl} & d(\hat{a}^{\prime},\hat{o}^{\prime}) & \leq & d(a^{*'},\hat{o}^{\prime}) \\ & & \leq & d(a^{*'},a^{*}) + d(a^{*},r^{\prime}) + d(r^{\prime},\hat{o}^{\prime}) \\ & & \leq & m_{c} + d(a^{*},r^{\prime}) + 2\cdot\frac{\delta}{48}\cdot d(o^{*'},o^{*a^{\prime}}) \\ & & \leq & m_{c} + d(a^{*},r^{\prime}) + \frac{2}{47}\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) \\ \Leftrightarrow & \frac{45}{47}\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) - m_{c} & \leq & d(a^{*},r^{\prime}) \\ \Rightarrow & 102970\frac{k m_{c}}{\delta^{2}} & \leq & d(a^{*},r^{\prime}) \end{array}$$

    Hence the lemma gives us

    $$\begin{array}{rcl} {\varDelta}\phi & \leq & 4\cdot \frac{1}{\delta m_{s}}\left( d(\hat{a}^{\prime},\hat{o}^{\prime})^{2} - d(\hat{a},\hat{o})^{2}\right) \\ & \leq & 4\cdot\frac{1}{\delta m_{s}}\left( d(\hat{a}^{\prime},\hat{o}^{\prime})^{2} - (d(\hat{a}^{\prime},\hat{o}^{\prime})+\frac{\delta}{8}m_{s})^{2}\right) \\ & \leq & -d(\hat{a}^{\prime},\hat{o}^{\prime}). \end{array}$$

    Furthermore, we have

    $$\begin{array}{rcl} C_{Alg} & \leq & d(\hat{a}^{\prime},r^{\prime}) + {\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + d(\hat{o}^{\prime},o^{*'}) + d(o^{*'},r^{\prime}) + {\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + (1+\frac{\delta}{48})\cdot d(o^{*'},r^{\prime}) + {\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \end{array}$$

    and \({\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} -{\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\) due to Lemma 8. In total, we get \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} + 2\cdot d(o^{*'},r^{\prime })\).

The resulting competitive ratio of \(Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot c(\mathcal {K}) + 2\) is less than the \(\mathcal {O}(\frac {k^{2}}{\delta ^{3}})\cdot \frac {m_{c}}{\delta m_{s}} + Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot c(\mathcal {K})\) bound from the former set of cases. Accounting for the loss due to the transformation of the simulated k-Server algorithm, we obtain the following result:

Theorem 6

If mc ≥ (1 + δ)ms, the algorithm UMS is \(\mathcal {O}(\frac {1}{\delta ^{4}}\cdot k^{2}\cdot \frac {m_{c}}{ m_{s}} +\frac {1}{\delta ^{3}}\cdot k^{2} \cdot \frac {m_{c}}{ m_{s}}\cdot c(\mathcal {K}))\)-competitive, where \(c(\mathcal {K})\) is the competitive ratio of the simulated k-Server algorithm \(\mathcal {K}\).

5 Extension to the Weighted Problem

In this section we consider our general model in which the movement costs are weighted with a factor D > 1. We assume throughout the section that D ≥ 2 for convenience in the analysis. In case D < 2, we may just apply the algorithm from the previous section, whose costs increase by at most a factor of 2 as a result.

The main difference to the unweighted case is that our algorithm uses a k-Page Migration algorithm as guidance, whose best known deterministic competitive ratio is a factor Θ(k) worse than that of a k-Server algorithm for general metrics. The analysis is slightly more involved since, unlike a k-Server algorithm, a k-Page Migration algorithm does not necessarily have a page at the point of the request. In case of small distances to r, the movement costs have to be balanced against the serving costs by scaling down the movement distance by a factor of D. Throughout this section, we use the same notation as for the unweighted version.

Our algorithm Weighted-Mobile Servers (WMS) works as follows: Take any k-Page Migration algorithm \(\mathcal {K}\). Upon receiving the next request \(r^{\prime }\), simulate the next step of \(\mathcal {K}\). Calculate a minimum weight matching (with the distances as weights) between the servers a1,…,ak of the online algorithm and the pages \(c_{1}^{\prime },\ldots ,c_{k}^{\prime }\) of \(\mathcal {K}\). Select a closest server \(\tilde {a}\) to \(r^{\prime }\) and move it towards \(r^{\prime }\) a distance of at most \(\min \limits \{m_{c},\frac {1}{D}(1-\varepsilon )\cdot d(\tilde {a},r^{\prime })\}\) in case mc ≤ (1 + δ − ε)ms, and at most \(\min \limits \{(1+\frac {\delta }{2})m_{s},\frac {1}{D}(1-\frac {\delta }{2})\cdot d(\tilde {a},r^{\prime })\}\) in case mc ≥ (1 + δ)ms. All other servers ai move towards their counterparts \(c_{i}^{\prime }\) in the matching with speed \(\min \limits \{(1+\delta )m_{s},\frac {1}{D}\cdot d(\tilde {a},r^{\prime })\}\). If a server other than \(\tilde {a}\) is closer to \(r^{\prime }\) after the movement, then move all servers towards their counterparts in the matching with speed ms instead.
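One step of WMS in the regime \(m_{c}\geq (1+\delta )m_{s}\) can be sketched as follows. This is a schematic Python rendition on the Euclidean plane, not the paper's formal algorithm: the matching is computed by brute force, the final fallback rule (when a server other than ã ends up closer to r′) is omitted for brevity, and all parameter values in the example are illustrative:

```python
from itertools import permutations
from math import dist

def move_towards(p, q, step):
    """Move point p towards q by at most `step` (reaching q if closer)."""
    d = dist(p, q)
    if d <= step:
        return q
    t = step / d
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def wms_step(servers, pages, r, D, delta, m_s):
    """One (schematic) WMS step for m_c >= (1+delta)*m_s: the closest server
    chases r with capped speed, the others follow their matched pages."""
    # Minimum weight matching of servers to simulated pages (brute force).
    match = min(permutations(range(len(pages))),
                key=lambda pi: sum(dist(servers[i], pages[pi[i]])
                                   for i in range(len(servers))))
    j = min(range(len(servers)), key=lambda i: dist(servers[i], r))
    new = []
    for i, a in enumerate(servers):
        if i == j:  # chase the request, speed capped as in the algorithm
            cap = min((1 + delta / 2) * m_s,
                      (1 - delta / 2) * dist(a, r) / D)
            new.append(move_towards(a, r, cap))
        else:       # follow the matched page of the simulated algorithm
            cap = min((1 + delta) * m_s, dist(servers[j], r) / D)
            new.append(move_towards(a, pages[match[i]], cap))
    return new

servers = [(0.0, 0.0), (10.0, 0.0)]
pages = [(0.0, 2.0), (10.0, 2.0)]
new = wms_step(servers, pages, r=(3.0, 0.0), D=2.0, delta=0.5, m_s=1.0)
```

In this example the first server chases the request (capped at 1.125 by the \(\frac{1}{D}(1-\frac{\delta}{2})\) term) while the second moves 1.5 towards its matched page.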

The remainder of this section is devoted to the analysis of the WMS algorithm and is structured similarly to Section 4.

We start by analyzing the case that mc ≤ (1 − ε) ⋅ ms for some \(\varepsilon \in (0,\frac {1}{2}]\). For \(\varepsilon > \frac {1}{2}\), our algorithm simply assumes \(\varepsilon =\frac {1}{2}\). It can be easily verified that this does not hinder the analysis.

Theorem 7

If mc ≤ (1 − ε) ⋅ ms for some \(\varepsilon \in (0,\frac {1}{2}]\), the algorithm WMS is \({\sqrt {2}\cdot 11}/{\varepsilon }\cdot c(\mathcal {K})\)-competitive, where \(c(\mathcal {K})\) is the competitive ratio of the simulated k-Page Migration algorithm \(\mathcal {K}\).

Proof

We assume the servers adapt their ordering a1,…,ak according to the minimum matching in each time step. Based on the matching, we define the following potential: \(\psi :=\sqrt {2}\cdot \frac {4D}{\varepsilon }{\sum }_{i=1}^{k}d(a_{i},c_{i})\). We observe that in all time steps it holds that \(d(a^{*},r)\leq \frac {D}{1-\varepsilon }\cdot m_{c} \leq 2Dm_{c}\). This is because the distance does not increase if the movement towards r is mc, and this is the case as soon as mc is less than or equal to \(\frac {1}{D}(1-\varepsilon )\cdot d(\tilde {a},r^{\prime })\) at the beginning of the time step. We fix a time step and assume \(\tilde {a}=a_{1}\).
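The invariant \(d(a^{*},r)\leq \frac{D}{1-\varepsilon }m_{c}\) can be illustrated with a one-dimensional simulation: a request that moves at most mc per step is chased with the capped speed \(\min \{m_{c},\frac{1}{D}(1-\varepsilon )\cdot d\}\). This is a sketch with arbitrary sample values, assuming the server starts on the request:

```python
import random

def chase(D, eps, m_c, steps=2000, seed=0):
    """1-D chase: the request r moves (here: randomly) by at most m_c per
    step; the server moves min(m_c, (1-eps)/D * gap) towards r.
    Returns the largest gap ever observed after the server's move."""
    rng = random.Random(seed)
    a, r = 0.0, 0.0          # server starts on the request
    worst = 0.0
    for _ in range(steps):
        r += rng.uniform(-m_c, m_c)           # request moves
        gap = abs(r - a)
        move = min(m_c, (1 - eps) / D * gap)  # capped chasing speed
        a += move if r > a else -move
        worst = max(worst, abs(r - a))
    return worst

D, eps, m_c = 3.0, 0.25, 1.0
assert chase(D, eps, m_c) <= D / (1 - eps) * m_c + 1e-9
```

The assertion reflects the argument above: once the gap exceeds \(\frac{D}{1-\varepsilon}m_{c}\), the server moves the full mc and the gap cannot grow further.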

First examine the case that \(\tilde {a}\) moves towards its matching partner instead of \(r^{\prime }\). Then \({\varDelta }\psi \leq \sqrt {2}\frac {4D}{\varepsilon }{\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) - \sqrt {2}\frac {4D}{\varepsilon }{\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\) and \(C_{Alg} = D\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime }) + d(a^{*'},r^{\prime }) \leq D\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime }) + 2Dm_{c}\). Consider the server which is matched to \(c^{*'}\): Either it reaches \(c^{*'}\) or it moves a distance of ms. In the first case \(d(a^{*'},r^{\prime }) \leq d(c^{*'},r^{\prime })\) which gives a competitive ratio of \(\sqrt {2}\frac {4}{\varepsilon }\cdot c(\mathcal {K})\) immediately. In the latter case, there is a server aj such that \(d(a_{j},a_{j}^{\prime })=m_{s}\) and hence \({\varDelta }\psi \leq \sqrt {2}\frac {4D}{\varepsilon }{\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) - \sqrt {2}\frac {D}{\varepsilon }{\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime }) - \sqrt {2}\frac {3D}{\varepsilon }m_{s}\) which implies a competitive ratio of at most \(\sqrt {2}\frac {4}{\varepsilon }\cdot c(\mathcal {K})\) as well.

Now assume \(\tilde {a}=a_{1}\) moves towards \(r^{\prime }\) and hence \(a^{*'}=a_{1}^{\prime }\). We have \(d(a_{1}^{\prime },c_{1}^{\prime })-d(a_{1},c_{1}^{\prime })\leq \min \limits \{m_{c},\frac {1}{D}(1-\varepsilon )\cdot d(\tilde {a},r^{\prime })\}\). In all of the following cases, we make use of

$$\begin{array}{rcl} {\varDelta}\psi & = & \sqrt{2}\frac{4D}{\varepsilon}\left( {\sum}_{i=1}^{k}d(a_{i}^{\prime},c_{i}^{\prime}) - {\sum}_{i=1}^{k}d(a_{i},c_{i})\right) \\ & \leq & \sqrt{2}\frac{4D}{\varepsilon}{\sum}_{i=1}^{k}d(c_{i},c_{i}^{\prime}) + \sqrt{2}\frac{4D}{\varepsilon}\left( {\sum}_{i=1}^{k}d(a_{i}^{\prime},c_{i}^{\prime}) - {\sum}_{i=1}^{k}d(a_{i},c_{i}^{\prime})\right). \end{array}$$

We distinguish the following cases with respect to the positioning of the pages of \(\mathcal {K}\):

1. \(d(a^{*'},r^{\prime }) \leq d(c^{*'},r^{\prime })\): Since we assume D ≥ 2, we have

    \(\begin {array}{rrcl} & D\cdot d(a_{1},a_{1}^{\prime }) & \leq & d(a_{1},r^{\prime }) \\ & & \leq & d(a_{1},a_{1}^{\prime }) + d(a_{1}^{\prime },r^{\prime }) \\ \Rightarrow & \frac {D}{2}\cdot d(a_{1},a_{1}^{\prime }) & \leq & d(c^{*'},r^{\prime }). \end {array}\)

    It follows \(C_{Alg} \leq 3\cdot d(c^{*'},r^{\prime }) + D\cdot {\sum }_{i=2}^{k}d(a_{i},a_{i}^{\prime })\) and \({\varDelta }\psi \leq \sqrt {2}\frac {4D}{\varepsilon }{\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) + \sqrt {2}\frac {8}{\varepsilon }\cdot d(c^{*'},r^{\prime }) - D\cdot {\sum }_{i=2}^{k}d(a_{i},a_{i}^{\prime })\).

2. \(d(a^{*'},r^{\prime }) > d(c^{*'},r^{\prime })\) and \(c^{*'}=c_{1}^{\prime }\): We know that \(d(a_{1}^{\prime },c_{1}^{\prime }) - d(a_{1},c_{1}^{\prime }) \leq -\frac {1}{\sqrt {2}}\cdot d(a_{1},a_{1}^{\prime }) = -\frac {1}{\sqrt {2}}\cdot \min \limits \{m_{c},\frac {1}{D}(1-\varepsilon )\cdot d(\tilde {a},r^{\prime })\}\) and hence \({\varDelta }\psi \leq \sqrt {2}\frac {4D}{\varepsilon }{\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) -\frac {4D}{\varepsilon }\cdot \min \limits \{m_{c},\frac {1}{D}(1-\varepsilon )\cdot d(\tilde {a},r^{\prime })\} - D\cdot \sum \limits _{i=2}^{k}d(a_{i},a_{i}^{\prime })\). If \(d(a_{1},a_{1}^{\prime })=m_{c}\) then \(C_{Alg}\leq 3Dm_{c} + D\cdot {\sum }_{i=2}^{k}d(a_{i},a_{i}^{\prime })\), otherwise \(C_{Alg}\leq 2\cdot d(\tilde {a},r^{\prime }) + D\cdot {\sum }_{i=2}^{k}d(a_{i},a_{i}^{\prime })\).

  3.

    \(d(a^{*'},r^{\prime }) > d(c^{*'},r^{\prime })\) and \(c^{*'}\neq c_{1}^{\prime }\): We assume \(c^{*'}=c_{2}^{\prime }\). It must hold \(a_{2}^{\prime }\neq c_{2}^{\prime }\) and hence \(d(c_{2}^{\prime },a_{2}^{\prime })-d(c_{2}^{\prime },a_{2})\leq - \min \limits \{m_{s},\frac {1}{D}\cdot d(\tilde {a},r^{\prime })\}\). In the case \(d(a_{2},a_{2}^{\prime })= \frac {1}{D}\cdot d(\tilde {a},r^{\prime })\), it holds \(d(a_{1}^{\prime },c_{1}^{\prime }) - d(a_{1},c_{1}^{\prime }) + d(a_{2}^{\prime },c_{2}^{\prime }) - d(a_{2},c_{2}^{\prime }) \leq -\frac {\varepsilon }{D}\cdot d(\tilde {a},r^{\prime })\). This gives us \({\varDelta }\psi \leq \sqrt {2}\frac {4D}{\varepsilon }({\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) -\frac {\varepsilon }{D}\cdot d(\tilde {a},r^{\prime }) - {\sum }_{i=3}^{k}d(a_{i},a_{i}^{\prime }))\). With \(C_{Alg} = d(a_{1}^{\prime },r^{\prime }) + D\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime }) \leq 3\cdot d(\tilde {a},r^{\prime }) + D\cdot {\sum }_{i=3}^{k}d(a_{i},a_{i}^{\prime })\) the bound follows.

    In case \(d(a_{2},a_{2}^{\prime })= m_{s}\), we have \(d(a_{1}^{\prime },c_{1}^{\prime }) - d(a_{1},c_{1}^{\prime }) + d(a_{2}^{\prime },c_{2}^{\prime }) - d(a_{2},c_{2}^{\prime }) \leq m_{c} - m_{s} \leq -\varepsilon m_{s}\). Similar as before, \({\varDelta }\psi \leq \sqrt {2}\frac {4D}{\varepsilon }({\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) - \varepsilon m_{s} - {\sum }_{i=3}^{k}d(a_{i},a_{i}^{\prime }))\) and \(C_{Alg}\leq 4Dm_{s} + D\cdot {\sum }_{i=3}^{k}d(a_{i},a_{i}^{\prime })\).

We can extend this bound to the resource augmentation scenario, where the online algorithm may move the servers a maximum distance of (1 + δ) ⋅ ms. Relaxing the condition appropriately to mc ≤ (1 + δε) ⋅ ms, we obtain the following result:

Corollary 2

If mc ≤ (1 + δε) ⋅ ms for some \(\varepsilon \in (0,\frac {1}{2}]\), the algorithm WMS is \(\frac {\sqrt {2}\cdot 11\cdot (1+\delta )}{\varepsilon }\cdot c(\mathcal {K})\)-competitive, where \(c(\mathcal {K})\) is the competitive ratio of the simulated k-Page Migration algorithm \(\mathcal {K}\).

From here on we assume \(\mathcal {K}\) to be a k-Page Migration algorithm obtained from the transformation in Section 4.2.1. The offline helper and its invariants as stated in Proposition 2 do not depend on the simulated algorithm and therefore all insights gained from Section 4.2.2 are still valid. We use a potential composed of two major parts just as for the unweighted case.

Let \(\hat {o}\) be an offline server which fulfills the invariants stated in Proposition 2. The first part of the potential is then defined as

$$\phi:=\left\{\begin{array}{ll} 4\cdot d(\hat{a},\hat{o}) & \text{ if } d(\hat{a},\hat{o})\leq 107548D\cdot \frac{k m_{c}}{\delta^{2}} \\ 4\cdot \frac{1}{\delta m_{s}}d(\hat{a},\hat{o})^{2} + A & \text{ if } 107548D\cdot \frac{k m_{c}}{\delta^{2}}<d(\hat{a},\hat{o}) \end{array}\right.$$

with \(A:=4\cdot (107548D\frac {k m_{c}}{\delta ^{2}} - \frac {1}{\delta m_{s}}(107548D\frac {k m_{c}}{\delta ^{2}})^{2})\).
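The constant A is chosen exactly so that the linear and the quadratic branch of ϕ agree at the threshold \(107548D\cdot \frac {k m_{c}}{\delta ^{2}}\). The following snippet is a small numerical sanity check of this continuity, not part of the formal analysis; the helper name and parameter values are our own:

```python
def make_phi(D, k, m_c, m_s, delta):
    """Piecewise potential phi; the additive constant A glues the two branches."""
    t0 = 107548 * D * k * m_c / delta**2          # threshold on d(a_hat, o_hat)
    A = 4 * (t0 - t0**2 / (delta * m_s))
    def phi(d):
        return 4 * d if d <= t0 else 4 * d**2 / (delta * m_s) + A
    return phi, t0

phi, t0 = make_phi(D=1, k=1, m_c=1.0, m_s=1.0, delta=0.5)
# Evaluating the quadratic branch at the threshold reproduces the linear value 4*t0.
quad_at_t0 = 4 * t0**2 / (0.5 * 1.0) + 4 * (t0 - t0**2 / (0.5 * 1.0))
```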

For the second part, we set

$$\psi:=Y\cdot D\frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}d(a_{i},c_{i})$$

where the online servers ai are always sorted such that they represent a minimum weight matching to the simulated servers ci. We choose \(Y={\varTheta }(\frac {k}{\delta ^{2}})\) to be sufficiently large.
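For small k, the minimum-weight matching used in the definition of ψ can be computed by brute force over all k! pairings. The sketch below is an illustration of ours (helper name and Euclidean points are assumptions, not taken from the paper):

```python
import itertools
import math

def min_weight_matching(online, simulated):
    """Reorder `online` to minimize the total distance to `simulated`.

    Brute force over all k! permutations; adequate only for small k."""
    k = len(online)
    best_cost, best_perm = math.inf, None
    for perm in itertools.permutations(range(k)):
        cost = sum(math.dist(online[perm[i]], simulated[i]) for i in range(k))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return [online[i] for i in best_perm], best_cost

# a1 at (0,0), a2 at (5,0); simulated pages c1 at (4,0), c2 at (1,0).
servers_a = [(0.0, 0.0), (5.0, 0.0)]
servers_c = [(4.0, 0.0), (1.0, 0.0)]
matched, cost = min_weight_matching(servers_a, servers_c)
# The crossing-free pairing a2->c1, a1->c2 has total weight 1 + 1 = 2.
```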

We begin by analyzing ψ, reusing ideas from the proof of Theorem 7.

Lemma 13

\({\varDelta }\psi \leq \mathcal {O}(1)\cdot Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} - D\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\).

Proof

Assume that \(\tilde {a} = a_{1}\). Every other server ai moves towards its counterpart ci, hence

$$\begin{array}{rcl} {\varDelta}\psi & \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}(d(a_{i}^{\prime},c_{i}^{\prime})-d(a_{i},c_{i})) \\ & \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}\left( d(a_{1}^{\prime},c_{1}^{\prime})-d(a_{1},c_{1}) + \sum\limits_{i=2}^{k}(d(c_{i},c_{i}^{\prime})-d(a_{i},a_{i}^{\prime}))\right). \end{array}$$

First examine the case that \(\tilde {a}\) moves towards its matching partner instead of \(r^{\prime }\). Then \({\varDelta }\psi \leq Y\cdot D\frac {m_{c}}{\delta m_{s}}\cdot {\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) - Y\cdot D\frac {m_{c}}{\delta m_{s}}\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\).

Now assume \(\tilde {a}=a_{1}\) moves towards \(r^{\prime }\). We have \(d(a_{1},a_{1}^{\prime })\leq \min \limits \{(1+\frac {\delta }{2})m_{s},\frac {1}{D}(1-\frac {\delta }{2})\cdot d(\tilde {a},r^{\prime })\}\). We distinguish the following cases with respect to the positioning of the pages of \(\mathcal {K}\):

  1.

    \(d(a^{*'},r^{\prime }) \leq d(c^{*'},r^{\prime })\):

    Since we assume D ≥ 2, we have

    $$\begin{array}{rrcl} & D\cdot d(a_{1},a_{1}^{\prime}) & \leq & d(a_{1},r^{\prime}) \\ & & \leq & d(a_{1},a_{1}^{\prime}) + d(a_{1}^{\prime},r^{\prime}) \\ \Rightarrow & \frac{D}{2}\cdot d(a_{1},a_{1}^{\prime}) & \leq & d(c^{*'},r^{\prime}). \end{array}$$

    It follows \({\varDelta }\psi \leq Y D\frac {m_{c}}{\delta m_{s}}\cdot {\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) + Y\frac {m_{c}}{\delta m_{s}}\cdot d(c^{*'},r^{\prime }) - D\cdot {\sum }_{i=2}^{k}d(a_{i},a_{i}^{\prime })\).

  2.

    \(d(a^{*'},r^{\prime }) > d(c^{*'},r^{\prime })\) and \(c^{*'}=c_{1}^{\prime }\): We know that \(d(a_{1}^{\prime },c_{1}^{\prime }) - d(a_{1},c_{1}^{\prime }) \leq -\frac {1}{\sqrt {2}}\cdot d(a_{1},a_{1}^{\prime })\) and hence \({\varDelta }\psi \leq Y\cdot D\frac {m_{c}}{\delta m_{s}}{\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) -\frac {Y}{\sqrt {2}}\cdot D\frac {m_{c}}{\delta m_{s}}\cdot d(a_{1},a_{1}^{\prime }) - D\cdot {\sum }_{i=2}^{k}d(a_{i},a_{i}^{\prime })\).

  3.

    \(d(a^{*'},r^{\prime }) > d(c^{*'},r^{\prime })\) and \(c^{*'}\neq c_{1}^{\prime }\): We assume \(c^{*'}=c_{2}^{\prime }\). It must hold \(a_{2}^{\prime }\neq c_{2}^{\prime }\) and hence \(d(c_{2}^{\prime },a_{2}^{\prime })-d(c_{2}^{\prime },a_{2})\leq - \min \limits \{m_{s},\frac {1}{D}\cdot d(\tilde {a},r^{\prime })\}\). This gives us \(d(a_{1}^{\prime },c_{1}^{\prime }) - d(a_{1},c_{1}^{\prime }) + d(a_{2}^{\prime },c_{2}^{\prime }) - d(a_{2},c_{2}^{\prime }) \leq -\frac {\delta }{2}\cdot d(a_{2},a_{2}^{\prime })\). It follows

    $$\begin{array}{rcl} {\varDelta}\psi & \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}({\sum}_{i=1}^{k}d(c_{i},c_{i}^{\prime}) -\frac{\delta}{2}\cdot d(a_{2},a_{2}^{\prime}) - {\sum}_{i=3}^{k}d(a_{i},a_{i}^{\prime})) \\ & \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}{\sum}_{i=1}^{k}d(c_{i},c_{i}^{\prime}) - D\cdot{\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}). \end{array}$$

Lemma 14

If \(d(a^{*'},r^{\prime })>d(c^{*'},r^{\prime })\), then \({\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}C_{\mathcal {K}} - D\cdot \sum \limits _{i=1}^{k}d(a_{i},a_{i}^{\prime }) - \frac {Y-4}{2}D\frac {m_{c}}{\delta m_{s}}\cdot \min \limits \{m_{s},\frac {1}{D}\cdot d(\tilde {a},r^{\prime })\}\).

Proof

Assume that \(\tilde {a} = a_{1}\). Every other server ai moves towards its counterpart ci, hence

$$\begin{array}{rcl} {\varDelta}\psi & \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}\sum\limits_{i=1}^{k}(d(a_{i}^{\prime},c_{i}^{\prime})-d(a_{i},c_{i})) \\& \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}\left( d(a_{1}^{\prime},c_{1}^{\prime})-d(a_{1},c_{1}) + \sum\limits_{i=2}^{k}(d(c_{i},c_{i}^{\prime})-d(a_{i},a_{i}^{\prime}))\right). \end{array}$$

First examine the case that \(\tilde {a}\) moves towards its matching partner instead of \(r^{\prime }\). Then

$$\begin{array}{rcl} {\varDelta}\psi & \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}\cdot{\sum}_{i=1}^{k}d(c_{i},c_{i}^{\prime}) - Y\cdot D\frac{m_{c}}{\delta m_{s}}\cdot{\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \\ & \leq & Y\cdot\frac{m_{c}}{\delta m_{s}}C_{\mathcal{K}} - D\cdot\sum\limits_{i=1}^{k}d(a_{i},a_{i}^{\prime}) - (Y-1)\cdot D\frac{m_{c}}{\delta} \end{array}$$

since the server matched to \(c^{*'}\) moves the full distance.

Now assume \(\tilde {a}\) moves towards \(r^{\prime }\). If \(c^{*'}=c_{1}^{\prime }\), we know that \(d(a_{1}^{\prime },c_{1}^{\prime }) - d(a_{1},c_{1}^{\prime }) \leq -\frac {1}{\sqrt {2}}\cdot d(a_{1},a_{1}^{\prime })\) and hence \({\varDelta }\psi \leq Y\cdot D\frac {m_{c}}{\delta m_{s}}{\sum }_{i=1}^{k}d(c_{i},c_{i}^{\prime }) - D\frac {m_{c}}{\delta m_{s}}\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })-\frac {Y}{\sqrt {2}}\cdot D\frac {m_{c}}{\delta m_{s}}d(a_{1},a_{1}^{\prime })\) with \(d(a_{1},a_{1}^{\prime }) = \min \limits \{(1+\frac {\delta }{2})m_{s},\frac {1}{D}(1-\frac {\delta }{2})\cdot d(\tilde {a},r^{\prime })\}\).

Otherwise, we assume \(c^{*'}=c_{2}^{\prime }\). It must hold \(a_{2}^{\prime }\neq c_{2}^{\prime }\) and hence \(d(c_{2}^{\prime },a_{2}^{\prime })-d(c_{2}^{\prime },a_{2})\leq - \min \limits \{m_{s},\frac {1}{D}\cdot d(\tilde {a},r^{\prime })\}\). This gives us \(d(a_{1}^{\prime },c_{1}^{\prime }) - d(a_{1},c_{1}^{\prime }) + d(a_{2}^{\prime },c_{2}^{\prime }) - d(a_{2},c_{2}^{\prime }) \leq -\frac {\delta }{2}\cdot d(a_{2},a_{2}^{\prime })\). It follows

$$\begin{array}{rcl} {\varDelta}\psi & \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}({\sum}_{i=1}^{k}d(c_{i},c_{i}^{\prime}) -\frac{\delta}{2}\cdot d(a_{2},a_{2}^{\prime}) - {\sum}_{i=3}^{k}d(a_{i},a_{i}^{\prime})) \\ & \leq & Y\cdot D\frac{m_{c}}{\delta m_{s}}{\sum}_{i=1}^{k}d(c_{i},c_{i}^{\prime}) - D\cdot{\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) - \frac{Y-4}{2}D\frac{m_{c}}{\delta m_{s}}\cdot d(a_{2},a_{2}^{\prime}). \end{array}$$

Now consider the case that \(r^{\prime }\notin inner(o^{*'})\). We have

$$\begin{array}{rcl} d(a^{*'},r^{\prime}) & \leq & d(o^{*a^{\prime}},r^{\prime}) \\ & \leq & d(o^{*'},o^{*a^{\prime}}) + d(o^{*'},r^{\prime}) \\ & \leq & (\frac{48960k}{\delta^{2}}+1)\cdot d(o^{*'},r^{\prime}). \end{array}$$

The movement costs are canceled by Δψ as in Lemma 13. The increase of ϕ can be bounded using Lemma 10. In all of the above, the competitive ratio is bounded by \(\mathcal {O}(\frac {k^{2}}{\delta ^{3}})\cdot \frac {m_{c}}{\delta m_{s}} + Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot c(\mathcal {K})\).

Finally, we consider the case \(r^{\prime }\in inner(o^{*'})\). As in the previous section, whenever \(d(a^{*},r^{\prime })> 102970D\frac {k m_{c}}{\delta ^{2}}\), we make use of Lemma 7 to obtain the following result, which then helps us bound Δϕ:

Lemma 15

If \(d(a^{*},r^{\prime })> 102970D\frac {k m_{c}}{\delta ^{2}}\) and \(r^{\prime }\in inner(o^{*'})\), then \(d(\hat {a}^{\prime },\hat {o}^{\prime })-d(\hat {a},\hat {o})\leq -\frac {\delta }{8}m_{s}\).

Proof

By our construction of the simulated k-Page Migration algorithm, we have \(d(c_{i}^{\prime },r^{\prime }) \leq 33Dkm_{c} \leq \frac {\delta ^{2}}{2652}\cdot d(a^{*'},r^{\prime })\) for all i. Furthermore,

$$\begin{array}{rrcl} & d(o^{*'},o^{*a^{\prime}}) & \leq & d(o^{*'},a^{*'}) \\ & & \leq & d(o^{*'},r^{\prime}) + d(r^{\prime},a^{*'}) \\ \Leftrightarrow & (1-\frac{\delta^{2}}{48960k})\cdot d(o^{*'},o^{*a^{\prime}}) & \leq & d(r^{\prime},a^{*'}). \end{array}$$

Hence

$$\begin{array}{rcl} d(c_{i}^{\prime},\hat{o}^{\prime}) & \leq & d(c_{i}^{\prime},r^{\prime}) + d(r^{\prime},o^{*'}) + d(o^{*'},\hat{o}^{\prime}) \\ & \leq & \frac{\delta^{2}}{2652}\cdot d(a^{*'},r^{\prime}) + (\frac{\delta}{48} + \frac{\delta^{2}}{48960k})\cdot d(o^{*'},o^{*a^{\prime}}) \\ & \leq & 0.022\delta\cdot d(a^{*'},r^{\prime}) \\ & \leq & 0.022\delta\cdot d(a_{i}^{\prime},r^{\prime}) \end{array}$$

and with Lemma 7, \(d(a_{i}^{\prime },\hat {o}^{\prime })-d(a_{i},\hat {o}^{\prime })\leq -\frac {1+\frac {1}{4}\delta }{1+\frac {1}{2}\delta }d(a_{i},a_{i}^{\prime })\) follows for all i.

In order to bound the movement of \(\hat {o}\), we need to show that \(d(o^{*},o^{*a}) \geq 2\cdot 51483\frac {km_{c}}{\delta ^{2}}\). We use

$$\begin{array}{rrcl} & d(a^{*},r^{\prime}) & \leq & m_{c} + d(a^{*'},r^{\prime}) \\ & & \leq & m_{c} + d(o^{*a^{\prime}},r^{\prime}) \\ & & \leq & m_{c} + (1 + \frac{\delta^{2}}{48960k}) \cdot d(o^{*'},o^{*a^{\prime}}) \\ \Leftrightarrow & \frac{1}{1 + \frac{\delta^{2}}{48960k}}(d(a^{*},r^{\prime})-m_{c}) & \leq & d(o^{*'},o^{*a^{\prime}}). \end{array}$$

The bound follows from \(d(a^{*},r^{\prime })> 102970\frac {k m_{c}}{\delta ^{2}}\).

From Proposition 2 we get \(d(\hat {o},\hat {o}^{\prime })\leq (1+\frac {\delta }{8})m_{s}\) and therefore

$$\begin{array}{rcl} d(\hat{a}^{\prime},\hat{o}^{\prime}) - d(\hat{a},\hat{o}) & \leq & -\frac{1+\frac{1}{4}\delta}{1+\frac{1}{2}\delta}d(a_{i},a_{i}^{\prime}) + d(\hat{o},\hat{o}^{\prime}) \\ & \leq & -(1+\frac{\delta}{4})m_{s} + (1+\frac{\delta}{8})m_{s} \\ & \leq & -\frac{\delta}{8}m_{s} \end{array}$$

where i is chosen such that ai is closest to \(\hat {o}^{\prime }\). □

Lemma 16

If \(r^{\prime }\in inner(o^{*'})\), then \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} + 2\cdot d(o^{*'},r^{\prime })\).

Proof

  1.

    \(d(\hat {a}^{\prime },\hat {o}^{\prime })\leq 107548D\cdot \frac {k m_{c}}{\delta ^{2}}\): First consider the case \(d(a^{*'},r^{\prime })\leq d(c^{*'},r^{\prime })\). With Lemma 13 we can bound the movement costs of the algorithm. Furthermore, we use

    $$\begin{array}{rrcl} & d(o^{*'},o^{*a^{\prime}}) & \leq & d(o^{*'},\hat{a}^{\prime}) \\ & & \leq & d(o^{*'},\hat{o}^{\prime}) + d(\hat{o}^{\prime},\hat{a}^{\prime}) \\ & & \leq & \frac{\delta}{48}\cdot d(o^{*'},o^{*a^{\prime}}) + d(\hat{o}^{\prime},\hat{a}^{\prime}) \\ \Leftrightarrow & (1-\frac{\delta}{48})\cdot d(o^{*'},o^{*a^{\prime}}) & \leq & d(\hat{o}^{\prime},\hat{a}^{\prime}) \end{array}$$

    to get

    $$\begin{array}{rrcl} & d(\hat{o}^{\prime},\hat{a}^{\prime}) & \leq & d(\hat{o}^{\prime},a^{*'}) \\ & & \leq & d(a^{*'},r^{\prime}) + d(r^{\prime},o^{*'}) + d(o^{*'},\hat{o}^{\prime}) \\ & & \leq & d(a^{*'},r^{\prime}) + 2\cdot\frac{\delta}{48}\cdot d(o^{*'},o^{*a^{\prime}}) \\ & & \leq & d(a^{*'},r^{\prime}) + \frac{2\delta}{47}\cdot d(\hat{o}^{\prime},\hat{a}^{\prime}) \\ \Rightarrow & d(\hat{o}^{\prime},\hat{a}^{\prime}) & \leq & 2\cdot d(a^{*'},r^{\prime}). \end{array}$$

    Hence \({\varDelta }\phi \leq 4\cdot d(\hat {a}^{\prime },\hat {o}^{\prime })\leq 8\cdot d(c^{*'},r^{\prime })\). In total, \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}}\) with Y ≥ 9.

    Otherwise, Lemma 14 applies which gives us \({\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}C_{\mathcal {K}} - D\cdot \sum \limits _{i=1}^{k}d(a_{i},a_{i}^{\prime }) - \frac {Y-4}{2}D\frac {m_{c}}{\delta m_{s}}\cdot \min \limits \{m_{s},\frac {1}{D}\cdot d(\tilde {a},r^{\prime })\}\). We may either use \(C_{Alg} + {\varDelta }\phi \leq 9\cdot d(\tilde {a},r^{\prime }) + D\cdot \sum \limits _{i=1}^{k}d(a_{i},a_{i}^{\prime })\), or

    $$\begin{array}{rcl} d(a^{*'},r^{\prime}) & \leq & d(\hat{a}^{\prime},r^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + d(\hat{o}^{\prime},r^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + 2\cdot\frac{\delta}{48}\cdot d(o^{*'},o^{*a^{\prime}}) \\ & \leq & (1 + \frac{2\delta}{47})\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) \end{array}$$

    gives us \(C_{Alg} + {\varDelta }\phi \leq 6\cdot d(\hat {a}^{\prime },\hat {o}^{\prime }) + D\cdot \sum \limits _{i=1}^{k}d(a_{i},a_{i}^{\prime }) \leq 6\cdot 107548D\cdot \frac {k m_{c}}{\delta ^{2}}+ D\cdot \sum \limits _{i=1}^{k}d(a_{i},a_{i}^{\prime })\). In any case, \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}}\) with \(Y\geq {\varOmega }(\frac {k }{\delta ^{2}})\).

  2.

    \(107548D\cdot \frac {k m_{c}}{\delta ^{2}}<d(\hat {a}^{\prime },\hat {o}^{\prime })\):

    We show that the condition of Lemma 15 applies:

    $$\begin{array}{rrcl} & d(\hat{a}^{\prime},\hat{o}^{\prime}) & \leq & d(a^{*'},\hat{o}^{\prime}) \\ & & \leq & d(a^{*'},a^{*}) + d(a^{*},r^{\prime}) + d(r^{\prime},\hat{o}^{\prime}) \\ & & \leq & m_{c} + d(a^{*},r^{\prime}) + 2\cdot\frac{\delta}{48}\cdot d(o^{*'},o^{*a^{\prime}}) \\ & & \leq & m_{c} + d(a^{*},r^{\prime}) + \frac{2}{47}\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) \\ \Leftrightarrow & \frac{45}{47}\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) - m_{c} & \leq & d(a^{*},r^{\prime}) \\ \Rightarrow & 102970D\frac{k m_{c}}{\delta^{2}} & \leq & d(a^{*},r^{\prime}) \end{array}$$

    Hence the lemma gives us

    $$\begin{array}{rcl} {\varDelta}\phi & \leq & 4\cdot \frac{1}{\delta m_{s}}\left( d(\hat{a}^{\prime},\hat{o}^{\prime})^{2} - d(\hat{a},\hat{o})^{2}\right) \\ & \leq & 4\cdot\frac{1}{\delta m_{s}}\left( d(\hat{a}^{\prime},\hat{o}^{\prime})^{2} - (d(\hat{a}^{\prime},\hat{o}^{\prime})+\frac{\delta}{8}m_{s})^{2}\right) \\ & \leq & -d(\hat{a}^{\prime},\hat{o}^{\prime}). \end{array}$$

    Furthermore, we have

    $$\begin{array}{rcl} C_{Alg} & \leq & d(\hat{a}^{\prime},r^{\prime}) + D\cdot{\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + d(\hat{o}^{\prime},o^{*'}) + d(o^{*'},r^{\prime}) + D\cdot{\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + (1+\frac{\delta}{48})\cdot d(o^{*'},r^{\prime}) + D\cdot{\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \end{array}$$

    and \({\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} -D\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\) due to Lemma 14. In total, we get \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot C_{\mathcal {K}} + 2\cdot d(o^{*'},r^{\prime })\).

The resulting competitive ratio \(Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot c(\mathcal {K}) + 2\) is less than the \(\mathcal {O}(\frac {k^{2}}{\delta ^{3}})\cdot \frac {m_{c}}{\delta m_{s}} + Y\cdot \frac {m_{c}}{\delta m_{s}}\cdot c(\mathcal {K})\) bound from the former set of cases. Accounting for the loss due to the transformation of the simulated k-Page Migration algorithm, we obtain the following upper bound:

Theorem 8

If mc ≥ (1 + δ)ms, the algorithm WMS is \(\mathcal {O}(\frac {1}{\delta ^{4}}\cdot k^{2}\cdot \frac {m_{c}}{ m_{s}} +\frac {1}{\delta ^{3}}\cdot k^{2} \cdot \frac {m_{c}}{ m_{s}}\cdot c(\mathcal {K}))\)-competitive, where \(c(\mathcal {K})\) is the competitive ratio of the simulated k-Page Migration algorithm \(\mathcal {K}\).

6 Improved Competitiveness on the Line

In this section, we show how to use our offline abstraction from Section 4.2.2 to analyze an algorithm not based on the simulation of an existing k-Page Migration algorithm. We take the well-known Double Coverage algorithm for the line and adapt it to our setting with restricted movement. Note that we will only state the algorithm and its analysis for D = 1. It is easy to see how to extend it to an arbitrary D ≥ 1.

Our algorithm Restricted Double Coverage (RDC) works as follows:

Let a1,…,ak be the servers ordered by their position on the line from left to right. If the new request \(r^{\prime }\) is to the left of a1 or to the right of ak, move the respective server ai a distance of \(\min \limits \{d(a_{i},r^{\prime }),(1+\delta )m_{s}\}\) towards \(r^{\prime }\). Otherwise, \(r^{\prime }\) is located between two servers ai and ai+1. In this case, let aj be a closest server to \(r^{\prime }\). Move both ai and ai+1 a distance of \(\min \limits \{d(a_{j},r^{\prime }),(1+\delta )m_{s}\}\) towards \(r^{\prime }\).
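One step of this rule can be sketched as follows. This is an illustrative implementation of ours for D = 1 (server positions as floats on the line), not code from the paper:

```python
def rdc_step(servers, r, m_s, delta):
    """One step of Restricted Double Coverage on the line (D = 1).

    `servers` holds the positions a_1 <= ... <= a_k; returns the new positions."""
    a = sorted(servers)
    cap = (1 + delta) * m_s                    # movement limit per step
    if r <= a[0]:                              # request left of all servers
        a[0] -= min(a[0] - r, cap)
    elif r >= a[-1]:                           # request right of all servers
        a[-1] += min(r - a[-1], cap)
    else:                                      # a[i] <= r < a[i+1]
        i = max(j for j in range(len(a)) if a[j] <= r)
        # Both neighbors move the distance of the closest one, capped at (1+delta)*m_s.
        step = min(min(r - a[i], a[i + 1] - r), cap)
        a[i] += step
        a[i + 1] -= step
    return a

# Request between the two servers: both move 1.5 = (1+0.5)*1.0 towards r = 4.
after_inner = rdc_step([0.0, 10.0], 4.0, m_s=1.0, delta=0.5)
# Request to the left of all servers: only the leftmost server moves.
after_outer = rdc_step([2.0, 5.0], 0.0, m_s=1.0, delta=0.5)
```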

The following analysis, specifically the potential function ψ and its use, is adapted from [11]. The overall structure is the same as in the previous sections.

We use the abstraction \(\hat {o}\) from Proposition 2 and, as before, the potential

$$\phi:=\left\{\begin{array}{ll} 4\cdot d(\hat{a},\hat{o}) & \text{ if } d(\hat{a},\hat{o})\leq 107548\cdot \frac{k m_{c}}{\delta^{2}} \\ 4\cdot \frac{1}{\delta m_{s}}d(\hat{a},\hat{o})^{2} + A & \text{ if } 107548\cdot \frac{k m_{c}}{\delta^{2}}<d(\hat{a},\hat{o}) \end{array}\right.$$

with \(A:=4\cdot (107548\frac {k m_{c}}{\delta ^{2}} - \frac {1}{\delta m_{s}}(107548\frac {k m_{c}}{\delta ^{2}})^{2})\).

To handle shorter distances to the request, we use an adaptation of the potential used in the proof for DC:

$$\psi:=X\cdot\frac{m_{c}}{m_{s}}\sum\limits_{i<j}d(a_{i},a_{j}) + Y\cdot \frac{m_{c}}{m_{s}}\sum\limits_{i=1}^{k}d(a_{i},o_{i})$$

where both the online servers ai and the optimal servers oi are always sorted from left to right. It is easy to see that every offline solution can be transformed such that the servers never change their ordering. We choose \(X={\varTheta }(\frac {k }{\delta ^{2}}),Y={\varTheta }(\frac {k^{2} }{\delta ^{2}})\) to be sufficiently large.
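For concreteness, ψ for a line configuration can be evaluated directly; the helper below is a sketch of ours (names and parameter values are assumptions, not from the paper):

```python
def dc_potential(a, o, X, Y, m_c, m_s):
    """Potential psi for RDC on the line.

    Online servers `a` and offline servers `o` are sorted left to right,
    so pairing them index by index yields the crossing-free matching."""
    a, o = sorted(a), sorted(o)
    # First term: sum of pairwise distances between online servers.
    pair = sum(a[j] - a[i] for i in range(len(a)) for j in range(i + 1, len(a)))
    # Second term: matching distance between online and offline servers.
    match = sum(abs(ai - oi) for ai, oi in zip(a, o))
    return X * (m_c / m_s) * pair + Y * (m_c / m_s) * match

# Pairwise sum 2+5+3 = 10, matching sum 1+0+1 = 2, so psi = 10 + 2 = 12.
psi = dc_potential([0.0, 2.0, 5.0], [1.0, 2.0, 6.0], X=1, Y=1, m_c=1.0, m_s=1.0)
```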

Similar to the original analysis we obtain the following result:

Lemma 17

\({\varDelta }\psi \leq 2Y\frac {m_{c}}{ m_{s}}\cdot C_{Opt} - \min \limits \{\frac {1}{2}Y ,X\}\cdot \frac {m_{c}}{m_{s}}\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\).

Proof

We distinguish two major cases: First, assume the request is either to the left of a1 or to the right of ak. Both are analogous, hence we deal only with \(r^{\prime }\) being to the left of a1.

If \(o_{1}^{\prime }\) is not the closest server of the optimal solution to \(r^{\prime }\), then it is to the left of \(r^{\prime }\) and hence \(d(o_{1}^{\prime },a_{1}^{\prime })-d(o_{1}^{\prime },a_{1})\leq - d(a_{1},a_{1}^{\prime })\). Otherwise,

$$\begin{array}{rcl} d(o_{1}^{\prime},a_{1}^{\prime})-d(o_{1}^{\prime},a_{1}) & \leq & d(o_{1}^{\prime},r^{\prime}) + d(a_{1}^{\prime},r^{\prime}) - (d(a_{1},r^{\prime})-d(r^{\prime},o_{1}^{\prime})) \\ & \leq & 2d(o_{1}^{\prime},r^{\prime}) - d(a_{1},a_{1}^{\prime}). \end{array}$$

For the potential it therefore holds

$$\begin{array}{rcl} {\varDelta}\psi & \leq & X\cdot(k-1)\frac{m_{c}}{m_{s}}\cdot d(a_{1},a_{1}^{\prime}) + Y\frac{m_{c}}{m_{s}}\left( d(o_{1}^{\prime},a_{1}^{\prime})-d(o_{1}^{\prime},a_{1}) + \sum\limits_{i=1}^{k}d(o_{i},o_{i}^{\prime})\right) \\ & \leq & 2Y\cdot \frac{m_{c}}{m_{s}}\cdot C_{Opt} + X\cdot(k-1)\frac{m_{c}}{m_{s}}\cdot d(a_{1},a_{1}^{\prime}) - Y\cdot \frac{m_{c}}{ m_{s}}d(a_{1},a_{1}^{\prime}) \\ & \leq & 2Y\cdot \frac{m_{c}}{m_{s}}\cdot C_{Opt} - \frac{1}{2}Y\cdot \frac{m_{c}}{m_{s}}d(a_{1},a_{1}^{\prime}) \end{array}$$

for Y ≥ 2kX.

In the second major case, the request \(r^{\prime }\) is in between ai and ai+ 1. We assume that ai is closest to \(r^{\prime }\), the other case is analogous. Both servers move \(\min \limits \{d(a_{i},r^{\prime }),(1+\delta )m_{s}\}\) towards \(r^{\prime }\).

Consider first the term \(X\cdot \frac {m_{c}}{m_{s}}\sum \limits _{i<j}d(a_{i},a_{j})\). Servers ai and ai+ 1 decrease their distance by \(2d(a_{i},a_{i}^{\prime })\). For the servers right of ai+ 1, ai decreases the distance by \(d(a_{i},a_{i}^{\prime })\) and ai+ 1 moves away from the same servers by the same distance. The same argument applies for the servers left of ai. Hence \(X\cdot \frac {m_{c}}{m_{s}}\sum \limits _{i<j}d(a_{i}^{\prime },a_{j}^{\prime }) - X\cdot \frac {m_{c}}{m_{s}}\sum \limits _{i<j}d(a_{i},a_{j})\leq -2X\frac {m_{c}}{m_{s}}\cdot d(a_{i},a_{i}^{\prime })\).
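This bookkeeping can be checked numerically: when ai and ai+1 each move a distance step towards a request between them, the pairwise sum decreases by exactly 2 ⋅ step, since their movements relative to all other servers cancel. A small check with hypothetical positions (illustrative only):

```python
def pair_sum(a):
    """Sum of pairwise distances for sorted positions on the line."""
    return sum(a[j] - a[i] for i in range(len(a)) for j in range(i + 1, len(a)))

before = [0.0, 3.0, 7.0, 11.0]
step = 0.5                       # both inner servers move toward a request in (3, 7)
after = [0.0, 3.5, 6.5, 11.0]
decrease = pair_sum(before) - pair_sum(after)
# The decrease equals 2*step regardless of the outer servers' positions.
```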

Now consider the second term \(Y\cdot \frac {m_{c}}{m_{s}}\sum \limits _{i=1}^{k}d(a_{i},o_{i})\). If \(o_{i}^{\prime }\) is to the right of \(r^{\prime }\) or \(o_{i+1}^{\prime }\) is to the left of \(r^{\prime }\), then \(\sum \limits _{i=1}^{k}d(a_{i}^{\prime },o_{i}^{\prime }) - \sum \limits _{i=1}^{k}d(a_{i},o_{i}^{\prime })\leq 0\). Otherwise, let j ∈{i,i + 1} be such that \(o^{*'}=o_{j}^{\prime }\). Then

$$\begin{array}{rcl} d(o_{j}^{\prime},a_{j}^{\prime})-d(o_{j}^{\prime},a_{j}) & \leq & d(o_{j}^{\prime},r^{\prime}) + d(a_{j}^{\prime},r^{\prime}) - (d(a_{j},r^{\prime})-d(r^{\prime},o_{j}^{\prime})) \\ & \leq & 2d(o_{j}^{\prime},r^{\prime}) - d(a_{j},a_{j}^{\prime}). \end{array}$$

Since \(d(o_{\ell }^{\prime },a_{\ell }^{\prime })-d(o_{\ell }^{\prime },a_{\ell })\leq d(a_{\ell },a_{\ell }^{\prime })\), we have \(d(o_{i}^{\prime },a_{i}^{\prime })-d(o_{i}^{\prime },a_{i}) + d(o_{i+1}^{\prime },a_{i+1}^{\prime })-d(o_{i+1}^{\prime },a_{i+1})\leq 2d(o^{*'},r^{\prime })\).

Summarizing this case, we get

$$\begin{array}{rcl} {\varDelta}\psi & \leq & -2X\frac{m_{c}}{m_{s}}\cdot d(a_{i},a_{i}^{\prime}) + 2Y \frac{m_{c}}{m_{s}}\cdot d(o^{*'},r^{\prime}) + Y \frac{m_{c}}{m_{s}}\cdot\sum\limits_{i=1}^{k}d(o_{i},o_{i}^{\prime}) \\ & \leq & 2Y\frac{m_{c}}{m_{s}}\cdot C_{Opt} - X\cdot \frac{m_{c}}{m_{s}}{\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}). \end{array}$$

Now consider the case that \(r^{\prime }\notin inner(o^{*'})\). We have \(d(a^{*'},r^{\prime })\leq d(o^{*a^{\prime }},r^{\prime }) \leq d(o^{*'},o^{*a^{\prime }}) + d(o^{*'},r^{\prime }) \leq (\frac {48960k}{\delta ^{2}}+1)\cdot d(o^{*'},r^{\prime })\). The movement costs are canceled by Δψ as in Lemma 17. The increase of ϕ can be bounded as in Lemma 10. In all of the above, the competitive ratio is bounded by \(\mathcal {O}(\frac {k^{2}}{\delta ^{3}}\cdot \frac {m_{c}}{\delta m_{s}} + Y\cdot \frac {m_{c}}{m_{s}})\).

Finally, we consider the case \(r^{\prime }\in inner(o^{*'})\). In contrast to the previous sections, we can simplify the use of ϕ due to the line metric and the DC algorithm:

Lemma 18

If \(d(a^{*},r^{\prime })> 102970\frac {k m_{c}}{\delta ^{2}}\) and \(r^{\prime }\in inner(o^{*'})\), then \(d(\hat {a}^{\prime },\hat {o}^{\prime })-d(\hat {a},\hat {o})\leq -\frac {\delta }{2}m_{s}\).

Proof

Since we are on the line, it is guaranteed by the algorithm that the servers which move in the current time step move directly towards \(\hat {o}^{\prime }\), and hence we get \(d(a_{i}^{\prime },\hat {o}^{\prime })-d(a_{i},\hat {o}^{\prime })= -d(a_{i},a_{i}^{\prime })=-(1+\delta )m_{s}\).

In order to bound the movement of \(\hat {o}\), we need to show that \(d(o^{*},o^{*a}) \geq 2\cdot 51483\frac {km_{c}}{\delta ^{2}}\). We use

$$\begin{array}{rrcl} & d(a^{*},r^{\prime}) & \leq & m_{c} + d(a^{*'},r^{\prime}) \\ & & \leq & m_{c} + d(o^{*a^{\prime}},r^{\prime}) \\ & & \leq & m_{c} + (1 + \frac{\delta^{2}}{48960k}) \cdot d(o^{*'},o^{*a^{\prime}}) \\ \Leftrightarrow & \frac{1}{1 + \frac{\delta^{2}}{48960k}}(d(a^{*},r^{\prime})-m_{c}) & \leq & d(o^{*'},o^{*a^{\prime}}). \end{array}$$

The bound follows from \(d(a^{*},r^{\prime })> 102970\frac {k m_{c}}{\delta ^{2}}\).

From Proposition 2 we get \(d(\hat {o},\hat {o}^{\prime })\leq (1+\frac {\delta }{8})m_{s}\) and therefore

$$\begin{array}{rcl} d(\hat{a}^{\prime},\hat{o}^{\prime}) - d(\hat{a},\hat{o}) & \leq & -d(a_{i},a_{i}^{\prime}) + d(\hat{o},\hat{o}^{\prime}) \\ & \leq & -(1+\delta)m_{s} + (1+\frac{\delta}{8})m_{s} \\ & \leq & -\frac{\delta}{2}m_{s} \end{array}$$

where i is chosen such that ai is closest to \(\hat {o}^{\prime }\). □

Lemma 19

If \(r^{\prime }\in inner(o^{*'})\), then \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq 4Y\frac {m_{c}}{m_{s}}\cdot C_{Opt}\).

Proof

  1.

    \(d(\hat {a}^{\prime },\hat {o}^{\prime })\leq 107548\cdot \frac {k m_{c}}{\delta ^{2}}\): We use

    $$\begin{array}{rcl} d(a^{*'},r^{\prime}) & \leq & d(\hat{a}^{\prime},r^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + d(\hat{o}^{\prime},r^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + 2\cdot\frac{\delta}{48}\cdot d(o^{*'},o^{*a^{\prime}}) \\ & \leq & (1 + \frac{2\delta}{47})\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) \end{array}$$

    to get \(C_{Alg} + {\varDelta }\phi \leq 6\cdot d(\hat {a}^{\prime },\hat {o}^{\prime }) + {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\). Furthermore,

    $$\begin{array}{rcl} {\varDelta}\psi & \leq & 2Y\frac{m_{c}}{ m_{s}}\cdot C_{Opt} - \min\{\frac{1}{2}Y ,X\}\cdot\frac{m_{c}}{m_{s}}\cdot{\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \\ & \leq & 2Y\frac{m_{c}}{m_{s}}\cdot C_{Opt} - {\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) - (\min\{\frac{1}{2}Y ,X\}-1)\cdot m_{c} \end{array}$$

    due to Lemma 17. In total, \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq 2Y\frac {m_{c}}{m_{s}}\cdot C_{Opt}\) with \(X,Y\geq {\varOmega }(\frac {k }{\delta ^{2}})\).

  2.

    \(107548\cdot \frac {k m_{c}}{\delta ^{2}}<d(\hat {a}^{\prime },\hat {o}^{\prime })\): We show that the condition of Lemma 18 applies:

    $$\begin{array}{rrcl} & d(\hat{a}^{\prime},\hat{o}^{\prime}) & \leq & d(a^{*'},\hat{o}^{\prime}) \\ & & \leq & d(a^{*'},a^{*}) + d(a^{*},r^{\prime}) + d(r^{\prime},\hat{o}^{\prime}) \\ & & \leq & m_{c} + d(a^{*},r^{\prime}) + 2\cdot\frac{\delta}{48}\cdot d(o^{*'},o^{*a^{\prime}}) \\ & & \leq & m_{c} + d(a^{*},r^{\prime}) + \frac{2}{47}\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) \\ \Leftrightarrow & \frac{45}{47}\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}) - m_{c} & \leq & d(a^{*},r^{\prime}) \\ \Rightarrow & 102970\frac{k m_{c}}{\delta^{2}} & \leq & d(a^{*},r^{\prime}). \end{array}$$

    Hence the lemma gives us

    $$\begin{array}{rcl} {\varDelta}\phi & \leq & 4\cdot \frac{1}{\delta m_{s}}\left( d(\hat{a}^{\prime},\hat{o}^{\prime})^{2} - d(\hat{a},\hat{o})^{2}\right) \\ & \leq & 4\cdot\frac{1}{\delta m_{s}}\left( d(\hat{a}^{\prime},\hat{o}^{\prime})^{2} - (d(\hat{a}^{\prime},\hat{o}^{\prime})+\frac{\delta}{2}m_{s})^{2}\right) \\ & \leq & -4\cdot d(\hat{a}^{\prime},\hat{o}^{\prime}). \end{array}$$

    Furthermore, we have

    $$\begin{array}{rcl} C_{Alg} & \leq & d(\hat{a}^{\prime},r^{\prime}) + {\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + d(\hat{o}^{\prime},o^{*'}) + d(o^{*'},r^{\prime}) + {\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \\ & \leq & d(\hat{a}^{\prime},\hat{o}^{\prime}) + (1+\frac{\delta}{48})\cdot d(o^{*'},r^{\prime}) + {\sum}_{i=1}^{k}d(a_{i},a_{i}^{\prime}) \end{array}$$

    and \({\varDelta }\psi \leq 2Y\frac {m_{c}}{ m_{s}}\cdot C_{Opt} - \min \limits \{\frac {1}{2}Y ,X\}\cdot \frac {m_{c}}{m_{s}}\cdot {\sum }_{i=1}^{k}d(a_{i},a_{i}^{\prime })\) due to Lemma 17. In total, we get \(C_{Alg} + {\varDelta }\phi + {\varDelta }\psi \leq 4Y\frac {m_{c}}{m_{s}}\cdot C_{Opt}\).

The resulting competitive ratio \(\mathcal {O}(1)\cdot Y\cdot \frac {m_{c}}{m_{s}}\) is less than the \(\mathcal {O}(\frac {k^{2}}{\delta ^{3}}\cdot \frac {m_{c}}{\delta m_{s}} + Y\cdot \frac {m_{c}}{m_{s}})\) bound from the former set of cases. We therefore conclude:

Theorem 9

The RDC algorithm is \(\mathcal {O}(\frac {k^{2}}{\delta ^{4}}\cdot \frac {m_{c}}{m_{s}})\)-competitive.

7 Open Problems

The gap between the upper and lower bound is closely related to the question of the deterministic upper bound for k-Page Migration: Not only would an \(\mathcal {O}(k)\)-competitive algorithm for k-Page Migration directly improve the bound for D > 1, it could also give an idea of how to improve the analysis of the greedy step in our algorithm, such that the costly transformation of the simulated algorithm would no longer be needed. This would potentially reduce the upper bound by another factor of k.

We can argue from previous work that there is a lower bound of Ω(1/δ) when the dimension of the space is as large as k. It would be interesting to explore the problem without this resource augmentation on the line, and whether a competitive algorithm exists in this case for k > 1. For higher dimensions, there should intuitively be a lower bound depending on 1/δ but decreasing with k, as there are more initial places the algorithm can cover as a “guess”, and hence smaller distances between the adversary and the algorithm can be achieved at the time the target position of the request is revealed.

In Section 6 we have shown how to circumvent the use of the transformation by stating an explicit algorithm for the problem. For the original k-Server problem, the DC algorithm also works for trees. Similarly, we believe this result is also extendable to continuous trees, where the servers can occupy any place on the paths within the tree. Furthermore, it would be interesting to adapt other algorithms as well if possible. If one wants to use the scheme of analysis we used here, there needs to be a potential function for the algorithm which cancels the cost in every time step in which the distance to the requests is too small for our offline abstraction to be applied.

If Ω(k2) is a lower bound for k-Page Migration, this carries over to our model as well. We believe that the main algorithmic idea is suitable to reach an asymptotically optimal competitive ratio, but it remains an open problem to prove this. The high constants in our proofs are partially a result of simplifying the argumentation in certain segments of the proofs. There is, however, also great potential in reducing the constants by extending the potential analysis to operate over longer phases instead of doing a step-by-step analysis.

If we allow randomization, we can get k-Page Migration algorithms with polylogarithmic competitive ratio from [4]. As discussed in the related work section, the question of the best possible competitive ratio of randomized algorithms for the k-Server problem is still open; however, we know that a result polylogarithmic in k can be achieved [17]. As our construction is entirely deterministic, apart from potentially the simulated algorithm, it would be interesting to see whether randomization can be used to significantly improve the competitive ratio. The desired result would be an algorithm with a competitive ratio polylogarithmic in k.

Finally, it would be interesting to know whether a competitive ratio independent of time can be achieved if, instead of restricting the distance between consecutive requests, we analyzed the problem under a weak adversary who has fewer servers than the online algorithm. This extension is also considered for the classical k-Server problem [15], where the question of the competitive ratio is likewise still unresolved. In our problem, this extension could not only replace the restriction to the parameter mc, but also reduce the competitive ratio with respect to the number of servers. For Euclidean metrics, not much is known in this regard, with only a recent bound showing that no matter how large the difference in the number of servers, the dependence on the number of optimal servers can never be removed [6].