Toward Online Mobile Facility Location on General Metrics

We introduce an online variant of mobile facility location (MFL), a problem introduced by Demaine et al. (SODA 2007, pp. 258–267). We call this new problem online mobile facility location (OMFL). In the OMFL problem, initially, we are given a set of k mobile facilities with their starting locations. One by one, requests are added. After each request arrives, one can make some changes to the facility locations before the subsequent request arrives. Each request is always assigned to the nearest facility. The cost of this assignment is the distance from the request to the facility. The objective is to minimize the total cost, which consists of the relocation cost of facilities and the distance cost of requests to their nearest facilities. We provide a lower bound for the OMFL problem that holds even on uniform metrics. A natural approach to solving the OMFL problem on general metric spaces is to utilize hierarchically well-separated trees (HSTs) and directly solve the OMFL problem on HSTs. In this paper, we provide the first step in this direction by solving a generalized variant of the OMFL problem on uniform metrics that we call G-OMFL. We devise a simple deterministic online algorithm and provide a tight analysis for the algorithm. The second step remains an open question. Inspired by the k-server problem, we introduce a new variant of the OMFL problem that focuses solely on minimizing movement cost. We refer to this variant as M-OMFL. Additionally, we provide a lower bound for M-OMFL that is applicable even on uniform metrics.


Introduction
On-demand mobile distribution hubs are emerging as a popular solution for businesses looking to meet growing demand and provide efficient, long-term access to warehouse space for their customers. These mobile facilities can be relocated to different locations within a region, based on market needs, and can be used for various purposes such as delivering goods, providing services, and improving network coverage. To reduce different costs, applications may need to move facilities closer to the end user. This means that facilities should always stay within a reasonable distance from customers who have ongoing service demands. However, relocating facilities also involves a cost. An efficient solution could dynamically adjust the locations of these mobile facilities in response to new customers, ensuring they are always within a reasonable distance of where they are needed. This results in a trade-off between two types of costs: the assignment cost, which is the sum of distances between customers and their nearest facilities at the current time, and the total movement cost, which is the total cost of relocating facilities between locations so far. The goal is to minimize the total cost of serving customers that arrive over time and request long-term service, which is the sum of these two costs.
In this paper, we introduce the online mobile facility location (OMFL) problem. We present two applications from different domains. The first one is related to the distribution of goods from mobile warehouses to retail stores. Suppose a company has a fixed number of mobile warehouses, initially located at various points within a region. The company serves this region, where the distances between points represent the cost of shipping goods or relocating warehouses. As demand for the company's products grows, new retail stores are built, resulting in new requests. These requests represent new retail stores that need to be served by the company. In this scenario, we do not consider the individual demands of each retail store. Instead, we assume that these stores typically have demands for goods that need to be shipped from mobile warehouses.
In the second application, we consider a telecommunications company that provides mobile broadband services to households within a region using movable internet providers. The company has a fixed number of these providers, which are initially positioned at various locations throughout the region. The distances between these points represent the signal strength or coverage between them. As the population in the region gradually grows and new households are established or connected, there is an increase in demand for the company's services, resulting in new requests for continuous and long-term mobile broadband coverage.
Paper Outline In the rest of this section, we introduce the OMFL problem and its two variants, G-OMFL and M-OMFL. Each subsection defines the corresponding problem, provides results, and reviews related work. In Section 2, we analyze G-OMFL and provide an upper bound. Section 3 presents our lower bound analysis for OMFL, showing that our analysis for G-OMFL is almost tight since OMFL is a special case of G-OMFL. Section 4 offers our lower bound analysis for M-OMFL. Finally, Section 5 concludes the paper and poses two open questions. Appendix A defines HSTs and discusses their use as a tool for solving online problems.

Online Mobile Facility Location (OMFL)
The mobile facility location (MFL) problem was first introduced by Demaine et al. [11] and later studied by Friggstad and Salavatipour [24]. Many MFL scenarios, such as the two applications presented above, are online: requests arrive one at a time and require long-term service, and the end of the request sequence is unknown. In MFL, each facility and request has a starting location in a metric space, and the goal is to find a destination point for each facility and request such that every request is assigned to a point that is the destination of some facility. The objective is to minimize either the total or the maximum distance between facilities and requests and their destinations. In OMFL, facilities can move more than once as new requests arrive online, while in MFL, facilities move only once, offline. However, the assignment cost in both problems is based on the final configuration of facilities. In the online setting, future requests are unknown, so any facility configuration at a given time could be the final one. The intermediate movements of facilities can be interpreted as the "learning cost" that an online algorithm must pay, compared to an offline algorithm that solves MFL, to ensure that facilities are well positioned at all times. Therefore, OMFL can be seen as an online adaptation of MFL, where the online algorithm must balance the trade-off between moving facilities closer to requests and minimizing relocation costs.

Problem Definition
We define an instance of the OMFL problem as follows. We are given an arbitrary finite metric space M = (V, d) with |V| = n and a non-negative, symmetric distance function d : V × V → R+ that satisfies the triangle inequality. Requests are issued at points, one at a time. There is a set of k mobile facilities. We define a configuration (at time t = 1, 2, ...) to be a function S(t) : [k] → V that specifies which of the facilities {1, ..., k} is located at which point at time t, i.e., S_j(t) denotes the point that hosts facility j at time t. At each time t = 1, 2, 3, ..., exactly one request is placed, and J(t) denotes the point at which the request at time t is placed.
Formally, "serving all requests" at time t is a function I : [t] → [k], which assigns any request i ∈ [t] to one of the k facilities. We assume there is no bound on the number of requests that can be assigned to the same facility. If request i is assigned to facility j at time t, then this induces the cost d(J(i), S_j(t)), denoted as the assignment cost of request i at time t. The overall assignment cost of a configuration F ⊂ V at time t, denoted by S_t(F), is the sum of the assignment costs of all requests at this time, i.e., S_t(F) = Σ_{i=1}^{t} d(J(i), S_{I(i)}(t)). In order to keep the overall assignment cost small, an algorithm can move the facilities between the points, which implies some additional cost, called the movement cost. The movement cost equals the distance between the points, i.e., if facility j is at point v at time t and at point v′ at time t + 1, the cost of moving facility j at time t + 1 equals the distance d(v, v′).
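As a concrete illustration, the two cost components can be sketched as follows. This is a minimal Python sketch; the metric, request points, and configurations below are hypothetical examples, not from the paper, and each request is assigned to its nearest facility, which minimizes the assignment cost.

```python
def assignment_cost(d, requests, config):
    """Overall assignment cost at the current time: each request is
    assigned to its nearest facility in the current configuration."""
    return sum(min(d[r][f] for f in config) for r in requests)

def movement_cost(d, old_config, new_config):
    """Cost of relocating each facility j from old_config[j] to new_config[j]."""
    return sum(d[u][v] for u, v in zip(old_config, new_config))

# Hypothetical 3-point metric (symmetric, satisfies the triangle inequality).
d = {
    'a': {'a': 0, 'b': 2, 'c': 3},
    'b': {'a': 2, 'b': 0, 'c': 1},
    'c': {'a': 3, 'b': 1, 'c': 0},
}
requests = ['a', 'c']       # J(1) = a, J(2) = c
config = ['b']              # k = 1 facility, currently at point b
print(assignment_cost(d, requests, config))   # d(a,b) + d(c,b) = 2 + 1 = 3
print(movement_cost(d, ['a'], ['b']))         # facility moved from a to b: 2
```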
Feasible Solution At any given time t, any facility configuration at time t is considered a feasible solution. This means that even an algorithm that does not perform any actions can output the initial facility configuration as a feasible solution. However, this solution may not necessarily be efficient, as it may result in a high assignment cost.
The total cost of any algorithm A at time t is the sum of the total movement cost by time t and the overall assignment cost at time t, defined as

cost_A(t) := M^A_t + S^A_t,   (1)

where M^A_t denotes the total movement cost of A by time t and S^A_t denotes the overall assignment cost of A at time t.

Remark 1.1. The competitive ratio for any online algorithm at any given time t ≥ 1 is defined as the total cost of the online algorithm at time t divided by the cost of the optimal MFL solution at time t. It is important to note that an MFL algorithm can wait until time t and perform all necessary facility movements at that time. However, the online algorithm does not know the value of t in advance. As a result, if the online algorithm aims to guarantee a competitive ratio γ against the optimal MFL solution at any given time t, it must guarantee γ at any time t′ ∈ [t] and may relocate facilities at any time t′, since it does not know the value of t in advance.
Definition 1.1 (Optimal Assignment Cost). The optimal assignment cost at any time t, denoted by S*_t, is the lowest total assignment cost that can be achieved by an "optimal configuration" of facilities at time t. Mathematically,

S*_t := min_{F ⊂ V, |F| = k} S_t(F),

where S_t(F) is the sum of the assignment costs of all requests at time t given the configuration F of facilities at time t. Further, F* ∈ arg min_{F ⊂ V, |F| = k} S_t(F).

Remark 1.2. Due to the trade-off in the cost function (1), an optimal offline algorithm O may not move facilities to the optimal configuration. This means that the optimal offline algorithm may lose more in movement cost than it gains in assignment cost if it moves its facilities to the configuration that minimizes the total assignment cost. Therefore, the total assignment cost of an optimal offline algorithm at any time t, i.e., S^O_t, may not equal the optimal assignment cost at t, i.e., S*_t.
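The optimal assignment cost of Definition 1.1 can be made concrete by brute force over all k-subsets of points. The following sketch uses a hypothetical example metric and is exponential in k, for illustration only:

```python
from itertools import combinations

def optimal_assignment_cost(d, points, requests, k):
    """S*_t: minimum overall assignment cost over all placements of k
    facilities on distinct points (brute force, for illustration only)."""
    return min(
        sum(min(d[r][f] for f in F) for r in requests)
        for F in combinations(points, k)
    )

# Hypothetical 3-point metric and request sequence.
d = {
    'a': {'a': 0, 'b': 2, 'c': 3},
    'b': {'a': 2, 'b': 0, 'c': 1},
    'c': {'a': 3, 'b': 1, 'c': 0},
}
print(optimal_assignment_cost(d, list(d), ['a', 'c'], k=1))  # 3
print(optimal_assignment_cost(d, list(d), ['a', 'c'], k=2))  # 0 (facilities at a and c)
```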

Our Results
We provide a lower bound on the achievable competitive ratio. Moreover, the lower bound holds even for the OMFL problem on uniform metric spaces, where all pairwise distances are equal. The lower bound theorem essentially says that if any online algorithm ON wants to guarantee a small multiplicative competitive ratio, ON has to tolerate a relatively large additive term. Basically, if ON wants to keep the additive term within a bound of β, the multiplicative competitive ratio becomes at least 1 + Ω(k/β).
Theorem 1.1. Let O denote an optimal offline algorithm. Consider any deterministic online algorithm ON. Further, assume that ON guarantees that at all times t > 0, the additive difference between the assignment cost of ON and the optimal assignment cost at time t is less than β. Then, there exist an execution and a time t_0 > 0 such that the total cost of ON can be lower bounded as

cost_ON(t_0) ≥ (1 + Ω(k/β)) · cost_O(t_0).

Related Work
MFL is a generalization of the k-facility location problem [12], which itself generalizes the facility location and k-median problems [1,30,14]. Demaine et al. [11] provided a 2-approximation for the maximum objective, which is optimal unless P = NP. Friggstad and Salavatipour [24] provided an 8-approximation for the total objective. Consider the following abstract online problem: The problem is defined on a graph or metric space with resources that can be either fixed (potentially any point of the metric can be opened as a resource) or mobile (typically k resource points are initially given). Requests are revealed one by one and must be served by a resource. A request may need to be served by moving a mobile resource to the requested point. Alternatively, a model may relax this requirement and allow a request to be served remotely by either a fixed or mobile resource. The offline setting for resource allocation problems does not clearly capture the difference between real-world applications where requests need long-term service and those where requests need one-time service at the time of arrival. However, the online setting can define this difference more clearly. Requests may need either long-term service or to be served only at the time they arrive. The assignment cost of a request r_i that arrives at time i for one-time service is equal to the distance from r_i to f_i(i), where f_i(t) represents the nearest facility to r_i at any time t ≥ i. For long-term service, there are different cases. Consider the following two cases: 1) The "fixed long-term" case, where reassignment is not an option: the assignment cost of r_i is the distance from r_i to the point that hosts f_i(i) at any time t ≥ i.
2) The "dynamic long-term" case, where reassignment is possible: the assignment cost of r_i equals the distance from r_i to f_i(t) at any time t ≥ i. The movement cost is the total distance resources travel. In some models, moving a resource over some distance is more expensive than serving a request over the same distance; we call this an "expensive move". By varying these criteria, several classic online problems are defined. Some aim to minimize only the total movement cost, while others involve a trade-off between the assignment cost and either the total opening cost or the total movement cost. The goal is to find an optimal balance between these costs.
OMFL is an online problem that falls under the umbrella of this abstract problem, along with other classic online problems such as page migration, introduced by Black and Sleator [8], online facility location (OFL), introduced by Meyerson [34], and k-server, introduced by Manasse et al. [32]. For uniform metric spaces, the k-server problem is equivalent to the paging problem [35]. Table 1 summarizes the similarities and differences between OMFL and these classic online problems, as well as some other related work.
In the online version of the k-facility reallocation problem, multiple customer points request service at each time step, instead of one request at a time as in the abstract online problem. Fotakis et al. [23] present a constant-competitive algorithm for this setting when k = 2 on the line.
Almost all prior work has investigated the page migration problem for only a single page (i.e., k = 1). A recent paper introduces a variant of page migration in the Euclidean space (of arbitrary dimension) that limits the distance the facility is allowed to move in a time step [16]. [17] extends [16] to k mobile facilities. The best deterministic and randomized algorithms for the page migration problem have competitive ratios of 4 [7] and 3 [36], respectively. There are methods to transform any k-server algorithm into deterministic and randomized k-page migration algorithms with competitive ratios of O(c^2) and O(c), respectively, where c is the competitive ratio of the k-server algorithm [5]; these transformations are discussed in detail in [6].
Fotakis [21] shows that the competitive ratio for OFL is Θ(log m/log log m), where m is the total number of requests. A variant of OFL is the incremental facility location problem [20,22], where it is possible to merge clusters. A relaxed variant of incremental facility location, investigating different models in which an online algorithm may correct the positions of its open facilities, is presented in [13,18]. While the OMFL problem is an online variant of MFL, the models proposed by [13,18] are mobile variants of OFL. As such, there are key differences between OMFL and these models. In contrast to the OMFL problem, [13] does not charge any cost for moving a facility. This is a major limitation of their model. They present an algorithm which is 2-competitive. Afterward, a model was proposed by Feldkord et al. [18] where an online algorithm may move its open facilities, but instead of this being free, it incurs an "expensive movement" cost. Further, they consider two models where the movement of the open facilities can be either arbitrary or limited to some constant in each time step. They achieve the same competitiveness on the line for both one-time and fixed long-term services. In the scenario where movement is unrestricted, the expected competitive ratio achieved is O(log D/log log D), where D > 1 represents the multiplicative factor that increases the cost of movement relative to the distance traveled. In the case of limiting the movement of an open facility to some constant δ in each time step, they achieve an expected competitive ratio that depends on δ and the cost of opening a new facility. They showed that their results are asymptotically tight on the line. For the one-time service, they extend their result to the Euclidean space of arbitrary dimension, where they achieve a competitive ratio with an additional additive term O(√k), where k is the number of facilities in the optimal solution.
Sections 1.2.3 and 1.3.3 review the similarities and differences between OMFL variants, paging, and the k-server problem. M-OMFL is more closely related to paging when it is studied on uniform metrics, while some works on the randomized k-server conjecture are more closely related to G-OMFL.

G-OMFL
According to Theorem 1.1, if a company aims to keep its facilities close to an optimal configuration in response to requests at any given time, a multiplicative competitive ratio of Ω(k) is inevitable. This motivates us to explore the role of randomization in the OMFL problem. Following the approach of [9,3], who used randomization to achieve a polylogarithmic competitive ratio for the k-server problem, we also consider solving OMFL on HSTs. In [3], it is shown that by exploiting the randomized low-stretch hierarchical tree decomposition of [15], it is possible to obtain a polylogarithmic competitive ratio for the k-server problem. Utilizing HSTs as a tool for solving online problems is a common approach [9,3,27,2] (see Appendix A for more details about HSTs and the approach). HSTs are a useful tool for solving some online problems, as they have a simple structure that allows reducing the problem on an HST to a more general problem on a uniform metric [9,3]. This leads us to define a generalized variant of OMFL on uniform metrics as the first step toward solving OMFL on HSTs and general metrics.

Problem Definition
We adapt the OMFL problem statement to accommodate G-OMFL on uniform metrics and introduce new notations accordingly. We are given a set V of n nodes, and there is a set of k facilities. Further, requests arrive one at a time. We assume that at time t ≥ 1, request t arrives at node v(t) ∈ V. For a node v ∈ V, let r_{v,t} be the number of requests at node v after t requests have arrived, i.e., r_{v,t} := |{i ≤ t : v(i) = v}|.
In order to keep the total service cost small, an online algorithm can move the facilities between the nodes (if necessary, for answering one new request, we allow an algorithm to also move more than one facility). We define a configuration of facilities by integers f_v ∈ N_0 for each v ∈ V such that Σ_{v∈V} f_v = k. We describe such a configuration by a set of pairs as F := {(v, f_v) : v ∈ V}. The initial configuration is denoted by F_0.

Feasible Solution
The feasible solution for G-OMFL is the same as for OMFL, as defined in Section 1.1.1. This means that any facility configuration at any time step is a feasible solution.
For any online algorithm ON, we denote the sequence of facility configurations until time t by F^{ON}_t := {F^{ON}(i) : i ∈ [0, t]}, where F^{ON}(t) is the configuration after reacting to the arrival of request t and F^{ON}(0) = F_0.
Service Cost We implicitly assume that if a node v has some facilities, all requests at v are served by these facilities. Depending on the number of facilities and the number of requests at a node v ∈ V, an algorithm has to pay some service cost to serve the requests located at v. This service cost of node v is defined by a service cost function σ_v such that σ_v(x, y) ≥ 0 is the cost for serving y requests if there are x facilities at node v. For convenience, for t ≥ 1, we also define σ_{v,t}(x) := σ_v(x, r_{v,t}) to be the service cost with x facilities at node v at time t. For some configuration F, we denote the total service cost at time t by

S_t(F) := Σ_{v∈V} σ_{v,t}(f_v).

The service cost of any algorithm A at time t is denoted by S^A_t := S_t(F^A(t)). Let S_t(F*) equal S*_t for any optimal configuration F* at time t, i.e., S_t(F*) = S*_t (see Definition 1.1 and Remark 1.2 for S*_t).

Movement Cost We define the movement cost M^A_t of a given algorithm A to be the total number of facility movements by time t. Generally, for two configurations F and F′, we define the distance d(F, F′) between the two configurations as follows:

d(F, F′) := (1/2) Σ_{v∈V} |f_v − f′_v|.   (2)

The distance d(F, F′) is equal to the number of movements that are needed to get from configuration F to configuration F′ (or vice versa). Based on the definition of d given by (2), we can express the movement cost of an online algorithm ON with the sequence of facility configurations F^{ON}_t as M^{ON}_t = Σ_{i=1}^{t} d(F^{ON}(i − 1), F^{ON}(i)). Regarding the movement cost of an offline algorithm for G-OMFL at any given time t, it is similar to the MFL solution in that an offline algorithm for G-OMFL only performs movements once, since it knows the request sequence until time t in advance. Therefore, at any given time t, the movement cost of any optimal offline algorithm, denoted by O, is M^O_t = d(F_0, F^O(t)).

Service Cost Function Properties The service cost function σ has to satisfy a number of natural properties. First of all, for every v ∈ V, σ_v(x, y) has to be monotonically decreasing in the number of facilities x that are placed at node v and monotonically increasing in the number of requests y at v.
These two properties imply that adding more facilities at a node will reduce the total service cost, while adding more requests at a node will increase the total service cost. Naturally, this is because more facilities can serve a fixed number of requests at a lower cost, while serving more requests with the same number of facilities costs more.
Further, the effect of adding additional facilities to a node v should become smaller with the number of facilities (convexity in x), and it should not decrease if the number of requests gets larger. Therefore, for all v ∈ V and all x, y ∈ N_0, we have

σ_v(x, y) − σ_v(x + 1, y) ≥ σ_v(x + 1, y) − σ_v(x + 2, y)   (3)

and

σ_v(x, y + 1) − σ_v(x + 1, y + 1) ≥ σ_v(x, y) − σ_v(x + 1, y).   (4)

The third property means that if one adds more facilities at node v, the total service cost at that node will decrease, but the decrease becomes smaller as one adds more facilities. This is because there are diminishing returns to adding more facilities: the first few facilities have a big impact on reducing the service cost, but after a certain point, the impact becomes smaller and smaller. The fourth property means that the decrease in service cost from adding one facility when there are more requests should be greater than or equal to the decrease from adding one facility when there are fewer requests. These properties ensure that the service cost function σ is well-behaved.
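The configuration distance of Equation (2) and the induced movement cost can be sketched as follows, with configurations represented as per-node facility counts. The example configurations are hypothetical:

```python
def config_distance(f, f_prime):
    """Number of facility movements needed to get from configuration f to
    f_prime (Equation (2)): half the total absolute difference of the
    per-node facility counts, assuming both configurations place the same
    total number k of facilities."""
    assert sum(f.values()) == sum(f_prime.values())
    return sum(abs(f[v] - f_prime[v]) for v in f) // 2

def total_movements(configs):
    """Total movement cost of a sequence of configurations F(0), ..., F(t)."""
    return sum(config_distance(a, b) for a, b in zip(configs, configs[1:]))

# Hypothetical sequence of configurations on nodes u, v, w with k = 3.
F0 = {'u': 3, 'v': 0, 'w': 0}
F1 = {'u': 2, 'v': 1, 'w': 0}   # one facility moved from u to v
F2 = {'u': 1, 'v': 1, 'w': 1}   # one facility moved from u to w
print(total_movements([F0, F1, F2]))  # 2
```

Note that an offline algorithm that knows the whole sequence pays only config_distance(F0, F2), which here happens to coincide with the online total of 2.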
Remark 1.3. We note that the OMFL problem on uniform metric spaces is a special case of G-OMFL, where the service cost of a node is either 0 (if there is a facility at the node) or equal to the number of requests at the node.
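The special case of Remark 1.3 gives the simplest service cost function satisfying all of the above properties; the following sketch spot-checks them on a small grid of values:

```python
def sigma_uniform(x, y):
    """Service cost of a node with x facilities and y requests in the
    OMFL-on-uniform-metrics special case (Remark 1.3): 0 if a facility is
    present, otherwise one unit per request."""
    return 0 if x >= 1 else y

# Spot-check the four properties of the service cost function.
for x in range(3):
    for y in range(5):
        s = sigma_uniform
        assert s(x + 1, y) <= s(x, y)              # decreasing in facilities x
        assert s(x, y + 1) >= s(x, y)              # increasing in requests y
        # third property (convexity in x): marginal gain of an extra facility shrinks
        assert s(x, y) - s(x + 1, y) >= s(x + 1, y) - s(x + 2, y)
        # fourth property: the gain of an extra facility does not decrease
        # as the number of requests grows
        assert s(x, y + 1) - s(x + 1, y + 1) >= s(x, y) - s(x + 1, y)
```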

Our Results
We devise a simple, deterministic online algorithm, called dynamic greedy allocation (DGA) and denoted by G, with the following properties. For two parameters α ≥ 1 and β ≥ 0, DGA guarantees that at all times t ≥ 0, S^G_t < αS*_t + β. DGA achieves this while keeping the total movement cost small. In particular, our algorithm a) only moves when it needs to move because the configuration is not feasible anymore and b) always moves a facility which improves the service cost as much as possible. We show that the total number of movements up to a time t of our online greedy algorithm can be upper bounded as a function of the optimal service cost S*_t at time t. Most significantly, we show that at the cost of an additive term which is roughly linear in k, it is possible to achieve a competitive ratio of (1 + ε) for every constant ε > 0. This result almost matches the lower bound of Theorem 1.1.
Remark 1.4. Note that the lower bound of Theorem 1.1 holds even for OMFL on uniform metrics, and therefore, with respect to Remark 1.3, it also holds for the G-OMFL problem.
More precisely, we prove the following main theorem.

Theorem 1.2. Let O denote an optimal offline algorithm. There is a deterministic online algorithm G such that for all times t ≥ 0, the total cost of G can be upper bounded as follows.
Choosing α > 1 The results of the above theorem all hold for α = 1, i.e., our algorithm is always forced to move to a configuration that is optimal up to the additive term β. Even if α is chosen to be larger than 1, as long as we want to guarantee a reasonably small multiplicative competitive ratio (of order o(k)), an additive term of order Ω(k) is unavoidable. In fact, in order to reduce the additive term to O(k), α has to be chosen of order k^δ for some constant δ > 0. Note that in this case, the multiplicative competitive ratio grows to at least α ≫ 1. However, it might still be desirable to choose α > 1. In that case, it can be shown that the movement cost M^G_t only grows logarithmically with the optimal service cost S*_t (where the base of the logarithm is α). As an application, this, for example, allows being (1 + ε)-competitive for any constant ε > 0 against an objective function of the form (1) in which the movement cost is weighted by a factor γ.

Related Work
The OMFL variant is a specific instance of the G-OMFL problem on uniform metrics, where the service cost function is the simplest one. Recall that in Section 1.1.1, we defined the assignment cost for the OMFL problem as the distance between each request and its nearest facility. In a uniform metric, this means that the assignment cost is equal to the number of requests that are not co-located with any facility.
We have introduced a general service cost model for G-OMFL in Section 1.2.1. This model is similar to the one used by Hajiaghayi et al. [29] for facility location, where the opening cost of a facility depends on the number of requests it serves. Another related generalization was made for the k-server problem by [9,3], who defined a cost function for each subtree of an HST. They used an online algorithm for this variant, called the "allocation problem", which is defined on uniform metrics, as a building block to design an online algorithm for k-server on the HST. The guarantees obtained by [9] for the allocation problem on a two-point metric space allowed them to obtain a polylogarithmic-competitive algorithm for the k-server problem on a binary HST with sufficient separation. However, this algorithm is limited by the binary and well-separated structure of the HST. [3] extended [9] to general HSTs and presented the first polylogarithmic-competitive algorithm for the k-server problem on arbitrary finite metrics, with a competitive ratio of O(log^3 n · log^2 k · log log n). This result is remarkable considering that the best deterministic upper bound for the k-server problem is 2k − 1 [31], while there is no lower bound better than Ω(log k) [19] for the k-server problem. The randomized k-server conjecture states that the expected competitive ratio for the k-server problem is O(log k).

M-OMFL
A naive algorithm for OMFL may not relocate any facilities and output the initial facility locations as its solution. This may result in a high assignment cost, but no movement cost. However, this may not be desirable for companies that have to keep the total assignment cost below some threshold. There are several reasons why companies may have such a constraint on the assignment cost. One reason is budget limitation. Companies may have a fixed amount of money to spend on assigning requests to mobile facilities. Another reason is market competition. If the assignment cost is too high, it may affect the end users, making the company's services less attractive than those of similar companies that assign requests from closer facilities. For example, if a company serves end users from mobile facilities that are far away from them, it may incur a higher assignment cost, which could reduce its attractiveness. To address this issue, we define a variant of OMFL in which any solution has to keep the total assignment cost below some threshold, given as a function of the optimal assignment cost, while the goal is to minimize the movement cost.

Problem Definition
The problem statement and the model definition for M-OMFL are the same as for OMFL, as stated in Section 1.1.1. The only difference is that the goal is to minimize the movement cost only, subject to a constraint that the total assignment cost does not exceed a given threshold for any algorithm that solves the problem, including an offline algorithm. The movement cost and the assignment cost are defined as in Section 1.1.1. The threshold is defined as a function of the optimal assignment cost. The problem is specified by two parameters α and β such that

α ≥ 1 and max{α − 1, β} ≥ 1.   (7)
Feasible Configuration We define a configuration F to be feasible at time t iff

S_t(F) ≤ αS*_t + β.   (8)

Feasible Solution For a given algorithm A, we denote the solution at time t by F^A_t := {F^A(i) : i ∈ [0, t]}, where F^A(t) is the feasible configuration after reacting to the arrival of request t and F^A(0) = F_0. The assignment cost of an algorithm A at time t is denoted by S^A_t := S_t(F^A(t)). The total cost of an algorithm, including an optimal offline algorithm, is the sum of the movement costs incurred by the algorithm. However, an optimal offline algorithm for M-OMFL cannot defer all the facility movements until the end, as it can for OMFL and G-OMFL. It has to adjust the facility configuration whenever the condition in (8) is violated.

Our Results
We show that any deterministic online algorithm that solves an instance of the M-OMFL problem in uniform metrics necessarily has a competitive ratio of at least Ω(n).
Theorem 1.3. Assume that we are given parameters α and β which satisfy (7). Then, for any online algorithm ON and for every 1 ≤ k < n, there exists an execution and a time t > 0 such that the competitive ratio between the number of movements by ON and the number of movements of an optimal offline algorithm O is at least n/2. More precisely, for all M^O_t > 0 there is an execution such that M^{ON}_t ≥ (n/2) · M^O_t.

Related Work
Theorems 1.1 and 1.2 demonstrate that if we aim to maintain the facilities in a configuration with an optimal assignment cost up to an additive β for small β, we must pay a multiplicative competitive ratio of Ω(k). This is similar to the deterministic competitive ratio of Θ(k) for the k-server problem [35,31]. M-OMFL is analogous to the k-server problem, in which only the movement cost is minimized. When the distances between every pair of points are uniform, this variant becomes more similar to the paging problem. However, while the paging problem has a deterministic competitive ratio of k [33], we prove that deterministic M-OMFL on uniform metrics (and hence on general metrics) has a lower bound of n/2, where n is the number of metric points. Our lower bound analysis for M-OMFL is similar to the classic deterministic lower bound analysis for paging, particularly in how we compare the movements of the optimal offline algorithm and any online algorithm in each phase or interval. The randomized competitive ratio for the paging problem is Θ(log k) [19].

G-OMFL: An Upper Bound Analysis
In this section, we provide a proof of Theorem 1.2. The proof of Theorem 1.1 is postponed to Section 3, as it also holds for G-OMFL, considering Remark 1.4. In Section 2.1, we introduce our online algorithm, the dynamic greedy allocation (DGA), denoted by G, for solving the G-OMFL problem. An overview of our analysis of DGA is provided in Section 2.2. The complete analysis of DGA is presented in Section 2.3. In the following, whenever clear from the context, we omit the superscript G in the algorithm-dependent quantities defined above.

Algorithm Description
The goal of our algorithm is two-fold. On the one hand, we want to guarantee that the service cost of DGA is always within some fixed bounds of the optimal service cost. On the other hand, we want to achieve this while keeping the overall movement cost low. Specifically, for two parameters α and β, where α ≥ 1 and max{α − 1, β} ≥ 1, we guarantee that at all times S_t ≤ αS*_t + β (Condition (10)), where S_t denotes the total service cost of DGA at time t and S*_t the optimal service cost.
Condition (10) is maintained in the most straightforward greedy manner. Whenever Condition (10) is not satisfied after a new request arrives, DGA greedily moves facilities until Condition (10) holds again. Hence, as long as Condition (10) does not hold, DGA moves a facility that reduces the total service cost as much as possible. Our algorithm stops moving facilities as soon as the validity of Condition (10) is restored. Whenever DGA moves a facility, it performs a best possible move, i.e., a move that achieves the best possible service cost improvement. Thus, DGA always moves a facility from a node where removing a facility is as cheap as possible to a node where adding a facility reduces the cost as much as possible. Therefore, each movement m satisfies the properties stated in (11) and (12). Let τ_m be the time of the m-th movement and f_{v,m−1} be the number of facilities at node v after the (m−1)-th movement. As defined in Section 1.2.1, σ_{v,t}(x) is the service cost at node v of having x facilities at time t. Expression (11) identifies a set of points where removing a facility causes the least increase in service cost, and (12) identifies a set of points where adding a facility causes the most decrease in service cost. DGA then moves a facility from a point in the first set to a point in the second set.
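To make the greedy movement rule concrete, here is a minimal illustrative sketch (not the paper's implementation) of a single DGA movement on a uniform metric under the classic cost model, where the service cost of a node is 0 if it holds a facility and equals its number of requests otherwise. The stopping condition (10) is omitted, and all function and variable names are our own.

```python
# Sketch of one greedy DGA movement on a uniform metric with the classic
# cost model: the service cost of node v is 0 if it holds a facility and
# r_v (its number of requests) otherwise. Names are illustrative.
def dga_move(requests, facilities):
    """Perform one best possible facility move, in the spirit of (11)/(12).

    requests:   dict mapping node -> number of requests r_v
    facilities: dict mapping node -> number of facilities f_v
    Returns (src, dst) if an improving move exists, else None.
    """
    def removal_increase(v):
        # Service cost increase of removing one facility from node v.
        return requests.get(v, 0) if facilities.get(v, 0) == 1 else 0

    def addition_decrease(v):
        # Service cost decrease of adding one facility at node v.
        return requests.get(v, 0) if facilities.get(v, 0) == 0 else 0

    src = min((v for v in facilities if facilities[v] >= 1), key=removal_increase)
    dst = max(requests, key=addition_decrease)
    if addition_decrease(dst) <= removal_increase(src):
        return None  # no move strictly improves the total service cost
    facilities[src] -= 1
    facilities[dst] = facilities.get(dst, 0) + 1
    return (src, dst)
```

For instance, with requests {'a': 1, 'b': 5} and a single facility at 'a', the sketch moves the facility to 'b' and afterwards reports that no further improving move exists.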

Analysis Overview
While the algorithm DGA itself is quite simple, its analysis turns out to be rather technical. We thus first describe the key steps of the analysis by discussing a simple case. We assume the classic cost model, where on uniform metrics the service cost at any node is equal to 0 if there is at least one facility at the node, and otherwise is equal to the number of requests at the node. Further, we assume that we run DGA with parameters α = 1 and β = 0, i.e., after each request arrives, DGA moves to a configuration with optimal service cost. Note that these parameter settings violate the requirements of Condition (10), and we therefore get a weaker bound than the one promised by Theorem 1.2. First, note that in the described simple scenario, DGA clearly never puts more than one facility on the same node. Further, whenever DGA moves a facility from a node u to a node v, the overall service cost has to strictly decrease, and thus the number of requests at node v is larger than the number of requests at node u. Consider some point in time t and let r_min(t) := min_{v∈V : f_{v,t}=1} r_{v,t} be the minimum number of requests among the nodes v with a facility at time t. Hence, whenever at a time t DGA moves a facility from a node u to a node v, node u has at least r_min(t) requests and consequently, node v has at least r_min(t) + 1 requests. Further, if at some later time t′ > t the facility at node v is moved to some other node w, then because DGA always removes a facility from a node with as few requests as possible, we have r_min(t′) ≥ r_min(t) + 1. Consequently, if in some time interval [t_1, t_2] there is some facility that is moved more than once, we know that r_min(t_1) < r_min(t_2). We partition time into phases, starting from time 0. Each phase is a maximal time interval in which no facility is moved more than once (cf. Definition 2.1 in the formal analysis of DGA).
The above argument implies that after each phase, r_min increases by at least one; therefore, at any time t in Phase p we have r_min(t) ≥ p − 1, and at the end of Phase p we have r_min(t) ≥ p. In Section 2.3, the more general form of this statement appears as Lemma 2.1. There, γ_p is defined to be the smallest service cost improvement of any movement in Phase p (γ_p = 1 in the simple case considered here), and Lemma 2.1 shows that r_min grows by at least γ_p in Phase p. Assume that at some time t in Phase p, a facility is moved from a node u to a node v. Because node u already had its facility at the end of Phase p − 1, we have r_{u,t} = r_min(t) ≥ p − 1. Consequently, at the end of Phase p, there is at least one node (the source of the last movement) that has no facility and at least p − 1 requests. The corresponding (more technical) statement in our general analysis appears as Lemma 2.3.
We bound the total cost of DGA and of an optimal offline algorithm from above and below, respectively, as a function of the optimal service cost. Hence, the ratio between these two total costs provides the desired competitive factor. Our algorithm guarantees that at all times, the service cost is within fixed bounds of the optimal service cost (in the simple case here, the service cost is always equal to the optimal service cost). Knowing that there are nodes with many requests and no facilities therefore allows us to lower bound the optimal service cost. In the general case, this is done by Lemma 2.6 and Lemma 2.7. In the simple case considered here, as at the end of Phase p there are k nodes with at least p requests (the nodes that have facilities) and at least one additional node with at least p − 1 requests, we know that at the end of Phase p, the optimal service cost is at least p − 1. Consequently, DGA (in the simple case) pays exactly the optimal service cost (as mentioned before, in the general case, the service cost is within fixed bounds of the optimal service cost) and at most (p − 1)k as movement cost. Hence, the total cost paid by DGA is at most a factor of k + 1 times the optimal service cost, since the optimal service cost is at least p − 1. By choosing α slightly larger than 1 and a larger β (β ≥ k), DGA becomes lazier, and one can show that the difference between the number of movements of DGA and the optimal service cost becomes significantly smaller. Also note that by construction, the service cost of DGA is always at most αS*_t + β. When analyzing DGA, we mostly ignore the movement cost of an optimal offline algorithm. We only exploit the fact that by the time DGA decides to move a facility for the first time, any other algorithm must also move at least one facility, and therefore the optimal offline cost becomes at least 1.

Upper Bound Analysis
In the following, we show how to upper bound the total cost of our online algorithm, denoted by G, by a function of the total cost of an optimal offline algorithm O. Clearly, DGA at all times t ≥ 0 guarantees that the service cost is bounded as in (10). In order to upper bound the total cost, it therefore suffices to study how the movement cost M^G_t of DGA grows as a function of the cost of the optimal offline algorithm. Let O be an optimal offline algorithm and let F_O(t) be the configuration of O at time t. Recall that d(F_0, F_O(t)) denotes the total number of movements required to move from the initial configuration to configuration F_O(t). In order to upper bound M^G_t as a function of cost^O_t, we upper bound it as a function of S*_t + d(F_0, F_O(t)). Instead of directly dealing with d(F_0, F_O(t)), we make use of the fact that our analysis works for a general cost function σ satisfying the conditions given in (3), (4), (5), and (6). Given a service cost function σ, consider a function σ′ defined as follows, where f_{v,t} is the number of facilities at time t on node v. Clearly, σ′ also satisfies the conditions given in (3), (4), (5), and (6). In addition, for any time t and any configuration F, let S′_t(F) refer to the total service cost w.r.t. the new cost function σ′. Hence, S′_t(F) exactly measures the sum of the service cost and the movement cost of a configuration F. Of course, in all our results, S*_t then corresponds to the combination of service and movement cost of an optimal configuration F*.
We now analyze DGA. In our analysis, we bound the total costs of the optimal offline algorithm O and the online algorithm G from below and above, respectively, as functions of the optimal service cost, and thus obtain the upper bound (competitive factor) promised in Theorem 1.2. Hence, we first compute the optimal service cost.
For the analysis of DGA, we partition the movements into phases p = 1, 2, . . ., where, roughly speaking, a phase is a maximal consecutive sequence of movements in which no facility is moved twice. We use m_p to denote the first movement of Phase p (for p ∈ N). In addition, we define v^{src,G}_m and v^{dst,G}_m to be the nodes involved in the m-th facility move, where we assume that G moves a facility from node v^src_m to node v^dst_m. Formally, the phases are defined as follows.
Definition 2.1 (Phases). The movements are divided into phases p = 1, 2, . . ., where Phase p starts with movement m_p and ends with movement m_{p+1} − 1. We have m_1 = 1, i.e., the first phase starts with the first movement. Further, for every p > 1, m_p is the first movement m > m_{p−1} with v^src_m = v^dst_{m′} for some movement m′ of Phase p − 1, i.e., the first movement that would move some facility for the second time within the phase. For a Phase p ≥ 1, let λ_p := m_{p+1} − m_p be the number of movements of Phase p.
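As an illustration of Definition 2.1, the following hypothetical sketch partitions a given sequence of movements into phases: a new phase starts as soon as a movement takes a facility from a node that already received one in the current phase, i.e., as soon as some facility would be moved for a second time. The function name and interface are our own.

```python
# Illustrative sketch of the phase partition from Definition 2.1: a phase
# ends as soon as a movement's source node is the destination of an earlier
# movement of the same phase (a facility would be moved twice).
def partition_into_phases(movements):
    """movements: list of (src, dst) node pairs; returns a list of phases."""
    phases, current, dsts = [], [], set()
    for src, dst in movements:
        if src in dsts:          # facility moved twice -> close the phase
            phases.append(current)
            current, dsts = [], set()
        current.append((src, dst))
        dsts.add(dst)
    if current:
        phases.append(current)
    return phases
```

Note that by this rule each phase contains at most k movements, since within a phase no facility is moved twice and there are only k facilities (cf. Observation 2.8).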

Optimal Service Cost Analysis
The algorithm DGA moves facilities in order to improve the service cost. Throughout the rest of our analysis, we use τ^G_m to denote the time of the m-th movement. For a given movement m, we use γ(m) > 0 to denote the service cost improvement of m. Further, we use F_0 to denote the initial configuration of the k facilities, and for a given (deterministic) algorithm A, for any m ≥ 1, we let F^A_m = {(v, f^A_{v,m}) : v ∈ V} be the configuration of the k facilities of A after m facility movements (i.e., after m facility movements of A, node v has f^A_{v,m} facilities).
For each Phase p, we define the improvement γ_p of Phase p and the cumulative improvement Γ_p achieved by Phase p as follows. We are now ready to prove our first technical lemma, which lower bounds the cost of removing a facility from any node that has one (all v ∈ V with f_v ≥ 1) at any point in the execution. The following lemma implies that removing any facility of an optimal configuration during some Phase p increases the optimal service cost by at least Γ_{p−1} (and by Γ_p at the end of Phase p), since the facilities of an optimal configuration are located at the places with the maximum number of requests. Lemma 2.1. Let m be a movement, let F = {(v, f_v) : v ∈ V} be the configuration of DGA at any point in the execution after movement m, and let t ≥ τ_m be the time at which the configuration F occurs. Then, for all times t′ ≥ t and for all nodes v ∈ V, if f_v ≥ 1, then σ_{v,t′}(f_v − 1) − σ_{v,t′}(f_v) ≥ Γ_{p−1}, where p is the phase in which movement m occurs.
Proof. We show that for each facility movement m ∈ N of DGA, inequality (18) holds, where p is the phase in which movement m occurs (i.e., the claim of the lemma holds immediately after movement m). The lemma then follows because (i) any configuration {(v, f_v) : v ∈ V} occurring after movement m is the configuration F_{m′} for some movement m′ ≥ m, (ii) the values Γ_{p−1} are monotonically increasing with p, and (iii) by (6), for all v ∈ V, the value σ_{v,t}(f − 1) − σ_{v,t}(f) is monotonically non-decreasing with t.
It therefore remains to prove (18) for every m, where p is the phase of movement m. We prove a slightly stronger statement. Generally, for a movement m′ and a Phase p′, let V^dst_{p′,m′} be the set of nodes that have received a new facility by some movement m″ ≤ m′ of Phase p′. We show that, in addition to (18), inequality (19) also holds. We prove (18) and (19) together by induction on m. Induction Base (m = 1): The first movement occurs in Phase 1. By (17), Γ_0 = 0, and by (3), we also have σ_{v,t}(f − 1) − σ_{v,t}(f) ≥ 0 for all times t ≥ 0, all nodes v ∈ V, and all f ≥ 1. Inequality (18) therefore clearly holds for m = 1. It remains to show that (19) also holds for m = 1. We have V^dst_{1,1} = {v^dst_1}, and showing (19) for m = 1 therefore reduces to showing the claim for v^dst_1, which follows directly from (16) and (17). Induction Step (m > 1): We first show that Inequalities (18) and (19) hold immediately before movement m, in the forms (20) and (21). If m is not the first movement of Phase p, Inequalities (20) and (21) follow directly from the induction hypothesis (for m − 1) and from (6). Let us therefore assume that m is the first movement of Phase p. Note that in this case V^dst_{p,m−1} = ∅, and (21) therefore trivially holds. Because m > 1, we know that in this case p ≥ 2. From the induction hypothesis and from (6), we can therefore conclude that the required bound holds for every node v ∈ V^dst_{p−1,m−1} (every node v that is the destination of some facility movement in Phase p − 1). Since this bound holds for all these nodes, (20) also holds if m ≥ 2 is the first movement of some phase.
We can now prove (18) and (19). For all nodes v ∉ {v^src_m, v^dst_m}, we have f_{v,m} = f_{v,m−1}; Inequalities (18) and (19) therefore directly follow from (20) and (21), respectively. For the two nodes involved in movement m, first note that v^src_m ∉ V^dst_{p,m−1}. It therefore suffices to show (22) as well as (23). Inequality (22) directly follows from (20) and (5). For (23), the stated chain of inequalities yields the lower bound Γ_p.
This completes the proof of (18) and (19), and thus the proof of the lemma.
For each phase number p, let θ_p := τ_{m_p} be the time of the first movement m_p of Phase p. Before continuing, we give lower and upper bounds on γ_p, the improvement of Phase p. For all p ≥ 1, we define η_p as in (24). Lemma 2.2. Let m be a movement of Phase p and let F* ∈ arg min_F S_t(F) be the optimal configuration at time τ_m. We then have the stated bounds on γ(m). Proof. For the upper bound, observe that γ(m) ≤ S_{τ_m}(F_{m−1}) − S*_{τ_m}, as the service cost clearly cannot be improved by a larger amount. Because at all times t, DGA keeps the service cost below αS*_t + β, we have S_{τ_m}(F_{m−1}) ≤ αS*_{τ_m} + β. The upper bound on γ(m) follows from (24) and because S*_{τ_m} ≤ S*_{θ_{p+1}}. For the lower bound on γ(m), we need to prove that d(F_{m−1}, F*) ≥ η_p/γ(m). Because DGA moves a facility at time τ_m, we know that S_{τ_m}(F_{m−1}) ≥ αS_{τ_m}(F*) + β, and applying the definition (24) of η_p, we thus have S_{τ_m}(F_{m−1}) − S_{τ_m}(F*) ≥ η_p. Intuitively, we have d(F_{m−1}, F*) ≥ η_p/γ(m) because DGA always chooses the best possible movement, and thus every possible movement improves the overall service cost by at most γ(m). Thus, the number of movements needed to get from F_{m−1} to an optimal configuration F* has to be at least η_p/γ(m). For a formal argument, assume that we are given a sequence of ℓ := d(F_{m−1}, F*) movements that transform configuration F_{m−1} into configuration F*. For i ∈ [ℓ], assume that the i-th of these movements moves a facility from node u_i to node v_i. Further, for any i ∈ [ℓ], let f_i be the number of facilities at node u_i and let f′_i be the number of facilities at node v_i before the i-th of these movements. Because the sequence of movements is a shortest sequence to get from F_{m−1} to F*, the stated properties certainly hold. For the service cost improvement γ of the i-th of these movements, we therefore obtain γ ≤ γ(m).
We can now lower bound the distribution of requests at the time of each movement.
Lemma 2.3. Let m be a movement of Phase p (for p ≥ 1). Then, there are integers with the stated properties. Proof. It suffices to prove the statement for t = τ_m; for larger t, the claim then follows from (6). Consider an optimal configuration F* at time τ_m, which must satisfy the stated property; otherwise, moving a facility from u to v would (strictly) improve the configuration F*. By Lemma 2.1, we have (25). To prove the lemma, it therefore suffices to show (26), as (26) implies the claim of the lemma. By (2), we therefore need d(F_{m−1}, F*) ≥ η_p/γ(m), which follows from Lemma 2.2.
In the next lemma, we derive a lower bound on S*_{θ_p}, the optimal service cost at the time Phase p starts. For each Phase p ≥ 1, we first define S_p as follows.
Proof. We prove the lemma by induction on p.
Induction Base (p = 1, 2): Using (14), we have S*_{θ_1} ≥ 1, and since S_1 = S_2 = 1, the claim follows. Induction Step: We use the induction hypothesis to assume that the claim of the lemma is true up to Phase p, and we prove that it also holds for Phase p + 1. By the induction hypothesis, the claim holds for all i ∈ [p]. For all i ∈ [p], we define η̄_i := (α − 1)S_i + β and δ_i as the stated maximum. As a consequence of (24) and (28), we get that η_i ≥ η̄_i for all i ∈ [p]. In the following, let p′ ∈ [2, p] be some phase. Lemma 2.3 implies that after the last movement m of Phase p′, there are non-negative integers with the stated properties. As there are only k facilities in any feasible configuration, after the last movement of Phase p′, for any feasible configuration F, we obtain the stated bound. At the beginning of Phase p + 1 (for p ≥ 2), the total optimal service cost therefore satisfies the corresponding inequality. We define ζ_i for all i ∈ [3, p] as follows. Using the definition of δ_i, and considering the definition of η̄_i, we obtain the stated chain of bounds. We therefore have ζ_{p+1} = S_{p+1} directly from (27), and thus the claim of the lemma follows.
In order to explicitly lower bound the optimal service cost after p phases, we need the following technical statement about positive reals c_i; further, let λ ≥ 0 be an arbitrary non-negative real number. We then have the two bounds (I) and (II). Proof. The first part of the claim follows from the inequality of arithmetic and geometric means. In the following, we nevertheless directly prove both parts together. We let x = (x_1, . . ., x_ℓ) ∈ R^ℓ be a vector of ℓ real variables, and we define multivariate functions f(x) : R^ℓ → R and g(x) : R^ℓ → R as stated. We need to show that for x ∈ X, f(x) and g(x) are lower bounded by the right-hand sides of Inequalities (I) and (II) above, respectively. Note that X is a closed subset of R^ℓ, and because c_min > 0, both functions f(x) and g(x) are continuous on X. The minimum over x ∈ X is therefore well-defined for both f(x) and g(x). We show that both f(x) and g(x) attain their minimum at x* := (x*_1, . . ., x*_ℓ), as defined for all i ∈ [ℓ]. Note that x* is the unique solution x ∈ X of the system of equations (31). Because we know that the minimum is attained, it is therefore sufficient to show that for any y ∈ X that does not satisfy (31), f(y) and g(y) are not minimal. Let us therefore consider a vector y = (y_1, . . ., y_ℓ) ∈ X that does not satisfy (31). First note that both f(x) and g(x) are strictly monotonically increasing in x_1 and strictly monotonically decreasing in x_ℓ. If either y_1 > c_min or y_ℓ < c_max, it is therefore clear that f(y) and g(y) are both not minimal (over X). Let us therefore assume that y_1 = c_min and y_ℓ = c_max. From the assumption that y does not satisfy (31), there is an i_0 ∈ {2, . . ., ℓ − 1} for which the corresponding equation fails. We define a new vector y′ = (y′_1, . . ., y′_ℓ) ∈ X as follows. We set y′_{i_0} = √(y_{i_0−1} y_{i_0+1}) and y′_i = y_i for all i ≠ i_0, and we show that f(y′) < f(y) and g(y′) < g(y). Note that λ ≥ 0 and C > 0.
In both cases, we therefore need to show the stated inequality, which follows from the behavior of the function h : [c_min, c_max] → R, h(z) as defined above. As long as (α − 1)S*_{θ_p} < β, the effect of the (α − 1)S*_{θ_p} term on η_p (and thus of the αS*_t term in (10)) is relatively small. Let us therefore first analyze how the service cost grows by only considering terms that depend on β (and not on α).
On the other hand, as soon as S*_{θ_p} > max{1, β/(α − 1)}, the effect of the β term in (10) becomes relatively small. As a second case, we therefore analyze how the service cost grows by only considering terms that depend on α (and not on β). Lemma 2.7. Let p_0 ≥ 2 be a phase for which S_{p_0} ≥ S_{p_0−1} ≥ S_0 := max{1, β/(α − 1)}. For any phase p > p_0, we have the stated bound. Proof. By Lemma 2.4, using β ≥ 0, we get the stated lower bound on S_p for all p ≥ p_0. Similarly to before, we define γ_min := min{γ_{p_0−1}, . . ., γ_{p−1}} and γ_max := max{γ_{p_0−1}, . . ., γ_{p−1}}. By Lemma 2.2, the assumptions regarding p_0, and because the values η_i are non-decreasing in i, we obtain the stated chain of inequalities; the last inequality follows because S*_{θ_p} ≥ S_p ≥ S_{p_0} ≥ max{1, β/(α − 1)} and by applying (9). We can now apply Inequality (II) from Lemma 2.5 to obtain (35). In the following, assume that (36) holds; if (36) does not hold, the claim of the lemma is trivially true. By replacing S*_{θ_p} on the right-hand side of (35) with the upper bound of (36), we obtain the claimed bound. The lemma then follows because we assumed that S_{p_0} ≥ max{1, β/(α − 1)}.

Optimal Offline Algorithm Total Cost
Service Cost In order to bound the service cost, we can simply bound the service cost of O as follows. Movement Cost To simplify our analysis, we disregard the movement cost of the optimal offline algorithm; this has no substantial effect on the competitive factor we obtain, since O has to pay at least the optimal service cost, which we show is large enough. The total cost of the optimal offline algorithm is therefore bounded as follows.

DGA Total Cost
Service Cost The online algorithm DGA keeps the service cost smaller than a linear function of the optimal service cost, as stated in (10). Movement Cost First, using Definition 2.1, we bound the number of movements in each phase.
Proof. As an immediate consequence of Definition 2.1, we obtain that the number of movements in each phase is at most k. Let m > m_p and consider the movements in [m_p, m]. We prove that if m < m_{p+1}, no two of the movements in [m_p, m] move the same facility. The claim then follows because there are only k facilities. For the sake of contradiction, assume that there is some facility i that is moved more than once, and let m′ and m″ (m′, m″ ∈ [m_p, m], m′ < m″) be the first two movements in [m_p, m] in which facility i is moved. We clearly have v^dst_{m′} = v^src_{m″}, and Definition 2.1 thus leads to a contradiction to the assumption that m < m_{p+1}.
As a result of the above observation together with Lemma 2.6 and Lemma 2.7, we can prove the following lemma, which bounds the number of movements of DGA in terms of the optimal service cost.
Lemma 2.9. For any α ≥ 1 and β satisfying (9), there is a deterministic online algorithm G such that for all times t ≥ 0, the total movement cost M^G_t is bounded as follows.
• If α = 1, then for any ℓ ≥ 1, ε > 0, and β ≥ k(2k)^{1/ℓ}/ε, we have the first stated bound. • If α ≥ 1 + ε, where ε > 0 is some constant, and β satisfies (9), we have the second stated bound. Proof. First note that by Observation 2.8, the movement cost of our algorithm by time θ_p is bounded as stated. Together with the lower bounds on S*_{θ_p} of Lemma 2.6 and Lemma 2.7, this allows us to derive an upper bound on the movement cost of our algorithm as a function of S*_{θ_p}. Note that as all upper bounds claimed in the lemma have an additive term of O(k) (with no specific constant), it is sufficient to prove that the lemma holds for all times t = θ_p, where p ≥ 2 is a phase number.
Let us first consider the case α = 1. Because in that case β/(α − 1) is unbounded, we can only apply Lemma 2.6 to upper bound the movement cost as a function of S*_t. We choose ℓ ≥ 1 and assume that β ≥ k(2k)^{1/ℓ}/ε for ε > 0. Together with (39), for p ≥ ℓ + 2, Lemma 2.6 then gives the stated bound. The first part of Lemma 2.9 then follows because the total movement cost for the first ℓ + 2 phases is at most O(ℓk). The special cases are obtained as follows. For β = Ω(k + k/ε), we set ℓ = Θ(log k) for every ε > 0, whereas for β = Ω(k log k/log log k), we set ε = Θ(log log k/log^{1−δ} k) and ℓ = Θ((1/δ) · log k/log log k) for constant 0 < δ ≤ 1. Let us now move to the case α > 1. Let p_0 be the first phase p_0 ≥ 2 for which S*_{θ_{p_0}} ≥ S_0, where S_0 = max{1, β/(α − 1)} as in Lemma 2.7. Further, we set p_1 = p_0 + ⌈2 log_α(2k)⌉. Using Lemma 2.7, for p ≥ p_1, we obtain the stated bound, and the claimed estimate follows. The second claim of Lemma 2.9 then follows by bounding p_0. If S_0 = 1, we have p_0 = 2. Otherwise, we can apply Lemma 2.6 to upper bound p_0 as the smallest value for which the corresponding condition holds. For α = O(log k/log log k), the assumption that α is at least 1 + ε for some constant ε > 0 gives p_0 = Θ(log k/log log k). Otherwise (i.e., for large α), we obtain p_0 = Θ(log_{α−1} k) = Θ(log_α k).
Note that by choosing α > 1, the dependency of the movement cost M^G_t on the optimal service cost S*_t is only logarithmic, because the terms min{log k/log log k, log_α k} and log_α(k/(1 + β)) are dominated by log k.

Lower Bound Execution
We assume that ON is the given online algorithm and O is an optimal offline algorithm. Further, recall that we assume that ON guarantees at all times that the difference between the assignment cost of ON and the optimal assignment cost is less than β, for some given β > 0.
We need n to be sufficiently large, and for simplicity we assume that n ≥ 3k. We denote a feasible configuration by a set F ⊂ V of size |F| = k. Further, without loss of generality, we assume that all facilities of ON and O are at the same locations at the beginning (i.e., at time t = 0). At each point t in the execution, a configuration F*_t with optimal assignment cost places facilities at the k nodes with the most requests (breaking ties arbitrarily if there are several nodes with the same number of requests). Also, at a time t, the optimal assignment cost is equal to the total number of requests at the nodes in V \ F*_t for an arbitrary optimal configuration F*_t. Time is divided into phases. We construct the execution such that it lasts for at least k phases. As described in the outline, we define integers Γ_1 < Γ_2 < . . . such that at the end of Phase i, there are exactly k nodes with Γ_i requests (and all other nodes have fewer requests). For each Phase i, we define V_i to be this set of k nodes with Γ_i requests. We also fix integers n_1 ≥ n_2 ≥ · · · ≥ 1 with n_1 ≤ k/3, and at the beginning of each Phase i, we pick a set N_i of n_i nodes to which we directly add requests so that all of them have exactly Γ_i requests. For i = 1, we pick N_1 as an arbitrary subset of V \ F_0. We define V_0 := F_0. For i ≥ 2, we choose N_i as an arbitrary subset of V_{i−2} \ V_{i−1}. Clearly, at the end of Phase i, we have N_i ⊆ V_i, as otherwise there would be more than k nodes with exactly Γ_i requests. Note that the involved node sets are large enough, and it is therefore possible to choose N_i as described. Note also that because N_i ⊆ V_{i−2} \ V_{i−1}, at the beginning of Phase i, all nodes in N_i have exactly Γ_{i−2} requests. The remaining ones of the k nodes that end up in V_i (and thus have Γ_i requests at the end of Phase i) are chosen among the nodes in V_{i−1}. Consequently, at the end of Phase i − 1, and thus at the beginning of Phase i, there are exactly k nodes with Γ_{i−1} requests. Now, n_i of the nodes in V_{i−2} \ V_{i−1} are chosen as the set N_i, and we increase their number of requests to Γ_i. From then on, throughout Phase i, there are k + n_i nodes with at least Γ_{i−1} requests, of which at most k have Γ_i requests. The number of nodes with fewer than Γ_{i−1} requests is the same as at the end of Phase i − 1; in fact, nodes that are not in V_{i−1} ∪ N_i do not change their number of requests after Phase i − 1. As a consequence of the execution, after increasing the number of requests in N_i to Γ_i, the optimal assignment cost remains constant throughout Phase i ≥ 1, and it can be evaluated to a value that we denote by Σ*_i. For convenience, we also define Σ*_0 := 0; moreover, Σ*_1 = 0, since there are at most k nodes with Γ_1 requests at the end of Phase 1.
In the following, we call a node v free at some point in the execution if the algorithm currently has no facility at node v. We now fix a Phase p ≥ 1 and assume that we are at a time t at which we have already picked the set N_p and increased the number of requests of the nodes in N_p to Γ_p. By the above observation, we have S*_t = Σ*_p, and therefore ON is forced to move. For p ≥ 1, we then get the bound stated in (43). In the following, for simplicity, we assume that for i = 1, 2, . . ., p, the values n_i do not have to be integers. For integer n_i, the proof works in the same way but becomes more technical and harder to read. We fix the values of n_i such that n_1 = k/3 and n_p = 1. For all i ≥ 1, (43) can then be simplified to (44). We have already seen that S*_t = Σ*_p. Using (44) and (43), the claim of the first part of the lemma follows analogously to Lemma 2.4 and Lemma 2.6, and the claim of the second part follows analogously to Lemma 2.4 and Lemma 2.7 in the upper bound analysis section.

Optimal Offline Algorithm Total Cost
An optimal offline algorithm, say O, knows the request sequence in advance. In other words, it can wait until all requests have arrived and just perform all the necessary facility movements at the very end. Therefore, an upper bound on the total cost of O at any time t is given by (45). We now have everything we need to prove Theorem 1.1.
Proof of Theorem 1.1. The proof of Theorem 1.1 now follows directly from Lemma 3.1 and from (45).
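The wait-then-move strategy described above can be sketched as follows for the uniform-metric setting, under the simplifying assumptions that only the final assignment cost matters and that each relocated facility costs one unit of movement; the function name and interface are our own.

```python
# Illustrative sketch of the wait-then-move offline strategy on a uniform
# metric: keep the initial configuration and, only at the very end, move to
# the k nodes with the most requests. Its total cost is then the optimal
# assignment cost plus at most k unit moves.
def wait_then_move_cost(requests, initial, k):
    """requests: dict node -> final request count; initial: set of k nodes."""
    # Optimal final configuration: the k nodes with the most requests.
    best = set(sorted(requests, key=requests.get, reverse=True)[:k])
    # Assignment cost: requests at nodes left without a facility.
    assignment = sum(r for v, r in requests.items() if v not in best)
    # Movement cost: one unit per facility relocated at the end.
    movement = len(best - initial)
    return assignment + movement
```

For example, with final request counts {'a': 5, 'b': 3, 'c': 1}, k = 1, and the facility initially at 'c', the strategy pays an assignment cost of 4 plus one move, i.e., 5 in total.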

M-OMFL: A Lower Bound Analysis
We provide our lower bound execution in the following. We consider a uniform metric where the distance between every pair of points is 1. In this setting, the goal of any feasible solution for M-OMFL is to minimize the total number of movements. As we can assume that each node has either 0 or 1 facilities, we slightly overload our notation and simply denote a feasible configuration by a set F ⊂ V of size |F| = k. We first fix ON to be any given deterministic online algorithm and O to be an optimal offline algorithm. To prove the statement of Theorem 1.3, we distinguish two cases, depending on the number of facilities k. In both cases, we define iterations to be subsequences of requests such that ON needs to move at least once per iteration. The number of movements by ON is therefore at least the number of iterations of a given execution.
Case k ≤ ⌊n/2⌋ At the beginning, we place a large number of requests on any k − 1 of the nodes that initially have facilities. We choose this number of requests sufficiently large such that no algorithm can afford to move any of these k − 1 facilities. This essentially reduces the problem to k = 1 and n − k + 1 nodes.
To bound the number of movements by O, we then consider intervals of n − k iterations such that ON is forced to move in each iteration.During each interval, the requests are distributed in such a way that at the beginning of the i-th iteration of the interval there are at least n − k − i + 1 nodes such that if any offline algorithm places a facility on one of these nodes, (8) remains satisfied throughout the whole interval.Hence, there exists an offline algorithm that moves at most once in each interval and therefore the number of movements by O is upper bounded by the number of intervals.
Case k > ⌊n/2⌋ In this case, there is some resemblance between the constructed execution and the lower bound constructions for the paging problem. For simplicity, assume that there are n = k + 1 nodes (we let requests arrive at only k + 1 nodes). At the beginning of each iteration, we place a sufficiently large number of requests on the node without any facility of ON such that (8) is violated. Thus, ON has to move at least one facility to keep (8) satisfied. By contrast, O does not need to move in each iteration: there is always a node that will not receive new requests during the next k iterations, and therefore O only needs to move at most once every k iterations to keep (8) satisfied.
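The adversarial request placement in this case can be sketched as follows; the interface is hypothetical, and the large batch of requests placed on the free node is abstracted into simply marking that node as "hot".

```python
# Illustrative sketch of the adversary for the case k = n - 1: in every
# iteration, a large batch of requests is (abstractly) placed on the unique
# node that holds no facility of ON, forcing ON to cover it.
def adversary_run(n, iterations, online_move):
    """online_move: callback (config, hot_node) -> new config (set of nodes).

    Returns the number of iterations in which ON covers the hot node,
    i.e., the number of moves ON is forced to make.
    """
    config = set(range(n - 1))           # ON starts with k = n - 1 facilities
    moves = 0
    for _ in range(iterations):
        (hot,) = set(range(n)) - config  # the single free node of ON
        new_config = online_move(config, hot)
        if hot in new_config:            # ON had to move to cover the hot node
            moves += 1
        config = new_config
    return moves
```

With any online strategy that always covers the hot node, the number of forced moves equals the number of iterations, while an offline algorithm can get by with one move per k iterations.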
Proof of Theorem 1.3: Consider any request sequence.First we provide a partitioning of the request sequence as follows.The request sequence is partitioned into iterations.Iteration 0 is the empty sequence and for every i ≥ 1, iteration i consists of a request sequence of a length dependent on α, β, and the iteration number i.The request sequence of an iteration i is chosen dependent on a given online algorithm ON such that ON must move at least once in iteration i.We see that while ON needs to move at least once per iteration, there is an offline algorithm which only moves once every at least n/2 iterations.
In the proof, we reduce all the cases to two extreme cases.In the first case, we reduce the original metric on a set of n nodes with k ≤ ⌊n/2⌋ facilities to the case where there is only 1 facility.To do this, we first place sufficiently many requests on k − 1 nodes that have facilities at the beginning of execution (for simplicity, assume that we place an unbounded number of requests on these nodes).This prevents any algorithm from moving its facilities from these k − 1 nodes during the execution and hence we can ignore these k − 1 nodes and facilities in our analysis.In contrast, for the second case where k > ⌊n/2⌋, we assume that w.l.o.g., k = n − 1 by simply only placing requests on the k nodes which have facilities at the beginning and on one additional node.
In the following, we let t_i denote the end of iteration i. Moreover, suppose I is the total number of iterations, where we assume that I ≡ 0 (mod max{k, n − k}).
Case k ≤ ⌊n/2⌋. The idea behind the execution is to uniformly increase the number of requests on the n − k nodes that do not have a facility at the beginning of an iteration i (i.e., at time t_{i−1}) in such a way that ON has to move at least once to satisfy (8) at the end of iteration i. Moreover, the distribution of requests guarantees that any node without a facility at time t_{i−1} is a candidate to host the (free) facility of ON at time t_i. Let v be any node in the set V of nodes. We use r_{v,t} to denote the number of requests at node v after the arrival of t requests. When the context is clear, we omit the second subscript (i.e., t). The optimal offline algorithm, by contrast, needs to move a facility from v^{ON}_{τ_k} to v^{ON}_{τ_1} during the interval, with respect to the request distribution in (53). The node v^{ON}_{τ_k} is the node that has αS*_t + max{β, 1} requests within the interval due to (53), where t is the ending time of any iteration of the previous interval. Hence, at the end of any iteration i in the interval, the offline algorithm's assignment cost equals the optimal assignment cost, and thus (8) remains satisfied. Consequently, at most one movement by the optimal offline algorithm suffices during the interval. This shows that the number of movements by any optimal offline algorithm is at most I/k in this case.
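On a uniform metric the assignment costs in this argument are easy to compute, which lets us illustrate how flooding the facility-less nodes eventually forces ON to move. The sketch below is our own illustration: the function names are hypothetical, and we read invariant (8) as S_t ≤ α·S*_t + β, which is an assumption based on the surrounding text.

```python
def assignment_cost(requests, facilities):
    """Uniform metric (all pairwise distances 1): the assignment cost is
    the total number of requests on nodes holding no facility."""
    return sum(c for v, c in requests.items() if v not in facilities)

def opt_assignment_cost(requests, k):
    """An optimal configuration puts its k facilities on the k most
    loaded nodes, so only the remaining loads pay distance 1."""
    loads = sorted(requests.values(), reverse=True)
    return sum(loads[k:])

# Example with k = 1: ON keeps its facility on node 0 while the adversary
# floods the other nodes. Reading (8) as S_t <= alpha * S*_t + beta, the
# invariant is eventually violated and ON must relocate.
requests = {0: 0, 1: 5, 2: 5, 3: 5}
on_cost = assignment_cost(requests, {0})      # 15
opt = opt_assignment_cost(requests, k=1)      # 10
alpha, beta = 1.2, 2.0
assert on_cost > alpha * opt + beta           # (8) violated: ON must move
```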
Let t be the ending time of the (c · max{k, n − k})-th iteration, for any integer c ≥ 1. Combining Corollary 4.2 and Claim 4.3, the claim of the theorem follows.

Conclusion
In light of the limited research on online variants of the mobile facility location problem (MFL), we introduce and examine the OMFL problem and two of its variants: G-OMFL and M-OMFL. We establish tight bounds for G-OMFL on uniform metrics; our lower bound for OMFL also applies to G-OMFL on uniform metrics, since OMFL on uniform metrics is a special case of G-OMFL on uniform metrics. Additionally, we demonstrate a linear lower bound on the competitiveness of M-OMFL, even on uniform metrics. Motivated by the approach used by [9,3] for the k-server problem, we define and study G-OMFL on uniform metrics, similar to the allocation problem defined by [9,3] on uniform metrics. This is the first step toward solving OMFL on HSTs and general metrics. The second step, which remains an open question, is to adapt a similar approach for OMFL using the DGA algorithm presented in this paper. The idea is that each internal node of the HST runs an instance of G-OMFL to decide how to allocate its facilities among its children. Starting from the root, which has k facilities, this recursive process determines the number of facilities at each leaf of the HST, providing a feasible solution for OMFL.
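The proposed recursive scheme can be sketched as follows. This is only a structural illustration of the open direction: the class and function names are hypothetical, and the even budget split stands in for a real G-OMFL instance, which would adapt the allocation to the request sequence.

```python
class Node:
    """A node of an HST; a node with no children is a leaf."""
    def __init__(self, children=None):
        self.children = children or []

def allocate(node, budget):
    """Stand-in for a G-OMFL instance at an internal node: split the
    facility budget among the children (here simply as evenly as possible;
    a real instance would react to the observed requests)."""
    m = len(node.children)
    base, extra = divmod(budget, m)
    return [base + (1 if i < extra else 0) for i in range(m)]

def facilities_at_leaves(node, budget, path=(), out=None):
    """Recurse from the root with k facilities down to the leaves,
    yielding a facility count per leaf (a feasible OMFL solution)."""
    if out is None:
        out = {}
    if not node.children:
        out[path] = budget
        return out
    for i, (child, b) in enumerate(zip(node.children, allocate(node, budget))):
        facilities_at_leaves(child, b, path + (i,), out)
    return out

# A small 2-level HST with 4 leaves and k = 5 facilities at the root:
hst = Node([Node([Node(), Node()]), Node([Node(), Node()])])
leaf_counts = facilities_at_leaves(hst, 5)
assert sum(leaf_counts.values()) == 5   # all k facilities land on leaves
```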
The second open question is whether the bound provided by Theorem 1.3 is tight. Additionally, due to the similarities between the M-OMFL problem on uniform metrics and the paging problem, it would be interesting to explore randomized online algorithms against oblivious adversaries for M-OMFL, with the goal of achieving a sublinear competitive ratio for this problem.
at the time τ_m of movement m. Let us further consider the configuration F_{m−1} of DGA immediately before movement m. Consider a pair of nodes u and v such that f^*_u > f_{u,m−1} and f_{v,m−1} > f^*_v. By the optimality of F^*, we have

Table 1: A comparison of OMFL and related work in terms of resource type, service location, service duration, and cost function.