Minimality of EDF networks with resource sharing

We consider a general real-time, multi-resource network with soft customer deadlines, in which users require service from several shared resources simultaneously. We show that the preemptive earliest-deadline-first scheduling strategy minimizes, in a suitable sense, the system resource idleness with respect to customers with lead times not greater than any given threshold value on all the routes of the network. Related methods of performance evaluation for such systems are also discussed. Our arguments are pathwise, requiring no assumptions on the network topology and very mild assumptions, or even no assumptions, on the model stochastic primitives.


Introduction
The last two decades have brought a rapidly increasing demand for real-time services, in which jobs have specific timing requirements. Examples of such services include voice and video transmission in telecommunication networks, manufacturing systems, where the orders have due dates, tracking systems and real-time control systems. Another important class of applications arises in medical scheduling problems, like prioritizing admissions to emergency rooms or organ allocation.
In the theory of real-time systems, hard, firm and soft deadlines are usually distinguished. Every hard deadline has to be met, or a system failure occurs. This is the case, for example, in modelling avionics or car engine control systems. In order to satisfy all the job timing requirements, strong assumptions on the model primitives, like common, deterministic initial lead times of jobs from the same task, boundedness of the job execution times and positive lower bounds for the interarrival times, are necessary. The subsequent considerations are based on these bounds, leading to worst-case analysis and, in many cases, resulting in systems functioning at relatively low levels of average utilization.
Most real-time applications, like video conferencing or video-on-demand, turn out to be resilient to infrequent packet losses, occurring when a small fraction, of the order of 10^{-5}, of packets are excessively delayed or dropped by the network (Sivaraman et al. 2001). To model such systems, firm or soft deadlines are used. A firm deadline can be missed, but there is no value in completing a task after its deadline has expired. In contrast, a system with soft deadlines permits lateness and uses the jobs completed after their deadlines.
A natural service protocol for real-time systems is Earliest Deadline First (EDF), in which the job with the shortest remaining lead time, i.e., the difference between its deadline and the current time, is selected for service. It is known that for a single server, single customer class queueing system with customer deadlines, this discipline is usually optimal. The precise meaning of this optimality was formalized in a number of ways, starting at least from Liu and Layland (1973), who proved optimality of the EDF scheduling algorithm in a hard deadline environment. Panwar and Towsley (1988) demonstrated that EDF minimizes the fraction of customers who miss their deadlines within the class of preemptive disciplines in a G/M/1 queue. In the context of firm deadlines, Panwar and Towsley (1992) showed that EDF minimizes the fraction of reneging customers, not served to completion due to elapsed deadlines, in a G/M/c queue. More recently, Kruk et al. (2011) obtained a similar result for a single server system, in which the amount of reneged work was used as a performance measure.
In this paper, we present counterparts of the above-mentioned EDF optimality results for preemptive resource sharing networks with arbitrary topology and soft job deadlines. In order to quantify the ability of such a network to meet the file transmission timing requirements, for each element i from the set I of available routes in this network and each time t ≥ 0, we introduce a (random) "cumulative idleness distribution function" Y_i(t, ·). Here, for each s ∈ R, Y_i(t, s) denotes the cumulative idleness by time t with regard to transmission of flows with lead times at time t not greater than s. We want to find service protocols minimizing the vector of functions (Y_i)_{i∈I} or their sum Σ_{i∈I} Y_i - equivalently, maximizing the corresponding cumulative transmission times - with respect to the pointwise functional inequality. (The latter relation is defined as follows: for functions f, g of two variables t, s, we have f ≤ g if and only if f(t, s) ≤ g(t, s) for all their arguments t and s.) If such minimizing disciplines exist, we call them pathwise minimal or additively minimal, respectively. These notions are somewhat delicate, because for networks with multiple resources, the corresponding orderings are, in general, partial, but not necessarily linear. This means that some of the network protocols may not be comparable with one another and multiple minimal elements may exist. Nevertheless, without any distributional assumptions on the stochastic model primitives, we prove that the EDF resource sharing protocol is pathwise minimal (Theorem 1). A partial converse to this result is the fact that in a pathwise minimal resource sharing network, flows on every route are scheduled according to the EDF discipline (Theorem 2). In particular, a single-server, single customer class queueing system, which is a special case of our network with just a single resource, is pathwise minimal if and only if the system is working under the EDF service discipline (Corollary 1). Furthermore, we
show that under mild distributional assumptions, the EDF resource sharing protocol is additively minimal (Theorem 3). We also discuss similar notions and results in which the idleness count is kept on the resource, rather than route, basis, and some related methods of performance evaluation. Our arguments are pathwise, somewhat similar in spirit to the proofs of the optimality result in Kruk et al. (2011) and Schrage's well known theorem stating that the Shortest Remaining Processing Time (SRPT) discipline minimizes the queue length in a single-server system (Schrage 1968). Several examples, illustrating our theoretical developments, are also provided.
To our knowledge, EDF resource sharing networks with soft deadlines have not been analyzed in the literature. Our hope is that this paper will provide a starting point for a systematic study of this important topic.
Related EDF resource sharing systems with hard deadlines were investigated in the computer science literature. The work most relevant to this paper is Baruah (2006), in which a computing platform consisting of a preemptable processor and a number of non-preemptable resources, shared by jobs of different tasks, is considered. The jobs are scheduled according to the EDF discipline, with access to shared resources arbitrated by the Stack Resource Policy (SRP). The main results of Baruah (2006) are a test for schedulability (i.e., the ability to meet all the deadlines) for this system and a proof of optimality of the EDF + SRP policy in the following sense: if a system fails the EDF + SRP schedulability test, then, for some data, it also misses deadlines under any other work-conserving discipline.
A lot of research activity has been devoted to investigating resource sharing networks with other service protocols. Particular interest has been paid to fair bandwidth sharing, introduced by Massoulié and Roberts (2000). The literature on this topic is large. Here we mention only Gromoll and Williams (2009), Kang et al. (2009) and Vlasiou et al. (2015) as representative examples of related asymptotic results. Stability (or the lack thereof) of the SRPT protocol in resource sharing networks was investigated by Verloop et al. (2005).
Finally, let us mention a growing body of literature devoted to "conventional" multiserver, multiclass queueing networks with the EDF protocol. Several fluid and diffusion approximations for such systems have already been developed, see, e.g., Bramson (2001), Kruk et al. (2004) or Kruk (2011) and the references given there. This theory is fundamentally different from the one considered here, since customers of a multiclass queueing network visit different servers along their routes in succession, while flows in a bandwidth sharing network need access to all the resources on their routes simultaneously.
This paper is organized as follows. In Sect. 2 we define a stochastic model for an EDF resource sharing network. Sections 3 and 4 are devoted to the investigation of minimality and additive minimality of such networks, respectively. Section 5 contains the discussion of related notions of resource-wise minimality and resource-wise additive minimality. Finally, in Sect. 6 we briefly discuss similar comparison methods and minimality concepts based on some statistics of the route/resource idleness distributions, rather than on the entire idleness distribution functions.

Notation
The following notation will be used throughout. For a finite set A, let |A| denote the cardinality of A and let 2^A denote the family of all the subsets of A. Let R denote the set of real numbers. For a, b ∈ R, we write a ∨ b (a ∧ b) for the maximum (minimum) of a and b, a^+ for a ∨ 0, a^− for (−a) ∨ 0 and ⌊a⌋ for the largest integer less than or equal to a. Vector inequalities are to be interpreted componentwise, i.e., for x, y ∈ R^n, we write x ≤ y if and only if x_k ≤ y_k for k = 1, . . ., n. Functional inequalities are to be interpreted pointwise, i.e., for f, g : A → R^n, we write f ≤ g if and only if f(x) ≤ g(x) for all x ∈ A. By convention, a sum over the empty set of indices equals zero.
The Borel σ-field on R will be denoted by B(R). For B ∈ B(R), we denote the indicator of the set B by I_B. The set of bounded, continuous real functions on R will be denoted by C_b(R). For a function f(x, y) of two variables, let d_x f(x, y) denote the differential of f(x, y) with respect to x, i.e., dg_y(x), where g_y(x) = f(x, y) is a function of x depending on a parameter y.
Let M denote the set of finite, nonnegative measures on B(R). For μ ∈ M and a Borel function f integrable with respect to μ, we define ⟨f, μ⟩ = ∫_R f dμ. The set M is endowed with the weak topology. That is, for ξ_n, ξ ∈ M, we have ξ_n → ξ if and only if ⟨f, ξ_n⟩ → ⟨f, ξ⟩ for all f ∈ C_b(R). With this topology, M is a Polish space (see Prohorov 1956). We denote the zero measure in M by 0 and the measure in M that puts one unit of mass at a point x ∈ R by δ_x.
All stochastic processes used in this paper are assumed to have paths that are right continuous with finite left limits (r.c.l.l.). For a Polish space S, we denote by D([0, ∞), S) the space of r.c.l.l. functions from [0, ∞) into S.

Network structure
We consider a network with a finite number of resources (nodes), labelled by j = 1, . . ., J, and a finite set of routes, labelled by i = 1, . . ., I. Each route may be identified with a nonempty subset of J = {1, . . ., J}, interpreted as the set of resources used by this route. Let A = [a_{ji}] be the J × I incidence matrix in which a_{ji} = 1 if resource j is used by route i and a_{ji} = 0 otherwise. Let I = {1, . . ., I}. Then the set R(i) of resources used by route i may be described by the equation R(i) = {j ∈ J : a_{ji} = 1}. Similarly, the set F(j) of routes using the resource j is defined by the equation F(j) = {i ∈ I : a_{ji} = 1}. By a flow on route i we mean a continuous transmission of a file through the resources used by this route. We assume that a flow takes simultaneous possession of all the resources on its route during the transmission. For convenience, we also assume that all the resources have a unit service rate.
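The bookkeeping between the incidence matrix A and the sets R(i), F(j) can be sketched in a few lines of Python. The 2 × 3 matrix below is a hypothetical topology chosen only for illustration (one route spanning both resources, two single-resource routes); it is not prescribed by the model.

```python
# Hypothetical incidence matrix A = [a_ji] for J = 2 resources, I = 3 routes.
A = [
    [1, 0, 1],   # resource 1 is used by routes 1 and 3
    [1, 1, 0],   # resource 2 is used by routes 1 and 2
]
J, I = len(A), len(A[0])

# R(i) = {j : a_ji = 1}: resources used by route i
R = {i + 1: {j + 1 for j in range(J) if A[j][i] == 1} for i in range(I)}
# F(j) = {i : a_ji = 1}: routes using resource j
F = {j + 1: {i + 1 for i in range(I) if A[j][i] == 1} for j in range(J)}

print(R)   # route 1 uses both resources, routes 2 and 3 use one each
print(F)
```

With this topology a flow on route 1 occupies both resources simultaneously, so it excludes flows on routes 2 and 3 for the duration of its transmission.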

Stochastic primitives
Let (Ω, A, P) be a probability space on which all the random objects to follow will be defined. The initial condition consists of the nonnegative, integer-valued random variables Q_i(0), i ∈ I, counting the numbers of initial flows on each route at time zero, the strictly positive random initial file sizes of the initial flows ṽ_{i,k} and their corresponding initial lead times (deadlines) l̃_{i,k}, where i ∈ I, k = 1, . . ., Q_i(0). The initial flow with service time ṽ_{i,k} and deadline l̃_{i,k} will be called flow k on route i. Let N_i(·) be the exogenous arrival process for the route i ∈ I. For t ≥ 0, N_i(t) represents the number of flows arriving to the ith route in the time interval (0, t], and A_i(t) = Q_i(0) + N_i(t) denotes the number of all flows that have appeared on route i by time t. The kth arrival modelled by N_i(·) will be called flow Q_i(0) + k on route i. For i ∈ I and k ≥ 1, a random variable v_{i,k} represents the initial size of the file associated with the Q_i(0) + kth flow on route i, i.e., the cumulative transfer time of this flow through the network. We assume that for each i ∈ I the random variables {v_{i,k}}_{k≥1} are strictly positive.
For i ∈ I and k ≥ 1, a random variable l_{i,k} represents the initial lead time for the transmission of the file associated with the Q_i(0) + kth flow on route i. Thus, the deadline for the Q_i(0) + kth transmission on route i equals U_{i,k} + l_{i,k}, where U_{i,k} denotes the arrival time of the kth flow modelled by N_i(·).

Residual file sizes, lead times
For t ≥ 0, i ∈ I and k ≤ A_i(t), let w_{i,k}(t) denote the residual size of the file (transmission time) of flow k on route i at time t. Thus, w_{i,k}(·) decreases at rate one during the transmission of the flow k on route i and it is constant otherwise.
To determine whether flows meet their timing requirements, one must keep track of each flow's lead time, where lead time = initial lead time − time elapsed since arrival for flows coming to the system after time zero and lead time = initial lead time − current time for initial flows. More formally, let t ≥ 0, i ∈ I and k ≤ A_i(t). The lead time at time t of flow k on route i is defined by

l_{i,k}(t) = l̃_{i,k} − t for k ≤ Q_i(0), and l_{i,k}(t) = l_{i,k−Q_i(0)} + U_{i,k−Q_i(0)} − t for k > Q_i(0).

We combine the stochastic primitives defined above into the following measure-valued arrival processes: for i ∈ I and t ≥ 0, let E_i(t) = Σ_{k=1}^{N_i(t)} δ_{l_{i,k} + U_{i,k} − t}.
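The two branches of the lead-time bookkeeping can be illustrated by a tiny sketch; the function name and the numeric values below are invented for the example.

```python
def lead_time(t, initial_lead, arrival_time=None):
    """Lead time at time t: initial lead minus elapsed time.

    arrival_time=None marks an initial flow, present at time zero,
    whose deadline is measured from time zero; otherwise the deadline
    is measured from the flow's arrival time.
    """
    if arrival_time is None:
        return initial_lead - t              # initial flow
    return initial_lead - (t - arrival_time)  # later arrival

print(lead_time(3.0, 5.0))                     # initial flow: 5 - 3
print(lead_time(3.0, 5.0, arrival_time=1.5))   # arrived at 1.5: 5 - (3 - 1.5)
```

Both branches decrease at rate one in t, which is why a flow's deadline (lead time plus current time) stays constant over time.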

Basic performance processes
For t ≥ 0 and i ∈ I, the measure-valued state descriptors for route i are defined by

Q_i(t) = Σ_{k ≤ A_i(t): w_{i,k}(t) > 0} δ_{l_{i,k}(t)},   W_i(t) = Σ_{k ≤ A_i(t)} w_{i,k}(t) δ_{l_{i,k}(t)}.

The random measure Q_i(t) (resp. W_i(t)) puts the unit mass (resp., the mass equal to the corresponding residual transmission time) at the lead time of any flow present on route i at time t.
The current lead time process for route i will be denoted by C_i(·), where C_i(t) is the smallest of the lead times of the flows present on route i at time t.

Service protocol
The network operates under the preemptive EDF policy, dynamically allocating bandwidth to flows with the shortest lead time.In the case of preemption, we assume preempt-resume and no setup, switchover or other type of overhead.Such a protocol is relatively straightforward to describe in the case of multiclass queueing networks, but it needs to be defined carefully in the case under consideration.
Let t ≥ 0 be such that Q(t) ≠ 0 and let i_0 ∈ I, k_0 ≤ A_{i_0}(t) be such that w_{i_0,k_0}(t) > 0 and l_{i_0,k_0}(t) is the smallest of the lead times of the flows present in the system at time t. Here and elsewhere we assume that ties are broken in an arbitrary manner; for example, here we may choose the smallest possible pair i_0, k_0 with the required properties, according to the lexicographic order. The flow k_0 on route i_0 is chosen for transmission at time t.
Let I_1 = {i ∈ I : R(i) ∩ R(i_0) = ∅}. If no flows await transmission on the routes in I_1, then the assignment of flows for transmission at time t is finished, because no more flows can be transmitted at that time. Otherwise let i_1 ∈ I_1, k_1 ≤ A_{i_1}(t) be such that w_{i_1,k_1}(t) > 0 and l_{i_1,k_1}(t) is the smallest of the lead times of the flows present in the system at time t which are on routes belonging to the set I_1. We choose the flow k_1 on route i_1 for transmission at time t.
If no flows await transmission on the routes in I_2 = {i ∈ I_1 : R(i) ∩ R(i_1) = ∅}, we stop; otherwise we continue in this way until, at some step n, the set I_n = {i ∈ I_{n−1} : R(i) ∩ R(i_{n−1}) = ∅} contains no routes with flows awaiting transmission.

In what follows, we will often compare the performance of the EDF policy defined above with the performance of other service disciplines. In some of them, flows on every route are scheduled for transmission according to the EDF protocol, i.e., in the order of increasing lead times, although the choice of routes on which the transmission takes place does not have to conform to the EDF discipline. An example of such a system is a resource sharing network with fixed priorities of routes, in which flows on every route are served according to EDF.
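The greedy assignment described above can be sketched as a short routine: repeatedly pick the waiting flow with the smallest lead time whose route shares no resource with a route already chosen. The topology and flow data below are hypothetical, and each flow is summarized only by its route and lead time.

```python
def edf_assign(flows, routes):
    """flows: list of (route, lead_time) pairs awaiting transmission.
    routes: route -> set of resources it uses.
    Returns the routes chosen for transmission at the current instant."""
    busy, chosen = set(), []
    # smallest lead time first; the (lead, route) key breaks ties
    # lexicographically, as assumed in the text
    for route, lead in sorted(flows, key=lambda f: (f[1], f[0])):
        if not (routes[route] & busy):   # all resources on the route free?
            busy |= routes[route]
            chosen.append(route)
    return chosen

routes = {1: {1, 2}, 2: {2}, 3: {1}}   # a hypothetical 2-resource topology
# route 1 wins the tie and occupies both resources, blocking routes 2 and 3
print(edf_assign([(1, 1.0), (2, 2.0), (3, 1.0)], routes))
# without route 1, routes 3 and 2 use disjoint resources and run together
print(edf_assign([(2, 2.0), (3, 1.0)], routes))
```

Note that once a route is chosen, every route intersecting it (including itself) is excluded from the rest of the scan, mirroring the sets I_1, I_2, . . ., I_n of the protocol.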

Network equations
In order to define the network equations, we introduce the following random fields. For i ∈ I, t ≥ 0 and s ∈ R, let E_i(t, s) = ⟨I_{(−∞,s]}, E_i(t)⟩ and Z_i(t, s) = ⟨I_{(−∞,s]}, Q_i(t)⟩. In other words, E_i(t, s) is equal to the number of external arrivals by time t of flows on route i with lead times at time t less than or equal to s and Z_i(t, s) is the number of flows on route i with lead times at time t less than or equal to s which are still present in the system at that time. Let D_i(t, s) and T_i(t, s) denote the number of departures (i.e., transmission completions) and the cumulative transmission time by time t corresponding to each route i of flows with lead times at time t less than or equal to s. Let Y_i(t, s) = t − T_i(t, s), i ∈ I, denote the cumulative idleness by time t with regard to transmission of flows on route i with lead times at time t less than or equal to s and let Y(t, s) = (Y_i(t, s))_{i∈I}. For i ∈ I, t′, t ≥ 0 and s ∈ R, let S_i(t′, t, s) denote the number of transmission completions of flows on route i having lead times at time t less than or equal to s, by the time the system has spent t′ units of time transmitting these flows. Finally, let

X = (Z, D, T, Y). (1)

By definition, all the components of X are nonnegative. Moreover, all the coordinates of Z(t, ·) and of the increments D(t, ·) − D(t′, ·), T(t, ·) − T(t′, ·), t ≥ t′ ≥ 0, are nonnegative and nondecreasing. The process X satisfies the network equations (2)-(5), valid for every t ≥ t′ ≥ 0 and s ∈ R. In particular, the Eq. (5) imposes the resource capacity constraints.
In the next section we will show that the performance process X defined by (1) is efficient in the class of solutions of the corresponding network equations, in the sense of minimizing the associated idleness process Y. (For a precise statement, see Definitions 1-2, to follow.) To prove this result rigorously, we must exclude the possibility of "fake transmissions" of flows which are not present in the system during the transmission time. Such "fake transmissions" formally reduce the system idleness, although they have no influence on the real behavior of the network. More precisely, we will only consider systems satisfying the additional equation (6). By definition, the process X defined above (and, in fact, the vector of performance processes corresponding to any "reasonable" service discipline) satisfies (6). The Eqs. (2)-(6), together with the nonnegativity and monotonicity assumptions listed below (1), express general properties of resource sharing networks with all resources having the unit service rate. In this context, let us stress that the departure counting functions S_i appearing in (3) depend not only on the stochastic primitives, but also on the network service protocol. For example, if I = J = 1, then the network under consideration is just a single server queue with a single customer class. It is well known that in this case the SRPT policy minimizes the queue length, and thus maximizes the total number of departures, at any given point of time (Schrage 1968). Therefore, the random function S for this system is evidently different from the one corresponding to the EDF service protocol.

Minimality
We have already noted that the Eqs. (2)-(6) are satisfied by systems working under various service disciplines, with different departure counting functions S_i. In fact, even for given initial distribution Z(0, ·), external arrival process E and the random functions S_i, we do not have uniqueness of solutions to (2)-(6), subject to the nonnegativity and monotonicity assumptions listed below (1). Indeed, the above equations do not imply any lower bounds on the transmission rates, allowing for excessive system idleness. To illustrate this point, let us consider the following example.
Example 1 (Idle queue) Let I = J = 1. To fix ideas, assume that the random function S is the same as for the EDF system described in Sect. 2.5, with the stochastic primitives defined in Sect. 2.2. For these stochastic primitives, let X be the process defined by (1) for a system which never transmits anything, so that T ≡ 0 and Y(t, s) = t for all t ≥ 0, s ∈ R. Then X satisfies (2)-(6), together with the nonnegativity and monotonicity assumptions listed below (1). At the same time, X is clearly not the vector of performance processes describing the dynamics of the corresponding EDF queue unless A ≡ 0.
On the other hand, EDF resource sharing networks, like other networks with resource sharing, usually do not have the non-idling property. The following example is typical.
Example 2 (Linear network) Let I = 3, J = 2, R(1) = {1}, R(2) = {2} and R(3) = {1, 2}. If, at some time t, there are flows awaiting transmission on routes 1 and 3, there are no flows on route 2, and a flow on route 1 has the smallest lead time in the system, then at time t a flow on route 1 is transmitted and the system is unable to transmit flows on route 3. Consequently, the second resource is idle, although there are flows on a route using this resource which are waiting for transmission at that time.
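The blocking effect in a linear-network configuration can be checked with elementary set arithmetic. The topology below (a long route spanning both resources, two single-resource routes) is an assumption made for illustration.

```python
# Hypothetical linear-network topology: route 3 spans both resources.
R = {1: {1}, 2: {2}, 3: {1, 2}}

busy = set(R[1])                 # a flow on route 1 holds resource 1
blocked = bool(R[3] & busy)      # route 3 also needs resource 1
idle_resources = {1, 2} - busy   # resource 2 idles, yet route-3 flows wait
print(blocked, idle_resources)
```

This is exactly the non-idling failure: a resource stays idle even though a flow on a route using it is present, simply because that route's other resource is occupied.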
Below we shall define a notion of pathwise minimality which enforces the transmission of flows on every given route in the EDF order (see Theorem 2, to follow) and, moreover, implies a counterpart of non-idleness for EDF resource sharing networks.
Definition 1 Let X^(k) = (Z^(k), D^(k), T^(k), Y^(k)), k = 1, 2, be two performance processes of the form (1) for resource sharing networks, having the same incidence matrix A and the same stochastic primitives (in particular, with Z^(1)(0, ·) = Z^(2)(0, ·) and the same external arrival function E), satisfying (2)-(6), together with the nonnegativity and monotonicity assumptions made below (1). We write X^(1) ⪯ X^(2) if Y^(1)(ω) ≤ Y^(2)(ω) for all ω ∈ Ω. Recall that vector inequalities are to be interpreted componentwise and functional inequalities are to be interpreted pointwise. For example, the condition Y^(1)(ω) ≤ Y^(2)(ω) means that Y_i^(1)(t, s)(ω) ≤ Y_i^(2)(t, s)(ω) for every i ∈ I, t ≥ 0 and s ∈ R.

Definition 2 A performance process X of the form (1), satisfying (2)-(6), together with the nonnegativity and monotonicity assumptions made below (1), is called pathwise minimal if for any process X′ such that X′ ⪯ X, we have X ⪯ X′.
In other words, the process X is pathwise minimal if the inequality X′ ⪯ X for a process X′ as in Definition 1 implies Y′ = Y. In the above definitions, we do not require that the networks under consideration have the same departure functions S_i. Consequently, the ordering "⪯" is suitable for comparing the effects of implementing different transmission protocols in the same stochastic system.
The relation "⪯" is reflexive and transitive, although it is not necessarily antisymmetric. Indeed, the functions S_i in the two systems under comparison are, in general, different, so (3), together with the identity T^(1)(ω) ≡ T^(2)(ω), does not have to imply that D^(1)(ω) ≡ D^(2)(ω). However, if we truncate the performance processes of the type (1) to their last two coordinates (T(t, s), Y(t, s)), then "⪯" is a partial ordering on the set of such pairs and a performance process is pathwise minimal if and only if it is a minimal element relative to this ordering.
Remark 1 In probability theory, it is customary to treat random variables as equivalence classes of measurable functions which coincide almost surely. Consequently, these objects are defined only up to a set of P measure zero. From this point of view, in Definition 1 we should have required that Y^(1)(ω) ≤ Y^(2)(ω) for almost all (instead of all) ω ∈ Ω. Here we take a slightly different, but equivalent, standpoint, treating every random variable defining the stochastic models under consideration as a representative from the corresponding equivalence class (arbitrarily chosen, but fixed thereafter), with well-defined values for each ω ∈ Ω. This convention is convenient for making pathwise comparisons for every (instead of almost every) possible scenario, but it does not have any essential influence on what follows.
In general, there are multiple minimal elements corresponding to the same primitives. For example, any resource sharing network with fixed priorities of routes, in which flows on any given route are served according to EDF, is pathwise minimal. For resource sharing networks, pathwise minimality means that the system serves flows on each route in the EDF order (see Theorem 2, to follow) and it is as efficient (i.e., non-idle) as it can be, given the network topology, the stochastic primitives and the prescribed algorithm for bandwidth allocation between the routes. The following observation motivates our further developments.
Theorem 1 The vector X of performance processes given by (1), corresponding to the EDF resource sharing protocol defined in Sect. 2.5, is pathwise minimal.
Proof Fix ω ∈ Ω. For the remainder of the proof, all the random objects under consideration are evaluated at this ω. Suppose that X(ω) is not minimal. Let X′ = (Z′, D′, T′, Y′) be such that X′ ⪯ X and X ⪯̸ X′. Let Q′(t) = lim_{s→∞} Z′(t, s) for each t ≥ 0 and let

t_0 = sup{t ≥ 0 : Y′(u, ·) = Y(u, ·) for all u ∈ [0, t]}. (7)

Since Y′(0, ·) = Y(0, ·) = 0 by assumption, the set in (7) is nonempty. On the other hand, the relation X ⪯̸ X′ implies that t_0 < ∞. By (5) and the fact that T, T′ are continuous in the time variable, we have

T′(t_0, ·) = T(t_0, ·),   Y′(t_0, ·) = Y(t_0, ·). (8)

Choose t_1 > t_0 such that

there are no external arrivals and no transmission completions in either system in the time interval (t_0, t_1), (9)

and let t ∈ [t_0, t_1). In the remainder of the proof we will use the notation introduced in Sect. 2.5. Assume first that Q(t_0) ≠ 0. By the definition of the EDF service protocol, the k_0th flow on route i_0 is transmitted in the time interval [t_0, t_1) by the EDF system. This, together with the inequality X′ ⪯ X, (8) and monotonicity of Y′, implies that for t ∈ [t_0, t_1) and s ≥ l_{i_0,k_0}(t_0), we have

Y′_{i_0}(t, s − (t − t_0)) = Y_{i_0}(t, s − (t − t_0)) = Y_{i_0}(t_0, s). (10)

Consequently, for t and s as above, T′_{i_0}(t, s − (t − t_0)) − T′_{i_0}(t_0, s) = t − t_0. For t ∈ [t_0, t_1) and s < l_{i_0,k_0}(t_0), we have s − (t − t_0) < l_{i_0,k_0}(t_0) − (t − t_0) = l_{i_0,k_0}(t) and hence, by the definition of i_0, k_0, Z_{i_0}(t, s − (t − t_0)) = 0. In particular, Z_{i_0}(t_0, s) = 0. However, by (8), we have T′_{i_0}(t_0, s) = T_{i_0}(t_0, s) and hence

Z′_{i_0}(t_0, s) = 0. (11)

Indeed, by Definition 1, the arrival processes for both the systems under comparison are the same, so the transmission time sufficient to empty the i_0th route from the customers with lead times not greater than s at time t_0 in one of them is sufficient to accomplish the same task in the other one as well. The Eqs. (2), (9) and (11) imply that Z′_{i_0}(t, s − (t − t_0)) = 0 for t ∈ [t_0, t_1), since there are no arrivals or departures in the system represented by X′ in the time interval (t_0, t_1). This in turn, together with (6), implies

T′_{i_0}(t, s − (t − t_0)) = T′_{i_0}(t_0, s) (12)

for t ∈ [t_0, t_1), so for such t, by (4), we have Y′_{i_0}(t, s − (t − t_0)) = Y′_{i_0}(t_0, s) + (t − t_0). This equality, together with (10), shows that Y′_{i_0}(t, s) = Y_{i_0}(t, s) for all t ∈ [t_0, t_1), s ∈ R.
Proceeding similarly, for Ĩ := {i_0, . . ., i_{n−1}}, we get

Y′_i(t, s) = Y_i(t, s),   i ∈ Ĩ, t ∈ [t_0, t_1), s ∈ R. (13)

If Ĩ = I, we have obtained

Y′(t, ·) = Y(t, ·),   t ∈ [t_0, t_1). (14)

If this is not the case, let i ∈ I\Ĩ. By the definition of the service protocol in the EDF resource sharing system, at any time t ∈ [t_0, t_1) no flow on route i is chosen for transmission. This may be due either to the equality Q_i(t) = 0 on [t_0, t_1), or to the fact that i ∉ I_n. In the first case, arguing as in the justification of (11), we get Q′_i(t_0) = Q_i(t_0) = 0. This, together with (9), implies that Q′_i(t) = 0 for all t ∈ [t_0, t_1). Hence, by (6), for any s ∈ R, we have (12) with i substituted for i_0, which implies

Y′_i(t, s) = Y_i(t, s),   t ∈ [t_0, t_1), s ∈ R, (15)

by the same argument as the one at the end of the previous paragraph. If i ∉ I_n, then R(i) ∩ R(i_m) ≠ ∅ for some i_m ∈ Ĩ. By the definition of the EDF service protocol, for t ∈ [t_0, t_1) and s ≥ l_{i_m,k_m}(t_0), we have Y_{i_m}(t, s − (t − t_0)) = Y_{i_m}(t_0, s). Thus, by (4) and (13), T_{i_m}(t, s − (t − t_0)) − T_{i_m}(t_0, s) = t − t_0, so by (5) with j ∈ R(i) ∩ R(i_m) and monotonicity of T, T′, we have (12) with i substituted for i_0. The increment T′_i(t, · − t) − T′_i(t_0, · − t_0) is nonnegative and nondecreasing, so the validity of (12), with i substituted for i_0, for s ≥ l_{i_m,k_m}(t_0), implies its validity also for s < l_{i_m,k_m}(t_0). Consequently, by (4) and (8), we have (15). Hence, regardless of the case, under the assumption Q(t_0) ≠ 0, (14) holds, which contradicts (7) and (9).
Finally, if Q(t_0) = 0, then for each i ∈ I we proceed as in the case of i ∈ I\Ĩ and Q_i(t) = 0 for t ∈ [t_0, t_1) described above.
The following result is a partial converse of Theorem 1.
Theorem 2 Let a performance process X for a resource sharing network be pathwise minimal.Then the flows on each route i ∈ I of this network are scheduled for transmission according to the EDF protocol.
Proof We follow the ideas of the proof of Theorem 5.1 in Kruk et al. (2011). We argue by contradiction. Suppose that for some ω ∈ Ω and some ī ∈ I, the service protocol on the route ī of the system modelled by the path X(ω) (called system X for simplicity) is different from the EDF scheduling. For the remainder of the proof, all the random objects under consideration are evaluated at this ω. Let t_0 be the first time when the scheduling on route ī in the system X deviates from the EDF policy, either because it transmits a flow with lead time greater than C_ī(t_0), or because it uses a transmission rate lower than the highest available one (e.g., idles) when there are flows waiting for transmission on route ī and there is spare capacity at every resource j ∈ R(ī) (i.e., the inequality (5) is strict for every such j, t′ = t_0 and all t > t_0). In both cases we will construct a policy π′, a modification of the policy π employed in X, yielding a sample path X′ = X′(ω) with

Y′ ≤ Y,   Y′ ≠ Y. (16)

(We do not modify the sample paths X(ω̃), ω̃ ≠ ω, so X′(ω̃) = X(ω̃) for such ω̃ by definition.) Let k be a flow on route ī at time t_0 with lead time C_ī(t_0) at time t_0. The policy π′ emulates π for all routes i ≠ ī at all times, and also for route ī, except as noted below.
In the first case, let p be a flow on route ī, with lead time at time t_0 greater than C_ī(t_0), which is being transmitted at time t_0 under the policy π. From time t_0, whenever π transmits the flow p, π′ transmits the flow k, with the same rate, until the time t_1 when the latter transmission is completed. From time t_1, π′ transmits the flow p whenever π transmits k, with the same rate.
In this case, by the definition of π′, if either 0 ≤ t ≤ t_0, or t > t_0 and s < C_ī(t_0) + t_0, then Y′_ī(t, s − t) = Y_ī(t, s − t). In the remainder of this proof, w′_{ī,k}(·) denotes the residual transmission time of the flow k on route ī under the policy π′. For t ∈ [t_0, t_1] and C_ī(t_0) + t_0 ≤ s < l_{ī,p}(t_0) + t_0, we have

Y′_ī(t, s − t) ≤ Y_ī(t, s − t). (17)

Indeed, under the policy π′, the transmission of the flow k on the route ī is completed at the moment when the extra transmission time assigned to it by π′ (in addition to the time already assigned to it by π) equals its residual transmission time under the policy π. By (17), for t ≥ t_1 and C_ī(t_0) + t_0 ≤ s < l_{ī,p}(t_0) + t_0, we have Y′_ī(t, s − t) ≤ Y_ī(t, s − t), with strict inequality for some such t and s. We have proved that (16) holds.
In the second case, from time t_0, whenever π "idles" on route ī, i.e., there is spare capacity at every resource j ∈ R(ī) under π, the policy π′, in addition to emulating the transmissions of π, transmits the flow k, with the highest available rate, until the time t_1 at which its cumulative transfer time elapses. From time t_1, when π transmits the flow k with some rate, the policy π′ lets the same transmission rate on route ī go unused.
To describe this in more detail, for j ∈ J, t ≥ 0 and s ∈ R, let T̃_j(t, s) = Σ_{i∈F(j)} T_i(t, s) be the cumulative transmission time of the resource j on the time interval [0, t] related to flows with lead times at time t not greater than s and let T̃_j(t) = lim_{s→∞} T̃_j(t, s) be the total transmission time of j on this time interval. The functions T̃_j are nondecreasing by definition and hence, by (5), Lipschitz continuous, with the Lipschitz constant 1. It is well known that Lipschitz functions are absolutely continuous, and thus differentiable almost everywhere (a.e.) with respect to the Lebesgue measure, see, e.g., Wheeden and Zygmund (1977), pp. 115-116. For t ≥ t_0, let r(t) be the unused capacity of the system X for the route ī at time t, which may be defined mathematically by r(t_0) = 0 and

r(t) = 1 − max_{j∈R(ī)} (d/dt) T̃_j(t) for a.e. t > t_0.

The case assumption implies that ∫_{t_0}^{t} r(u) du > 0 for every t > t_0. Note also that, by the definition of π′, for t_0 ≤ t ≤ t_1 and s ≥ C_ī(t_0) + t_0,

T′_ī(t, s − t) = T_ī(t, s − t) + ∫_{t_0}^{t} r(u) du. (20)
As in the previous case, if either 0 ≤ t ≤ t_0, or t > t_0 and s < C_ī(t_0) + t_0, then Y′_ī(t, s − t) = Y_ī(t, s − t). Let t_1 be the time when the system working under π′ completes the transmission of the flow k on route ī. Then for t_0 ≤ t ≤ t_1 and s ≥ C_ī(t_0) + t_0, we have

Y′_ī(t, s − t) = t − T′_ī(t, s − t) = Y_ī(t, s − t) − ∫_{t_0}^{t} r(u) du,

where the last equality follows from (20). In particular, Y′_ī ≤ Y_ī, Y′_ī ≠ Y_ī and (16) holds.
Summarizing, in both cases we have (16), which contradicts pathwise minimality of X.

Corollary 1
The performance process of a single-server, single customer class queueing system is pathwise minimal if and only if the system is working under the EDF service discipline.
This follows immediately from Theorems 1 and 2 in the case of I = J = 1.
Definition 3 Let X^(k), k = 1, 2, be two performance processes as in Definition 1. We write X^(1) ⪯_a X^(2) if Σ_{i∈I} Y_i^(1)(ω) ≤ Σ_{i∈I} Y_i^(2)(ω) for all ω ∈ Ω.

Clearly, the relation X^(1) ⪯ X^(2) implies X^(1) ⪯_a X^(2), but the opposite implication is not necessarily true.

Definition 4 A performance process X of the form (1), satisfying (2)-(6), together with the nonnegativity and monotonicity assumptions made below (1), is called additively minimal if for any process X′ such that X′ ⪯_a X, we have X ⪯_a X′.
Thus, the process X is additively minimal if the inequality X′ ≼ X for some process X′ always entails X ≼ X′. Remarks made after Definition 2, up to Remark 1, have (more or less obvious) counterparts for Definitions 3-4. We also have

Remark 2 An additively minimal process X is pathwise minimal.
Proof Suppose that X is additively minimal and let X′ = (Z′, D′, T′, Y′) be such that X′ ⪯ X. Then X′ ≼ X, and hence X ≼ X′. If the relation X ⪯ X′ does not hold, then there exist i ∈ I, t ≥ 0 and s ∈ R with Y_i(t, s) > Y′_i(t, s). Since X′ ⪯ X gives Y′_i ≤ Y_i for every i ∈ I, summing over i yields a strict inequality between the aggregated idleness processes at (t, s), which contradicts X ≼ X′.

Interestingly, a pathwise minimal process does not have to be additively minimal. In fact, even a preemptive EDF resource sharing network, defined in Sect. 2.5, may fail to be additively minimal, because its tie-breaking rule is not always optimal in this respect. The following example illustrates this point.
Example 3 We consider two EDF resource sharing networks, both with I = 3, J = 2, R(1) = {1, 2}, R(2) = {2}, R(3) = {1} and the same stochastic primitives. In the first network, ties are broken according to the lexicographic order. In the second one, an "anti-lexicographic" order, with flows on route 3 (respectively, 2) getting priority over the flows on routes 1, 2 (respectively, 1) with the same lead times, is implemented on the time interval [0, 2], while the lexicographic tie-breaking rule is used after time 2. Denote the performance processes for these systems by X and X′, respectively. Assume the initial conditions given above for the first network; by definition, the corresponding statements are valid also for the second one.
For t ∈ [0, 1), in the first network the initial flow on route 1, with initial lead time 1, is being transmitted, while the initial flows on the other routes wait for transmission. In the same time interval, in the second network the initial flow on route 3, with initial lead time 1, is chosen for transmission, and the initial flow on route 2, with initial lead time 2, is also being transmitted. For t ∈ [1, 2), the first system transmits the initial flows on routes 2, 3, while there are no flows on the first route. In the second system, at time t ∈ [1, 2) only the initial flow on route 1, with initial lead time 1, is present. At time 2 both systems become empty and their performance processes coincide after that time.
Summarizing, we have X′ ≼ X, while the relation X ≼ X′ does not hold, so the first EDF system is not additively minimal, although it is pathwise minimal by Theorem 1.
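The comparison in Example 3 can be checked numerically by hard-coding the two schedules described above. Unit flow sizes and unit transmission rates are assumptions of this sketch, not data stated in the text:

```python
# Flows transmitted in each unit interval under the two tie-breaking rules of Example 3
lex_schedule = {0: [1], 1: [2, 3]}      # [0,1): route 1;     [1,2): routes 2, 3
antilex_schedule = {0: [3, 2], 1: [1]}  # [0,1): routes 3, 2; [1,2): route 1

def cumulative_transmission(schedule, t):
    """Total transmission time over all routes on [0, t], unit rates assumed."""
    return sum(len(schedule[u]) for u in range(t))

# The anti-lexicographic rule is strictly ahead at t = 1; both systems tie at
# t = 2, when both networks become empty.
```

The strict advantage at the intermediate time t = 1, together with the tie at t = 2, is exactly the failure of additive minimality of the lexicographic system.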
In the light of Example 3, it is tempting to look for a "smart" tie-breaking rule which makes the underlying EDF bandwidth sharing network additively minimal. For the topology from the last example the problem is easy: it suffices to give preemptive priority to the shorter routes (2 and 3) over the longer one (1). This approach may be generalized to linear networks, but it is not clear to us how to construct such a "smart" rule in the general case. Moreover, such a rule has to adjust priorities dynamically to the current state of the network, as the following example illustrates.
Example 4 Consider an EDF resource sharing network with routes i = 1, . . ., 4. If the system assigns higher priority to route 1 on the time interval [0, 1), then the flows on routes 1 and 4 are being transmitted on this time interval, while the flows on routes 2, 3 wait for transmission; on the time interval [1, 2), the flows on routes 2, 3 are being transmitted. In the opposite case, in which the system assigns higher priority to route 2 on the time interval [0, 1), the flows on routes 2 and 3 are being transmitted on this time interval, while the flows on routes 1, 4 wait; on the time interval [1, 2), the flows on routes 1 and 4 are being transmitted. Consequently, an additively minimal EDF protocol uses the second tie-breaking rule and gives preference to route 2, instead of 1, on the time interval [0, 1). Suppose that at some time t_0 > 2 we have, in a sense, the "mirror image" of the situation at time 0 and, moreover, the next arrival after time t_0 happens later than t_0 + 2. Then, by the same token, an additively minimal EDF protocol should give preference to route 1, instead of 2, on the time interval [t_0, t_0 + 1). In particular, an additively minimal EDF tie-breaking rule for the system under consideration cannot be time-independent.
The following theorem shows that in the absence of ties, EDF resource sharing networks are additively minimal.
Theorem 3 Assume that the stochastic primitives for a resource sharing network are such that no two flows on different routes have the same deadline. Then the vector X of performance processes given by (1), corresponding to the EDF resource sharing protocol defined in Sect. 2.5, is additively minimal.
The assumption of Theorem 3 is satisfied if no two initial flows on different routes have the same deadline, the initial condition is independent of the other stochastic primitives, N_i, i ∈ I, are mutually independent delayed renewal processes, {l_{i,k}}_{k≥1}, i ∈ I, form mutually independent i.i.d. sequences, independent of N_i, i ∈ I, and either all the interarrival time distributions, or all the initial lead time distributions, are continuous. The latter assumption holds, for example, if the arrival processes N_i, i ∈ I, are Poisson. (From the point of view presented in Remark 1, in this case we have to choose representatives of all the stochastic primitives under consideration in such a way that there is no tie for any ω ∈ Ω.) Consequently, the assumption of Theorem 3 is rather mild.
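The almost sure absence of ties under continuous distributions can be illustrated by sampling. The choice of Poisson arrivals with exponential initial lead times below is an arbitrary illustrative instance of the sufficient conditions above:

```python
import random

random.seed(2024)  # fixed seed so the sketch is reproducible

# Two routes with Poisson arrivals (exponential interarrival times) and
# exponential initial lead times; deadline = arrival time + initial lead time.
deadlines = []
for _ in range(2):          # routes
    t = 0.0
    for _ in range(1000):   # flows per route
        t += random.expovariate(1.0)                   # interarrival time
        deadlines.append(t + random.expovariate(0.5))  # deadline of the flow

# With continuous distributions, a tie (two equal deadlines) has probability zero.
no_ties = len(set(deadlines)) == len(deadlines)
```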

Proof of Theorem 3
Fix ω ∈ Ω. For the remainder of the proof, all the random objects under consideration are evaluated at this ω. Suppose that X(ω) is not additively minimal. Let X′ = (Z′, D′, T′, Y′) be such that X′ ≼ X, but the relation X ≼ X′ does not hold. Let Q′(t) = lim_{s→∞} Z′(t, s) for each t ≥ 0 and let t_0 be defined by (21). Since Y(0, •) = Y′(0, •) = 0 by assumption, the set in (21) is nonempty. On the other hand, t_0 < ∞ by assumption. From the proof of Theorem 1 we know that Y(•, s − •), Y′(•, s − •) are Lipschitz for each s. This, together with (4), implies (22). Let t_1 be defined by (9). In the remainder of the proof we will use the notation introduced in Sect. 2.5.
We will show that (23) holds, contrary to the definition of t_0. The main idea of the proof is to check that in the time interval [t_0, t_1), for m = 0, . . ., n − 1, a flow on route i_m with lead time l_{i_m,k_m}(t) at time t ∈ [t_0, t_1) is being transmitted with unit rate by the system X′ (as well as by X).
Next, we verify that there are no flows on routes i ∈ I_n in the system X′ (and in X) in this time interval. These two facts, together with the network topology, imply that the increments of the processes Y and Y′ on time intervals contained in [t_0, t_1) coincide, which, together with (22), implies (23). We will now provide a detailed argument.

Assume first that Q(t_0) ≠ 0. By the definition of the EDF service protocol, the k_0-th flow on route i_0 is transmitted in the time interval [t_0, t_1) by the EDF system. Fix s < l_{i_0,k_0}(t_0). Then in the EDF system we have Σ_{i∈I} Z_i(t_0, s) = 0. By (22), Σ_{i∈I} T_i(t_0, s) = Σ_{i∈I} T′_i(t_0, s), so Σ_{i∈I} Z′_i(t_0, s) = 0, because both systems have the same stochastic primitives. Moreover, for each i ∈ I, the equations Z_i(t_0, s) = Z′_i(t_0, s) = 0 imply that T_i(t_0, s) = T′_i(t_0, s) = V_i(t_0)(−∞, s], and hence, by (4), Y_i(t_0, s) = Y′_i(t_0, s). Furthermore, by (9), we actually have (24). Thus, by (4), (6), equality (25) holds for all i ∈ I, t ∈ [t_0, t_1) and s < l_{i_0,k_0}(t_0). For s ≥ l_{i_0,k_0}(t_0) and t ∈ [t_0, t_1), we have Y_{i_0}(t, s − (t − t_0)) = Y_{i_0}(t_0, s). By (9), (24) and the no-tie assumption, for i ≠ i_0 and t ∈ [t_0, t_1), we have Z_i(t, l_{i_0,k_0}(t_0) − (t − t_0)) = 0, so (26) follows. If the inequality (26) is strict for some t ∈ [t_0, t_1), then, by (4), (6), (9) and (25), there are flows with the same lead time (namely, l_{i_0,k_0}(t_0) at time t_0) on at least two different routes present in the system modelled by X′ at time t. This, however, contradicts the no-tie assumption. Hence, equality holds in (26) for each t ∈ [t_0, t_1), so in both systems the flows on exactly one route with lead time l_{i_0,k_0}(t_0) at time t_0 are being transmitted on the time interval [t_0, t_1). More precisely, the no-tie assumption and the identity of the stochastic primitives for both systems imply that in both cases this route is the same, namely i_0. Recalling (25), for t ∈ [t_0, t_1) we get
(27) for i = i_0 and (28) for i ≠ i_0. We want to extend the above analysis to all s, i.e., to establish (29). Suppose that (29) is false and let l be a lead time for which it fails. Then there is a flow with lead time l at time t_0 on some route ī ∈ I_1 present in the system modelled by X′ at time t_0. By (22), together with the no-tie assumption and the identity of the stochastic primitives for both systems, there is a flow with lead time l at time t_0 on the same route ī ∈ I_1 in the EDF system at time t_0. This, however, contradicts the definitions of i_1, k_1. We have justified (29). By (29) and its counterpart for the EDF system (following directly from the EDF scheduling algorithm), together with (6) and (9), for i ∈ I_1, t ∈ [t_0, t_1) and s < l_{i_1,k_1}(t_0), we have the analogues of the identities established above. In this way, we have shown (33). This, however, together with (22), contradicts (21). Finally, if Q(t_0) = 0, then an argument similar to the proof of (33) shows that Q′(t) = 0 for all t ∈ [t_0, t_1). Then, by (4) and (6), both systems under consideration are idle on [t_0, t_1), which, together with (22), contradicts (21).
Interestingly, a converse of Theorem 3 is, in general, false, even in the absence of ties, as the following example shows.
Example 5 Consider the network from Example 2, with W_1(0) = δ_2, W_2(0) = δ_3 and W_3(0) = δ_1. Assume, for simplicity, that there are no external arrivals. Suppose that the network protocol gives priority to routes 1, 2 and let X be the corresponding performance process in the form (1). Then, at time t ∈ [0, 1), the flows on routes 1, 2 are being transmitted, while the flow on route 3 waits for transmission. Let X′ be such that X′ ≼ X. Then, on the time interval [0, 1), at least two flows are being transmitted by the system represented by X′. By the network topology, this is possible only if the flows on routes 1, 2 are being transmitted on this time interval. Consequently, the flow on route 3 is being transmitted in the time interval [1, 2) by both systems. Hence, X′ = X and the system represented by X is additively minimal. Note that, by Theorem 3, the EDF service protocol is also additively minimal for this model data, so there may be more than one additively minimal service policy for a given network.

Resource level minimality
The notions of minimality and additive minimality introduced in Sects. 3-4 are based on comparing idleness in transmissions of flows on distinct routes for different service disciplines. It also makes sense to make similar comparisons on the level of the system resources, which leads to somewhat different, but strongly related, concepts.
Recall the processes T̃_j defined by (18). For j ∈ J, t ≥ 0 and s ∈ R, let Ỹ_j(t, s) be the cumulative idleness of the resource j on the time interval [0, t] related to flows with lead times at time t not greater than s. By (18), (35) and (4), we have (36). The counterparts of Definitions 1-2 and 3-4 in this context are as follows.

Definition 5 Let X^(k) = (Z^(k), D^(k), T^(k), Y^(k)), k = 1, 2, be as in Definition 1, and let T̃_j^(k) and Ỹ_j^(k) be defined as in (18) (with T_i replaced by T_i^(k)) and (35), respectively. We write X^(1) ⪯_J X^(2) if the corresponding resource idleness processes Ỹ_j^(k) are ordered accordingly; the additive counterpart ≼_J is defined analogously.

Clearly, the relation X ⪯ X′ implies X ⪯_J X′ [see (36)], which, in turn, implies X ≼_J X′, but the opposite implications are not necessarily true.
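A small sketch may clarify the route-to-resource aggregation. It assumes, as one plausible reading of (18) and (35), that T̃_j sums the route transmission times over the routes i ∈ F(j) passing through resource j, and that idleness is elapsed time minus transmission time under unit capacity; the topology and schedule are those of Example 3, with unit flow sizes and rates assumed:

```python
# Topology of Example 3: route -> set of resources it uses
R = {1: {1, 2}, 2: {2}, 3: {1}}
# F(j): the set of routes passing through resource j
F = {j: {i for i in R if j in R[i]} for j in (1, 2)}

# Cumulative route transmission times T_i(t) at t = 0, 1, 2 under the
# lexicographic EDF schedule of Example 3 (unit sizes and rates assumed)
T = {1: [0, 1, 1], 2: [0, 0, 1], 3: [0, 0, 1]}

# Assumed resource-level aggregate: T~_j(t) = sum of T_i(t) over i in F(j)
T_res = {j: [sum(T[i][t] for i in F[j]) for t in range(3)] for j in F}
# Assumed resource idleness under unit capacity: Y~_j(t) = t - T~_j(t)
Y_res = {j: [t - T_res[j][t] for t in range(3)] for j in F}
```

Under this schedule both resources stay busy throughout [0, 2], so the total resource idleness vanishes, while the urgency-indexed comparison of Example 6 still distinguishes the two tie-breaking rules.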
Definition 7 A performance process X of the form (1), satisfying (2)-(6), together with the nonnegativity and monotonicity assumptions made below (1), is called resource-wise minimal if for any process X′ such that X′ ⪯_J X, we have X ⪯_J X′. Furthermore, it is called resource-wise additively minimal if for any process X′ such that X′ ≼_J X, we have X ≼_J X′.
An argument similar to the proof of Remark 2 shows that a resource-wise minimal process is pathwise minimal and a resource-wise additively minimal process is resource-wise minimal.
Example 6 Let X, X′ be as in Example 3. Recall that X′ ≠ X and X′ ≼ X. However, it is easy to check that X ⪯_J X′ and hence X ≼_J X′.
Examples 3 and 6 show that the relation "≼" is, in a sense, incompatible with "⪯_J" and "≼_J". Consequently, the notions of minimality based on these relations are also different. In the above-mentioned examples, the inequality X′ ≼ X reflects the fact that the "anti-lexicographic" tie-breaking rule allows for transmission on two different routes in the time interval [0, 1), while the lexicographic rule allows for only one transmission. On the other hand, the only flow transmitted by both resources of the system X on this time interval has initial lead time 1, while the flow transmitted by the second resource in the system X′ has initial lead time 2. Thus, under the "anti-lexicographic" tie-breaking rule, the second resource is "doing worse" in terms of the transmitted flow urgency, and therefore X ⪯_J X′ and X ≼_J X′.
Another consequence of Example 6 is the fact that a preemptive EDF resource sharing network is not necessarily resource-wise additively minimal or resource-wise minimal. Nevertheless, the following analogue of Theorem 3 holds.
Theorem 4 Assume that the stochastic primitives for a resource sharing network are such that no two flows on different routes have the same deadline. Then the vector X of performance processes given by (1), corresponding to the EDF resource sharing protocol defined in Sect. 2.5, is resource-wise additively minimal.
This result can be justified by an easy modification of the proof of Theorem 3. It is easy to see that the "anti-EDF" network from Example 5 is also resource-wise additively minimal, so a converse of Theorem 4 is, in general, false, even in the absence of ties.

Related methods of performance evaluation
In real-time resource sharing networks it is natural to assign the service priorities on every route according to the EDF discipline, i.e., so that the corresponding vector of performance processes is pathwise minimal (Theorem 2). However, it is less clear how to allocate the available bandwidth between different routes. The criteria discussed in Sects. 3-5 are based on pointwise comparisons of functions of the two variables t, s. Consequently, inequalities like X ⪯ X′ or X ⪯_J X′ provide a lot of information, but, because of this, many interesting service protocols are not comparable with one another; see Example 5. One way to reduce this problem is to "send s to infinity" in the arguments of Y_i(t, s), Ỹ_j(t, s), considering instead the functions K_i(t) = lim_{s→∞} Y_i(t, s) and K̃_j(t) = lim_{s→∞} Ỹ_j(t, s) of a single variable t ≥ 0. In other words, for each t ≥ 0, we replace the "distribution function" Y_i(t, •) (respectively, Ỹ_j(t, •)) by the corresponding "total mass" K_i(t) (respectively, K̃_j(t)), representing the total idleness on route i (of the resource j, respectively) up to time t. In this way, we obtain the following notions.
It is clear that the inequalities X^(1) ≼ X^(2) and X^(1) ≼_J X^(2) imply X^(1) ⊑ X^(2) and X^(1) ⊑_J X^(2), respectively, but the opposite implications are, in general, false. (Here "⊑" and "⊑_J" denote the total comparison relations introduced in Definition 8.)

Definition 9 A performance process X of the form (1), satisfying (2)-(6), together with the nonnegativity and monotonicity assumptions made below (1), is called totally minimal if for any process X′ such that X′ ⊑ X, we have X ⊑ X′. Furthermore, it is called resource-wise totally minimal if for any process X′ such that X′ ⊑_J X, we have X ⊑_J X′.
In other words, a service protocol for a resource sharing network is totally minimal (resource-wise totally minimal) if it maximizes the total transmission time on all the routes (of all the resources, respectively) in every time interval [0, t], t ≥ 0.
The argument from Example 5 shows that the "anti-EDF" system X considered there is both totally minimal and resource-wise totally minimal. If X′ is the performance process for the corresponding EDF network, then X ⊑ X′, X ⊑_J X′, and the reverse inequalities do not hold, so the EDF service protocol is, in general, neither totally minimal nor resource-wise totally minimal. On the other hand, in that example the EDF network transmits every flow on time, while the other one does not. Thus, in general, the objectives of keeping as many ongoing transmissions (transmitting resources) as possible and meeting the customer timing requirements, although strongly related, may collide with each other, even if the transmission on every route takes place in the EDF order. In fact, the criteria from Definition 9 do not take the customer deadlines into account. As such, they are not limited to real-time systems, but they are unable to reflect the level of satisfaction of the customer timing requirements.
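The trade-off in Example 5 can be made concrete by hard-coding the two schedules and checking completion times against the initial deadlines. Since the topology of Example 2 is not reproduced in this section, the sketch assumes a structure consistent with the argument above (routes 1 and 2 using disjoint single resources, route 3 using both), with unit flow sizes and rates:

```python
# Example 5 data: initial lead times (= deadlines at time 0) per route
deadline = {1: 2, 2: 3, 3: 1}

# Who transmits in each unit interval (assumed topology, unit sizes and rates)
anti_edf = {0: [1, 2], 1: [3]}   # priority to routes 1, 2: route 3 waits
edf = {0: [3], 1: [1, 2]}        # EDF: route 3 (lead time 1) goes first

def completion_times(schedule):
    """Completion time of each route's single flow (unit sizes assumed)."""
    return {r: u + 1 for u, routes in schedule.items() for r in routes}

def n_late(schedule):
    """Number of flows finishing after their deadline."""
    return sum(completion_times(schedule)[r] > deadline[r] for r in deadline)

def cum(schedule, t):
    """Total transmission time over all routes on [0, t]."""
    return sum(len(schedule[u]) for u in range(t))
```

Under these assumptions `cum` shows the priority system strictly ahead at t = 1 (2 versus 1, with a tie at t = 2), in line with its total minimality, while `n_late` shows that only the EDF schedule meets every deadline.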
Definitions 8-9 can be easily generalized by choosing an I-tuple of functions f_i : R → R, i ∈ I, and replacing K_i(t), K̃_j(t) by ∫_R f_i(s) Y_i(t, ds) and Σ_{i∈F(j)} ∫_R f_i(s) Y_i(t, ds), respectively. The above-mentioned definitions are a special, deadline-insensitive, case of this setup, with f_i ≡ 1 for all i. If we take f_i to be strictly decreasing, at least on a half-line (−∞, a] for some a, for example f_i(s) = s⁻ or f_i(s) = e^{−s}, then the values of the integrals ∫_R f_i(s) Y_i(t, ds) will depend on "more urgent" flows in the system more than on "less urgent" ones, and consequently the corresponding minimal policies will be "more EDF-like". However, a more detailed study of the corresponding minimal performance processes is beyond the scope of this paper.
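A small numerical sketch of this weighting shows how f_i(s) = e^{−s} emphasizes urgent flows relative to f_i ≡ 1. The discrete idleness measure below, with its masses and lead-time locations, is purely illustrative and not data from the paper:

```python
import math

# Hypothetical idleness measure Y_i(t, ds) at a fixed t: point masses at
# lead-time values s. Negative s corresponds to already-late, i.e., the most
# urgent, flows.
idleness = {-1.0: 0.5, 0.5: 0.25, 3.0: 0.25}   # s -> mass (illustrative)

total = sum(idleness.values())                                 # f_i == 1: total mass K_i(t)
weighted = sum(math.exp(-s) * m for s, m in idleness.items())  # f_i(s) = e^{-s}

# The late flow at s = -1 contributes e * 0.5 of the weighted value, so the
# "more urgent" idleness dominates the weighted criterion.
```

With the deadline-insensitive choice the three masses count equally, whereas the exponential weight makes the mass at s = −1 dominate, which is the "more EDF-like" behaviour described above.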