Linear Stochastic Fluid Networks: Rare-Event Simulation and Markov Modulation

We consider a linear stochastic fluid network under Markov modulation, with a focus on the probability that the joint storage level attains a value in a rare set at a given point in time. The main objective is to develop efficient importance sampling algorithms with provable performance guarantees. For linear stochastic fluid networks without modulation, we prove that the number of runs needed (so as to obtain an estimate with a given precision) increases polynomially (whereas the probability under consideration decays essentially exponentially); for networks operating in the slow modulation regime, our algorithm is asymptotically efficient. Our techniques are in the tradition of the rare-event simulation procedures that were developed for the sample-mean of i.i.d. one-dimensional light-tailed random variables, and intensively use the idea of exponential twisting. In passing, we also point out how to set up a recursion to evaluate the (transient and stationary) moments of the joint storage level in Markov-modulated linear stochastic fluid networks.


Introduction
Linear networks, as introduced in [12], can be informally described as follows. Consider a network consisting of L stations. Jobs, whose sizes are i.i.d. samples from some general L-dimensional distribution, arrive at the stations according to a Poisson process. At each of the nodes, in between arrivals the storage level decreases exponentially. Processed traffic is either transferred to the other nodes or leaves the network (according to a given routing matrix). In addition to this basic version of the linear network, there is also its Markov-modulated counterpart [11], in which the arrival rate, the distribution of the job sizes, and the routing matrix depend on the state of an external, independently evolving finite-state continuous-time Markov chain (usually referred to as the background process). Linear networks can be seen as natural fluid counterparts of corresponding infinite-server queues. As such, they inherit several nice properties of those infinite-server queues. In particular, separate infinitesimally small fluid particles, moving through the network, do not interfere, and are therefore mutually independent. Essentially due to this property, linear networks allow explicit analysis; in particular, the joint Laplace transform of the storage levels at a given point in time can be expressed in closed form as a function of the arrival rate, the Laplace transform of the job sizes, and the routing matrix [12, Thm. 5.1]. When Markov modulation is imposed, the analysis becomes substantially harder. Conditional on the path of the background process, explicit expressions can again be derived, cf. [11, Thm. 1]. Deconditioning, however, cannot be done in a straightforward manner. As a consequence, the results found are substantially less explicit than for the non-modulated linear network. In [11] a system of ordinary differential equations has also been set up that provides the transform of the stationary storage level; in addition, conditions are identified that guarantee the existence of such a stationary distribution.
In this paper we focus on rare events for Markov-modulated linear networks. More specifically, in a particular scaling regime (parameterized by n) we analyze the probability p_n that at a given point in time the network storage vector is in a given rare set. By scaling the arrival rate as well as the rare set (which amounts to multiplying them by a scaling parameter n), the event of interest becomes increasingly rare. More specifically, under a Cramér-type assumption on the job-size distribution, application of large-deviations theory yields that p_n decays (roughly) exponentially. As p_n can be characterized only asymptotically, one could consider the option of using simulation to obtain precise estimates. The effectiveness of such an approach, however, is limited by the rarity of the event under consideration: in order to get a reliable estimate, one needs sufficiently many runs in which the event occurs. This is the reason why one often resorts to simulation using importance sampling (or: change of measure). This is a variance-reduction technique in which one simulates, rather than under the actual measure, under an alternative measure under which the event under consideration is not rare; correcting the simulation output with appropriate likelihood ratios yields an unbiased estimate. The crucial issue when setting up an importance sampling procedure concerns the choice of the alternative measure: one would like to select one that provides a substantial variance reduction, or is even (in some sense) optimal. The objective of this paper is to develop a change of measure which performs provably optimally. Our ultimate goal is to obtain an efficient simulation procedure for Markov-modulated linear networks. We do so by (i) first considering a single node without modulation, (ii) then multi-node systems, still without modulation, and (iii) finally modulated multi-node systems. There are two reasons for this step-by-step setup:
• For the non-modulated models we have more refined results than for the modulated models. More specifically, for the non-modulated models we have developed estimates for the number of runs required to obtain an estimate with predefined precision (and this number grows polynomially in the rarity parameter n), whereas for modulated models we can only prove that this number grows subexponentially.
• In addition, this approach allows the reader to become gradually familiar with the concepts used in this paper. Directly analyzing the most general model would make it hard to digest the underlying ideas.
In passing, we also point out how to set up a recursion to evaluate the (transient and stationary) moments of the joint storage level in Markov-modulated linear networks (where the results in [11] are restricted to just the first two stationary moments at epochs that the background process jumps).
The single-node model without modulation falls in the class of (one-dimensional) shot-noise models, for which efficient rare-event simulation techniques have been developed over the past two decades or so. Asmussen and Nielsen [2] and Ganesh et al. [8] consider the probability that a shot-noise process decreased by a linear drift ever exceeds some given level.
Relying on sample-path large deviations results, an asymptotically efficient importance sampling algorithm is developed, under the same scaling as the one we consider in our paper.
The major difference with our model (apart from the fact that we can deal with considerably more general models, as we focus on networks and allow modulation) is that we focus on a rare-event probability that relates to the position of the process at a fixed point in time; in this setting we succeed in finding accurate estimates of the number of runs needed to get an estimate of given precision.
This paper is organized as follows. In Section 2 the focus is on a single-node network, without Markov modulation; Section 3 addresses the extension to multi-node systems; and in Section 4 the feature of modulation is added. In each of these three sections, we propose a change of measure, quantify its performance, and demonstrate its efficiency through simulation experiments. A discussion and concluding remarks are found in Section 5.

Single resource, no modulation
To introduce the concepts we work with in this paper, we analyze in this section a linear network consisting of a single node, in which the input is just compound Poisson (so no Markov modulation is imposed). More precisely, in the model considered, jobs arrive according to a Poisson process with rate λ, bring along i.i.d. amounts of work (represented by the sequence of i.i.d. random variables (B_1, B_2, ...)), and the workload level decays exponentially at a rate r > 0. This model belongs to the class of shot-noise processes. As mentioned in the introduction, we gradually extend the model in the next sections.

2.1. Preliminaries. We first present a compact representation for the amount of work in the system at time t, which we denote by X(t), through its moment generating function.
To this end, let N(t) denote a Poisson random variable with mean λt, and (U_1, U_2, ...) i.i.d. uniformly distributed random variables (on the interval [0, t]). Assume in addition that the random objects (B_1, B_2, ...), N(t), and (U_1, U_2, ...) are independent. Then it is well known that the value of our shot-noise process at time t can be expressed as
X(t) = Σ_{j=1}^{N(t)} B_j e^{-r(t - U_j)} =_d Σ_{j=1}^{N(t)} B_j e^{-r U_j},   (1)
where the distributional equality is a consequence of the fact that the distribution of U is symmetric on the interval [0, t]. It is easy to compute the moment generating function (mgf) of X(t), by conditioning on the value of N(t):
E exp(ϑ X(t)) = exp( λ ∫_0^t (β(e^{-ru} ϑ) − 1) du ),   (2)
where β(·) is the mgf corresponding to B (throughout assumed to exist). By differentiating and inserting ϑ = 0, it follows immediately that
m(t) := E X(t) = (λ/r)(1 − e^{-rt}) E B.
Higher moments can be found by repeated differentiation. We note that, as t is held fixed throughout the document, we often write N rather than N(t).
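As a sanity check, the representation and the mean formula above can be verified by direct simulation. The sketch below is a minimal stdlib-only illustration; the parameter values (λ = r = µ = 1, exponential jobs) are those that reappear in the experiments of Section 2.4, and the helper names are our own.

```python
import math
import random

def poisson(mean, rng):
    # Poisson sampling by inversion (sequential search; fine for small means)
    u, p, c, k = rng.random(), math.exp(-mean), math.exp(-mean), 0
    while u > c:
        k += 1
        p *= mean / k
        c += p
    return k

def shot_noise(lam, r, t, draw_job, rng):
    # X(t) = sum_{j <= N} B_j e^{-r U_j}, with N ~ Poisson(lam*t), U_j ~ Unif[0, t]
    n = poisson(lam * t, rng)
    return sum(draw_job(rng) * math.exp(-r * rng.uniform(0.0, t)) for _ in range(n))

rng = random.Random(42)
lam, r, t, mu = 1.0, 1.0, 1.0, 1.0
runs = 200_000
est = sum(shot_noise(lam, r, t, lambda g: g.expovariate(mu), rng)
          for _ in range(runs)) / runs
# exact transient mean: (lambda/r)(1 - e^{-rt}) E[B]
exact = (lam / r) * (1.0 - math.exp(-r * t)) / mu
```

With these parameters the exact mean is 1 − e^{−1} ≈ 0.632, and the Monte Carlo estimate lands within sampling error of it.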

2.2. Tail probabilities, change of measure. The next objective is to consider the asymptotics of the random variable X(t) under a particular scaling, in which we let the arrival rate be nλ rather than just λ. The value of the shot-noise process is now distributed as
Y_n(t) := Σ_{i=1}^n X_i(t),
with the vector (X_1(t), ..., X_n(t)) consisting of i.i.d. copies of X(t); we focus on the probability p_n(a) := P(Y_n(t) ≥ na) for a > m(t). The change of measure Q we propose is an exponential twist: with M(ϑ) := E exp(ϑ X(t)) as given by (2), we let ϑ* solve M'(ϑ*)/M(ϑ*) = a, so that under this choice the random variable has the desired mean:
E_Q X_i(t) = M'(ϑ*)/M(ϑ*) = a.
The question is now: how to sample a random variable that has this mgf? To this end, notice that
M(ϑ + ϑ*)/M(ϑ*) = exp( λ ∫_0^t β(e^{-ru} ϑ*) du · ( ∫_0^t β(e^{-ru}(ϑ + ϑ*)) du / ∫_0^t β(e^{-ru} ϑ*) du − 1 ) ).
From this expression we can see how to sample the X_i(t) under Q, as follows. In the first place we conclude that under Q the number of arrivals becomes Poisson with mean
λ ∫_0^t β(e^{-ru} ϑ*) du
rather than λt (which is an increase). Likewise, it entails that under Q the distribution of the B_j e^{-r U_j} should be twisted by ϑ*, in the sense that these random variables should have, under Q, the mgf
E exp((ϑ + ϑ*) B e^{-rU}) / E exp(ϑ* B e^{-rU}).
We now point out how such a random variable should be sampled. To this end, observe that
E exp(ϑ* B e^{-rU}) = (1/t) ∫_0^t β(e^{-ru} ϑ*) du.
From this representation two conclusions can be drawn. In the first place, supposing there are k arrivals, the arrival epochs U_1, ..., U_k are i.i.d. under Q, with density
f_Q^U(u) = β(e^{-ru} ϑ*) / ∫_0^t β(e^{-rv} ϑ*) dv.
In the second place, given that the k-th arrival occurs at time u, the density of the corresponding job size B_k should be exponentially twisted by e^{-ru} ϑ* (where each of the job sizes is sampled independently of everything else). Now that we know how to sample from Q, it is straightforward to implement the importance sampling. Before we describe its complexity (in terms of the number of runs required to obtain an estimate with given precision), we first provide an example in which we demonstrate how the change of measure can be performed.
Example 1. In this example we consider the case that the B_i are exponentially distributed with mean µ^{-1}, so that β(s) = µ/(µ − s). Applying the transformation w := e^{-ru} ϑ, it is first seen that
M'(ϑ)/M(ϑ) = (λµ/(rϑ)) ( 1/(µ − ϑ) − 1/(µ − ϑ e^{-rt}) ).
As ϑ* solves the equation M'(ϑ*)/M(ϑ*) = a, we obtain the quadratic equation
a e^{-rt} ϑ² − aµ(1 + e^{-rt}) ϑ + aµ² − λµ(1 − e^{-rt})/r = 0
(where it is readily checked that the relevant root satisfies ϑ* ∈ (0, µ)). Now we compute what the alternative measure Q amounts to. In the first place, the number of arrivals should become Poisson with parameter
(λ/r) log( (µ e^{rt} − ϑ*)/(µ − ϑ*) )
(which is larger than λt). In addition, we can check that the distribution function of the arrival epochs becomes
F_Q^U(u) = log( (µ e^{ru} − ϑ*)/(µ − ϑ*) ) / log( (µ e^{rt} − ϑ*)/(µ − ϑ*) )
(rather than u/t). The function F_Q^U(u) has the value 0 for u = 0 and the value 1 for u = t, and is concave. This concavity reflects that the arrival epochs of the shots tend to be closer to 0 under Q than under P. This is because we swapped U and t − U in (1); along the most likely path of Y_n(t) itself the shots will typically be closer to t under Q. Observe that one can sample U under Q using the classical inverse distribution function method [1, Section II.2a]: with H denoting a uniform number on [0, 1), we obtain such a sample by solving F_Q^U(u) = H, i.e.,
u = (1/r) log( (ϑ* + (µ − ϑ*)^{1−H} (µ e^{rt} − ϑ*)^H) / µ ).
Also, conditional on a U_i having attained the value u, the jobs B_i should be sampled from an exponential distribution with mean (µ − e^{-ru} ϑ*)^{-1}.
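For exponential jobs the twist can thus be computed in closed form. The sketch below solves the quadratic for ϑ* and evaluates the Poisson arrival mean under Q; the concrete parameter values are the ones used later in Section 2.4 and are assumptions of this illustration.

```python
import math

# parameters as in the experiments of Section 2.4
lam, r, t, mu, a = 1.0, 1.0, 1.0, 1.0, 1.0

# For exponential(mu) jobs, M'(theta)/M(theta) = a reduces to the quadratic
#   a*(mu - theta)*(mu - theta*e^{-rt}) = lam*mu*(1 - e^{-rt})/r,
# whose root in (0, mu) is the twist theta*.
e = math.exp(-r * t)
A = a * e
B = -a * mu * (1.0 + e)
C = a * mu * mu - lam * mu * (1.0 - e) / r
theta = (-B - math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)

# Poisson mean of the number of arrivals under Q (larger than lam*t)
mean_q = (lam / r) * math.log((mu * math.exp(r * t) - theta) / (mu - theta))
```

With these parameters the computation reproduces ϑ* ≈ 0.2918 and a Q-arrival mean of ≈ 1.2315, matching the values reported in Section 2.4.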

Remark 1. In the model we study in this section, the input of the linear network is a compound Poisson process. As pointed out in [12], the class of inputs can be extended to the more general class of increasing Lévy processes in a straightforward manner.
Remark 2. Consider the situation that the X_i(t) are i.i.d. copies of Σ_{j=1}^N B_j e^{-r U_j}, but now we do not impose the assumption of the U_j having a uniform distribution. It is readily checked that under Q the arrival rate becomes (in self-evident notation) λ E β(e^{-rU} ϑ*), whereas the job sizes are exponentially twisted by e^{-ru} ϑ* for an arrival occurring at time u. Many of the results in this paper can be extended to this setting.

2.3. Efficiency properties of importance sampling procedure. In this subsection we analyze the performance of the procedure introduced in the previous subsection. The focus is on a characterization of the number of runs needed to obtain an estimate with a given precision (at a given confidence level). In every run Y_n(t) is sampled under Q, as pointed out above. As Q is an implementation of an exponential twist (with twist ϑ*), the likelihood ratio is given by
L = (M(ϑ*))^n e^{-ϑ* Y_n(t)},
and I is the indicator function of the event {Y_n(t) ≥ na}. We keep performing this experiment until the ratio of the half-width of the confidence interval (with critical value T) and the estimator drops below some predefined ε (say, 10%). Under P the number of runs needed is effectively inversely proportional to p_n(a), hence exponentially increasing in n. We now focus on quantifying the reduction of the number of runs when using our importance sampling procedure based on the measure Q.
Using a normal approximation, it is a standard reasoning that when performing N runs the ratio of the half-width of the confidence interval and the estimator is approximately
(T/√N) · Σ_n / p_n(a),  where Σ_n² := Var_Q(L I),
and hence the number of runs needed is roughly
(T/ε)² Σ_n² / p_n(a)².
We now analyze how Σ_n behaves as a function of the 'rarity parameter' n. Due to the Bahadur-Rao result [3], p_n(a) is proportional to n^{-1/2} e^{-n I(a)} as n → ∞, with I(a) := ϑ* a − log M(ϑ*). Using the same proof technique as in [3], it can be shown that E_Q(L² I) is proportional to n^{-1/2} e^{-2n I(a)}.
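The stopping rule just described translates directly into a planning formula for the number of runs. A small helper (hypothetical, stdlib-only) that estimates the required N from pilot samples of L·I collected under Q:

```python
import math

def runs_needed(pilot, eps=0.1, T=1.96):
    """Given pilot i.i.d. samples of L*I under Q, estimate the number of runs N
    for which the ratio of the confidence-interval half-width T*S/sqrt(N)
    to the estimator drops below eps."""
    n = len(pilot)
    mean = sum(pilot) / n
    var = sum((x - mean) ** 2 for x in pilot) / (n - 1)
    return math.ceil((T / eps) ** 2 * var / mean ** 2)
```

For instance, pilot samples with small relative variance, such as [0.9, 1.0, 1.1, 1.0], require only a handful of runs, while the same mean with large relative variance drives N up quadratically in the coefficient of variation.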
We can use these asymptotics to conclude that under Q the number of runs required grows slowly in n: it is essentially proportional to √n for n large. This leads to the following result.
2.4. Simulation experiments. In this subsection we present numerical results for the single-node model without Markov modulation. We focus on the case of exponential jobs, as in Example 1. We simulate until the estimate has reached the precision ε = 0.1, with confidence level 0.95 (such that the critical value is T = 1.96). The parameters chosen are: t = 1, r = 1, λ = 1, and µ = 1. We set a = 1 (which is larger than m(t) = 1 − e^{−1}). As it turns out, ϑ* = 0.2918. The top-left panel of Fig. 1 confirms the exponential decay of the probability of interest, as a function of n. In the top-right panel we verify that the number of runs indeed grows proportionally to √n; the value of α, as defined in (5), is 198.7, which is depicted by the horizontal line. The bottom-left panel shows the density of the arrival epochs, which confirms that these tend to be closer to 0 under Q than under P; recall that under P these epochs are uniformly distributed on [0, t]. Recall also that we reversed time in (1): for the actual shot-noise system that we are considering, this means that, in order to reach the desired level at time t, the arrival epochs tend to be closer to t under Q than under P. The bottom-right panel presents the rate of the exponential job sizes as a function of u. Using (4), the arrival rate under Q turns out to be 1.2315.
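The change of measure of Example 1 takes only a few lines to implement. The sketch below is a simplified illustration, not the paper's experiment: it takes n = 1 (so the event is not yet rare and a crude Monte Carlo estimate is available for comparison), uses the parameter values of this subsection, and samples the arrival epochs by rejection rather than by the inverse-distribution-function method; all helper names are our own.

```python
import math
import random

lam, r, t, mu, a = 1.0, 1.0, 1.0, 1.0, 1.0
theta = 0.2918                     # twist solving M'(theta)/M(theta) = a (Example 1)
lam_q = (lam / r) * math.log((mu * math.exp(r * t) - theta) / (mu - theta))
log_M = lam_q - lam * t            # log M(theta) for the shot-noise mgf

def poisson(mean, rng):
    u, p, c, k = rng.random(), math.exp(-mean), math.exp(-mean), 0
    while u > c:
        k += 1; p *= mean / k; c += p
    return k

def epoch_q(rng):
    # rejection sampling from the Q-density, proportional to 1/(1-(theta/mu)e^{-ru})
    while True:
        u = rng.uniform(0.0, t)
        if rng.random() <= (1 - theta / mu) / (1 - (theta / mu) * math.exp(-r * u)):
            return u

def is_run(rng):
    # one importance-sampling run: likelihood ratio M(theta)e^{-theta X} times indicator
    x = 0.0
    for _ in range(poisson(lam_q, rng)):   # Poisson mean under Q (t = 1 here)
        u = epoch_q(rng)
        # job size twisted by theta*e^{-ru}: exponential with rate mu - theta*e^{-ru}
        x += rng.expovariate(mu - theta * math.exp(-r * u)) * math.exp(-r * u)
    return math.exp(log_M - theta * x) if x >= a else 0.0

def crude_run(rng):
    x = sum(rng.expovariate(mu) * math.exp(-r * rng.uniform(0.0, t))
            for _ in range(poisson(lam * t, rng)))
    return 1.0 if x >= a else 0.0

rng = random.Random(7)
R = 200_000
p_is = sum(is_run(rng) for _ in range(R)) / R
p_mc = sum(crude_run(rng) for _ in range(R)) / R
```

The two unbiased estimators agree up to sampling error; the variance reduction of the twist only becomes pronounced as n grows and the event becomes genuinely rare.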

Multi-node systems, no modulation
In this section we consider multi-node linear networks, of the type analyzed in the work by Kella and Whitt [12]. It is instructive to first consider the simplest multi-node system: a tandem network without external input in the downstream node (and with rate r_i for node i, i = 1, 2); later we extend the ideas developed to general linear networks.
3.1. Preliminaries. As mentioned above, we first consider the two-node tandem. The content of the first node is, as before,
X^{(1)}(t) =_d Σ_{j=1}^N B_j e^{-r_1 U_j}
(with N having a Poisson distribution with mean λt), but it can be argued that the content of the second node satisfies a similar representation. More specifically, using the machinery developed in [12], it turns out that
X^{(2)}(t) =_d Σ_{j=1}^N B_j (r_1/(r_1 − r_2)) (e^{-r_2 U_j} − e^{-r_1 U_j}).   (6)
As before, we perform the scaling by n, meaning that the arrival rate λ is inflated by a factor n. This leads to the random variables X_n^{(1)}(t) and X_n^{(2)}(t), with which we define Y_n^{(1)}(t) and Y_n^{(2)}(t), analogously to how this was done in the single-node case; these two random quantities represent the contents of the upstream resource and the downstream resource, respectively. The state of this tandem system can be uniquely characterized in terms of its (bivariate) moment generating function. The technique to derive an explicit expression relies on the above distributional equality (6); again, the key step is to condition on the number of shots that have arrived in the interval [0, t].
The above computation is for the two-node tandem system, but the underlying procedure can be extended to networks with more than two nodes, and external input in each of the nodes. To this end, we consider the following network consisting of L nodes. Jobs are generated according to a Poisson process. At an arrival epoch, an amount is added to the content of each resource ℓ ∈ {1, ..., L}; this amount is distributed as the (non-negative) random variable B^{(ℓ)}; β(ϑ), for ϑ ∈ R^L, is the joint moment generating function of B^{(1)} up to B^{(L)} (note that the components are not assumed independent). In addition, let the traffic level at node i decay exponentially with rate r_i. A fraction p_{ii'} ≥ 0 (i' ≠ i) is then fed into node i', whereas a fraction p_{ii} ≥ 0 leaves the network (with Σ_{i'=1}^L p_{ii'} = 1). We denote r_{ii'} := r_i p_{ii'}. As an aside we mention that this general model covers models in which some arrivals (of the Poisson process with parameter λ) actually lead to arrivals at only a subset of the L queues (since the job sizes B^{(1)}, ..., B^{(L)} are allowed to equal 0). We now point out how the joint buffer content process can be analyzed. Again our objective is to evaluate the moment generating function. Define the matrix R as follows: its (i, i)-th entry is r_{ii} + Σ_{i'≠i} r_{ii'}, whereas its (i, i')-th entry (with i' ≠ i) is −r_{ii'}. We have, according to Kella and Whitt [12], with N again Poisson with mean λt, the following distributional equality: for any ℓ ∈ {1, ..., L},
X^{(ℓ)}(t) =_d Σ_{j=1}^N (B_j^T e^{-R U_j})_ℓ.
It means we can compute the joint moment generating function as follows, cf. [12, Thm. 5.1]:
E exp(⟨ϑ, X(t)⟩) = exp( λ ∫_0^t (β(e^{-Ru} ϑ) − 1) du ),
which is the true matrix/vector counterpart of the expression (2) that we found in the single-node case.
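To make the definition of the matrix R concrete, the sketch below builds R from the rates r_i and a routing matrix P = (p_{ii'}) and evaluates e^{-Ru} with a truncated Taylor series (an illustrative stdlib-only choice). The closed-form entries used for the check assume the two-node tandem in which all output of node 1 is routed to node 2.

```python
import math

def build_R(r, P):
    """R as defined above: (i,i) entry r_{ii} + sum_{i'!=i} r_{ii'} = r_i,
    and (i,i') entry -r_{ii'} = -r_i * p_{ii'} for i' != i."""
    L = len(r)
    return [[r[i] if k == i else -r[i] * P[i][k] for k in range(L)] for i in range(L)]

def expm(A, terms=60):
    # matrix exponential via truncated Taylor series (adequate for small matrices)
    n = len(A)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for m in range(1, terms):
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) / m for j in range(n)]
                for i in range(n)]
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

# two-node tandem: node 1 feeds node 2 (p_12 = 1), node 2 output leaves (p_22 = 1)
r1, r2, u = 2.0, 1.0, 0.7
R = build_R([r1, r2], [[0.0, 1.0], [0.0, 1.0]])                 # [[2, -2], [0, 1]]
E = expm([[-R[i][j] * u for j in range(2)] for i in range(2)])  # e^{-R u}
```

For this upper-triangular R the exponential is known explicitly: the diagonal entries of e^{-Ru} are e^{-r1 u} and e^{-r2 u}, and the (1, 2) entry is r1 (e^{-r2 u} − e^{-r1 u})/(r1 − r2), which the series computation reproduces.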
3.2. Tail probabilities, change of measure. In this subsection we introduce the change of measure that we use in our importance sampling approach. Many of the concepts are analogous to those used for the single-node case in Section 2. Define (in self-evident notation)
p_n(a) := P( Y_n^{(1)}(t) ≥ n a_1, ..., Y_n^{(L)}(t) ≥ n a_L ),  A := [a_1, ∞) × ··· × [a_L, ∞).
Due to the multivariate version of Cramér's theorem,
lim_{n→∞} (1/n) log p_n(a) = − inf_{b ∈ A} I(b),   (7)
with
I(b) := sup_ϑ ( ⟨ϑ, b⟩ − log M(ϑ) ),  M(ϑ) := E exp(⟨ϑ, X(t)⟩).
It is remarked that more refined asymptotics than the logarithmic asymptotics of (7) are available as well, but these are not yet relevant in the context of the present subsection; we return to these 'exact asymptotics' in Section 3.3. We assume that the set A is 'rare', in the sense that the mean vector E X(t) does not lie in A, so that inf_{b ∈ A} I(b) > 0.
Let us first verify how the importance sampling measure can be constructed in the tandem model, mimicking the reasoning we used in the single-node case; after that we extend this to general linear networks. We let (ϑ_1*, ϑ_2*) be the optimizing (ϑ_1, ϑ_2) in the decay rate of p_n(a_1, a_2). As it turns out, in the first place the number of arrivals becomes Poisson with mean
λ ∫_0^t β(e^{-Ru} ϑ*) du
(rather than λt). The density of U under Q becomes
f_Q^U(u) = β(e^{-Ru} ϑ*) / ∫_0^t β(e^{-Rv} ϑ*) dv.
Given a sample from this distribution attains the value u, the distribution of B should be twisted by e^{-Ru} ϑ*. Analogously to what we found above for the two-node tandem, we can identify Q for general linear networks. We find that under Q the number of arrivals becomes Poisson with parameter λ ∫_0^t β(e^{-Ru} ϑ*) du, and the arrival epochs should be drawn using the density f_Q^U(u) above. Given an arrival at time u, the vector (B^{(1)}, ..., B^{(L)}) should be exponentially twisted by ((e^{-Ru} ϑ*)_1, ..., (e^{-Ru} ϑ*)_L).

3.3. Efficiency properties of importance sampling procedure. We now consider the efficiency properties of the change of measure proposed in the previous subsection. To this end, we first argue that the vector ϑ* generally consists of a number of (at least one) strictly positive entries, whereas the other entries equal 0; i.e., there are no negative entries. To this end, we first denote by b* the 'most likely point' in A:
b* := arg min_{b ∈ A} I(b).   (8)
Suppose now that ϑ_i(b*) < 0. Increasing the i-th component of b* (while leaving all other components unchanged) would lead to a vector that is still in A, but that by virtue of (8) corresponds to a lower value of the objective function I(·), thus yielding that b* was not the optimizer; we have thus found a contradiction. Similarly, when ϑ_i(b*) = 0 we have that b_i* > a_i (as otherwise a reduction of the objective function value would be possible, which contradicts b* being the minimizer). Now define Θ as the subset of i ∈ {1, ..., L} such that ϑ_i* > 0, and let D ∈ {1, ..., L} denote the number of elements of Θ. We now argue that the number of runs needed to obtain an estimate of predefined precision scales as n^{D/2}. Relying on the results from [6] (in particular their Thm. 3.4), it follows that p_n(a) behaves (for n large) proportionally to n^{-D/2} exp(−n I(b*)); using the same machinery, E_Q(L² I) behaves proportionally to n^{-D/2} exp(−2n I(b*)). Mimicking the line of reasoning of Section 2.3, we conclude that the number of runs needed is essentially proportional to n^{D/2}. The formal statement is as follows.
Proposition 2. As n → ∞, the number of runs needed grows proportionally to n^{D/2}, where the proportionality constant involves τ, the determinant of the Hessian of log M(ϑ) in ϑ*.
We further illustrate the ideas and intuition behind the qualitative result described in the above proposition by considering the case L = 2. It is noted that three cases may arise: (i) Θ = {1, 2}, (ii) Θ = {1}, (iii) Θ = {2}; as case (iii) can be dealt with in the same way as case (ii), we concentrate on cases (i) and (ii) only. In case (i), where D = 2, the necessary condition [6, Eqn. (3.4)] is fulfilled, as ϑ* > 0 componentwise. As in addition conditions A-C of [6] are in place, it is concluded that [6, Thm. 3.4] can be applied, leading to b* = a and to p_n(a) decaying proportionally to n^{-1} exp(−n I(a)), where the proportionality constant involves τ, the determinant of the Hessian of log M(ϑ) in ϑ*. Along the same lines, it can be shown that E_Q(L² I) decays proportionally to n^{-1} exp(−2n I(a)).
It now follows that Σ_n²/p_n(a)², and hence the number of runs needed, is roughly linear in n: with ε and T as introduced in Section 2.3, the number of runs behaves as α n for a constant α. In case (ii) we do not have that ϑ* > 0 componentwise, and hence [6, Thm. 3.4] does not apply; in the above terminology, D = 1 < 2 = L. Observe that in this case the exponential decay rate of the event {Y_n^{(1)}(t) ≥ n a_1} strictly majorizes that of {Y_n^{(2)}(t) ≥ n a_2} (informally: the former event is substantially less likely than the latter). It thus follows that b_1* = a_1 and b_2* > a_2, that p_n(a) decays proportionally to n^{-1/2} exp(−n I(b*)), and in addition that E_Q(L² I) decays proportionally to n^{-1/2} exp(−2n I(b*)). As a consequence, in this regime the number of runs grows essentially proportionally to √n for n large. In case (iii) the number of runs behaves proportionally to √n as well.
3.4. Simulation experiments. We conclude this section with a few numerical illustrations. In the first set we focus on the downstream queue only (i.e., we analyze p_n(0, a_2)), whereas in the second set we consider the joint exceedance probability p_n(a). The precision and confidence have been chosen as in Example 1. Throughout we take t = 1, r_1 = 2, r_2 = 1, λ = 1, and µ = 1.
In the first set of experiments we take a_1 = 0 and a_2 = 1. Elementary numerical analysis yields that ϑ* = 0.8104 and τ = 1.4774, leading to α, as defined in (10), equalling 474.3. The four panels of Fig. 2 can be interpreted as the counterparts of those in Fig. 1. Interestingly, the bottom-left panel indicates that in the tandem system it does not pay off to let jobs arrive right before t (as they first have to go through the first resource to end up in the second resource), as reflected by the shape of the density of the arrival epochs under Q; to this end, recall that we reversed time in (6), so that a low density at u = 0 in the graph corresponds to a high density at u = t in the actual system. The arrival rate under Q is 1.5103.
In the second set of experiments we take a_1 = 1.2 and a_2 = 1.1; all other parameters are the same as in the first set. As mentioned above, we now consider the joint exceedance probability. As it turns out, ϑ_1* = 0.1367 and ϑ_2* = 0.2225. The top-right panel of Fig. 3 shows that for this specific parameter setting Σ_n/n converges to the limiting constant rather slowly. Concerning the bottom-left panel, note that in Section 2 we saw that, to make sure the first queue gets large, it helps to have arrivals at the end of the interval, whereas above we observed that, to make the second queue large, arrivals should occur relatively early. We now focus on the event that both queues are large, and consequently the arrival distribution becomes relatively uniform again, as shown in the bottom-left panel. The arrival rate under Q is 2.3478.

Multi-node systems under Markov modulation
In this section we consider the networks analyzed in the previous section, but now in a random environment. More specifically, the type of random environment we focus on here is known as Markov modulation: the system dynamics are affected by the state of an external finite-state irreducible Markov process J(·) with transition rate matrix Q = (q_{jj'})_{j,j'=1}^d. When this Markov process (usually referred to as the background process) is in state j, arrivals occur according to a Poisson process with rate λ_j, the mgf of the job size is β_j(ϑ), and the routing matrix is R_j. Analogously to the definitions used in the case without Markov modulation, this routing matrix' (i, i)-th entry is (R_j)_{ii} := r_i^{(j)}, which can be interpreted as the rate at which fluid leaves server i when the background process is in j. Likewise, for i' ≠ i, its (i, i')-th entry is (R_j)_{ii'} := −r_{ii'}^{(j)}, where r_{ii'}^{(j)} is the rate at which fluid flows from server i to i' when the background process is in j.
Below we assume that J(0) = j_0 for a fixed state j_0 ∈ {1, ..., d}; all results generalize to an arbitrary initial distribution in a straightforward manner.
The structure of the section is in line with that of the previous two sections: we subsequently describe general results for the model under consideration (extending earlier results from [11]), propose an importance sampling measure, establish efficiency properties of the corresponding estimator, and present a number of numerical experiments.
4.1. Preliminaries. Multi-node systems under Markov modulation have been studied in detail by Kella and Stadje [11]. We start this section by providing a compact derivation of a PDE characterizing the system's transient behavior, which was not included in that paper.
To this end, we define, for j ∈ {1, ..., d},
Ξ_j(ϑ, t) := E [ exp(⟨ϑ, X(t)⟩) 1_j(t) ],
with 1_j(t) the indicator function of the event that J(t) = j. Using the standard 'Markov machinery', Ξ_j(ϑ, t + ∆t) equals (up to o(∆t) terms) the sum of a contribution λ_j ∆t β_j(ϑ) Ξ_j(ϑ, t) due to the scenario that an arrival occurs between t and t + ∆t, a contribution Σ_{j'≠j} q_{j'j} ∆t Ξ_{j'}(ϑ, t) due to the scenario that the modulating Markov process jumps between t and t + ∆t, and a contribution (1 − λ_j ∆t − q_j ∆t) Ξ_j(e^{−R_j ∆t} ϑ, t), with q_j := −q_{jj}; regarding the last term, observe that when the background process is in state j, and no new job arrives between t and t + ∆t,
⟨ϑ, X(t + ∆t)⟩ = ⟨e^{−R_j ∆t} ϑ, X(t)⟩.
We thus find that
Ξ_j(ϑ, t + ∆t) = λ_j ∆t β_j(ϑ) Ξ_j(ϑ, t) + Σ_{j'≠j} q_{j'j} ∆t Ξ_{j'}(ϑ, t) + (1 − λ_j ∆t − q_j ∆t) Ξ_j(e^{−R_j ∆t} ϑ, t) + o(∆t).
This immediately leads to (by letting ∆t ↓ 0)
∂Ξ_j(ϑ, t)/∂t = λ_j (β_j(ϑ) − 1) Ξ_j(ϑ, t) + Σ_{j'} q_{j'j} Ξ_{j'}(ϑ, t) − ⟨R_j ϑ, ∇_ϑ Ξ_j(ϑ, t)⟩.   (11)
Let us now compactly summarize the relation (11) in vector/matrix notation. This notation will prove practical when computing higher moments; in other (but related) contexts, similar procedures have been proposed in e.g. [9, 14]. Let M_{n1×n2} be the set of R-valued matrices of dimension n1 × n2 (for generic n1, n2 ∈ N). In addition, I_n is the identity matrix of dimension n ∈ N.
We introduce the following three (block-)diagonal matrices, in M_{d×d}, M_{d×d}, and M_{Ld×Ld}, respectively:
diag{λ_1, ..., λ_d},  diag{β_1(ϑ), ..., β_d(ϑ)},  diag{R_1, ..., R_d}.
We use the conventional notation ⊗ for the Kronecker product. Recall that the Kronecker product is bilinear, associative, and distributive with respect to addition; these properties we will use in the sequel without mentioning them. It also satisfies the mixed-product property
(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
for matrices of compatible dimensions. We now consider some differentiation rules for matrix-valued functions which allow us to evaluate moments iteratively. In the first place we define the operator ∇_ϑ for ϑ ∈ R^L; to keep the notation compact, we often suppress the subscript ϑ and write just ∇. Let f ≡ f(ϑ) be a mapping of R^L to M_{n1×n2}. Then ∇f ≡ ∇f(ϑ) ∈ M_{n1 L×n2} is defined entrywise by stacking the gradients ∇f_{ij} ≡ ∇f_{ij}(ϑ), to be understood as the usual gradients; the symbol ∂_i is used to denote the partial derivative with respect to the i-th variable. Furthermore, we define inductively ∇^{i+1} f := ∇(∇^i f). In the sequel we use three differentiation rules: a product rule for ∇(A(ϑ)B(ϑ)), where A(·) is a matrix-valued function from R^L to M_{n1×n2} and B(·) a matrix-valued function from R^L to M_{n2×n3} (the result being an element of M_{Ln1×n3}), and two rules for the differentiation of a Kronecker product, the second of which involves the commutation matrix K_{m,n}, defined by
K_{m,n} := Σ_{i=1}^m Σ_{j=1}^n H_{ij} ⊗ H_{ij}^T,
where H_{ij} ∈ M_{m×n} denotes the matrix with a 1 at its (i, j)-th position and zeros elsewhere, cf. [13].
The first rule can be checked componentwise, and the second rule is trivial. The third rule follows from the first and second rules, in combination with the fact that the Kronecker product commutes after a correction with the commutation matrices. Moreover, we use the property K_{m,n} vec(A) = vec(A^T) for A ∈ M_{m×n}; an overview of the properties of commutation matrices can be found in [13]. In the introduced terminology, it follows that (11) can be written in vector/matrix form. We now point out how (transient and stationary) moments can be evaluated; note that [11] focuses on the first two stationary moments at epochs at which the background process jumps. Throughout, we use the notation z_i(t) for the i-th derivative ∇^i Ξ(ϑ, t) evaluated at ϑ = 0, for t ≥ 0 and i ∈ N. Note that, with π_j(t) = (exp(Qt))_{j_0, j}, Ξ(ϑ, 0) = e_{j_0} and Ξ(0, t) = π(t)^T ≡ (π_1(t), ..., π_d(t)).
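The Kronecker-product identities used here are easy to verify numerically. A stdlib-only sketch (helper names are our own) checking the mixed-product property and the commutation-matrix identity K_{m,n} vec(A) = vec(A^T), with vec denoting column-major stacking:

```python
def kron(A, B):
    # Kronecker product of matrices given as lists of lists
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def vec(A):
    # column-major stacking of the entries of A
    return [A[i][j] for j in range(len(A[0])) for i in range(len(A))]

def H(i, j, m, n):
    return [[1.0 if (a, b) == (i, j) else 0.0 for b in range(n)] for a in range(m)]

def commutation(m, n):
    # K_{m,n} = sum_{i,j} H_ij (x) H_ij^T, with H_ij in M_{m x n}
    K = [[0.0] * (m * n) for _ in range(m * n)]
    for i in range(m):
        for j in range(n):
            T = kron(H(i, j, m, n), transpose(H(i, j, m, n)))
            K = [[K[a][b] + T[a][b] for b in range(m * n)] for a in range(m * n)]
    return K

# mixed-product property: (A (x) B)(C (x) D) = (AC) (x) (BD)
A, B = [[1.0, 2.0], [3.0, 4.0]], [[0.0, 1.0], [1.0, 1.0]]
C, D = [[2.0, 0.0], [1.0, 3.0]], [[1.0, 2.0], [0.0, 1.0]]
lhs = matmul(kron(A, B), kron(C, D))
rhs = kron(matmul(A, C), matmul(B, D))

# commutation-matrix identity: K_{2,3} vec(A23) = vec(A23^T)
A23 = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
K = commutation(2, 3)
K_vecA = [sum(K[i][j] * vec(A23)[j] for j in range(6)) for i in range(6)]
```

Both identities hold exactly for these integer-valued matrices, which makes the check a convenient regression test when implementing the moment recursion.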
• We start by characterizing the first moments. Applying the operator ∇ ≡ ∇_ϑ to the differential equation (12) yields a relation for ∇Ξ(ϑ, t), using standard properties of the Kronecker product in combination with an identity involving the vectors e_i, where e_i denotes the L-dimensional column vector in which component i equals 1 and all other components are 0. Then, inserting ϑ = 0 into (13) yields a system of (non-homogeneous) linear differential equations for z_1(t). In the stationary case, we obtain the corresponding linear system with π := lim_{t→∞} π(t) (being the unique non-negative solution of π^T Q = 0^T such that the entries of π sum to 1).
• We now move to second moments. Applying the ∇_ϑ-operator to (13) yields a system of differential equations for z_2(t), where z_1(t) solves Eqn. (14). As before, the stationary quantities can be easily derived (by equating the derivative of z_2(t) to 0). One has to keep in mind, however, that some of the mixed partial derivatives occur multiple times in z_k, for k ∈ {2, 3, ...}, and therefore the solution will only be unique after removing the corresponding redundant rows. Alternatively, the system can be completed by including equations which state that these mixed partial derivatives are equal.
• It now follows that higher moments can be found recursively, using the three differentiation rules that we stated above.
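The ingredients π(t) and π appearing in these systems are cheap to compute. A sketch for a symmetric two-state background process (the jump rates q_12 = q_21 = 1 are those of Example 2 below; the series-based matrix exponential is an illustrative choice):

```python
import math

q12, q21 = 1.0, 1.0                      # jump rates of the background chain
Q = [[-q12, q12], [q21, -q21]]

def expm_t(Q, t, terms=80):
    # exp(Qt) via truncated Taylor series (fine for small, well-scaled matrices)
    n = len(Q)
    A = [[Q[i][j] * t for j in range(n)] for i in range(n)]
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for m in range(1, terms):
        term = [[sum(term[i][k] * A[k][j] for k in range(n)) / m for j in range(n)]
                for i in range(n)]
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

# transient occupation probabilities pi_j(t) = (exp(Qt))_{j0, j}, started in j0 = 1
pi_t = expm_t(Q, 1.0)[0]
# for this symmetric chain, pi_1(t) = (1 + e^{-2t})/2 and pi = (1/2, 1/2)
```

The closed form π_1(t) = (1 + e^{−2t})/2 for this chain gives an exact target against which the numerical exponential can be checked.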
Remark 3. Various variants of our model can be dealt with similarly. In this remark we consider the slightly adapted model in which shots only occur simultaneously with a jump of the modulating Markov chain. Then (up to o(∆t) terms) Ξ_j(ϑ, t + ∆t) is the sum of a contribution due to the scenario that there is a jump of the modulating chain in the interval [t, t + ∆t] (which also induces a shot), and a contribution (1 − q_j ∆t) Ξ_j(e^{−R_j ∆t} ϑ, t), with q_j := −q_{jj}, in the scenario that there is no jump. Performing the same steps as above, we obtain a PDE with a structure similar to that of (11). It follows that the moments can be found as before.
With Q := diag{q_1, ..., q_d}, it turns out that the transient means can be given in closed form; in particular, the stationary first moment follows by letting t → ∞. Consider the following numerical example for the computation of the expected values and variances, in which the technique described above is illustrated.
Example 2. In this example, we choose the parameters in such a way that we see non-monotonic behavior. Our example corresponds to a case in which the system does not start empty, which is dealt with by imposing suitable initial conditions. We consider a two-dimensional (L = 2) queueing system, with a two-dimensional state space of the Markov-modulating process (d = 2). We pick q_12 = q_21 = 1, (3, 3), and the rate matrices. Due to the symmetry in the choice of the parameters, one can expect that for both states of the background process the expected value tends (as t grows large) to the same steady-state value;

the same is anticipated for the stationary variance. This is confirmed by Fig. 4. For small t, the two queues behave differently due to J(0) = 1, which implies that queue 1 drains faster. Note that E X_2(t) even increases for small t, due to the fact that the flow from node 1 to node 2 equals the flow from node 2 to node 1, constituting a net flow of zero, so that the additional contribution of external input to node 2 leads to a net increase of E X_2(t). The transient correlation Cor(X_1(t), X_2(t)) is plotted in Fig. 5. At time t = 0 the queues are perfectly correlated, since the starting state is deterministic. Then the correlation decreases, due to the asymmetric flow rates, until around t = 1, which is when the Markov chain J is expected to switch, after which the correlation tends monotonically to its steady-state value.

4.2. Tail probabilities, change of measure. We now characterize the decay rate of the rare-event probability under study, and we propose a change of measure to efficiently estimate it. In the notation we have been using so far, we again focus on

p_n(a) := P(Y_n^{(1)}(t) ≥ na_1, …, Y_n^{(L)}(t) ≥ na_L),

where Y_n(t) = (Y_n^{(1)}(t), …, Y_n^{(L)}(t)). First we find an alternative characterization of the state of the system at time t. Let F_t denote the set of all functions from [0, t] to the states {1, …, d}. Consider a path f ∈ F_t. Let f have K(f) jumps between 0 and t, whose epochs we denote by t_1(f) up to t_{K(f)}(f) (and, in addition, t_0(f) := 0 and t_{K(f)+1}(f) := t). Let j_i(f) denote the state of f immediately after the i-th jump. Suppose now that the Markov process J(·) follows the path f ∈ F_t. Then the contribution to the mgf of X(t) due to shots that arrived between t_i(f) and t_{i+1}(f) is found by mimicking the arguments that we used in Section 3.2 for non-modulated networks. As a consequence, the mgf of X(t) given the path f follows; first conditioning on the path of J(·) ∈ F_t between 0 and t and then deconditioning, the mgf of X(t) is then immediately obtained. Then, precisely as is shown in [4] for a related stochastic system, the decay rate can be characterized. Informally, the optimizing path f has the interpretation of the most likely path of J(·) given that the rare event under consideration happens. To make sure that the event under consideration is rare, we assume that I_f(a) > 0 for all f ∈ F_t.

The change of measure we propose is the following. First we sample the path J(s) for s ∈ [0, t] under the original measure P (i.e., with J(0) = j_0, and then using the transition rate matrix Q). We call the resulting path f ∈ F_t. For this path, define ϑ^f ≥ 0 as the optimizing ϑ in the definition of I_f(a) in (16); b^f ∈ A is the optimizing b. Conditional on the path f of the background process, under the new measure Q the number of external arrivals between t_i(f) and t_{i+1}(f) is Poisson with an appropriately twisted parameter, and the arrival epochs between t_i(f) and t_{i+1}(f) should be drawn using the corresponding twisted density. Given an arrival at time u between t_i(f) and t_{i+1}(f), the job sizes (B^{(1)}, …
…, B^{(L)}) should be sampled from a distribution with mgf β_{j_i(f)}(ϑ), but then exponentially twisted (where the twist depends on the arrival epoch u).

4.3. Efficiency properties of importance sampling procedure. We now analyze the speed-up realized by the change of measure introduced in the previous subsection. Unlike our results for the non-modulated systems, we cannot now find the precise rate of growth of Σ_n. What is possible, though, is proving asymptotic efficiency (sometimes also referred to as logarithmic efficiency), in the sense that we can show that the decay rate of the second moment of the estimator equals twice the decay rate of p_n(a) (where the second equality is a consequence of (16)). This equality is proven as follows. As the lower bound follows from Jensen's inequality, we are left to prove the upper bound. If the path of J(·) equals f ∈ F_t, it follows by an elementary computation that we have constructed the measure Q such that the likelihood ratio takes the appropriate exponential form. The fact that ϑ^f is componentwise non-negative, in combination with the fact that Y_n(t) ≥ na when I = 1, entails that, noting that a and b^f may only differ in an entry for which the corresponding entry of ϑ^f equals 0 (that is, ⟨a − b^f, ϑ^f⟩ = 0), the upper bound follows: with f the minimizing path in (16), recalling that J(·) is sampled under P,

E_Q(L² I) ≤ E e^{−2n I_J(a)} ≤ e^{−2n I_f(a)}.
We have established the following result.

Proposition 3. As n → ∞, the proposed importance sampling procedure is asymptotically efficient. This means that the number of runs needed grows subexponentially.

4.4. Simulation experiments. We performed experiments featuring a single-node system under Markov modulation. In our example the job sizes stem from an exponential distribution. When the background process is in state i, the arrival rate is λ_i, the job-size distribution is exponential with parameter μ_i, and the rate at which the storage level decays is r_i, for i ∈ {1, …, d}.
The change of measure is then implemented as follows. As pointed out in Section 4.2, per run a path f of the background process is sampled under the original measure P. Suppose that along this path there are K transitions (remarking that, for compactness, we leave out the argument f here), say at times t_1 up to t_K; with t_0 = 0 and t_{K+1} = t, the state between t_i and t_{i+1} is denoted by j_i, for i = 0, …, K. Per run a specific change of measure is to be computed, parametrized by the t_i and j_i, as follows. We define

P_i(u) := P̄_i e^{r_{j_i} u},  P̄_i := e^{−r_{j_i} t_{i+1}} ∏_{i′=i+1}^{K} e^{−r_{j_{i′}} (t_{i′+1} − t_{i′})};

the (empty) product in this expression should be interpreted as 1 when i = K. Let ϑ be the maximizing argument of ϑa − log M(ϑ).
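The per-run preprocessing can be sketched as follows: sample the background path under P, and evaluate the residual decay factors P̄_i along it. This is a sketch under assumptions (the generator is given as a matrix Q of rates with holding rates q_j > 0); the function names are ours, not the paper's.

```python
import math
import random

def sample_background_path(Q, j0, t):
    """Sample a path of the background CTMC on [0, t] under the original
    measure P: returns jump epochs and visited states (state j_i holds on
    the interval (t_i, t_{i+1})).  Assumes all holding rates q_j > 0."""
    d = len(Q)
    times, states = [0.0], [j0]
    s, j = 0.0, j0
    while True:
        q_j = -Q[j][j]
        s += random.expovariate(q_j)        # exponential holding time
        if s >= t:
            return times, states
        # jump: choose the next state with probability Q[j][k] / q_j
        u, nxt = random.random() * q_j, None
        for k in range(d):
            if k == j:
                continue
            u -= Q[j][k]
            if u <= 0.0:
                nxt = k
                break
        if nxt is None:                     # guard against float round-off
            nxt = max(k for k in range(d) if k != j and Q[j][k] > 0)
        j = nxt
        times.append(s)
        states.append(j)

def P_bar(i, times, states, t, r):
    """P_bar_i = exp(-r_{j_i} t_{i+1}) * prod_{i' = i+1}^{K}
    exp(-r_{j_{i'}} (t_{i'+1} - t_{i'})), with the empty product equal to 1."""
    K = len(times) - 1                      # number of jumps
    eps = times + [t]                       # t_0, ..., t_K, t_{K+1} = t
    val = math.exp(-r[states[i]] * eps[i + 1])
    for ip in range(i + 1, K + 1):
        val *= math.exp(-r[states[ip]] * (eps[ip + 1] - eps[ip]))
    return val
```

Note that P_i(u) = P̄_i e^{r_{j_i} u} is then precisely the residual decay factor e^{−∫_u^t r_{J(s)} ds} of a fluid particle arriving at time u ∈ (t_i, t_{i+1}).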
We can now provide the alternative measure Q for this path of the background process. The number of arrivals between t_i and t_{i+1} (for i = 0, …, K) becomes Poisson with parameter λ_{j_i} ∫_{t_i}^{t_{i+1}} μ_{j_i} / (μ_{j_i} − P_i(u) ϑ) du (where it is noted that this expression is larger than λ_{j_i}(t_{i+1} − t_i), which was the parameter under P). The density of each of the arrivals between t_i and t_{i+1} becomes

(μ_{j_i} − P_i(u) ϑ)^{−1} / ∫_{t_i}^{t_{i+1}} (μ_{j_i} − P_i(v) ϑ)^{−1} dv = (μ_{j_i} / (μ_{j_i} − P_i(u) ϑ)) · ( (1/r_{j_i}) log [ (μ_{j_i} − P̄_i e^{r_{j_i} t_i} ϑ) / (μ_{j_i} e^{−r_{j_i}(t_{i+1} − t_i)} − P̄_i e^{r_{j_i} t_i} ϑ) ] )^{−1}

(rather than a uniform distribution, as was the case under P); sampling from this distribution is easy, since the inverse distribution function can be determined in closed form. Given an arrival that takes place at time u between t_i and t_{i+1}, the job size is exponential with parameter μ_{j_i} − P_i(u) ϑ (rather than exponential with parameter μ_{j_i}).
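Since the inverse distribution function is available in closed form, the arrival epochs can indeed be sampled directly. A hedged sketch of this inversion (with generic parameters mu, r, c standing in for μ_{j_i}, r_{j_i}, P̄_i, and theta for ϑ; validity requires mu − c·theta·e^{r·t_hi} > 0, so that the twisted density is well defined):

```python
import math
import random

def sample_arrival_epoch(mu, r, c, theta, t_lo, t_hi, u=None):
    """Draw an arrival epoch on (t_lo, t_hi) with density proportional to
    1 / (mu - P(v) * theta), where P(v) = c * exp(r * v), by inverting the
    distribution function in closed form.  Requires
    mu - c * theta * exp(r * t_hi) > 0."""
    if u is None:
        u = random.random()                  # uniform seed for the inversion
    A0 = mu - c * theta * math.exp(r * t_lo)
    A1 = mu - c * theta * math.exp(r * t_hi)
    # normalizing constant Z = int_{t_lo}^{t_hi} dv / (mu - c*theta*e^{rv})
    Z = (1.0 / (mu * r)) * math.log(math.exp(r * (t_hi - t_lo)) * A0 / A1)
    # solve F(epoch) = u for epoch, with x = e^{r * epoch}
    D = math.exp(u * mu * r * Z)
    x = D * mu / (A0 * math.exp(-r * t_lo) + D * c * theta)
    return math.log(x) / r
```

For theta = 0 the sampler reduces to the uniform distribution on (t_lo, t_hi), consistent with the behavior under P.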
We now describe two examples in which the dimension of the background process is d = 2, q_12 = q_21 = 2, and t = 1. In the first example we fix a = 3, λ = (2, 1), μ = (1/2, 1), and r = (5, 1); in the second example a = 0.8, λ = (0.9, 1), μ = (0.9^{−1}, 1), and r = (0.3, 0.6). As before, we simulate until the precision of the estimate has reached ε = 0.1. The top two panels in Figs. 6 and 7 should be read as those in Figs. 1–3; the bottom two panels correspond to the density of the arrival epochs and the rate of the exponential job sizes, respectively, for f the 'empirical maximizer' of I_f(a) (i.e., the maximizer of I_f(a) over all paths f of the background process that were sampled in the simulation experiment). In the first example the thus obtained 'optimal path' subsequently visits states 1, 2, and 1, where the corresponding jump times are t_1 = 0.654 and t_2 = 0.739, and the decay rate is 0.573. The mean numbers of arrivals in the three parts of the optimal path are 1.392, 0.090, and 0.963, respectively, whereas under Monte Carlo sampling these are 1.308, 0.085, and 0.522, respectively. In the second example the optimal path subsequently visits states 2 and 1, where the corresponding jump time is t_1 = 0.790. In this case the decay rate has the value 0.000806. The mean numbers of arrivals in the two parts of the optimal path are 0.812 and 0.195, respectively, which are slightly higher than the corresponding values under Monte Carlo sampling (0.790 and 0.189, respectively). Observe that in this example the difference between the two measures is relatively small, as also reflected by the small value of the decay rate; the event under consideration technically qualifies as 'rare' in that p_n(0.8) → 0 as n → ∞, but has a relatively high likelihood (e.g., as compared to the first example). As a consequence of the fact that the two measures almost coincide, the two densities in the bottom-left panel can hardly be distinguished. We observe that the top panels confirm that in both examples (i) p_n(a) decays roughly exponentially in n, and (ii) the number of runs needed grows roughly linearly in n (in the first example slightly sublinearly).
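The stopping rule "simulate until the precision of the estimate has reached ε = 0.1" can be implemented generically: keep generating i.i.d. replications of L·I until the half-width of the (approximate) 95% confidence interval, relative to the running mean, drops below ε. The sketch below uses a toy one-dimensional twisted-exponential example in place of the network model; all names and parameters are illustrative.

```python
import math
import random

def estimate_until_precision(sample_one, eps=0.1, z=1.96,
                             min_runs=100, max_runs=10**6):
    """Run i.i.d. replications sample_one() (each returning the likelihood
    ratio times the indicator, L * I) until the half-width of the approximate
    95% CI divided by the running mean is below eps."""
    n, s1, s2 = 0, 0.0, 0.0
    while n < max_runs:
        x = sample_one()
        n += 1
        s1 += x
        s2 += x * x
        if n >= min_runs:
            mean = s1 / n
            var = max(s2 / n - mean * mean, 0.0)
            half = z * math.sqrt(var / n)
            if mean > 0 and half / mean < eps:
                break
    return s1 / n, n

# Toy placeholder for a single IS run (NOT the network model of the paper):
# estimate P(E > 5) for a standard exponential E by sampling E from
# Exp(1 - theta) and weighting with the likelihood ratio dP/dQ.
theta = 0.8
def one_run():
    e = random.expovariate(1.0 - theta)
    lr = math.exp(-theta * e) / (1.0 - theta)   # dP/dQ at the sampled point
    return lr if e > 5.0 else 0.0

random.seed(42)
p_hat, runs = estimate_until_precision(one_run)
```

In the toy example the true value is e^{−5} ≈ 0.0067, and the CI-based rule terminates after a few thousand runs rather than the roughly 10^5 needed under plain Monte Carlo at the same precision.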

Discussion and concluding remarks
In this paper we have considered the probability of attaining a value in a rare set A at a fixed point in time t. A relevant related quantity is the probability of having reached the set A before t:

P(∃s ≤ t : Y_n^{(1)}(s) ≥ na_1, …, Y_n^{(L)}(s) ≥ na_L);  (17)

observe that this probability increases to 1 as t → ∞. Alternatively, one could study the probability that all levels na_ℓ (for ℓ = 1, …, L) are exceeded before t, but not necessarily at the same time:

P(∃s_1 ≤ t : Y_n^{(1)}(s_1) ≥ na_1, …, ∃s_L ≤ t : Y_n^{(L)}(s_L) ≥ na_L).  (18)
Powerful novel sample-path large deviations results by Budhiraja and Nyquist [5], which deal with a general class of multi-dimensional shot-noise processes, may facilitate the development of efficient importance sampling algorithms for non-modulated linear networks.The results in [5] do not cover Markov modulation, though.
In the current setup of Section 4 the speed of the background process is kept fixed, i.e., not scaled by n. For modulated diffusions a sample-path large deviation principle has recently been established in [10] for the case that the background process is sped up by a factor n (which amounts to multiplying the transition rate matrix Q by n); the rate function decouples into (i) a part concerning the rare-event behavior of the background process and (ii) a part concerning the rare-event behavior of the diffusion (conditional on the path of the background process). With a similar result for the Markov-modulated linear networks that we have studied in this paper, one could potentially set up an efficient importance sampling procedure for the probabilities (17) and (18) under this scaling.

Figure 4. Transient expected values and variances of Example 2.