Abstract
We have explored a number of topics motivated by concrete applications. It is time to stitch together these ideas into a complete panorama. In addition, we provide some complements.
Section 15.1 discusses the general question of inference: what can one deduce from observations? Section 15.2 explains the important notion of sufficient statistic: what is the relevant data in a set of observations? Section 15.3 presents the theory of Markov chains where the number of states is infinite. Section 15.4 explains the Poisson process. Section 15.5 discusses the boosting algorithm for choosing among experts. Which drug should you research further? Which noisy channel should one use? These are examples of multi-armed bandit problems. In such problems one faces the trade-off between exploiting known possibilities and exploring potentially more rewarding but less well understood alternatives. Section 15.6 explains a key result for such multi-armed bandit problems. Information Theory studies the limits of communication systems: how fast can one transmit bits reliably over a noisy channel? How many bits should be transmitted to convey some information? Section 15.7 introduces some key concepts and results of Information Theory. When estimating the likelihood of errors or the reliability of some estimates, one usually has to calculate bounds on the probability that a random variable exceeds a given value. Section 15.8 discusses some useful probability bounds. Section 15.9 explains the main ideas of the theory of martingales and shows how it provides a proof of the law of large numbers.
Topics: Inference, Sufficient Statistic, Infinite Markov Chains, Poisson, Boosting, Multi-Armed Bandits, Capacity, Bounds, Martingales, SLLN
15.1 Inference
One key concept that we explored is that of inference. The general problem of inference can be formulated as follows. There is a pair of random quantities (X, Y ). One observes Y and one wants to guess X (Fig. 15.1).
Thus, the goal is to find a function g(⋅) such that \(\hat X := g(Y)\) is close to X, in a sense to be made precise. Here are a few sample problems:
-
X is the weight of a person and Y is her height;
-
X = 1 if a house is on fire, X = 0 otherwise, and Y is a measurement of the CO density at a sensor;
-
X ∈{0, 1}N is a bit string that a transmitter sends and \(Y \in \Re ^{[0, T]}\) is a signal that the receiver receives;
-
Y is one woman’s genome and X = 1 if she develops a specific form of breast cancer and X = 0 otherwise;
-
Y is a vector of characteristics of a movie and of one person and X is the number of stars that the person gives to the movie;
-
Y is the photograph of a person’s face and X = 1 if it is that of a man and X = 0 otherwise;
-
X is a sentence and Y is the signal that a microphone picks up.
We explained a few different formulations of this problem in Chaps. 7 and 9:
-
Known Distribution: We know the joint distribution of (X, Y );
-
Off-Line: We observe a set of sample values of (X, Y );
-
On-Line: We observe successive values of samples of (X, Y );
-
Maximum Likelihood Estimate: We do not want to assume a distribution for X, only the conditional distribution of Y given X; the goal is to find the value of X that makes the observed Y most likely;
-
Maximum A Posteriori Estimate: We know a prior distribution for X and the conditional distribution of Y given X; the goal is to find the value of X that is most likely given Y ;
-
Hypothesis Test: We do not want to assume a distribution for X ∈{0, 1}, only a conditional distribution of Y given X; the goal is to maximize the probability of correctly deciding that X = 1 while keeping the probability that we decide that X = 1 when in fact X = 0 below some given β.
-
MMSE: Given the joint distribution of X and Y , we want to find the function g(Y ) that minimizes E((X − g(Y ))2).
-
LLSE: Given the joint distribution of X and Y , we want to find the linear function a + bY that minimizes E((X − a − bY )2).
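The last formulation can be made concrete with a few lines of code. The following is a minimal sketch (the linear model X = 2 + 3Y + noise and the sample size are made-up demo choices): the LLSE coefficients are b = cov(X, Y)∕var(Y) and a = E(X) − bE(Y), which we estimate from empirical moments.

```python
import numpy as np

# Sketch: estimate the LLSE a + b*Y of X given Y from samples.
# The optimal coefficients are b = cov(X, Y)/var(Y) and a = E(X) - b*E(Y);
# here we plug in empirical moments. The model below is made-up demo data.
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
x = 2.0 + 3.0 * y + rng.normal(size=10_000)  # X = 2 + 3Y + noise

b = np.mean((x - x.mean()) * (y - y.mean())) / np.mean((y - y.mean()) ** 2)
a = x.mean() - b * y.mean()
```

With enough samples, (a, b) should be close to the true coefficients (2, 3) of the demo model.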
15.2 Sufficient Statistic
A useful notion for inference problems is that of a sufficient statistic. We have not discussed this notion so far. It is time to do so.
Definition 15.1 (Sufficient Statistic)
We say that h(Y ) is a sufficient statistic for X if
$$\displaystyle \begin{aligned} f_{Y|X}[y|x] = c(h(y), x) r(y) \end{aligned}$$
for some functions c(⋅, ⋅) and r(⋅) or, equivalently, if the conditional distribution of Y given {X = x, h(Y ) = z} does not depend on x.
◇
We leave the verification of this equivalence to the reader.
Before we discuss the meaning of this definition, let us explore some implications. First note that if we have a prior f X(x) and we want to calculate MAP[X|Y = y], we have
$$\displaystyle \begin{aligned} MAP[X|Y = y] = \arg \max_x f_X(x) f_{Y|X}[y|x] = \arg \max_x f_X(x) c(h(y), x), \end{aligned}$$
where the second equality uses the factorization f Y |X[y|x] = c(h(y), x)r(y) of the definition and the fact that the factor r(y) does not affect the maximization over x. Consequently, the maximizer is some function of h(y). Hence,
$$\displaystyle \begin{aligned} MAP[X|Y] = g(h(Y)) \end{aligned}$$
for some function g(⋅). In words, the information in Y that is useful to calculate MAP[X|Y ] is contained in h(Y ).
In the same way, we see that MLE[X|Y ] is also a function of h(Y ).
Observe also that
$$\displaystyle \begin{aligned} f_{X|Y}[x|y] = \frac{f_X(x) f_{Y|X}[y|x]}{f_Y(y)}. \end{aligned}$$
Now, using the factorization f Y |X[y|x] = c(h(y), x)r(y),
$$\displaystyle \begin{aligned} f_Y(y) = \int f_X(x') f_{Y|X}[y|x'] dx' = r(y) d(h(y)), \end{aligned}$$
where
$$\displaystyle \begin{aligned} d(h(y)) := \int f_X(x') c(h(y), x') dx'. \end{aligned}$$
Hence,
$$\displaystyle \begin{aligned} f_{X|Y}[x|y] = \frac{f_X(x) c(h(y), x)}{d(h(y))}. \end{aligned}$$
Thus, the conditional density of X given Y depends only on h(Y ). Consequently, E[X|Y ] = MMSE[X|Y ] is also a function of h(Y ).
Now, consider the hypothesis testing problem when X ∈{0, 1}. Note that
$$\displaystyle \begin{aligned} L(y) := \frac{f_{Y|X}[y|1]}{f_{Y|X}[y|0]} = \frac{c(h(y), 1)}{c(h(y), 0)}, \end{aligned}$$
by the factorization of the definition. Thus, the likelihood ratio depends only on h(y) and it follows that the solution of the hypothesis testing problem is also a function of h(Y ).
15.2.1 Interpretation
The definition of sufficient statistic is quite abstract. The intuitive meaning is that if h(Y ) is sufficient for X, then Y is some function of h(Y ) and a random variable Z that is independent of X and h(Y ). That is,
$$\displaystyle \begin{aligned} Y = g(h(Y), Z), \mbox{ where } Z \mbox{ is independent of } (X, h(Y)). \end{aligned} $$(15.1)
For instance, say that Y = (Y 1, …, Y n) where the Y m are i.i.d. and Bernoulli with parameter X ∈ [0, 1]. Let h(Y ) = Y 1 + ⋯ + Y n. Then we can think of Y as being constructed from h(Y ) by selecting at random which h(Y ) of the random variables (Y 1, …, Y n) are equal to one. This random choice is determined by some independent random variable Z. In such a case, we see that Y does not contain any information about X that is not already in h(Y ).
To see the equivalence between this interpretation and the definition, first assume that (15.1) holds. Then, given {X = x, h(Y ) = z}, one has Y = g(z, Z) where Z is independent of X, so that the conditional distribution of Y given {X = x, h(Y ) = z} does not depend on x and h(Y ) is sufficient for X. Conversely, if h(Y ) is sufficient for X, then we can find some Z such that g(h(y), Z) has the density \(f_{Y | h(Y)}[y | h(y)]\).
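The Bernoulli example can be checked numerically: with a uniform prior on X, the posterior density of X given Y = y depends on y only through the sum h(y). The following is a small sketch (the grid discretization of [0, 1] is an implementation choice):

```python
import numpy as np

# Two observation strings with the same sum h(y) = 2 but different patterns.
y1 = np.array([1, 1, 0, 0, 0])
y2 = np.array([0, 0, 1, 0, 1])

# Posterior of X on a grid, with a uniform prior on [0, 1]:
# f(x | y) is proportional to x^{h(y)} (1 - x)^{n - h(y)}.
x = np.linspace(0.001, 0.999, 999)

def posterior(y):
    lik = x ** y.sum() * (1 - x) ** (len(y) - y.sum())
    return lik / lik.sum()

p1, p2 = posterior(y1), posterior(y2)  # identical: same sufficient statistic
```

The two posteriors coincide exactly, as the theory predicts: only the sum of the observations matters.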
15.3 Infinite Markov Chains
We studied Markov chains on a finite state space \(\mathcal {X} = \{1, 2, \ldots , N\}\). Let us explore the countably infinite case where \(\mathcal {X} = \{0, 1, \ldots \}\).
One is given an initial distribution \(\pi = \{\pi (x), x \in \mathcal {X}\}\), where π(x) ≥ 0 and \(\sum _{x \in \mathcal {X}} \pi (x) = 1\). Also, one is given a set of nonnegative numbers \(\{P(x, y), x, y \in \mathcal {X}\}\) such that
$$\displaystyle \begin{aligned} \sum_{y \in \mathcal{X}} P(x, y) = 1, \quad \mbox{for all } x \in \mathcal{X}. \end{aligned}$$
The sequence {X(n), n ≥ 0} is then a Markov chain with initial distribution π and probability transition matrix P if
$$\displaystyle \begin{aligned} P(X(0) = x_0, X(1) = x_1, \ldots, X(n) = x_n) = \pi(x_0) P(x_0, x_1) \cdots P(x_{n-1}, x_n) \end{aligned}$$
for all n ≥ 0 and all x 0, …, x n in \(\mathcal {X}\).
One defines irreducible and aperiodic as in the case of a finite Markov chain. Recall that if a finite Markov chain is irreducible, then it visits all its states infinitely often and it spends a positive fraction of time in each state.
That may not happen when the Markov chain is infinite. To see this, consider the following example (see Fig. 15.2). One has π(0) = 1 and, for i ≥ 1, P(i, i + 1) = p and P(i, i − 1) = q := 1 − p; also, P(0, 1) = p and P(0, 0) = q.
Assume that p ∈ (0, 1). Then the Markov chain is irreducible. However, it is intuitively clear that X(n) →∞ as n →∞ if p > 0.5. To see that this is indeed the case, let Z(n) be i.i.d. random variables with P(Z(n) = 1) = p and P(Z(n) = −1) = q. Then note that
$$\displaystyle \begin{aligned} X(n+1) \geq X(n) + Z(n+1), n \geq 0, \end{aligned}$$
so that
$$\displaystyle \begin{aligned} X(n) \geq Z(1) + \cdots + Z(n), n \geq 1. \end{aligned}$$
Also,
$$\displaystyle \begin{aligned} \frac{Z(1) + \cdots + Z(n)}{n} \rightarrow E(Z(1)) = p - q > 0, \end{aligned}$$
where the convergence follows by the SLLN. This implies that X(n) →∞, as claimed.
Thus, X(n) eventually is larger than any given N and remains larger. This shows that X(n) visits every state only finitely many times. We say that the states are transient because they are visited only finitely often.
We say that a state is recurrent if it is not transient. In that case, the state is called positive recurrent if the average time between successive visits is finite; otherwise it is called null recurrent.
Here is the result that corresponds to Theorem 1.1:
Theorem 15.1 (Big Theorem for Infinite Markov Chains)
Consider an infinite Markov chain.
-
(a)
If the Markov chain is irreducible, the states are either all transient, all positive recurrent, or all null recurrent. We then say that the Markov chain is transient, positive recurrent, or null recurrent, respectively.
-
(b)
If the Markov chain is positive recurrent, it has a unique invariant distribution π and π(i) is the long-term fraction of time that X(n) is equal to i.
-
(c)
If the Markov chain is positive recurrent and also aperiodic, then the distribution π n of X(n) converges to π.
-
(d)
If the Markov chain is not positive recurrent, it does not have an invariant distribution and the fraction of time that it spends in any state goes to zero.
\({\blacksquare }\)
It turns out that the Markov chain in Fig. 15.2 is null recurrent for p = 0.5 and positive recurrent for p < 0.5. In the latter case, its invariant distribution is
$$\displaystyle \begin{aligned} \pi(i) = (1 - \rho) \rho^i, i \geq 0, \mbox{ where } \rho := \frac{p}{1 - p}. \end{aligned}$$
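This geometric invariant distribution can be checked by simulation. The sketch below assumes the chain of Fig. 15.2 is the random walk reflected at 0, i.e., X(n + 1) = max(X(n) + Z(n + 1), 0):

```python
import numpy as np

# Simulate the reflected random walk with p < 0.5 and compare the
# empirical fraction of time in state 0 with pi(0) = 1 - rho, rho = p/q.
rng = np.random.default_rng(0)
p = 0.3
steps = rng.choice([1, -1], size=200_000, p=[p, 1 - p])

x, visits0 = 0, 0
for z in steps:
    x = max(x + z, 0)       # reflected random walk
    visits0 += (x == 0)

rho = p / (1 - p)                 # = 3/7 here
empirical = visits0 / len(steps)  # should be close to 1 - rho = 4/7
```

The long-run fraction of time in state 0 matches π(0) = 1 − ρ, as part (b) of the big theorem predicts.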
15.3.1 Lyapunov–Foster Criterion
Here is a useful sufficient condition for positive recurrence.
Theorem 15.2 (Lyapunov–Foster)
Let X(n) be an irreducible Markov chain on an infinite state space \(\mathcal {X}\) . Assume there exists some function \(V: \mathcal {X} \rightarrow [0, \infty )\) such that
$$\displaystyle \begin{aligned} E[V(X(n+1)) - V(X(n)) \mid X(n) = x] \leq - \alpha, \mbox{ for } x \notin A, \end{aligned}$$
and
$$\displaystyle \begin{aligned} E[V(X(n+1)) \mid X(n) = x] \leq \beta, \mbox{ for } x \in A, \end{aligned}$$
where A is a finite set, α > 0 and β > 0.
Then the Markov chain is positive recurrent.
Such a function V is said to be a Lyapunov function for the Markov chain. \({\blacksquare }\)
The condition means that the Lyapunov function decreases by at least α on average when X(n) is outside some finite set A. The intuitive reason why this makes the Markov chain positive recurrent is that, since the Lyapunov function is nonnegative, it cannot decrease forever. Thus, it must spend a positive fraction of time inside the finite set A. By the big theorem, this implies that it is positive recurrent.
15.4 Poisson Process
The Poisson process is an important model in applied probability. It is a good approximation of the arrivals of packets at a router, of telephone calls, of new TCP connections, of customers at a cashier.
15.4.1 Definition
We start with a definition of the Poisson process. (See Fig. 15.3.)
Definition 15.2 (Poisson Process)
Let λ > 0 and {S 1, S 2, …} be i.i.d. Exp(λ) random variables. Let also T n = S 1 + ⋯ + S n for n ≥ 1. Define
$$\displaystyle \begin{aligned} N_t = n \mbox{ for } t \in [T_n, T_{n+1}), n \geq 1, \end{aligned}$$
with N t = 0 if t < T 1. Then, N := {N t, t ≥ 0} is a Poisson process with rate λ. Note that T n is the n-th jump time of N. ◇
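The definition translates directly into a simulation (a minimal sketch; the rate and horizon are made-up values):

```python
import numpy as np

# Construct a Poisson process of rate lam on [0, t]: the inter-jump
# times S_n are i.i.d. Exp(lam), the jump times are T_n = S_1 + ... + S_n,
# and N_t counts the jumps up to time t.
rng = np.random.default_rng(42)
lam, t = 2.0, 10.0

S = rng.exponential(1 / lam, size=100)  # many more jumps than needed on [0, t]
T = np.cumsum(S)                        # jump times T_n
N_t = int(np.sum(T <= t))               # N_t = number of jumps in [0, t]
```

Note that `T[N_t]` is the first jump time after t, consistent with N being constant on [T_n, T_{n+1}).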
15.4.2 Independent Increments
Before exploring the properties of the Poisson process, we recall two properties of the exponential distribution.
Theorem 15.3 (Properties of Exponential Distribution)
Let τ be exponentially distributed with rate λ > 0. That is,
$$\displaystyle \begin{aligned} P(\tau > t) = \exp\{- \lambda t\}, t \geq 0. \end{aligned}$$
In particular, the pdf of τ is \(f_\tau (t) = \lambda \exp \{- \lambda t\}\) for t ≥ 0. Also, E(τ) = λ −1 and var(τ) = λ −2.
Then,
$$\displaystyle \begin{aligned} P[\tau > t + s \mid \tau > s] = P(\tau > t), \mbox{ for all } s, t \geq 0. \end{aligned}$$
This is the memoryless property of the exponential distribution.
Also, for 𝜖 > 0 small,
$$\displaystyle \begin{aligned} P[\tau \leq t + \epsilon \mid \tau > t] = \lambda \epsilon + o(\epsilon). \end{aligned}$$
\({\blacksquare }\)
Proof
One has
$$\displaystyle \begin{aligned} P[\tau > t + s \mid \tau > s] = \frac{P(\tau > t + s)}{P(\tau > s)} = \frac{\exp\{-\lambda (t + s)\}}{\exp\{-\lambda s\}} = \exp\{-\lambda t\} = P(\tau > t). \end{aligned}$$
Also, P[τ ≤ t + 𝜖 ∣ τ > t] = 1 − P[τ > t + 𝜖 ∣ τ > t] = 1 −exp{−λ𝜖} = λ𝜖 + o(𝜖). □
The interpretation of this property is that if a lightbulb has an exponentially distributed lifetime, then an old bulb is exactly as good as a new one (as long as it is still burning).
We use this property to show that the Poisson process is also memoryless, in a precise sense.
Theorem 15.4 (Poisson Process Is Memoryless)
Let N := {N t, t ≥ 0} be a Poisson process with rate λ. Fix t > 0. Given {N s, s ≤ t}, the process {N t+s − N t, s ≥ 0} is a Poisson process with rate λ.
As a consequence, the process has stationary and independent increments. That is, for any 0 ≤ t 1 < t 2 < ⋯, the increments \(\{N_{t_{n+1}} - N_{t_n}, n \geq 1\}\) of the Poisson process are independent and the distribution of \(N_{t_{n+1}} - N_{t_n}\) depends only on t n+1 − t n. \({\blacksquare }\)
Proof
Figure 15.4 illustrates that result. Given {N s, s ≤ t}, the first jump time of {N s+t − N t, s ≥ 0} is Exp(λ), by the memoryless property of the exponential distribution. The subsequent inter-jump times are i.i.d. and Exp(λ). This proves the theorem. □
15.4.3 Number of Jumps
One has the following result.
Theorem 15.5 (The Number of Jumps Is Poisson)
Let N := {N t, t ≥ 0} be a Poisson process with rate λ. Then N t has a Poisson distribution with mean λt. \({\blacksquare }\)
Proof
There are a number of ways of showing this result. The standard way is as follows. Note that, since the probability that the process jumps in (t, t + 𝜖] is λ𝜖 + o(𝜖),
$$\displaystyle \begin{aligned} P(N_{t + \epsilon} = 0) = P(N_t = 0)(1 - \lambda \epsilon + o(\epsilon)). \end{aligned}$$
Hence,
$$\displaystyle \begin{aligned} \frac{d}{dt} P(N_t = 0) = - \lambda P(N_t = 0). \end{aligned}$$
Thus,
$$\displaystyle \begin{aligned} P(N_t = 0) = P(N_0 = 0) \exp\{- \lambda t\}. \end{aligned}$$
Since P(N 0 = 0) = 1, this shows that \(P(N_t = 0) = \exp \{- \lambda t\}\) for t ≥ 0. Now, assume that
$$\displaystyle \begin{aligned} P(N_t = n) = g(n, t) \exp\{- \lambda t\}, n \geq 0. \end{aligned}$$
Then, the same argument gives the differential equation
$$\displaystyle \begin{aligned} \frac{d}{dt} P(N_t = n) = - \lambda P(N_t = n) + \lambda P(N_t = n - 1), \end{aligned}$$
which shows that
$$\displaystyle \begin{aligned} \frac{d}{dt} g(n, t) = \lambda g(n - 1, t), \end{aligned}$$
i.e.,
$$\displaystyle \begin{aligned} g(n, t) = \lambda \int_0^t g(n - 1, s) ds. \end{aligned}$$
This expression shows by induction that \(g(n, t) = \frac {(\lambda t)^n}{n!}\).
A different proof makes use of the density of the jumps. Let T n be the n-th jump of the process and S n = T n − T n−1, as before. Then, for 0 < t 1 < ⋯ < t n < t,
$$\displaystyle \begin{aligned} f_{T_1, \ldots, T_n, N_t = n}(t_1, \ldots, t_n) = \lambda e^{- \lambda t_1} \lambda e^{- \lambda (t_2 - t_1)} \cdots \lambda e^{- \lambda (t_n - t_{n-1})} e^{- \lambda (t - t_n)} = \lambda^n e^{- \lambda t}. \end{aligned}$$
To derive this expression, we used the fact that the S n are i.i.d. Exp(λ). The expression above shows that, given that there are n jumps in [0, t], they are equally likely to be anywhere in the interval. Also,
$$\displaystyle \begin{aligned} P(N_t = n) = \int_S \lambda^n e^{- \lambda t} dt_1 \cdots dt_n = \lambda^n e^{- \lambda t} |S|, \end{aligned}$$
where S = {t 1, …, t n|0 < t 1 < ⋯ < t n < t}. Now, observe that S is a fraction of [0, t]n that corresponds to the times t i being in a particular order. There are n! such orders and, by symmetry, each order corresponds to a subset of [0, t]n of the same size. Thus, the volume of S is t n∕n!. We conclude that
$$\displaystyle \begin{aligned} P(N_t = n) = \frac{(\lambda t)^n}{n!} e^{- \lambda t}, \end{aligned}$$
which proves the result. □
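The theorem can be checked numerically: a Poisson random variable with mean λt has variance λt as well, so both empirical moments of simulated counts should be close to λt. A minimal sketch using the construction from the definition:

```python
import numpy as np

# Simulate many replications of N_t built from Exp(lam) inter-jump times
# and compare the empirical mean and variance with lam * t.
rng = np.random.default_rng(0)
lam, t, runs = 2.0, 5.0, 20_000

S = rng.exponential(1 / lam, size=(runs, 60))  # inter-jump times per run
T = np.cumsum(S, axis=1)                       # jump times per run
N = (T <= t).sum(axis=1)                       # N_t in each run

mean, var = N.mean(), N.var()                  # both close to lam * t = 10
```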
15.5 Boosting
You follow the advice of some investment experts when you buy stocks. Their recommendations are often contradictory. How do you make your decisions so that, in retrospect, you are not doing too badly compared to the best of the experts? The intuition is that you should try to follow the leader, but randomly. To make the situation concrete, Fig. 15.5 shows three experts (B, I, T) and the profits one would make by following their advice on the successive days.
On a given day, you choose which expert to follow the next day. Figure 15.6 shows your profit if you make the sequence of selections indicated by the red circles. In these selections, you choose to follow B the first 2 days, then I the next 2 days, then T the last day. Of course, you have to choose the day before, and the actual profit is only known the next day. The figure also shows the regrets that you accumulate when comparing your profit to that of the three experts. Your total profit is − 5 and the profit you would have made if you had followed B all the time would have been − 2, so your regret compared to B is − 2 − (−5) = 3, and similarly for the other two experts.
The problem is to make the expert selection every day so as to minimize the worst regret, i.e., the regret with respect to the most successful expert. More precisely, the goal is to minimize the rate of growth of the worst regret. Here is the result.
Theorem 15.6 (Minimum Regret Algorithm)
Generally, the worst regret grows like \(O(\sqrt {n})\) with the number n of steps. One algorithm that achieves this rate of regret is to choose expert E at step n + 1 with probability π n+1(E) given by
$$\displaystyle \begin{aligned} \pi_{n+1}(E) = \frac{\exp\{\eta P_n(E)\}}{A_n}, \end{aligned}$$
where η > 0 is a constant, A n is such that these probabilities add up to one, and P n(E) is the profit that expert E makes in the first n days. \({\blacksquare }\)
Thus, the algorithm favors successful experts. However, the algorithm makes random selections. It is easy to construct examples where a deterministic algorithm accumulates a regret that grows like n.
Figure 15.7 shows a simulation of three experts and of the selection algorithm in the theorem. The experts are random walks with drift 0.1. The simulation shows that the selection algorithm tends to fall behind the best expert by \(O(\sqrt {n})\).
The proof of the theorem can be found in Cesa-Bianchi and Lugosi (2006).
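The selection rule of the theorem can be sketched in a few lines (the daily profit table and the value of η are made-up demo choices; in practice η is often taken of order \(1/\sqrt {n}\)):

```python
import numpy as np

# Exponential weights: follow expert E on day n+1 with probability
# proportional to exp(eta * P_n(E)), where P_n(E) is E's cumulative profit.
rng = np.random.default_rng(0)
eta = 0.5
profits = np.array([[1, -1, 2], [0, 1, -1], [2, 0, 1]], dtype=float)  # day x expert

cum = np.zeros(3)   # P_n(E) for the three experts
total = 0.0
for day in profits:
    w = np.exp(eta * cum)
    pi = w / w.sum()            # selection probabilities (A_n normalizes)
    choice = rng.choice(3, p=pi)
    total += day[choice]        # profit of the expert you followed
    cum += day                  # update cumulative profits
```

Note that the probabilities are computed from the profits accumulated *before* the day being played, as in the theorem: you must choose the day before.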
15.6 Multi-Armed Bandits
Here is a classical problem. You are given two coins, both with an unknown bias (the probability of heads). At each step k = 1, 2, … you choose a coin to flip. Your goal is to accumulate heads as fast as possible. Let X k be the number of heads you accumulate after k steps. Let also \(X_k^*\) be the number of heads that you would accumulate if you always flipped the coin with the largest bias. The regret of your strategy after k steps is defined as
$$\displaystyle \begin{aligned} R_k = E(X_k^*) - E(X_k). \end{aligned}$$
Let θ 1 and θ 2 be the bias of coins 1 and 2, respectively. Then \(E(X_k^*) = k \max \{\theta _1, \theta _2\}\) and the best strategy is to flip the coin with the largest bias at each step. However, since the two biases are unknown, you cannot use that strategy. We explain below that there is a strategy such that the regret grows like \(\log (k)\) with the number of steps.
Any good strategy keeps on estimating the biases. Indeed, any strategy that stops estimating and then forever flips the coin that is believed to be best has a positive probability of getting stuck with the worst coin, thus accumulating a regret that grows linearly over time. Thus, a good strategy must constantly explore, i.e., flip both coins to learn their bias.
However, a good strategy should exploit the estimates by flipping the coin that is believed to be better more frequently than the other. Indeed, if you were to flip the two coins the same fraction of time, the regret would also grow linearly. Hence, a good strategy must exploit the accumulated knowledge about the biases.
The key question is how to balance exploration and exploitation. The strategy called Thompson Sampling does this optimally. Assume that the biases θ 1 and θ 2 of the two coins are independent and uniformly distributed in [0, 1]. Say that you have flipped the coins a number of times. Given the outcomes of these coin flips, one can in principle compute the conditional distributions of θ 1 and θ 2. Given these conditional distributions, one can calculate the probability that θ 1 > θ 2. The Thompson Sampling strategy is to choose coin 1 with that probability and coin 2 otherwise for the next flip. Here is the key result.
Theorem 15.7 (Minimum Regret of Thompson Sampling)
If the coins have different biases, then the regret R k of any strategy grows at least like \(O(\log {k})\) with the number k of steps.
Moreover, Thompson Sampling achieves this lower bound. \({\blacksquare }\)
The notation \(O(\log {k})\) indicates a function g(k) that grows like \(\log {k}\), i.e., such that \(g(k)/\log {k}\) converges to a positive constant as k →∞.
Thus this strategy does not necessarily choose the coin with the largest expected bias. It is the case that the strategy favors the coin that has been more successful so far, thus exploiting the information. But the selection is random, which contributes to the exploration.
One can show that if flips of coin 1 have produced h heads and t tails, then the conditional density of θ 1 is g(θ;h, t), where
$$\displaystyle \begin{aligned} g(\theta; h, t) = \frac{(h + t + 1)!}{h! \, t!} \theta^h (1 - \theta)^t, \theta \in [0, 1]. \end{aligned}$$
The same result holds for coin 2. Thus, Thompson Sampling generates \(\hat \theta _1\) and \(\hat \theta _2\) according to these densities.
For a proof of this result, see Agrawal and Goyal (2012). See also Russo et al. (2018) for applications of multi-armed bandits.
A rough justification of the result goes as follows. Say that θ 1 > θ 2. One can show that after flipping coin 2 a number n of times, it takes about n steps until you flip it again when using Thompson Sampling. Your regret then grows by one at times 1, 1 + 1, 2 + 2, 4 + 4, …, \(2^n, 2^{n+1}\), …. Thus, the regret is of order n after \(O(2^n)\) steps. Equivalently, after \(N = 2^n\) steps, the regret is of order \(n = \log {N}\).
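Thompson Sampling itself is only a few lines: sample \(\hat \theta _i\) from each coin's Beta posterior, flip the coin with the larger sample, and update that coin's counts. The following is a minimal sketch (the true biases 0.8 and 0.4 and the horizon are made-up demo values):

```python
import numpy as np

# Thompson Sampling for two coins with Beta(1, 1) uniform priors:
# the posterior after h heads and t tails is Beta(h + 1, t + 1).
rng = np.random.default_rng(0)
theta = [0.8, 0.4]                    # true (unknown) biases
heads = np.ones(2)                    # Beta parameters: heads + 1
tails = np.ones(2)                    # Beta parameters: tails + 1
pulls = np.zeros(2, dtype=int)

for _ in range(2000):
    samples = rng.beta(heads, tails)  # one posterior draw per coin
    i = int(samples.argmax())         # flip the coin with the larger draw
    pulls[i] += 1
    if rng.random() < theta[i]:
        heads[i] += 1
    else:
        tails[i] += 1
```

As the text explains, the better coin ends up being flipped far more often, while the random posterior draws keep a small amount of exploration alive.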
15.7 Capacity of BSC
Consider a binary symmetric channel with error probability p ∈ (0, 0.5). Every bit that the transmitter sends has a chance of being corrupted. Thus, it is impossible to transmit any bit string fully reliably across this channel. No matter what the transmitter sends, the receiver can never be sure that it got the message right.
However, one might be able to achieve a very small probability of error. For instance, say that p = 0.1 and that one transmits a bit by repeating it N times, where N ≫ 1. As the receiver gets the N bits, it uses a majority decoding. That is, if it gets more zeros than ones, it decides that the transmitter sent a zero, and conversely for a one. The probability of error can be made arbitrarily small by choosing N very large. However, this scheme gets to transmit only one bit every N steps. We say that the rate of the channel is 1∕N and it seems that to achieve a very small error probability, the rate has to become negligible.
It turns out that our pessimistic conclusion is wrong. Claude Shannon (Fig. 15.8), in the late 1940s, explained that the channel can transmit at any rate less than C(p), where (see Fig. 15.9)
$$\displaystyle \begin{aligned} C(p) = 1 - H(p) \mbox{ with } H(p) := - p \log_2 p - (1 - p) \log_2 (1 - p), \end{aligned}$$
with a probability of error less than 𝜖, for any 𝜖 > 0.
For instance, C(0.1) ≈ 0.53. Fix a rate less than C(0.1), say R = 0.5. Pick any 𝜖 > 0, say 𝜖 = 10−8. Then, it is possible to transmit bits across this channel at rate R = 0.5, with a probability of error per bit less than 10−8. The same is true if we choose 𝜖 = 10−12: it is possible to transmit at the same rate R with a probability of error less than 10−12. The actual scheme that we use depends on 𝜖, and it becomes more complex when 𝜖 is smaller; however, the rate R does not depend on 𝜖. Quite a remarkable result! Needless to say, it baffled all the engineers who had been busily designing various ad hoc transmission schemes.
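The capacity C(p) = 1 − H(p), with H(p) the binary entropy in bits, is easy to evaluate; here is a quick numeric check of C(0.1) ≈ 0.53:

```python
import math

# Binary entropy H(p) in bits and BSC capacity C(p) = 1 - H(p).
def H(p: float) -> float:
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def C(p: float) -> float:
    return 1.0 - H(p)

c01 = C(0.1)  # approximately 0.531
```

Note that C(0.5) = 0: a channel that flips each bit with probability one half conveys no information.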
Shannon’s key insight is that long sequences are typical. There is a statistical regularity in random sequences such as Markov chains or i.i.d. random variables and this regularity manifests itself in a characteristic of long sequences. For instance, flip many times a biased coin with P(head) = 0.1. The sequence that you will observe is likely to have about 10% of heads. Many other sequences are so unlikely that you will not see them. Thus, there are relatively few long sequences that are possible. In this example, although there are \(M = 2^N\) possible sequences of N coin flips, only about \(\sqrt {M}\) are typical when P(head) = 0.1. Moreover, by symmetry, these typical sequences are all equally likely. For that reason, the errors of the BSC must correspond to relatively few patterns. Say that there are only A possible patterns of errors for N transmissions. Then, any bit string of length N that the sender transmits will correspond to A possible received “output” strings: one for every typical error sequence. Thus, it might be possible to choose B different “input” strings of length N for the transmitter so that the A received “output” strings for each one of these B input strings are all distinct. However, one might worry that choosing the B input strings would be rather complex if we want their sets of output strings to be distinct.
Shannon noticed that if we pick the input strings completely randomly, this will work. Thus, Shannon’s scheme is as follows. Pick a large N. Choose B strings of N bits randomly, each time by flipping a fair coin N times. Call these input strings X 1, …, X B. These are the codewords. Let S 1 be the set of A typical outputs that correspond to X 1. Let Y j be the output that corresponds to input X j. Note that the Y j are sequences of fair coin flips, by symmetry of the channel. Thus, each Y j is equally likely to be any one of the \(2^N\) possible output strings. In particular, the probability that Y j falls in S 1 is \(A/2^N\) (Fig. 15.10).
In fact,
$$\displaystyle \begin{aligned} P(Y_j \in S_1 \mbox{ for some } j \neq 1) \leq \sum_{j \neq 1} P(Y_j \in S_1) \leq B \frac{A}{2^N}. \end{aligned}$$
Indeed, the probability of a union of events is not larger than the sum of their probabilities. We explain below that \(A = 2^{N H(p)}\). Thus, if we choose \(B = 2^{NR}\), we see that the expression above is less than or equal to
$$\displaystyle \begin{aligned} 2^{NR} \frac{2^{N H(p)}}{2^N} = 2^{N(R + H(p) - 1)}, \end{aligned}$$
and this expression goes to zero as N increases, provided that
$$\displaystyle \begin{aligned} R < 1 - H(p) = C(p). \end{aligned}$$
Thus, the receiver makes an error with a negligible probability if one does not choose too many codewords. Note that \(B = 2^{NR}\) corresponds to transmitting NR different bits in N steps, thus transmitting at rate R.
How does the receiver recognize the bit string that the transmitter sent? The idea is to give the list of the B input strings, i.e., codewords, to the receiver. When it receives a string, the receiver looks in the list to find the codeword that is the closest to the string it received. With a very high probability, it is the string that the transmitter sent.
It remains to show that \(A = 2^{N H(p)}\). Fortunately, this calculation is a simple consequence of the SLLN. Let X := {X(n), n = 1, …, N} be i.i.d. random variables with P(X(n) = 1) = p and P(X(n) = 0) = 1 − p. For a given sequence x = (x(1), …, x(N)) ∈{0, 1}N, let
$$\displaystyle \begin{aligned} \psi(\mathbf{x}) := P(\mathbf{X} = \mathbf{x}). \end{aligned}$$
Note that, with \(|\mathbf {x}| := \sum _{n=1}^N x(n)\),
$$\displaystyle \begin{aligned} \psi(\mathbf{x}) = p^{|\mathbf{x}|} (1 - p)^{N - |\mathbf{x}|}. \end{aligned} $$(15.3)
Thus, the random string X of N bits is such that
$$\displaystyle \begin{aligned} \log_2 \psi(\mathbf{X}) = |\mathbf{X}| \log_2 p + (N - |\mathbf{X}|) \log_2 (1 - p). \end{aligned}$$
But we know from the SLLN that |X|∕N → p as N →∞. Thus, for N ≫ 1,
$$\displaystyle \begin{aligned} \log_2 \psi(\mathbf{X}) \approx N [p \log_2 p + (1 - p) \log_2 (1 - p)] = - N H(p). \end{aligned}$$
This calculation shows that any sequence x of values that X takes has approximately the same value of ψ(x). But, by (15.3), this implies that all the sequences x that occur have approximately the same probability
$$\displaystyle \begin{aligned} \psi(\mathbf{x}) \approx 2^{- N H(p)}. \end{aligned}$$
We conclude that there are \(2^{N H(p)}\) typical sequences and that they are all essentially equally likely. Thus, \(A = 2^{N H(p)}\).
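The concentration of \(- \frac {1}{N} \log _2 \psi (\mathbf {X})\) near H(p) can be checked by simulation (a minimal sketch; p = 0.1 and the string length are demo values):

```python
import math
import numpy as np

# Draw a long biased bit string and check that -log2 of its probability,
# per bit, is close to the entropy H(p).
rng = np.random.default_rng(0)
p, N = 0.1, 100_000
x = rng.random(N) < p                  # Bernoulli(p) bits
k = int(x.sum())                       # number of ones, |x|

log2_psi = k * math.log2(p) + (N - k) * math.log2(1 - p)
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
per_bit = -log2_psi / N                # should be close to H(0.1) = 0.469...
```

In other words, every string you are likely to observe has probability about \(2^{-N H(p)}\), which is exactly the typicality claim.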
Recall that for the Gaussian channel with the MLE detection rule, the channel becomes a BSC with
$$\displaystyle \begin{aligned} p(\sigma^2) = P(\mathcal{N}(0, \sigma^2) > 1) = 1 - \varPhi(1/\sigma), \end{aligned}$$
where Φ is the c.d.f. of \(\mathcal{N}(0, 1)\).
Accordingly, we can calculate the capacity C(p(σ 2)) as a function of the noise standard deviation σ. Figure 15.11 shows the result.
These results of Shannon on the capacity, or achievable rates, of channels have had a profound impact on the design of communication systems. Suddenly, engineers had a target and they knew how far or how close their systems were to the feasible rate. Moreover, the coding scheme of Shannon, although not really practical, provided a valuable insight into the design of codes for specific channels. Shannon’s theory, called Information Theory, is an inspiring example of how a profound conceptual insight can revolutionize an engineering field.
Another important part of Shannon’s work concerns the coding of random objects. For instance, how many bits does it take to encode a 500-page book? Once again, the relevant notion is that of typicality. As an example, we know that to encode a string of N flips of a biased coin with P(head) = p, we need only NH(p) bits, because this is the number of typical sequences. Here, H(p) is called the entropy of the coin flip. Similarly, if {X(n), n ≥ 1} is an irreducible, finite, and aperiodic Markov chain with invariant distribution π and transition probabilities P(i, j), then one can show that to encode {X(1), …, X(N)} one needs approximately NH(P) bits, where
$$\displaystyle \begin{aligned} H(P) := - \sum_i \pi(i) \sum_j P(i, j) \log_2 P(i, j) \end{aligned}$$
is called the entropy rate of the Markov chain. A practical scheme, called Lempel–Ziv compression, essentially achieves this limit. It is the basis for most file compression algorithms (e.g., ZIP).
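The entropy rate is a direct computation once the invariant distribution is known. A sketch for a small chain (the two-state transition matrix is a made-up example):

```python
import numpy as np

# Entropy rate H(P) = - sum_i pi(i) sum_j P(i,j) log2 P(i,j),
# with pi the invariant distribution (left eigenvector of P for eigenvalue 1).
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])

w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()                      # normalize so pi sums to 1

HP = -np.sum(pi[:, None] * P * np.log2(P))
```

For this symmetric chain, π = (0.5, 0.5) and H(P) equals the binary entropy H(0.1) ≈ 0.469 bits per symbol.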
Shannon put these two ideas together: channel capacity and source coding. Here is an example of his source–channel coding result. How fast can one send the symbols X(n) produced by the Markov chain through a BSC channel? The answer is C(p)∕H(P). Intuitively, it takes H(P) bits per symbol X(n) and the BSC can send C(p) bits per unit time. Moreover, to accomplish this rate, one first encodes the source and one separately chooses the codewords for the BSC, and one then uses them together. Thus, the channel coding is independent of the source coding and vice versa. This is called the separation theorem of Claude Shannon.
15.8 Bounds on Probabilities
We explain how to derive estimates of probabilities using Chebyshev and Chernoff’s inequalities and also using the Gaussian approximation. These methods also provide a useful insight into the likelihood of events. The power of these methods is that they can be used in very complex situations.
Theorem 15.8 (Markov, Chernoff, and Jensen Inequalities)
Let X be a random variable. Then one has
-
(a)
Markov’s Inequality: Footnote 1
$$\displaystyle \begin{aligned} P( X \geq a) \leq \frac{E(f(X))}{f(a)}, \end{aligned} $$(15.4)for all f(⋅) that is nondecreasing and positive.
-
(b)
Chernoff’s Inequality (Fig. 15.12):Footnote 2
$$\displaystyle \begin{aligned} P( X \geq a) \leq E( \exp\{\theta (X - a)\} ), \end{aligned} $$(15.5)for all θ > 0.
-
(c)
Jensen’s Inequality (Fig. 15.13):Footnote 3
$$\displaystyle \begin{aligned} f(E(X)) \leq E(f(X)), \end{aligned} $$(15.6)for all f(⋅) that is convex.
\({\blacksquare }\)
These results are easy to show, so here is a proof.
Proof
-
(a)
Since f(⋅) is nondecreasing and positive, we have
$$\displaystyle \begin{aligned} 1\{X \geq a\} \leq \frac{f(X)}{f(a)}, \end{aligned}$$so that (15.4) follows by taking expectations.
-
(b)
The inequality (15.5) is a particular case of Markov’s inequality (15.4) for \(f(X) = \exp \{\theta X\}\) with θ > 0.
-
(c)
Let f(⋅) be a convex function. This means that it lies above any tangent. In particular,
$$\displaystyle \begin{aligned} f(X) \geq f(E(X)) + f'(E(X))(X - E(X)), \end{aligned}$$as shown in Fig. 15.14. The inequality (15.6) then follows by taking expectations.
□
15.8.1 Applying the Bounds to Multiplexing
Recall the multiplexing problem. There are N users who are independently active with probability p. Thus, the number of active users Z is B(N, p). We want to find m so that P(Z ≥ m) = 5%.
As a first estimate of m, we use Chebyshev’s inequality (2.2) which says that
$$\displaystyle \begin{aligned} P(|Z - E(Z)| \geq \epsilon) \leq \frac{var(Z)}{\epsilon^2}. \end{aligned}$$
Now, if Z = B(N, p), one has E(Z) = Np and var(Z) = Np(1 − p).Footnote 4 Hence, for ν = B(100, 0.2), one has E(ν) = 20 and var(ν) = 16. Chebyshev’s inequality gives
$$\displaystyle \begin{aligned} P(|\nu - 20| \geq \epsilon) \leq \frac{16}{\epsilon^2}. \end{aligned}$$
Thus, we expect that
$$\displaystyle \begin{aligned} P(\nu \geq 20 + \epsilon) \leq \frac{8}{\epsilon^2}, \end{aligned}$$
because it is reasonable to think that the distribution of ν is almost symmetric around its mean, as we see in Fig. 3.4. We want to choose m = 20 + 𝜖 so that P(ν > m) ≤ 5%. This means that we should choose 𝜖 so that 8∕𝜖 2 = 5%. This gives 𝜖 = 13, so that m = 33. Thus, according to Chebyshev’s inequality, it is safe to assume that no more than 33 users are active and we can choose C so that C∕33 is a satisfactory rate for users.
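The arithmetic of this sizing rule is a one-liner (a sketch reproducing the Chebyshev calculation for N = 100, p = 0.2):

```python
import math

# Chebyshev sizing: choose eps with 8/eps^2 = 5%, then m = 20 + eps.
N, p = 100, 0.2
mean = N * p                        # E(nu) = 20
var = N * p * (1 - p)               # var(nu) = 16
eps = math.sqrt((var / 2) / 0.05)   # one-sided bound: (var/2)/eps^2 = 0.05
m = math.ceil(mean + eps)           # -> 33
```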
As a second approach, we use Chernoff’s inequality (15.5) which states that
$$\displaystyle \begin{aligned} P(\nu \geq m) \leq E(\exp\{\theta (\nu - m)\}), \mbox{ for all } \theta > 0. \end{aligned}$$
To calculate the right-hand side, we note that if Z = B(N, p), then we can write Z = X(1) + ⋯ + X(N), where the X(n) are i.i.d. random variables with P(X(n) = 1) = p and P(X(n) = 0) = 1 − p. Then,
$$\displaystyle \begin{aligned} E(\exp\{\theta Z\}) = E(\exp\{\theta X(1)\} \times \cdots \times \exp\{\theta X(N)\}). \end{aligned}$$
To continue the calculation, we note that, since the X(n) are independent, so are the random variables \(\exp \{\theta X(n)\}\).Footnote 5 Also, the expected value of a product of independent random variables is the product of their expected values (see Appendix A). Hence,
$$\displaystyle \begin{aligned} E(\exp\{\theta Z\}) = \left( E(\exp\{\theta X(1)\}) \right)^N = \exp\{N \Lambda(\theta)\}, \end{aligned}$$
where we define
$$\displaystyle \begin{aligned} \Lambda(\theta) := \log E(\exp\{\theta X(1)\}). \end{aligned}$$
Thus, Chernoff’s inequality says that
$$\displaystyle \begin{aligned} P(Z \geq Na) \leq \exp\{- \theta N a\} E(\exp\{\theta Z\}) = \exp\{- N (\theta a - \Lambda(\theta))\}. \end{aligned}$$
Since this inequality holds for every θ > 0, let us minimize the right-hand side with respect to θ. That is, let us define
$$\displaystyle \begin{aligned} \Lambda^*(a) := \max_{\theta > 0} \, [\theta a - \Lambda(\theta)]. \end{aligned}$$
Then, we see that
$$\displaystyle \begin{aligned} P(Z \geq Na) \leq \exp\{- N \Lambda^*(a)\}. \end{aligned}$$
Figure 15.15 shows this function when p = 0.2.
We now evaluate Λ(θ) and Λ ∗(a). We find
$$\displaystyle \begin{aligned} E(\exp\{\theta X(1)\}) = 1 - p + p e^\theta, \end{aligned}$$
so that
$$\displaystyle \begin{aligned} \Lambda(\theta) = \log(1 - p + p e^\theta) \end{aligned}$$
and
$$\displaystyle \begin{aligned} \Lambda^*(a) = \max_{\theta > 0} \, [\theta a - \log(1 - p + p e^\theta)]. \end{aligned}$$
Setting to zero the derivative with respect to θ of the term between brackets, we find
$$\displaystyle \begin{aligned} a = \frac{p e^\theta}{1 - p + p e^\theta}, \end{aligned}$$
which gives, for a > p,
$$\displaystyle \begin{aligned} e^\theta = \frac{a (1 - p)}{p (1 - a)}. \end{aligned}$$
Substituting back in Λ ∗(a), we get
$$\displaystyle \begin{aligned} \Lambda^*(a) = a \log \frac{a}{p} + (1 - a) \log \frac{1 - a}{1 - p}. \end{aligned} $$(15.7)
Going back to our example, we want to find m = Na so that
$$\displaystyle \begin{aligned} P(\nu \geq Na) \approx \exp\{- N \Lambda^*(a)\} = 5\%. \end{aligned}$$
Using (15.7), we need to find Na so that
$$\displaystyle \begin{aligned} \exp\{- 100 \, \Lambda^*(a)\} = 0.05, \end{aligned}$$
i.e.,
$$\displaystyle \begin{aligned} \Lambda^*(a) = \frac{- \log(0.05)}{100} \approx 0.03. \end{aligned}$$
Looking at Fig. 15.15, we find a = 0.30. This corresponds to m = 30. Thus, Chernoff’s estimate says that P(ν > 30) ≈ 5% and that we can size the network assuming that only 30 users are active at any one time.
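The value a ≈ 0.30 can also be found numerically, by solving Λ ∗(a) = −log(0.05)∕100 with a bisection (a minimal sketch; Λ ∗ is increasing on (p, 1), which justifies the bisection):

```python
import math

# Solve Lambda*(a) = -log(0.05)/N for a in (p, 1), where
# Lambda*(a) = a log(a/p) + (1 - a) log((1 - a)/(1 - p)).
N, p = 100, 0.2
target = -math.log(0.05) / N      # about 0.03

def rate(a: float) -> float:
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

lo, hi = p + 1e-9, 0.999          # rate() is increasing on (p, 1)
for _ in range(60):
    mid = (lo + hi) / 2
    if rate(mid) < target:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2                 # approximately 0.30
```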
By the way, the calculations we have performed above show that Chernoff’s bound for Z = B(N, p) can be written as
$$\displaystyle \begin{aligned} P(Z \geq Na) \leq \exp\{- N \Lambda^*(a)\}, \mbox{ for } a > p, \end{aligned}$$
with Λ ∗(a) given by (15.7).
15.9 Martingales
A martingale represents the sequence of fortunes of someone playing a fair game of chance. In such a game, the expected gain is always zero. A simple example is a random walk with zero-mean step size. Martingales are good models of noise and of processes discounted based on their expected value (e.g., the stock market). This theory is due to Doob (1953).
Martingales have an important property that generalizes the strong law of large numbers. It says that a martingale bounded in expectation converges almost surely. This result is used to show that fluctuations vanish and that a process converges to its mean value. The convergence of stochastic gradient algorithms and approximations of random processes by differential equations follow from that property.
15.9.1 Definitions
Let X n be the fortune at time n ≥ 0 when one plays a game of chance. The game is fair if
$$\displaystyle \begin{aligned} E[X_{n+1} \mid X^n] = X_n, n \geq 0. \end{aligned}$$
In this expression, X n := {X m, m ≤ n}. Thus, in a fair game, one cannot expect to improve one’s fortune. A sequence {X n, n ≥ 0} of random variables with that property is a martingale.
This basic definition generalizes to the case where one has access to additional information and is still unable to improve one’s fortune. For instance, say that the additional information is the value of other random variables Y n. One then has the following definitions.
Definition 15.3 (Martingale, Supermartingale, Submartingale)
The sequence of random variables {X n, n ≥ 0} is a martingale with respect to {X n, Y n, n ≥ 0} if
$$E[X_{n+1} \mid X^n, Y^n] = X_n, \quad \forall n \geq 0, \qquad (15.9)$$
with X^n = {X m, m ≤ n} and Y^n = {Y m, m ≤ n}.
If (15.9) holds with = replaced by ≤, then X n is a supermartingale; if it holds with ≥, then X n is a submartingale. ◇
In many cases, we do not specify the random variables Y n and we simply say that X n is a martingale, or a submartingale, or a supermartingale.
Note that if X n is a martingale, then
$$E(X_n) = E(X_0), \quad \forall n \geq 0.$$
Indeed, E(X n) = E(E[X n|X 0, Y 0]) = E(X 0) by the smoothing property of conditional expectation (see Theorem 9.5).
15.9.2 Examples
A few examples illustrate the definition.
Random Walk
Let {Z n, n ≥ 0} be independent and zero-mean random variables. Then X n := Z 0 + ⋯ + Z n for n ≥ 0 is a martingale. Indeed,
$$E[X_{n+1} \mid X^n] = E[X_n + Z_{n+1} \mid X^n] = X_n + E(Z_{n+1}) = X_n.$$
Note that if E(Z n) ≤ 0 for all n, then X n is a supermartingale; if E(Z n) ≥ 0 for all n, then X n is a submartingale.
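As a quick sanity check (an illustration, not from the text), the sketch below simulates the random walk with steps ±1 and verifies that the sample mean of X n stays near E(X 0) = 0:

```python
import random

random.seed(0)
n_paths, n_steps = 20000, 50

# X_n = Z_0 + ... + Z_n with zero-mean steps Z_m in {-1, +1}
final = []
for _ in range(n_paths):
    x = 0
    for _ in range(n_steps):
        x += random.choice((-1, 1))
    final.append(x)

mean = sum(final) / n_paths
print(mean)  # close to 0: E(X_n) = E(X_0) = 0 for a martingale
```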
Product
Let {Z n, n ≥ 0} be independent random variables with mean 1. Then X n := Z 0 ×⋯ × Z n for n ≥ 0 is a martingale. Indeed,
$$E[X_{n+1} \mid X^n] = E[X_n Z_{n+1} \mid X^n] = X_n E(Z_{n+1}) = X_n.$$
Note that if Z n ≥ 0 and E(Z n) ≤ 1 for all n, then X n is a supermartingale. Similarly, if Z n ≥ 0 and E(Z n) ≥ 1 for all n, then X n is a submartingale.
Branching Process
For m ≥ 1 and n ≥ 0, let \(X_m^n\) be i.i.d. random variables distributed like X that take values in \(\mathbb {Z}_+ := \{0, 1, 2, \ldots \}\) and have mean μ. The branching process is defined by Y 0 = 1 and
$$Y_{n+1} = \sum_{m=1}^{Y_n} X_m^n, \quad n \geq 0.$$
The interpretation is that there are Y n individuals in a population at the n-th generation. Individual m in that population has \(X_m^n\) children.
One can see that
$$M_n := \frac{Y_n}{\mu^n}, \quad n \geq 0,$$
is a martingale. Indeed,
$$E[Y_{n+1} \mid Y^n] = E\left[\sum_{m=1}^{Y_n} X_m^n \,\middle|\, Y^n\right] = \mu Y_n,$$
so that
$$E[M_{n+1} \mid M^n] = \frac{\mu Y_n}{\mu^{n+1}} = M_n.$$
Let f(s) = E(s^X) be the generating function of X and let q be the smallest nonnegative solution of q = f(q). One can then show that
$$Z_n := q^{Y_n}, \quad n \geq 0,$$
is a martingale.
Proof
Exercise. □
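As a concrete check, take Poisson offspring with mean λ = 1.5 (an illustrative choice, not from the text). The generating function is then f(s) = E(s^X) = e^{λ(s−1)}, and the smallest nonnegative fixed point q = f(q) is the extinction probability of the population, which a simulation can estimate directly:

```python
import math
import random

random.seed(1)
lam = 1.5  # assumed Poisson offspring mean (illustrative; supercritical since > 1)

def f(s):
    # generating function of a Poisson(lam) offspring count: E(s^X)
    return math.exp(lam * (s - 1))

# smallest nonnegative solution of q = f(q), by fixed-point iteration from 0
q = 0.0
for _ in range(200):
    q = f(q)

def poisson(mean):
    # Knuth's method; fine for small means
    limit, k, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

def extinct(max_gen=60, cap=500):
    # one branching-process run: does the population die out?
    y = 1
    for _ in range(max_gen):
        if y == 0:
            return True
        if y > cap:  # a population this large essentially never dies out
            return False
        y = sum(poisson(lam) for _ in range(y))
    return y == 0

runs = 2000
est = sum(extinct() for _ in range(runs)) / runs
print(round(q, 3), round(est, 3))  # extinction probability q is about 0.417
```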
Doob Martingale
Let {X n, n = 1, …, N} be random variables and Y = f(X 1, …, X N), where f is some bounded measurable real-valued function. Then
$$Z_n := E[Y \mid X_1, \ldots, X_n], \quad n = 1, \ldots, N,$$
is a martingale (by the smoothing property of conditional expectation, see Theorem 9.5) called a Doob martingale. Here are two examples.
1. Throw N balls into M bins, and let Y be some function of the throws: the number of empty bins, the maximum load, the load of the second-most-loaded bin, or some similar function. Let X n be the index of the bin into which ball n lands. Then Z n = E[Y | X 1, …, X n] is a martingale.
2. Suppose we have r red and b blue balls in a bin and we draw n ≤ r + b balls without replacement: how many red balls do we draw? Let X m be the indicator that the m-th ball drawn is red, and let Y = X 1 + ⋯ + X n be the number of red balls drawn. Then Z m = E[Y | X 1, …, X m] is a martingale.
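The balls-in-bins example can be made concrete. For M bins and N balls, given the first n throws, the conditional expectation of the number Y of empty bins has the closed form Z n = (number of bins still empty after n throws) × ((M − 1)/M)^{N−n}. The sketch below (illustrative sizes M = 10, N = 25) simulates paths and checks that the mean of Z N, the realized value of Y, matches Z 0 = E(Y):

```python
import random

random.seed(2)
M, N = 10, 25  # M bins, N balls (illustrative sizes)

def z_path():
    # Z_n = E[number of empty bins | first n throws]
    #     = (#bins empty after n throws) * ((M - 1) / M) ** (N - n)
    empty = set(range(M))
    zs = [M * ((M - 1) / M) ** N]
    for n in range(1, N + 1):
        empty.discard(random.randrange(M))
        zs.append(len(empty) * ((M - 1) / M) ** (N - n))
    return zs

z0 = M * ((M - 1) / M) ** N       # = E(Y), about 0.718
runs = 20000
avg_y = sum(z_path()[-1] for _ in range(runs)) / runs  # Z_N is the realized Y
print(round(z0, 3), round(avg_y, 3))
```

The martingale keeps its mean: E(Z N) = Z 0.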
You Cannot Beat the House
To study convergence, we start by explaining a key property of martingales that says there is no winning recipe to play a fair game of chance.
Theorem 15.9 (You Cannot Win)
Let X n be a martingale with respect to {X n, Z n, n ≥ 0} and V n some bounded function of (X^n, Z^n). Then
$$Y_n := \sum_{m=1}^{n} V_{m-1}(X_m - X_{m-1}), \quad n \geq 1, \qquad (15.10)$$
with Y 0 := 0 is a martingale. \({\blacksquare }\)
Proof
One has
$$E[Y_{n+1} - Y_n \mid X^n, Z^n] = V_n\, E[X_{n+1} - X_n \mid X^n, Z^n] = 0.$$
□
The meaning of Y n is the fortune that you would get by betting V m−1 at time m − 1 on the gain X m − X m−1 of the next round of the game. This bet must be based on the information (X m−1, Z m−1) that you have when placing the bet, not on the outcome of the next round, obviously. The theorem says that your fortune remains a martingale even after adjusting your bets in real time.
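The theorem can be illustrated numerically. The sketch below (an illustration, not from the text) plays the ±1 fair game with the classic “double the bet after every loss” rule, a non-anticipative choice of V m, and checks that the expected fortune remains 0:

```python
import random

random.seed(3)

def play(n_rounds):
    # fair game: each round the unit gain is +1 or -1 with probability 1/2;
    # the bet depends only on the past ("double after every loss",
    # an arbitrary non-anticipative strategy)
    fortune, bet = 0, 1
    for _ in range(n_rounds):
        gain = random.choice((-1, 1))
        fortune += bet * gain
        bet = 2 * bet if gain < 0 else 1
    return fortune

runs = 40000
avg = sum(play(10) for _ in range(runs)) / runs
print(avg)  # near 0: the betting scheme leaves the expected fortune unchanged
```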
Stopping Times
When playing a game of chance, one may decide to stop after observing a particular sequence of gains and losses. The decision to stop is non-anticipative. That is, one cannot say “never mind, I did not mean to play the last three rounds.” Thus, the random stopping time τ must have the property that the event {τ ≤ n} must be a function of the information available at time n, for all n ≥ 0. Such a random time is a stopping time.
Definition 15.4 (Stopping Time)
A random variable τ is a stopping time for the sequence {X n, Y n, n ≥ 0} if τ takes values in {0, 1, 2, …} and
$$1\{\tau \leq n\} = \phi_n(X^n, Y^n), \quad \forall n \geq 0,$$
for some functions ϕ n. ◇
For instance,
$$\tau = \min\{n \geq 0 \mid (X_n, Y_n) \in \mathcal{A}\},$$
where \(\mathcal {A}\) is a set in \(\Re ^2\), is a stopping time for the sequence {X n, Y n, n ≥ 0}. Thus, you may want to stop the first time that either you go broke or your fortune exceeds $1000.00.
One might hope that a smart choice of when to stop playing a fair game could improve one’s expected fortune. However, that is not the case, as the following fact shows.
Theorem 15.10 (Optional Stopping)
Let {X n, n ≥ 0} be a martingale and τ a stopping time with respect to {X n, Y n, n ≥ 0}. Then Footnote 6
$$E[X_{\tau \wedge n} \mid X_0] = X_0, \quad \forall n \geq 0. \qquad (15.11)$$
\({\blacksquare }\)
In the statement of the theorem, for a random time σ one defines X σ := X n when σ = n.
Proof
Note that X τ∧n is the fortune Y n that one accumulates by betting V m = 1{τ ∧ n > m} at time m in (15.10), i.e., by betting 1 until one stops at time τ ∧ n. Since 1{τ ∧ n > m} = 1 − 1{τ ∧ n ≤ m} is a function of (X^m, Y^m), the resulting fortune is a martingale. □
You will note that bounding the stopping time, i.e., using τ ∧ n in the theorem above, is essential. For instance, let X n correspond to the random walk described above with P(Z n = 1) = P(Z n = −1) = 0.5. If we define \(\tau = \min \{n \geq 0 \mid X_n = 10\}\), one knows that τ is finite. (See the comments below Theorem 15.1.) Hence, X τ = 10, so that
$$E[X_\tau \mid X_0] = 10.$$
However, if we bound the stopping time, the theorem says that
$$E[X_{\tau \wedge n} \mid X_0] = X_0, \quad \forall n \geq 0.$$
This result deserves some thought.
One might be tempted to take the limit of the left-hand side of (15.11) as n →∞ and note that
$$X_{\tau \wedge n} \to X_\tau = 10 \text{ as } n \to \infty, \text{ almost surely},$$
because τ is finite. One then might conclude that the left-hand side of (15.11) goes to 10, which would contradict (15.11). However, the limit and the expectation do not interchange here, because the random variables X τ∧n are not bounded. If they were, one would get E[X τ|X 0] = X 0 by the dominated convergence theorem. We record this observation as the next result.
Theorem 15.11 (Optional Stopping—2)
Let {X n, n ≥ 0} be a martingale and τ a stopping time with respect to {X n, Y n, n ≥ 0}. Assume that |X n|≤ V for all n, for some random variable V such that E(V ) < ∞. Then
$$E[X_\tau \mid X_0] = X_0.$$
\({\blacksquare }\)
L 1-Bounded Martingales
An L 1-bounded martingale cannot bounce up and down infinitely often across an interval [a, b]. For if it did, you could increase your fortune without bound by betting 1 on the way up across the interval and betting 0 on the way down, and we have just seen that you cannot beat the house. As a result, the martingale must converge. (Note that this is not true if the martingale is not L 1-bounded, as the random walk example shows.)
Theorem 15.12 (L 1-Bounded Martingales Convergence)
Let {X n, n ≥ 0} be a martingale such that E(|X n|) ≤ K for all n. Then X n converges almost surely to a finite random variable X ∞. \({\blacksquare }\)
Proof
Consider an interval [a, b]. We show that X n cannot up-cross this interval infinitely often. (See Fig. 15.16.) Let us bet 1 on the way up and 0 on the way down. That is, wait until X n gets first below a, then bet 1 at every step until X n > b, then stop betting until X n gets below a, and continue in this way.
If X m up-crossed the interval U n times by time n, your fortune Y n is at least (b − a)U n + (X n − a). Indeed, your gain was at least b − a for every upcrossing and, in the last steps of your playing, you lose at most a − X n if X n never crosses above b after you last resumed betting. But, since Y n is a martingale, we have
$$Y_0 = E(Y_n) \geq (b - a)E(U_n) + E(X_n - a) \geq (b - a)E(U_n) - K - a.$$
(We used the fact that X n ≥−|X n|, so that E(X n) ≥−E(|X n|) ≥ −K.) This shows that E(U n) ≤ B := (K + Y 0 + a)∕(b − a) < ∞. Letting n →∞, since U n ↑ U, where U is the total number of upcrossings of the interval [a, b], it follows by the monotone convergence theorem that E(U) ≤ B. Consequently, U is finite. Thus, X n cannot up-cross any given interval [a, b] infinitely often.
Consequently, the probability that it up-crosses infinitely often any interval with rational limits is zero (since there are countably many such intervals).
This implies that X n must converge, either to + ∞, −∞, or to a finite value. Since E(|X n|) ≤ K, the probability that X n converges to + ∞ or −∞ is zero. □
The following is a direct but useful consequence. We used this result in the proof of the convergence of the stochastic gradient projection algorithm (Theorem 12.2).
Theorem 15.13 (L 2-Bounded Martingales Convergence)
Let X n be an L 2 -bounded martingale, i.e., such that \(E(X_n^2) \leq K^2, \forall n \geq 0\). Then X n → X ∞ , almost surely, for some finite random variable X ∞. \({\blacksquare }\)
Proof
We have
$$E(|X_n|)^2 \leq E(X_n^2) \leq K^2$$
by Jensen’s inequality. Thus, E(|X n|) ≤ K for all n, so that the result of the previous theorem applies to this martingale. □
One can also show that E(|X n − X ∞|2) → 0.
15.9.3 Law of Large Numbers
The SLLN can be proved as an application of the convergence of martingales, as Doob (1953) showed.
Theorem 15.14 (SLLN)
Let {X n, n ≥ 1} be i.i.d. random variables with E(|X n|) = K < ∞ and E(X n) = μ. Then
$$\frac{X_1 + \cdots + X_n}{n} \to \mu \text{ as } n \to \infty, \text{ almost surely}.$$
\({\blacksquare }\)
Proof
Let
$$S_n := X_1 + \cdots + X_n \quad \text{and} \quad Y_{-n} := \frac{S_n}{n}, \quad n \geq 1.$$
Note that
$$E[X_m \mid S_n, X_{n+1}, X_{n+2}, \ldots] = \frac{S_n}{n}, \quad m = 1, \ldots, n,$$
by symmetry. Thus,
$$E[Y_{-n} \mid S_{n+1}, X_{n+2}, X_{n+3}, \ldots] = \frac{1}{n} E[S_n \mid S_{n+1}] = \frac{S_{n+1}}{n+1} = Y_{-n-1}.$$
Thus, {…, Y −n−2, Y −n−1, Y −n, …} is a martingale. (It is a Doob martingale.) This implies as before that the number U n of upcrossings of an interval [a, b] is such that E(U n) ≤ B < ∞. As before, we conclude that \(U := \lim U_n < \infty \), almost surely. Hence, Y −n converges almost surely to a random variable Y −∞.
Now, since
$$Y_{-\infty} = \lim_{m \to \infty} \frac{S_m}{m} = \lim_{m \to \infty} \frac{X_{n+1} + \cdots + X_m}{m},$$
we see that Y −∞ is independent of (X 1, …, X n) for any finite n. Indeed, the limit does not depend on the values of the first n random variables. However, since Y −∞ is a function of {X n, n ≥ 1}, it must be independent of itself, i.e., be a constant.
Since E(Y −∞) = E(Y −1) = E(X 1) = μ, we see that Y −∞ = μ. □
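As a numerical illustration of the theorem (with Uniform(0, 1) samples, an arbitrary choice), the sketch below watches the sample mean S n∕n approach μ = 0.5:

```python
import random

random.seed(5)
mu = 0.5        # Uniform(0, 1) samples have mean 1/2 (illustrative choice)
n = 100000

s, averages = 0.0, []
for i in range(1, n + 1):
    s += random.random()
    if i in (100, 10000, n):
        averages.append(s / i)
print(averages)  # successive sample means approach mu = 0.5
```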
15.9.4 Wald’s Equality
A useful application of martingales is the following. Let {X n, n ≥ 1} be i.i.d. random variables. Let τ be a random variable independent of the X n’s that takes values in {1, 2, …} with E(τ) < ∞. Then
$$E(X_1 + \cdots + X_\tau) = E(\tau)E(X_1). \qquad (15.13)$$
This expression is known as Wald’s Equality.
To see this, note that Y n = X 1 + ⋯ + X n − nE(X 1) is a martingale. Also, τ is a stopping time. Thus,
$$E(Y_{\tau \wedge n}) = E(Y_1) = 0,$$
which gives the identity with τ replaced by τ ∧ n. Since E(τ) < ∞, one can let n go to infinity and get the result. (For instance, replace X i by \(X_i^+\) and use the MCT; do the same for \(X_i^-\), then subtract.)
15.10 Summary
- General inference problems: guessing X given Y , Bayesian or not;
- Sufficient statistic: h(Y ) is sufficient for X;
- Infinite Markov Chains: PR, NR, T;
- Lyapunov–Foster Criterion;
- Poisson Process: independent stationary increments;
- Continuous-Time Markov Chain: rate matrix;
- Shannon Capacity of BSC: typical sequences and random codes;
- Bounds: Chernoff and Jensen;
- Martingales and Convergence;
- Strong Law of Large Numbers.
15.10.1 Key Equations and Formulas
Inference Problem | Guess X given Y : MAP, MLE, HT | S.15.1 |
---|---|---
Sufficient Statistic | f Y |X[y|x] = f(h(y), x)g(y) | D.15.1 |
Infinite MC | Irreducible ⇒ T, NR or PR | T.15.1 |
Poisson Process | Jumps w.p. λ𝜖 in next 𝜖 seconds | D.15.2 |
Continuous-Time MC | Jumps from i to j w. rate Q(i, j) | D.6.1 |
Shannon Capacity C | Can transmit reliably at any rate R < C | S.15.7 |
“ ” of BSC(p) | C = 1 + plog2(p) + (1 − p)log2(1 − p) | (15.2) |
Chernoff | \(P(X > a) \leq E(\exp \{ \theta (X - a)\}), \forall \theta \geq 0\) | (15.5) |
Jensen | h convex ⇒ E(h(X)) ≥ h(E(X)) | (15.6) |
Martingales | zero expected increase | D.15.3 |
MG Convergence | A.s. to finite RV if L 1 or L 2 bounded | T.15.12 |
Wald | E(X 1 + ⋯ + X τ) = E(τ)E(X 1) | (15.13) |
15.11 References
For the theory of Markov chains, see Chung (1967). The text Harchol-Balter (2013) explains basic queueing theory and many applications to computer systems and operations research.
The book Bremaud (1998) is also highly recommended for its clarity and the breadth of applications. Information Theory is explained in the textbook Cover and Thomas (1991). I learned the theory of martingales mostly from Neveu (1975). The theory of multi-armed bandits is explained in Cesa-Bianchi and Lugosi (2006). The text Hastie et al. (2009) is an introduction to applications of statistics in data science (Fig. 15.17).
15.12 Problems
Problem 15.1
Suppose that y 1, …, y n are i.i.d. samples of N(μ, σ 2). What is a sufficient statistic for estimating μ given σ = 1? What is a sufficient statistic for estimating σ given μ = 1?
Problem 15.2
Customers arrive to a store according to a Poisson process with rate 4 (per hour).
(a) What is the probability that exactly 3 customers arrive during 1 h?
(b) What is the probability that more than 40 min is required before the first customer arrives?
Problem 15.3
Consider two independent Poisson processes with rates λ 1 and λ 2. Those processes measure the number of customers arriving in stores 1 and 2.
(a) What is the probability that a customer arrives in store 1 before any arrives in store 2?
(b) What is the probability that in the first hour exactly 6 customers arrive at the two stores? (The total for both is 6.)
(c) Given that exactly 6 have arrived at the two stores, what is the probability that all 6 went to store 1?
Problem 15.4
Consider the continuous-time Markov chain in Fig. 15.17.
(a) Find the invariant distribution.
(b) Simulate the MC and see that the fraction of time spent in state 1 converges to π(1).
Problem 15.5
Consider a first-come-first-served discrete-time queueing system with a single server. The arrivals are Bernoulli with rate λ. The service times are i.i.d. and independent of the arrival times. Each service time Z takes values in {1, 2, …, K} and is such that E(Z) = 1∕μ, with λ < μ.
(a) Construct the Markov chain that models the queue. What are the states and transition probabilities? [Hint: Suppose the head-of-line task of the queue still requires z units of service. Include z in the state description of the MC.]
(b) Use a Lyapunov–Foster argument to show that the queue is stable or, equivalently, that the MC is positive recurrent.
Problem 15.6
Suppose that the random variable X takes values in the set {1, 2, …, K} with \(\Pr (X=k) = p_k > 0\) and \(\sum _{k=1}^K p_k =1\). Suppose X 1, X 2, …, X n is a sequence of n i.i.d. samples of X.
(a) How many possible sequences exist?
(b) How many typical sequences exist when n is large?
(c) Find a condition under which the answers to parts (a) and (b) are the same.
Problem 15.7
Let {N t, t ≥ 0} be a Poisson process with rate λ. Let S n denote the time of the n-th event. Find
(a) the pdf of S n;
(b) E[S 5];
(c) E[S 4|N(1) = 2];
(d) E[N(4) − N(2)|N(1) = 3].
Problem 15.8
A queue has Poisson arrivals with rate λ. It has two servers that work in parallel. When there are at least two customers in the queue, two are being served. When there is only one customer, only one server is active. The service times are i.i.d. Exp(μ).
(a) Argue that the queue length is a Markov chain.
(b) Draw the state transition diagram.
(c) Find the minimum value of μ so that the queue is positive recurrent, and solve the balance equations.
Problem 15.9
Let {X t, t ≥ 0} be a continuous-time Markov chain with rate matrix Q = {q(i, j)}. Define q(i) =∑j ≠ i q(i, j). Let also \(T_i = \inf \{t > 0 | X_t = i\}\) and \(S_i = \inf \{t > 0 | X_t \neq i\}\). Then (select the correct answers)
- E[S i|X 0 = i] = q(i);
- P[T i < T j|X 0 = k] = q(k, i)∕(q(k, i) + q(k, j)) for i, j, k distinct;
- If α(k) = P[T i < T j|X 0 = k], then \(\alpha (k) = \sum _s \frac {q(k, s)}{q(k)} \alpha (s)\) for k∉{i, j}.
Problem 15.10
A continuous-time queue has Poisson arrivals with rate λ, and it is equipped with infinitely many servers. The servers can work in parallel on multiple customers, but they are non-cooperative in the sense that a single customer can only be served by one server. Thus, when there are k customers in the queue, k servers are active. Suppose that the service time of each customer is exponentially distributed with rate μ and they are i.i.d.
(a) Argue that the queue length is a Markov chain. Draw the transition diagram of the Markov chain.
(b) Prove that for all finite values of λ and μ the Markov chain is positive recurrent, and find the invariant distribution.
Problem 15.11
Consider a Poisson process {N t, t ≥ 0} with rate λ = 1. Let random variable S i denote the time of the i-th arrival. [Hint: You recall that \(f_{S_i}(x) = \frac {x^{i-1}e^{-x}}{(i-1)!}1\{x \geq 0\}\).]
(a) Given S 3 = s, find the joint distribution of S 1 and S 2. Show your work.
(b) Find E[S 2|S 3 = s].
(c) Find E[S 3|N 1 = 2].
Problem 15.12
Let \(S = \sum _{i=1}^N X_i\) denote the total amount of money withdrawn from an ATM in 8 h, where:
(a) X i are i.i.d. random variables denoting the amount withdrawn by each customer, with E[X i] = 30 and Var[X i] = 400.
(b) N is a Poisson random variable denoting the total number of customers, with E[N] = 80.
Find E[S] and V ar[S].
Problem 15.13
One is given two independent Poisson processes M t and N t with respective rates λ and μ, where λ > μ. Find E(τ), where
(Note that this is a max, not a min.)
Problem 15.14
Consider a queue with Poisson arrivals with rate λ. The service times are all equal to one unit of time. Let X t be the queue length at time t (t ≥ 0).
(a) Is X t a Markov chain? Prove or disprove.
(b) Let Y n be the queue length just after the n-th departure from the queue (n ≥ 1). Prove that Y n is a Markov chain. Draw a state diagram.
(c) Prove that Y n is positive recurrent when λ < 1.
Problem 15.15
Consider a queue with Poisson arrivals with rate λ. The queue can hold N customers. The service times are i.i.d. Exp(μ). When a customer arrives, you can choose to pay him c so that he does not join the queue. You also pay c when a customer arrives at a full queue. You want to decide when to accept customers to minimize the cost of rejecting them, plus the cost of the average waiting time they spend in the queue.
(a) Formulate the problem as a Markov decision problem. For simplicity, consider a total discounted cost. That is, if x t customers are in the system at time t, then the waiting cost during [t, t + 𝜖] is e −βt x t 𝜖. Similarly, if you reject a customer at time t, then the cost is ce −βt.
(b) Write the dynamic programming equations.
(c) Use Python to solve the equations.
Problem 15.16
The counting process N := {N t, 0 ≤ t ≤ T} is defined as follows:
Given τ, {N t, 0 ≤ t ≤ τ} and {N t − N τ, τ ≤ t ≤ T} are independent Poisson processes with respective rates λ 0 and λ 1.
Here, λ 0 and λ 1 are known and such that 0 < λ 0 < λ 1. Also, τ is exponentially distributed with known rate μ > 0.
1. Find the MLE of τ given N.
2. Find the MAP of τ given N.
Problem 15.17
Figure 15.18 shows a system where a source alternates between the ON and OFF states according to a continuous-time Markov chain with the transition rates indicated. When the source is ON, it sends a fluid with rate 2 into the queue. When the source is OFF, it does not send any fluid. The queue is drained at constant rate 1 whenever it contains some fluid. Let X t be the amount of fluid in the queue at time t ≥ 0.
(a) Plot a typical trajectory of the random process {X t, t ≥ 0}.
(b) Intuitively, what are conditions on λ and μ that should guarantee the “stability” of the queue?
(c) Is the process {X t, t ≥ 0} Markov?
Problem 15.18
Let {N t, t ≥ 0} be a Poisson process whose rate λ is exponentially distributed with rate μ > 0.
(a) Find MLE[λ|N s, 0 ≤ s ≤ t];
(b) Find MAP[λ|N s, 0 ≤ s ≤ t];
(c) What is a sufficient statistic for λ given {N s, 0 ≤ s ≤ t}?
(d) Instead of λ being exponentially distributed, assume that λ is known to take values in [5, 10]. Give an estimate of the time t required to estimate λ within 5% with probability 95%.
Problem 15.19
Consider two queues in parallel in discrete time with Bernoulli arrival processes of rates λ 1 and λ 2, and geometric service rates μ 1 and μ 2, respectively. There is only one server, which can serve either queue 1 or queue 2 at each time. Consider the scheduling policy that serves queue 1 at time n if μ 1 Q 1(n) > μ 2 Q 2(n), and serves queue 2 otherwise, where Q 1(n) and Q 2(n) are the queue lengths at time n. Use the Lyapunov function \(V(Q_1(n),Q_2(n)) = Q^2_1(n) + Q^2_2(n)\) to show that the queues are stable if λ 1∕μ 1 + λ 2∕μ 2 < 1. This scheduling policy is known as the Max-Weight or Back-Pressure policy.
Notes
1. Markov’s inequality is due to Chebyshev, who was Markov’s teacher.
2. Chernoff’s inequality is due to Herman Rubin (see Chernoff (2004)).
3. Jensen’s inequality seems to be due to Jensen.
4. See Appendix A.
5. Indeed, functions of independent random variables are independent. See Appendix A.
6. \(\tau \wedge n := \min \{\tau , n\}\)
References
S. Agrawal, N. Goyal, Analysis of Thompson sampling for the multi-armed bandit problem, in Proceedings of the 21st Annual Conference on Learning Theory (COLT), PMLR, vol. 23 (2012), pp. 39.1–39.26
P. Bremaud, An Introduction to Probabilistic Modeling (Springer, Berlin, 1998)
N. Cesa-Bianchi, G. Lugosi, Prediction, Learning, and Games (Cambridge University Press, Cambridge, 2006)
H. Chernoff, Some reminiscences of my friendship with Herman Rubin. Institute of Mathematical Statistics Lecture Notes – Monograph Series, vol. 4 (2004), pp. 1–4
K.L. Chung, Markov Chains with Stationary Transition Probabilities (Springer, Berlin, 1967)
T.M. Cover, J.A. Thomas, Elements of Information Theory (Wiley-Interscience, Hoboken, 1991)
J.L. Doob, Stochastic Processes (Wiley, Hoboken, 1953)
M. Harchol-Balter, Performance Modeling and Design of Computer Systems: Queueing Theory in Action. (Cambridge University Press, Cambridge, 2013)
T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. (Springer, Berlin, 2009)
J. Neveu, Discrete Parameter Martingales (American Elsevier, North-Holland, 1975)
D. Russo, B. Van Roy, A. Kazerouni, I. Osband, Z. Wen, A tutorial on Thompson sampling. Found. Trends Mach. Learn. 11(1), 1–96 (2018)
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2021 The Author(s)
About this chapter
Walrand, J. (2021). Perspective and Complements. In: Probability in Electrical Engineering and Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-030-49995-2_15
Print ISBN: 978-3-030-49994-5
Online ISBN: 978-3-030-49995-2