A proof of Sanov's Theorem via discretizations

We present an alternative proof of Sanov's theorem for Polish spaces in the weak topology, obtained via discretization arguments. We combine the simpler version of Sanov's Theorem for finite discrete spaces with well-chosen finite discretizations of the Polish space. The main tool in our proof is an explicit control on the rate of convergence for the approximated measures.


Introduction
Sanov's Theorem is a well-known result in the theory of large deviations principles. It provides the large deviations profile of the empirical measure of a sequence of i.i.d. random variables and characterizes its rate function as the relative entropy. This short note provides an alternative proof of this fact, by combining the metric structure of the weak topology with the variational formulation of the relative entropy.
Formally, let (M, d) be a Polish space and let (X_n)_{n∈N} be a sequence of independent M-valued random elements identically distributed according to µ ∈ P(M), where P(M) is the set of Borel probability measures on M. We denote by δ_x the probability measure degenerate at x ∈ M, and define the empirical measure of X_1, . . . , X_n by

L_n := (1/n) ∑_{i=1}^{n} δ_{X_i}.    (1.1)

Also, given υ, µ ∈ P(M), the relative entropy between υ and µ is defined as

H(υ|µ) := sup { ∫ f dυ − log ∫ e^f dµ : f is measurable and bounded }.    (1.2)

Sanov's Theorem is given by the following statement.

Theorem 1.1 (Sanov's Theorem). The sequence of empirical measures (L_n)_{n∈N} satisfies a large deviations principle in P(M), endowed with the weak topology, with rate function H( · |µ).

When the space M is finite, the theorem above is proved in an elementary and elegant way (see den Hollander [3, Theorem II.2], and Dembo and Zeitouni [2, Theorem 2.1.10]). In this work, we prove the theorem for general Polish metric spaces by extending this elementary proof via sequences of discretizations of the space. We split the set M into a finite number of subsets, each belonging to one of two distinct categories: the well-behaved sets are the ones with small diameter, while the badly behaved sets have small µ-measure. We remark that when the space M is compact, no badly behaved sets are necessary. These partitions define natural projections on the space and allow us to approximate the sequence (X_n)_{n∈N} by variables in the discretized spaces, and consequently provide approximations for its empirical measures. The main technical observations are that the discretized relative entropy converges to the relative entropy (1.2) as we take finer partitions (see Lemma 4.1) and that the relative entropy is well approximated in balls (see Lemma 4.3).
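The finite-space mechanism underlying the proof can be seen numerically. For M = {0, 1} and µ = Bernoulli(1/2) (a purely illustrative choice, not taken from the text), the probability that the empirical frequency of 1's reaches p decays at the exponential rate H(Ber(p)|Ber(1/2)); a minimal Python sketch:

```python
import math

# Finite-space illustration: M = {0, 1}, mu = Bernoulli(1/2) (hypothetical
# choice).  P(empirical frequency of 1's >= p) decays like exp(-n H).
def log_binom_tail(n, k):
    """log P(Bin(n, 1/2) >= k), computed stably via log-sum-exp."""
    logs = [math.lgamma(n + 1) - math.lgamma(j + 1) - math.lgamma(n - j + 1)
            - n * math.log(2) for j in range(k, n + 1)]
    mx = max(logs)
    return mx + math.log(sum(math.exp(v - mx) for v in logs))

p = 0.8
# Relative entropy H(Ber(p) | Ber(1/2)) = p log 2p + (1 - p) log 2(1 - p).
H = p * math.log(2 * p) + (1 - p) * math.log(2 * (1 - p))

for n in (100, 1000, 10_000):
    rate = -log_binom_tail(n, round(p * n)) / n
    print(n, round(rate, 4))
```

The printed rates decrease toward H ≈ 0.193 as n grows, in line with the large deviations statement.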
Some ideas used to prove Lemma 4.3 are roughly inspired by the proof of the upper bound in Csiszár [1]. His work presents a proof of Sanov's Theorem for the τ-topology, a stronger topology than that of weak convergence, with an approach that differs greatly from more classical ones that can be found, for example, in [2, Theorem 6.2.10].
There are two proofs of Sanov's Theorem in [2], one by means of Cramér's Theorem for Polish spaces and the other following a projective limit approach. Although we strongly use the metric structure of the space, our proof does not require profound knowledge of large deviations theory or general topology.
Organization of the paper. In the next section we collect some preliminary notation and results that are used during the text. Section 3 introduces the discretization considered here. Section 4 contains the statement of the main lemmas used in the proof. We also show how Sanov's Theorem is proved in Section 4. Sections 5 and 6 contain the proofs of Lemmas 4.1 and 4.3, respectively.
Data Availability Statement. Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Preliminaries
In this section we review some basic concepts. We provide the definitions of a large deviations principle and of the weak topology, and collect some properties of the relative entropy. For the next lemma, let x ∧ y denote the minimum between x and y.
Lemma 2.2. Let (X, Y) be a coupling of two distributions µ and υ. Then

d_BL(µ, υ) ≤ E[2 ∧ d(X, Y)].

Proof. Let x, y ∈ M and notice that, for each f ∈ BL(M),

|f(x) − f(y)| ≤ 2 ∧ d(x, y),

since f is 1-Lipschitz and bounded by one. The proof is now complete by noting that

|∫ f dµ − ∫ f dυ| = |E[f(X) − f(Y)]| ≤ E[2 ∧ d(X, Y)], for every f ∈ BL(M).

Equation (1.2) is called the variational formulation of the entropy, and it readily implies the so-called entropy inequality

∫ f dυ ≤ H(υ|µ) + log ∫ e^f dµ,    (2.8)

for any measurable bounded function f. We will also make use of the integral formulation of the relative entropy, provided in the next lemma. This formulation will be the key result used to relate relative entropies in the discrete case to those in the general case.
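On a finite space the supremum in (1.2) is attained at f = log(dυ/dµ), where H(υ|µ) reduces to ∑_i υ_i log(υ_i/µ_i), and any other bounded f obeys the entropy inequality (2.8). A quick numerical check (the two probability vectors are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical probability vectors on a three-point space.
mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.3, 0.5])

# Closed form: H(nu|mu) = sum_i nu_i log(nu_i / mu_i).
H = float(np.sum(nu * np.log(nu / mu)))

# The supremum in (1.2) is attained at f = log(dnu/dmu).
f_opt = np.log(nu / mu)
val_opt = float(np.dot(f_opt, nu) - np.log(np.dot(np.exp(f_opt), mu)))
assert abs(val_opt - H) < 1e-12

# Any other bounded f satisfies the entropy inequality (2.8):
#   int f dnu <= H(nu|mu) + log int e^f dmu.
for _ in range(100):
    f = rng.normal(size=3)
    assert np.dot(f, nu) <= H + np.log(np.dot(np.exp(f), mu)) + 1e-12
```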
We refrain from presenting the proof of the lemma above, and refer the reader to [4, Theorem 5.2.1].

Discretization
In this section we present the discretization procedure used for the space M and related constructions for measures and random variables.
We start by discretizing the space. Let µ ∈ P(M) and recall that, since M is a Polish space, the measure µ is tight: for each m ∈ N, there exists a compact set K_m satisfying (3.1). In particular, the support of the measure µ is contained in the closure of the union of the compact sets K_m. Notice that the collection of probability measures supported on the closure of ∪_{m=1}^{∞} K_m forms a closed subset of P(M), and thus it is enough to prove a large deviations principle for this subspace (see [2, Lemma 4.1.5]). We assume from now on that M coincides with this closure. Given a sequence of partitions (A_m)_{m∈N}, let F_m and F_∞ denote the σ-algebras generated by A_m and by the union ∪_{m=1}^{∞} A_m, respectively. We write B(M) for the Borel σ-algebra on M.
Proof. Notice that if we can construct partitions A_m for each m that satisfy the first three requirements of the lemma without requiring them to be nested, then it is possible to take refinements in order to obtain a nested sequence.
Recall the definition of the compact set K_m in (3.1). By compactness, it is possible to partition K_m into subsets {C_{m,1}, . . . , C_{m,l_m}} of diameter at most 1/m, so that C_m := {K_m^∁, C_{m,1}, . . . , C_{m,l_m}} defines a partition of M. Consider an enumeration (B_i)_{i∈N} of balls of rational radius centered in a countable dense subset of M. We now define the partition A_m = {A_{m,0}, . . . , A_{m,ℓ_m}} via the intersections of sets in C_m with B_m and its complement, distinguishing in the notation the sets contained in K_m^∁. Notice that the first two statements about the partition A_m are immediately verified. To check the last claim, notice that B_i ∈ F_i, and thus B_i ∈ F_∞, for all i ∈ N, which implies F_∞ = B(M) and concludes the proof.
We select a subset M m := {a m,1 , . . . , a m,ℓm } ⊂ M such that a m,i ∈ A m,i for i = 1, . . . , ℓ m and turn (A m , M m ) into a tagged partition. We will furthermore assume that M m ⊂ M m+1 .
For each m ∈ N, the tagged partition (A_m, M_m) defines a natural projection π_m : M → M_m, which maps each point of A_{m,i} to the tag a_{m,i}. This allows us to define, for any measure υ ∈ P(M), its discretized version υ_m ∈ P(M) as the probability measure supported on M_m given by the pushforward of υ via the map π_m, i.e., υ_m := υ ∘ π_m^{-1}.
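As a concrete illustration (not from the text), take M = [0, 1) with the 1/m-grid cells playing the role of (A_m, M_m); the projection π_m and the pushforward υ_m = υ ∘ π_m^{-1} can then be sketched as:

```python
import numpy as np

# Hypothetical discretization of M = [0, 1): cell A_{m,i} = [i/m, (i+1)/m)
# with tag a_{m,i} = i/m.
def pi_m(x, m):
    """Projection pi_m: send each point to the tag of its cell."""
    return np.floor(np.asarray(x) * m) / m

def pushforward(cdf, m):
    """Discretized measure nu_m = nu o pi_m^{-1}: the mass of the tag i/m
    is nu([i/m, (i+1)/m)), computed from the cdf of nu."""
    edges = np.arange(m + 1) / m
    return np.diff(cdf(edges))

# Example: nu with density 2x on [0, 1], so cdf(t) = t^2.
weights = pushforward(lambda t: t ** 2, 4)
assert abs(weights.sum() - 1.0) < 1e-12            # nu_m is a probability measure
assert np.all(pi_m([0.1, 0.9], 4) == [0.0, 0.75])  # points map to their tags
```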
Random elements are also discretized with the aid of the projection maps π_m. If (X_i)_{i∈N} is an i.i.d. sequence of random elements with distribution µ ∈ P(M), then the sequence (X_i^m)_{i∈N}, with X_i^m := π_m(X_i), is i.i.d. with distribution µ_m. The empirical measure of the discretized elements is given by

L_n^m := (1/n) ∑_{i=1}^{n} δ_{X_i^m}.

Since, for each m ∈ N, the elements X_i^m take values in the finite space M_m, we know that the sequence of empirical measures (L_n^m)_{n∈N} satisfies a large deviations principle on the space P(M_m) with rate function H( · |µ_m). Via [2, Lemma 4.1.5], we can extend these large deviations principles to the whole space P(M) with rate function also given by H( · |µ_m) (note that H(υ|µ_m) is infinite if υ ∉ P(M_m)).


Proof of Sanov's Theorem

In this section we present our approach to the proof of Sanov's Theorem. Our goal is to deduce that the empirical measures L_n given by (1.1) satisfy a large deviations principle from the information that the sequences (L_n^m)_{n∈N} satisfy large deviations principles, for all m ∈ N. Since the rate function given by Sanov's Theorem (Theorem 1.1) is the relative entropy, the following two lemmas, which relate the entropies in discrete and Polish spaces, are the central pieces of our proof.

Proof. Observe that if X_i ∈ A_{m,j} for some j = 1, . . . , l_m, then d(X_i, X_i^m) ≤ 1/m. In particular, In order to bound the last probability, we use a union bound and independence to obtain concluding the proof.
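The observation that d(X_i, X_i^m) ≤ 1/m on the well-behaved sets forces L_n and L_n^m to be close: for any f that is 1-Lipschitz and bounded by one, the integrals against the two empirical measures differ by at most 1/m. A Monte Carlo sketch on an illustrative grid discretization of [0, 1) (all choices hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# i.i.d. sample on [0, 1) and its projection onto the 1/m-grid, an
# illustrative stand-in for the discretization X_i^m = pi_m(X_i).
n, m = 5_000, 50
x = rng.uniform(size=n)
x_m = np.floor(x * m) / m

# Every point moves by less than 1/m.
assert np.max(np.abs(x - x_m)) < 1.0 / m

# Hence, for any f that is 1-Lipschitz and bounded by one (tanh here),
# |int f dL_n - int f dL_n^m| <= (1/n) sum |f(X_i) - f(X_i^m)| <= 1/m.
gap = float(abs(np.tanh(x).mean() - np.tanh(x_m).mean()))
assert gap <= 1.0 / m
```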
We are now ready to work on the proof of Theorem 1.1. It is proved in [2, Lemma 6.2.6] that the sequence (L_n)_{n∈N} is exponentially tight. In particular, there exists a subsequence (L_{n_k})_{k∈N} that satisfies a large deviations principle with some rate function I. From now on, we drop the subscript k in n_k. Notice that

−I(υ) = lim_{ε→0} lim_{n→∞} (1/n) log P(L_n ∈ B_ε(υ)).    (4.8)
Even though the rate function I might depend on the subsequence, our goal is to prove that this is not the case. In fact, we prove that I( · ) = H( · |µ). In Proposition 4.5, we prove that H( · |µ) ≥ I( · ), while the opposite inequality is established in Proposition 4.6. This concludes the proof of Theorem 1.1, since any subsequence (L_{n_k})_{k∈N} that satisfies a large deviations principle does so with the same rate function H( · |µ), which implies that the whole sequence also satisfies a large deviations principle.

Proof. Fix υ ∈ P(M) and notice that we can assume that H(υ|µ) is finite, since the statement is trivially verified otherwise.
Due to Lemma 4.1, we have for some constant c > 0. In particular, this implies which yields

Proof. Fix υ ∈ P(M) and observe once again that Taking n → ∞ and ε → 0, we obtain, with the aid of Lemma 4.4, for every m ≥ m_0. Taking the supremum in m concludes the proof.

Proof of Lemma 4.1
In this section we prove Lemma 4.1. We start with the following preliminary lemma, which in particular implies the second part of Lemma 4.1. We prove the first part afterwards.
Lemma 5.1. If σ ∈ P(M) is such that H(σ|µ) ≤ α, then, for any θ > 0, we have (5.1). In particular,

Proof. Consider X ∼ σ and notice that X^m := π_m(X) has distribution σ_m. Therefore, Splitting according to whether X ∈ K_m or not, we obtain We now combine the entropy inequality with the bound log(1 + x) ≤ x to obtain, for θ > 0, by the choice of K_m in (3.1). Combining the equation above with (5.4) concludes the proof of (5.1).
Choose now θ = m in (5.1) to obtain concluding the proof.
Next, we introduce a martingale that will be useful during the proof.
is a uniformly integrable martingale on the probability space (M, B(M), µ) with respect to the filtration (F_m)_{m∈N}.
Proof. Assume first that H(υ|µ) < ∞. In this case, the Radon–Nikodym derivative dυ/dµ exists, and Ŝ_m := E[dυ/dµ | F_m] defines a uniformly integrable martingale. It follows directly from the definitions of conditional expectation and of the Radon–Nikodym derivative that Ŝ_m = S_m almost surely for every m ∈ N, concluding the proof in the first case.
Assume now that sup m H(υ m |µ m ) < ∞ and observe that this implies that S m is well defined for all m ∈ N, has expectation one, and is non-negative. We first have to verify that E[S m+1 |F m ] = S m . Take an element A m,k ∈ F m , with 0 ≤ k ≤ ℓ m , so that Therefore, S m is a uniformly-integrable martingale, concluding the proof of the lemma.
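The computation above can be checked numerically on a toy nested filtration: with S_m equal to υ(A)/µ(A) on each level-m block A, averaging S_{m+1} against µ within a level-m block returns S_m. A small sketch with 8 atoms and dyadic blocks (all data hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Base space: 8 atoms carrying hypothetical strictly positive measures
# mu and nu; nested partitions F_1, F_2, F_3 with blocks of size 4, 2, 1.
mu = rng.dirichlet(np.ones(8))
nu = rng.dirichlet(np.ones(8))

def S(m):
    """S_m(x) = nu(A)/mu(A) on the level-m block A containing x."""
    block = 8 // 2 ** m
    out = np.empty(8)
    for i in range(0, 8, block):
        out[i:i + block] = nu[i:i + block].sum() / mu[i:i + block].sum()
    return out

# Martingale property E_mu[S_{m+1} | F_m] = S_m: averaging S_{m+1} against mu
# inside each level-m block reproduces the constant value of S_m there.
for m in (1, 2):
    s, s_next = S(m), S(m + 1)
    block = 8 // 2 ** m
    for i in range(0, 8, block):
        cond = np.dot(mu[i:i + block], s_next[i:i + block]) / mu[i:i + block].sum()
        assert abs(cond - s[i]) < 1e-9
```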
We are now in a position to prove the first part of Lemma 4.1. We now work on the proof of the reverse inequality. The strategy of the proof is as follows. If at least one of the two quantities of interest is finite, we have access to the uniformly integrable martingale S_m given by the Radon–Nikodym derivative of υ_m with respect to µ_m. As we will see, this martingale converges in L^1 and almost surely to dυ/dµ, which will yield the result when combined with Fatou's Lemma. Assume that either H(υ|µ) < ∞ or sup_m H(υ_m|µ_m) < ∞. The martingale S_m introduced in (5.7) is uniformly integrable and thus converges almost surely and in L^1 to a random variable X.
In the case H(υ|µ) < ∞, we have

Proof of Lemma 4.3

In this section we prove Lemma 4.3. We fix m_0 ∈ N and consider the functional I_0. Our goal is to show that I_0(υ) = H(υ|µ). We prove this in two steps, by checking that I_0(υ) ≤ H(υ|µ) and I_0(υ) ≥ H(υ|µ). The first inequality is verified in the next paragraph. The reverse inequality is more delicate, and we dedicate the rest of the section to it.

Let us check that I_0(υ) ≤ H(υ|µ). Indeed, the inequality is trivial if H(υ|µ) = ∞. If, on the other hand, this entropy is finite, we have, in view of Lemma 4.1, for m large enough, from which our claim follows by noting that υ_m ∈ B_{1/√m}(υ) and applying Lemma 4.1.