1 Introduction

The Legendre–Fenchel transforms of cumulant generating functions of given random variables are at the core of large deviations theory (see e.g. [3, 4]). The Cramér function gives the rate of the exponential decay of the tails of the distributions of the empirical means of sequences of i.i.d. random variables. It provides a nice connection between convex analysis and statistics.

Donsker and Varadhan in [5] proved that the Legendre–Fenchel transform \(\psi _X^*\) of the cumulant generating function \(\psi _X(s)=\ln Ee^{sX}\) of a random variable X satisfies the following variational principle

$$\begin{aligned} \psi _X^*(a)=\inf _{m\ll \mu _X,\;\int x dm=a}D(m\Vert \mu _X), \end{aligned}$$
(1)

where \(D(m\Vert \mu _X)=\int \ln \frac{dm}{d\mu _X}dm\) is the relative entropy of a probability distribution m with respect to the distribution \(\mu _X\) of X.

The aim of this paper is to prove a variational formula for the Cramér functions of series of independent random variables, expressed in terms of the coefficients and the Cramér functions of the summands of a given series; see Theorem 2.3.

For a series \(\sum t_ig_i\) of independent standard normal r.v.s \(g_i\in \mathcal {N}(0,1)\), where \(\sum t_i^2<\infty \), the following tail estimate is known:

$$\begin{aligned} Pr\left( \sum t_ig_i > \alpha \right) \le \exp \left( -\frac{\alpha ^2}{2\sum t_i^2} \right) ; \end{aligned}$$

see for instance [10]. The function \(\frac{\alpha ^2}{2\sum t_i^2}\) is the Cramér function of the random series \(\sum t_ig_i\) (see Example 2.5).
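As a numerical aside (ours, not part of the original argument), the claim that the Chernoff exponent \(\sup _s\{s\alpha -\psi (s)\}\) of a Gaussian series equals \(\frac{\alpha ^2}{2\sum t_i^2}\) can be checked by brute force; the weights below are arbitrary illustrative choices:

```python
# Sanity check: for a Gaussian series X = sum t_i g_i we have
# psi(s) = s^2 * sum(t_i^2) / 2, and the Chernoff exponent
# sup_s { s*alpha - psi(s) } should equal alpha^2 / (2 * sum(t_i^2)).
import numpy as np

t = np.array([1.0, 0.5, 0.25, 0.125])   # arbitrary square-summable weights
sigma2 = float(np.sum(t**2))
alpha = 2.0

s = np.linspace(-20.0, 20.0, 400001)    # grid over which we take the sup
chernoff_exponent = float(np.max(s * alpha - 0.5 * s**2 * sigma2))
cramer_value = alpha**2 / (2.0 * sigma2)

assert abs(chernoff_exponent - cramer_value) < 1e-6
```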

To realize our purposes we will need the general notion of the Legendre–Fenchel transform in topological spaces (see [6] or [1]). Let V be a real locally convex Hausdorff space and \(V^*\) its dual space. By \(\left\langle \cdot ,\cdot \right\rangle \) we denote the canonical pairing between V and \(V^*\). Let \(f:V\mapsto \mathbb {R}\cup \{\infty \}\) be a function not identically equal to \(\infty \). By \(\mathcal {D}(f)\) we denote the effective domain of f, i.e. \(\mathcal {D}(f)=\{u\in V:\;f(u)<\infty \}\). The function \(f^*:V^*\mapsto \mathbb {R}\cup \{\infty \}\) defined by

$$\begin{aligned} f^*(u^*)=\sup _{u\in V}\left\{ \left\langle u,u^*\right\rangle -f(u)\right\} =\sup _{u\in \mathcal {D}(f)}\left\{ \left\langle u,u^*\right\rangle -f(u)\right\} \qquad (u^*\in V^*) \end{aligned}$$

is called the Legendre–Fenchel transform (convex conjugate) of f and a function \(f^{**}:V\mapsto \mathbb {R}\cup \{\infty \}\) defined by

$$\begin{aligned} f^{**}(u)=\sup _{u^*\in V^*}\left\{ \left\langle u,u^*\right\rangle -f^*(u^*)\right\} =\sup _{u^*\in \mathcal {D}(f^*)}\left\{ \left\langle u,u^*\right\rangle -f^*(u^*)\right\} \qquad (u\in V) \end{aligned}$$

is called the convex biconjugate of f.

The functions \(f^*\) and \(f^{**}\) are convex and lower semicontinuous in the weak* and weak topology on \(V^*\) and V, respectively. Moreover, the biconjugate theorem states that the function \(f:V\mapsto \mathbb {R}\cup \{\infty \}\) not identically equal to \(+\infty \) is convex and lower semicontinuous if and only if \(f=f^{**}\).

Let us mention additional properties of convex conjugates; see 4.3 Examples in [6]. Let V be a normed space. We denote by \(\Vert \cdot \Vert \) the norm of V and by \(\Vert \cdot \Vert _*\) the norm of \(V^*\). For conjugate exponents \(p,q\in (1,\infty )\) (\(\frac{1}{p}+\frac{1}{q}=1\)), the function \(\frac{1}{q}\Vert u^*\Vert _*^q\) is the convex conjugate of \(\frac{1}{p}\Vert u \Vert ^p\).
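A one-dimensional brute-force check of this example (our sketch; the conjugate exponents are arbitrary):

```python
# The Legendre-Fenchel transform of f(u) = |u|^p / p should be |u*|^q / q
# with 1/p + 1/q = 1; the sup is computed on a grid.
import numpy as np

p, q = 3.0, 1.5                       # conjugate exponents: 1/3 + 1/1.5 = 1
u = np.linspace(-50.0, 50.0, 2000001)
f = np.abs(u)**p / p

for u_star in [0.0, 0.7, 1.3, -2.0]:
    conj = float(np.max(u_star * u - f))   # f*(u*) = sup_u { u*u - f(u) }
    assert abs(conj - abs(u_star)**q / q) < 1e-4
```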

Remark 1.1

Let us emphasize that in Hilbert spaces the function \(\frac{1}{2}\Vert u \Vert ^2\) can be treated as invariant with respect to the Legendre–Fenchel transform (identifying \(V^*\) with V via the inner product).

Let us list two properties of convex conjugation and recall the notion of the infimal convolution. Convex conjugation is order-reversing:

$$\begin{aligned} \hbox {if} \quad f\le g\quad \hbox {then}\quad f^*\ge g^*\end{aligned}$$
(2)

and

$$\begin{aligned} \hbox {if}\quad g(u)=f(au)\quad \hbox {where}\quad (a\ne 0)\quad \hbox {then} \quad g^*(u^*)=f^*\left( \frac{u^*}{a}\right) . \end{aligned}$$
(3)

Let the functions \(f_1,\ldots ,f_n\) be convex, lower semicontinuous and not identically equal to \(+\infty \). Suppose there is a point in \(\bigcap _{i=1}^n\mathcal {D}(f_i)\) at which \(f_1,\ldots ,f_{n-1}\) are continuous. Then the convex conjugate of their sum is given by the so-called infimal (in this case even minimal) convolution, i.e.

$$\begin{aligned} (f_1+\cdots +f_n)^*(u^*)=\min _{u^*_1+\cdots +u^*_n=u^*}\{f_1^*(u^*_1)+\cdots +f_n^*(u^*_n)\} \end{aligned}$$

(see e.g. [7, Th. 1]).
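The formula can be illustrated on two quadratics, for which everything is available in closed form; the following sketch (with arbitrary coefficients \(a_1,a_2\)) compares the grid-based infimal convolution with the conjugate of the sum:

```python
# With f_i(u) = a_i u^2 / 2 we have f_i*(v) = v^2 / (2 a_i), and
# (f1 + f2)*(v) = v^2 / (2 (a1 + a2)) should coincide with
# min_{v1 + v2 = v} { f1*(v1) + f2*(v2) }.
import numpy as np

a1, a2 = 1.0, 3.0
v = 1.7

v1 = np.linspace(-10.0, 10.0, 2000001)        # v2 = v - v1
infimal = float(np.min(v1**2 / (2 * a1) + (v - v1)**2 / (2 * a2)))
direct = v**2 / (2 * (a1 + a2))

assert abs(infimal - direct) < 1e-8
```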

It will turn out (see Remark 2.8) that the variational formula for the Cramér function of a series of independent random variables is an instance of a generalization of the infimal convolution to infinitely many summands. Generalizations of formulas from finitely many parameters (variables) to infinitely many are often not obvious. Other examples, concerning the convex conjugates of logarithms of series of analytic functions, with applications to the convex conjugates of the spectral radius of functions of weighted composition operators, can be found in [8, 12].

2 Main theorem

The cumulant generating function \(\psi _X(s)=\ln Ee^{sX}\) of any random variable X is convex and lower semicontinuous on \(\mathbb {R}\) (and analytic on \(int \mathcal {D}(\psi _X)\)). It maps \(\mathbb {R}\) into \(\mathbb {R}\cup \{\infty \}\) and takes the value zero at zero, but it is possible that \(\psi _X(s)=\infty \) for all \(s\ne 0\). We will assume that it is finite on some neighborhood of zero, i.e. that X satisfies the condition \(\exists _{\lambda >0}\) s.t. \(Ee^{\lambda |X|}<\infty \). Let us emphasize that if \(EX=0\) then \(\psi _X\ge 0\); the Cramér function \(\psi _X^*\), however, is always nonnegative and attains 0 at the value EX.

Let \(I\subset \mathbb {N}\) and let \((X_i)_{i\in I}\) be a sequence of independent r.v.s. For \(\mathbf{t}=(t_i)_{i\in I}\in \ell ^2(I)\equiv \ell ^2\) consider \(X_\mathbf{t}=\sum _{i\in I}t_i X_i\) (convergence in \(L^2\) and almost surely). We will denote the cumulant generating function of \(X_\mathbf{t}\) by \(\psi _\mathbf{t}\), i.e. \(\psi _\mathbf{t}=\psi _{X_\mathbf{t}}\). Notice that for fixed s we can consider \(\psi _\mathbf{t}(s)\) as a functional of the variable \(\mathbf{t}\) in \(\ell ^2\); we will denote it by \(\psi ^s\). Let us emphasize that \(\psi ^s(\mathbf{t})=\psi _\mathbf{t}(s)\).

Before proving our main theorem, we establish the forms of the cumulant generating function and of the Legendre–Fenchel transform of a series of independent random variables.

Proposition 2.1

Let \((X_i)_{i\in I}\) be a sequence of zero-mean independent random variables with uniformly bounded second moments, and let for each \(i\in I\) the cumulant generating function \(\psi _i(s):=\psi _{X_i}(s)=\ln Ee^{sX_i}\) be finite on some neighborhood of zero. Then for each \(\mathbf{t}=(t_i)_{i\in I}\in \ell ^2\) the cumulant generating function of the random series \(X_\mathbf{t}=\sum _{i\in I}t_iX_i\) is given by the following equality

$$\begin{aligned} \psi _\mathbf{t}(s):=\psi _{X_\mathbf{t}}(s)=\sum _{i\in I}\psi _i(st_i). \end{aligned}$$

Proof

Because the \(X_i\) are independent, centered and have uniformly bounded second moments, for every \(\mathbf{t}\in \ell ^2\) the series \(X_\mathbf{t}=\sum _{i\in I}t_iX_i\) converges in \(L^2\) and a.s. Let us emphasize that the convergence of the series \(X_\mathbf{t}\) in \(L^2\) is equivalent to the convergence of the sequence \(\mathbf{t}\) in \(\ell ^2\). Fixing s, we consider the cumulant generating function \(\psi _\mathbf{t}(s)\) as the functional \(\psi ^s\) of the variable \(\mathbf{t}\in \ell ^2\), i.e. \(\psi ^s(\mathbf{t})=\psi _\mathbf{t}(s)\). We show that for every \(s\in \mathbb {R}\) the functional \(\psi ^s\) is convex and lower semicontinuous on \(\ell ^2\).

Convexity can be checked by using the Hölder inequality. Let \(\mathbf{t},\mathbf{u}\in \ell ^2\) and \(\lambda \in (0,1)\); then

$$\begin{aligned} \psi ^s(\lambda \mathbf{t}+(1-\lambda )\mathbf{u})= & {} \ln Ee^{s\sum _{i\in I}(\lambda t_i+(1-\lambda ) u_i)X_i}\\ \;= & {} \ln E \left[ \left( e^{s\sum _{i\in I}t_iX_i}\right) ^\lambda \left( e^{s\sum _{i\in I}u_iX_i}\right) ^{1-\lambda } \right] \\= & {} \ln E \left[ \left( e^{sX_\mathbf{t}}\right) ^\lambda \left( e^{sX_\mathbf{u}} \right) ^{1-\lambda }\right] . \end{aligned}$$

By the Hölder inequality for exponents \(1/\lambda \) and \(1/(1-\lambda )\) we get

$$\begin{aligned} E \left[ \left( e^{sX_\mathbf{t}}\right) ^\lambda \left( e^{sX_\mathbf{u}} \right) ^{1-\lambda } \right] \le \left( Ee^{sX_\mathbf{t}} \right) ^\lambda \left( Ee^{sX_\mathbf{u}} \right) ^{1-\lambda } \end{aligned}$$

and, in consequence,

$$\begin{aligned} \psi ^s(\lambda \mathbf{t}+(1-\lambda )\mathbf{u})\le & {} \lambda \ln Ee^{sX_\mathbf{t}} + (1-\lambda )\ln Ee^{sX_\mathbf{u}}\\ \;= & {} \lambda \psi ^s(\mathbf{t})+(1-\lambda )\psi ^s(\mathbf{u}). \end{aligned}$$

Lower semicontinuity follows from Fatou’s lemma. Let \(\mathbf{t}^n\rightarrow \mathbf{t}^0\) in \(\ell ^2\). Note that \(X_{\mathbf{t}^n}\) converges to \(X_{\mathbf{t}^0}\) in \(L^2\), hence a.s. after passing to a subsequence, which suffices for the liminf estimate below. Then

$$\begin{aligned} \liminf _{n\rightarrow \infty }\psi ^s(\mathbf{t}^n) = \liminf _{n\rightarrow \infty }\ln Ee^{sX_{\mathbf{t}^n}}\ge & {} \ln E(\liminf _{n\rightarrow \infty }e^{sX_{\mathbf{t}^n}})\\ \;= & {} \ln E(e^{s\lim _{n\rightarrow \infty }X_{\mathbf{t}^n}})=\psi ^s(\mathbf{t}^0). \end{aligned}$$

It means that \(\psi ^s\) is lower semicontinuous on \(\ell ^2\).

Let \(\ell _0\) denote the space of sequences with finite supports. Observe that \(\ell _0\) is a dense subset of \(\ell ^2\). For \(\mathbf{t}\in \ell _0\) we have

$$\begin{aligned} \psi ^s(\mathbf{t})= & {} \ln Ee^{sX_\mathbf{t}} = \ln \prod _{i\in I}Ee^{st_iX_i}\\ \;= & {} \sum _{i\in I}\psi _i(st_i). \end{aligned}$$

For \(\mathbf{t}\in \ell ^2\) consider the series \(\sum _{i\in I}\psi _i(st_i)\). Since \(EX_i=0\) implies \(\psi _i\ge 0\), the series \(\sum _{i\in I}\psi _i(st_i)\) is either convergent or divergent to plus infinity. Since the \(\psi _i\) are convex, this series defines a convex function on the whole of \(\ell ^2\). Let \(\mathbf{t}^n\rightarrow \mathbf{t}^0\) in \(\ell ^2\); hence \(t^n_i\rightarrow t^0_i\) for every \(i\in I\). By the superadditivity of the limit inferior and then by the lower semicontinuity of each \(\psi _i\), we get

$$\begin{aligned} \liminf _{n\rightarrow \infty }\psi ^s(\mathbf{t}^n) = \liminf _{n\rightarrow \infty }\sum _{i\in I}\psi _i(st^n_i)\ge & {} \sum _{i\in I}\liminf _{n\rightarrow \infty }\psi _i(st^n_i)\\ \;\ge & {} \sum _{i\in I}\psi _i(st^0_i)=\psi ^s(\mathbf{t}^0). \end{aligned}$$

Notice that both functions, \(\psi ^s(\mathbf{t})\) and the series \(\sum _{i\in I}\psi _i(st_i)\), are convex and lower semicontinuous on \(\ell ^2\) and, moreover, coincide on \(\ell _0\), a dense subset of \(\ell ^2\). It follows that these functions are equal on the whole of \(\ell ^2\), i.e.

$$\begin{aligned} \psi ^s(\mathbf{t})=\sum _{i\in I}\psi _i(st_i) \end{aligned}$$

for every \(\mathbf{t}\) in \(\ell ^2\). \(\square \)
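Proposition 2.1 can be illustrated numerically; the following Monte Carlo sketch (our illustration, with arbitrary weights, symmetric Bernoulli summands for which \(\psi _i(s)=\ln \cosh s\), and a deliberately loose tolerance) compares the closed form with a simulation:

```python
# psi_t(s) = sum_i ln cosh(s t_i) for symmetric Bernoulli summands;
# compare with a Monte Carlo estimate of ln E exp(s X_t).
import numpy as np

rng = np.random.default_rng(0)
t = np.array([1.0, 0.5, 0.25])   # arbitrary weights
s = 0.8

closed_form = float(np.sum(np.log(np.cosh(s * t))))

# one million samples of X_t = sum_i t_i X_i with X_i = +-1 equiprobable
X = rng.choice([-1.0, 1.0], size=(1_000_000, t.size)) @ t
monte_carlo = float(np.log(np.mean(np.exp(s * X))))

assert abs(closed_form - monte_carlo) < 0.01
```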

Let us observe that for \(s=0\) we have \(\psi ^0\equiv 0\), and its convex conjugate is \((\psi ^0)^*(\mathbf{a})=0\) for \(\mathbf{a}=\mathbf{0}\) and \(\infty \) otherwise. From now on we assume that \(s\ne 0\). The form of \((\psi ^s)^*\) for \(s\ne 0\) is described in the following:

Proposition 2.2

Under the assumptions of Proposition 2.1 the convex conjugate of \(\psi ^s(\mathbf{t})=\sum _{i\in I}\psi _i(st_i)\) defined on \(\ell ^2\) equals

$$\begin{aligned} (\psi ^s)^*(\mathbf{a})=\sum _{i\in I}\psi _i^*\left( \frac{a_i}{s}\right) \quad (s\ne 0) \end{aligned}$$

for \(\mathbf{a}\in \ell ^2\), where the \(\psi ^*_i\)’s are the Cramér functions of the \(X_i\)’s.

Proof

The convex conjugate \((\psi ^s)^*\) is convex and lower semicontinuous on \((\ell ^2)^*\simeq \ell ^2\). Assume first that I is a finite set. By the form of \(\psi ^s\), the formula for the convex conjugate of a separable sum (see e.g. [2, Prop. 13.27]) and property (3), for \(\mathbf{a}\in \ell ^2(I)\) we get

$$\begin{aligned} (\psi ^s)^*(\mathbf{a})=\sum _{i\in I}\psi _i^*\left( \frac{a_i}{s}\right) \quad (s\ne 0). \end{aligned}$$

Now define the functional \(\sum _{i\in I}\psi _i^*(\frac{a_i}{s})\) on the whole space \(\ell ^2\). Since the \(\psi ^*_i\) are convex and lower semicontinuous, this functional is convex and, similarly as in the case of \(\sum _{i\in I}\psi _i\), one can show that it is also lower semicontinuous on \(\ell ^2\). Because this functional coincides with \((\psi ^s)^*\) on the dense subspace \(\ell _0\), both functionals are equal on \(\ell ^2\). \(\square \)
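In a finite dimension Proposition 2.2 can be verified directly. The following sketch (ours) takes the Gaussian case on \(\mathbb {R}^2\), where \(\psi ^s(\mathbf{t})=\frac{1}{2}s^2\Vert \mathbf{t}\Vert ^2\) and the claimed conjugate is \(\Vert \mathbf{a}\Vert ^2/(2s^2)\):

```python
# Conjugate of psi^s(t) = s^2 ||t||^2 / 2 on R^2 at a point a, computed
# by a grid sup, versus the closed form sum_i psi*(a_i/s) = ||a||^2/(2 s^2).
import numpy as np

s = 1.5
a = np.array([0.8, -0.4])

g = np.linspace(-10.0, 10.0, 1001)
t1, t2 = np.meshgrid(g, g)
numeric = float(np.max(a[0] * t1 + a[1] * t2 - 0.5 * s**2 * (t1**2 + t2**2)))
closed = float(np.dot(a, a) / (2.0 * s**2))

assert abs(numeric - closed) < 1e-3
```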

Let us emphasize that the functions \((\psi ^s)^*\) are nonnegative and lower semicontinuous. In large deviation theory such functions are called rate functions (good rate functions when the sublevel sets are not only closed but also compact). In the main theorem below we show that the contraction principle applied to the function \((\psi ^1)^*\) via the functional \(\left\langle \mathbf{t}, \cdot \right\rangle \) on \(\ell ^2\) gives the Cramér function of \(X_\mathbf{t}\).

Theorem 2.3

Let a sequence of r.v.’s \((X_i)_{i\in I}\) satisfy the assumptions of Proposition 2.1. Then for every \(\mathbf{t}=(t_i)_{i\in I}\in \ell ^2\) the Cramér function \(\psi _{X_\mathbf{t}}^*= \psi _\mathbf{t}^*\) of the random series \(X_\mathbf{t}=\sum _{i\in I}t_iX_i\) is given by the following variational formula

$$\begin{aligned} \psi ^*_\mathbf{t}(\alpha )=\inf _{\begin{array}{c} \mathbf{b}\in \ell ^2:\; \left\langle \mathbf{t}, \mathbf{b}\right\rangle =\alpha \end{array}}\sum _{i\in I}\psi ^*_i(b_i), \end{aligned}$$

for \(\alpha \in int \mathcal {D}(\psi _\mathbf{t}^*)\), where the \(\psi ^*_i\)’s are the Cramér functions of the \(X_i\)’s.

Proof

The functional \(\psi ^s\) is convex and lower semicontinuous on \(\ell ^2\). By virtue of the biconjugate theorem we have

$$\begin{aligned} \psi ^s(\mathbf{t})=\sup _{\mathbf{a}\in \ell ^2}\{\left\langle \mathbf{t}, \mathbf{a} \right\rangle -(\psi ^s)^*(\mathbf{a})\}, \end{aligned}$$

where \((\psi ^s)^*(\mathbf{a})=\sum _{i\in I}(\psi _i)^*(\frac{a_i}{s})\) \((s\ne 0)\). Substituting \(\mathbf{a}=s\mathbf{b}\) we get

$$\begin{aligned} (\psi ^s)^*(\mathbf{a})=\sum _{i\in I}\psi _i^*\left( \frac{a_i}{s} \right) =\sum _{i\in I}\psi _i^*(b_i)=(\psi ^1)^*(\mathbf{b}) \end{aligned}$$

and we can rewrite the above as follows

$$\begin{aligned} \psi ^s(\mathbf{t})=\sup _{\mathbf{b}\in \ell ^2}\{s\left\langle \mathbf{t}, \mathbf{b}\right\rangle -(\psi ^1)^*(\mathbf{b})\}. \end{aligned}$$
(4)

Let us return to the function \(\psi _\mathbf{t}\) which is convex and lower semicontinuous on \(\mathbb {R}\). By the biconjugate theorem we have

$$\begin{aligned} \psi _\mathbf{t}(s)=\sup _{\alpha \in \mathbb {R}}\{s\alpha -\psi _\mathbf{t}^*(\alpha )\}. \end{aligned}$$

Let us recall that \(\psi _\mathbf{t}(s)=\psi ^s(\mathbf{t})\). If we split the supremum in (4) into two stages: over the hyperplanes \(\{\mathbf{b}\in \ell ^2:\; \left\langle \mathbf{t}, \mathbf{b}\right\rangle =\alpha \}\) and then over \(\alpha \in \mathbb {R}\), we get

$$\begin{aligned} \psi _\mathbf{t}(s)= & {} \psi ^s(\mathbf{t})=\sup _{\alpha \in \mathbb {R}}\sup _{\begin{array}{c} \mathbf{b}\in \ell ^2:\\ \left\langle \mathbf{t}, \mathbf{b}\right\rangle =\alpha \end{array}}\{s\left\langle \mathbf{t}, \mathbf{b}\right\rangle -(\psi ^1)^*(\mathbf{b})\}\\ \;= & {} \sup _{\alpha \in \mathbb {R}}\{s\alpha -\inf _{\begin{array}{c} \mathbf{b}\in \ell ^2:\; \left\langle \mathbf{t}, \mathbf{b}\right\rangle =\alpha \end{array}}(\psi ^1)^*(\mathbf{b})\}. \end{aligned}$$

Let \(\varphi _\mathbf{t}(\alpha )\) denote the function \(\inf _{\begin{array}{c} \mathbf{b}\in \ell ^2:\; \left\langle \mathbf{t}, \mathbf{b}\right\rangle =\alpha \end{array}}(\psi ^1)^*(\mathbf{b})\). The functional \((\psi ^1)^*\) is convex on \(\ell ^2\), and convexity is preserved under contraction by a linear transformation (see [4, Th. III.32]), so \(\varphi _\mathbf{t}\) is convex on \(\mathbb {R}\). The display above shows that \(\psi _\mathbf{t}=(\varphi _\mathbf{t})^*\), whence \(\psi _\mathbf{t}^*=(\varphi _\mathbf{t})^{**}\); since \(\varphi _\mathbf{t}\) is convex, it coincides with its biconjugate on \(int \mathcal {D}(\psi _\mathbf{t}^*)\) (both functions take \(\infty \) on the complement of \(cl\mathcal {D}(\psi _\mathbf{t}^*)\)), that is

$$\begin{aligned} \psi ^*_\mathbf{t}(\alpha )=\inf _{\begin{array}{c} \mathbf{b}\in \ell ^2:\; \left\langle \mathbf{t}, \mathbf{b}\right\rangle =\alpha \end{array}}(\psi ^1)^*(\mathbf{b}), \end{aligned}$$

for \(\alpha \in int \mathcal {D}(\psi _\mathbf{t}^*)\), where \((\psi ^1)^*(\mathbf{b})=\sum _{i\in I}\psi ^*_i(b_i)\). \(\square \)
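For a finite index set the variational formula can be tested numerically. The sketch below (our illustration: symmetric Bernoulli summands with arbitrary weights \(\mathbf{t}\) and level \(\alpha \)) compares the classical one-dimensional Legendre transform of \(\psi _\mathbf{t}\) with the constrained infimum from Theorem 2.3:

```python
# LHS: Legendre transform of psi_t(s) = sum_i ln cosh(s t_i).
# RHS: inf of sum_i psi_X*(b_i) over the hyperplane <t, b> = alpha,
# where psi_X*(b) = ((1+b)ln(1+b) + (1-b)ln(1-b))/2 for |b| <= 1.
import numpy as np
from scipy.special import xlogy

t = np.array([1.0, 0.5])
alpha = 0.6

s = np.linspace(-40.0, 40.0, 800001)
st = np.outer(s, t)
psi_t = np.sum(np.logaddexp(st, -st), axis=1) - t.size * np.log(2.0)
lhs = float(np.max(s * alpha - psi_t))

def cramer(b):
    # xlogy handles 0*ln(0) = 0; points outside [-1, 1] get +inf
    inside = 0.5 * (xlogy(1 + b, 1 + b) + xlogy(1 - b, 1 - b))
    return np.where(np.abs(b) <= 1, inside, np.inf)

b1 = np.linspace(-1.0, 1.0, 200001)
b2 = (alpha - t[0] * b1) / t[1]          # enforce <t, b> = alpha
rhs = float(np.min(cramer(b1) + cramer(b2)))

assert abs(lhs - rhs) < 1e-3
```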

Remark 2.4

We cannot prove in general that \(\varphi _\mathbf{t}\) is lower semicontinuous. Sometimes it is obvious, for instance when the effective domain of \(\psi _\mathbf{t}^*\) is an open subset of \(\mathbb {R}\) (or even the whole of \(\mathbb {R}\); see Examples 2.5 and 2.6). Under the assumption that \((\psi ^1)^*\) is a good rate function with respect to the weak* topology we can establish the lower semicontinuity of \(\varphi _\mathbf{t}\) (see Example 2.7).

Example 2.5

The moment generating function of a standard normal r.v. \(g\in \mathcal {N}(0,1)\) equals \( Ee^{sg}=\frac{1}{\sqrt{2\pi }}\int _\mathbb {R}e^{st-\frac{t^2}{2}}dt=e^\frac{s^2}{2} \), and hence its cumulant generating function is

$$\begin{aligned} \psi _g(s):=\ln Ee^{sg}=\frac{s^2}{2}. \end{aligned}$$

The function \(\frac{s^2}{2}\) is invariant with respect to the Legendre transform (see Remark 1.1), that is, the Cramér function of g is given by

$$\begin{aligned} \psi _g^*(\alpha )=\frac{\alpha ^2}{2}. \end{aligned}$$

By virtue of Proposition 2.1, the cumulant generating function of the series \(X_\mathbf{t}=\sum _{i\in I}t_ig_i\), where the \(g_i\) are independent and standard normally distributed, equals

$$\begin{aligned} \psi _\mathbf{t}(s)=\sum _{i\in I}\psi _g(st_i)=\frac{1}{2}s^2\sum _{i\in I}t_i^2=\frac{1}{2}s^2\Vert \mathbf{t} \Vert ^2. \end{aligned}$$

By the scaling property (3) we obtain the explicit form of the Cramér function

$$\begin{aligned} \psi _\mathbf{t}^*(\alpha )=\frac{\alpha ^2}{2\Vert \mathbf{t} \Vert ^2}. \end{aligned}$$

On the other hand by Theorem 2.3 we get

$$\begin{aligned} \psi _\mathbf{t}^*(\alpha )=\frac{1}{2}\inf _{\begin{array}{c} \mathbf{b}\in \ell ^2:\\ \left\langle \mathbf{t},\mathbf{b}\right\rangle =\alpha \end{array}}\Vert \mathbf{b}\Vert ^2. \end{aligned}$$

Using the Lagrange multiplier technique one can check that this infimum is attained at the sequence \(\mathbf{b}=(\frac{\alpha t_i}{\Vert \mathbf {t} \Vert ^2})_{i\in I}\).
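This Lagrange-multiplier computation can be confirmed numerically; the sketch below (ours) minimizes \(\frac{1}{2}\Vert \mathbf{b}\Vert ^2\) under the constraint \(\left\langle \mathbf{t},\mathbf{b}\right\rangle =\alpha \) in a finite dimension and compares the result with the closed-form minimizer:

```python
# Constrained minimization of ||b||^2 / 2 subject to <t, b> = alpha,
# versus the closed-form minimizer b_i = alpha * t_i / ||t||^2.
import numpy as np
from scipy.optimize import minimize

t = np.array([1.0, 0.5, 0.25, 0.125])   # arbitrary weights
alpha = 2.0

res = minimize(
    lambda b: 0.5 * np.dot(b, b),
    x0=np.zeros_like(t),
    constraints={"type": "eq", "fun": lambda b: np.dot(t, b) - alpha},
)

b_closed = alpha * t / np.dot(t, t)
assert np.allclose(res.x, b_closed, atol=1e-4)
# the minimal value is the Cramer function alpha^2 / (2 ||t||^2)
assert abs(res.fun - alpha**2 / (2 * np.dot(t, t))) < 1e-6
```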

Example 2.6

Let X be a r.v. with the Laplace density \(\frac{1}{2}e^{-|x|}\). Its moment generating function equals \(Ee^{sX}=\frac{1}{1-s^2}\) for \(|s|<1\) and \(\infty \) otherwise. Let us observe that

$$\begin{aligned} Ee^{sX}=\frac{1}{1-s^2}=\sum _{n=0}^\infty s^{2n}\ge \sum _{n=0}^\infty \frac{s^{2n}}{n!2^n}=e^\frac{s^2}{2}=Ee^{sg}, \end{aligned}$$

where g is standard normally distributed. Thus \(\psi _X\ge \psi _g\) and, since the Legendre–Fenchel transform is order-reversing, we have

$$\begin{aligned} \psi _X^*(\alpha )\le \psi _g^*(\alpha )=\frac{\alpha ^2}{2}. \end{aligned}$$
(5)

Moreover, by using the classical Legendre transform we can calculate an explicit form of \(\psi _X^*\) and get

$$\begin{aligned} \psi _X^*(\alpha )=\frac{\alpha ^2}{\sqrt{1+\alpha ^2}+1}+\ln \frac{2}{\sqrt{1+\alpha ^2}+1}. \end{aligned}$$

Consider a sequence \((X_i)_{i\in I}\) of independent r.v.s with the same Laplace distribution. By (5) we have

$$\begin{aligned} (\psi ^1)^*(\mathbf{b})=\sum _{i\in I}\psi ^*_X(b_i)\le \frac{1}{2}\Vert \mathbf{b} \Vert ^2. \end{aligned}$$

This means that \((\psi ^1)^*\) takes finite values on the whole space \(\ell ^2\), so for every \(\alpha \in \mathbb {R}\) the following infimum is finite:

$$\begin{aligned} \inf _{\begin{array}{c} \mathbf{b}\in \ell ^2:\; \left\langle \mathbf{t}, \mathbf{b}\right\rangle =\alpha \end{array}}\sum _{i\in I}\left( \frac{b_i^2}{\sqrt{1+b_i^2}+1}+\ln \frac{2}{\sqrt{1+b_i^2}+1}\right) , \end{aligned}$$

which is the finite value of \(\psi _\mathbf{t}^*\) at \(\alpha \).
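The displayed closed form of \(\psi _X^*\) can be cross-checked against a brute-force Legendre transform of \(\psi _X(s)=-\ln (1-s^2)\) (our numerical sketch; the test points are arbitrary):

```python
# Legendre transform of psi_X(s) = -ln(1 - s^2) on (-1, 1), computed on
# a grid, versus the closed form with r = sqrt(1 + alpha^2).
import numpy as np

s = np.linspace(-0.999999, 0.999999, 2000001)
psi = -np.log(1 - s**2)

for alpha in [0.0, 0.5, 1.0, 3.0]:
    numeric = float(np.max(s * alpha - psi))
    r = np.sqrt(1 + alpha**2)
    closed = alpha**2 / (r + 1) + np.log(2 / (r + 1))
    assert abs(numeric - closed) < 1e-4
```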

In the paper [11] one can find an example of a variational formula for the Cramér function of a series of weighted symmetric Bernoulli random variables, but with coefficients belonging to the space \(\ell ^1\). In the context of our Theorem 2.3, we recover the main result of that paper, now with coefficients in the larger space \(\ell ^2\).

Example 2.7

If X is a symmetric Bernoulli r.v., i.e. \(Pr(X=\pm 1)=\frac{1}{2}\), then \(Ee^{sX}=\cosh s\). By comparing the power series expansions one gets \(\cosh s\le \exp (\frac{s^2}{2})\). In this example, in contrast to the previous one, we have \(\psi _X(s)\le \psi _g(s)=\frac{s^2}{2}\) and

$$\begin{aligned} \psi _X^*(\alpha )\ge \frac{\alpha ^2}{2}. \end{aligned}$$
(6)

One can check that

$$\begin{aligned} \psi _X^*(\alpha )=\frac{1}{2}[(1+\alpha )\ln (1+\alpha )+(1-\alpha )\ln (1-\alpha )] \end{aligned}$$

for \(|\alpha |\le 1\) and \(\infty \) otherwise; we take \(0\ln 0=0\). Note that \(\psi _X^*(\pm 1)=\ln 2\).
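This closed form, including the boundary value \(\psi _X^*(\pm 1)=\ln 2\), can be cross-checked numerically (our sketch; the grid and tolerances are arbitrary):

```python
# Legendre transform of psi_X(s) = ln cosh(s) versus the entropy formula
# ((1+a)ln(1+a) + (1-a)ln(1-a))/2 on |a| < 1, plus the limit at a = 1.
import numpy as np

s = np.linspace(-60.0, 60.0, 1200001)
psi = np.logaddexp(s, -s) - np.log(2.0)   # ln cosh(s), overflow-safe

def cramer(alpha):
    return 0.5 * ((1 + alpha) * np.log(1 + alpha)
                  + (1 - alpha) * np.log(1 - alpha))

for alpha in [0.0, 0.3, 0.9]:
    numeric = float(np.max(s * alpha - psi))
    assert abs(numeric - cramer(alpha)) < 1e-4

# at alpha = 1 the sup is the limit s - ln cosh(s) -> ln 2
assert abs(np.max(s * 1.0 - psi) - np.log(2.0)) < 1e-4
```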

For a sequence of independent symmetric Bernoulli r.v.s, by the above inequality, we have

$$\begin{aligned} (\psi ^1)^*(\mathbf{b})\ge \frac{1}{2}\Vert \mathbf{b} \Vert ^2. \end{aligned}$$

Since \((\psi ^1)^*\) is lower semicontinuous in the weak* topology, the sublevel sets \(\{\mathbf{b}\in \ell ^2:\;(\psi ^1)^*(\mathbf{b})\le c\}\) are weak* closed (for \(c<0\) they are empty). By the above inequality, for each \(c\ge 0\) the sublevel set of \((\psi ^1)^*\) is contained in the closed ball \(\overline{B}(\mathbf{0};\sqrt{2c})=\{\mathbf{b}\in \ell ^2:\;\Vert \mathbf{b}\Vert \le \sqrt{2c}\}\). By virtue of the Banach–Alaoglu theorem (see [9, Th. 3.15]) the balls \(\overline{B}(\mathbf{0};\sqrt{2c})\) are weak* compact (they are the polar sets of the balls \(\overline{B}(\mathbf{0};1/\sqrt{2c})\)). It follows that the sublevel sets \(\{\mathbf{b}\in \ell ^2:\;(\psi ^1)^*(\mathbf{b})\le c\}\) are weak* compact, being closed subsets of compact sets. It means that in this topology \((\psi ^1)^*\) is a good rate function, and by the contraction principle (see [3, Th. 4.2.1]) \(\varphi _\mathbf{t}\) is also a good rate function on \(\mathbb {R}\); in particular, it is lower semicontinuous.

Let us emphasize that whenever \((\psi ^1)^*\) is known to be a good rate function, the lower semicontinuity of \(\varphi _\mathbf{t}\) follows in the same way.

Remark 2.8

Assume now that I is a finite set and set \(Y_i=t_iX_i\), so that \(X_\mathbf{t}=\sum _{i\in I} Y_i\). Note that

$$\begin{aligned} \psi _\mathbf{t}(s)=\sum _{i\in I}\psi _{Y_i}(s). \end{aligned}$$

The functions \(\psi _{Y_i}\), \(i\in I\), are convex, lower semicontinuous, not identically equal to \(+\infty \) and continuous at 0, that is, they satisfy the assumptions of Theorem 1 of [7]. The convex conjugate of their sum is therefore given by the infimal convolution

$$\begin{aligned} \psi _\mathbf{t}^*(\alpha )=\min _{\sum \alpha _i=\,\alpha }\sum _{i\in I}\psi _{Y_i}^*(\alpha _i). \end{aligned}$$
(7)

Since \(\psi _{Y_i}(s)=\psi _{t_iX_i}(s)=\psi _{X_i}(t_is)\), by the scaling property (3) one has

$$\begin{aligned} \psi _{Y_i}^*(\alpha _i)=\psi _{t_iX_i}^*(\alpha _i)=\psi _{X_i}^*\left( \frac{\alpha _i}{t_i}\right) . \end{aligned}$$

Substituting \(\alpha _i=b_it_i\) into (7) we obtain

$$\begin{aligned} \psi ^*_\mathbf{t}(\alpha )=\min _{\sum b_it_i\,=\,\alpha }\sum _{i\in I}\psi _{X_i}^*(b_i). \end{aligned}$$

This means that Theorem 2.3 can be treated as a special case of a generalization of the infimal convolution to the situation of infinitely many summands.