First, here is a restatement of the theorem. Let \( X_1, X_2, \ldots, X_n \) be a random sample from a distribution with mean \( \mu \) and variance \( \sigma^2 \). Then, if \( Z \) is a standard normal random variable,

$$ \mathop{{\lim }}\limits_{{n \to \infty }} P\left( {\frac{{\overline{X} - \mu }}{{\sigma /\sqrt {n} }} < z} \right) = P(Z < z) $$
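Although not part of the proof, the theorem can be illustrated by simulation. The sketch below (a minimal example, assuming NumPy is available; the choice of an Exp(1) population is ours, not the text's) draws repeated samples from a skewed distribution with \( \mu = \sigma = 1 \), standardizes each sample mean, and compares the empirical probabilities \( P(Y < z) \) with the standard normal probabilities \( P(Z < z) \):

```python
import math
import numpy as np

# Illustrative simulation (assumed example): sample from Exp(1), a skewed
# distribution with mu = sigma = 1, standardize the sample mean, and compare
# empirical probabilities with the standard normal cdf.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 1.0, 1.0, 100, 200_000

x = rng.exponential(scale=1.0, size=(reps, n))
y = (x.mean(axis=1) - mu) / (sigma / math.sqrt(n))  # standardized X-bar

def phi(z):
    """Standard normal cdf via the error function (avoids a scipy dependency)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for z in (-1.0, 0.0, 1.0, 2.0):
    print(z, float(np.mean(y < z)), phi(z))  # the two probabilities nearly agree
```

Even for a strongly skewed population, the empirical and normal probabilities agree to within about 0.02 at \( n = 100 \).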

The theorem says that the distribution of the standardized \( \overline{X} \) approaches the standard normal distribution. Our proof is only for the special case in which the moment generating function exists, which implies also that all its derivatives exist and that they are continuous. We will show that the moment generating function of the standardized \( \overline{X} \) approaches the moment generating function of the standard normal distribution. However, convergence of the moment generating function does not by itself imply the desired convergence of the distribution. This requires a theorem, which we will not prove, showing that convergence of the moment generating function implies the convergence of the distribution.

The standardized \( \overline{X} \) can be written as

$$ Y = \frac{{\overline{X} - \mu }}{{\sigma /\sqrt {n} }} = \frac{{(1/n)[({X_1} - \mu )/\sigma + ({X_2} - \mu )/\sigma + \cdots + ({X_n} - \mu )/\sigma ] - 0}}{{1/\sqrt {n} }} $$

The mean and standard deviation for the first ratio come from the first proposition of Section 6.2, and the second ratio is algebraically equivalent to the first. That is, if we define \( W_i \) to be the standardized \( X_i \), so \( W_i = (X_i - \mu)/\sigma \), \( i = 1, 2, \ldots, n \), then the standardized \( \overline{X} \) can be written as the standardized \( \bar{W} \),

$$ Y = \frac{{\overline{X} - \mu }}{{\sigma /\sqrt {n} }} = \frac{{\overline{W} - 0}}{{1/\sqrt {n} }}. $$

This allows a simplification of the proof because we can work with the simpler variable \( W_i \), which has mean 0 and variance 1. We need to obtain the moment generating function of

$$ Y = \frac{{\overline{W} - 0}}{{1/\sqrt {n} }} = \sqrt {n} \ \overline{W} = ({W_1} + {W_2} + \cdots + {W_n})/\sqrt {n} $$

from the moment generating function \( M(t) \) of \( W_i \). With the help of the Section 6.3 proposition on moment generating functions of linear combinations of independent random variables, we get \( M_Y(t) = [M(t/\sqrt{n})]^n \). We want to show that this converges to the moment generating function of a standard normal random variable, \( M_Z(t) = e^{t^2/2} \). It is easier to take the logarithm of both sides and show instead that \( \ln[M_Y(t)] = n \ln[M(t/\sqrt{n})] \to t^2/2 \). This is equivalent because the logarithm and its inverse are continuous functions.
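As a concrete check (an assumed example, not from the text), take \( W \) to be a standardized Exp(1) variable, i.e. \( W = X - 1 \) with \( X \sim \mathrm{Exp}(1) \); its mgf is \( M(s) = e^{-s}/(1 - s) \) for \( s < 1 \). Computing \( n \ln[M(t/\sqrt{n})] \) for increasing \( n \) shows the claimed limit \( t^2/2 \) numerically:

```python
import math

# Assumed concrete example: W = X - 1 with X ~ Exp(1) has mean 0, variance 1,
# and mgf M(s) = exp(-s) / (1 - s) for s < 1.
def ln_M(s):
    return -s - math.log(1.0 - s)  # ln M(s) for this W

t = 1.0
for n in (10, 100, 10_000, 1_000_000):
    print(n, n * ln_M(t / math.sqrt(n)))  # approaches t**2 / 2 = 0.5
```

Since \( \ln M(s) = s^2/2 + s^3/3 + \cdots \) here, the printed values decrease toward 0.5 at rate \( 1/\sqrt{n} \).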

The limit can be obtained from two applications of L’Hôpital’s rule if we set \( x = 1/\sqrt{n} \), so that \( \ln[M_Y(t)] = n \ln[M(t/\sqrt{n})] = \ln[M(tx)]/x^2 \). Both the numerator and the denominator approach 0 as \( n \) gets large and \( x \) gets small (recall that \( M(0) = 1 \) and \( M(t) \) is continuous), so L’Hôpital’s rule is applicable. Thus, differentiating the numerator and denominator with respect to \( x \),

$$ \mathop{{\lim }}\limits_{{x \to 0}} \frac{{\ln [M(tx)]}}{{{x^2}}} = \mathop{{\lim }}\limits_{{x \to 0}} \frac{{M^{\prime}(tx)t/M(tx)}}{{2x}} = \mathop{{\lim }}\limits_{{x \to 0}} \frac{{M^{\prime}(tx)t}}{{2xM(tx)}} $$

Recall that \( M(0) = 1 \), \( M'(0) = E(W) = 0 \), and that \( M(t) \) and its derivative \( M'(t) \) are continuous, so both the numerator and the denominator of the limit on the right approach 0. Thus we can use L’Hôpital’s rule again.

$$ \mathop{{\lim }}\limits_{{x \to 0}} \frac{{M^{\prime}(tx)t}}{{2xM(tx)}} = \mathop{{\lim }}\limits_{{x \to 0}} \frac{{M^{{\prime}{\prime}}(tx){t^2}}}{{2M(tx) + 2xM^{\prime}(tx)t}} = \frac{{(1){t^2}}}{{2(1) + 2(0)(0)t}} = {t^2}/2 $$

In evaluating the limit we have used the continuity of \( M(t) \) and its derivatives, together with \( M(0) = 1 \), \( M'(0) = E(W) = 0 \), and \( M''(0) = E(W^2) = 1 \). Thus \( \ln[M_Y(t)] \to t^2/2 \), so \( M_Y(t) \to e^{t^2/2} \); that is, the moment generating function of the standardized \( \overline{X} \) converges to the moment generating function of a standard normal random variable.
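The conclusion can also be checked numerically for the same assumed concrete example (a standardized Exp(1) variable, with mgf \( M(s) = e^{-s}/(1-s) \)): the mgf \( M_Y(t) = [M(t/\sqrt{n})]^n \) itself approaches \( M_Z(t) = e^{t^2/2} \) as \( n \) grows.

```python
import math

# Assumed concrete example: mgf of a standardized Exp(1) variable.
def M(s):
    return math.exp(-s) / (1.0 - s)  # valid for s < 1

t = 0.8
target = math.exp(t * t / 2.0)  # M_Z(t) = e^{t^2/2}
for n in (10, 100, 10_000):
    print(n, M(t / math.sqrt(n)) ** n, target)  # M_Y(t) -> M_Z(t)
```

At \( n = 10{,}000 \) the two mgf values already agree to about three decimal places.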