In this chapter, we turn our attention to functional series of the form ∑f k (x), whose terms are functions of x rather than real numbers. In Section 8.1, we present two fundamental tests for the convergence of numerical series: the ratio test and the root test. In Section 8.2, we begin our discussion with an important particular case in which \({f}_{k}(x) = {a}_{k}{(x - a)}^{k}\), and we shall be especially interested in deriving important properties of such series, which may be thought of as polynomials of infinite degree, although some of their properties are quite different from those of polynomials. In that section, we also discuss the convergence of power series, term-by-term differentiation and integration of power series, and some practical methods of computing the interval of convergence of a given power series. The uniqueness of power series representations may be used in a number of ways: whatever method we use to find a convergent power series representing a function, the resulting series must be its Taylor series.

8.1 The Ratio Test and the Root Test

The ratio and the root tests for series whose terms are real numbers are easy and useful consequences of the direct comparison test. Later, we shall obtain their analogues when the terms of the series are functions of x.

8.1.1 The Ratio Test

Theorem 8.1 (Ratio test). 

Consider the series ∑ a k , where a k > 0 for all k ≥ N 0 . Let

$$L ={ limsup}_{k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}} \quad \mbox{ and}\quad \mathcal{l} ={ liminf }_{k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}}.$$

Then the series ∑ a k converges if L < 1, and diverges if ℓ > 1. This test offers no conclusion concerning the convergence of the series if ℓ ≤ 1 ≤ L.

Proof.

First we present a direct proof. The second proof is a simple consequence of the root test and Lemma 2.59.

Let L < 1. Choose any r such that 0 ≤ L < r < 1, e.g., \(r = (1 + L)/2\). Then by the definition of limit superior, there exists an N > 0 such that

$$\frac{{a}_{k+1}} {{a}_{k}} < r\quad \mbox{ for all $k \geq N$ }\quad (\geq {N}_{0}).$$

Thus,

$${a}_{N+1} < {a}_{N}r,\ {a}_{N+2} < {a}_{N+1}r < {a}_{N}{r}^{2},\ldots,\ {a}_{ N+k} < {a}_{N}{r}^{k}\quad \mbox{ for $k \geq 1$}.$$

Then by the comparison test, the series ∑a k converges. Indeed, since 0 < r < 1,

$$\sum\limits_{k=0}^{\infty }{a}_{ N+k} < {a}_{N}\sum\limits_{k=0}^{\infty }{r}^{k} = \frac{{a}_{N}} {1 - r},$$

which means that the series \(\sum\limits_{k=0}^{\infty }{a}_{k}\) is dominated by the convergent series

$$\sum\limits_{k=0}^{N-1}{a}_{ k} + {a}_{N}(1 + r + {r}^{2} + \cdots \,) =\sum\limits_{k=0}^{N-1}{a}_{ k} + \frac{{a}_{N}} {1 - r},$$

and so converges.

If ℓ > 1, then we choose R, e.g., \(R = (\mathcal{l} + 1)/2\), such that ℓ > R > 1. Then, by the definition of limit inferior, there exists an N ( ≥ N 0) > 0 such that \({a}_{k+1}/{a}_{k} > R\) for all k ≥ N, and hence a N + k  > a N R k for all k ≥ 1. But R > 1, and so for k ≥ 1,

$${a}_{N+k} > {a}_{N}{R}^{k} > {a}_{ N},$$

and hence the general term cannot tend to zero. Thus by the divergence test, the series ∑a k is divergent.

To prove that the ratio test is inconclusive if ℓ ≤ 1 ≤ L, we consider the harmonic p-series \(\sum\limits_{k=1}^{\infty }1/{k}^{p}\) (with p = 1, 2), for which \(\mathcal{l} = L = 1\). □ 
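For instance, for p = 1, 2 we have

$$\frac{{a}_{k+1}} {{a}_{k}} ={ \left (\frac{k} {k + 1}\right )}^{p} \rightarrow 1\quad \mbox{ as $k \rightarrow \infty $},$$

so that \(\mathcal{l} = L = 1\) in both cases, even though ∑(1 ∕ k) diverges while ∑(1 ∕ k 2) converges.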

Alternatively, one can quickly obtain a proof of the ratio test as a consequence of the root test (as demonstrated later in Section 8.1.2). Since an absolutely convergent series is convergent, the ratio test is often stated in the following equivalent form.

Theorem 8.2 (Ratio test). 

Given a series ∑ a k of nonzero terms, let

$$L ={ limsup}_{k\rightarrow \infty }\Big{\vert }\frac{{a}_{k+1}} {{a}_{k}} \Big{\vert }\quad \mbox{ and}\quad \mathcal{l} ={ liminf }_{k\rightarrow \infty }\Big{\vert }\frac{{a}_{k+1}} {{a}_{k}} \Big{\vert }.$$

Then the series ∑ a k converges absolutely if L < 1 and diverges if ℓ > 1. If ℓ ≤ 1 ≤ L, then the series may or may not converge.

We recall that if \({\lim }_{k\rightarrow \infty }\vert {a}_{k+1}/{a}_{k}\vert \) exists, then it is equal to both ℓ and L, and hence the ratio test gives information unless, of course, \({\lim }_{k\rightarrow \infty }\vert {a}_{k+1}/{a}_{k}\vert \) equals 1. Thus, when this limit exists, the ratio test takes the following simple form, which is the familiar ratio test from calculus, especially when a k  > 0 for all k.

Corollary 8.3 (Simple form of the ratio test). 

Given the series ∑ a k of nonzero terms, let

$$L {=\lim }_{k\rightarrow \infty }\Big{\vert }\frac{{a}_{k+1}} {{a}_{k}} \Big{\vert }.$$

Then the series ∑ a k converges (absolutely) if L < 1 and diverges if L > 1. If L = 1, then the series may or may not converge.

Proof.

Again, we present a direct proof because of its independent interest. If L < 1, choose r such that 0 ≤ L < r < 1. Then for \(\epsilon = r - L > 0\), there exists an N such that

$$\left \vert \frac{{a}_{k+1}} {{a}_{k}} \right \vert - L \leq \left \vert \left \vert \frac{{a}_{k+1}} {{a}_{k}} \right \vert - L\right \vert < \epsilon \quad \mbox{ for all $k \geq N$}.$$

In particular, \(\vert {a}_{k+1}\vert < (L + \epsilon )\vert {a}_{k}\vert = \vert {a}_{k}\vert r\). Consequently,

$$\vert {a}_{N+k}\vert < \vert {a}_{N}\vert {r}^{k}\quad \mbox{ for all $k \geq 1$},$$

and thus by the comparison test, the series ∑a k converges absolutely.

The proof of the second part is similar. Indeed, we choose R such that L > R > 1. Then for \(\epsilon = L - R > 0\), there exists an N such that

$$L -\left \vert \frac{{a}_{k+1}} {{a}_{k}} \right \vert \leq \left \vert \left \vert \frac{{a}_{k+1}} {{a}_{k}} \right \vert - L\right \vert < L - R\quad \mbox{ for all $k \geq N$},$$

which implies that | a N + k  |  >  | a N  | R k for all k ≥ 1, and so certainly {a k } does not converge to zero; hence the series diverges by the divergence test. The test gives no information when L = 1, since L = 1 both for the divergent series ∑(1 ∕ k) and for the convergent series ∑(1 ∕ k 2). □ 

Here are some examples illustrating the ratio test, namely Corollary 8.3. The ratio test is most useful for series involving factorials or exponentials.

Example 8.4.

Test the series ∑ k = 1 a k for convergence, where a k equals:

  1. (a)

    \(\frac{{5}^{k}} {k!}\). (b) \(\frac{{k}^{k}} {k!}\). (c) \(\frac{1} {2k - 3}\). (d) \(\frac{k!} {{k}^{k}}\). (e) \(\frac{{2}^{k}k!} {{k}^{k}}\). (f) \(\frac{{3}^{k}k!} {{k}^{k}}\).

solution.

  1. (a)

    Let \({a}_{k} = {5}^{k}/k!\). Then we note that

    $$L {=\lim }_{k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}} {=\lim }_{k\rightarrow \infty } \frac{{5}^{k+1}k!} {(k + 1)!{5}^{k}} {=\lim }_{k\rightarrow \infty } \frac{5} {k + 1} = 0.$$

    Thus L < 1, and the ratio test tells us that the given series converges.

  2. (b)

    Let \({a}_{k} = {k}^{k}/k!\). Then the series ∑a k diverges, because

    $$L {=\lim }_{k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}} {=\lim }_{k\rightarrow \infty }\frac{k!{(k + 1)}^{k+1}} {{k}^{k}(k + 1)!} {=\lim }_{k\rightarrow \infty }\Big{(}1 + \frac{1} {k}{\Big{)}}^{k} = \mathrm{e} > 1.$$

    Also note that a k  ↛ 0 as k → ∞.

  3. (c)

    Let \({a}_{k} = 1/(2k - 3)\). Then a k  > 0 for all k ≥ 2, and we find that

    $$L {=\lim }_{k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}} {=\lim }_{k\rightarrow \infty }\frac{2k - 3} {2k - 1} = 1.$$

    The ratio test is inconclusive. We can use the comparison test to determine convergence. Indeed, for k ≥ 2,

    $${a}_{k} = \frac{1} {2k - 3} > \frac{1} {2k} = {c}_{k}\quad \mbox{ and}\quad \sum\limits_{k=2}^{\infty }{c}_{ k}\quad \mbox{ diverges.}$$

    Consequently, we conclude that the given series is divergent.

  4. (d)

    From (b) we see that the corresponding limit value L in this case is \(L = 1/\mathrm{e} < 1\), and so the series converges.

  5. (e)

    Since

    $$L {=\lim }_{k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}} {=\lim }_{k\rightarrow \infty } \frac{2} {\Big{(}1 + \frac{1} {k}{\Big{)}}^{k}} = \frac{2} {\mathrm{e}} < 1,$$

    the series ∑ k = 1 a k converges.

  6. (f)

    In this case, we see that \(L = (3/\mathrm{e}) > 1\), and so ∑a k diverges. ∙ 
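As a quick numerical sanity check (this sketch is ours, not part of the text; the helper name log_term and the sample index k = 10 000 are chosen only for illustration), one can estimate the ratio \({a}_{k+1}/{a}_{k}\) for large k by working with logarithms of the terms, which avoids the overflow that k! and k k would otherwise cause:

```python
import math

def log_term(k, which):
    """Return log a_k for parts (a), (e), (f) of Example 8.4.

    Working with logarithms (math.lgamma(k + 1) = log k!) avoids the
    overflow that k! and k**k would cause for moderate k.
    """
    if which == "a":   # a_k = 5^k / k!
        return k * math.log(5) - math.lgamma(k + 1)
    if which == "e":   # a_k = 2^k k! / k^k
        return k * math.log(2) + math.lgamma(k + 1) - k * math.log(k)
    if which == "f":   # a_k = 3^k k! / k^k
        return k * math.log(3) + math.lgamma(k + 1) - k * math.log(k)
    raise ValueError(which)

for which, limit in [("a", 0.0), ("e", 2 / math.e), ("f", 3 / math.e)]:
    k = 10_000
    ratio = math.exp(log_term(k + 1, which) - log_term(k, which))
    print(which, round(ratio, 6), "limit ->", round(limit, 6))
```

The computed ratios are close to 0, to 2 ∕ e < 1, and to 3 ∕ e > 1, respectively, in agreement with the conclusions of Example 8.4.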

8.1.2 The Root Test

On the one hand, it is easier to test the convergence of series such as \(\sum\limits_{k=1}^{\infty }{(k!)}^{2}/(2k)!\) by the ratio test, whereas a direct computation using the root test would be unpleasant, although the structure of the two tests is quite similar. On the other hand, whenever the root test is applicable, it provides almost complete information, as demonstrated in a number of examples. The root test is particularly useful for a series involving a kth power in the kth term.
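For instance, for the series just mentioned, with \({a}_{k} = {(k!)}^{2}/(2k)!\), the ratio test gives

$$\frac{{a}_{k+1}} {{a}_{k}} = \frac{{((k + 1)!)}^{2}} {(2k + 2)!} \cdot \frac{(2k)!} {{(k!)}^{2}} = \frac{{(k + 1)}^{2}} {(2k + 1)(2k + 2)} \rightarrow \frac{1} {4} < 1\quad \mbox{ as $k \rightarrow \infty $},$$

so the series converges.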

Theorem 8.5 (kth root test). 

Suppose that {a k } is a sequence of nonnegative real numbers, and let

$$L ={ limsup}_{k\rightarrow \infty }\root{k}\of{{a}_{k}}.$$

Then the series ∑ a k converges if L < 1, and diverges if L > 1. If L = 1, the root test is inconclusive. That is, if L = 1, the series may or may not converge.

Proof.

Suppose that L < 1 and a k  ≥ 0 for all k. To show that the series converges, it suffices to show that the sequence of partial sums is bounded above. Note that L ≥ 0. Choose r such that 0 ≤ L < r < 1. Then by the definition of limit superior, for \(\epsilon = r - L > 0\), there exists an N such that

$$0 \leq \root{k}\of{{a}_{k}} < L + \epsilon = r,\quad \mbox{ i.e., }0 \leq {a}_{k} < {r}^{k}\mbox{ for all $k \geq N$}.$$

Since \(\sum\limits_{k=N}^{\infty }{r}^{k} = {r}^{N}/(1 - r)\), the direct comparison test shows that the series ∑ k = N a k converges. Consequently, the series ∑a k also converges.

Suppose that L > 1. Then \({a}_{k}^{1/k} > 1\) for infinitely many values of k. But this implies that a k  > 1 for infinitely many k, and thus {a k } does not converge to zero as k → ∞. Therefore, the series diverges.

For the divergent series \(\sum\limits_{k=1}^{\infty }(1/k)\) and the convergent series \(\sum\limits_{k=1}^{\infty }(1/{k}^{2})\), L turns out to equal 1, because

$${\left (\frac{1} {k}\right )}^{1/k} = \frac{1} {{k}^{1/k}} \rightarrow 1\quad \mbox{ and}\quad {\left ( \frac{1} {{k}^{2}}\right )}^{1/k} = \frac{1} {{k}^{2/k}} \rightarrow 1\quad \mbox{ as $k \rightarrow \infty $}.$$

Thus, the test is inconclusive if L = 1. □ 

Example 8.6.

Consider the series

$$\frac{1} {3} + \frac{1} {4} + \frac{1} {{3}^{2}} + \frac{1} {{4}^{2}} + \frac{1} {{3}^{3}} + \frac{1} {{4}^{3}} + \cdots \,.$$

We may write the general term explicitly:

$${a}_{k} = \left \{\begin{array}{ll} \frac{1} {{3}^{(k+1)/2}} & \mbox{ if $k$ is odd,} \\ \frac{1} {{4}^{k/2}} & \mbox{ if $k$ is even.} \end{array} \right.$$

Then

$$\frac{{a}_{k+1}} {{a}_{k}} = \left \{\begin{array}{ll} {\left (\frac{3} {4}\right )}^{(k+1)/2} & \mbox{ if $k$ is odd}, \\ \frac{1} {3}{\left (\frac{4} {3}\right )}^{k/2} & \mbox{ if $k$ is even}, \end{array} \right.$$

so that

$${limsup}_{k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}} = \infty \quad \mbox{ and}\quad {liminf }_{k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}} = 0.$$

The ratio test gives no information. On the other hand,

$${a}_{k}^{1/k} = \left \{\begin{array}{ll} \frac{1} {\sqrt{3}}\left ( \frac{1} {{3}^{1/(2k)}}\right )&\mbox{ if $k$ is odd,} \\ \frac{1} {2} &\mbox{ if $k$ is even}, \end{array} \right.$$

so that \({limsup}_{k\rightarrow \infty }{a}_{k}^{1/k} = 1/\sqrt{3} < 1.\) Hence the series ∑a k converges. ∙ 
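A small numerical illustration (ours, not the book's; the helper name term is chosen only for this sketch) makes the contrast visible: the consecutive ratios swing between very large and very small values, while the kth roots settle near \(1/\sqrt{3}\) and 1 ∕ 2.

```python
def term(k):
    # a_k = 3^{-(k+1)/2} for odd k and 4^{-k/2} for even k (k = 1, 2, 3, ...),
    # i.e. the general term of the series in Example 8.6.
    return 3.0 ** (-(k + 1) / 2) if k % 2 == 1 else 4.0 ** (-k / 2)

for k in (20, 21, 40, 41):
    ratio = term(k + 1) / term(k)
    root = term(k) ** (1.0 / k)
    print(f"k={k:2d}  a_(k+1)/a_k = {ratio:10.3e}   a_k^(1/k) = {root:.4f}")
# The ratios grow without bound along even k and tend to 0 along odd k
# (limsup = infinity, liminf = 0), while the k-th roots approach 1/2 and
# 1/sqrt(3) ~ 0.577, so only the root test settles the question.
```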

Finally, if a k  = k, then

$${\lim }_{k\rightarrow \infty }{a}_{k}^{1/k} = 1 {=\lim }_{ k\rightarrow \infty }\frac{{a}_{k+1}} {{a}_{k}},$$

and so both tests fail. However, ∑k is divergent.

Since an absolutely convergent series is convergent, the root test is often stated in the following equivalent form.

Theorem 8.7 (Root test). 

Suppose that {a k } is a sequence of real numbers, and let \(L ={ limsup}_{k\rightarrow \infty }\root{k}\of{\vert {a}_{k}\vert }\) . Then the series ∑ a k converges (absolutely) if L < 1, and diverges if L > 1. If L = 1 the series may or may not converge.

If \({\lim }_{k\rightarrow \infty }\root{k}\of{\vert {a}_{k}\vert }\) exists, then the root test gives information unless, of course, \({\lim }_{k\rightarrow \infty }\root{k}\of{\vert {a}_{k}\vert } = 1\). Because of its independent interest, we include here a direct proof of it which is also referred to as a root test.

Corollary 8.8 (Simple form of the root test). 

Suppose that {a k } is a sequence of real numbers, and let

$$L {=\lim }_{k\rightarrow \infty }\root{k}\of{\vert {a}_{k}\vert }.$$

Then the series ∑ a k converges (absolutely) if L < 1, and diverges if L > 1. If L = 1, the series may or may not converge.

Proof.

Suppose that L < 1. Then we note that L ≥ 0. Choose any r such that 0 ≤ L < r < 1. Then for \(\epsilon = r - L > 0\), there exists an N such that

$$\root{k}\of{\vert {a}_{k}\vert }- L \leq \left \vert \root{k}\of{\vert {a}_{k}\vert }- L\right \vert < \epsilon \quad \mbox{ for all $k \geq N$}.$$

This gives

$$\root{k}\of{\vert {a}_{k}\vert } < L + \epsilon = r\quad \mbox{ or}\quad 0 \leq \vert {a}_{k}\vert < {r}^{k}\quad \mbox{ for all $k \geq N$},$$

which means that the series ∑ | a k  | is dominated by the convergent geometric series ∑r k. Consequently, the series ∑a k converges absolutely by the comparison test.

Suppose that L > 1. Choose R such that L > R > 1. Then for \(\epsilon = L - R > 0\), there exists an N such that

$$L -\root{k}\of{\vert {a}_{k}\vert }\leq \left \vert \root{k}\of{\vert {a}_{k}\vert }- L\right \vert < \epsilon \quad \mbox{ for all $k \geq N$},$$

which gives \(\root{k}\of{\vert {a}_{k}\vert } > R\), or | a k  |  > R k for all k ≥ N where R > 1. Thus, the general term cannot tend to zero as k → ∞. Therefore, the series diverges.

As remarked in the proof of Theorem 8.5, for the series \(\sum\limits_{k=1}^{\infty }(1/k)\) and \(\sum\limits_{k=1}^{\infty }(1/{k}^{2})\), L turns out to be equal to 1, and therefore the test is inconclusive if L = 1. □ 

Remark 8.9.

In the ratio and the root tests, the case L > 1 includes the case L = ∞. ∙ 

Finally we outline the proof of the ratio test as a consequence of the root test.

Proof of Theorem 8.1. Let \(\alpha ={ limsup}_{n\rightarrow \infty }{a}_{n}^{1/n}\), where L and ℓ are as defined in Theorem 8.1. From Lemma 2.59, we obtain that ℓ ≤ α ≤ L. If L < 1, then the right-hand inequality gives α < 1, and so the series ∑a k converges by the root test. Similarly, if ℓ > 1, then α > 1, and so the series ∑a k diverges. Finally, if α = 1, then ℓ ≤ 1 ≤ L, and so the test is inconclusive. □ 

Example 8.10.

Test the series ∑ k = 2 a k for convergence, where a k equals

  1. (a)

    \(\frac{1} {{(\log k)}^{ck}}\) (c > 0), (b) \({(1 + 1/k)}^{2{k}^{2} }\), (c) \(\frac{k!} {1 \cdot 4 \cdot 7\cdots (3k + 1)}\).

solution.

Because a k involves a power for cases (a) and (b), it is appropriate to use the root test for those series.

  1. (a)

    Let \({a}_{k} = 1/{(\log k)}^{ck}\), c > 0. Then a k  > 0 for all k ≥ 2, and so

    $$L {=\lim }_{k\rightarrow \infty }\root{k}\of{{a}_{k}} {=\lim }_{k\rightarrow \infty } \frac{1} {{(\log k)}^{c}} = 0.$$

    Because L < 1, the root test tells us that the given series converges.

  2. (b)

    We note that

    $$L {=\lim }_{k\rightarrow \infty }{\left [{\left (1 + \frac{1} {k}\right )}^{2{k}^{2} }\right ]}^{1/k} {=\lim }_{ k\rightarrow \infty }{\left (1 + \frac{1} {k}\right )}^{2k} ={ \mathrm{e}}^{2} > 1.$$

    Since L > 1, the series diverges.

  3. (c)

    Because the corresponding a k involves k! , we may try the ratio test. Now

    $$\frac{{a}_{k+1}} {{a}_{k}} = \frac{[1 \cdot 4 \cdot 7\cdots (3k + 1)] \cdot (k + 1)!} {k! \cdot [1 \cdot 4 \cdot 7\cdots (3k + 4)]} = \frac{k + 1} {3k + 4} \rightarrow \frac{1} {3}\quad \mbox{ as }k \rightarrow \infty.$$

    Thus \(L = 1/3\), and by the ratio test, the series converges. ∙ 

8.1.3 Questions and Exercises

Questions 8.11.

  1. 1.

    Which of the ratio and root tests is more appropriate on which occasions?

  2. 2.

    Which of the ratio and root tests implies the other? When do they produce the same conclusion? When do they not?

  3. 3.

    Does \(\sum\limits_{k=1}^{\infty }\Big{(}{3}^{k}/{(4 - {(-1)}^{k})}^{k}\Big{)}\) converge?

Exercises 8.12.

  1. 1.

    For what values of \(a \in \mathbb{R}\) can we use the ratio test to prove that \(\sum\limits_{k=1}^{\infty }({a}^{k}k!/{k}^{k})\) is a convergent series?

  2. 2.

    Test the convergence of ∑ k = 1 a k , where

    1. (a)

      \({a}_{2k} = \frac{1} {{2}^{k}},\ \ {a}_{2k-1} = \frac{1} {{2}^{k+1}}\). (b) \({a}_{2k} = {3}^{k},\ \ {a}_{ 2k-1} = {3}^{-k}\).

  3. 3.

    Discuss the convergence of the series ∑a k by either the ratio test or the root test, whichever is applicable.

    1. (a)

      \({a}_{k} = {3}^{{(-1)}^{k}-k }\). (b) \({a}_{2k} = {2}^{-k},\ \ {a}_{ 2k-1} = {3}^{-k}\).

  4. 4.

    Sum the series \(\sum\limits_{k=1}^{\infty }k{3}^{-k}\) and \(\sum\limits_{k=1}^{\infty }({k}^{2} + k){5}^{-k}\). Justify your method.

  5. 5.

    Which of the following series converge?

    1. (a)

      \(\sum\limits_{k=1}^{\infty }\frac{{k}^{5}} {{5}^{k}}\). (b) \(\sum\limits_{k=1}^{\infty }{\left (1 + \frac{1} {\sqrt{k}}\right )}^{-{k}^{3/2} }\). (c) \(\sum\limits_{k=1}^{\infty }{\left (\frac{3k + 1} {k + 7} \right )}^{k}\).

  6. 6.

    Sum the series ∑ k = 1 a k , where a k equals

    1. (a)

      \(\frac{k} {{3}^{k-1}}\). (b) \(\frac{{k}^{2}} {{3}^{k-1}}\). (c) \(\frac{{k}^{3}} {{3}^{k-1}}\). (d) \(\frac{{k}^{2}} {k!\,{3}^{k-1}}\).

  7. 7.

    Suppose that {a n } is a sequence of nonnegative real numbers, and

    $${L}_{1} ={ limsup}_{n\rightarrow \infty }\frac{{a}_{n+1}} {{a}_{n}} \quad \mbox{ and}\quad {L}_{2} ={ limsup}_{n\rightarrow \infty }\root{n}\of{{a}_{n}}.$$

    Prove the following:

    1. (a)

      If either 0 ≤ L 1 < 1 or 0 ≤ L 2 < 1, then a n  → 0 as n → ∞.

    2. (b)

      If either L 1 > 1 or L 2 > 1, then a n  → ∞ as n → ∞.

8.2 Basic Issues around the Ratio and Root Tests

In analogy to numerical series whose terms are real numbers, we now consider functional series, which are series of functions of the form

$$\sum\limits_{k=0}^{\infty }{f}_{ k}(x),$$

where f k (x)’s (\(k \in {\mathbb{N}}_{0}\)) are functions defined on a subset of \(\mathbb{R}\). Since the sequence {S n (x)} of partial sums given by

$${S}_{n}(x) =\sum\limits_{k=0}^{n}{f}_{ k}(x)$$

is a function of x, we need to determine for what values of x the sequence {S n (x)} converges. The most interesting case is \({f}_{k}(x) = {a}_{k}{(x - a)}^{k}\), where a is a real constant. In this case, the functional series takes the form

$$\sum\limits_{k=0}^{\infty }{a}_{ k}{(x - a)}^{k} = {a}_{ 0} + {a}_{1}(x - a) + {a}_{2}{(x - a)}^{2} + \cdots \,,$$

which we call a power series with center at x = a or a power series about a, or a Taylor series about x = a. The numbers a 0, a 1, a 2, … are called the coefficients of the power series. If a = 0, then the series has the form

$$\sum\limits_{k=0}^{\infty }{a}_{ k}{x}^{k} = {a}_{ 0} + {a}_{1}x + {a}_{2}{x}^{2} + {a}_{ 3}{x}^{3} + \cdots $$

and is called a Maclaurin series. Clearly, a Maclaurin series is a Taylor series with a = 0. Moreover, these power series have a meaning only for those values of x for which the series converges. We also note that although each term of the power series is defined for all real x, it is not expected that the series will converge for all real x. The set of all values of x for which a given functional series ∑f k (x) (e.g., \({f}_{k}(x) = {a}_{k}{(x - a)}^{k}\)) converges is called the convergence set or the region of convergence for ∑f k (x).

As a motivation, we shall begin our discussion with a number of examples by treating functional series as numerical series by fixing x and then applying the ratio test or root test to determine whether the series converges for that particular x. In addition, interesting conclusions will be drawn just by looking at a k , the coefficients of the power series. We shall discuss this issue in detail later.

Example 8.13.

Find all real values of x for which the series \(\sum\limits_{k=0}^{\infty }{k}^{5}{x}^{k}\) converges.

solution.

Clearly, the series converges for x = 0. Keeping x fixed as a nonzero real number, we apply the ratio test:

$$L {=\lim }_{k\rightarrow \infty }\left \vert \frac{{(k + 1)}^{5}{x}^{k+1}} {{k}^{5}{x}^{k}} \right \vert {=\lim }_{k\rightarrow \infty }{\left (\frac{k + 1} {k} \right )}^{5}\vert x\vert = \vert x\vert.$$

Thus, according to the ratio test, the series converges if | x |  < 1 and diverges if | x |  > 1. The test fails if | x |  = 1, i.e., if x = 1 or − 1. When x = 1, the series becomes \(\sum {k}^{5}\), which diverges by the divergence test. When \(x = -1\), the series becomes \(\sum {(-1)}^{k}{k}^{5}\), which diverges, because the general term does not tend to zero. Similarly, it is easy to see that the power series \(\sum\limits_{k=1}^{\infty }k{x}^{k}\) converges for | x |  < 1 and diverges for | x | ≥ 1. ∙ 
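To make the last claim explicit, for \(\sum\limits_{k=1}^{\infty }k{x}^{k}\) the ratio test gives

$$L {=\lim }_{k\rightarrow \infty }\left \vert \frac{(k + 1){x}^{k+1}} {k{x}^{k}} \right \vert = \vert x\vert,$$

and for | x |  = 1 the general term kx k does not tend to zero, so the series diverges there as well.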

Example 8.14.

Test the series \(\sum\limits_{k=1}^{\infty }(3 -\cos k\pi ){x}^{k}\) for convergence.

solution.

Set \({a}_{k}(x) = (3 -\cos k\pi ){x}^{k}\). Then a k (x)≠0 for all k ≥ 1 and for each fixed x≠0. In order to apply the root test, we need to compute

$$\vert {a}_{k}(x){\vert }^{1/k} = {(3-\cos k\pi )}^{1/k}\vert x\vert = \left \{\begin{array}{cc} \vert x\vert ({2}^{1/k})&\mbox{ if $k$ is even,} \\ \vert x\vert ({2}^{2/k})& \mbox{ if $k$ is odd.} \end{array} \right.$$

Since \({2}^{1/k} =\exp ((1/k)\log 2) \rightarrow {\mathrm{e}}^{0} = 1\) (also \({2}^{2/k} = {({2}^{1/k})}^{2} \rightarrow 1\)) as k → ∞, we have

$$\vert {a}_{k}(x){\vert }^{1/k} \rightarrow \vert x\vert \quad \mbox{ as }k \rightarrow \infty,$$

and the root test (see Theorem 8.7 and Corollary 8.8) tells us that the series ∑a k (x) converges absolutely for all x with  | x |  < 1, and diverges for | x |  > 1. For | x |  = 1, the series clearly diverges, since the general term does not approach zero.

Notice that the ratio test as in Corollary 8.3 is not applicable, because

$$\frac{{a}_{k+1}(x)} {{a}_{k}(x)} = \left \{\begin{array}{cc} 2\vert x\vert &\mbox{ if $k$ is even,}\\ (1/2)\vert x\vert & \mbox{ if $k$ is odd,} \end{array} \right.$$

showing that

$${limsup}_{k\rightarrow \infty }\left \vert \frac{{a}_{k+1}(x)} {{a}_{k}(x)} \right \vert = 2\vert x\vert \quad \mbox{ and}\quad {liminf }_{k\rightarrow \infty }\left \vert \frac{{a}_{k+1}(x)} {{a}_{k}(x)} \right \vert = \frac{\vert x\vert } {2}.$$

However, the ratio test as in Theorem 8.2 is applicable. According to this, ∑a k (x) converges absolutely for all x with | x |  < 1 ∕ 2 and diverges for all x with | x |  > 2. Note that this test does not give information when 1 ∕ 2 ≤ | x | ≤ 2. However, the root test gives full information for | x |  < 1 and for | x |  > 1. ∙ 

Example 8.15.

Test the series \(\sum\limits_{k=0}^{\infty }(3 +\sin (k\pi /2)){(x - 1)}^{k}\) for convergence.

solution.

Set \({a}_{k} = (3 +\sin (k\pi /2))\). Then

$${a}_{k} = 3+\sin (k\pi /2) = \left \{\begin{array}{ll} 3 + {(-1)}^{(k-1)/2} & \mbox{ if $k$ is odd,} \\ 3 &\mbox{ if $k$ is even,} \end{array} \right.$$

so that {a k } k ≥ 0 is

$$\{3,4,3,2,3,4,3,2,\ldots \}.$$

Therefore, if \({a}_{k}(x) = {a}_{k}{(x - 1)}^{k}\), then a k (x)≠0 for all k ≥ 0 and for each fixed x≠1, so that the quotient \({a}_{k+1}(x)/{a}_{k}(x)\) assumes the values

$$\left \{\frac{4} {3}(x - 1),\ \frac{3} {4}(x - 1),\ \frac{2} {3}(x - 1),\ \frac{3} {2}(x - 1)\right \}$$

infinitely often. Fixing x≠1, we see that

$${limsup}_{k\rightarrow \infty }\left \vert \frac{{a}_{k+1}(x)} {{a}_{k}(x)} \right \vert = \frac{3} {2}\vert x - 1\vert \quad \mbox{ and}\quad {liminf }_{k\rightarrow \infty }\left \vert \frac{{a}_{k+1}(x)} {{a}_{k}(x)} \right \vert = \frac{2} {3}\vert x - 1\vert,$$

showing that (by Theorem 8.2) the series ∑a k (x) converges absolutely for all x with \(\vert x - 1\vert < 2/3\) and diverges for all x with \(\vert x - 1\vert > 3/2\). However, this test does not give information when \(2/3 \leq \vert x - 1\vert \leq 3/2\).

On the other hand, | a k (x) | 1 ∕ k assumes the values

$$\left \{{3}^{1/k}\vert x - 1\vert,\ {4}^{1/k}\vert x - 1\vert,\ {2}^{1/k}\vert x - 1\vert \right \}$$

infinitely often. Thus, we have

$${\lim }_{k\rightarrow \infty }\vert {a}_{k}(x){\vert }^{1/k} = \vert x - 1\vert,$$

and the root test (see Theorem 8.7 and Corollary 8.8) implies that the series ∑a k (x) converges absolutely for all x with | x − 1 |  < 1, and diverges for | x − 1 |  > 1. The root test does not tell us what happens when \(\vert x - 1\vert = 1\), but a direct verification shows that the series diverges there (at x = 0 and x = 2), since the general term does not approach zero. ∙ 

8.2.1 Convergence of Power Series

Power series of the form ∑a k (x − a)k occur as representations for certain functions, such as

$$\frac{1} {1 - x} =\sum\limits_{k=0}^{\infty }{x}^{k}\quad (\vert x\vert < 1),$$

and in solutions of a large class of differential equations. Now we wish to continue our discussion on the convergence of power series. More precisely, we ask, for what values of x does a given power series converge? Theorem 8.17 answers this question for the case a = 0, and a corresponding theorem for a power series about a is a consequence of this result (see Theorem 8.22). We begin with a lemma.

Lemma 8.16.

For a power series

$$\sum\limits_{k=0}^{\infty }{a}_{ k}{x}^{k},$$
(8.1)

we have the following:

  1. (1)

    If the series converges at x = x 0 (x 0 ≠0), then it converges absolutely for |x| < |x 0 |.

  2. (2)

    If the series diverges at x 1 , then it diverges for |x| > |x 1 |.

Proof.

Suppose that the series (8.1) converges at x = x 0 (x 0≠0). Then ∑a k x 0 k converges, and so a k x 0 k → 0 as k → ∞. In particular, since a convergent sequence is bounded, there exists an M such that | a k x 0 k | ≤ M for all k ≥ 0. We claim that the series (8.1) converges absolutely for | x |  <  | x 0 | . For each x with | x |  <  | x 0 | , we can write

$$\vert {a}_{k}{x}^{k}\vert = \vert {a}_{ k}{x}_{0}^{k}\vert {\left \vert \frac{x} {{x}_{0}}\right \vert }^{k} \leq M{r}^{k}\quad \mbox{ for all }k \geq 0\quad (r = \vert x/{x}_{ 0}\vert < 1),$$

showing that the series ∑ | a k x k | is dominated by the convergent geometric series ∑Mr k. So the given series (8.1) is absolutely convergent for | x |  <  | x 0 | , and hence must converge for | x |  <  | x 0 | (see Figure 8.1). Next suppose that the series ∑a k x 1 k diverges. If x is such that | x |  >  | x 1 | and ∑a k x k converges, then by (1), ∑a k x 1 k converges absolutely, contrary to the assumption. Hence ∑a k x k diverges for | x |  >  | x 1 | (see Figure 8.2).  □ 

Theorem 8.17 (Convergence of Maclaurin series). 

For a power series defined by (8.1), exactly one of the following is true:

  1. (1)

    The series converges only for x = 0.

  2. (2)

    The series converges for all x.

  3. (3)

    Neither (1) nor (2) holds, and there exists an R > 0 such that the series converges absolutely for |x| < R and diverges for |x| > R.

Proof.

The case (1) is self-explanatory, for the series (8.1) always converges when x = 0. If it diverges for all other values of x, then we set R = 0 by convention.

Fig. 8.1 Sketch for the convergence of ∑ a k x k at x 0.

Fig. 8.2 Sketch for the divergence of ∑ a k x k at x 1.

We can now suppose that the series ∑a k x k converges at x = x 0 (x 0≠0). By Lemma 8.16, the series ∑a k x k must converge absolutely for all x with | x |  <  | x 0 | .

Thus, if the series ∑a k x k diverges for all x with | x |  >  | x 0 | , then | x 0 | is the value of R mentioned in the statement of the theorem. Otherwise, the series converges at some point x 0′ with | x 0′ |  >  | x 0 | , and then, by Lemma 8.16, ∑a k x k converges (absolutely) for all x with | x |  <  | x 0′ | . Now let

$$I = \left \{r > 0 : \sum \vert {a}_{k}{x}^{k}\vert \ \mbox{ converges for }\vert x\vert < r\right \}.$$

If I has no upper bound, then the series ∑a k x k converges for all x. That is, it converges for | x |  < R with R = ∞.

If I is bounded, then we let R = supI. We cannot say what happens at the endpoints \(x = R,-R\). □ 

Examples 8.18.

  1. (a)

    By the alternating series test, the series \(\sum\limits_{k=1}^{\infty }({x}^{k}/k)\) converges for \(x = -1\), and so it must converge for all | x |  < 1. Also, the series for x = 1 is actually the harmonic series, which is divergent. Hence the series must diverge for | x |  > 1.

  2. (b)

    The series \(\sum\limits_{k=1}^{\infty }({x}^{k}/{k}^{2})\) converges absolutely for | x |  ≤ 1, because \(\vert x{\vert }^{k}/{k}^{2} \leq 1/{k}^{2}\) and ∑(1 ∕ k 2) converges.

  3. (c)

    The series \(\sum\limits_{k=1}^{\infty }({(-1)}^{k-1}/k){x}^{k}\) converges at x = 1 and diverges at \(x = -1\). Therefore, it must converge for | x |  < 1 and diverge for | x |  > 1. ∙ 

Note: When Case (3) in Theorem 8.17 occurs, then the series (8.1) could converge absolutely, converge conditionally, or diverge for | x |  = R, that is, at the endpoints x = R and − R.

In the case of a finite radius of convergence (defined below), Examples 8.18 illustrate why Theorem 8.17 makes no assertion about the behavior of a power series at the endpoints of the interval of convergence.

8.2.2 Radius of Convergence of Power Series

According to Theorem 8.17, the set of points at which the series ∑ k = 0 ∞ a k x k converges is an interval about the origin. We call this interval the interval of convergence of the series; this interval is either {0}, the set of all real numbers, or an interval of positive finite length centered at x = 0 that may contain both, neither, or one of its endpoints; that is, the interval may be open, half-open, or closed. If this interval has length 2R, then R is called the radius of convergence of ∑ k = 0 ∞ a k x k, as shown in Figure 8.3. We follow the convention that if the series about the origin converges only for x = 0, then we say that the series has radius of convergence R = 0, and if it converges for all x, we say that R = ∞, so that \(\mathbb{R}\) is treated as an interval of infinite radius. Theorem 8.17 (with the above understanding in the extreme cases R = 0, ∞) may now be rephrased as follows:

Fig. 8.3 Interval of convergence for a Taylor series.

Theorem 8.19.

For each power series about the origin, there is an R in [0,∞) ∪{∞}, the radius of convergence of the series, such that the series converges for |x| < R and diverges for |x| > R.

Note: If the radius of convergence R is a finite positive number, then as demonstrated in a number of examples (see also examples below), the behavior at the endpoints is unpredictable, and so the interval of convergence for the series ∑ k = 0 a k x k is one of the four intervals

$$(-R,R),\ [-R,R),\ (-R,R],\ [-R,R].$$

The following examples show that each of the three possibilities stated in Theorem 8.17 occurs.

Example 8.20.

Find the interval of convergence of the power series ∑ k = 1 a k x k, where a k equals

  1. (a)

    \(\frac{1} {k!}\); (b) k! ; (c) \(\frac{1} {\sqrt{k}}\); (d) \(\frac{{5}^{k}} {k}\); (e) \(\frac{{5}^{k}} {k!}\); (f) \({\left (\frac{k + 1} {k} \right )}^{2{k}^{2} }\).

solution.

For convenience, we let a k (x) = a k x k. If x = 0, then the series trivially converges.

  1. (a)

    For x≠0, let \({a}_{k}(x) = {x}^{k}/k!\). Then a k (x)≠0 for x≠0, and we use the ratio test to obtain

    $$L {=\lim }_{k\rightarrow \infty }\left \vert \frac{{a}_{k+1}(x)} {{a}_{k}(x)} \right \vert {=\lim }_{k\rightarrow \infty }\left \vert \frac{{x}^{k+1}k!} {(k + 1)!{x}^{k}}\right \vert ={ \left \vert x\right \vert \lim }_{k\rightarrow \infty } \frac{1} {k + 1} = 0.$$

    Because L = 0 and thus L < 1, the series converges (absolutely) for all x. Thus R = ∞, and the interval of convergence is the entire real line.

  2. (b)

    Let a k (x) = k! x k. Then for x≠0, we use the ratio test to obtain

    $$L {=\lim }_{k\rightarrow \infty }\left \vert \frac{{a}_{k+1}(x)} {{a}_{k}(x)} \right \vert {=\lim }_{k\rightarrow \infty }\left \vert \frac{(k + 1)!{x}^{k+1}} {k!{x}^{k}} \right \vert = \vert x{\vert \lim }_{k\rightarrow \infty }(k + 1).$$

    For any x other than 0, we have L = ∞. Hence, the power series converges only when x = 0. Thus the power series ∑k! x k has radius of convergence 0.

  3. (c)

    For each fixed x≠0, let \({a}_{k}(x) = {x}^{k}/\sqrt{k}\). Then using the ratio test, we find that

    $$L {=\lim }_{k\rightarrow \infty }\left \vert \frac{{a}_{k+1}(x)} {{a}_{k}(x)} \right \vert ={ \left \vert x\right \vert \lim }_{k\rightarrow \infty } \frac{\sqrt{k}} {\sqrt{k + 1}} = \left \vert x\right \vert.$$

    The power series converges absolutely if | x |  < 1, and diverges if | x |  > 1. We must also check the convergence of the series at the endpoints of the interval | x |  < 1, namely at \(x = -1\) and 1:

    • At \(x = -1\):

      $$\sum\limits_{k=1}^{\infty }\frac{{(-1)}^{k}} {\sqrt{k}} \quad \mbox{ {\it converges} by the alternating series test}$$
    • At x = 1:

      $$\sum\limits_{k=1}^{\infty } \frac{1} {\sqrt{k}}\quad \mbox{ {\it diverges} ($p$-series with $p = \frac{1} {2} < 1$).}$$

    Thus, the power series \(\sum ({x}^{k}/\sqrt{k})\) converges for − 1 ≤ x < 1 and diverges otherwise. The interval of convergence is [ − 1, 1), and the radius of convergence is R = 1.

  4. (d)

    For x≠0, let \({a}_{k}(x) = {5}^{k}{x}^{k}/k\). Then we apply the ratio test to obtain

    $$L {=\lim }_{k\rightarrow \infty }\left \vert \frac{{a}_{k+1}(x)} {{a}_{k}(x)} \right \vert {=\lim }_{k\rightarrow \infty } \frac{5k} {k + 1}\left \vert x\right \vert = 5\left \vert x\right \vert.$$

    Thus, the series converges absolutely for 5 | x |  < 1; that is, for \(\vert x\vert < \frac{1} {5}\). The radius of convergence is \(R = \frac{1} {5}\). At the endpoints, we have:

    • At \(x = \frac{1} {5}\),

      $$\sum\limits_{k=1}^{\infty }\frac{{5}^{k}} {k}{ \left (\frac{1} {5}\right )}^{k} =\sum\limits_{k=1}^{\infty }\frac{1} {k},\quad \mbox{ which is divergent}.$$
    • At \(x = -\frac{1} {5}\),

      $$\sum\limits_{k=1}^{\infty }\frac{{5}^{k}} {k}{ \left (-\frac{1} {5}\right )}^{k} =\sum\limits_{k=1}^{\infty }\frac{{(-1)}^{k}} {k},\quad \mbox{ which is convergent}.$$

    The interval of convergence is \(-\frac{1} {5} \leq x < \frac{1} {5}\).

  5. (e)

    Set \({a}_{k}(x) = {5}^{k}{x}^{k}/k!\), for x≠0. Applying the ratio test, we find that L = 0. Thus, the power series converges absolutely for all x, so the radius of convergence is infinite, and the interval of convergence is the entire real line.

  6. (f)

    Using the root test, we find that

    $$L {=\lim }_{k\rightarrow \infty }\vert {a}_{k}(x){\vert }^{1/k} {=\lim }_{ k\rightarrow \infty }\Big{(}1 + \frac{1} {k}{\Big{)}}^{2k}\left \vert x\right \vert ={ \mathrm{e}}^{2}\left \vert x\right \vert.$$

    Thus, the power series converges absolutely for \({\mathrm{e}}^{2}\vert x\vert < 1\), that is, for \(\vert x\vert < {\mathrm{e}}^{-2}\). It follows that the radius of convergence is \(R ={ \mathrm{e}}^{-2}\). How about at the endpoints? ∙ 
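    For instance, one can check that at the endpoints \(x = \pm {\mathrm{e}}^{-2}\) the general term does not tend to zero: there

    $$\log \vert {a}_{k}(x)\vert = 2{k}^{2}\log \Big{(}1 + \frac{1} {k}\Big{)} - 2k = -1 + \frac{2} {3k} -\cdots \rightarrow -1\quad \mbox{ as $k \rightarrow \infty $},$$

    so that \(\vert {a}_{k}(x)\vert \rightarrow {\mathrm{e}}^{-1}\neq 0\), and the series diverges at both endpoints by the divergence test; the interval of convergence is therefore \((-{\mathrm{e}}^{-2},{\mathrm{e}}^{-2})\).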

Example 8.21.

The series ∑ k = 0 x k converges absolutely to \(1/(1 - x)\) for | x |  < 1. It follows from Theorem 5.62 that the Cauchy product

$${\left (\sum\limits_{k=0}^{\infty }{x}^{k}\right )}^{2} =\sum\limits_{n=0}^{\infty }\left (\sum\limits_{k=0}^{n}{x}^{k}{x}^{n-k}\right ) =\sum\limits_{n=0}^{\infty }(n + 1){x}^{n}$$

also converges absolutely and has sum \(1/{(1 - x)}^{2}\) for | x |  < 1. ∙ 
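The coefficient identity can also be checked mechanically. The following sketch (ours; the truncation length N is arbitrary) convolves the coefficient sequence (1, 1, 1, …) of ∑x k with itself and recovers the coefficients n + 1:

```python
# Minimal check of the Cauchy product coefficients of (sum_{k>=0} x^k)^2:
# convolving the constant sequence (1, 1, 1, ...) with itself should give
# the coefficients 1, 2, 3, ..., i.e. n + 1.

N = 10
a = [1] * (N + 1)   # coefficients of the truncated geometric series
cauchy = [sum(a[k] * a[n - k] for k in range(n + 1)) for n in range(N + 1)]
print(cauchy)        # [1, 2, 3, ..., 11]
assert cauchy == [n + 1 for n in range(N + 1)]
```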

In some applications, we will encounter series about x = a. The procedure for determining the interval of convergence of such power series is exactly the same, and for the sake of completeness, it is illustrated in the following theorem and subsequent examples.

Theorem 8.22 (Convergence of a general Taylor series). 

For a power series

$$\sum\limits_{k=0}^{\infty }{a}_{ k}{(x - a)}^{k},$$
(8.2)

exactly one of the following is true:

  1. (1)

    The series converges only for x = a, i.e., R = 0.

  2. (2)

    The series converges for all x, i.e., R = ∞.

  3. (3)

    The series converges absolutely for |x − a| < R, i.e., for all x in the open interval \((a - R,a + R)\) , and diverges for |x − a| > R, i.e., for all x in \((-\infty,a - R) \cup (a + R,\infty )\) . It may converge absolutely, converge conditionally, or diverge at each of the endpoints \(x = a + R\) and \(x = a - R\) of the interval. Here R is called the radius of convergence of (8.2).

Proof.

Use the transformation \(X = x - a\) and apply the proof of Theorem 8.17 to the power series in the new variable X. □ 

Figure 8.3 illustrates the various types of interval of convergence of the series (8.2). Figure 8.4 demonstrates the fact that no conclusion can be drawn about the convergence of the series at the endpoints of the interval of convergence. A power series that converges for all x is called an everywhere convergent power series. A power series that converges only at x = a is often referred to as a nowhere convergent power series.

Example 8.23.

Consider the power series

$$\sum\limits_{k=0}^{\infty }\frac{{(x + 1)}^{k}} {{3}^{k}}.$$

With the introduction of a new variable \(X = (x + 1)/3\), the given series becomes the geometric series ∑X k, which converges for | X |  < 1 and diverges for | X | ≥ 1. Consequently, the given series converges absolutely for | x + 1 |  < 3 and diverges for | x + 1 | ≥ 3. Therefore, the interval of convergence of the given series is ( − 4, 2). ∙ 

8.2.3 Methods for Finding the Radius of Convergence

Applying the root test (Theorem 8.7), we have the following theorem.

Theorem 8.23 (Cauchy–Hadamard). 

The power series ∑ k=0 a k x k has radius of convergence R, where

$$\frac{1} {R} ={ limsup}_{n\rightarrow \infty }\vert {a}_{n}{\vert }^{1/n}.$$

Here we observe the conventions \(1/0 = \infty \) and \(1/\infty = 0\).

Proof.

For a fixed x≠0, we have

$${L}_{x} ={ limsup}_{n\rightarrow \infty }\vert {a}_{n}{x}^{n}{\vert }^{1/n} = \vert x\vert {limsup}_{ n\rightarrow \infty }\vert {a}_{n}{\vert }^{1/n} = \frac{1} {R}\vert x\vert.$$

Since L x  < 1 if and only if | x |  < R, according to Theorem 8.7, the series ∑ k = 0 a k x k converges absolutely when | x |  < R, and diverges when | x |  > R. In view of Theorem 8.19, the radius of convergence is R.

Fig. 8.4 Finite radius R of convergence of ∑ k ≥ 1 a k (x − a)k.

When R = ∞, the series converges everywhere; and when R = 0, the series converges only at x = 0. □ 

The following result generally suffices to explain the convergence of a Maclaurin or Taylor series.

Corollary 8.24.

The radius of convergence R of the power series ∑ k=0 a k x k is determined by

  1. (a)

    \(\frac{1} {R} {=\lim }_{n\rightarrow \infty }\vert {a}_{n}{\vert }^{1/n}\) (b) \(\frac{1} {R} {=\lim }_{n\rightarrow \infty }\Big{\vert }\frac{{a}_{n+1}} {{a}_{n}} \Big{\vert }\)

provided these limits exist. Again, we follow the conventions \(1/0 = \infty \) and \(1/\infty = 0\).

From Corollary 2.60 we recall that if \({\lim }_{n\rightarrow \infty }\big{\vert }{a}_{n+1}/{a}_{n}\big{\vert }\) exists, then \({\lim }_{n\rightarrow \infty }\vert {a}_{n}{\vert }^{1/n}\) exists and has the same value, but the converse is not true. This observation gives the following corollary.

Corollary 8.25.

Suppose that \({\lim }_{n\rightarrow \infty }\vert {a}_{n+1}/{a}_{n}\vert \) exists. Then the radius of convergence R of the power series ∑ k=0 a k x k is determined by

$$\frac{1} {R} {=\lim }_{n\rightarrow \infty }\vert {a}_{n}{\vert }^{1/n} {=\lim }_{ n\rightarrow \infty }\Big{\vert }\frac{{a}_{n+1}} {{a}_{n}} \Big{\vert }.$$

Example 8.28.

Consider the power series

$$\sum\limits_{k=0}^{\infty }{7}^{-k}{x}^{5k}.$$

Note that in this power series the quotient \({a}_{n+1}/{a}_{n}\) is undefined if n≠5k, \(k \in {\mathbb{N}}_{0}\). Hence Corollary 8.24 (b) is not applicable. However, we can either apply directly the ratio test for numerical series (by fixing x and ignoring the vanishing terms) or else introduce a change of variable. Thus, by the introduction of a new variable \(X = {x}^{5}/7\), the given series becomes a geometric series ∑X k, which converges for | X |  < 1 and diverges for | X | ≥ 1. Consequently, the given series converges absolutely for \(\vert x\vert < \root{5}\of{7}\) and diverges for \(\vert x\vert \geq \root{5}\of{7}\). Therefore, the radius of convergence of the given series is \(R = \root{5}\of{7}\). ∙ 
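Alternatively, Theorem 8.23 applies directly. Writing the series as \(\sum\limits_{n=0}^{\infty }{c}_{n}{x}^{n}\) (our notation), where \({c}_{n} = {7}^{-n/5}\) if n is a multiple of 5 and c n  = 0 otherwise, we have

$$\vert {c}_{5k}{\vert }^{1/(5k)} ={ \left ({7}^{-k}\right )}^{1/(5k)} = {7}^{-1/5}\quad \mbox{ for all $k$},$$

while all the remaining coefficients vanish, so that \({limsup}_{n\rightarrow \infty }\vert {c}_{n}{\vert }^{1/n} = {7}^{-1/5}\) and the Cauchy–Hadamard formula again gives \(R = \root{5}\of{7}\).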

Example 8.29.

Discuss the convergence of \(\sum\limits_{n=1}^{\infty }({x}^{5n}/n{2}^{n}).\)

solution.

For x≠0, let \({a}_{n} = {x}^{5n}/{2}^{n}n.\) Then

$$\left \vert \frac{{a}_{n+1}} {{a}_{n}} \right \vert = \left \vert \frac{{x}^{5(n+1)}} {{2}^{n+1}(n + 1)} \cdot \frac{{2}^{n}n} {{x}^{5n}} \right \vert = \frac{\vert x{\vert }^{5}} {2} \left ( \frac{n} {n + 1}\right ) \rightarrow \frac{\vert x{\vert }^{5}} {2} \quad \mbox{ as }n \rightarrow \infty,$$

so that by the ratio test, the series converges for | x | 5 < 2 (including the trivial case x = 0) and diverges for | x | 5 > 2. Let \(x = {2}^{1/5}\). Then \({a}_{n} = 1/n\), and the series is \(\sum\limits_{n=1}^{\infty }(1/n)\), which is divergent. If \(x = -{2}^{1/5}\), then \({a}_{n} = {(-1)}^{n}/n\), which gives the convergent series \(\sum\limits_{n=1}^{\infty }({(-1)}^{n}/n)\). We conclude that the given series converges for \(-{2}^{1/5} \leq x < {2}^{1/5}\) and diverges for all other values of x. For each \(-{2}^{1/5} \leq x < {2}^{1/5}\), we readily obtain that (see, for instance, Examples 8.46)

$$\sum\limits_{n=1}^{\infty }\frac{1} {n}{\left (\frac{{x}^{5}} {2} \right )}^{n} = -\log (1 - {x}^{5}/2).$$

 ∙ 

We wish to know whether a convergent power series \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) is differentiable. If term-by-term differentiation is allowed, then we get a new series \(g(x) =\sum\limits_{k=1}^{\infty }k{a}_{k}{x}^{k-1}\), which we call a derived series. It is natural to ask whether these two series have the same radius of convergence. Also, we ask whether f is differentiable, and if so, whether g(x) = f′(x). We answer these questions below. Later, as an alternative proof, we also obtain our result as a special case of a more general result (see Corollary ??).

Lemma 8.30.

Suppose that {a n } n≥1 is a bounded sequence of real numbers. Then we have

  1. (a)

    {n 1∕n a n } is bounded;

  2. (b)

    \({limsup}_{n\rightarrow \infty }{a}_{n} ={ limsup}_{n\rightarrow \infty }{n}^{1/n}{a}_{n}\).

Proof.

  1. (a)

    In Example 2.18, we have shown that n 1 ∕ n → 1, and so (a) follows easily.

  2. (b)

    Let \(a = limsup{a}_{n},\ b = limsup{n}^{1/n}{a}_{n}\), and let \(\{{a}_{{n}_{k}}\}\) be a subsequence of {a n } converging to a. Using the properties of the limit superior, we easily have

    $${n}_{k}^{1/{n}_{k} }{a}_{{n}_{k}} \rightarrow a\quad \mbox{ and}\quad a \leq b.$$

    Now we suppose that (see Theorem ??) \(\{{m}_{k}^{1/{m}_{k}}{a}_{{ m}_{k}}\}\) converges to b. This implies that \(\{{a}_{{m}_{k}}\}\) converges to b, because \({m}_{k}^{1/{m}_{k}} \rightarrow 1\) as k → ∞. Consequently,

    $$b \leq limsup{a}_{n},\quad \mbox{ i.e., $b \leq a$.}$$

    We obtain a = b, as desired. □ 

Lemma 8.31.

The two power series ∑ k=0 a k x k and ∑ k=1 ka k x k have the same radius of convergence.

Proof.

By Lemma 8.30, we have

$${limsup}_{n\rightarrow \infty }\vert n{a}_{n}{\vert }^{1/n} {=\lim }_{ n\rightarrow \infty }{n}^{1/n}{ limsup}_{ n\rightarrow \infty }\vert {a}_{n}{\vert }^{1/n} ={ limsup}_{ n\rightarrow \infty }\vert {a}_{n}{\vert }^{1/n}.$$

The result now follows from Theorem 8.23. □ 

It might be of interest to have a direct proof of Lemma 8.31 without using the definition of limit superior, which is needed in order to utilize Theorem 8.23 as well as the fact that \(\lim {n}^{1/n} = 1\). Let R and R′ be the radii of convergence of ∑ k = 0 a k x k and \(\sum\limits_{k=1}^{\infty }k{a}_{k}{x}^{k-1}\), respectively. Fix an arbitrary x with 0 <  | x |  < R. Choose x 0 such that | x |  <  | x 0 |  < R. Now, ∑ k = 0 a k x 0 k and \(\sum\limits_{k=1}^{\infty }{a}_{k}{x}_{0}^{k-1}\) both converge absolutely. Thus, {a k x 0 k − 1} is bounded. This means that there exists an M such that | a k x 0 k − 1 | ≤ M for all k ≥ 1. Consequently,

$$\left \vert k{a}_{k}{x}^{k-1}\right \vert = k\left \vert {a}_{ k}{x}_{0}^{k-1}\right \vert {\left \vert \frac{x} {{x}_{0}}\right \vert }^{k-1} \leq Mk{r}^{k-1}\quad \mbox{ for }k \geq 1\quad (r = \vert x/{x}_{ 0}\vert ).$$

Since 0 < r < 1, the ratio test shows that \(\sum\limits_{k=1}^{\infty }k{r}^{k-1}\) converges. But then by the comparison test, \(\sum\limits_{k=1}^{\infty }k{a}_{k}{x}^{k-1}\) converges absolutely for | x |  <  | x 0 | . Since x 0 is arbitrary, we conclude that the derived series converges absolutely for all | x |  < R. Thus, R ≤ R′.

To obtain the reverse inequality, we fix x with | x |  < R′ and observe that

$$\vert {a}_{k}{x}^{k-1}\vert \leq \vert k{a}_{ k}{x}^{k-1}\vert \quad \mbox{ for all }k \geq 1,$$

so that if ∑ | ka k x k | converges at x≠0 ( | x |  < R′), then ∑ | a k x k − 1 | , and hence ∑ | a k x k | , converges at x. This shows that R ≥ R′. Consequently, R = R′, and the proof is complete.

Caution: Lemma 8.31 does not say that ∑ k = 0 a k x k and \(\sum\limits_{k=1}^{\infty }k{a}_{k}{x}^{k-1}\) have the same interval of convergence (see Questions 8.50(8)), although they have the same radius of convergence.
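For instance, the series \(\sum\limits_{k=1}^{\infty }({x}^{k}/{k}^{2})\) converges at x = 1, whereas its derived series \(\sum\limits_{k=1}^{\infty }({x}^{k-1}/k)\) reduces at x = 1 to the harmonic series and diverges; both series nevertheless have radius of convergence R = 1.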

We can repeat the differentiation process and obtain the following theorem.

Theorem 8.32.

A power series ∑ k≥0 a k x k and the n-fold derived series defined by \(\sum\limits_{k\geq n}k(k - 1)\,\cdots \,(k - n + 1){a}_{k}{x}^{k-n}\) have the same radius of convergence.

Next we present the following result.

Theorem 8.33 (Term-by-term differentiation in power series). 

If ∑ k≥0 a k x k has radius of convergence R > 0, then f(x) = ∑ k≥0 a k x k is differentiable in |x| < R and

$$f\prime(x) =\sum\limits_{k\geq 1}k{a}_{k}{x}^{k-1}\quad (\vert x\vert < R).$$
(8.3)

Moreover, f (n) (x) exists for every n ≥ 1 and every x with |x| < R, and

$${f}^{(n)}(x) =\sum\limits_{k=n}^{\infty }k(k - 1)\cdots (k - n + 1){a}_{ k}{x}^{k-n}\ \ (\vert x\vert < R).$$
(8.4)

The coefficients a n are uniquely determined, and \({a}_{n} = {f}^{(n)}(0)/n!\).

Proof.

Let f(x) =  ∑ k ≥ 0 a k x k have radius of convergence R. We have to prove the existence of f′(x) in | x |  < R and that f′ has the stated form. By Theorem 8.32 with n = 1, the derived series ∑ k ≥ 1 ka k x k − 1 converges for | x |  < R and defines a function, say g(x), in | x |  < R. We show that

$$f\prime(x) {=\lim }_{h\rightarrow 0}\frac{f(x + h) - f(x)} {h} = g(x)\quad \mbox{ for all $x \in (-R,R).$}$$

Let x ∈ ( − R, R) be fixed. Then choose a positive r ( < R) such that | x |  < r, e.g., \(r = (R + \vert x\vert )/2\). Also, let \(h \in \mathbb{R}\) with \(0 < \vert h\vert < (R -\vert x\vert )/2\). We have

$$\vert x + h\vert \leq \vert x\vert + \vert h\vert < \vert x\vert + \frac{R -\vert x\vert } {2} = \frac{\vert x\vert + R} {2} = r,$$

and for all nonzero h such that \(0 < \vert h\vert < (R -\vert x\vert )/2\), we consider

$$\frac{f(x + h) - f(x)} {h} - g(x) =\sum\limits_{k\geq 2}{a}_{k}\left (\frac{{(x + h)}^{k} - {x}^{k}} {h} - k{x}^{k-1}\right ),$$
(8.5)

where x and x + h are now such that max{ | x | , | x + h | } ≤ r < R. As an application of Taylor’s theorem (which we prove for convenience at a later stage) on the interval with endpoints x and x + h, we get

$${(x + h)}^{k} = {x}^{k} + k{x}^{k-1}h + \frac{k(k - 1)} {2} {c}_{k}^{k-2}{h}^{2},$$

where c k is some number between x and x + h. (This may also be verified directly.) Note also that | c k  | ≤ r, and so

$$\left \vert \frac{{(x + h)}^{k} - {x}^{k}} {h} - k{x}^{k-1}\right \vert \leq \vert h\vert \,\frac{k(k - 1)} {2} {r}^{k-2}.$$

So we must show that as h → 0,

$$\left \vert \frac{f(x + h) - f(x)} {h} - g(x)\right \vert = \left \vert \sum\limits_{k\geq 2}{a}_{k}\left (\frac{{(x + h)}^{k} - {x}^{k}} {h} - k{x}^{k-1}\right )\right \vert \rightarrow 0.$$

Since the derived series \(\sum\limits_{k\geq 2}k(k - 1){a}_{k}{x}^{k-2}\) of ∑ k ≥ 1 ka k x k − 1 is also convergent for | x |  < R, we conclude that it is absolutely convergent for | x | ≤ r ( < R). Using this and the triangle inequality, we see that as h → 0,

$$\left \vert \sum\limits_{k\geq 2}{a}_{k}\left (\frac{{(x + h)}^{k} - {x}^{k}} {h} - k{x}^{k-1}\right )\right \vert \leq \frac{\vert h\vert } {2} \,\sum\limits_{k\geq 2}\vert {a}_{k}\vert k(k - 1){r}^{k-2} \rightarrow 0.$$

Consequently, by (8.5), it follows that f′(x) exists and equals g(x). Since x is arbitrary, this holds at any interior point in | x |  < R.

A repeated application of this argument shows that all the derivatives f′, f″, …, f (n), … exist in | x |  < R, and (8.4) holds. The substitution x = 0 in (8.4) yields f (n)(0) = n! a n , as required. □ 

The theorem just proved shows that inside the interval of convergence (not necessarily at the endpoints \(x = R,-R\)), every power series can be differentiated term by term, and the resulting derived series will converge to the derivative of the limit function of the original series.

For example, the geometric series \({(1 - x)}^{-1} =\sum\limits_{k\geq 0}{x}^{k}\), which converges for | x |  < 1, after n-fold differentiation yields

$$\frac{1} {{(1 - x)}^{n+1}} =\sum\limits_{k\geq n}\binom{k}{n}{x}^{k-n} =\sum\limits_{m\geq 0}\frac{(m + n)!} {n!\,m!} {x}^{m}\quad \mbox{ for $\vert x\vert < 1.$}$$

In particular,

$$\frac{1} {{(1 - x)}^{2}} =\sum\limits_{k\geq 1}k{x}^{k-1}\quad \mbox{ and}\quad \frac{2} {{(1 - x)}^{3}} =\sum\limits_{k=2}^{\infty }k(k - 1){x}^{k-2}\quad \mbox{ for $\vert x\vert < 1$.}$$

Consequently, expressions such as the one above may be used to evaluate sums such as

$$\sum\limits_{k=1}^{\infty }{(-1)}^{k} \frac{k} {{3}^{k}},\quad \sum\limits_{k=1}^{\infty } \frac{k} {{3}^{k}},\quad \sum\limits_{k=2}^{\infty }\frac{k(k - 1){(-1)}^{k}} {{3}^{k}}.$$
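For instance, multiplying the first of these expansions by x and putting \(x = \pm 1/3\) gives

$$\sum\limits_{k=1}^{\infty } \frac{k} {{3}^{k}} = \frac{1/3} {{(1 - 1/3)}^{2}} = \frac{3} {4}\quad \mbox{ and}\quad \sum\limits_{k=1}^{\infty }{(-1)}^{k} \frac{k} {{3}^{k}} = \frac{-1/3} {{(1 + 1/3)}^{2}} = -\frac{3} {16},$$

while multiplying the second by \({x}^{2}\) and putting \(x = -1/3\) gives

$$\sum\limits_{k=2}^{\infty }\frac{k(k - 1){(-1)}^{k}} {{3}^{k}} = \frac{2{(1/3)}^{2}} {{(1 + 1/3)}^{3}} = \frac{3} {32}.$$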

Finally, we remark that the following corollary shows that there is one and only one Taylor series for a function f, meaning that whatever method one uses, one obtains the same Taylor coefficients.

Corollary 8.34 (Uniqueness of the coefficients). 

Let R > 0 be the radius of convergence of \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) and let \(g(x) =\sum\limits_{k=0}^{\infty }{b}_{k}{x}^{k}\) be such that

$$\sum\limits_{k=0}^{\infty }{a}_{ k}{x}^{k} =\sum\limits_{k=0}^{\infty }{b}_{ k}{x}^{k}\quad \mbox{ for $\vert x\vert < R$}.$$

Then a k = b k for each k ≥ 0.

Proof.

The proof follows easily from Theorem 8.33. □ 

The conclusion of Corollary 8.34 can be deduced from a weaker hypothesis (see Theorem 8.37), but a proof requires some preparation.

8.2.4 Uniqueness Theorem for Power Series

Suppose that \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) converges for | x |  < R and that there exists a sequence {x n } n ≥ 1 of distinct points converging to zero and at each of these points

$$f({x}_{n}) =\sum\limits_{k=0}^{\infty }{a}_{ k}{x}_{n}^{k} = 0\quad \mbox{ for each $n \geq 1$.}$$

Then by the continuity of f(x) at the origin, \({a}_{0} = f(0) = 0\). Thus f takes the form f(x) = xg(x), where

$$g(x) =\sum\limits_{k=1}^{\infty }{a}_{ k}{x}^{k-1},$$

which is also a convergent series in | x |  < R. Because f(x n ) = 0 for all n ≥ 1, it follows that g(x n ) = 0. Continuity of g at the origin implies that a 1 = 0. Continuing this process, we obtain the following.

Lemma 8.35.

Suppose that the power series \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) converges for |x| < R. If there exists a sequence {x n } n≥1 of distinct points converging to zero such that f(x n ) = 0 for all n ≥ 1, then a k = 0 for all k ≥ 0 and

$$f(x) =\sum\limits_{k=0}^{\infty }{a}_{ k}{x}^{k} = 0\quad \mbox{ in $\vert x\vert < R$.}$$

In particular, this lemma implies that if there exists a neighborhood D 0 of zero in | x |  < R for which

$$f(x) =\sum\limits_{k=0}^{\infty }{a}_{ k}{x}^{k} = 0\quad \mbox{ in ${D}_{ 0}$},$$

then a k  = 0 for all k ≥ 0. It is natural to look for a similar result if the sequence {x n } converges to a point other than the center (origin). In order to solve this problem, we need to prove the following result.

Lemma 8.36.

Suppose that the series \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) converges for |x| < R and a is a point such that |a| < R. Then the Taylor series expansion of f about x = a is given by

$$f(x) =\sum\limits_{k=0}^{\infty }\frac{{f}^{(k)}(a)} {k!} {(x - a)}^{k},$$

which converges at least for \(\vert x - a\vert < R -\vert a\vert \).

Proof.

We write

$${x}^{k} = {(x - a + a)}^{k} =\sum\limits_{m=0}^{k}\binom{k}{m}{a}^{k-m}{(x - a)}^{m},$$

so that

$$\begin{array}{rcl} \sum\limits_{k=0}^{\infty }{a}_{ k}{x}^{k}& =& \sum\limits_{k=0}^{\infty }{a}_{ k}\sum\limits_{m=0}^{k}\binom{k}{m}{a}^{k-m}{(x - a)}^{m} \\ & =& \sum\limits_{m=0}^{\infty }\left (\sum\limits_{k=m}^{\infty }\binom{k}{m}{a}_{k}{a}^{k-m}\right ){(x - a)}^{m}.\end{array}$$

This implies that

$$f(x) =\sum\limits_{m=0}^{\infty }\frac{{f}^{(m)}(a)} {m!} {(x - a)}^{m},$$
(8.6)

as desired. We need justifications for two steps. The interchange of the order of summation is justified, since

$$\sum\limits_{k=0}^{\infty }\vert {a}_{ k}\vert \sum\limits_{m=0}^{k}\binom{k}{m}\vert a{\vert }^{k-m}\vert x - a{\vert }^{m} =\sum\limits_{k=0}^{\infty }\vert {a}_{ k}\vert {(\vert a\vert + \vert x - a\vert )}^{k},$$

and by hypothesis ∑ k = 0 a k x k converges absolutely for | x |  < R, so

$$\sum\limits_{k=0}^{\infty }\vert {a}_{ k}\vert {(\vert x - a\vert + \vert a\vert )}^{k}$$

converges at least for \(\vert x - a\vert < R -\vert a\vert \). Also, by Theorem 8.33,

$$\frac{{f}^{(m)}(x)} {m!} =\sum\limits_{k=m}^{\infty }\binom{k}{m}{a}_{k}{x}^{k-m}\quad \mbox{ in $\vert x\vert < R$.}$$

In particular,

$$\frac{{f}^{(m)}(a)} {m!} =\sum\limits_{k=m}^{\infty }\binom{k}{m}{a}_{k}{a}^{k-m},$$

which proves (8.6). □ 

Theorem 8.37 (Uniqueness/identity theorem for power series). 

Suppose that the series \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) converges for |x| < R. Suppose that S, the set of all x for which f(x) = 0, has a limit point in |x| < R. Then a k = 0 for all k ≥ 0.

Proof.

Let I = { x :  | x |  < R} and \(S =\{ x \in I :\, f(x) = 0\}\). We split I into two sets:

$$A =\{ x \in I :\, \mbox{ $x$ is a limit point of $S$}\}\quad \mbox{ and}\quad B =\{ x \in I :\, x\not\in A\} = I\setminus A.$$

Then I = A ∪ B and A ∩ B = ∅. Clearly, B is open. Next we show that A is open. By hypothesis, S has a limit point in I; call it a. Then a ∈ A, so A is nonempty. Since | a |  < R, by Lemma 8.36, f(x) can be expanded in a power series about a:

$$f(x) =\sum\limits_{k=0}^{\infty }{c}_{ k}{(x - a)}^{k}\quad \mbox{ for $\vert x - a\vert < R -\vert a\vert $}.$$

We claim that c k  = 0 for all k ≥ 0. Since a is a limit point of S, there exists a sequence of points x n in S, x n ≠ a, x n  → a, with f(x n ) = 0 for all n, so that

$${c}_{0} = f(a) {=\lim }_{n\rightarrow \infty }f({x}_{n}) = 0$$

(by the continuity of f). Thus, it suffices to prove that c k  = 0 for all k > 0. Suppose not. Then there would be a smallest positive integer m such that c m ≠0. Thus, f(x) has the form

$$f(x) = {(x - a)}^{m}g(x),\quad g(x) =\sum\limits_{k=m}^{\infty }{c}_{ k}{(x - a)}^{k-m}\quad \mbox{ for $\vert x - a\vert < R -\vert a\vert $.}$$

The continuity of g at a and g(a) = c m ≠0 imply the existence of a δ > 0 such that

$$g(x)\neq 0\quad \mbox{ for $0 < \vert x - a\vert < \delta (< R -\vert a\vert )$.}$$

We conclude that f(x)≠0 in 0 <  | x − a |  < δ. This contradicts the fact that a is a limit point of S. Consequently, c k  = 0 for all k ≥ 0, so that f(x) = 0 in a neighborhood of a. Hence, A is open.

Since A ∪ B = I is connected and nonempty, it cannot be written as a union of two nonempty disjoint open sets. Hence we must have either A = ∅ or B = ∅. But by the hypothesis, a ∈ A, and therefore B = ∅. Thus A = I, which gives that a k  = 0 for all k ≥ 0 and \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k} = 0\) for all | x |  < R, as desired. □ 

Corollary 8.38 (Identity/uniqueness theorem). 

Suppose that \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) and \(g(x) =\sum\limits_{k=0}^{\infty }{b}_{k}{x}^{k}\) converge for |x| < R. Set

$$S =\{ x :\, \vert x\vert < R\ \mbox{ with $f(x) = g(x)$}\}.$$

If S has a limit point in |x| < R, then a n = b n for all n ≥ 0; i.e., f(x) = g(x) for all |x| < R.

Proof.

Apply Theorem 8.37 to \(h(x) = f(x) - g(x) =\sum\limits_{k=0}^{\infty }({a}_{k} - {b}_{k}){x}^{k}\) in | x |  < R. □ 

8.2.5 Real Analytic Functions

Functions that are expressible as a convergent power series are of particular interest. For instance, consider \(f(x) = 1/(1 - x)\) on \(\mathbb{R}\setminus \{ 1\}\). For each a≠1, f(x) has a power series about a:

$$f(x) = \frac{1} {1 - a}\left ( \frac{1} {1 - (x - a)/(1 - a)}\right ) =\sum\limits_{k=0}^{\infty } \frac{{(x - a)}^{k}} {{(1 - a)}^{k+1}}\quad \mbox{ for }\vert x-a\vert < \vert 1-a\vert.$$

This suggests the following definition of a real analytic function:

Definition 8.39.

Let \(I \subset \mathbb{R}\) be open, \(f : I \rightarrow \mathbb{R}\), and a ∈ I. We say that f is real analytic at a if f can be represented by a power series about a that converges to f in a neighborhood of a. We say that f is real analytic on I if it is real analytic at each a ∈ I. That is, for every a ∈ I, there exist a number \({\delta }_{a} > 0\) and a sequence \(\{{a}_{k}\}\) of real numbers such that

$$f(x) =\sum\limits_{k=0}^{\infty }{a}_{ k}{(x - a)}^{k}\quad \mbox{ for every $x \in I$ with $\vert x - a\vert < {\delta }_{ a}$.}$$

In view of Theorem 8.33, a k in the above Taylor representation of f must be f (k)(a) ∕ k! . Thus, for f to be real analytic at x = a, it is necessary that f (k)(a) exist for each k. However, the converse is not necessarily true. In Example 8.48, we present an example of an infinitely differentiable function f on \(\mathbb{R}\) such that f (k)(0) exists for k ≥ 0, but the resulting Taylor series about x = 0 does not converge to f in any neighborhood of the origin. Thus, the function f in Example 8.48 is not real analytic.

Examples 8.40.

  1. 1.

    Every polynomial in x with real coefficients is real analytic in \(\mathbb{R}\).

  2. 2.

    The exponential function ex and the trigonometric functions sinx and cosx are all real analytic on \(\mathbb{R}\).

  3. 3.

    The function \(1/(1 + {x}^{2})\) is real analytic on \(\mathbb{R}\), whereas \(1/(1 - x)\) is real analytic only on \(\mathbb{R} \setminus \{ 1\}\).

  4. 4.

    If \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) has radius of convergence R > 0, then f is real analytic on | x |  < R (by Lemma 8.36 and Theorem 8.33).

  5. 5.

    The function | x | is not real analytic at x = 0 because f′(0) does not exist.

  6. 6.

    The function | x | 3 is not real analytic at x = 0 because, although f′(0) and f′′(0) exist, f′′′(0) and other higher-order derivatives at the origin do not exist. ∙ 

By Theorem 8.33, we have the following:

Theorem 8.41.

Real analytic functions are infinitely differentiable.

8.2.6 The Exponential Function

In Example 2.33, we defined

$${\mathrm{e}}^{x} {=\lim }_{ n\rightarrow \infty }{T}_{n}(x),\quad {T}_{n}(x) ={ \left (1 + \frac{x} {n}\right )}^{n} =\sum\limits_{k=0}^{n}\binom{n}{k}{\left (\frac{x} {n}\right )}^{k},\quad x > 0.$$

In Example 8.20, we showed that the series \(\sum\limits_{k=0}^{\infty }({x}^{k}/k!)\) converges for all \(x \in \mathbb{R}.\) We now show that

$${ \mathrm{e}}^{x} =\sum\limits_{k=0}^{\infty }\frac{{x}^{k}} {k!} \quad \mbox{ for all $x$},$$
(8.7)

using the former definition of \({\mathrm{e}}^{x}\). This can easily be done by the method of proof of Theorem ??.

Theorem 8.42.

For \(x \in \mathbb{R}\)

$${\mathrm{e}}^{x} =\sum\limits_{k=0}^{\infty }\frac{{x}^{k}} {k!} {=\lim }_{n\rightarrow \infty }{\left (1 + \frac{x} {n}\right )}^{n}.$$

Proof.

Let us first supply a proof for x > 0. Let \({S}_{n}(x) =\sum\limits_{k=0}^{n}({x}^{k}/k!)\) and T n (x) be as above. The proof relies on the following simple observation:

$$\binom{n}{k}\frac{{x}^{k}} {{n}^{k}} = \frac{n!} {(n - k)!\,{n}^{k}} \frac{{x}^{k}} {k!} = \Big{(}1 - \frac{1} {n}\Big{)}\Big{(}1 - \frac{2} {n}\Big{)}\cdots \Big{(}1 -\frac{k - 1} {n} \Big{)}\frac{{x}^{k}} {k!} \leq \frac{{x}^{k}} {k!},$$

which gives

$$0 < \Big{(}1 + \frac{x} {n}{\Big{)}}^{n} \leq {S}_{ n}(x).$$

Thus, we have

$${ \lim }_{n\rightarrow \infty }\Big{(}1 + \frac{x} {n}{\Big{)}}^{n} {\leq \lim }_{ n\rightarrow \infty }{S}_{n}(x),\quad \mbox{ i.e., }{\mathrm{e}}^{x} {\leq \lim }_{ n\rightarrow \infty }{S}_{n}(x).$$
(8.8)

Moreover, for n ≥ m,

$$\begin{array}{rcl}{ T}_{n}(x)& \geq & 1 + n\Big{(}\frac{x} {n}\Big{)} + \frac{n(n - 1)} {2!} \Big{(}\frac{x} {n}{\Big{)}}^{2} + \cdots + \frac{n(n - 1)\cdots (n - (m - 1))} {m!} \Big{(}\frac{x} {n}{\Big{)}}^{m} \\ & =& 1 + x + \Big{(}1 - \frac{1} {n}\Big{)}\frac{{x}^{2}} {2!} + \cdots + \Big{(}1 - \frac{1} {n}\Big{)}\cdots \Big{(}1 -\frac{m - 1} {n} \Big{)}\frac{{x}^{m}} {m!} \end{array}$$

Let \(n \rightarrow \infty \), keeping m fixed, to obtain

$${\lim }_{n\rightarrow \infty }{T}_{n}(x) \geq {S}_{m}(x).$$

Because \(\{{S}_{m}(x)\}\) is an increasing sequence for each fixed x > 0, letting \(m \rightarrow \infty \) in this inequality, we finally get

$${ \lim }_{n\rightarrow \infty }{T}_{n}(x) {\geq \lim }_{m\rightarrow \infty }{S}_{m}(x).$$
(8.9)

Equations (8.8) and (8.9) show that the theorem holds for x > 0. To present a proof for the case x < 0, we first claim that

$$\left (\sum\limits_{k=0}^{\infty }\frac{{x}^{k}} {k!} \right )\left (\sum\limits_{k=0}^{\infty }\frac{{(-x)}^{k}} {k!} \right ) = 1.$$

By the Cauchy product rule for series, we can write the left-hand side of the last expression as \(\sum\limits_{n=0}^{\infty }{c}_{n}\), where \({c}_{0} = 1\) and

$${c}_{n} =\sum\limits_{k=0}^{n}\frac{{x}^{k}} {k!} \frac{{(-x)}^{n-k}} {(n - k)!} = \frac{1} {n!}\sum\limits_{k=0}^{n} \frac{n!} {k!(n - k)!}{x}^{k}{(-x)}^{n-k} = \frac{{(x - x)}^{n}} {n!} = 0$$

for each n ≥ 1. The claim follows. Thus, for x < 0 (so that − x > 0), we have

$$\sum\limits_{k=0}^{\infty }\frac{{x}^{k}} {k!} = \frac{1} {\sum\limits_{k=0}^{\infty }\frac{{(-x)}^{k}} {k!} } = \frac{1} {{\lim }_{n\rightarrow \infty }{T}_{n}(-x)} = \frac{1} {{\mathrm{e}}^{-x}} ={ \mathrm{e}}^{x}.$$

The proof of the theorem is complete. □ 
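
A short numerical sketch (an added illustration, not from the text; the helper names and sample values are arbitrary) compares the two descriptions of \({\mathrm{e}}^{x}\) in Theorem 8.42: both \({T}_{n}(x)\) and the partial sums \({S}_{n}(x)\) approach \({\mathrm{e}}^{x}\), the partial sums much faster.

```python
import math

def T(x, n):
    """T_n(x) = (1 + x/n)^n."""
    return (1.0 + x / n) ** n

def S(x, n):
    """Partial sum S_n(x) = sum_{k=0}^{n} x^k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 1.5
for n in (5, 20, 100):
    print(n, T(x, n), S(x, n), math.exp(x))
# Both columns tend to e^x; S_n(x) is already accurate to roughly machine precision at n = 20.
```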

Let us now write down some basic properties of the exponential function.

  1. (a)

    Because the series (8.7) that represents the exponential function \({\mathrm{e}}^{x}\) converges absolutely for all x, Theorem ?? on the Cauchy product is applicable with \({a}_{k} = {x}^{k}/k!\) and \({b}_{k} = {y}^{k}/k!\). This gives

    $${c}_{n} =\sum\limits_{k=0}^{n}{a}_{ k}{b}_{n-k} =\sum\limits_{k=0}^{n}\frac{{x}^{k}} {k!} \frac{{y}^{n-k}} {(n - k)!} = \frac{1} {n!}\sum\limits_{k=0}^{n}\binom{n}{k}{x}^{k}{y}^{n-k} = \frac{{(x + y)}^{n}} {n!},$$

    and so we obtain the fundamental property of the exponential function—called the addition formula (a numerical check of this Cauchy-product computation is sketched at the end of this subsection):

    $${\mathrm{e}}^{x+y} ={ \mathrm{e}}^{x}{\mathrm{e}}^{y}\quad \mbox{ for all $x,y$.}$$
  2. (b)

    By Theorem 8.33, a power series can be differentiated term by term, and so we have

    $$\frac{\mathrm{d}} {\mathrm{d}x}{\mathrm{e}}^{x} ={ \mathrm{e}}^{x}\quad \mbox{ for all $x$.}$$

    In particular, the addition formula gives

    $${\mathrm{e}}^{x}{\mathrm{e}}^{-x} ={ \mathrm{e}}^{x-x} ={ \mathrm{e}}^{0} = 1,$$

    and so \({\mathrm{e}}^{x}\neq 0\) for all x.

  3. (c)

    By (8.7), \({\mathrm{e}}^{x} > 0\) for x ≥ 0, and the last relation in (b) shows that \({\mathrm{e}}^{x} > 0\) for all real x. Also, (8.7) gives \({\mathrm{e}}^{x} > 1 + x\) for x > 0, so that

    $${\mathrm{e}}^{x} \rightarrow \infty \quad \mbox{ as $x \rightarrow \infty $}\quad \mbox{ and}\quad {\mathrm{e}}^{-x} = \frac{1} {{\mathrm{e}}^{x}} \rightarrow 0\quad \mbox{ as $x \rightarrow \infty $.}$$

    Again, (8.7) gives

    $$0 < x < y \Rightarrow {\mathrm{e}}^{x} <{ \mathrm{e}}^{y}\ (\;\Longleftrightarrow\;{\mathrm{e}}^{-y} <{ \mathrm{e}}^{-x}),$$

    or we may use the fact that \({({\mathrm{e}}^{x})}\prime ={ \mathrm{e}}^{x} > 0\) for all \(x \in \mathbb{R}\). Thus \({\mathrm{e}}^{x}\) is a strictly increasing continuous function on the whole real axis, and the image of \(\mathbb{R}\) under \({\mathrm{e}}^{x}\) is \((0,\infty )\):

    $$\exp (\mathbb{R}) = (0,\infty ).$$

    Thus, \({\mathrm{e}}^{x}\) is a bijection of \(\mathbb{R}\) onto \((0,\infty )\).

These facts help us to obtain the graph of \({\mathrm{e}}^{x},\ x \in \mathbb{R}\) (see Figure 8.5).

Fig. 8.5

Graphs of the exponential and logarithmic functions.
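
The following minimal sketch (an added check, not part of the text; the sample values are arbitrary) verifies the Cauchy-product computation from (a) above: each coefficient \({c}_{n}\) collapses to \({(x + y)}^{n}/n!\), which is exactly what makes the addition formula work.

```python
import math

def c(n, x, y):
    """Cauchy-product coefficient c_n = sum_{k=0}^{n} (x^k/k!) * (y^{n-k}/(n-k)!)."""
    return sum(x ** k / math.factorial(k) * y ** (n - k) / math.factorial(n - k)
               for k in range(n + 1))

x, y = 0.7, -1.3
for n in range(6):
    print(n, c(n, x, y), (x + y) ** n / math.factorial(n))  # the two values agree
# Summing c_n over n therefore recovers e^{x+y} = e^x * e^y.
```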

8.2.7 Taylor’s Theorem

We state and prove the single-variable version of Taylor’s theorem for real-valued functions. We see that this is a generalization of the first mean value theorem. The several-variable version of Taylor’s theorem will be proved in the author’s book [7].

Theorem 8.43 (Taylor’s theorem). 

Suppose that \(f :\, (\alpha,\beta ) \subseteq \mathbb{R} \rightarrow \mathbb{R}\) is such that

  1. (i)

    f,f′,…,f (n) are all continuous on [a,b] ⊂ (α,β)

  2. (ii)

    f (n+1) exists on (a,b).

Then there exists a point c in (a,b) such that

$$f(b) =\sum\limits_{k=0}^{n}\frac{{f}^{(k)}(a)} {k!} {(b - a)}^{k} + \frac{{f}^{(n+1)}(c)} {(n + 1)!} {(b - a)}^{n+1}.$$

Proof.

We want to show that

$$\left [f(a) +\sum\limits_{k=1}^{n}\frac{{f}^{(k)}(a)} {k!} {(b - a)}^{k} + \frac{{f}^{(n+1)}(c)} {(n + 1)!} {(b - a)}^{n+1}\right ] - f(b) = 0.$$

Fix the interval [a, b] and introduce a new function ϕ by

$$\phi (x) = \left [f(x) +\sum\limits_{k=1}^{n}\frac{{f}^{(k)}(x)} {k!} {(b - x)}^{k} + \frac{M} {(n + 1)!}{(b - x)}^{n+1}\right ] - f(b),$$
(8.10)

where M has been chosen in such a way that ϕ(a) = 0. Now

  • ϕ is continuous on [a, b], by (i);

  • ϕ is differentiable on (a, b), by (i) and (ii);

  • \(\phi (a) = 0 = \phi (b)\).

By Rolle’s theorem, it follows that there exists a number c ∈ (a, b) such that \(\phi \prime(c) = 0\). Since

$$\begin{array}{rcl} \phi \prime(x)& =& f\prime(x) +\sum\limits_{k=1}^{n}\left (\frac{{f}^{(k+1)}(x)} {k!} {(b - x)}^{k} - \frac{{f}^{(k)}(x)} {(k - 1)!}{(b - x)}^{k-1}\right ) -\frac{M} {n!} {(b - x)}^{n} \\ & =& \frac{{f}^{(n+1)}(x)} {n!} {(b - x)}^{n} -\frac{M} {n!} {(b - x)}^{n}, \\ \end{array}$$

\(\phi \prime(c) = 0\) implies that \(M = {f}^{(n+1)}(c)\). When this value is substituted into (8.10), the condition ϕ(a) = 0 yields the desired formula. □ 

In particular, a small change in notation gives the following:

Theorem 8.44.

If f is (n + 1)-times differentiable on an open interval containing [a,x], then there exists some c between a and x such that

$$f(x) = {S}_{n}(x) + {R}_{n}(x),$$

where

$${S}_{n}(x) =\sum\limits_{k=0}^{n}{f}^{(k)}(a)\frac{{(x - a)}^{k}} {k!} \quad \mbox{ and}\quad {R}_{n}(x) = {f}^{(n+1)}(c)\frac{{(x - a)}^{n+1}} {(n + 1)!}.$$

We recall the convention that f (0) (x) = f(x).

Here the polynomial S n (x) is called the nth-degree Taylor polynomial (or approximation) for f at a, while the remainder term R n (x) is usually called the Lagrange form of the remainder, or sometimes the error term. There are several other forms of the remainder term, which have some advantages in some situations, but the Lagrange form is the simplest. The integral form of the remainder term is stated in Exercise 8.51(20).
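
The following minimal sketch (an added illustration; the helper names are not from the text) evaluates \({S}_{n}(x)\) from a list of derivative values, together with the Lagrange bound \(M\vert x - a{\vert }^{n+1}/(n + 1)!\) valid when \(\vert {f}^{(n+1)}\vert \leq M\) between a and x; it is run on \(f(x) ={ \mathrm{e}}^{x}\), a = 0, n = 2, x = 0.1, anticipating the estimate worked out in Examples 8.46(2) below.

```python
import math

def taylor_poly(derivs_at_a, a, x):
    """S_n(x) = sum_{k=0}^{n} f^(k)(a) (x - a)^k / k!, given [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d * (x - a) ** k / math.factorial(k)
               for k, d in enumerate(derivs_at_a))

def lagrange_bound(M, a, x, n):
    """Bound |R_n(x)| <= M |x - a|^{n+1} / (n + 1)! when |f^{(n+1)}| <= M between a and x."""
    return M * abs(x - a) ** (n + 1) / math.factorial(n + 1)

# f = exp, a = 0, n = 2, x = 0.1: every derivative of exp at 0 equals 1.
approx = taylor_poly([1.0, 1.0, 1.0], 0.0, 0.1)     # 1.105
bound = lagrange_bound(math.exp(0.1), 0.0, 0.1, 2)  # about 1.84e-4
print(approx, abs(math.exp(0.1) - approx), bound)   # the actual error lies within the bound
```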

We remark that if n = 0, then Taylor’s expansion is just the one-dimensional mean value theorem. Note that \({S}_{n}(x) \rightarrow f(x)\) if and only if \({R}_{n}(x) \rightarrow 0\). Moreover, Theorem 8.44 gives the following.

Corollary 8.45 (Taylor series). 

If f has derivatives of all orders on an open interval containing points a and x, and if R n (x) → 0 as n →∞, then we have

$$f(x) =\sum\limits_{k=0}^{\infty }\frac{{f}^{(k)}(a)} {k!} {(x - a)}^{k}.$$

Unfortunately, it is not always the case that \({R}_{n}(x) \rightarrow 0\) as \(n \rightarrow \infty \) even though f has derivatives of all orders (see Example 8.48). We call f the sum function of the corresponding power series, namely, the Taylor series. As remarked earlier, the series for the case a = 0 is often called a Maclaurin series. As we have already seen (Theorem 8.33), every convergent power series represents an infinitely differentiable function, which is a converse of this corollary.

Examples 8.46.

Using familiar differential properties, we can illustrate Taylor’s theorem with some standard examples. Here is a list of a few well-known Maclaurin series expansions:

  1. 1.

    Consider f(x) = sinx for \(x \in \mathbb{R}\). Then \(f\prime(x) =\cos x =\sin (\pi /2 + x)\),

    $$f\prime\prime(x) =\cos (\pi /2 + x) =\sin (\pi + x),\quad f\prime\prime\prime(x) =\cos (\pi + x) =\sin (3\pi /2 + x).$$

    More generally, \({f}^{(n)}(x) =\sin (n\pi /2 + x)\), and so

    $${f}^{(n)}(0) =\sin (n\pi /2) = \left \{\begin{array}{ll} 0 &\mbox{ if $n$ is even,} \\ {(-1)}^{k-1} & \mbox{ if $n$ is odd with $n = 2k - 1$.}\end{array} \right.$$

    Taylor’s theorem applied for | x | ≤ r gives

    $$\vert {R}_{n}(x)\vert = \left \vert \frac{\sin ((n + 1)\pi /2 + c)} {(n + 1)!} {x}^{n+1}\right \vert \leq \frac{\vert x{\vert }^{n+1}} {(n + 1)!} \leq \frac{{r}^{n+1}} {(n + 1)!} =: {a}_{n+1}$$

    for each n and for every r > 0 with | x | ≤ r. Note that

    $$\frac{{a}_{n+1}} {{a}_{n}} = \frac{r} {n + 1} \rightarrow 0\quad \mbox{ as $n \rightarrow \infty $},$$

    and so \({a}_{n} \rightarrow 0\) as \(n \rightarrow \infty \). Thus, \({R}_{n}(x) \rightarrow 0\) as \(n \rightarrow \infty \) for each x with | x | ≤ r, and since r > 0 was arbitrary, this holds for every real x. This gives

    $$\sin x =\sum\limits_{k=0}^{\infty }{(-1)}^{k} \frac{{x}^{2k+1}} {(2k + 1)!}\quad \mbox{ for all $x \in \mathbb{R}$}.$$

    Differentiating with respect to x, or directly, we see that

    $$\cos x =\sum\limits_{k=0}^{\infty }{(-1)}^{k} \frac{{x}^{2k}} {(2k)!}\quad \mbox{ for all $x \in \mathbb{R}$}.$$

    Finally, we remark that the Taylor polynomials for f(x) = sinx at 0 are

    $${S}_{1}(x) = {S}_{2}(x) = x,\ {S}_{3}(x) = {S}_{4}(x) = x -\frac{{x}^{3}} {3!},\ {S}_{5}(x) = {S}_{6}(x) = x -\frac{{x}^{3}} {3!} + \frac{{x}^{5}} {5!}$$

    and

    $${S}_{7}(x) = {S}_{8}(x) = x -\frac{{x}^{3}} {3!} + \frac{{x}^{5}} {5!} -\frac{{x}^{7}} {7!}.$$

    The graphs in Figures 8.6 and 8.7 illustrate how the approximation of \(f(x) =\sin x\) by \({S}_{n}(x)\) gets better as n increases; a numerical version of this comparison is sketched after these examples.

    Fig. 8.6

    The graphs of y = sinx, y = S 1(x); y = sinx, y = S 3(x) near x = 0.

    Fig. 8.7

    The graphs of y = sinx, y = S 5(x); y = sinx, y = S 7(x) near x = 0.

  2. 2.

    Consider f(x) = ex for \(x \in \mathbb{R}\). This function has the nice property that f (n)(x) = ex for all \(n \in {\mathbb{N}}_{0}\), so that f (n)(0) = 1 for all \(n \in {\mathbb{N}}_{0}\). Then for each x, | x | ≤ r, there exists c n between 0 and x such that

    $${\mathrm{e}}^{x} =\sum\limits_{k=0}^{n}\frac{{x}^{k}} {k!} + \frac{{\mathrm{e}}^{{c}_{n}}} {(n + 1)!}{x}^{n+1}\ \mbox{ for all $\vert x\vert \leq r$}.$$

    Since f is increasing on \(\mathbb{R}\), \({\mathrm{e}}^{-\vert x\vert }\leq {\mathrm{e}}^{c} \leq {\mathrm{e}}^{\vert x\vert }\) for all c between 0 and x. Thus, as in the previous case, it follows that for | x | ≤ r,

    $$\vert {R}_{n}(x)\vert = \left \vert \frac{{\mathrm{e}}^{{c}_{n}}} {(n + 1)!}{x}^{n+1}\right \vert \leq \frac{{\mathrm{e}}^{r}{r}^{n+1}} {(n + 1)!} \rightarrow 0\quad \mbox{ as $n \rightarrow \infty $},$$

    and since x was arbitrary, we obtain

    $${\mathrm{e}}^{x} =\sum\limits_{k=0}^{\infty }\frac{{x}^{k}} {k!} \quad \mbox{ for all $x \in \mathbb{R}$}.$$

    If one wishes to use Taylor’s formula to approximate \({\mathrm{e}}^{0.1}\) by a quadratic polynomial with an error estimate, one needs to consider

    $$\left \vert {\mathrm{e}}^{x} -\left (1 + x + \frac{{x}^{2}} {2!} \right )\right \vert < \frac{{\mathrm{e}}^{\vert c\vert }} {3!} \vert x{\vert }^{3}\quad (\vert c\vert < 0.1),$$

    so that for x = 0.1, this inequality gives

    $$\vert {\mathrm{e}}^{0.1} - (1.1 + 0.005)\vert < \frac{{\mathrm{e}}^{0.1}} {6} \left ( \frac{1} {1{0}^{3}}\right ),\quad \mbox{ i.e., }\vert {\mathrm{e}}^{0.1} - 1.105\vert < 0.000184,$$

    which gives a fairly decent approximation of \({\mathrm{e}}^{0.1}\). For large values of x, a good approximation of \({\mathrm{e}}^{x}\) requires a Taylor polynomial of large degree n. The graphs in Figure 8.8 illustrate how the approximation of \(f(x) ={ \mathrm{e}}^{x}\) (for x near the origin) by its Taylor polynomials gets better as n increases. The last series expansion for \({\mathrm{e}}^{x}\) also gives

    $${\mathrm{e}}^{-x} =\sum\limits_{k=0}^{\infty }\frac{{(-1)}^{k}} {k!} {x}^{k}\quad \mbox{ for all $x \in \mathbb{R}$},$$

    so that

    $$\sinh x = \frac{{\mathrm{e}}^{x} -{\mathrm{e}}^{-x}} {2} = x + \frac{{x}^{3}} {3!} + \frac{{x}^{5}} {5!} + \frac{{x}^{7}} {7!} + \cdots \quad \mbox{ for all $x \in \mathbb{R}$}$$

    and

    $$\cosh x = \frac{{\mathrm{e}}^{x} +{ \mathrm{e}}^{-x}} {2} = 1 + \frac{{x}^{2}} {2!} + \frac{{x}^{4}} {4!} + \frac{{x}^{6}} {6!} + \cdots \quad \mbox{ for all $x \in \mathbb{R}$}.$$
  3. 3.

    Consider \(f(x) = -\log (1 - x)\) for 1 − x > 0. Then f(0) = 0,

    $$f\prime(x) = \frac{1} {1 - x},\quad {f}^{(k)}(x) = \frac{(k - 1)!} {{(1 - x)}^{k}}\quad \mbox{ for $k \geq 1$}.$$

    In particular, \({f}^{(k)}(0) = (k - 1)!\). Thus, f has derivatives of all orders for x < 1. In particular, by Taylor’s theorem applied to ( − R, 1), for any R > 0,

    $$f(x) =\sum\limits_{k=0}^{n}\frac{{f}^{(k)}(0)} {k!} {x}^{k} + {R}_{ n}(x) =\sum\limits_{k=1}^{n}\frac{{x}^{k}} {k} + {R}_{n}(x),$$

    where

    $${R}_{n}(x) = \frac{{f}^{(n+1)}(c)} {(n + 1)!} {x}^{n+1} = \frac{1} {n + 1} \frac{{x}^{n+1}} {{(1 - c)}^{n+1}} = \frac{1} {n + 1}{\left ( \frac{x} {1 - c}\right )}^{n+1}$$

    and c is a number in ( − R, 1) for any R > 0. We observe that R n (x) → 0 only when | x |  < 1. Consequently,

    $$-\log (1 - x) = x + \frac{{x}^{2}} {2} + \frac{{x}^{3}} {3} + \frac{{x}^{4}} {4} + \cdots \quad \mbox{ for all $\vert x\vert < 1$}.$$

    Finally, by considering \(\log (1 + x) -\log (1 - x)\), we see that

    $$\frac{1} {2}\log \left (\frac{1 + x} {1 - x}\right ) =\sum\limits_{k=1}^{\infty }\ \frac{{x}^{2k-1}} {2k - 1}\quad \mbox{ if }\vert x\vert < 1.$$

     ∙ 
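
A brief numerical version of the comparison made in Examples 8.46(1) (an added sketch, not from the text): the partial sums of the sine series approach \(\sin x\), and the error stays within the bound \(\vert x{\vert }^{n+1}/(n + 1)!\) used above.

```python
import math

def sin_partial(x, terms):
    """S_{2*terms-1}(x) = sum_{k=0}^{terms-1} (-1)^k x^{2k+1} / (2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 2.0
for terms in (1, 2, 3, 4, 5):
    degree = 2 * terms - 1                                     # S_1, S_3, S_5, S_7, S_9
    approx = sin_partial(x, terms)
    bound = abs(x) ** (degree + 1) / math.factorial(degree + 1)
    print(degree, approx, abs(math.sin(x) - approx), bound)    # error <= bound, and both tend to 0
```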

Example 8.47.

If we apply Taylor’s theorem with

$$f(x) =\sin x\quad \mbox{ for $x \in [-3,3]$},$$

then we have (for c ∈ ( − 3, 3))

$$\left \vert \sin x - x + \frac{{x}^{3}} {3!} \right \vert = \left \vert \sin (c)\frac{{x}^{4}} {4!} \right \vert < \frac{{3}^{4}} {4!} \quad \mbox{ for $\vert x\vert < 3$}.$$

Similarly, we see that

$$\left \vert {\mathrm{e}}^{x} -\Big{(}1 + x + \frac{{x}^{2}} {2!} + \frac{{x}^{3}} {3!} \Big{)}\right \vert = \left \vert {\mathrm{e}}^{c}\frac{{x}^{4}} {4!} \right \vert < \frac{{3}^{4}{\mathrm{e}}^{3}} {4!} \quad \mbox{ for $\vert x\vert < 3$}.$$

 ∙ 

Example 8.48 (Not all infinitely differentiable functions are analytic). 

Consider (see Figure 8.9)

$$f(x) = \left \{\begin{array}{ll} {\mathrm{e}}^{-1/{x}^{2} } & \mbox{ if }x\neq 0, \\ 0 &\mbox{ if }x = 0. \end{array} \right.$$

Note that this function is clearly continuous for all \(x \in \mathbb{R}\). In fact,

$${\lim }_{x\rightarrow 0}{\mathrm{e}}^{-1/{x}^{2} } {=\lim }_{u\rightarrow +\infty } \frac{1} {{\mathrm{e}}^{{u}^{2} }} = 0.$$

At this point, it is worth remarking that if \(g(x) ={ \mathrm{e}}^{-1/x}\), x≠0, then the same argument would not work, since as \(x \rightarrow 0-\), one would get

$${\lim }_{x\rightarrow 0-}{\mathrm{e}}^{-1/x} ={ \mathrm{e}}^{+\infty } = \infty.$$

Thus, in this case, one could perhaps modify g to ϕ in the following form:

$$\phi (x) = \left \{\begin{array}{ll} {\mathrm{e}}^{-1/x}&\mbox{ if }x > 0, \\ 0 &\mbox{ if }x \leq 0. \end{array} \right.$$

However, we proceed to show that \(f \in {C}^{\infty }(\mathbb{R})\), i.e., f is infinitely differentiable on \((-\infty,\infty )\). Before we look at the properties of f(x), we observe that for x≠0, f′(x) exists by the chain rule, so that

$$f\prime(x) = 2{x}^{-3}{\mathrm{e}}^{-1/{x}^{2} }\quad \mbox{ for $x\neq 0$}.$$

Now,

$${\lim }_{x\rightarrow 0}\frac{f(x) - f(0)} {x} {=\lim }_{x\rightarrow 0} \frac{1/x} {{\mathrm{e}}^{1/{x}^{2} }},$$

so that, by l’Hôpital’s rule,

$${\lim }_{x\rightarrow 0+} \frac{1/x} {{\mathrm{e}}^{1/{x}^{2} }} {=\lim }_{u\rightarrow +\infty } \frac{u} {{\mathrm{e}}^{{u}^{2} }} {=\lim }_{u\rightarrow +\infty } \frac{1} {2u{\mathrm{e}}^{{u}^{2} }} = 0$$

and

$${\lim }_{x\rightarrow 0-} \frac{1/x} {{\mathrm{e}}^{1/{x}^{2} }} {=\lim }_{u\rightarrow -\infty } \frac{u} {{\mathrm{e}}^{{u}^{2} }} = {-\lim }_{u\rightarrow +\infty } \frac{u} {{\mathrm{e}}^{{u}^{2} }} = 0.$$

Therefore, f′(0) exists and equals zero. Next, we show that f′(x) is continuous on \(\mathbb{R}\). Obviously, f′ is continuous for x≠0, and so we need to check the continuity only at 0. For this, we find that (again by l’Hôpital’s rule)

$${\lim }_{x\rightarrow 0}f\prime(x) = {2\,\lim }_{x\rightarrow 0} \frac{1/{x}^{3}} {{\mathrm{e}}^{1/{x}^{2} }} = {2\,\lim }_{u\rightarrow \pm \infty }\frac{{u}^{3}} {{\mathrm{e}}^{{u}^{2} }} = 0 = f\prime(0).$$

Note that because of the square term in the denominator, we need to consider \(u \rightarrow +\infty \) and \(u \rightarrow -\infty \) separately, and in both cases the limit value is seen to be 0. Thus, \(f \in {C}^{1}(\mathbb{R})\). Our next aim is to show that \(f \in {C}^{\infty }(\mathbb{R})\). For this, we observe that for \(n = 0,1,2,\ldots \),

$${ \lim }_{x\rightarrow 0}\frac{{\mathrm{e}}^{-1/{x}^{2} }} {{x}^{n}} {=\lim }_{u\rightarrow \pm \infty }\frac{{u}^{n}} {{\mathrm{e}}^{{u}^{2} }},$$
(8.11)

which is seen to be 0, by l’Hôpital’s rule. The case n = 0 shows that f is continuous at 0, whereas the case n = 1 implies that f is continuously differentiable on \(\mathbb{R}\). We have discussed these two cases above, and so we now consider the case n ≥ 2. Note that the function f(x) has the form e − g(x), where \(g(x) = 1/{x}^{2}\). Clearly, for x≠0, the higher derivatives f (n)(x) are given by

$${f}^{(n)}(x) = \frac{{p}_{n}(1/x)} {{\mathrm{e}}^{1/{x}^{2} }},$$

where p n is a polynomial of the form

$${p}_{n}(x) = {a}_{n}{x}^{n} + {a}_{ n-1}{x}^{n-1} + \cdots + {a}_{ 1}x + {a}_{0}.$$

This fact can easily be proved by induction and the chain rule. By (8.11), we now have

$${\lim }_{x\rightarrow 0}{f}^{(n)}(x) {=\lim }_{ u\rightarrow \infty }\frac{{p}_{n}(u)} {{\mathrm{e}}^{{u}^{2} }} = 0\quad \mbox{ for $n \geq 2$},$$

since \({\mathrm{e}}^{{u}^{2} }\) goes to infinity faster than any polynomial (one can use l’Hôpital’s rule). We have already shown that f′(0) = 0, and thus by induction it can be shown that f (n)(0) = 0 for all n. In fact, if \({f}^{(k)}(0) = 0\) for all \(k = 1,2,\ldots,n\), then

$$\begin{array}{rcl}{ f}^{(n+1)}(0)& =& {\lim }_{ x\rightarrow 0}\frac{{f}^{(n)}(x) - {f}^{(n)}(0)} {x} \\ & =& {\lim }_{x\rightarrow 0}\frac{(1/x){p}_{n}(1/x)} {{\mathrm{e}}^{1/{x}^{2} }} \\ & =& {\lim }_{x\rightarrow 0}\frac{{p}_{n+1}(1/x)} {{\mathrm{e}}^{1/{x}^{2} }} \\ & =& {\lim }_{x\rightarrow 0}{f}^{(n+1)}(x) = 0, \\ \end{array}$$

and hence f (n + 1)(0) exists and is zero. Thus, f has derivatives of all orders at 0, and hence \(f \in {C}^{\infty }(\mathbb{R})\). Therefore, the Taylor series of f about 0 certainly converges (it is identically zero), but not to f(x) for any x≠0. ∙ 
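
A small numerical sketch (added here, not from the text) makes the conclusion concrete: the difference quotients of f at 0 are numerically indistinguishable from 0, every Maclaurin polynomial of f is the zero polynomial, and yet f(x) > 0 for every x≠0.

```python
import math

def f(x):
    """The function of Example 8.48: exp(-1/x^2) for x != 0, and 0 at x = 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x ** 2)

for h in (1e-1, 1e-2, 1e-3):
    print(h, (f(h) - f(0.0)) / h)  # difference quotients are essentially 0, matching f'(0) = 0

print(f(0.5), f(0.1))              # f itself is strictly positive away from 0
# The Maclaurin series of f is identically 0, so it converges everywhere, but not to f(x) for x != 0.
```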

Remark 8.49.

From Example 8.48, we note that if \(f(x) ={ \mathrm{e}}^{-1/{x}^{2} }\) had a Taylor series expansion about the origin, then we would have f(x) = 0 in a neighborhood of the origin, since f (k)(0) = 0 for all k. Since the exponential function never vanishes, \(f(x) ={ \mathrm{e}}^{-1/{x}^{2} }\) certainly cannot be identically zero and so does not admit a Taylor series expansion about the origin.

Fig. 8.8

The approximation of ex.

Fig. 8.9

The graph of \(f(x) =\exp (-1/{x}^{2})\), f(0) = 0.

Is it correct to say that f (n)(0) = 0 for all n implies f(x) = 0 for all x in a neighborhood of 0? (Compare with Example 8.48.) What is your answer in the case of a complex-valued function? Does this have something to do with the uniqueness theorem for complex-valued analytic functions in a domain? This is beyond the scope of this book, but an interested reader may pursue the comparison. ∙ 

8.2.8 Questions and Exercises

Questions 8.50.

  1. 1.

    Suppose that the series ∑a k x k converges at all positive integer values of x. Must the series be convergent at all real values of x? What can be said about the radius of convergence of the power series?

  2. 2.

    Can a power series of the form \(\sum\limits_{k=0}^{\infty }{a}_{k}{(x - 2)}^{k}\) be convergent at x = 0 and divergent at x = 3?

  3. 3.

    What can be said about the radius of convergence of the power series \(\sum\limits_{k=0}^{\infty }{a}_{k}{(x - 2)}^{k}\) if it is convergent at x = 1 and divergent at x = 5?

  4. 4.

    Suppose that \(\sum\limits_{k=0}^{\infty }{a}_{k}{(x - 2)}^{k}\) converges at x = 5 but diverges at \(x = -1\). What can be said about the radius of convergence of the given series?

  5. 5.

    Suppose that \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) on ( − R, R) (R > 0) such that f is odd on ( − R, R). What can be said about f?

  6. 6.

    What can be said about the radius of convergence of the power series ∑ k = 0 a k x k if there exists a real sequence {x k } converging to 0 and the series is divergent at each x k ?

  7. 7.

    If {b n } is a decreasing sequence converging to zero, must the functional series ∑ k = 1 b k coskx be convergent on [0, 2π]?

  8. 8.

    Let I 1, I 2, and I 3 denote the intervals of convergence of ∑ k = 0 a k x k, \(\sum\limits_{k=1}^{\infty }k{a}_{k}{x}^{k-1}\), and \(\sum\limits_{k=2}^{\infty }k(k - 1){a}_{k}{x}^{k-2}\), respectively. Must we have I 1 = I 2 ? Must we have I 1 ⊆ I 2 ? Must we have I 2 ⊆ I 1 ? Is there a simple relationship between the sets I 1, I 2, and I 3?

  9. 9.

    Suppose that f(x) =  ∑a k x k and g(x) =  ∑b k x k have positive radii of convergence R 1 and R 2, respectively.

    1. (a)

      Does the Cauchy product of the two power series converge for x≠0? If so, does it converge to f(x)g(x) in | x |  < min{R 1, R 2}?

    2. (b)

      If \(\vert {a}_{k}\vert \leq \vert {b}_{k}\vert \) for all large values of k, must there be a relationship between \({R}_{1}\) and \({R}_{2}\)?

  10. 10.

    Are the sum and difference of two real analytic functions real analytic?

  11. 11.

    Is the product of two real analytic functions real analytic?

  12. 12.

    Is the reciprocal of a nowhere-vanishing real analytic function real analytic?

  13. 13.

    Are there infinitely differentiable functions that are not real analytic?

  14. 14.

    Suppose that f is real analytic on an open interval (a, b). Must f be infinitely differentiable on (a, b)?

  15. 15.

    Suppose that g is real analytic on an open set I and J is an open set such that g(I) ⊂ J and f is analytic on J. Must the composition f ∘ g be real analytic on I?

  16. 16.

    Suppose that f is such that f(0) = 0 and \(f(x) ={ \mathrm{e}}^{-1/{x}^{2} }\) for x≠0. Must f be real analytic on \(\mathbb{R} \setminus \{ 0\}\)? Can f be real analytic on \(\mathbb{R}\)?

  17. 17.

    We have mentioned that \(f(x) = 1/(1 + {x}^{2})\) is real analytic on \(\mathbb{R}\), but the power series expansion of f(x) about x = 0 converges only for | x |  < 1. Why does the power series not converge at all points of \(\mathbb{R}\)?

  18. 18.

    Suppose that f is continuous on \(\mathbb{R}\) and that both f 2 and f 3 are real analytic. Must f be real analytic?

Exercises 8.51.

  1. 1.

    Let f(x) be as in Example 8.48, where we have shown that f (n)(0) = 0 for all n ≥ 1. Conclude that the remainder term in Taylor’s theorem (with a = 0) does not converge to zero as \(n \rightarrow \infty \) for x≠0.

  2. 2.

    Show that the power series \(\sum\limits_{k=1}^{\infty }({(-1)}^{k-1}/(2k - 1)){x}^{2k-1}\) converges for | x | ≤ 1.

  3. 3.

    If R is the radius of convergence of \(\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\), determine the radius of convergence of the following series:

    1. (a)

      \(\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{pk}\), (b) \(\sum\limits_{k=0}^{\infty }{a}_{pk}{x}^{k}\), (c) \(\sum\limits_{k=0}^{\infty }{a}_{k}^{p}{x}^{k}\),

    where p is a positive integer.

  4. 4.

    Give an example of a power series that has:

    1. (a)

      radius of convergence zero.

    2. (b)

      radius of convergence \(\infty \).

    3. (c)

      finite radius of convergence.

    4. (d)

      conditional convergence at both endpoints of the interval of convergence.

    5. (e)

      conditional convergence only at one of the endpoints.

    6. (f)

      absolute convergence at both endpoints of the interval of convergence.

  5. 5.

    Give an example of a power series ∑ k = 0 a k x k (if it exists) that:

    1. (a)

      converges at \(x = -1\) but diverges at x = 2.

    2. (b)

      converges at x = 1 but diverges at \(x = -1\).

    3. (c)

      converges at x = 2 but diverges at \(x = -1,1\).

    4. (d)

      converges at \(x = -1,1\) but diverges at x = 2.

  6. 6.

    Determine the radius of convergence of each of the following power series ∑ k = 1 a k x k when a k equals:

    $$\begin{array}{lllllllllllllllllll} \mathbf{(a)}&{k}^{\log k}. &&&\mathbf{(b)}&\frac{(3k)!} {{(k!)}^{2}}.&&&\mathbf{(c)} &{k}^{{k}^{2} }.&&&\mathbf{(d)}&\prod\limits_{m=0}^{k}\frac{(a + m)(b + m)} {(c + m)(1 + m)}\ (a,b,c > 0). \\ \mathbf{(e)} &\frac{{3}^{k}} {k!}.&&&\mathbf{(f)} &{k}^{\sqrt{k}}. &&&\mathbf{(g)}&\frac{{2}^{k}} {k!}. &&&\mathbf{(h)}&{r}^{{k}^{2} }\ (\vert r\vert < 1).\end{array}$$
  7. 7.

    Determine the radius and the interval of convergence of each of the following power series:

    $$\begin{array}{llllllllllllllll} \mathbf{(a)} &\sum\limits_{k=1}^{\infty }k!{x}^{3k+3}. &&&\mathbf{(b)}&\sum\limits_{k=2}^{\infty }\frac{{k}^{2} + 1} {{k}^{2} - 1}{x}^{k}. &&&\mathbf{(c)}&\sum\limits_{k=0}^{\infty }\frac{{k}^{3}} {k!} \frac{{x}^{2k}} {{2}^{k}}. \\ \mathbf{(d)}&\sum\limits_{k=1}^{\infty }{x}^{{k}^{3} }. &&&\mathbf{(e)} &\sum\limits_{k=1}^{\infty } \frac{{x}^{{k}^{2} }} {{(k!)}^{k}}. &&&\mathbf{(f)} &\sum\limits_{k=1}^{\infty }{\left (1 + \frac{2} {k}\right )}^{k}{(x - 3)}^{k}. \\ \mathbf{(g)} &\sum\limits_{k=1}^{\infty }{r}^{{k}^{3} }{(x + 2)}^{k}.&&&\mathbf{(h)}&\sum\limits_{k=1}^{\infty }\frac{{2}^{k}} {k} {(x + 1)}^{k+1}.&&&\mathbf{(i)} &\sum\limits_{k=1}^{\infty }\frac{k(k - 1)} {{k}^{2} + 5} {(x + 2)}^{k}\end{array}$$
  8. 8.

    Consider the power series ∑ k = 0 a k x k, where

    $${a}_{k} = \left \{\begin{array}{ll} \frac{1} {{5}^{k}} &\mbox{ if $k$ is odd,} \\ {3}^{k} &\mbox{ if $k$ is even,}\end{array} \quad k \in {\mathbb{N}}_{0}.\right.$$
    1. (a)

      Show that neither \({\lim }_{k\rightarrow \infty }\root{k}\of{\vert {a}_{k}\vert }\) nor \({\lim }_{k\rightarrow \infty }\vert {a}_{k+1}/{a}_{k}\vert \) exists.

    2. (b)

      Determine the radius of convergence of

      $$\sum\limits_{k=0}^{\infty } \frac{1} {{5}^{2k+1}}{x}^{2k+1}\quad \mbox{ and}\quad \sum\limits_{k=0}^{\infty }{3}^{2k}{x}^{2k}.$$
    3. (c)

      What is the radius of convergence of the original power series? Can this be obtained from (b)?

    4. (d)

      What is the interval of convergence of the original power series?

  9. 9.

    For each \(\alpha \in \mathbb{R}\), show that

    $${(1 + x)}^{\alpha } = 1 +\sum\limits_{k=1}^{\infty }\frac{\alpha (\alpha - 1)\cdots (\alpha - k + 1)} {k!} {x}^{k}\quad \mbox{ for $\vert x\vert < 1$.}$$
  10. 10.

    Show that

    $$\sum\limits_{k=0}^{\infty }{(k + 1)}^{2}{x}^{k} = \frac{1 + x} {{(1 - x)}^{3}}\quad \mbox{ for $\vert x\vert < 1$.}$$
  11. 11.

    Suppose that 1960 ≤ | a n  | ≤ 2008 for all n ≥ 0. Discuss the radius of convergence of the series \(\sum\limits_{k=0}^{\infty }{a}_{k}{(x - 2009)}^{k}\), and justify your answer.

  12. 12.

    Find the set of values of x for which the power series

    $$\sum\limits_{k=1}^{\infty }(1 + {(-3)}^{k-1}){x}^{k}$$

    converges.

  13. 13.

    For what values of \(x \in \mathbb{R}\) does the functional series \(\sum\limits_{k=0}^{\infty }{((x - 1)/(x + 2))}^{k}{x}^{k}\) converge absolutely?

  14. 14.

    Find all the values of \(x \in \mathbb{R}\) for which the functional series \(\sum\limits_{k=0}^{\infty }{(1 - {x}^{2})}^{k}\) is absolutely convergent.

  15. 15.

    Suppose that

    $${a}_{k} = \left \{\begin{array}{ll} 1 &\mbox{ if $k = 3m$}, \\ \frac{{(-1)}^{m}} {m} &\mbox{ if $k = 3m + 1$}, \\ \frac{1} {{m}^{2}} & \mbox{ if $k = 3m + 2$,}\end{array} \right.\quad \mbox{ and}\quad {b}_{k} = \left \{\begin{array}{ll} {3}^{m} &\mbox{ if $k = 2m$}, \\ \frac{{(-1)}^{m}} {m} &\mbox{ if $k = 2m + 1$},\end{array} \right.$$

    for m ≥ 0. Find the interval of convergence of ∑a k x k and ∑b k x k, respectively.

  16. 16.

    Using the Cauchy product rule and the power series expansion of ex and \(1/(1 - x)\), determine a k for k = 0, 1, 2, 3 in

    $$\frac{{\mathrm{e}}^{x}} {1 - x} =\sum\limits_{k=0}^{\infty }{a}_{ k}{x}^{k}.$$
  17. 17.

    Find the real solution greater than 1 of the equation

    $$x =\sum\limits_{n=1}^{\infty }\frac{n(n + 1)} {{x}^{n}}.$$
  18. 18.

    Suppose the series \(f(x) =\sum\limits_{k=0}^{\infty }{a}_{k}{x}^{k}\) and \(g(x) =\sum\limits_{k=0}^{\infty }{b}_{k}{x}^{k}\) converge for | x |  < 1 and f(x) = g(x) for \(x = 1/(n + 1),\ n \in \mathbb{N}\). Show that a k  = b k for all k ≥ 0.

  19. 19.

    Let f be infinitely differentiable on ( − 1, 1). Prove that f is real analytic in some neighborhood of the origin if and only if there exist two positive real numbers r and K such that

    $$\left \vert \frac{{f}^{(n)}(x)} {n!} \right \vert \leq \frac{K} {{r}^{n}}\quad \mbox{ for all $n \geq 0$ and all $x \in (-1,1)$ with $\vert x\vert < r$.}$$
  20. 20.

    Let I be an open interval containing [a, b] and \(f \in {C}^{n+1}(I, \mathbb{R})\). Show that the Taylor formula can be put into the following form:

    $$f(b) =\sum\limits_{k=0}^{n}\frac{{f}^{(k)}(a)} {k!} {(b - a)}^{k} + {R}_{ n},\quad {R}_{n} = \frac{1} {n!}{\int \nolimits \nolimits }_{a}^{b}{(b - t)}^{n}{f}^{(n+1)}(t)\,\mathrm{d}t,$$

    where R n is called the integral form of the remainder term.