Introduction

A unique feature of Charlier polynomials [1,2,3,4,5] is their affinity with the Poisson distribution, which has many important applications. Charlier polynomials concisely express the behavior of Erlang loss systems, a fundamental concept in queueing theory [6,7,8]. Here, Charlier polynomials (and the transition to the Hermite function) are instrumental when computing the probability of the first loss for a time-variable number of servers [9]. Another example is the generalization of stochastic integrals over Poisson distributions to multiple stochastic integrals, which can be effectively computed using Charlier polynomials [10, 11]; yet another is that of random matrices over Poisson distributions [12], which can be characterized by Charlier polynomial zeros.

High-dimensional or asymptotic problems typically involve Charlier polynomials of high degree and order (index). For instance, the asymptotic behavior of Erlang loss systems in the number of servers is described by Charlier polynomials whose degree and order tend to infinity simultaneously according to the Halfin–Whitt regime [13]. At first glance, the classical formula ([5, Eq. 9.14.12], [14, p. 532], [4, Eq. 18.21.9], [3, Eq. 2.82.7])

$$\begin{aligned} \underset{a\rightarrow \infty }{\mathop {\lim }}\,{{\left( 2a \right) }^{n/2}}C_{n}(a+x\sqrt{2a},a)={{\left( -1 \right) }^{n}}{{H}_{n}}(x) \end{aligned}$$
(1)

appears useful for reducing Charlier polynomials in this limit, but unfortunately, this formula holds only for non-negative integer n. In light of how long this formula has been known, it is somewhat surprising that it can be shown that

$$\begin{aligned} \underset{a\rightarrow \infty }{\mathop {\lim }}\,{{\left( 2a \right) }^{\nu /2}}C_{\lceil a- x\sqrt{2a}\rceil }(\nu ,a)=H_\nu (x) \end{aligned}$$
(2)

for any real x and \(\nu \), a much stronger statement (Fig. 1). Here, the ceiling function \(\lceil x \rceil \) denotes the smallest integer not smaller than x, and \(H_\nu (x)\) denotes the Hermite function [15, Ch. 10].

A proof of (1) has been given via Krawtchouk polynomials [3, pp. 36–37]. When \(\nu \) is a non-negative real number and \(a-x\sqrt{2a}\) is an integer, pointwise convergence of (2) (without a rate bound) follows implicitly from [16].

Fig. 1
figure 1

Transition of Charlier polynomials \({{\left( 2a \right) }^{\nu /2}}C_{\lceil a+\sqrt{a}\rceil }(\nu ,a)\) (thin lines) to the Hermite function \(H_\nu (-1/\sqrt{2})\) (thick line). The different values of the parameter a are 100 (dotted line), 400 (dashed line), and 1600 (solid thin line). Figure adapted from Fig. 2 in [9] under Creative Commons license

In Sect. 2, it is proved rigorously that convergence to the Hermite function holds for any real \(\nu \), and that convergence is uniform for \(\nu \) and x in any bounded interval, i.e., locally uniform. A sharp rate bound is established. The same technique is then employed in Sect. 3 to prove that there is a similar transition of the derivative with respect to \(\nu \), and a sharp rate bound is provided here, too. These results are used in Sect. 4 for proving that zeros of Charlier polynomials converge to zeros of the Hermite function.

The remainder of this section recalls some well-known definitions and recurrence relations from [2, 4, 5, 15] to make the paper self-contained. It is followed by three sections, each proving an aspect of the transition of Charlier polynomials to Hermite functions.

The notation “\(A\triangleq B\)” is used for “A is defined as B”, to make the introduction of new symbols more explicit. The expression “bounded \(\nu \le -3\)” is shorthand for “\(\nu \) in any bounded interval \([{{\nu }_{0}},-3]\).” Charlier polynomials \(C_n(x,a)\) will be abbreviated as \(c_{n}^{a}(x)\), \(c_{n}(x)\), \(c_{n}^{a}\), or even \(c_{n}\), unless there is a risk of misunderstanding. They can be defined for positive a and non-negative integer n by [2, Eq. 10.25.4], [4, Eq. 18.20.8],

$$\begin{aligned} c_{n}^{a}\left( x \right) \triangleq \underset{k=0}{\overset{n}{\mathop \sum }}\,\left( \begin{matrix} n \\ k \\ \end{matrix} \right) \left( \begin{matrix} x \\ k \\ \end{matrix} \right) k!{{\left( -a \right) }^{-k}} , \end{aligned}$$
(3)

where

$$\begin{aligned} \left( \begin{matrix} x \\ k \\ \end{matrix} \right) \triangleq {\left\{ \begin{array}{ll} {x\left( x-1 \right) \cdots (x-k+1)}/{k!} &{}\quad \text {for}~k\ge 1 \\ 1 &{}\quad \text {for}~k=0 \\ \end{array}\right. } . \end{aligned}$$

These polynomials obey the three-term recurrence relation [2, Eq. 10.25.8], [4, Eq. 18.22.2],

$$\begin{aligned} -xc_{n}^{a}\left( x \right) =ac_{n+1}^{a}\left( x \right) -\left( n+a \right) c_{n}^{a}\left( x \right) +nc_{n-1}^{a}(x) , \end{aligned}$$
(4)

and the difference equation [2, Eq. 10.25.9], [4, Eq. 18.22.12],

$$\begin{aligned} -nc_{n}^{a}\left( x \right) =ac_{n}^{a}\left( x+1 \right) -\left( x+a \right) c_{n}^{a}\left( x \right) +xc_{n}^{a}(x-1) , \end{aligned}$$
(5)

as well as the backward recurrence relation [5, Eq. 9.14.8],

$$\begin{aligned} \frac{x}{a}c_{n-1}^{a}\left( x-1 \right) =c_{n-1}^{a}(x)-c_{n}^{a}(x) . \end{aligned}$$
(6)
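As a quick numerical cross-check of definition (3) and the relations (4)–(6) (not part of the paper's argument), the following sketch evaluates the definition directly; the helper names `gen_binom` and `charlier` are ours.

```python
import math

def gen_binom(x, k):
    # generalized binomial coefficient: x(x-1)...(x-k+1)/k!
    p = 1.0
    for j in range(k):
        p *= x - j
    return p / math.factorial(k)

def charlier(n, x, a):
    # definition (3): C_n(x,a) = sum_k C(n,k) C(x,k) k! (-a)^(-k)
    return sum(math.comb(n, k) * gen_binom(x, k) * math.factorial(k) * (-a) ** (-k)
               for k in range(n + 1))

n, x, a = 5, 2.3, 7.0
c = lambda m, t: charlier(m, t, a)

# three-term recurrence (4): -x c_n = a c_{n+1} - (n+a) c_n + n c_{n-1}
r4 = a * c(n + 1, x) - (n + a) * c(n, x) + n * c(n - 1, x)
assert abs(-x * c(n, x) - r4) < 1e-9

# difference equation (5): -n c_n(x) = a c_n(x+1) - (x+a) c_n(x) + x c_n(x-1)
r5 = a * c(n, x + 1) - (x + a) * c(n, x) + x * c(n, x - 1)
assert abs(-n * c(n, x) - r5) < 1e-9

# backward recurrence (6): (x/a) c_{n-1}(x-1) = c_{n-1}(x) - c_n(x)
assert abs((x / a) * c(n - 1, x - 1) - (c(n - 1, x) - c(n, x))) < 1e-9
print("recurrences (4)-(6) verified")
```

For the moderate degrees used here, plain double precision suffices.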

The Hermite function \({{H}_{\nu }}(x)\) is a solution of the differential equation [15, Eq. 10.2.3]

$$\begin{aligned} {y}''=2x{y}'-2\nu y , \end{aligned}$$
(7)

and satisfies the three-term recurrence [15, Eq. 10.4.7]

$$\begin{aligned} {{H}_{\nu +1}}\left( x \right) -2x{{H}_{\nu }}\left( x \right) +2\nu {{H}_{\nu -1}}\left( x \right) =0 , \end{aligned}$$
(8)

and the derivative rule [15, Eq. 10.4.4]

$$\begin{aligned} {H'_{\nu }}\left( x \right) =2\nu {{H}_{\nu -1}}(x) . \end{aligned}$$
(9)

It can be defined by [15, Eq. 10.2.8]

$$\begin{aligned} {{H}_{\nu }}\left( x \right) \triangleq {{2}^{\nu }}\sqrt{\pi }\left[ \left( \frac{1}{\Gamma \left( \frac{1- \nu }{2} \right) } \right) M\left( -\frac{\nu }{2};\frac{1}{2};{{x}^{2}} \right) - \frac{2x}{\Gamma \left( -\frac{\nu }{2} \right) }M\left( \frac{1- \nu }{2};\frac{3}{2};{{x}^{2}} \right) \right] ,\nonumber \\ \end{aligned}$$
(10)

where M is the confluent hypergeometric function of the first kind. When the expression involves a gamma function of a non-positive integer argument, the expression should be interpreted by its limiting value.
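Definition (10) can likewise be checked numerically. The sketch below (helper names ours) implements M by its Kummer series and handles the gamma poles with a reciprocal gamma that vanishes at non-positive integers, which is exactly the limiting-value interpretation just mentioned.

```python
import math

def kummer_M(b0, c0, z, terms=120):
    # Kummer's confluent hypergeometric function M(b; c; z), direct series
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (b0 + k) / (c0 + k) * z / (k + 1)
        s += t
    return s

def rgamma(z):
    # reciprocal gamma function; zero at the poles z = 0, -1, -2, ...
    if z <= 0 and z == int(z):
        return 0.0
    return 1.0 / math.gamma(z)

def hermite(nu, x):
    # definition (10) of the Hermite function H_nu(x)
    return 2.0 ** nu * math.sqrt(math.pi) * (
        rgamma((1 - nu) / 2) * kummer_M(-nu / 2, 0.5, x * x)
        - 2 * x * rgamma(-nu / 2) * kummer_M((1 - nu) / 2, 1.5, x * x))

# integer orders recover the classical Hermite polynomials
assert abs(hermite(2, 1.0) - (4 * 1.0**2 - 2)) < 1e-9
assert abs(hermite(3, 0.5) - (8 * 0.5**3 - 12 * 0.5)) < 1e-9

# recurrence (8) and derivative rule (9) at a non-integer order
nu, x, h = 0.7, 0.3, 1e-6
assert abs(hermite(nu + 1, x) - 2 * x * hermite(nu, x)
           + 2 * nu * hermite(nu - 1, x)) < 1e-8
d = (hermite(nu, x + h) - hermite(nu, x - h)) / (2 * h)
assert abs(d - 2 * nu * hermite(nu - 1, x)) < 1e-4
print("Hermite function checks passed")
```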

Transition of Charlier Polynomials

Theorem 1

For real x, \(\nu \), and positive a,

$$\begin{aligned} {{\left( 2a \right) }^{\nu /2}}c_{\lceil a-x\sqrt{2a}\rceil }^{a}(\nu )={{H}_{\nu }}\left( x \right) +O\left( \frac{1}{\sqrt{a}} \right) , \end{aligned}$$

where \(c_{n}^{a}(\nu )\) are Charlier polynomials and \({{H}_{\nu }}(x)\) is the Hermite function. The error bound \(O\left( {1}/{\sqrt{a}} \right) \) is locally uniform in \(\nu \) and x and is sharp in the sense that there are \(\nu \) and x for which the error is proportional to \(1/\sqrt{a}\) for arbitrarily large a.

Proving asymptotic properties of Charlier polynomials is difficult, since they do not satisfy a second-order linear ordinary differential equation with respect to the independent variable [17]. However, the three-term recurrence relation (4) is a discretization of such a differential equation, namely (7). This can be used to prove the theorem in the following way: the theorem is first proven for the special case \(x=0\) and \(\nu \le -4\) (Lemmas 1–5) and then generalized to arbitrary real \(\nu \) (Lemma 6). After that, the scaled polynomials are shown to approximate a Cauchy polygon converging to the solution \({{H}_{\nu }}\left( x \right) \) of the initial value problem for the Hermite differential equation (Lemma 7).
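Before turning to the proof, the theorem can be illustrated numerically at the point \(x=0\), where \(H_\nu (0)={{2}^{\nu }}\sqrt{\pi }/\Gamma \left( \frac{1-\nu }{2} \right) \) by (10). The sketch below (the helper `y_at_zero` is ours) exploits the fact that, for negative \(\nu \), the series (3) has positive terms only, so plain floating point suffices.

```python
import math

def y_at_zero(nu, a):
    # y_nu(0) = (2a)^(nu/2) C_A(nu, a) with A = ceil(a); for negative nu the
    # series (3) has positive terms only: C_A(nu) = sum_k C(A,k) (-nu)_k a^(-k)
    A = math.ceil(a)
    t, s = 1.0, 1.0
    for k in range(A):
        t *= (A - k) * (k - nu) / ((k + 1) * a)  # ratio of consecutive terms
        s += t
    return (2 * a) ** (nu / 2) * s

nu = -4.0
H0 = 2.0 ** nu * math.sqrt(math.pi) / math.gamma((1 - nu) / 2)  # H_{-4}(0) = 1/12
errs = [abs(y_at_zero(nu, a) - H0) for a in (100.0, 400.0, 1600.0, 6400.0)]
assert errs[-1] < errs[0] / 2       # error shrinks roughly like 1/sqrt(a)
assert errs[-1] / H0 < 0.1
print([round(e, 5) for e in errs])
```

The observed errors decay at the \(1/\sqrt{a}\) rate claimed by the theorem.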

Convergence for \(x=0\) and \(\nu \le -4\)

For notational convenience, define \(A\triangleq \lceil a \rceil \) and

$$\begin{aligned} y_{\nu }^{a}(x)\triangleq {{\left( 2a \right) }^{\nu /2}}c_{\lceil a-x\sqrt{2a}\rceil }^{a}(\nu ) . \end{aligned}$$
(11)

The superscript will be left out in \(y_{\nu }^{a}\) and \(c_{n}^{a}\) unless there is a risk of misunderstanding. Consider the case \(x=0\) and \(\nu \le -4\). By the definition of Charlier polynomials (3),

$$\begin{aligned} {{c}_{n}}\left( \nu \right) =\underset{k=0}{\overset{n}{\mathop \sum }}\,\left( \begin{matrix} n \\ k \\ \end{matrix} \right) \frac{\Gamma \left( k-\nu \right) }{\Gamma \left( -\nu \right) }~{{a}^{-k}} . \end{aligned}$$

To prove that

$$\begin{aligned}\underset{a\rightarrow \infty }{\mathop {\lim }}\,{{y}_{\nu }}(0)={{H}_{\nu }}\left( 0 \right) =\frac{{{2}^{\nu }}\sqrt{\pi }}{\Gamma \left( \frac{1-\nu }{2} \right) }\end{aligned}$$

using the definition of \({{H}_{\nu }}(x)\) in (10), \({{y}_{\nu }}(0)\) can be expressed as a sum

$$\begin{aligned} {{y}_{\nu }}(0)=\frac{{{2}^{\nu /2}}}{\Gamma \left( -\nu \right) }\underset{k=0}{\overset{A}{\mathop \sum }}\,{{T}_{k}} \end{aligned}$$
(12)

of terms

$$\begin{aligned} {{T}_{k}}\triangleq {{a}^{\nu /2}}\frac{\Gamma \left( k-\nu \right) }{k!}~\frac{A!{{a}^{-k}}}{\left( A-k \right) !} . \end{aligned}$$

When \(\nu \) is negative, these terms are all positive. The series is difficult to sum in closed form, but can be estimated by bounding the factors separately. Another difficulty is the changing behavior of \(T_k\) with increasing a. This can be remedied by placing the border between “head” and “tail” sections at an index that grows with a suitably tuned power of a.

Lemma 1

The factor

$$\begin{aligned} p\left( k \right) \triangleq \frac{A!{{a}^{-k}}}{\left( A-k \right) !}={{\left( \frac{a}{A} \right) }^{- k}}~\underset{j=0}{\overset{k-1}{\mathop \prod }}\,\left( 1-\frac{j}{A} \right) \end{aligned}$$

for \(1\le k\le A~\)satisfies

$$\begin{aligned} p\left( k \right) \le \exp \left( -\frac{{{k}^{2}}}{2A} \right) \left[ 1+O\left( \frac{k}{a} \right) \right] \end{aligned}$$

and for \(1\le k<A/2\),

$$\begin{aligned} p\left( k \right) \ge \exp \left( -\frac{{{k}^{2}}}{2A} \right) \left[ 1+O\left( \frac{k}{a}+\frac{{{k}^{3}}}{{{a}^{2}}} \right) \right] . \end{aligned}$$

Proof

Define the “nuisance factor” due to the ceiling-function truncation as

$$\begin{aligned} \beta \triangleq {{\left( \frac{a}{A} \right) }^{k}}={{\left( \frac{a}{a+\left( \lceil a\rceil -a \right) } \right) }^{k}}={{\left( \frac{a}{a+\theta } \right) }^{k}}=1+O\left( \frac{k}{a} \right) , \end{aligned}$$
(13)

where \(0\le \theta <1\). For \(1\le k\le A\), taking the logarithm of \(\beta p(k)\) and Taylor expanding,

$$\begin{aligned} \ln \left[ \beta p\left( k \right) \right]&= \underset{j=0}{\overset{k-1}{\mathop \sum }}\,\ln \left( 1- \frac{j}{A} \right) \nonumber \\&= \underset{j=0}{\overset{k-1}{\mathop \sum }}\,\left( -\frac{j}{A}- \frac{{{j}^{2}}}{2{{A}^{2}}}-\frac{{{j}^{3}}}{3{{A}^{3}}}-\frac{{{j}^{4}}}{4{{A}^{4}}}-\cdots \right) \nonumber \\&= -\frac{k\left( k-1 \right) }{2A}-\underset{j=0}{\overset{k-1}{\mathop \sum }}\,\left( \frac{{{j}^{2}}}{2{{A}^{2}}}+\frac{{{j}^{3}}}{3{{A}^{3}}}+\frac{{{j}^{4}}}{4{{A}^{4}} }+\cdots \right) \nonumber \\&\triangleq -\frac{k\left( k-1 \right) }{2A}-{{R}_{p} } , \end{aligned}$$
(14)

where \({{R}_{p}}\ge 0\). By re-exponentiation,

$$\begin{aligned}\beta p\left( k \right) \le \exp \left( -\frac{k(k-1)}{2A} \right) =\exp \left( - \frac{{{k}^{2}}}{2A}+\frac{k}{2A} \right) =\exp \left( -\frac{{{k}^{2}}}{2A} \right) \left[ 1+O\left( \frac{k}{a} \right) \right] , \end{aligned}$$

so

$$\begin{aligned}p\left( k \right) \le \exp \left( -\frac{{{k}^{2}}}{2A} \right) \left[ 1+O\left( \frac{k}{a} \right) \right] . \end{aligned}$$

On the other hand, for \(1\le k<A/2\), by comparison with a geometric series,

$$\begin{aligned} {{R}_{p}}&=\underset{j=0}{\overset{k-1}{\mathop \sum }}\,\left( \frac{{{j}^{2}}}{2{{A}^{2}}}+\frac{{{j}^{3}}}{3{{A}^{3}}}+\frac{{{j}^{4}}}{4{{A}^{4}}}+\cdots \right) \\&\le \underset{j=0}{\overset{k-1}{\mathop \sum }}\,\left( \frac{{{j}^{2}}}{2{{A}^{2}}}+\frac{{{j}^{3}}}{2{{A}^{3}}}+\frac{{{j}^{4}}}{2{{A}^{4}}}+\cdots \right) \\&=\underset{j=0}{\overset{k-1}{\mathop \sum }}\,\left( \frac{{{j}^{2}}}{2{{A}^{2}}}\frac{1}{1-j/A} \right) \le \underset{j=0}{\overset{k-1}{\mathop \sum }}\,\frac{{{j}^{2}}}{{{A}^{2}}}\le \frac{{{k}^{3}}}{{{A}^{2}}} , \end{aligned}$$

so by (14),

$$\begin{aligned}\beta p\left( k \right) \ge \exp \left( -\frac{k(k-1)}{2A}-\frac{{{k}^{3}}}{{{A}^{2}}} \right) \ge \exp \left( -\frac{{{k}^{2}}}{2A}-\frac{{{k}^{3}}}{{{A}^{2}}} \right) . \end{aligned}$$

By (13),

$$\begin{aligned} p\left( k \right)&\ge \exp \left( -\frac{{{k}^{2}}}{2A}-\frac{{{k}^{3}}}{{{A}^{2}}} \right) \left[ 1+O\left( \frac{k}{a} \right) \right] \\&=\exp \left( -\frac{{{k}^{2}}}{2A} \right) \exp \left( -\frac{{{k}^{3}}}{{{A}^{2}}} \right) \left[ 1+O\left( \frac{k}{a} \right) \right] \\&=\exp \left( -\frac{{{k}^{2}}}{2A} \right) ~\left[ 1+O\left( \frac{k}{a}+\frac{{{k}^{3}}}{{{a}^{2}}} \right) \right] . \end{aligned}$$

\(\square \)
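The two-sided bounds of Lemma 1 lend themselves to a numerical spot check (ours, not part of the proof). Computing p(k) through log-gamma avoids overflow; an integer-valued a is chosen so that the nuisance factor \(\beta \) in (13) equals 1.

```python
import math

def p_factor(k, a):
    # p(k) = A! a^(-k) / (A - k)!  via log-gamma for numerical stability
    A = math.ceil(a)
    return math.exp(math.lgamma(A + 1) - math.lgamma(A - k + 1) - k * math.log(a))

a = 10_000.0   # integer-valued, so the nuisance factor beta in (13) equals 1
A = math.ceil(a)
for k in (10, 100, 300):
    ratio = p_factor(k, a) / math.exp(-k * k / (2 * A))
    # consistent with the relative O(k/a) and O(k/a + k^3/a^2) error terms
    assert abs(ratio - 1) < 2 * (k / a + k**3 / a**2)
print("Lemma 1 bounds consistent")
```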

The following lemma is similar to Gautschi’s inequality [18], but whereas the latter inequality is restricted to \(-1\le \nu \le 0\), the lemma here needs to hold for arbitrary negative \(\nu \).

Lemma 2

The factor

$$\begin{aligned}q\left( k \right) \triangleq \frac{\Gamma \left( k-\nu \right) }{k!}\end{aligned}$$

for \(1\le k\le A\) and \(\nu \le 0\) satisfies

$$\begin{aligned}q\left( k \right) ={{k}^{-\nu -1}}\left[ 1+O\left( \frac{1}{k} \right) \right] . \end{aligned}$$

Proof

By Stirling’s approximation for \(k>0\) [19, §6.1.37-38], [4, Eq. 5.11.3]

$$\begin{aligned}\Gamma \left( k \right) =\sqrt{\frac{2\pi }{k}}{{\left( \frac{k}{e} \right) }^{k}}\left[ 1+O\left( \frac{1}{k} \right) \right] , \end{aligned}$$

and the relation

$$\begin{aligned}{{\left( 1-\frac{\nu }{k} \right) }^{k}}=\exp \left[ k~\ln \left( 1-\frac{\nu }{k} \right) \right] =\exp \left[ -\nu +O\left( \frac{1}{k} \right) \right] ={{e}^{-\nu }}\left[ 1+O\left( \frac{1}{k} \right) \right] \end{aligned}$$

gives

$$\begin{aligned} \frac{\Gamma \left( k-\nu \right) }{\Gamma \left( k \right) }&=\sqrt{\frac{k}{k-\nu }}{{\left( \frac{k-\nu }{e} \right) }^{k-\nu }}{{\left( \frac{e}{k} \right) }^{k}}\left[ 1+O\left( \frac{1}{k} \right) \right] \\&={{\left( 1-\frac{\nu }{k} \right) }^{-\frac{1}{2}-\nu }}{{\left( 1-\frac{\nu }{k} \right) }^{k}}{{e}^{\nu }}{{k}^{-\nu }}\left[ 1+O\left( \frac{1}{k} \right) \right] \\&=\left[ 1+O\left( \frac{1}{k} \right) \right] {{e}^{-\nu }}{{e}^{\nu }}{{k}^{- \nu }}\left[ 1+O\left( \frac{1}{k} \right) \right] \\&={{k}^{-\nu }}\left[ 1+O\left( \frac{1}{k} \right) \right] , \end{aligned}$$

so from the definition of q(k),

$$\begin{aligned}q\left( k \right) =\frac{\Gamma \left( k-\nu \right) }{k!}=\frac{\Gamma \left( k-\nu \right) }{k\,\Gamma \left( k \right) }={{k}^{-\nu - 1}}\left[ 1+O\left( \frac{1}{k} \right) \right] . \end{aligned}$$

\(\square \)
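Lemma 2 can be spot-checked the same way (the helper `q_factor` is ours): the relative deviation of q(k) from \({{k}^{-\nu -1}}\) should decay like 1/k.

```python
import math

def q_factor(k, nu):
    # q(k) = Gamma(k - nu)/k!  via log-gamma to avoid overflow at large k
    return math.exp(math.lgamma(k - nu) - math.lgamma(k + 1))

for nu in (-4.0, -3.5):
    errs = [abs(q_factor(k, nu) / k ** (-nu - 1) - 1) for k in (100, 400, 1600)]
    # Lemma 2: the relative deviation is O(1/k), so it should shrink with k
    assert errs[2] < errs[1] < errs[0] < 0.1
print("Lemma 2 asymptotics confirmed")
```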

Now, it is time to take on the sum (12), split into a head part and a tail part at index \(M\triangleq \lceil {{A}^{3/4}}\rceil \),

$$\begin{aligned} \underset{k=0}{\overset{A}{\mathop \sum }}\,{{T}_{k}}=\underset{k=0}{\overset{M- 1}{\mathop \sum }}\,{{T}_{k}}+\underset{k=M}{\overset{A}{\mathop \sum }}\,{{T}_{k}}\triangleq {{R}_\mathrm{head}}+{{R}_\mathrm{tail}} . \end{aligned}$$
(15)

Define \(\Delta t\triangleq 1/\sqrt{A}\) and the function

$$\begin{aligned}{{f}_{\nu }}\left( t \right) \triangleq {{t}^{-\nu -1}}\exp \left( -\frac{{{t}^{2}}}{2} \right) . \end{aligned}$$

Clearly, the functions \({{f}_{\nu }}\left( t \right) \) (Fig. 2) and

$$\begin{aligned} f''_\nu (t) = \left[ t^4 + (1+2\nu )t^2+(\nu ^2+3\nu +2)\right] t^{-\nu -3} e^{-t^2/2} \end{aligned}$$

are continuous and bounded for bounded \(\nu \le -3\) and \(t\ge 0\).

Fig. 2
figure 2

The function \({{f}_{\nu }}(t)\) for \(\nu =-4\)

Lemma 3

The following relations hold for \(\nu \le -3\):

$$\begin{aligned} \underset{k=M}{\overset{A~}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t=O\left( \frac{1}{\sqrt{a}} \right) \end{aligned}$$
(16)

and

$$\begin{aligned} \sum _{k=0}^A f_{\nu } \left( k\Delta t \right) \Delta t= 2^{-\nu /2-1} \Gamma \left( -\frac{\nu }{2} \right) +O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$
(17)

Proof

By the trapezoidal rule with its error term, since \({{f}_{\nu }}\left( t \right) \) and \({f''_{\nu }}(t)\) are bounded for \(\nu \le -3\) and \(t\ge 0\), there is some \(\tau \in \left[ M\Delta t,A\Delta t \right] \) such that

$$\begin{aligned} \underset{M\Delta t}{\overset{A\Delta t}{\mathop \int }}\,{{f}_{\nu }}\left( t \right) \mathrm{d}t&= \left( \frac{f_\nu ( A\Delta t)}{2}+\frac{f_\nu ( M\Delta t)}{2} \right) \Delta t \\&\quad + \underset{k=M+1}{\overset{A-1~}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t- \frac{A\Delta t-M\Delta t}{12}\Delta {{t}^{2}}{f''_\nu }\left( \tau \right) \\&= \frac{f_\nu ( A\Delta t)+f_\nu ( M\Delta t)}{2}\,\Delta t +\underset{k=M+1}{\overset{A-1~}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t+O\left( A \Delta t^3\right) , \end{aligned}$$

so that

$$\begin{aligned} \sum _{k=M}^A f_\nu \left( k\Delta t\right) \Delta t = \int _{M\Delta t}^{A\Delta t} f_\nu \left( t \right) \mathrm{d}t + O\left( \frac{1}{\sqrt{a}}\right) . \end{aligned}$$

By substituting \({{t}^{2}}/2=u\) in the integral of \({{f}_{\nu }}\), the upper incomplete gamma function [19, §6.5.3], [4, Eq. 8.2.2] is obtained,

$$\begin{aligned} \mathop {\int }^{}{{f}_{\nu }}(t)\mathrm{d}t&=\mathop {\int }^{}{{t}^{-\nu -1}}{{e}^{-\frac{{{t}^{2}}}{2}}}\mathrm{d}t \nonumber \\&=\mathop {\int }^{}{{\left( \sqrt{2u} \right) }^{-\nu -2}}{{e}^{-u}}\mathrm{d}u \nonumber \\&={{2}^{-\nu /2-1}}\mathop {\int }^{}{{u}^{-\nu /2-1}}{{e}^{-u}}\mathrm{d}u \nonumber \\&=-{{2}^{-\nu /2-1}}~\Gamma \left( -\frac{\nu }{2},u \right) +C . \end{aligned}$$
(18)

Asymptotically [19, §6.5.32], [4, Eq. 8.11.2-3],

$$\begin{aligned} \Gamma \left( s,z \right) ={{z}^{s-1}}{{e}^{-z}}\left[ 1+O\left( \frac{1}{z} \right) \right] , \end{aligned}$$
(19)

implying that \(\Gamma (s,z)\) approaches zero exponentially fast as z increases. Since \(z={{\left( M\Delta t \right) }^{2}}/2\ge \sqrt{A}/2\) grows with a, this term is eventually smaller than any negative power of a, i.e.,

$$\begin{aligned} \Gamma \left( -\frac{\nu }{2},\frac{{{(M\Delta t)}^{2}}}{2} \right) = O\left( \frac{1}{\sqrt{a}}\right) . \end{aligned}$$

This proves the first relation. For the second relation, by (18),

$$\begin{aligned} \int _0^{A\Delta t} f_\nu (t)\mathrm{d}t&= 2^{-\nu /2-1} \Gamma \left( -\frac{\nu }{2}\right) - 2^{-\nu /2-1} \Gamma \left( -\frac{\nu }{2},\frac{(A\Delta t)^2}{2} \right) \\&= 2^{-\nu /2-1} \Gamma \left( -\frac{\nu }{2}\right) + O\left( \frac{1}{\sqrt{a}}\right) . \end{aligned}$$

\(\square \)
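Relation (17) also lends itself to a direct numerical check (ours, not part of the proof): the Riemann sum of \({{f}_{\nu }}\) with step \(\Delta t=1/\sqrt{A}\) approaches \({{2}^{-\nu /2-1}}\Gamma \left( -\nu /2 \right) \).

```python
import math

def f(nu, t):
    # f_nu(t) = t^(-nu-1) exp(-t^2/2); the power is non-negative for nu <= -1
    return t ** (-nu - 1) * math.exp(-t * t / 2)

A = 10**4
dt = 1 / math.sqrt(A)
for nu in (-4.0, -3.5, -3.0):
    s = sum(f(nu, k * dt) for k in range(A + 1)) * dt
    exact = 2 ** (-nu / 2 - 1) * math.gamma(-nu / 2)  # right-hand side of (17)
    assert abs(s - exact) < 1e-2
print("relation (17) verified numerically")
```

For \(\nu =-4\) the exact value is \(2\Gamma (2)=2\), and the Riemann sum matches it well within the \(O(1/\sqrt{a})\) tolerance.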

Lemma 4

For bounded \(\nu \le -3\), \({{R}_\mathrm{tail}}=O\left( 1/\sqrt{a} \right) \).

Proof

By Lemmas 1 and 2,

$$\begin{aligned} 0<~{{R}_\mathrm{tail}}&= {{a}^{\nu /2}}\underset{k=M}{\overset{A~}{\mathop \sum }}\,q\left( k \right) p\left( k \right) \\&\le {{a}^{\nu /2}}\underset{k=M}{\overset{A~}{\mathop \sum }}\,{{k}^{-\nu - 1}}\left[ 1+ O\left( \frac{1}{k} \right) \right] ~{{e}^{-{{k}^{2}}/2A}}\left[ 1+O\left( \frac{k}{a} \right) \right] \\&={{a}^{\nu /2}}\underset{k=M}{\overset{A~}{\mathop \sum }}\,{{k}^{-\nu -1}}~{{e}^{- {{k}^{2}}/2A}}O\left( 1 \right) . \end{aligned}$$

Writing \(k=\left( k\Delta t \right) \sqrt{A}\),

$$\begin{aligned} {{R}_\mathrm{tail}}&={{a}^{\nu /2}}\underset{k=M}{\overset{A~}{\mathop \sum }}\,{{\left( k\Delta t\sqrt{A} \right) }^{-\nu -1}}{{e}^{- {{\left( k\Delta t \right) }^{2}}/2}}\Delta t\sqrt{A}\cdot O\left( 1 \right) \\&={{\left( \frac{a}{A} \right) }^{\frac{\nu }{2}}}\underset{k=M}{\overset{A~}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t\cdot O\left( 1 \right) \\&=O\left( \frac{1}{\sqrt{a}} \right) \end{aligned}$$

by Lemma 3. \(\square \)

The term \({{R}_\mathrm{head}}\) in (15) can be computed in a similar way.

Lemma 5

For bounded \(\nu \le -4\),

$$\begin{aligned} {{R}_\mathrm{head}}={{2}^{-\nu /2-1}}\Gamma \left( -\frac{\nu }{2} \right) +O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

Proof

This time \(k<M\), and by Lemmas 1 and 2,

$$\begin{aligned} {{R}_\mathrm{head}}&={{a}^{\nu /2}}\underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{k}^{-\nu - 1}}\left[ 1+O\left( \frac{1}{k} \right) \right] {{e}^{- \frac{{{k}^{2}}}{2A}}}\left[ 1+O\left( \frac{k}{a}+\frac{{{k}^{3}}}{{{a}^{2}}} \right) \right] \nonumber \\&={{\left( \frac{a}{A} \right) }^{\nu /2}}\underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t\left[ 1+O\left( \frac{1}{k}+\frac{k}{a}+\frac{{{k}^{3}}}{{{a}^{2 }}} \right) \right] \nonumber \\&=\underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t\left[ 1+O\left( \frac{1}{a}+\frac{\Delta t }{k\Delta t}+\frac{k\Delta t}{\sqrt{a}}+\frac{{{\left( k\Delta t \right) }^{3}}}{\sqrt{a}} \right) \right] \nonumber \\&=\underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t+\underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t\cdot O\left( \frac{\Delta t}{k\Delta t}+\frac{k\Delta t}{\sqrt{a}}+\frac{{{\left( k\Delta t \right) }^{3}}}{\sqrt{a}} \right) \nonumber \\&\triangleq S+\Delta S . \end{aligned}$$
(20)

Using the identity \({{f}_{\nu }}\left( t \right) {{t}^{n}}={{f}_{\nu -n}}(t)\), the error term \(\Delta {}S\) is

$$\begin{aligned} \Delta S&= \underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{f}_{\nu +1}}\left( k\Delta t \right) \Delta t\cdot O\left( \Delta t \right) \\&\quad + \underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{f}_{\nu -1}}\left( k\Delta t \right) \Delta t\cdot O\left( {1}/{\sqrt{a}} \right) \\&\quad + \underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{f}_{\nu - 3}}\left( k\Delta t \right) \Delta t\cdot O\left( {1}/\sqrt{a}\right) . \end{aligned}$$

Since

$$\begin{aligned} 0 \le \underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t \le \underset{k=0}{\overset{A}{\mathop \sum }}\,{{f}_{\nu }}\left( k\Delta t \right) \Delta t , \end{aligned}$$

which by Lemma 3 is bounded for \(\nu \le -3\),

$$\begin{aligned} \Delta S = O\left( \frac{1}{\sqrt{a}} \right) \end{aligned}$$

for \(\nu \le -4\). For the sum S in (20), again using Lemma 3,

$$\begin{aligned} S&= \sum _{k=0}^{M-1}f_\nu \left( k\Delta t \right) \Delta t \\&= \sum _{k=0}^{A}f_\nu \left( k\Delta t \right) \Delta t - \sum _{k=M}^{A}f_\nu \left( k\Delta t \right) \Delta t \\&={{2}^{-\nu /2-1}}\Gamma \left( -\frac{\nu }{2} \right) +O\left( \frac{1}{\sqrt{a}}\right) . \end{aligned}$$

\(\square \)

By (12), combining Lemmas 4 and 5,

$$\begin{aligned} y_{\nu }^{a}\left( 0 \right) =\frac{{{2}^{\nu /2}}}{\Gamma \left( -\nu \right) }\left( {{R}_\mathrm{head}}+{{R}_\mathrm{tail}} \right) =\frac{\Gamma \left( - \frac{\nu }{2} \right) }{2~\Gamma \left( -\nu \right) }+O\left( \frac{1}{\sqrt{a}}\right) . \end{aligned}$$

By the gamma function duplication rule [15, Eq. 1.2.3], [4, Eq. 5.5.5],

$$\begin{aligned} \frac{\Gamma \left( z \right) }{\Gamma \left( 2z \right) }=\frac{{{2}^{1-2z}}~\sqrt{\pi }}{\Gamma \left( z+\frac{1}{2} \right) } , \end{aligned}$$

substituting \(z=-\nu /2\),

$$\begin{aligned} y_{\nu }^{a}\left( 0 \right) = \frac{{{2}^{\nu }}\sqrt{\pi }}{\Gamma \left( \frac{1- \nu }{2} \right) }+O\left( \frac{1}{\sqrt{a}} \right) ={{H}_{\nu }}(0)+O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$
(21)
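The duplication step leading to (21) can be confirmed directly; the check itself is ours and uses only the standard library.

```python
import math

# duplication rule: Gamma(z)/Gamma(2z) = 2^(1-2z) sqrt(pi) / Gamma(z + 1/2)
for z in (0.3, 1.7, 2.5):
    lhs = math.gamma(z) / math.gamma(2 * z)
    rhs = 2 ** (1 - 2 * z) * math.sqrt(math.pi) / math.gamma(z + 0.5)
    assert abs(lhs - rhs) < 1e-12 * abs(lhs)

# hence Gamma(-nu/2)/(2 Gamma(-nu)) = 2^nu sqrt(pi)/Gamma((1-nu)/2) = H_nu(0)
nu = -3.3
lhs = math.gamma(-nu / 2) / (2 * math.gamma(-nu))
rhs = 2 ** nu * math.sqrt(math.pi) / math.gamma((1 - nu) / 2)
assert abs(lhs - rhs) < 1e-12
print("duplication step in (21) verified")
```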

Convergence for \(x=0\) and Arbitrary \(\nu \)

Lemma 6

For \(\nu \) in any bounded interval,

$$\begin{aligned} {{y}_{\nu }}\left( 0 \right) ={{H}_{\nu }}\left( 0 \right) +O\left( \frac{1}{\sqrt{a}} \right) , \end{aligned}$$

and for \(\Delta x=1/\sqrt{2a}\),

$$\begin{aligned} \frac{{{y}_{\nu }}\left( 0 \right) -{{y}_{\nu }}\left( -\Delta x \right) }{\Delta x}={H'_\nu \left( 0 \right) }+O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

Proof

Suppose that \({{y}_{\nu }}\left( 0 \right) ={{H}_{\nu }}(0)+O\left( 1/\sqrt{a} \right) \) for bounded \(\nu <{{\nu }_{0}}\), which by (21) holds for \({{\nu }_{0}}=-4\). For \({{\nu }_{0}}-1\le \nu <{{\nu }_{0}}\), the difference equation (5) can be rewritten as

$$\begin{aligned} {{c}_{n}}\left( \nu +1 \right) =\frac{\nu +a-n}{a}{{c}_{n}}\left( \nu \right) - \frac{\nu }{a}{{c}_{n}}\left( \nu -1 \right) , \end{aligned}$$
(22)

so that for \(n=A=\lceil a \rceil \) and by (21),

$$\begin{aligned} {{\left( 2a \right) }^{(\nu +1)/2}}{{c}_{A}}\left( \nu +1 \right)&={{\left( 2a \right) }^{(\nu +1)/2}}~\left[ \frac{\nu +a-A}{a}{{c}_{A}}\left( \nu \right) -\frac{\nu }{a}{{c}_{A}}\left( \nu -1 \right) \right] \nonumber \\&=~\sqrt{2a}\frac{\nu +a-A}{a}~{{y}_{\nu }}\left( 0 \right) -\frac{\nu }{a}2a{{y}_{\nu -1}}\left( 0 \right) \nonumber \\&=O\left( \frac{1}{\sqrt{a}} \right) -2\nu {{y}_{\nu -1}}\left( 0 \right) \nonumber \\&=-2\nu H_{\nu -1}(0) +O\left( \frac{1}{\sqrt{a}} \right) \nonumber \\&={{H}_{\nu +1}}(0)+O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$
(23)

By induction, \({{y}_{\nu }}\left( 0 \right) ={{H}_{\nu }}\left( 0 \right) +O\left( {1}/{\sqrt{a}} \right) \) locally uniformly in \(\nu \). Additionally, by the backward recurrence relation (6) and the derivative rule for the Hermite function (9),

$$\begin{aligned} \frac{{{y}_{\nu }}\left( 0 \right) -{{y}_{\nu }}\left( -\Delta x \right) }{\Delta x}&=\frac{{{\left( 2a \right) }^{\nu /2}}{{c}_{A}}\left( \nu \right) - {{\left( 2a \right) }^{\nu /2}}{{c}_{A+1}}\left( \nu \right) }{1/\sqrt{2a}} \nonumber \\&=\frac{\nu }{a}{{\left( 2a \right) }^{\nu /2+1/2}}{{c}_{A}}\left( \nu -1 \right) \nonumber \\&=2\nu {{y}_{\nu -1}}\left( 0 \right) \nonumber \\&=2\nu {{H}_{\nu -1}}\left( 0 \right) +O\left( \frac{1}{\sqrt{a}} \right) \nonumber \\&={H'_\nu }\left( 0 \right) +O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$
(24)

\(\square \)

Convergence for Arbitrary x and Arbitrary \(\nu \)

To prove that \({{y}_{\nu }}\left( x \right) \) in (11) converges to the solution of the Hermite differential equation (7) having initial conditions \(y\left( 0 \right) ={{H}_{\nu }}(0)\) and \(y'(0)=H'_\nu (0)\), the equation can be rewritten in normal form as

$$\begin{aligned} {\varvec{y}}'={\varvec{A}}(x){\varvec{y}} , \end{aligned}$$
(25)

where \({\varvec{y}}(x)\triangleq {{\left( y\left( x \right) ,~{y}'(x) \right) }^{T}}\) and

$$\begin{aligned}{\varvec{A}}\left( x \right) \triangleq \left( \begin{matrix} 0 &{}\quad 1 \\ -2\nu &{}\quad 2x \\ \end{matrix} \right) . \end{aligned}$$

Let \(r\triangleq \sqrt{2a}\), \(\Delta x\triangleq 1/r\), and \({{x}_{k}}\triangleq k\Delta x\). Define a Cauchy polygon \({\varvec{u}}(x)\) for the differential equation (25) by linear interpolation between points \(\left( {{x}_{k}},{{{\varvec{u}}}_{k}} \right) \), where \({{{\varvec{u}}}_{0}}={\varvec{y}}(0)\) and

$$\begin{aligned} {{{\varvec{u}}}_{k+1}}\triangleq {{{\varvec{u}}}_{k}}+\Delta x~{\varvec{A}}({{x}_{k}})~{{{\varvec{u}}}_{k}} . \end{aligned}$$
(26)
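The Cauchy polygon (26) is simply the forward Euler method for (25). As a sketch (ours, not part of the proof), take the integer order \(\nu =3\), for which \(H_3(x)=8x^3-12x\) is available in closed form, start from the exact initial values, and observe the \(O(\Delta x)=O(1/\sqrt{a})\) error behavior:

```python
import math

def cauchy_polygon(nu, xi, a, y0, dy0):
    # Cauchy polygon (26): forward Euler for u' = A(x)u, A = [[0,1],[-2nu,2x]],
    # with step size dx close to 1/sqrt(2a)
    n = round(xi * math.sqrt(2 * a))
    dx = xi / n
    y, dy = y0, dy0
    for k in range(n):
        x = k * dx
        y, dy = y + dx * dy, dy + dx * (-2 * nu * y + 2 * x * dy)
    return y

# nu = 3: H_3(x) = 8x^3 - 12x, so H_3(0) = 0, H_3'(0) = -12, H_3(1) = -4
errs = [abs(cauchy_polygon(3.0, 1.0, a, 0.0, -12.0) - (-4.0)) for a in (2e4, 2e6)]
assert errs[1] < errs[0] / 2   # error is O(dx) = O(1/sqrt(a))
assert errs[1] < 0.1
print(errs)
```

This is the deterministic half of the argument; Lemma 7 below supplies the uniform error bound.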

Lemma 7

For x and \(\nu \) in bounded intervals \([0,\xi ]\) and \([-\psi ,\psi ]\), respectively, the Cauchy polygon \({\varvec{u}}(x)\) converges uniformly to the Hermite function solution with an error bound

$$\begin{aligned} |{\varvec{u}}(x)-{\varvec{y}}(x) |\le O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

Proof

The Euclidean norm \(\left\Vert {\varvec{A}}(x)\right\Vert \) of \({\varvec{A}}\) in (25) equals the largest singular value of the matrix, so

$$\begin{aligned} \left\Vert {\varvec{A}}(x)\right\Vert ={{\sigma }_{\max }}\left( {\varvec{A}}(x) \right) \le \sqrt{\text {tr}\left( {\varvec{A}}{{\left( x \right) }^{T}}{ \varvec{A}}(x) \right) }=\sqrt{1+4{{\nu }^{2}}+4{{x}^{2}}} . \end{aligned}$$

Given arbitrary \(\xi ,\psi >0\) and \(L\triangleq ~\sqrt{1+4{{\psi }^{2}}+4{{\xi }^{2}}}\), for x in \(\left[ 0,\xi \right] \) and \(\nu \) in \([-\psi ,\psi ]\), by the definition of the Euclidean norm,

$$\begin{aligned} \frac{|{\varvec{A}}\left( x \right) \left( {\varvec{y}}-{\varvec{z}} \right) |}{|{\varvec{y}}-{\varvec{z}} |}\le \left\Vert {\varvec{A}}(x)\right\Vert \le L , \end{aligned}$$

so L is additionally a Lipschitz constant for (25) when \(x\in \left[ 0,\xi \right] \). A definition and two theorems proved in [20, Sect. 7.3] are now handy:

Definition 1

A vector function \({\varvec{u}}(x)\) is an approximate solution with deviation at most \(\epsilon \) in the interval \(a \le x \le \xi +a\) of the vector differential equation

$$\begin{aligned} \mathrm{d}{\varvec{y}}/\mathrm{d}x = {\varvec{Y}}({\varvec{y}},x), \quad a\le x\le a+\xi , \end{aligned}$$

when \({\varvec{u}}(x)\) is continuous and satisfies the differential inequality

$$\begin{aligned} |{\varvec{u}}'(x)-{\varvec{Y}}({\varvec{u}}(x),x)|\le \epsilon \end{aligned}$$

for all except a finite number of points x of the interval \(\left[ a, a+\xi \right] \).

Theorem 2

(Birkhoff and Rota, Th. 7.1) Let the continuously differentiable function \({\varvec{Y}}\) satisfy \(|{\varvec{Y}}|\le M\), \(|\partial {\varvec{Y}}/\partial x|\le C\), and L be a Lipschitz constant in the cylinder \(D: |{\varvec{y}}-{\varvec{c}}|\le K\), \(a \le x \le a + \xi \). Then, any Cauchy polygon in D with partition \(\pi \) is an approximate solution of \({\varvec{y}}'(x)={\varvec{Y}}({\varvec{y}},x)\) with deviation at most \((C + LM)|\pi |\).

Theorem 3

(Birkhoff and Rota, Th. 7.3) Let \({\varvec{y}}(x)\) be an exact solution and \({\varvec{u}}(x)\) be an approximate solution, with deviation \(\epsilon \), of the differential equation \({\varvec{y}}'(x)={\varvec{Y}}({\varvec{y}},x)\). Let \({\varvec{Y}}\) satisfy a Lipschitz condition with Lipschitz constant L. Then, for \(x \ge a\),

$$\begin{aligned} |{\varvec{y}}(x) - {\varvec{u}}(x) |\le |{\varvec{y}}(a) - {\varvec{u}}(a)|e^{L(x-a)} + \left( \epsilon / L \right) \left( e^{L(x-a)} - 1\right) . \end{aligned}$$

Bounds for \(|{\varvec{A}}\left( x \right) {\varvec{u}}(x) |\) and \(|\partial \left( {\varvec{A}}\left( x \right) {\varvec{u}}(x) \right) /\partial x |\) in \([0,\xi ]\) can be chosen as

$$\begin{aligned} |{\varvec{A}}\left( {{x}_{k}} \right) {{{\varvec{u}}}_{k}} |\le L|{{{\varvec{u}}}_{k}} |\le L|{{{\varvec{u}}}_{0}} |\underset{j=0}{\overset{k-1}{\mathop \prod }}\,\left\Vert I+\Delta x~{\varvec{A}}\left( {{x}_{j}} \right) \right\Vert \le L|{{{\varvec{u}}}_{0}} |{{\left( 1+L\Delta x~ \right) }^{k}}\le L|{{{\varvec{u}}}_{0}} |{{e}^{L\xi }}\triangleq M \end{aligned}$$
(27)

and

$$\begin{aligned} {{\left|\frac{\partial \left( {\varvec{A}}\left( x \right) {\varvec{u}}\left( x \right) \right) }{\partial x} \right|}_{x={{x}_{k}}}}=\left|\left( \begin{matrix} 0 &{}\quad 0 \\ 0 &{}\quad 2 \\ \end{matrix} \right) {{{\varvec{u}}}_{k}} \right|\le 2|{{{\varvec{u}}}_{k}} |\le 2|{{{\varvec{u}}}_{0}} |{{e}^{L\xi }}\triangleq C . \end{aligned}$$

By Theorem 2, Theorem 3, and Lemma 6, for \(x\in [0,\xi ]\) and \(\nu \in [-\psi ,\psi ]\),

$$\begin{aligned} |{\varvec{u}}(x)-{\varvec{y}}(x) |&\le |{\varvec{u}}(0)-{\varvec{y}}(0) |{{e}^{Lx}}+\Delta x\left( \frac{C}{L}+M \right) \left( {{e}^{Lx}}-1 \right) \nonumber \\&=O\left( \frac{1}{\sqrt{a}} \right) ~{{e}^{L\xi }}+\Delta x\left( \frac{2}{L}+L \right) |{{{\varvec{u}}}_{0}} |{{e}^{L\xi }}\left( {{e}^{L\xi }}-1 \right) \nonumber \\&=O\left( \frac{1}{\sqrt{a}} \right) , \end{aligned}$$
(28)

which is independent of x and \(\nu \), so the Cauchy polygon (26) converges uniformly to the Hermite function when \(a\rightarrow \infty \). \(\square \)

Define \({\varvec{z}}_0\triangleq {\varvec{u}}_0\) and

$$\begin{aligned} {{\varvec{z}}_{k+1}}&\triangleq \left( { \begin{array}{c} {{y}_{\nu }}\left( {{x}_{k+1}} \right) \\ \dfrac{{{y}_{\nu }}\left( {{x}_{k+1}} \right) -{{y}_{\nu }}\left( {{x}_{k}} \right) }{\Delta x} \\ \end{array} } \right) \\&= {{{\varvec{z}}}_{k}}+\Delta x\left( {\begin{array}{c} \dfrac{{{y}_{\nu }}\left( {{x}_{k+1}} \right) -{{y}_{\nu }}\left( {{x}_{k}} \right) }{\Delta x} \\ \dfrac{{{y}_{\nu }}\left( {{x}_{k+1}} \right) -2{{y}_{\nu }}\left( {{x}_{k}} \right) +{{y}_{\nu }}\left( {{x}_{k-1}} \right) }{\Delta x^{2}} \\ \end{array}} \right) . \end{aligned}$$
(29)

Let \(m\triangleq \lceil a-{{x}_{k}}r\rceil =a-{{x}_{k}}r+\left( \lceil a \rceil -a \right) =a-{{x}_{k}}r+\theta \), where \(0\le \theta <1\). For simplicity of notation, the argument of \({{c}_{m}}\) is dropped when it is \(\nu \). Consequently,

$$\begin{aligned}{{{\varvec{z}}}_{k}}= \left( \begin{matrix} {{r}^{\nu }}{{c}_{m}} \\ {{r}^{\nu +1}}\left( {{c}_{m-1}}-{{c}_{m}} \right) \\ \end{matrix} \right) \end{aligned}$$

and

$$\begin{aligned} {{{\varvec{z}}}_{k+1}}={{{\varvec{z}}}_{k}}+\Delta x\left( {\begin{matrix} {{r}^{\nu +1}}\left( {{c}_{m-1}}-{{c}_{m}} \right) \\ {{r}^{\nu +2}}\left( {{c}_{m-1}}-2{{c}_{m}}+{{c}_{m+1}} \right) \\ \end{matrix}} \right) . \end{aligned}$$

Multiplying the three-term recurrence relation (4) by two, and substituting \(x=\nu \) and \(n=m\) gives the identity

$$\begin{aligned} -2\nu {{c}_{m}}=2a{{c}_{m+1}}-\left( 2m+2a \right) {{c}_{m}}+2m{{c}_{m-1}} .\end{aligned}$$

Rearranging, and using the facts that \(2a={{r}^{2}}\) and \(m=a-{{x}_{k}}r+\theta \),

$$\begin{aligned} {{r}^{2}}{{c}_{m-1}}-2{{r}^{2}}{{c}_{m}}+{{r}^{2}}{{c}_{m+1}}=2{{x}_{k}}r\left( {{c}_{m-1}}- {{c}_{m}} \right) -2\nu {{c}_{m}}-2\theta \left( {{c}_{m-1}}-{{c}_{m}} \right) ,\nonumber \\ \end{aligned}$$
(30)

by which

$$\begin{aligned} {{{\varvec{z}}}_{k+1}}&={{{\varvec{z}}}_{k}}+\Delta x\left( \begin{matrix} {{r}^{\nu +1}}\left( {{c}_{m-1}}-{{c}_{m}} \right) \\ 2\left( {{x}_{k}}-{\theta }/{r} \right) {{r}^{\nu +1}}\left( {{c}_{m-1}}-{{c}_{m}} \right) -2\nu {{r}^{\nu }}{{c}_{m}} \\ \end{matrix} \right) \\&={{{\varvec{z}}}_{k}}+\Delta x\left( \begin{matrix} 0 &{}\quad 1 \\ -2\nu &{}\quad 2{{x}_{k}}-2\theta /r \\ \end{matrix} \right) {{{\varvec{z}}}_{k}} \\&={{{\varvec{z}}}_{k}}+\Delta x~{\varvec{A}}\left( {{x}_{k}} \right) {{{\varvec{z}}}_{k}}+\Delta x\,\left( \begin{matrix} 0 &{}\quad 0 \\ 0 &{}\quad -2\theta /r \\ \end{matrix} \right) {{{\varvec{z}}}_{k}} . \end{aligned}$$

This is nearly the same expression as for the Cauchy polygon (26), with only the \(\theta \)-term differing. Understanding the product sign below to multiply matrices in the proper order, and \(\varvec{I}\) to denote the identity matrix,

$$\begin{aligned} \frac{|{{{\varvec{z}}}_{k+1}}-{{{\varvec{u}}}_{k+1}} |}{|{{{\varvec{u}}}_{0}} |}&\le \left|\left|\underset{j=0}{\overset{k}{\mathop \prod }}\left[ \varvec{I}+\Delta x~{\varvec{A}}\left( {{x}_{j}} \right) +\Delta x\left( \begin{matrix} 0 &{}\quad 0 \\ 0 &{}\quad -2\theta /r \\ \end{matrix} \right) \right] -\underset{j=0}{\overset{k}{\mathop \prod }}\left[ \varvec{I}+\Delta x\,{\varvec{A}}\left( {{x}_{j}} \right) \right] \right|\right|. \end{aligned}$$

Bounding the factor \(\left|\left|{\varvec{I} + \Delta x~{\varvec{A}}{\left( x_j \right) } }\right|\right|\le \exp (L\xi )\) in the same way as in (27),

$$\begin{aligned} \frac{|{{{\varvec{z}}}_{k+1}}-{{{\varvec{u}}}_{k+1}} |}{|{{{\varvec{u}}}_{0}} |}&\le \underset{j=1}{\overset{k}{\mathop \sum }}\,\left( \begin{matrix} k \\ j \\ \end{matrix} \right) \Delta {{x}^{j}} {\left|\left|{ {\left( \begin{matrix} 0 &{}\quad 0 \\ 0 &{}\quad -2\theta /r \\ \end{matrix} \right) } }\right|\right|^j e^{L\xi } } \nonumber \\&= \underset{j=1}{\overset{k}{\mathop \sum }}\,\left( \begin{matrix} k \\ j \\ \end{matrix} \right) {{\left( \frac{2\theta \Delta x}{r} \right) }^{j}}{{e}^{L\xi }} =O\left( \frac{\xi }{r} \right) {{e}^{L\xi }}=O\left( \frac{1}{\sqrt{a}} \right) , \end{aligned}$$
(31)

which demonstrates that \({\varvec{z}}\) converges uniformly to \({\varvec{u}}\) for \(x \in \left[ 0,\xi \right] \) and \(\nu \in [-\psi ,\psi ]\). The proof for the descending direction from \(x=0\) is omitted, since it is exactly analogous. By (28) and Lemma 7,

$$\begin{aligned} |{\varvec{z}}_k - {\varvec{y}}(x_k)|\le |{\varvec{z}}_k - {\varvec{u}}_k |+ |{\varvec{u}}(x_k) - {\varvec{y}}(x_k) |= O\left( \frac{1}{\sqrt{a}} \right) , \end{aligned}$$
(32)

so for \(x_k \le x < x_{k+1}\),

$$\begin{aligned} |y^a_\nu (x)-H_\nu (x) |&\le |y^a_\nu (x_k)-H_\nu (x_k) |+ |y^a_\nu (x_k)-y^a_\nu (x_{k+1}) |\nonumber \\&\le |{\varvec{z}}_k-{\varvec{y}}(x_k) |+ |{\varvec{z}}_k-{\varvec{z}}_{k+1} |\nonumber \\&= O\left( \frac{1}{\sqrt{a}} \right) , \end{aligned}$$
(33)

where the right hand side is independent of x and \(\nu \) for these parameters in any bounded interval.
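As a side check, identity (30) can be verified numerically for \(\nu =2\), where \(c_m(2)=1-(1+2a)m/a^2+m^2/a^2\) is available in closed form. The sketch below is illustrative only; the values of \(a\) and \(k\) are arbitrary choices of ours.

```python
import math

a = 200.5                       # non-integer, so theta = ceil(a) - a = 0.5
r = math.sqrt(2 * a)
nu, k = 2.0, 30
theta = math.ceil(a) - a
x_k = k / r                     # grid point; x_k * r = k is an integer

def c(j):
    """Closed form of c_j(2) = C_j(2; a)."""
    return 1.0 - (1.0 + 2.0 * a) * j / a**2 + j**2 / a**2

m = math.ceil(a - x_k * r)      # equals a - x_k*r + theta
lhs = r**2 * (c(m - 1) - 2 * c(m) + c(m + 1))
rhs = (2 * x_k * r * (c(m - 1) - c(m)) - 2 * nu * c(m)
       - 2 * theta * (c(m - 1) - c(m)))
print(lhs - rhs)                # zero up to rounding error
```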

To demonstrate the sharpness of the bound, choose \(\nu =2\), any real x, and arbitrarily large a such that \(n=a-x\sqrt{2a}\) is an integer. Since \(c_{2}^{a}\left( n \right) =1-\left( 1+2a \right) n/{{a}^{2}}+{{n}^{2}}/{{a}^{2}}\) and \({{H}_{2}}\left( x \right) =4{{x}^{2}}-2\),

$$\begin{aligned} y_{2}^{a}\left( x \right) -{{H}_{2}}\left( x \right) =\frac{2x\sqrt{2}}{\sqrt{a}} . \end{aligned}$$
(34)

This completes the proof of Theorem 1. \(\square \)
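The sharpness computation (34) is exact whenever \(n=a-x\sqrt{2a}\) is an integer, as a short numerical sketch confirms. This is illustrative only; the choices \(a=2s^2\) and \(x=0.25\) are ours, made so that \(n\) comes out integer.

```python
import math

s = 10
a = 2 * s * s                   # a = 200, sqrt(2a) = 2s = 20
x = 0.25                        # x*sqrt(2a) = 5, so n is an integer
n = round(a - x * math.sqrt(2 * a))   # n = 195

c2 = 1 - (1 + 2 * a) * n / a**2 + n**2 / a**2    # c_2^a(n)
y = (2 * a) * c2                # (2a)^(nu/2) with nu = 2
h2 = 4 * x**2 - 2               # H_2(x)
print(y - h2, 2 * x * math.sqrt(2) / math.sqrt(a))   # both are 0.05
```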

Transition of the Derivative

Theorem 4

For real x, \(\nu \), and positive a,

$$\begin{aligned} {\frac{\partial }{\partial \nu } \left\{ { {\left( 2a \right) }^{\nu /2}}c_{\lceil a-x\sqrt{2a}\rceil }^{a}(\nu ) \right\} =\frac{\partial }{\partial \nu }{{H}_{\nu }}\left( x \right) +O\left( \frac{1}{\sqrt{a}} \right) } , \end{aligned}$$

where \(c_{n}^{a}(\nu )\) are Charlier polynomials and \({{H}_{\nu }}(x)\) is the Hermite function. The error bound \(O\left( {1}/{\sqrt{a}}\; \right) \) is sharp and locally uniform for \(\nu \) and x.

The proof of this theorem uses the same technique as the proof of Theorem 1, so the procedure is abbreviated. First, the theorem is proved for the special case \(x=0\) and \(\nu \le -5\), then generalized to arbitrary \(\nu \), and finally shown to converge to the solution of a differential equation uniquely solved by the derivative of the Hermite function.

Convergence for \(x=0\) and \(\nu \le -5\)

Differentiating (12) with respect to \(\nu \),

$$\begin{aligned} {\frac{\partial {y}_{\nu }(0)}{\partial \nu }} =\frac{\partial }{\partial \nu } \left\{ \frac{{{2}^{\nu /2}}}{\Gamma \left( -\nu \right) }\right\} \underset{k=0}{\overset{A}{\mathop \sum }}\,{{T}_{k}} + \frac{{{2}^{\nu /2}}}{\Gamma \left( -\nu \right) }\underset{k=0}{\overset{A}{\mathop \sum }}\,\frac{\partial {{T}_{k}}}{\partial \nu } . \end{aligned}$$
(35)

The first sum \(\sum T_k\) is given by Lemmas 4 and 5. Consider the second sum

$$\begin{aligned} \underset{k=0}{\overset{A}{\mathop \sum }}\, \frac{\partial {{T}_{k}}}{\partial \nu } =\underset{k=0}{\overset{M- 1}{\mathop \sum }}\, \frac{\partial {{T}_{k}}}{\partial \nu } +\underset{k=M}{\overset{A}{\mathop \sum }}\, \frac{\partial {{T}_{k}}}{\partial \nu } \triangleq {{R}_\mathrm{head}}+{{R}_\mathrm{tail}} . \end{aligned}$$
(36)

Here,

$$\begin{aligned} \frac{\partial {{T}_{k}}}{\partial \nu }= \left[ \ln \sqrt{a} - \psi (k - \nu )\right] T_k , \end{aligned}$$
(37)

and \(\psi (z)=\Gamma '(z)/\Gamma (z)\). By [19, §6.3.5 and §6.3.2], [4, Eqs. 5.4.12, 5.4.14, and 5.5.2],

$$\begin{aligned} \psi (k-\nu ) = \psi (k) + O\left( \frac{1}{k}\right) = \ln {k}+O\left( \frac{1}{k}\right) =\ln {k}\cdot \left[ 1+O\left( \frac{1}{k \ln {k}}\right) \right] . \end{aligned}$$

Define

$$\begin{aligned}{{g}_{\nu }}\left( t \right) \triangleq \ln {t}\cdot f_\nu (t) = \ln {t}\cdot {{t}^{-\nu -1}}\exp \left( -\frac{{{t}^{2}}}{2} \right) =- \frac{\partial }{\partial \nu } f_{\nu }(t) . \end{aligned}$$

Since \(t \ln t \rightarrow 0\) when \(t \rightarrow 0^+\), taking zero as the value at \(t = 0\), the functions \({{g}_{\nu }}\left( t \right) \) and

$$\begin{aligned} g''_\nu (t) =\ln t \cdot f''_\nu (t) - (3+2\nu +2t^2)f_{\nu +2}(t) \end{aligned}$$

are continuous and bounded for bounded \(\nu \le -4\) and \(t\ge 0\).
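The relation \(g_\nu =-\partial f_\nu /\partial \nu \) can be confirmed by a finite difference in \(\nu \). A minimal sketch, illustrative only, with function names and sample points of our choosing:

```python
import math

def f(nu, t):
    """f_nu(t) = t^(-nu-1) * exp(-t^2 / 2)."""
    return t ** (-nu - 1) * math.exp(-t * t / 2)

def g(nu, t):
    """g_nu(t) = ln(t) * f_nu(t)."""
    return math.log(t) * f(nu, t)

nu, t, h = -4.0, 1.5, 1e-6
num_deriv = (f(nu + h, t) - f(nu - h, t)) / (2 * h)   # central difference in nu
print(g(nu, t), -num_deriv)     # the two values agree
```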

Lemma 8

For bounded \(\nu \le -4\), \({{R}_\mathrm{tail}}=O\left( 1/\sqrt{a} \right) \).

Proof

By Lemmas 1 and 2, and (37),

$$\begin{aligned} 0<~{{R}_\mathrm{tail}}&= \frac{\mathrm{d}}{\mathrm{d}\nu } \left\{ {{a}^{\nu /2}}\underset{k=M}{\overset{A~}{\mathop \sum }}\, q\left( k \right) p\left( k \right) \right\} \\&\le {{a}^{\nu /2}}\underset{k=M}{\overset{A~}{\mathop \sum }}\, (\ln \sqrt{a} - \ln k) \cdot {{k}^{-\nu - 1}}\left[ 1+ O\left( \frac{1}{k} \right) \right] ~{{e}^{-{{k}^{2}}/2A}}\left[ 1+O\left( \frac{k}{a} \right) \right] \\&={{a}^{\nu /2}}\underset{k=M}{\overset{A~}{\mathop \sum }}\, (\ln \sqrt{a} - \ln k) \cdot {{k}^{-\nu -1}}~{{e}^{- {{k}^{2}}/2A}}O\left( 1 \right) . \end{aligned}$$

But rewriting \(k\) as \(k\Delta t \sqrt{A}\), where \(\Delta t = 1/\sqrt{A}\),

$$\begin{aligned} {{R}_\mathrm{tail}}&= {{a}^{\nu /2}}\underset{k=M}{\overset{A~}{\mathop \sum }}\,{ \left[ \ln \sqrt{a} -\ln (k\Delta t\sqrt{A})\right] \cdot {\left( k\Delta t\sqrt{A} \right) }^{-\nu -1}}{{e}^{- {{\left( k\Delta t \right) }^{2}}/2}}\Delta t\sqrt{A}\cdot O\left( 1 \right) \\&= {{\left( \frac{a}{A} \right) }^{\frac{\nu }{2}}}\underset{k=M}{\overset{A~}{\mathop \sum }}\, \left[ \ln \sqrt{\frac{a}{A}} -\ln (k\Delta t)\right] {f_{\nu }} \left( k\Delta t \right) \Delta t\cdot O\left( 1 \right) \\&= -\underset{k=M}{\overset{A~}{\mathop \sum }}\, {g_{\nu }} \left( k\Delta t \right) \Delta t\cdot O\left( 1 \right) . \end{aligned}$$

Since \(\vert \ln t\vert \le 1/t\) for \(0 < t \le 1\) and \(\vert \ln t\vert \le t\) for \(t\ge 1\), \(\vert g_\nu (t)\vert \le f_{\nu +1}(t) + f_{\nu -1}(t)\) for \(t \ge 0\), and for \(\nu \le -4\),

$$\begin{aligned} \vert R_\mathrm{tail}\vert \le \underset{k=M}{\overset{A~}{\mathop \sum }}\, {\left[ f_{\nu +1 } \left( k\Delta t \right) +f_{\nu -1 }\left( k\Delta t \right) \right] } \Delta t\cdot O\left( 1 \right) = O\left( \frac{1}{\sqrt{a}} \right) \end{aligned}$$

by Lemma 3. \(\square \)
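The elementary bound \(\vert g_\nu (t)\vert \le f_{\nu +1}(t)+f_{\nu -1}(t)\) used in the last step can be spot-checked on a grid; the sketch below is illustrative only, with grid and range chosen arbitrarily by us.

```python
import math

def f(nu, t):
    """f_nu(t) = t^(-nu-1) * exp(-t^2 / 2)."""
    return t ** (-nu - 1) * math.exp(-t * t / 2)

nu = -4.0
for i in range(1, 500):
    t = i * 0.01                # t in (0, 5)
    g_abs = abs(math.log(t)) * f(nu, t)
    assert g_abs <= f(nu + 1, t) + f(nu - 1, t)
print("bound holds on the sampled grid")
```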

Lemma 9

For bounded \(\nu \le -4\),

$$\begin{aligned} {{R}_\mathrm{head}}= \frac{\mathrm{d}}{\mathrm{d}\nu } \left\{ {{2}^{-\nu /2-1}}\Gamma \left( -\frac{\nu }{2} \right) \right\} +O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

Proof

This time \(k<M\), and by Lemmas 1 and 2, like (20),

$$\begin{aligned} {{R}_\mathrm{head}}&= {{\left( \frac{a}{A} \right) }^{\nu /2}}\, \sum _{k=0}^{M-1}\, \left[ \ln \sqrt{a} -\ln (k\Delta t\sqrt{A})\right] {f_{\nu }}\left( k\Delta t \right) \Delta t\left[ 1+O\left( \frac{1}{k}+\frac{k}{a}+\frac{{{k}^{3}}}{{{a}^{2 }}} \right) \right] \nonumber \\&= -\underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{g}_{\nu }}\left( k\Delta t \right) \Delta t + \underset{k=0}{\overset{M-1}{\mathop \sum }}\,{{g}_{\nu }}\left( k\Delta t \right) \Delta t\cdot O\left( \frac{\Delta t}{k\Delta t}+\frac{k\Delta t}{\sqrt{a}}+\frac{{{\left( k\Delta t \right) }^{3}}}{\sqrt{a}} \right) \nonumber \\&\triangleq S+\Delta S . \end{aligned}$$
(38)

Since \(\vert {{g}_{\nu }}\left( t \right) {{t}^{n}}\vert =\vert {{g}_{\nu -n}}(t)\vert \le f_{\nu - n +1}+f_{\nu -n-1}\), the error term \(\Delta {}S\) is

$$\begin{aligned} \vert \Delta S\vert&\le \underset{k=0}{\overset{M-1}{\mathop \sum }} \,{\left[ f_{\nu +2}\left( k\Delta t \right) +f_{\nu }\left( k\Delta t \right) \right] }\Delta t\cdot O\left( \Delta t \right) \\&\quad + \underset{k=0}{\overset{M-1}{\mathop \sum }} \,{\left[ f_{\nu }\left( k\Delta t \right) +f_{\nu -2}\left( k\Delta t \right) \right] }\Delta t\cdot O\left( {1}/{\sqrt{a}} \right) \\&\quad + \underset{k=0}{\overset{M-1}{\mathop \sum }} \,{\left[ f_{\nu -2}\left( k\Delta t \right) +f_{\nu -4}\left( k\Delta t \right) \right] }\Delta t\cdot O\left( {1}/\sqrt{a}\right) = O\left( {1}/\sqrt{a}\right) \end{aligned}$$

for \(\nu \le -4\) by Lemma 3.

For the sum S in (38), again using the trapezoidal rule, as in the proof of Lemma 3,

$$\begin{aligned} S&= -\underset{k=0}{\overset{M-1}{\mathop \sum }}\, g_\nu \left( k\Delta t \right) \Delta t \\&= -\int _0^{M\Delta t}g_\nu \left( t \right) \mathrm{d}t + O\left( \frac{1}{\sqrt{a}} \right) \\&= \int _0^{\infty } \frac{\mathrm{d}}{\mathrm{d}\nu } f_\nu \left( t \right) \mathrm{d}t + \int _{M\Delta t}^{\infty } g_\nu \left( t \right) \mathrm{d}t + O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

For \(\nu \le -4\),

$$\begin{aligned} \int _{M\Delta t}^{\infty }|g_\nu \left( t \right) |\mathrm{d}t \le \int _{M\Delta t}^{\infty } f_{\nu +1}\left( t \right) \mathrm{d}t = 2^{-(\nu +3)/2} \Gamma \left( -\frac{\nu +1}{2},\frac{(M\Delta t)^2}{2}\right) = O\left( \frac{1}{\sqrt{a}} \right) \end{aligned}$$

by (18) and (19).

The function \(f_\nu \) satisfies \(\int _0^\infty {f_{\nu } \mathrm{d}t} < \infty \) for \(\nu \le -1\), and for \(|h|\le 1\) and \(t\ge 1\),

$$\begin{aligned} \left( \frac{t^h - 1}{h}\right) = \left( \frac{e^{h \ln t} - 1}{h}\right) = \ln t + \frac{\ln ^2 t}{2!}h + \frac{\ln ^3 t}{3!}h^2+ \cdots \le e^{\ln t} - 1 < t . \end{aligned}$$

The function \(f_{\nu -1}(t)\) is an integrable function dominating \(|{f_{\nu +h}(t)-f_\nu (t)}|/{h}\) for \(|h|\le 1\) and \(t\ge 1\), since

$$\begin{aligned} \left|\frac{f_{\nu +h}(t)-f_\nu (t)}{h}\right|= \left( \frac{t^h - 1}{h}\right) f_\nu (t) < t f_{\nu }(t) = f_{\nu -1}(t) , \end{aligned}$$

so by Lebesgue's dominated convergence theorem, the integration and differentiation order can be switched in the integral

$$\begin{aligned} \int _0^{\infty } \frac{\mathrm{d}}{\mathrm{d}\nu } f_\nu \left( t \right) \mathrm{d}t = \frac{\mathrm{d}}{\mathrm{d}\nu } \left\{ \int _0^{\infty } f_\nu \left( t \right) \mathrm{d}t \right\} = \frac{\mathrm{d}}{\mathrm{d}\nu } \left\{ 2^{-\nu /2-1} \Gamma \left( {-\frac{\nu }{2}}\right) \right\} . \end{aligned}$$

\(\square \)
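The closed-form value \(\int _0^{\infty } f_\nu \,\mathrm{d}t = 2^{-\nu /2-1}\Gamma (-\nu /2)\) invoked above can be confirmed by elementary quadrature. A minimal sketch, assuming nothing beyond the definition of \(f_\nu \); the truncation point and step count are arbitrary choices of ours.

```python
import math

def f(nu, t):
    """f_nu(t) = t^(-nu-1) * exp(-t^2 / 2)."""
    return t ** (-nu - 1) * math.exp(-t * t / 2)

nu = -4.0
n_steps, upper = 12000, 12.0
h = upper / n_steps
# trapezoidal rule; f(nu, 0) = 0 for nu <= -4, and f(nu, upper) is negligible
integral = h * (sum(f(nu, i * h) for i in range(1, n_steps)) + f(nu, upper) / 2)
closed = 2 ** (-nu / 2 - 1) * math.gamma(-nu / 2)
print(integral, closed)         # both close to 2.0
```

For \(\nu =-4\) the closed form is \(2^{1}\Gamma (2)=2\), matching the quadrature.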

Using Lemmas 8 and 9, Eq. (35) becomes

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}\nu } y_{\nu }^{a}\left( 0 \right)&= \frac{\mathrm{d}}{\mathrm{d}\nu } \left\{ \frac{{2^{\nu /2}}}{\Gamma \left( -\nu \right) } \right\} \cdot \sum _{k=0}^{A} T_k + \frac{{{2}^{\nu /2}}}{\Gamma \left( -\nu \right) } \left( {{R}_\mathrm{head}}+{{R}_\mathrm{tail}} \right) \\&= \frac{\mathrm{d}}{\mathrm{d}\nu } \left\{ \frac{{2^{\nu /2}}}{\Gamma \left( -\nu \right) } \right\} \cdot \left\{ 2^{-\nu /2-1} \Gamma \left( -\frac{\nu }{2}\right) +O\left( \frac{1}{\sqrt{a}} \right) \right\} \\&\quad + \frac{{{2}^{\nu /2}}}{\Gamma \left( -\nu \right) } \cdot \left\{ \frac{\mathrm{d}}{\mathrm{d}\nu } \left[ 2^{-\nu /2-1} \Gamma \left( -\frac{\nu }{2} \right) \right] + O\left( \frac{1}{\sqrt{a}}\right) \right\} \\&= \frac{\mathrm{d}}{\mathrm{d}\nu } \left\{ \frac{\Gamma \left( - \frac{\nu }{2} \right) }{2~\Gamma \left( -\nu \right) } \right\} +O\left( \frac{1}{\sqrt{a}} \right) \\&= \frac{\mathrm{d}}{\mathrm{d}\nu } H_\nu (0)+O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$
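The last step uses \(H_\nu (0)=\Gamma (-\nu /2)/(2\,\Gamma (-\nu ))\), which agrees with the standard value \(H_\nu (0)=2^{\nu }\sqrt{\pi }/\Gamma ((1-\nu )/2)\) by the Legendre duplication formula. A quick numerical sketch, illustrative only, at a few arbitrary non-integer \(\nu \):

```python
import math

for nu in (-4.3, -1.7, 0.4):
    lhs = math.gamma(-nu / 2) / (2 * math.gamma(-nu))
    rhs = 2 ** nu * math.sqrt(math.pi) / math.gamma((1 - nu) / 2)
    print(nu, lhs, rhs)         # the two expressions coincide
```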

Convergence for \(x=0\) and Arbitrary \(\nu \)

Lemma 10

For \(\nu \) in any bounded interval,

$$\begin{aligned} \frac{\partial {y}_{\nu }(0)}{\partial \nu } = {\frac{\partial {H}_{\nu }(0)}{\partial \nu }}+O\left( \frac{1}{\sqrt{a}} \right) , \end{aligned}$$

and for \(\Delta x=1/\sqrt{2a}\),

$$\begin{aligned} \frac{\partial }{\partial \nu } \left\{ \frac{{{y}_{\nu }(0)}-{y}_{\nu }(-\Delta x)}{\Delta x} \right\} = \frac{\partial H'_\nu \left( 0 \right) }{\partial \nu }+O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

Proof

Induction can be applied again, just as in the proof of Lemma 6. Given that \({\partial {y}_{\nu }(0)/\partial \nu }={\partial {H}_{\nu }}(0)/\partial \nu +O\left( 1/\sqrt{a} \right) \) for bounded \(\nu <{{\nu }_{0}}\) and \(n=A=\lceil a \rceil \), again using (22),

$$\begin{aligned} \frac{\partial }{\partial \nu }y_{\nu +1}(0)&= \frac{\partial }{\partial \nu } \left\{ {{\left( 2a \right) }^{(\nu +1)/2}}{{c}_{A}}\left( \nu +1 \right) \right\} \\&= \frac{\partial }{\partial \nu } \left\{ ~\sqrt{2a}\frac{\nu +a-A}{a}~{{y}_{\nu }}\left( 0 \right) -\frac{\nu }{a}2a{{y}_{\nu -1}}\left( 0 \right) \right\} \\&= \frac{\sqrt{2}}{\sqrt{a}} \frac{\partial }{\partial \nu } \left\{ \nu y_{\nu }(0) \right\} + \frac{(a-A)\sqrt{2}}{\sqrt{a}}\frac{\partial }{\partial \nu } y_{\nu } \left( 0 \right) + \frac{\partial }{\partial \nu } \left\{ -2{\nu }{{y}_{\nu -1}}\left( 0 \right) \right\} \\&= O\left( \frac{1}{\sqrt{a}} \right) + \frac{\partial }{\partial \nu }\left\{ -2\nu {{y}_{\nu -1}}\left( 0 \right) \right\} \\&= -2 {{y}_{\nu -1}}\left( 0 \right) -2\nu \frac{\partial }{\partial \nu }\left\{ {{y}_{\nu -1}}\left( 0 \right) \right\} + O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

Applying the induction step,

$$\begin{aligned} \frac{\partial }{\partial \nu }y_{\nu +1}(0)&= -2 {{H}_{\nu -1}}\left( 0 \right) -2\nu \frac{\partial }{\partial \nu }\left\{ {{H}_{\nu -1}}\left( 0 \right) \right\} + O\left( \frac{1}{\sqrt{a}} \right) \\&= \frac{\partial }{\partial \nu }\left\{ -2\nu {{H}_{\nu -1}}\left( 0 \right) \right\} + O\left( \frac{1}{\sqrt{a}} \right) \\&= \frac{\partial }{\partial \nu }\left\{ {{H}_{\nu +1}}\left( 0 \right) \right\} + O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

This implies that \({\partial {{y}_{\nu }}\left( 0 \right) }/{\partial \nu }={\partial {{H}_{\nu }}\left( 0 \right) }/{\partial \nu }+O\left( {1}/{\sqrt{a}} \right) \) locally for \(\nu \). Using the backward recurrence relation (6), as in (24),

$$\begin{aligned} \frac{\partial }{\partial \nu } \left\{ \frac{{{y}_{\nu }(0)}-{y}_{\nu }(-\Delta x)}{\Delta x} \right\}&= \frac{\partial }{\partial \nu } 2\nu {{y}_{\nu -1}}\left( 0 \right) \\&= 2 {{y}_{\nu -1}}\left( 0 \right) + 2\nu \frac{\partial }{\partial \nu } {{y}_{\nu -1}}\left( 0 \right) \\&= 2 {{H}_{\nu -1}}\left( 0 \right) + 2\nu \frac{\partial }{\partial \nu } {{H}_{\nu -1}}\left( 0 \right) +O\left( \frac{1}{\sqrt{a}} \right) \\&= \frac{\partial }{\partial \nu }\left\{ 2\nu {{H}_{\nu -1}}\left( 0 \right) \right\} +O\left( \frac{1}{\sqrt{a}} \right) \\&= \frac{\partial }{\partial \nu } {H'_\nu }\left( 0 \right) +O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$

\(\square \)
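Lemma 10 can be probed numerically by comparing finite differences in \(\nu \) of the scaled Charlier value at \(x=0\) with those of \(H_\nu (0)=2^{\nu }\sqrt{\pi }/\Gamma ((1-\nu )/2)\). An illustrative sketch; function names and parameter choices are ours.

```python
import math

def charlier(n, x, a):
    """C_n(x; a) via a*C_{k+1} = (k + a - x)*C_k - k*C_{k-1}."""
    c_prev, c = 1.0, 1.0 - x / a
    for k in range(1, n):
        c_prev, c = c, ((k + a - x) * c - k * c_prev) / a
    return c if n >= 1 else c_prev

def y0(nu, a):
    """y_nu^a(0) = (2a)^(nu/2) * C_A(nu; a) with A = ceil(a)."""
    return (2 * a) ** (nu / 2) * charlier(math.ceil(a), nu, a)

def H0(nu):
    """H_nu(0) = 2^nu * sqrt(pi) / Gamma((1 - nu)/2)."""
    return 2 ** nu * math.sqrt(math.pi) / math.gamma((1 - nu) / 2)

a, nu, h = 5000.0, 0.5, 1e-4
dy = (y0(nu + h, a) - y0(nu - h, a)) / (2 * h)   # central differences in nu
dH = (H0(nu + h) - H0(nu - h)) / (2 * h)
print(dy, dH)                   # differ by O(1/sqrt(a))
```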

Convergence for Arbitrary x and Arbitrary \(\nu \)

By differentiating Eq. (7) with respect to \(\nu \), and defining \(w \triangleq \partial y/\partial \nu \),

$$\begin{aligned} w''= 2xw'-2\nu w-2y = 2xw'-2\nu w-2H_\nu (x) . \end{aligned}$$
(39)

This equation has the particular solution \(w(x) = \partial H_\nu (x)/\partial \nu \). The homogeneous equation is again the Hermite equation, so the general solution of (39) is

$$\begin{aligned} w = \frac{\partial H_\nu (x)}{\partial \nu } + A H_\nu (x) + B H_\nu (-x) . \end{aligned}$$

For initial conditions \(w(0)=\partial H_\nu (0)/\partial \nu \) and \(w'(0) = \partial H'_\nu (0)/\partial \nu \), the unique solution of (39) is obviously \(w(x) = \partial H_\nu (x)/\partial \nu \).

Let \({\varvec{y}}(x)\triangleq {{\left( y\left( x \right) ,~{y}'(x),~w(x),~w'(x) \right) }^{T}}\). Equation (39) can be rewritten in the normal form (25), where

$$\begin{aligned}{\varvec{A}}\left( x \right) \triangleq \left( \begin{array}{llll} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0\\ -2\nu &{}\quad 2x &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ -2 &{}\quad 0 &{} \quad -2\nu &{}\quad 2x \\ \end{array} \right) . \end{aligned}$$

This time, the Euclidean norm of \({\varvec{A}}\) satisfies \( \left\Vert {\varvec{A}}(x)\right\Vert \le \sqrt{6+8{{\nu }^{2}}+8{{x}^{2}}} \), giving the Lipschitz constant \(\sqrt{6+8{{\psi }^{2}}+8{{\xi }^{2}}}\). In analogy with (26), a Cauchy polygon \({\varvec{u}}(x)\) can be defined such that for x and \(\nu \) in bounded intervals \([0,\xi ]\) and \([-\psi ,\psi ]\), respectively,

$$\begin{aligned} {{\left|\frac{\partial \left( {\varvec{A}}\left( x \right) {\varvec{u}}\left( x \right) \right) }{\partial x} \right|}_{x={{x}_{k}}}}=\left|\left( \begin{array}{llll} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 2 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 2 \\ \end{array} \right) {{{\varvec{u}}}_{k}}\right|\le 2|{{{\varvec{u}}}_{k}} |\le 2|{{{\varvec{u}}}_{0}} |{{e}^{L\xi }} . \end{aligned}$$

As in the proof of Lemma 7, this means that for bounded x and \(\nu \), \({\varvec{u}}(x)\) converges uniformly to the solution \({\varvec{y}}\) with an error bound

$$\begin{aligned} |{\varvec{u}}(x)-{\varvec{y}}(x) |\le O\left( \frac{1}{\sqrt{a}} \right) . \end{aligned}$$
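The Lipschitz constant above comes from bounding the spectral norm of \({\varvec{A}}(x)\) by its Frobenius norm, whose square is simply the sum of squared entries. A minimal sketch; the helper name `frobenius` and the sample point are ours.

```python
import math

def frobenius(rows):
    """Frobenius norm: square root of the sum of squared entries."""
    return math.sqrt(sum(v * v for row in rows for v in row))

nu, x = -3.2, 1.7
A = [[0, 1, 0, 0],
     [-2 * nu, 2 * x, 0, 0],
     [0, 0, 0, 1],
     [-2, 0, -2 * nu, 2 * x]]
bound = math.sqrt(6 + 8 * nu**2 + 8 * x**2)
print(frobenius(A), bound)      # the Frobenius norm matches the bound
```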

Now, extending the definition of \({\varvec{z}}_k\) (29) to four components, \({\varvec{z}}_0\triangleq {\varvec{u}}_0\) and

$$\begin{aligned} {{\varvec{z}}_{k+1}} \triangleq \left( { \begin{array}{c} {{y}_{\nu }}\left( {{x}_{k+1}} \right) \\ \dfrac{{{y}_{\nu }}\left( {{x}_{k+1}} \right) -{{y}_{\nu }}\left( {{x}_{k}} \right) }{\Delta x} \\ {{w}_{\nu }}\left( {{x}_{k+1}} \right) \\ \dfrac{{{w}_{\nu }}\left( {{x}_{k+1}} \right) -{{w}_{\nu }}\left( {{x}_{k}} \right) }{\Delta x} \\ \end{array} } \right) = {{{\varvec{z}}}_{k}}+\Delta x\left( {\begin{array}{c} \dfrac{{{y}_{\nu }}\left( {{x}_{k+1}} \right) -{{y}_{\nu }}\left( {{x}_{k}} \right) }{\Delta x} \\ \dfrac{{{y}_{\nu }}\left( {{x}_{k+1}} \right) -2{{y}_{\nu }}\left( {{x}_{k}} \right) +{{y}_{\nu }}\left( {{x}_{k-1}} \right) }{{\Delta {x}^{2}}} \\ \dfrac{{{w}_{\nu }}\left( {{x}_{k+1}} \right) -{{w}_{\nu }}\left( {{x}_{k}} \right) }{\Delta x} \\ \dfrac{{{w}_{\nu }}\left( {{x}_{k+1}} \right) -2{{w}_{\nu }}\left( {{x}_{k}} \right) +{{w}_{\nu }}\left( {{x}_{k-1}} \right) }{{\Delta {x}^{2}}} \\ \end{array}} \right) , \end{aligned}$$

and writing \(d_m \triangleq \partial c_m/\partial \nu \),

$$\begin{aligned}{{{\varvec{z}}}_{k}}= \left( \begin{matrix} {{r}^{\nu }}{{c}_{m}} \\ {{r}^{\nu +1}}\left( {{c}_{m-1}}-{{c}_{m}} \right) \\ {{r}^{\nu }}{{d}_{m}} \\ {{r}^{\nu +1}}\left( {{d}_{m-1}}-{{d}_{m}} \right) \end{matrix} \right) , \end{aligned}$$

and

$$\begin{aligned} {{{\varvec{z}}}_{k+1}}={{{\varvec{z}}}_{k}}+\Delta x\left( {\begin{matrix} {{r}^{\nu +1}}\left( {{c}_{m-1}}-{{c}_{m}} \right) \\ {{r}^{\nu +2}}\left( {{c}_{m-1}}-2{{c}_{m}}+{{c}_{m+1}} \right) \\ {{r}^{\nu +1}}\left( {{d}_{m-1}}-{{d}_{m}} \right) \\ {{r}^{\nu +2}}\left( {{d}_{m-1}}-2{{d}_{m}}+{{d}_{m+1}} \right) \\ \end{matrix}} \right) . \end{aligned}$$

Differentiating (30) with respect to \(\nu \),

$$\begin{aligned}{{r}^{2}}{{d}_{m+1}}-2{{r}^{2}}{{d}_{m}}+{{r}^{2}}{{d}_{m-1}}=2{{x}_{k}}r\left( {{d}_{m-1}}- {{d}_{m}} \right) -2\nu {{d}_{m}}-2\theta \left( {{d}_{m-1}}-{{d}_{m}} \right) -2 c_m , \end{aligned}$$

leads to

$$\begin{aligned} {{{\varvec{z}}}_{k+1}}&= {{{\varvec{z}}}_{k}}+\Delta x\left( \begin{matrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0\\ -2\nu &{}\quad 2{{x}_{k}}-2\theta /r &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ -2 &{}\quad 0 &{}\quad -2\nu &{}\quad 2{{x}_{k}}-2\theta /r \\ \end{matrix} \right) {{{\varvec{z}}}_{k}} \\&= {{{\varvec{z}}}_{k}}+\Delta x~{\varvec{A}}\left( {{x}_{k}} \right) {{{\varvec{z}}}_{k}}+ \Delta x \left( \begin{matrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad -2\theta /r &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad -2\theta /r \\ \end{matrix} \right) {{{\varvec{z}}}_{k}} . \end{aligned}$$

By a procedure similar to the application of Eqs. (31)–(33) in Sect. 2,

$$\begin{aligned} \left|\frac{\partial y^a_\nu (x)}{\partial \nu } - \frac{\partial H_\nu (x)}{\partial \nu } \right|\le O\left( \frac{1}{\sqrt{a}} \right) , \end{aligned}$$

where the right hand side is independent of x and \(\nu \) for these parameters in any bounded interval.

The sharpness of the bound can be proved by contradiction: Suppose that \({\partial y^a_{\nu }(x)}/{\partial \nu } -\partial H_\nu (x)/{\partial \nu } = O\left( b(a)\right) \) where O(b(a)) is tighter than \(O\left( 1/\sqrt{a}\right) \). Integrating this difference,

$$\begin{aligned} \int _{\nu _1}^{\nu _2} \left[ \frac{\partial y^a_{\nu }(x)}{\partial \nu } - {\frac{\partial H_\nu (x)}{\partial \nu }} \right] \mathrm{d}\nu = \left[ y^a_{\nu _2}(x) - H_{\nu _2}(x)\right] - \left[ y^a_{\nu _1}(x) - H_{\nu _1}(x)\right] =O\left( b(a)\right) . \end{aligned}$$

Choosing \(\nu _1=0\) and \(\nu _2=2\), arbitrary x, arbitrarily large a such that \(a - x\sqrt{2a}\) is integer, and using (34),

$$\begin{aligned} \left[ y^a_2(x) - H_2(x)\right] - \left[ y^a_0(x) - H_0(x)\right] = \frac{2x\sqrt{2}}{\sqrt{a}} - (1 - 1) = \frac{2x\sqrt{2}}{\sqrt{a}} , \end{aligned}$$

which is a contradiction. This completes the proof of Theorem 4. \(\square \)

Convergence of Zeros

Theorem 5

For fixed real x and positive \(a\rightarrow \infty \), let \(~n\triangleq ~\lceil a-x\sqrt{2a}\rceil \). For a convergent sequence of zeros \({{\nu }_{n}}\rightarrow \nu \) such that \(c_{n}^{a}\left( {{\nu }_{n}} \right) =0\), the limit \(\nu \) is a zero of the Hermite function, \({{H}_{\nu }}\left( x \right) =0\), satisfying \(\nu ={{\nu }_{n}}+O\left( {1}/{\sqrt{a}}\; \right) \). Conversely, for a positive real zero \(\nu \) of the Hermite function, there is a convergent sequence \({{\nu }_{n}}\rightarrow \nu \) of zeros of \(c_{n}^{a}\) satisfying \(\nu ={{\nu }_{n}}+O\left( {1}/{\sqrt{a}}\; \right) \).

Fig. 3: The Hermite function must have a zero near the Charlier polynomial zero

Proof

Define \({{w}_{n}}\left( z \right) \triangleq {{\left( \sqrt{2a} \right) }^{n}}c_{n}^{a}(z)\) and note that \({{w}_{n}}\) has the same zeros in z as \(c_{n}^{a}\). The proof is based on the well-known fact that the zeros of a Charlier polynomial are real, simple, and positive [8]. Taylor-expanding \({{w}_{n}}\left( z \right) \) around one of its zeros \(z={{\nu }_{n}}\), writing \({w'_n}({{\nu }_{n}})\) for \(\partial {{w}_{n}}\left( z \right) /\partial z\) at \(z={{\nu }_{n}}\),

$$\begin{aligned} {{w}_{n}}\left( {{\nu }_{n}}+\varepsilon \right) ={{w}_{n}}\left( {{\nu }_{n}} \right) +\varepsilon {{w'_n}}({{\nu }_{n}})+O\left( {{\varepsilon }^{2}} \right) =\varepsilon \left( w'_{n}({{\nu }_{n}}) +O\left( \varepsilon \right) \right) \triangleq \varepsilon W\left( {{\nu }_{n}},\varepsilon \right) . \end{aligned}$$

Since the zeros of a Charlier polynomial are simple, \({{w'}_{n}}({{\nu }_{n}})\ne 0\), so the expression \(W\left( {{\nu }_{n}},\varepsilon \right) \) must be nonzero for \(\varepsilon \) in some sufficiently small interval \(I=[-\delta ,\delta ]\), where \(0 < \delta \le {{\nu }_{n}}\). Assume that \({w'_{n}}({{\nu }_{n}})>0\); the case \(w'_{n}({{\nu }_{n}})<0\) is treated in an analogous way. Let \(c\triangleq \inf _{\varepsilon \in I} \,W\left( {{\nu }_{n}},\varepsilon \right) \). Figure 3 illustrates \(\vert (z-\nu _n)c\vert \) as a lower bound for \(\vert w_n(z)\vert \). By Theorem 1, due to the uniform convergence, for \(z\in [{{\nu }_{n}}- \delta ,{{\nu }_{n}}+\delta ]\), there is a b, independent of n and z, such that

$$\begin{aligned} |{{H}_{z}}\left( x \right) -{{w}_{n}}\left( z \right) |\le \frac{b}{\sqrt{a}} . \end{aligned}$$

Choose \(\varepsilon \triangleq \left( 1+b \right) /\left( c\sqrt{a} \right) \), which satisfies \(\varepsilon <\delta \) for sufficiently large a. For \(z = {{\nu }_{n}}+\varepsilon \),

$$\begin{aligned} {{H}_{z}}\left( x \right) \ge w_n\left( z\right) - \frac{b}{\sqrt{a}} = \varepsilon W\left( {{\nu }_{n}},\varepsilon \right) - \frac{b}{\sqrt{a}}\ge \frac{1+b}{c\sqrt{a}}c-\frac{b}{\sqrt{a}}=\frac{1}{\sqrt{a}}>0 . \end{aligned}$$

Similarly, \(z = {{\nu }_{n}}-\varepsilon \) implies that \({{H}_{z}}\left( x \right) <0\). Since \({{H}_{z}}(x)\) is an entire function and changes sign for z in \([{{\nu }_{n}}- \varepsilon ,{{\nu }_{n}}+\varepsilon ]\), it must have a zero there. By letting \(a\rightarrow \infty \), the theorem is proved in one direction. For the reverse direction, switch the roles of w and H. Assume that \({{H}_{\nu }}\left( x \right) =0\). Since \({{H}_{0}}(x)\equiv 1\), \(\nu \) cannot be zero. Expand \({{H}_{z}}(x)\) around \(z=\nu \), writing \({\partial {{H}_{\nu }}(x)}/{\partial \nu }\) for \(\partial {{H}_{z}}\left( x \right) /\partial z\) at \(z=\nu \),

$$\begin{aligned} {{H}_{\nu +\varepsilon }}\left( x \right) ={{H}_{\nu }}\left( x \right) +\varepsilon {\partial {{H}_{\nu }}(x)}/{\partial \nu }+O\left( {{\varepsilon }^{2}} \right) =\varepsilon \left( {\partial {{H}_{\nu }}(x)}/{\partial \nu }+O\left( \varepsilon \right) \right) \triangleq \varepsilon Z\left( \nu ,\varepsilon \right) . \end{aligned}$$

Let \(x(\nu )\) be defined as the \(p\)th zero in x of \(H_\nu (x)=0\). It is known that \(x(\nu )\) is a strictly monotonic function of \(\nu \) for \(\nu \ge 0\), so \(\mathrm{d}x/\mathrm{d}\nu \ne 0\) [21]. Differentiating the equation with respect to \(\nu \),

$$\begin{aligned} \frac{\partial {{H}_{\nu }}(x)}{\partial \nu } + \frac{\partial {{H}_{\nu }}(x)}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}\nu } = 0 , \end{aligned}$$

so obviously, \({\partial {{H}_{\nu }}(x)}/{\partial \nu } = 0\) if and only if \({\partial {{H}_{\nu }}(x)}/{\partial x} = 0\). But if the latter derivative is zero, then \({{H}_{\nu -1}}\left( x \right) =0\) by the derivative rule (9), and according to the three-term recurrence for Hermite functions (8), all derivatives of \({{H}_{z}}(x)\) would be zero at \(z=\nu \), entailing that H, being analytic, would be identically zero. In other words, all positive real zeros \(\nu \) of \(H_{\nu }(x)\) are simple.

Consequently, \({\partial {{H}_{\nu }}(x)}/{\partial \nu } \ne 0\), and similarly to the first half of the proof, \(Z\left( \nu ,\varepsilon \right) \) must be nonzero for \(\varepsilon \) in some sufficiently small interval. It follows that \({{w}_{n}}\left( z \right) \) must be zero for some \(z\in [\nu -\varepsilon ,\nu +\varepsilon ]\), where \(\varepsilon =O\left( 1/\sqrt{a} \right) \). \(\square \)
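At \(x=0\) the zeros of \(H_\nu (0)=2^{\nu }\sqrt{\pi }/\Gamma ((1-\nu )/2)\) in \(\nu \) are exactly the odd positive integers, the poles of \(\Gamma ((1-\nu )/2)\), so Theorem 5 predicts that the smallest zero in \(\nu \) of \(C_{\lceil a\rceil }(\nu ;a)\) approaches 1 as \(a\rightarrow \infty \). An illustrative bisection sketch; the function names, bracketing interval, and iteration count are our choices.

```python
import math

def charlier(n, x, a):
    """C_n(x; a) via the three-term recurrence."""
    c_prev, c = 1.0, 1.0 - x / a
    for k in range(1, n):
        c_prev, c = c, ((k + a - x) * c - k * c_prev) / a
    return c if n >= 1 else c_prev

def zero_near_one(a):
    """Bisect C_A(nu; a) in nu on [0.2, 1.8], with A = ceil(a)."""
    A = math.ceil(a)
    lo, hi = 0.2, 1.8           # C_A is positive at 0.2 and negative at 1.8
    flo = charlier(A, lo, a)
    for _ in range(60):
        mid = (lo + hi) / 2
        fmid = charlier(A, mid, a)
        if (flo > 0) == (fmid > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return (lo + hi) / 2

for a in (500.0, 5000.0):
    print(a, zero_near_one(a))  # approaches 1 as a grows
```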

Conclusions

It has been shown that the scaled Charlier polynomials, their scaled derivatives, and zeros converge to the Hermite function, the derivative of the Hermite function, and the zeros of the Hermite function, respectively. The convergence rates are inversely proportional to the square root of the order of the Charlier polynomial.

The proof technique used for showing the convergence of Charlier polynomials and their first derivatives is applicable to higher derivatives of the polynomials. It is possible that the Charlier polynomials can be extended to an entire function; such an extension could simplify the convergence proofs, but finding it appears nontrivial.