1 Introduction

Integrals defined with the Hermite weight on the real semiaxis appear in many applications (see, for instance, [1, 2, 7, 17]). Such integrals can be computed using modified moments via product rules. In [6], Gautschi observed a severe loss of accuracy in computing such modified moments, obtaining only three correct decimal digits already for a problem of size 6, and in [7], Gautschi described a way to overcome this problem by computing the modified moments in extended precision arithmetic.

In this paper, we revisit the problem of computing the modified moments for the system of orthonormal Laguerre polynomials on the real semiaxis with the Hermite weight, also called half-range Hermite weight. We propose a new method to compute these modified moments, based on the construction of the null-space of a rectangular matrix derived from the three-term recurrence relation of the system of orthonormal Laguerre polynomials. It is shown that the proposed algorithm computes the modified moments with high relative accuracy in floating point arithmetic, thereby avoiding the use of extended precision arithmetic. Numerical examples show the effectiveness of the proposed approach.

The paper is organized as follows. The basic properties of Laguerre polynomials are introduced in Sect. 2. The main results of the paper are in Sects. 3 and 4. There, we show that the modified moments can be computed via the null–space of a particular matrix, derived from the three-term recurrence formulas of the Laguerre polynomials, and we give an algorithm of linear complexity to compute them. In Sect. 5, we describe an application of the modified moments to a product rule, and we show the numerical efficiency of this approach in Sect. 6. We end with a few concluding remarks in Sect. 7 and an Appendix containing the Matlab codes of the algorithms described in the manuscript.

2 Orthonormal Laguerre polynomials

The orthonormal Laguerre polynomials are orthogonal polynomials satisfying the following three-term recurrence relation [16, p. 101]:

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal{L}_{0}(x)=1, \\ \beta _1 \mathcal{L}_{1}(x)= (x-\alpha _1)\mathcal{L}_{0}(x),\\ \beta _{\ell }\mathcal{L}_{\ell }(x)= (x- \alpha _{\ell }) \mathcal{L}_{\ell -1}(x)-\beta _{\ell -1}\mathcal{L}_{\ell -2}(x), \;\; \ell \ge 2, \end{array} \right. \end{aligned}$$
(1)

with \( \beta _{\ell } = -\ell \) and \( \alpha _{\ell }= 2 \ell -1, \) for \( \ell =1,2,\ldots . \)
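As an illustration, the recurrence (1) can be turned directly into an evaluation routine. The sketch below is in Python rather than the Matlab used in the paper's Appendix, and the function name is ours.

```python
def laguerre_seq(n, x):
    """Values L_0(x), ..., L_n(x) of the orthonormal Laguerre polynomials,
    generated by the three-term recurrence (1) with alpha_l = 2l - 1 and
    beta_l = -l."""
    vals = [1.0]                               # L_0(x) = 1
    if n >= 1:
        vals.append((x - 1.0) / (-1.0))        # beta_1 L_1 = (x - alpha_1) L_0
    for l in range(2, n + 1):
        alpha, beta, beta_prev = 2 * l - 1, -l, -(l - 1)
        vals.append(((x - alpha) * vals[l - 1] - beta_prev * vals[l - 2]) / beta)
    return vals
```

For instance, `laguerre_seq(2, x)` returns the values of \(1\), \(1-x\), and \((x^2-4x+2)/2\), i.e., the classical Laguerre polynomials, which are already orthonormal for this weight.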

They are orthonormal in the interval \( [0, \infty )\) with respect to the weight function \( \omega (x)= e^{-x} \), i.e.,

$$ \int _{0}^{\infty }e^{-x} \mathcal{L}_{i}(x)\mathcal{L}_{j}(x)dx = \delta _{i,j}. $$

Let us denote by \( J_{\ell } \) the symmetric tridiagonal matrix of order \( \ell ,\)

$$ J_{\ell } =\left[ \begin{array}{ccccc} \alpha _{1} &{} \beta _1 &{} &{} &{} \\ \beta _1 &{} \alpha _2 &{} \beta _2 &{} &{} \\ &{} \beta _2 &{} \ddots &{}\ddots &{} \\ &{} &{} \ddots &{} \alpha _{\ell -1} &{} \beta _{\ell -1} \\ &{} &{} &{} \beta _{\ell -1} &{}\alpha _{\ell } \end{array} \right] , $$

with eigenvalue decomposition \( J_{\ell } \!=\! Z_{\ell } \Lambda _{\ell } Z_{\ell }^T, \) where \( \Lambda _{\ell }\!=\!\text { diag} \left( \!\lambda _1^{(\ell )},\lambda _2^{(\ell )},\ldots , \lambda _{\ell }^{(\ell )}\!\right) \) and \(Z_{\ell } =[z_{i,j}^{(\ell )}]_{i,j=1}^{\ell }\) orthogonal, i.e., \(Z_\ell Z_\ell ^T=I_\ell \). It turns out that the \(\lambda _k^{(\ell )}\) are the zeros of \(\mathcal{L}_{\ell }(x),\) i.e., \(\mathcal{L}_{\ell }(\lambda _k^{(\ell )})=0.\) Denoting by \( \textbf{z}_k^{(\ell )} \) the k–th column of \(Z_{\ell }\), \( k=1,2,\ldots ,\ell ,\) i.e.,

$$ Z_{\ell }= \left[ \begin{array}{ccccc} \textbf{z}_1^{(\ell )},&\textbf{z}_2^{(\ell )},&\cdots ,&\textbf{z}_{\ell -1}^{(\ell )},&\textbf{z}_{\ell }^{(\ell )} \end{array}\right] , $$

then [10, 11]

$$\begin{aligned} \textbf{z}_k^{(\ell )} = \frac{\hat{ \textbf{z}}_k^{(\ell )}}{\Vert \hat{ \textbf{z}}_k^{(\ell )}\Vert _2},\;\; \text { with}\;\; \hat{ \textbf{z}}_k^{(\ell )}= \pm \left[ \begin{array}{c} \mathcal{L}_{0}(\lambda _{k}^{(\ell )}) \\ \mathcal{L}_{1}(\lambda _{k}^{(\ell )}) \\ \vdots \\ \mathcal{L}_{\ell -2}(\lambda _{k}^{(\ell )}) \\ \mathcal{L}_{\ell -1}(\lambda _{k}^{(\ell )}) \end{array}\right] . \end{aligned}$$
(2)

The eigenvector matrix \( Z_{\ell } \) can be computed with the algorithm of multiple relatively robust representations [5, 15].

The nodes of the \(\ell \)–point Gaussian quadrature rule with respect to the Laguerre weight [8, 10] are given by \(\lambda _k^{(\ell )},\) \(k=1,\ldots ,\ell ,\) and, since

$$\begin{aligned} \mu _{0}:=\int _{0}^{\infty }e^{-x} dx = 1, \end{aligned}$$

the weights \(\omega _k^{(\ell )}\) can be simplified to

$$\begin{aligned} \omega _k^{(\ell )} = \left( {\textbf{z}}_k^{(\ell )}(1)\right) ^{2} =\left( \sum _{j=0}^{\ell -1} \mathcal{L}_{j}^2(\lambda _{k}^{(\ell )})\right) ^{-1 }, \quad k=1,2,\ldots ,\ell . \end{aligned}$$
(3)
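The nodes and weights can be reproduced in a few lines. The following Python sketch is only a stand-in for the symmetric tridiagonal eigensolver route of the text: it locates the zeros of \(\mathcal{L}_{\ell }\) by bisection on a fine grid (a heuristic adequate for small \(\ell \); all names are ours) and then applies formula (3) with \(\mu _0 = 1\).

```python
import math

def gauss_laguerre(ell, tol=1e-14):
    """Nodes and weights of the ell-point Gauss-Laguerre rule: the nodes are
    the zeros of the orthonormal L_ell, found by bisection, and the weights
    follow from (3)."""
    def L(k, x):                       # orthonormal Laguerre value via (1)
        p_prev, p = 1.0, 1.0 - x
        if k == 0:
            return p_prev
        for l in range(2, k + 1):
            p_prev, p = p, ((x - (2 * l - 1)) * p + (l - 1) * p_prev) / (-l)
        return p
    hi = 4.0 * ell + 10.0              # the zeros of L_ell lie well below this
    grid = [i * hi / (200 * ell) for i in range(200 * ell + 1)]
    nodes = []
    for a, b in zip(grid, grid[1:]):
        if L(ell, a) * L(ell, b) < 0:  # a sign change brackets one zero
            while b - a > tol:
                mid = 0.5 * (a + b)
                if L(ell, a) * L(ell, mid) <= 0:
                    b = mid
                else:
                    a = mid
            nodes.append(0.5 * (a + b))
    weights = [1.0 / sum(L(j, lam) ** 2 for j in range(ell)) for lam in nodes]
    return nodes, weights
```

For \(\ell =2\) this reproduces the classical nodes \(2\mp \sqrt{2}\) and weights \((2\pm \sqrt{2})/4\), and the rule integrates polynomials of degree up to \(2\ell -1\) exactly.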

Furthermore, the following relation holds [16, p. 102]

$$\begin{aligned} \frac{d}{dx} \mathcal{L}_{\ell }(x) =\frac{\ell }{x} \left( \mathcal{L}_{\ell }(x)-\mathcal{L}_{\ell -1}(x) \right) . \end{aligned}$$
(4)

3 Modified moments

Let us first consider the modified moments

$$ \hat{\mathcal{M}}_\ell =\int _{0}^{\infty }e^{-x^2} \hat{\mathcal{L}}_{\ell }(x)dx, \qquad \ell =0,1,2,\ldots , $$

where \(\hat{\mathcal{L}}_{\ell }(x)\) is the monic Laguerre polynomial of degree \( \ell \). Such moments can be expressed as [7]

$$ \hat{\mathcal{M}}_\ell = \frac{(-1)^{\ell } \ell !}{2} \sum _{i=0}^{\ell } \hat{\sigma }_i^{(\ell )},\;\;\text { with}\;\;\hat{\sigma }_i^{(\ell )}= (-1)^i \frac{\ell !\Gamma (\frac{ i+1}{2})}{(\ell -i)! i!^2},\qquad \ell =0,1,2,\ldots , $$

where \( \Gamma \) is the gamma function [20]. The terms in the above sum can be generated recursively, the even-indexed ones from the initial value \(\hat{\sigma }_0^{(\ell )}= \sqrt{\pi }\) and the odd-indexed ones from the initial value \(\hat{\sigma }_1^{(\ell )}= -\ell .\) An algorithm for computing \( \hat{\mathcal{M}}_\ell , \; \ell =0,1,\ldots ,\) was described in [6]. Unfortunately, a severe loss of accuracy was observed, since only three correct decimal digits were obtained for \(\hat{\mathcal{M}}_6. \) To overcome this loss of accuracy, Gautschi proposed in [7] to compute the modified moments in extended precision arithmetic.

Here, we focus on the modified moments

$$\begin{aligned} \mathcal{M}_\ell =\int _{0}^{\infty }e^{-x^2} \mathcal{L}_{\ell }(x)dx, \qquad \ell =0,1,2,\ldots , \end{aligned}$$
(5)

where \({\mathcal{L}}_{\ell }(x)\) is the normalized Laguerre polynomial of degree \( \ell \) defined in (1).

Since the \( \mathcal{L}_{\ell }(x)\) are orthonormal polynomials, the sequence \(\left\{ \mathcal{M}_{\ell } \right\} _{\ell =0}^{\infty } \) tends to zero as \({\ell }\rightarrow \infty \) [6, 7]. Similarly to \(\hat{\mathcal{M}}_\ell ,\) these moments can be expressed as

$$\begin{aligned} \mathcal{M}_\ell = \frac{1}{2} \sum _{i=0}^{\ell }\sigma _i^{(\ell )},\;\;\textrm{ with}\;\;\sigma _i^{(\ell )}=(-1)^{i} \frac{\ell !\Gamma (\frac{ i+1}{2})}{(\ell -i)! i!^2}, \qquad \ell =0,1,2,\ldots . \end{aligned}$$
(6)

Since

$$ \Gamma \left( \frac{ i+1}{2}\right) = \left\{ \begin{array}{ll} \dfrac{ i!\sqrt{\pi }}{{2^i}\, \left( \frac{i}{2}\right) !}, &{}\textrm{ for } \;i \;\; \textrm{ even}, \\ \\ \left( \dfrac{ i-1}{2}\right) !, &{} \textrm{ for } \; i \;\; \textrm{ odd}, \end{array} \right. $$

it follows that

$$\begin{aligned} \mathcal{M}_\ell = \frac{1}{2} \sum _{i=0}^{\ell } \sigma _i^{(\ell )}=\frac{1}{2} \left( \Sigma ^{(\ell )}_1+ \Sigma ^{(\ell )}_2\right) , \end{aligned}$$
(7)

where \(\Sigma ^{(\ell )}_1\) is the sum of the \( \sigma ^{(\ell )}_i \) for i even, i.e., \(\Sigma ^{(\ell )}_1= \sum _{i=0}^{\lfloor \frac{\ell }{2} \rfloor } \sigma ^{(\ell )}_{2i}, \) and \(\Sigma ^{(\ell )}_2\) is the sum of the \( \sigma ^{(\ell )}_i \) for i odd, i.e., \(\Sigma ^{(\ell )}_2= \sum _{i=0}^{\lfloor \frac{\ell -1}{2} \rfloor } \sigma ^{(\ell )}_{2i+1}, \) with \(\lfloor \eta \rfloor \) denoting the largest integer smaller than or equal to \(\eta \in {\mathbb {R}}.\)

The even and the odd \( \sigma ^{(\ell )}_i\) are recursively and independently computed as follows:

$$\begin{aligned} \sigma ^{(\ell )}_{i+2}= \sigma ^{(\ell )}_{i} \frac{(\ell -i)(\ell -i-1)}{2(i+2)^2(i+1)},\qquad i=0,1\ldots , \ell -2, \end{aligned}$$
(8)

with initial values \( \sigma ^{(\ell )}_0 = \sqrt{\pi } \) and \( \sigma ^{(\ell )}_1 = -\ell . \)

Observe that all the terms in \(\Sigma ^{(\ell )}_1\) are positive and all the terms in \(\Sigma ^{(\ell )}_2\) are negative (with the exception of \(\sigma ^{(\ell )}_1 =0\) when \( \ell =0\)). Therefore, both \(\Sigma ^{(\ell )}_1\) and \(\Sigma ^{(\ell )}_2\) can be computed with high relative accuracy.

Since \(\lim _{\ell \rightarrow \infty } \mathcal{M}_\ell =0, \) we have \( \Sigma ^{(\ell )}_1+\Sigma ^{(\ell )}_2 \rightarrow 0\) as \( \ell \rightarrow \infty ,\) while \(\Sigma ^{(\ell )}_1\) and \(-\Sigma ^{(\ell )}_2 \) themselves grow with \( \ell .\) Hence, although \(\Sigma ^{(\ell )}_1\) and \(\Sigma ^{(\ell )}_2 \) are computed with high relative accuracy, a severe loss of accuracy occurs in computing \(\mathcal{M}_\ell =\frac{1}{2}\left( \Sigma ^{(\ell )}_1+\Sigma ^{(\ell )}_2\right) \) as \( \ell \) increases, due to numerical cancellation (see the blue plot in Fig. 1).
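This cancellation is easy to reproduce. The Python sketch below (our naming; the paper's MM_1.m is the Matlab counterpart) accumulates the two sums of (7) by the recursion (8):

```python
import math

def moment_by_sums(ell):
    """Modified moment M_ell via (6)-(8): the even-index and odd-index chains
    are each accumulated with high relative accuracy, but their combination
    suffers from cancellation as ell grows."""
    sigma = [0.0] * (ell + 1)
    sigma[0] = math.sqrt(math.pi)               # sigma_0
    if ell >= 1:
        sigma[1] = -float(ell)                  # sigma_1
    for i in range(ell - 1):                    # recursion (8)
        sigma[i + 2] = sigma[i] * (ell - i) * (ell - i - 1) \
            / (2 * (i + 2) ** 2 * (i + 1))
    Sigma1 = sum(sigma[0::2])                   # even terms, all positive
    Sigma2 = sum(sigma[1::2])                   # odd terms, all nonpositive
    return 0.5 * (Sigma1 + Sigma2), Sigma1, Sigma2
```

Both returned sums are accurate, yet for moderate \(\ell \) they agree to many leading digits, so their mean loses a corresponding number of digits.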

Fig. 1
figure 1

Plot, on a logarithmic scale, of the modified moments \( {\mathcal{M}}_{k},\; k=0,1,\ldots , 400,\) in absolute value, computed by using (8) (Matlab function MM_1.m in the Appendix) in double precision (denoted by “\({\times }\)”), and in extended precision with 200 digits (denoted by “\(*\)”)

Below, we propose a different method to compute the modified moments in floating point arithmetic, without requiring any extended precision.

Let us define

$$ \mathcal{N}_\ell =\int _{0}^{\infty }e^{-x^2} x\mathcal{L}_{\ell }(x)dx, \qquad \ell =0,1,2,\ldots . $$

Then, multiplying both sides of (1) by \( e^{-x^2}\), respectively by \( xe^{-x^2}\), and integrating over the interval \( [0, \infty )\), the following system of recurrence relations for \(\mathcal{M}_\ell \) and \(\mathcal{N}_\ell \) is obtained:

$$\begin{aligned} \mathcal{M}_{0}= & {} \frac{\displaystyle \sqrt{\pi }}{\displaystyle 2}, \quad \mathcal{N}_{0}=\frac{\displaystyle 1}{\displaystyle 2}, \end{aligned}$$
(9)
$$\begin{aligned} \mathcal{M}_{1}= & {} -\frac{\displaystyle 1}{\displaystyle 2} +\frac{\sqrt{\displaystyle \pi }}{\displaystyle 2}, \quad \mathcal{N}_{1}= \frac{\displaystyle 1}{\displaystyle 2} -\frac{\sqrt{\displaystyle \pi }}{\displaystyle 4},\end{aligned}$$
(10)
$$\begin{aligned} \ell \mathcal{M}_{\ell }= & {} (2\ell -1)\mathcal{M}_{\ell -1} - \mathcal{N}_{\ell -1} -(\ell -1)\mathcal{M}_{\ell -2}, \;\; \ell \ge 2, \end{aligned}$$
(11)
$$\begin{aligned} \ell \mathcal{N}_{\ell }= & {} -\frac{\displaystyle \ell }{\displaystyle 2}\mathcal{M}_{\ell -1} +\frac{\displaystyle \ell -1}{\displaystyle 2}\mathcal{M}_{\ell -2}+(2\ell -1)\mathcal{N}_{\ell -1}- (\ell -1) \mathcal{N}_{\ell -2}. \end{aligned}$$
(12)

Unfortunately, the straightforward implementation of (11) and (12) in Matlab with double precision (Matlab function MM_2.m in the Appendix) turns out to be unstable, as illustrated in Fig. 2.
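For completeness, a direct Python transcription of (9)–(12) (our analogue of the paper's MM_2.m; exact formulas are used only for the starting values) reads:

```python
import math

def moments_forward(n):
    """Modified moments M_0..M_n via the coupled recurrences (9)-(12).
    The forward recursion is unstable, so the results are trustworthy
    only for modest n."""
    sp = math.sqrt(math.pi)
    M = [sp / 2, sp / 2 - 0.5]                    # (9)-(10)
    N = [0.5, 0.5 - sp / 4]
    for l in range(2, n + 1):
        M.append(((2 * l - 1) * M[l - 1] - N[l - 1]
                  - (l - 1) * M[l - 2]) / l)       # (11)
        N.append((-0.5 * l * M[l - 1] + 0.5 * (l - 1) * M[l - 2]
                  + (2 * l - 1) * N[l - 1] - (l - 1) * N[l - 2]) / l)  # (12)
    return M[:n + 1]
```

For small indices the values match the closed forms, e.g. \(\mathcal{M}_2 = 5\sqrt{\pi }/8 - 1\) and \(\mathcal{M}_3 = 7\sqrt{\pi }/8 - 19/12\); the instability only becomes visible for larger indices, as in Fig. 2.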

Fig. 2
figure 2

Plot, on a logarithmic scale, of the entries, in absolute value, of the modified moments \( {\mathcal{M}}_{k},\; k=0,1,\ldots , 400,\) computed by using the implementation of (11) and (12) (Matlab function MM_2.m in the Appendix) in double precision (denoted by “\({\times }\)”), and in extended precision with 200 digits (denoted by “\(*\)”)

We now consider a new method to compute the sequence \( \left\{ \mathcal{M}_{\ell } \right\} _{\ell =0}^{\infty } \). Let

$$ \textbf{m}_\ell = \left[ \begin{array}{c} \mathcal{M}_{0}\\ \mathcal{M}_{1}\\ \vdots \\ \mathcal{M}_{\ell -1}\\ \mathcal{M}_{\ell } \end{array} \right] \in {\mathbb {R}}^{\ell +1} \quad \text { and} \quad \textbf{n}_\ell = \left[ \begin{array}{c} \mathcal{N}_{0}\\ \mathcal{N}_{1}\\ \vdots \\ \mathcal{N}_{\ell -1}\\ \mathcal{N}_{\ell } \end{array} \right] \in {\mathbb {R}}^{\ell +1}, \; \ell =0,1,2,\ldots , $$

and define

$$\begin{aligned} A_\ell= & {} \left[ \begin{array}{rrrrrrr} 1 &{} -1 &{} &{} &{} &{} &{}\\ -1 &{} 3 &{} -2 &{} &{} &{} &{}\\ &{} -2 &{} 5 &{} -3 &{} &{} &{}\\ &{} &{}-3 &{} \ddots &{} \ddots &{} &{}\\ &{} &{} &{} \ddots &{} 2\ell -3 &{} -\ell +1 &{}\\ &{} &{} &{} &{}-\ell +1 &{} 2\ell -1 &{} -\ell \\ \end{array} \right] \in {\mathbb {R}}^{\ell \times (\ell +1)},\\ B_\ell= & {} \left[ \begin{array}{rrrrrrr} 1 &{} &{} &{} &{} &{} \\ -1 &{} 2 &{} &{} &{} &{} \\ &{} -2 &{} 3 &{} &{} &{} \\ &{} &{}-3 &{} \ddots &{} &{} \\ &{} &{} &{} \ddots &{} \ell -1 &{} \\ &{} &{} &{} &{}-\ell +1 &{} \ell \end{array} \right] \in {\mathbb {R}}^{\ell \times \ell }. \end{aligned}$$

Then, the recurrence relations (11) and (12) can be written in matrix form, respectively, as,

$$\begin{aligned} A_\ell \textbf{m}_\ell= & {} \textbf{n}_{\ell -1},\end{aligned}$$
(13)
$$\begin{aligned} A_{\ell -1} \textbf{n}_{\ell -1}= & {} \frac{1}{2} B_{\ell -1}\textbf{m}_{\ell -2}. \end{aligned}$$
(14)

Multiplying both sides of (13) by \( A_{\ell -1}\) yields

$$ A_{\ell -1}A_\ell \textbf{m}_\ell = A_{\ell -1} \textbf{n}_{\ell -1}, $$

and replacing \(A_{\ell -1} \textbf{n}_{\ell -1}\) by \( A_{\ell -1}A_\ell \textbf{m}_\ell \) in (14), we obtain

$$ A_{\ell -1} A_\ell \textbf{m}_\ell =\frac{1}{2} B_{\ell -1}\textbf{m}_{\ell -2}, $$

i.e.,

$$\begin{aligned} 2 B^{-1}_{\ell -1} A_{\ell -1} A_\ell \textbf{m}_\ell =\textbf{m}_{\ell -2} \Leftrightarrow M_{\ell -1} \textbf{m}_\ell =\textbf{0}, \end{aligned}$$
(15)

where

$$ M_{\ell -1} =2 B^{-1}_{\ell -1} A_{\ell -1} A_\ell - [I_{\ell -1}\; \textbf{0}\;\textbf{0}]. $$

It is easy to verify that

$$ M_\ell = \left[ \begin{array}{rrrrrrrr} 3 &{} -8 &{} 4 &{} &{} &{} &{}\\ -2 &{} 9 &{} -14 &{} 6 &{} &{} &{}\\ &{} -4 &{} 15 &{} -20 &{} 8 &{} &{}\\ &{} &{}-6 &{} \ddots &{} \ddots &{} \ddots &{}\\ &{} &{} &{} \ddots &{} 6\ell -9 &{} -6\ell +4 &{} 2\ell &{} \\ &{} &{} &{} &{}-2\ell +2 &{} 6\ell -3 &{} -6\ell -2 &{} 2\ell +2\\ \end{array} \right] . $$

Hence, by (15), \( \textbf{m}_{\ell +1}\) belongs to the right null–space of \(M_{\ell }.\) Since \(M_{\ell }\) has full row–rank, its right null–space has dimension two. In the next section we describe how to retrieve the vector \( \textbf{m}_{\ell +1}\) from the right null–space of \(M_{\ell }.\)

Fig. 3
figure 3

Plot of the ratio \( \kappa _2 (M_k)/k\) for \(k=1,\ldots , 1000\), implying that \( \kappa _2 (M_k)\) grows linearly with k

Remark 1

In Fig. 3, the ratio \( \kappa _{2} (M_k)/k\) between the condition number of \(M_k \) in the 2–norm and k is plotted. It can be noticed that this ratio tends to a constant, which implies that \( \kappa _2 (M_k)\) grows linearly with k for large k. It follows from [19] that \(\kappa _2(M_k)\) is also the condition number of the right null–space of the matrix \(M_k\), which implies that the relative errors in the computation of this null–space are bounded by a quantity of the order of k times the machine precision. In fact, since \(M_k\) is a structured matrix, we can expect an even smaller sensitivity when the structure is exploited in the algorithm. This will be illustrated in the numerical results shown in Sect. 6.

Remark 2

Defining the diagonal sign matrices \(D_k=\text {diag}(d_1,d_2,\ldots , d_{k})\) for \(k= \ell \) and \(k=\ell +2\), where \(d_{i}=(-1)^{i+1}, \; i\ge 1\), one can show that the matrix \(M^{(D)}_\ell = D_{\ell }M_\ell D_{\ell +2} \) is totally nonnegative. Therefore, its right null–space can be computed with high relative accuracy [14].
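Total nonnegativity can be checked by brute force for small sizes. The Python sketch below builds \(M_\ell \) from the banded row pattern displayed above, applies the sign scaling of this remark, and tests every minor; the exhaustive check (all names ours) is feasible only for tiny \(\ell \) and is meant as a sanity check, not as part of any algorithm.

```python
import itertools

def det(mat):
    """Determinant by Laplace expansion along the first row (exact for the
    small integer matrices used here)."""
    n = len(mat)
    if n == 1:
        return mat[0][0]
    return sum((-1) ** j * mat[0][j]
               * det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(n))

def M_matrix(ell):
    """Banded matrix M_ell: row i (1-based) carries the entries
    -2(i-1), 6i-3, -(6i+2), 2i+2 in columns i-1,...,i+2."""
    M = [[0] * (ell + 2) for _ in range(ell)]
    for i in range(1, ell + 1):
        if i >= 2:
            M[i - 1][i - 2] = -2 * (i - 1)
        M[i - 1][i - 1] = 6 * i - 3
        M[i - 1][i] = -(6 * i + 2)
        M[i - 1][i + 1] = 2 * i + 2
    return M

def all_minors_nonnegative(ell):
    """Check total nonnegativity of D_ell M_ell D_(ell+2) exhaustively."""
    M = M_matrix(ell)
    S = [[(-1) ** i * M[i][j] * (-1) ** j for j in range(ell + 2)]
         for i in range(ell)]                 # d_i = (-1)^(i+1), here 0-based
    for k in range(1, ell + 1):
        for rows in itertools.combinations(range(ell), k):
            for cols in itertools.combinations(range(ell + 2), k):
                if det([[S[r][c] for c in cols] for r in rows]) < 0:
                    return False
    return True
```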

4 Computation of the null–space

In this section, we describe an algorithm to compute the sequence \(\mathcal{M}_{k}, \; k= 0,1, \ldots , \ell , \) from the right null–space of \(M_{\ell }. \) The null–space of \(M_{\ell } \) can easily be retrieved by reducing the matrix to lower triangular form by applying two sequences of \( \ell \) Givens rotations to the right. For the sake of simplicity, let \( M:= M_{\ell },\) \(\textbf{m}:= \textbf{m}_{\ell +1},\) and denote by \( \textbf{e}_i,\; i=1,\ldots , \ell +2, \) the vectors of the canonical basis of \( {\mathbb {R}}^{\ell +2}. \)

Let us initialise \(\tilde{M}^{(0)}:= M_{\ell }\) and consider the sequence of Givens rotations

$$ \tilde{G}_i= \left[ \begin{array}{ccc} I_{i} &{} &{} \\ &{}\begin{array}{@{}cc@{}} \tilde{c}_i &{} \tilde{s}_i \\ -\tilde{s}_i &{} \tilde{c}_i \end{array} &{} \\ &{} &{} I_{\ell -i} \end{array} \right] \in {\mathbb {R}}^{(\ell +2) \times (\ell +2)},\; i=1,2, \ldots ,\ell , $$

such that

$$ \left[ \begin{array}{cc} \tilde{c}_i &{} \tilde{s}_i \\ -\tilde{s}_i &{} \tilde{c}_i \end{array} \right] \left[ \begin{array}{c} \tilde{m}^{(i-1)}_{i,i+1}\\ \tilde{m}^{(i-1)}_{i,i+2 } \end{array} \right] = \left[ \begin{array}{c} \tilde{m}^{(i)}_{i,i+1}\\ 0 \end{array} \right] , $$

i.e.,

$$ \tilde{c}_i = \frac{\tilde{m}^{(i-1)}_{i,i+1} }{\sqrt{ \tilde{m}^{(i-1)^2}_{i,i+1}+ \tilde{m}^{(i-1)^2}_{i,i+2} }}, \quad {\tilde{s}_i} = \frac{\tilde{m}^{(i-1)}_{i,i+2} }{\sqrt{ \tilde{m}^{(i-1)^2}_{i,i+1}+ \tilde{m}^{(i-1)^2}_{i,i+2} }}. $$

Then,

$$ \tilde{M}^{(i)}:= \tilde{M}^{(i-1)}\tilde{G}^T_{i} $$

has its entry \((i,i+2)\) annihilated.

Since \( \tilde{G}_i\textbf{e}_1= \textbf{e}_1,\; i=1, \ldots ,\ell , \) the accumulated product

$$\begin{aligned} \tilde{Q} =\tilde{G}_{1} \tilde{G}_{2} \cdots \tilde{G}_{\ell -1}\tilde{G}_{\ell }, \end{aligned}$$

also satisfies

$$\begin{aligned} \tilde{Q} \textbf{e}_1 = \textbf{e}_1. \end{aligned}$$
(16)

Moreover, the last column of \( \tilde{M}^{(\ell )}= \tilde{M}^{(0)}\tilde{Q}^T\) is zero. Therefore, the last column of \(\tilde{Q},\) given by

$$ \tilde{Q} \textbf{e}_{\ell +2} = \left[ \begin{array}{c} 0 \\ (-1)^{\ell }\prod _{i=1}^{\ell }\tilde{s}_i \\ (-1)^{\ell -1} \tilde{c}_{1}\prod _{i=2}^{\ell }\tilde{s}_i\\ \vdots \\ -\tilde{c}_{\ell -3} \tilde{s}_{\ell } \tilde{s}_{\ell -1}\tilde{s}_{\ell -2}\\ \tilde{c}_{\ell -2} \tilde{s}_{\ell } \tilde{s}_{\ell -1}\\ - \tilde{c}_{\ell -1} \tilde{s}_{\ell } \\ \tilde{c}_{\ell } \\ \end{array} \right] , $$

belongs to the right null–space of \( M. \)

To compute a second vector of an orthogonal basis of the null–space of M, let \(\hat{M}^{(0)}:= \tilde{M}^{(\ell )}.\) Then, a second sequence of Givens rotations is chosen,

$$ \hat{G}_i= \left[ \begin{array}{ccc} I_{i-1} &{} &{} \\ &{}\begin{array}{@{}cc@{}} \hat{c}_i &{} \hat{s}_i \\ -\hat{s}_i &{} \hat{c}_i \end{array} &{} \\ &{} &{} I_{\ell -i+1} \end{array} \right] \in {\mathbb {R}}^{(\ell +2) \times (\ell +2)},\; i=1,2, \ldots ,\ell , $$

with

$$ \left[ \begin{array}{cc} \hat{c}_i &{} \hat{s}_i \\ -\hat{s}_i &{} \hat{c}_i \end{array} \right] \left[ \begin{array}{c} \hat{m}^{(i-1)}_{i,i}\\ \hat{m}^{(i-1)}_{i,i+1 } \end{array} \right] = \left[ \begin{array}{c} \hat{m}^{(i)}_{i,i}\\ 0 \end{array} \right] , $$

and

$$ \hat{c}_i = \frac{\hat{m}^{(i-1)}_{i,i} }{\sqrt{ \hat{m}^{(i-1)^2}_{i,i}+ \hat{m}^{(i-1)^2}_{i,i+1} }}, \quad {\hat{s}_i} = \frac{\hat{m}^{(i-1)}_{i,i+1} }{\sqrt{ \hat{m}^{(i-1)^2}_{i,i}+ \hat{m}^{(i-1)^2}_{i,i+1} }}. $$

The above sequence is applied to the right of the matrices \(\hat{M}^{(i)},\) such that

$$ \hat{M}^{(i)} = \hat{M}^{(i-1)}\hat{G}^T_{i} $$

has its entry \( (i,i+1)\) annihilated.

Since \( \hat{G}_{i} \textbf{e}_{\ell +2} =\textbf{e}_{\ell +2}, \; i=1,\ldots ,\ell ,\) then the accumulated product

$$\begin{aligned} \hat{Q} =\hat{G}_{1} \hat{G}_{2} \cdots \hat{G}_{\ell -1}\hat{G}_{\ell } \end{aligned}$$

also satisfies

$$\begin{aligned} \hat{Q} \textbf{e}_{\ell +2} = \textbf{e}_{\ell +2}. \end{aligned}$$
(17)

Hence, the last two columns of \( \hat{M}^{(\ell )}= \hat{M}^{(0)}\hat{Q}^T\) are zero and thus, the last two columns of \( Q:=\tilde{Q}\hat{Q},\) are an orthogonal basis for the right null–space of M. By (16) and (17),

$$\left[ \begin{array}{c|c}\textbf{v}_{1}&\textbf{v}_{2} \end{array}\right] := \tilde{Q}\hat{Q} \left[ \begin{array}{c|c}\textbf{e}_{\ell +1}&\textbf{e}_{\ell +2} \end{array}\right] = \left[ \begin{array}{@{}c|c@{}} \tilde{Q} \left[ \begin{array}{c} (-1)^{\ell }\prod _{i=1}^{\ell }\hat{s}_i \\ (-1)^{\ell -1} \hat{c}_{1}\prod _{i=2}^{\ell }\hat{s}_i\\ \vdots \\ -\hat{c}_{\ell -3} \hat{s}_{\ell } \hat{s}_{\ell -1}\hat{s}_{\ell -2}\\ \hat{c}_{\ell -2} \hat{s}_{\ell } \hat{s}_{\ell -1}\\ - \hat{c}_{\ell -1} \hat{s}_{\ell } \\ \hat{c}_{\ell } \\ 0 \end{array} \right] &{} \begin{array}{c} 0 \\ (-1)^{\ell }\prod _{i=1}^{\ell }\tilde{s}_i \\ (-1)^{\ell -1} \tilde{c}_{1}\prod _{i=2}^{\ell }\tilde{s}_i\\ \vdots \\ -\tilde{c}_{\ell -3} \tilde{s}_{\ell } \tilde{s}_{\ell -1}\tilde{s}_{\ell -2}\\ \tilde{c}_{\ell -2} \tilde{s}_{\ell } \tilde{s}_{\ell -1}\\ - \tilde{c}_{\ell -1} \tilde{s}_{\ell } \\ \tilde{c}_{\ell } \\ \end{array} \end{array} \right] $$

is an orthogonal basis of the right null-space of \( M_{\ell }. \)

In Fig. 4, the absolute values of the entries of \( \textbf{v}_{1} \) and \( \textbf{v}_{2}\), for \( \ell =400, \) are plotted on a logarithmic scale.

Fig. 4
figure 4

Plot, on a logarithmic scale, of the entries, in absolute value, of the basis vectors \( \textbf{v}_1\) and \( \textbf{v}_2\), denoted respectively by “\(*\)” and “ \(\times \),” of the null–space of \( M_{400}.\)

Since the entries \(\textbf{v}_1(\ell )\) tend to zero while the entries \(\textbf{v}_2(\ell )\) grow unboundedly as \( \ell \rightarrow \infty , \) the vectors \( \textbf{v}_1\) and \( \textbf{v}_2\) correspond to the minimal and the dominant solutions of (15), respectively [9]. Moreover, \( \textbf{v}_1\) is unique up to a constant multiplicative factor [9].

On the other hand, \( \mathcal{M}_{\ell }\) goes to 0 as \({\ell }\) goes to \( \infty .\) Hence,

$$ \left\{ \mathcal{M}_{\ell }\right\} _{\ell =0}^{\infty }=\frac{\mathcal{M}_{0}}{\textbf{v}_1(0)}\left\{ \textbf{v}_1\right\} _{\ell =0}^{\infty }.$$

The vector \( \textbf{v}_{1} \) is computed with \(O(\ell ) \) floating point operations by the Matlab function MM_3.m given in the Appendix.
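The whole procedure of this section can be sketched compactly. The Python code below is our stand-in for MM_3.m: it stores \(M_\ell \) and the accumulated rotations as dense arrays for clarity, so it costs \(O(\ell ^3)\) operations instead of the \(O(\ell )\) obtained by exploiting the banded structure, and it rescales the minimal null vector by \(\mathcal{M}_0 = \sqrt{\pi }/2\) as in the formula above.

```python
import math

def moments_nullspace(ell):
    """Moments M_0..M_(ell+1) from the right null-space of M_ell: two sweeps
    of Givens rotations applied from the right, with the rotations
    accumulated, then a rescaling of the minimal null vector."""
    n = ell + 2
    M = [[0.0] * n for _ in range(ell)]          # banded pattern of M_ell
    for i in range(1, ell + 1):
        if i >= 2:
            M[i - 1][i - 2] = -2.0 * (i - 1)
        M[i - 1][i - 1] = 6.0 * i - 3.0
        M[i - 1][i] = -(6.0 * i + 2.0)
        M[i - 1][i + 1] = 2.0 * i + 2.0
    Q = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

    def rotate(c1, c2, row):
        # Givens rotation of columns c1, c2 chosen to zero M[row][c2]
        a, b = M[row][c1], M[row][c2]
        r = math.hypot(a, b)
        if r == 0.0:
            return
        c, s = a / r, b / r
        for mat in (M, Q):
            for rw in mat:
                rw[c1], rw[c2] = c * rw[c1] + s * rw[c2], -s * rw[c1] + c * rw[c2]

    for i in range(ell):            # first sweep: annihilate entries (i, i+2)
        rotate(i + 1, i + 2, i)
    for i in range(ell):            # second sweep: annihilate entries (i, i+1)
        rotate(i, i + 1, i)
    v1 = [rw[n - 2] for rw in Q]    # minimal null vector
    scale = (math.sqrt(math.pi) / 2.0) / v1[0]
    return [scale * x for x in v1]
```

In our tests the leading entries of `moments_nullspace(200)` agree with the closed-form values of \(\mathcal{M}_1, \mathcal{M}_2, \mathcal{M}_3\); the accuracy of the trailing entries is governed by the separation between the minimal and dominant solutions.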

Furthermore, if we denote by \( \textbf{v}_3 \) the solution computed by the straightforward implementation of the recurrence relations (11) and (12) (function MM_2.m in the Appendix), then it lies, up to rounding errors, in the right null–space of \(M_{\ell }\) (see Fig. 5), as

$$ \texttt{svd}([\textbf{v}_1,\textbf{v}_2,\textbf{v}_3]) = \left[ \begin{array}{l} 1.046203719403972\times 10^{10} \\ 9.909635450807446\times 10^{-1} \\ 1.379989783205324\times 10^{-14} \end{array} \right] . $$

Moreover,

$$\textbf{v}_3 \approx \alpha _2 \textbf{v}_1+ \beta _2 \textbf{v}_2, $$

where \( \alpha _2= 9.999997323206090\times 10^{-1}\) and \( \beta _2= -1.046203719403972\times 10^{10}\) are computed by solving the following least squares problem

$$ [ \textbf{v}_1 \textbf{v}_2][ \alpha _2, \beta _2]^T\approx \textbf{v}_3, $$

with a relative error of the order of \( \mathcal{O}(10^{-12}). \)

Fig. 5
figure 5

Plot, on a logarithmic scale, of the entries, in absolute value, of \( \textbf{v}_1, \) \( \textbf{v}_2, \) and \(\textbf{v}_3,\) denoted, respectively, by “\(*\),” “\(\times \),” and “\(+\)

Remark 3

A different approach to compute the null–space of \( M_{\ell } \) is to compute the RQ factorization of \( M_{\ell }\) :

$$ M_{\ell } = \left[ \begin{array}{ccc} \textbf{0}&\textbf{0}&\check{R} \end{array}\right] \check{Q}, $$

with \(\check{R}\in {\mathbb {R}}^{ \ell \times \ell }\) nonsingular upper triangular and \(\check{Q}\in {\mathbb {R}}^{ (\ell +2) \times (\ell +2)}\) orthogonal. Since the features of this algorithm are similar to the one described earlier, we omit the details.

5 Product rule

In this section, we briefly describe how the integral

$$\begin{aligned} \mathcal {I}(f)=\int _{0}^{\infty }e^{-x^2} f(x) dx \end{aligned}$$
(18)

with f a continuous function in \( [0, \infty ), \) can be computed by a product rule of interpolatory type [4]. Let \( \mathcal{P}_{\ell -1}( f,x) \) be the Lagrange polynomial of degree \( \ell -1 \) interpolating the function f on the zeros of \(\mathcal{L}_{\ell }(x),\) the Laguerre polynomial of degree \(\ell . \) Let \( \lambda _k^{(\ell )} \) and \(\omega _k^{(\ell )}, \; k=1,\ldots , \ell , \) be the nodes and the weights of the \( \ell \)–point Gauss–Laguerre quadrature rule. They can be computed by solving a symmetric tridiagonal eigenvalue problem [10, 11]. Then,

$$\begin{aligned} \mathcal{P}_{\ell -1}( f,x)= & {} \sum _{j=0}^{\ell -1} \mathcal{L}_{j}(x) \int _{0}^{\infty }e^{-x} \mathcal{P}_{\ell -1}( f,x)\mathcal{L}_{j}(x) dx\\= & {} \sum _{j=0}^{\ell -1} \mathcal{L}_{j}(x)\sum _{k=1}^{\ell } \omega _k^{(\ell )} \mathcal{P}_{\ell -1}( f,\lambda _k^{(\ell )})\mathcal{L}_{j}(\lambda _k^{(\ell )}) \\= & {} \sum _{j=0}^{\ell -1} \mathcal{L}_{j}(x)\sum _{k=1}^{\ell } \omega _k^{(\ell )} f(\lambda _k^{(\ell )})\mathcal{L}_{j}(\lambda _k^{(\ell )}) \\= & {} \sum _{k=1}^{\ell } \omega _k^{(\ell )} f(\lambda _k^{(\ell )})\sum _{j=0}^{\ell -1}\mathcal{L}_{j}(\lambda _k^{(\ell )}) \mathcal{L}_{j}(x). \end{aligned}$$

Hence, to compute (18), the function f is replaced by the Lagrange polynomial \( \mathcal{P}_{\ell -1}( f,x), \) obtaining the product rule,

$$\begin{aligned} {P}_{\ell }(f)=\int _{0}^{\infty }e^{-x^2}\mathcal{P}_{\ell -1}( f,x) dx= & {} \int _{0}^{\infty } e^{-x^2} \sum _{k=1}^{\ell } \omega _k^{(\ell )} f(\lambda _k^{(\ell )})\sum _{j=0}^{\ell -1}\mathcal{L}_{j}(\lambda _k^{(\ell )}) \mathcal{L}_{j}(x)dx \\= & {} \sum _{k=1}^{\ell } \omega _k^{(\ell )} f(\lambda _k^{(\ell )})\sum _{j=0}^{\ell -1}\mathcal{L}_{j}(\lambda _k^{(\ell )}) \int _{0}^{\infty }e^{-x^2} \mathcal{L}_{j}(x)dx \\= & {} \sum _{k=1}^{\ell } \omega _k^{(\ell )} f(\lambda _k^{(\ell )})\sum _{j=0}^{\ell -1}\mathcal{L}_{j}(\lambda _k^{(\ell )}) \mathcal{M}_{j}\\= & {} { \sum _{k=1}^{\ell } \bar{\omega }^{(\ell )}_k f(\lambda _k^{(\ell )})}, \end{aligned}$$

where

$$\begin{aligned} \mathcal{M}_{k}=\int _{0}^{\infty }e^{-x^2} \mathcal{L}_{k}(x) dx,\qquad {k}=0,1,\ldots \end{aligned}$$
(19)

are the modified moments, and

$$\begin{aligned} \bar{\omega }^{(\ell )}_k:=\omega _k^{(\ell )} \sum _{j=0}^{\ell -1}\mathcal{L}_{j}(\lambda _k^{(\ell )}) \mathcal{M}_{j}, \qquad k=1,\ldots , \ell . \end{aligned}$$
(20)

Let \( {E}_{\ell }(f) = \mid \mathcal{I}(f) -{P}_{\ell }(f)\mid \) be the error in approximating \(\mathcal{I}(f)\) by the product rule \({P}_{\ell }(f)\). Then \( { E}_{\ell }(f) =0 \) if f is a polynomial of degree up to \( \ell -1 \) [4]. The sums \(\sum _{j=0}^{\ell -1}\mathcal{L}_{j}(\lambda _k^{(\ell )}) \mathcal{M}_{j},\; k=1,\ldots ,\ell , \) in (20) can be computed by means of Clenshaw’s algorithm [3], which has been shown to be backward stable in [18]. To improve the results obtained by Clenshaw’s algorithm, one step of iterative refinement can be applied [12]. The Matlab function implementing Clenshaw’s algorithm with one step of iterative refinement for computing the nodes and the weights of the product rule, called Clenshaw_PR.m, is reported in the Appendix. Let us denote by \( \tilde{P}_{\ell }\) the \(\ell \)–point product rule implemented by using Clenshaw’s algorithm, and by \( \hat{P}_{\ell }\) the \(\ell \)–point product rule implemented by using Clenshaw’s algorithm with one step of iterative refinement. In Example 1, we show that the latter performs better than the former. Therefore, we use \( \hat{P}_{\ell }\) for the numerical experiments in Sect. 6.
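A minimal Python version of the Clenshaw step for sums \(\sum _j c_j \mathcal{L}_j(x)\) with the recurrence coefficients of (1) is sketched below (without the iterative refinement of Clenshaw_PR.m; names are ours).

```python
def clenshaw_laguerre(coeffs, x):
    """Evaluate sum_{j=0}^{n} coeffs[j] * L_j(x), with L_j the orthonormal
    Laguerre polynomials of (1), by Clenshaw's backward recurrence.
    Writing the recurrence as L_{j+1} = A_j L_j + B_j L_{j-1} gives
    A_j = (x - (2j+1)) / (-(j+1)) and B_j = -j / (j+1)."""
    b1 = b2 = 0.0
    for j in range(len(coeffs) - 1, -1, -1):
        A = (x - (2 * j + 1)) / (-(j + 1))
        B_next = -(j + 1) / (j + 2)              # B_{j+1}
        b1, b2 = coeffs[j] + A * b1 + B_next * b2, b1
    return b1                                     # equals b_0, since L_0 = 1
```

With `coeffs` set to the modified moments and `x` to a node \(\lambda _k^{(\ell )}\), this returns the sum needed in (20).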

Example 1

In this example, we compute approximations of the integral

$$\begin{aligned} \mathcal{I}(x^5)=\int _{0}^{\infty }e^{-x^2} x^5 dx= 1, \end{aligned}$$
(21)

by the \(\ell \)-point product rules \( \tilde{P}_{\ell } \) and \( \hat{P}_{\ell }\), for different values of \(\ell . \) Since \(f(x)=x^5 \) is a polynomial of degree 5, the product rules \( \tilde{P}_{\ell } \) and \( \hat{P}_{\ell }\) are exact for \(\ell \ge 6.\) The results are reported in Table 1. We can see that one step of iterative refinement improves the accuracy of the computed integral by about one digit.
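The exact value in (21) follows from the substitution \(u = x^2\), which gives \(\frac{1}{2}\int _0^\infty u^2 e^{-u}\,du = \Gamma (3)/2 = 1\); a brute-force quadrature on a truncated interval confirms it (the truncation point and step count below are our choices):

```python
import math

def simpson(f, a, b, m):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

# integral (21); the tail beyond x = 12 is below e^{-144} and negligible
value = simpson(lambda x: math.exp(-x * x) * x ** 5, 0.0, 12.0, 4000)
```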

6 Numerical examples

In this section, the \(\ell \)–point product rule \(\hat{P}_{\ell }(f)\), described in Sect. 5, is used to compute (18) for different functions f(x),  and compared to the \(\ell \)–point Gauss–Laguerre quadrature rule, denoted by \(G_{\ell }(f)\), applied to

$$ \mathcal{I}(f)=\int _{0}^{\infty }e^{-x^2} f(x) dx= \int _{0}^{\infty }e^{-x} \hat{f}(x) dx, $$

with \(\hat{f}(x)= e^{-x^2+x} {{f}}(x), \) and to the \(\ell \)–point Gaussian quadrature rule associated to the half-range Hermite weight, denoted by \( GH_{\ell } \) [7].

Table 1 Results obtained by computing the integral (21) by \( \tilde{P}_{\ell } \) (column 2) and \( \hat{P}_{\ell } \) (column 4) for different values of \( \ell \) (column 1). The values of the corresponding relative errors are reported in columns 3 and 5, respectively
Table 2 Results obtained by computing the integral (22), for \( t= 0.1 \) and \(n=10, \) by \( \hat{P}_{\ell }(f), \) \( G_{\ell }(f) \) and \(GH_{\ell }(f)\), for different values of \( \ell . \) The exact value of \(\mathcal{I}(f(0.1,10))\) is computed in Matlab as \(t^n (\sqrt{\pi } / 2) e^{t^2} \texttt{erfc}(t)\)
Table 3 Relative errors obtained by computing the integral (22), for \( t= 0.1 \) and \(n=10, \) by \( \hat{P}_{\ell }(f), \) \( G_{\ell }(f) \) and \(GH_{\ell }(f),\) for different values of \( \ell \). The exact value of \(\mathcal{I}(f(0.1,10))\) is computed in Matlab as \(t^n (\sqrt{\pi } / 2) e^{t^2} \texttt{erfc}(t)\)
Table 4 Results obtained by computing the integral (22), for \( t= 0.5 \) and \(n=20, \) by \( \hat{P}_{\ell }(f),\) \( G_{\ell }(f) \) and \(GH_{\ell }(f),\) for different values of \( \ell . \) The exact value \(\mathcal{I}(f(0.5,20))\) is computed in Matlab as \(t^n (\sqrt{\pi } / 2) e^{t^2} \texttt{erfc}(t)\)
Table 5 Relative errors obtained by computing the integral (22), for \( t= 0.5 \) and \(n=20, \) by \( \hat{P}_{\ell }(f),\) \( G_{\ell }(f) \) and \(GH_{\ell }(f),\) for different values of \( \ell . \) The exact value \(\mathcal{I}(f(0.5,20))\) is computed in Matlab as \(t^n (\sqrt{\pi } / 2) e^{t^2} \texttt{erfc}(t)\)
Table 6 Results obtained by computing the integral (18), for \(f(x) =\log (x+ 10) \), by \( \hat{P}_{\ell }(f),\) \( G_{\ell }(f) \) and \(GH_{\ell }(f)\), for different values of \( \ell \)
Table 7 Relative errors obtained by computing the integral (18), for \(f(x) =\log (x+ 10) \), by \( \hat{P}_{\ell }(f),\) \( G_{\ell }(f) \) and \(GH_{\ell }(f),\) for different values of \( \ell \)
Table 8 Results obtained by computing the integral (18), with \(f(x)=\sin (x)\), by \( \hat{P}_{\ell }(f),\) \( G_{\ell }(f) \) and \(GH_{\ell }(f),\) for different values of \( \ell \)
Table 9 Relative errors obtained by computing the integral (18), with \(f(x)=\sin (x)\), by \( \hat{P}_{\ell }(f),\) \( G_{\ell }(f) \) and \(GH_{\ell }(f),\) for different values of \( \ell \)
Table 10 Results obtained by computing the integral (18), with \(f(x)=\cos (x)\), by \( \hat{P}_{\ell }(f),\) \( G_{\ell }(f) \) and \(GH_{\ell }(f),\) for different values of \( \ell \)
Table 11 Relative errors obtained by computing the integral (18), with \(f(x)=\cos (x)\), by \( \hat{P}_{\ell }(f),\) \( G_{\ell }(f) \) and \(GH_{\ell }(f),\) for different values of \( \ell \)

In order to compute \( GH_{\ell }, \) the eigenvalue decomposition of the associated Jacobi matrix is computed [10]. Observe that the entries of the latter matrix, i.e., the coefficients of the three-term recurrence relation of the half-range Hermite polynomials, are computed by the classical Chebyshev algorithm in high-precision arithmetic, as described in [7]. These coefficients are listed in the file “ab_hrhermite” to 32-digit accuracy and are available as supplementary material of [7].

For each numerical example, we report the results obtained by \(\hat{P}_{\ell }(f), \) \(G_{\ell }(f) \) and \(GH_{\ell }(f), \) for \( \ell =10,20,30,\ldots , 100, \) in two tables. Specifically, in the first table, the value of \(\ell \) is displayed in the first column; the computed integral by \(\hat{P}_{\ell }(f),\) \(G_{\ell }(f)\) and \(GH_{\ell }(f), \) are reported in columns 2, 3,  and 4, respectively. In the second table, the value of \(\ell \) is displayed in the first column; the relative error of the integral computed by \(\hat{P}_{\ell }(f),\) \(G_{\ell }(f)\), and \(GH_{\ell }(f) \) are reported in columns 2, 3,  and 4, respectively.

For all the considered examples, \(GH_{\ell }(f) \) converges faster than \(\hat{P}_{\ell }(f) \) to the considered integral, since \(GH_{\ell }(f) \) and \(\hat{P}_{\ell }(f)\) have degree of exactness \( 2 \ell -1\) and \(\ell -1, \) respectively, and both converge faster than \(G_{\ell }(f)\).

However, as already mentioned, \( GH_{\ell } \) relies on the computation of the coefficients of the associated Jacobi matrix in high-precision arithmetic, whereas \(\hat{P}_{\ell }(f)\) relies only on floating point arithmetic.

All the computations are performed in Matlab R2022a with machine precision \(\epsilon \approx 2.22 \times 10^{-16}. \)

Example 2

In this example the integral [7]

$$\begin{aligned} \mathcal{I}(f(t,n))=\int _{0}^{\infty }e^{-x^2}t^n e^{-2tx} dx, \end{aligned}$$
(22)

is computed for some values of n and t. The value of (22) is known [20] and given by \(t^n (\sqrt{\pi } / 2) e^{t^2} \texttt{erfc}(t),\) with \( \texttt{erfc}(t)\) the complementary error function [20] :

$$ \texttt{erfc}(t)=\frac{2}{\sqrt{\pi }} \int _{t}^{\infty }e^{-x^2} dx. $$
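The reference value can be formed directly with the standard library's `erfc` and cross-checked by brute-force quadrature; this Python snippet (names ours) is only a sanity check of the closed form, not part of the rules under comparison.

```python
import math

def reference_value(t, n):
    """Closed form t^n (sqrt(pi)/2) e^{t^2} erfc(t) of integral (22)."""
    return t ** n * (math.sqrt(math.pi) / 2) * math.exp(t * t) * math.erfc(t)

def simpson(f, a, b, m):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3
```

For \(t=0.1\) and \(n=10\), `reference_value(0.1, 10)` agrees with the numerical integral of (22) truncated to \([0, 12]\).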

The results for \( t=0.1\) and \( n=10,\) and for \( t=0.5\) and \( n=20,\) are reported in Tables 2 and 3, and in Tables 4 and 5, respectively.

Example 3

In this example, the integrand function f in (18) is \( f(x)=\log (x+10). \) The exact value \(\mathcal{I}(f)=2.088549149913451 \) is computed by Mathematica ver. 13. The results are reported in Tables 6 and 7.

Example 4

We consider here the integrand function \(f(x)=\sin (x)\) in (18). Also, in this case, the exact value \(\mathcal{I}(f)= 4.244363835020223\times 10^{-1}\) is computed by Mathematica ver. 13. The results are displayed in Tables 8 and 9.

Example 5

In this example, \( f(x)=\cos (x). \) The exact value \(\mathcal{I}(f)= 6.901942235215714\times 10^{-1}\) is computed by Mathematica ver. 13. The results are displayed in Tables 10 and 11.

Since \( \cos (x) \) is an even function, then

$$\begin{aligned} \int _{0}^{\infty }e^{-x^2} \cos (x) dx=\frac{1}{2}\int _{-\infty }^{\infty }e^{-x^2} \cos (x) dx. \end{aligned}$$
(23)

Therefore, an approximation of the integral (23) can be obtained by dividing the result of the \( \ell \)–point Gauss–Hermite quadrature rule \( H_{\ell }(f)\) by 2.
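As a sanity check, the classical closed form \(\int _{-\infty }^{\infty } e^{-x^2}\cos (bx)\,dx = \sqrt{\pi }\, e^{-b^2/4}\) with \(b=1\), halved according to (23), reproduces the exact value quoted above:

```python
import math

# Half of the full-line Gaussian cosine integral, per identity (23)
half_line_value = 0.5 * math.sqrt(math.pi) * math.exp(-0.25)
```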

We can observe that the speed of convergence of \( H_{\ell }(f) \) is similar to that of \( GH_{\ell }(f). \)

7 Conclusions and future work

In this work, an algorithm to compute the modified moments for the system of orthonormal Laguerre polynomials on the real semiaxis with the Hermite weight is described.

It is shown that these moments can be efficiently retrieved from the null–space of a particular totally nonnegative matrix. Therefore, their computation can be carried out in floating point arithmetic with high relative accuracy. The modified moments are then used to compute integrals on the real semiaxis with the Hermite weight by a product rule.

The numerical experiments confirm the effectiveness of the proposed approach.