1 Introduction

This paper deals with the numerical approximation of double integrals whose integrand may increase as \(x,y \to \infty\) and/or as \(x,y \to 0\). Specifically, we consider

$$I(f)= \int_{0}^{\infty} \int_{0}^{\infty} f(x,y) w(x,y) dx dy,$$
(1)

where f is a known function defined on the domain \([0,\infty ) \times [0,\infty )\) and \(w(x,y) = x^{\alpha} y^{\beta} e^{-(x + y)},\: \alpha,\beta > -1\) is a weight function which may include algebraic singularities. Let us note that w is the product of the two Laguerre weights \(w_{\alpha} (x) = x^{\alpha} e^{-x}\) and \(w_{\beta} (y) = y^{\beta} e^{-y}\).

The numerical treatment of (1) can be approached in two ways [1, 2]. The first procedure consists of approximating each integral by well-known quadrature rules. This is an “indirect” technique that takes advantage of the fact that univariate rules have been studied and explored more extensively than multivariate ones. The second approach consists of constructing cubature schemes from scratch. In the bivariate case, an example is given in [3], where the nodes are zeros of suitable bivariate orthogonal polynomials (see also [4]).

In this paper, we focus on the “indirect” procedure. The optimal cubature formula is the following Gauss-Laguerre cubature rule [5] based on m × n nodes

$$I(f)=\sum\limits_{k=1}^{m} \sum\limits_{j=1}^{n} \lambda_{k}^{\alpha} \lambda_{j}^{\beta} f(x_{k},y_{j})+R_{m,n}(f) =: G_{m,n}(f)+R_{m,n}(f),$$
(2)

where \(\{x_{k}\}_{k{=}1}^{m}\) and \(\{y_{j}\}_{j{=}1}^{n}\) are the zeros of the Laguerre polynomials which are orthogonal with respect to the weights wα and wβ, respectively, \(\{\lambda _{k}^{\alpha }\}_{k=1}^{m}\) and \(\{\lambda _{j}^{\beta }\}_{j=1}^{n}\) are the corresponding coefficients of the Gauss-Laguerre formula, and the term Rm,n(f) denotes the related cubature error of Gm,n. The rule is obtained as the tensor product of two univariate Gauss-Laguerre quadrature formulae. Hence, it can be easily constructed [5] and Rm,n(f) = 0 if \(f \in \mathbb {P}_{2m-1,2n-1}\), where \(\mathbb {P}_{2m-1,2n-1}\) denotes the set of all algebraic bivariate polynomials of degree at most 2m − 1 in the variable x and 2n − 1 in the variable y.
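As a concrete illustration, the tensor-product rule (2) can be sketched in a few lines of Python. This is an independent sketch, not the authors' MATLAB software; it relies on SciPy's `roots_genlaguerre`, which returns the zeros and weights of the generalized Gauss-Laguerre rule for the weight \(x^{\alpha}e^{-x}\):

```python
import numpy as np
from scipy.special import roots_genlaguerre

def gauss_laguerre_cubature(f, m, n, alpha=0.0, beta=0.0):
    """Tensor-product Gauss-Laguerre rule G_{m,n} of formula (2), approximating
    I(f) = int_0^inf int_0^inf f(x,y) x^alpha y^beta e^{-(x+y)} dx dy."""
    x, lam_a = roots_genlaguerre(m, alpha)  # zeros of p_m^alpha and weights lambda_k^alpha
    y, lam_b = roots_genlaguerre(n, beta)   # zeros of p_n^beta and weights lambda_j^beta
    X, Y = np.meshgrid(x, y, indexing="ij")
    # sum_k sum_j lambda_k^alpha lambda_j^beta f(x_k, y_j)
    return np.einsum("k,j,kj->", lam_a, lam_b, f(X, Y))
```

For instance, with α = β = 0 and f(x,y) = x²y (which lies in \(\mathbb{P}_{2m-1,2n-1}\) already for m = n = 2), the rule reproduces the exact value \(\int_0^\infty x^2 e^{-x}dx \cdot \int_0^\infty y e^{-y}dy = 2\) up to rounding.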

One can associate with the cubature rule (2) the truncated version introduced in [6]:

$$I(f)=\sum\limits_{k=1}^{\kappa} \sum\limits_{j=1}^{\iota} \lambda_{k}^{\alpha} \lambda_{j}^{\beta} f(x_{k},y_{j})+\mathring{R}_{m,n}(f) =: \mathring{G}_{m,n}(f)+\mathring{R}_{m,n}(f).$$
(3)

Here, the upper limits of summation κ and ι are the indices determined by

$$x_{\kappa} =\min \{ x_{k} \mid x_{k} \geq 4 m \theta_{1} \}, \quad y_{\iota} =\min \{ y_{j} \mid y_{j} \geq 4 n \theta_{2} \},$$

with 𝜃1,𝜃2 ∈ (0,1) fixed, and \(\mathring{R}_{m,n}(f)\) identifies the error. As shown in [6] (see also [7, 8]), the cubature error of (3) has the same magnitude as the error of (2); the truncated rule is especially convenient when the function f in (1) is bounded or does not increase too fast, so that the last terms of the sums in (2) can be neglected. Of course, in order to construct formula (3), all the m + n weights and nodes of (2) still have to be computed, but the number of function evaluations is reduced. Moreover, applying the truncated formula (3) to integral equations yields a considerable computational saving (see [6,7,8,9]).
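The truncation above can be sketched as follows. The helper names are hypothetical, and the defaults θ1 = θ2 = 0.4 mirror the choice made in the numerical experiments of Section 4:

```python
import numpy as np
from scipy.special import roots_genlaguerre

def trunc_index(nodes, bound):
    """Index kappa with nodes[kappa-1] = min{node >= bound}; keep nodes[:kappa]."""
    k = int(np.searchsorted(nodes, bound))  # number of nodes strictly below bound
    return min(k + 1, len(nodes))

def truncated_gauss_laguerre(f, m, n, alpha=0.0, beta=0.0, theta1=0.4, theta2=0.4):
    """Truncated Gauss-Laguerre rule (3): all nodes are computed, but f is
    evaluated only at the kappa x iota nodes below 4*m*theta1 and 4*n*theta2."""
    x, la = roots_genlaguerre(m, alpha)
    y, lb = roots_genlaguerre(n, beta)
    kap, iot = trunc_index(x, 4 * m * theta1), trunc_index(y, 4 * n * theta2)
    X, Y = np.meshgrid(x[:kap], y[:iot], indexing="ij")
    return np.einsum("k,j,kj->", la[:kap], lb[:iot], f(X, Y))
```

For a bounded decaying integrand, the discarded terms carry weights of order \(e^{-x_k}\) at the largest nodes, so the truncated value agrees with the full rule essentially to machine precision while evaluating f far fewer times.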

The aim of this paper is to give estimates of the magnitude of the errors \({R}_{m,n}(f)\) and \(\mathring{R}_{m,n}(f)\). We remark that in [6, Proposition 3.1] the authors provide asymptotic estimates for \(\mathring{R}_{m,n}(f)\), for large values of m = n, according to the nature of the integrand. Specifically, they state the order of convergence depending on the smoothness properties of the function f, providing a lower bound which includes an unknown constant independent of m, n, and f. In this paper, we provide numerical estimates of the error for each fixed m and n, so that one can determine the number of points m and n needed to approximate the integral with a prescribed accuracy. The estimate does not depend on unknown constants and is not asymptotic. Hence, it does not provide a decay law depending on the smoothness of f. Nevertheless, it allows us to develop new cubature rules, which turn out to have several advantages in terms of accuracy and computational cost.

In the univariate case, an interesting approach to estimating the error of the m-point Gauss formula Gm consists of using another quadrature rule Ql with l > m nodes and degree of exactness higher than 2m − 1. A popular choice of Ql is the Gauss-Kronrod rule [10], which has 2m + 1 nodes, including the m nodes of Gm, and has degree of exactness at least 3m + 1. However, in some cases, including the Laguerre one, the formula fails to exist: the nodes are not real and distinct and the weights are not positive [10,11,12,13]. Another possible choice is given by “stratified” quadrature formulae (see, e.g., [14,15,16,17,18]). They are linear combinations of two rules, usually the m-point Gauss rule Gm and a new rule U which involves more than m points and has degree of exactness greater than 2m − 1. A first example of a “stratified” rule was given by Laurie [17] in 1996, who proposed the so-called averaged Gaussian quadrature formula \(G_{2m+1}^{L}\):

$$G_{2m+1}^{L}=\frac{1}{2} ({G}_{m} + G^{A}_{m+1}).$$
(4)

Basically, it is the average of the m-point Gauss rule Gm and the \(m + 1\)-point anti-Gauss formula \(G^{A}_{m+1}\), which is such that

$$\mathcal{I}(p)-G^{A}_{m+1}(p)=-(\mathcal{I}(p)-{G}_{m}(p)), \qquad \forall p \in \mathbb{P}_{2m+1},$$
(5)

where \(\mathcal {I}\) denotes the corresponding univariate integral of the form (1). Formula \(G_{2m+1}^{L}\) has degree of exactness at least 2m + 1, involves 2m + 1 real and distinct zeros and has positive weights. All these properties hold true also in the Laguerre case for each α > − 1, α being the parameter of the Laguerre weight wα appearing in the integral \(\mathcal {I}\) [17]. In 2002, Ehrich [16] proposed the optimal stratified extension

$$G_{2m+1}^{L*}=\frac{1}{1+\gamma} \left(\gamma {G}_{m} + G^{A}_{m+1}\right), \qquad \gamma>0$$

for the Gauss-Laguerre and the Gauss-Hermite formulae. Specifically, he proved that, when dealing with the Laguerre weight wα, the formula \(G_{2m+1}^{L*}\) of the highest possible degree of exactness 2m + 2 is unique and is obtained for

$$\gamma=1+\frac{2m+\alpha+1}{m(m+\alpha)}.$$

However, this formula has positive nodes only for α ≥ 1, whereas it has one negative node for − 1 < α < 1. In 2007, Spalević [18] developed the generalized averaged formula \(G_{2m+1}^{S}\), which is the unique stratified formula with the highest possible degree of exactness and, in the Laguerre and the Hermite cases, coincides with \(G_{2m+1}^{L*}\). In 2016, Djukić, Reichel, and Spalević, in an attempt to ensure that all nodes be internal, proposed in [15] the truncated generalized averaged Gauss quadrature rule obtained by removing the last r < m rows and columns from the Jacobi matrix of \(G_{2m+1}^{S}\). In particular, taking r = m − 1 one gets the truncated rule \(G^{T}_{m+2}\), which in the Laguerre case has positive zeros for each α > − 1. Let us mention that here the term “truncated” does not have the same meaning as in formula (3). In fact, in (3) the truncation refers to the sequence of nodes, whereas in \(G^{T}_{m+2}\) it refers to the Jacobi matrix; consequently, in this last case, only the necessary nodes are computed. To avoid confusion, from now on, we will call \({G}^{T}_{m+2}\) the “reduced” generalized averaged Gauss rule.

In the multidimensional case, in [19] the authors consider integrals defined on bounded intervals [a,b] and estimate the error of the corresponding Gauss cubature rule by using the generalized averaged formula \(G_{2m+1}^{S}\).

In order to estimate the error Rm,n, in this paper, we develop a “stratified” cubature formula which is obtained as an average of the cubature formula (2) and the anti-Gauss cubature formula. To our knowledge, this last formula has never been proposed to approximate double integrals and, even though we obtain it as the tensor product of the univariate anti-Gauss rule, some new useful properties are proved. Moreover, its stability and convergence are studied in suitable weighted spaces equipped with the uniform norm. In addition, we write the “reduced” generalized averaged Gauss cubature rule, which is the bivariate version of the formula proposed in [15].

Then, we give two truncated rules aiming at estimating the error \(\mathring{R}_{m,n}(f)\). Specifically, we develop the truncated anti-Gauss cubature rule and the “reduced” truncated generalized averaged Gauss cubature rule. Basically, we consider the two developed cubature schemes, truncate the sequences of nodes, and prove that the resulting rules provide an error of the same magnitude as that of the original cubature formulae while using only a fraction of the nodes.

The paper is structured in five main sections. Section 2 focuses on the estimation of the error Rm,n, Section 3 deals with \(\mathring{R}_{m,n}\), and Section 4 gathers several numerical examples to show and compare the performance of the four considered cubature formulae. Section 5 collects the proofs of the theoretical results and, finally, Section 6 draws some conclusions and briefly sketches possible research developments.

2 On the error of the Gauss-Laguerre cubature formula

The aim of this section is to estimate the error Rm,n by introducing “stratified” cubature formulae or averaged schemes of the form

$$H_{2m+1,2n+1}(f) = \eta \cdot G_{m,n}(f) + U_{m+1,n+1}(f),$$
(6)

where η is a given real parameter, Gm,n(f) is as in (2), and Um+ 1,n+ 1 is a new cubature formula.

2.1 The averaged Gauss-Laguerre cubature rule

A possible approach for constructing the new rule Um+ 1,n+ 1 is to use the well-known 1D anti-Gauss formula [17] to approximate each integral of (1). In this way, we get a cubature scheme which we will refer to as anti-Gauss cubature formula \({G}^{A}_{m+1,n+1}\). It can be written as

$${G}^{A}_{m+1,n+1}(f) =\sum\limits_{k=1}^{m+1} \sum\limits_{j=1}^{n+1} \tilde{\lambda}_{k}^{\alpha} \tilde{\lambda}_{j}^{\beta} f(\tilde{x}_{k},\tilde{y}_{j}),$$
(7)

where \(\{\tilde {\lambda }_{k}^{\alpha }\}_{k=1}^{m+1}\) and \(\{\tilde {\lambda }_{j}^{\beta }\}_{j=1}^{n+1}\) are the weights and \(\{\tilde {x}_{k}\}_{k=1}^{m+1}\) and \(\{\tilde {y}_{j}\}_{j=1}^{n+1}\) are the cubature nodes. In particular, let \(\{p_{j}^{\alpha }\}_{j=0}^{\infty }\) be the sequence of monic orthogonal Laguerre polynomials, with \(p_{j}^{\alpha}\) of degree j, such that

$$\int_{0}^{\infty} p_{j}^{\alpha}(x) p_{i}^{\alpha}(x) w_{\alpha}(x) dx = \begin{cases} 0, & i \neq j, \\ i!\,\Gamma(i+\alpha+1), & i=j, \end{cases}$$
(8)

where Γ(z) is the Gamma function. Such polynomials satisfy the three term recurrence relation

$$\begin{cases} p_{-1}^{\alpha}(x)=0, \qquad p_{0}^{\alpha}(x)=1, \\ p_{j+1}^{\alpha}(x)=(x-a^{\alpha}_{j}) p_{j}^{\alpha}(x)-b^{\alpha}_{j} p_{j-1}^{\alpha}(x), \qquad j=0,1,2,\dots, \end{cases}$$
(9)

where \(a^{\alpha }_{j}= 2j+\alpha +1\), j ≥ 0, \(b^{\alpha }_{0}={\Gamma }(1+\alpha )\), \(b^{\alpha }_{j}=j(j+\alpha )\), j ≥ 1. Similarly for \(p_{j}^{\beta }\). Then, the nodes \(\{\tilde {x}_{k}\}_{k=1}^{m+1}\) and \(\{\tilde {y}_{j}\}_{j=1}^{n+1}\) are the zeros of the polynomials \(\tilde {p}_{m+1}^{\alpha }\) and \(\tilde {p}_{n+1}^{\beta }\), respectively, defined as

$$\tilde{p}_{m+1}^{\alpha}(x)=p_{m+1}^{\alpha}(x)- b_{m}^{\alpha} p_{m-1}^{\alpha}(x), \quad \tilde{p}_{n+1}^{\beta}(y)=p_{n+1}^{\beta}(y)- b_{n}^{\beta} p_{n-1}^{\beta}(y),$$
(10)

and, consequently, are the eigenvalues of the following two matrices:

$$\widetilde{J}^{\alpha}_{m+1} = \begin{bmatrix} J^{\alpha}_{m} & \sqrt{2 b^{\alpha}_{m} }\mathbf{e}_{m} \\ \sqrt{2 b^{\alpha}_{m}} \mathbf{e}^{T}_{m} & a^{\alpha}_{m} \\ \end{bmatrix}, \qquad \widetilde{J}^{\beta}_{n+1} = \begin{bmatrix} J^{\beta}_{n} & \sqrt{2 b^{\beta}_{n} }\mathbf{e}_{n} \\ \sqrt{2 b^{\beta}_{n}} \mathbf{e}^{T}_{n} & a^{\beta}_{n} \\ \end{bmatrix},$$

where

$$J^{\alpha}_{m}{=} \begin{bmatrix} a^{\alpha}_{0} & \sqrt{b^{\alpha}_{1}} \\ \sqrt{b^{\alpha}_{1}} & a^{\alpha}_{1} & {\ddots} \\ & {\ddots} & {\ddots} & \sqrt{b^{\alpha}_{m-1}} \\ & & \sqrt{b^{\alpha}_{m-1}} & a^{\alpha}_{m-1} \\ \end{bmatrix}, \ \ J^{\beta}_{n}{=} \begin{bmatrix} a^{\beta}_{0} & \sqrt{b^{\beta}_{1}} \\ \sqrt{b^{\beta}_{1}} & a^{\beta}_{1} & {\ddots} \\ & {\ddots} & {\ddots} & \sqrt{b^{\beta}_{n-1}} \\ & & \sqrt{b^{\beta}_{n-1}} & a^{\beta}_{n-1} \\ \end{bmatrix}.$$
(11)

The coefficients \(\tilde {\lambda }_{k}^{\alpha }\) and \(\tilde {\lambda }_{j}^{\beta }\) are also related to the matrices \(\tilde {J}^{\alpha }_{m+1}\) and \(\tilde {J}^{\beta }_{n+1}\). Indeed, they are given by \(b_{0}^{\alpha}\) (resp. \(b_{0}^{\beta}\)) times the squares of the first components of the associated normalized real eigenvectors.
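The construction of the anti-Gauss nodes and weights from \(\widetilde{J}^{\alpha}_{m+1}\) can be sketched as follows; this is an independent Python sketch (the function name is hypothetical), using SciPy's symmetric tridiagonal eigensolver:

```python
import numpy as np
from math import gamma
from scipy.linalg import eigh_tridiagonal

def anti_gauss_laguerre(m, alpha=0.0):
    """(m+1)-point anti-Gauss-Laguerre rule: eigenvalues of the modified Jacobi
    matrix J~_{m+1}, whose last off-diagonal entry sqrt(b_m) is replaced by
    sqrt(2 b_m); weights are b_0 = Gamma(1+alpha) times the squared first
    components of the normalized eigenvectors."""
    j = np.arange(m + 1, dtype=float)
    diag = 2 * j + alpha + 1                # a_j = 2j + alpha + 1
    off = np.sqrt(j[1:] * (j[1:] + alpha))  # sqrt(b_j), b_j = j(j + alpha)
    off[-1] *= np.sqrt(2.0)                 # sqrt(2 b_m) in the last position
    nodes, V = eigh_tridiagonal(diag, off)
    return nodes, gamma(alpha + 1) * V[0] ** 2
```

In numerical experiments with this sketch, one can verify the defining property (5): for α = 0, m = 4 and p(x) = x⁹ (degree 2m + 1) the anti-Gauss error equals minus the Gauss error up to rounding, and the nodes interlace the Gauss-Laguerre zeros.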

The next proposition contains the main properties of \({G}^{A}_{m+1,n+1}\).

Proposition 1

The following properties hold true.

  1.

    Setting \(I(f)={G}^{A}_{m+1,n+1}(f)+{R}^{A}_{m+1,n+1}(f)\), then

    $${R}^{A}_{m+1,n+1}(f)= \begin{cases} -R_{m,n}(f), & \forall f \in \mathbb{P}_{2m+1,2n-1} \quad and \quad \forall f \in \mathbb{P}_{2m-1,2n+1}, \\ 0, & \forall f \in \mathbb{P}_{2m-1,2n-1}. \end{cases}$$
  2.

    For each α,β > − 1 the nodes \(\{\tilde {x}_{k}\}_{k=1}^{m+1}\) and \(\{\tilde {y}_{j}\}_{j=1}^{n+1}\) are all real and positive. Moreover, they interlace the Gauss-Laguerre zeros:

    $$\begin{array}{@{}rcl@{}} && 0<\tilde{x}_{1}<x_{1}< {\dots} <\tilde{x}_{m}<x_{m}<\tilde{x}_{m+1},\\ && 0<\tilde{y}_{1}<y_{1}< {\dots} <\tilde{y}_{n}<y_{n}<\tilde{y}_{n+1}. \end{array}$$
  3.

    The coefficients \(\{\tilde {\lambda }_{k}^{\alpha }\}_{k=1}^{m+1}\) and \(\{\tilde {\lambda }_{j}^{\beta }\}_{j=1}^{n+1}\) are positive and are given by

    $$\tilde{\lambda}_{k}^{\alpha}= 2 \frac{ \| {p}_{m}^{\alpha}\|^{2}}{ {p}_{m}^{\alpha}(\tilde{x}_{k}) {\tilde{p}_{m+1}^{\alpha}{}}^{\prime}(\tilde{x}_{k})}\quad and \quad \tilde{\lambda}_{j}^{\beta}= 2 \frac{\| {p}_{n}^{\beta}\|^{2}}{p_{n}^{\beta}(\tilde{y}_{j}) {\tilde{p}_{n+1}^{\beta}{}}^{\prime}(\tilde{y}_{j})}.$$
  4.

    For a fixed quadrature node \(\tilde {x}_{k}\) and for each differentiable function g such that \(g^{(j)} > 0\) for j = 0,...,2m + 1 one has

    $$\sum\limits_{i=1}^{k-1}\tilde{\lambda}_{i}^{\alpha} g(\tilde{x}_{i}) < \int_{0}^{\tilde{x}_{k+1}} g(x) w_{\alpha}(x) dx, \quad \int_{0}^{\tilde{x}_{k-1}} g(x) w_{\alpha}(x) dx \leq \sum\limits_{i=1}^{k}\tilde{\lambda}_{i}^{\alpha} g(\tilde{x}_{i}).$$

    Similarly, for each fixed j,

    $$\sum\limits_{i=1}^{j-1}\tilde{\lambda}_{i}^{\beta} g(\tilde{y}_{i}) < \int_{0}^{\tilde{y}_{j+1}} g(y) w_{\beta}(y) dy, \quad \int_{0}^{\tilde{y}_{j-1}} g(y) w_{\beta}(y) dy \leq \sum\limits_{i=1}^{j}\tilde{\lambda}_{i}^{\beta} g(\tilde{y}_{i}).$$

Let us mention that the first property implies

$${G}^{A}_{m+1,n+1}(f)=2I(f)- G_{m,n}(f), \qquad \forall f \in \mathbb{P}_{2m+1,2n-1} \cup \mathbb{P}_{2m-1,2n+1}.$$
(12)

This suggests that a “stratified” cubature rule of the form (6) can be obtained with \(\eta = 1/2\) and \(U_{m+1,n+1} ={G}^{A}_{m+1,n+1}/2\), that is

$$G_{2m+1,2n+1}^{L}(f):=\frac{1}{2}[G_{m,n}(f)+{G}^{A}_{m+1,n+1}(f)].$$
(13)

From now on, we will refer to the above formula as the averaged Gauss-Laguerre cubature rule.

Let us also note that \(G^{L}_{2m+1,2n+1}\) requires the computation of 2(m + n + 1) nodes and weights; if we use the algorithm devised by Golub and Welsch in [20], their computation has a cost of \(2c(m^{2}+n^{2})+2\mathcal {O}(m)+2\mathcal {O}(n)\), that is, one half of the cost of the Gauss cubature rule G2m,2n, which is \(4c(m^{2}+n^{2})+2\mathcal {O}(m)+2\mathcal {O}(n)\).

Moreover, the first property of Proposition 1 suggests that the cubature error of \(G_{2m+1,2n+1}^{L}(f)\) is smaller than the cubature error Rm,n, since \(G_{2m+1,2n+1}^{L}(f)=I(f)\) for each \(f \in \mathbb {P}_{2m+1,2n-1} \cup \mathbb {P}_{2m-1,2n+1}\). This allows us to reach a given accuracy with a number of evaluations of the integrand function smaller than that required by G2m,2n. Furthermore, we can use \(G_{2m+1,2n+1}^{L}(f)\) to provide an accurate approximation of the cubature error of (2), that is

$$\begin{array}{@{}rcl@{}} R_{m,n}(f)=I(f)-G_{m,n}(f) & \simeq& G_{2m+1,2n+1}^{L}(f)-G_{m,n}(f) \\ &=& \frac{1}{2}[{G}^{A}_{m+1,n+1}(f)-G_{m,n}(f)] =:{R}_{m,n}^{[1]}(f). \end{array}$$
(14)

Let us mention that \({R}_{m,n}^{[1]}(f)\) provides an accurate numerical estimate of \({R}_{m,n}(f)\) for each fixed m and n; see Table 8 for numerical evidence.
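A compact sketch of the averaged rule (13) together with the error estimate (14) is given below; it reuses the Gauss and anti-Gauss constructions via their Jacobi matrices (all names are hypothetical, written for illustration only):

```python
import numpy as np
from math import gamma
from scipy.linalg import eigh_tridiagonal

def laguerre_rules(m, alpha=0.0):
    """Return the Gauss m-point and anti-Gauss (m+1)-point Laguerre rules."""
    def rule(diag, off):
        x, V = eigh_tridiagonal(diag, off)
        return x, gamma(alpha + 1) * V[0] ** 2
    j = np.arange(m + 1, dtype=float)
    a = 2 * j + alpha + 1
    off = np.sqrt(j[1:] * (j[1:] + alpha))
    gauss = rule(a[:m], off[:m - 1])        # Jacobi matrix J_m
    off_anti = off.copy()
    off_anti[-1] *= np.sqrt(2.0)            # J~_{m+1}: last entry sqrt(2 b_m)
    return gauss, rule(a, off_anti)

def averaged_and_estimate(f, m, n, alpha=0.0, beta=0.0):
    """Averaged rule G^L_{2m+1,2n+1}(f) of (13) and the estimate of (14)."""
    def tensor(rx, ry):
        (x, lx), (y, ly) = rx, ry
        X, Y = np.meshgrid(x, y, indexing="ij")
        return np.einsum("k,j,kj->", lx, ly, f(X, Y))
    gx, ax = laguerre_rules(m, alpha)
    gy, ay = laguerre_rules(n, beta)
    G, GA = tensor(gx, gy), tensor(ax, ay)
    return 0.5 * (G + GA), 0.5 * (GA - G)
```

As a sanity check, for m = n = 3 and f(x,y) = x⁷y³ ∈ \(\mathbb{P}_{2m+1,2n-1}\) with α = β = 0, the averaged rule reproduces the exact value 7!·3! = 30240 up to rounding, even though the Gauss rule alone is not exact on that class.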

Finally, we underline that the inequalities proved in Property 4 are Possé-Chebyshev-Markov-Stieltjes-type inequalities [21, p. 33] and allow us to state the following corollary.

Corollary 1

For fixed quadrature nodes \(\tilde {x}_{k}\) and \(\tilde {y}_{k}\), and for each differentiable function g such that \(g^{(j)} > 0\) for j = 0,...,2m + 1, one has

$$\tilde\lambda_{k}^{\alpha}g(\tilde x_{k}) < \int_{\tilde x_{k-2}}^{\tilde x_{k+2}}g(x)w_{\alpha}(x)dx, \qquad \tilde\lambda_{k}^{\beta}g(\tilde y_{k}) < \int_{\tilde y_{k-2}}^{\tilde y_{k+2}}g(y)w_{\beta}(y)dy.$$

This corollary is essential for proving the stability of the formula \(G^{A}_{m+1,n+1}\) in suitable weighted spaces. In spaces of continuous functions \(C\equiv C([0,\infty ])\), the stability is trivial.

Let us now investigate the stability and convergence properties of the anti-Gauss cubature formula in suitable weighted spaces Cu. This setting is useful when we want to consider functions f(x,y) that may tend to infinity with algebraic growth as x,y → 0+ and with exponential growth as \(x,y \to \infty\). Fix the weight functions

$$u_{i}(x)=(1+x)^{\eta_{i}} x^{\gamma_{i}} e^{-x/2}, \quad \eta_{i}, \gamma_{i} \geq 0 , \qquad i=1,2,$$

and, setting

$$\begin{array}{@{}rcl@{}} u(x,y)=u_{1}(x) u_{2}(y), \end{array}$$
(15)

we define the weighted space Cu as the set of all continuous functions on \((0,\infty ) \times (0,\infty )\) such that

$$\begin{cases} \lim\limits_{x \to 0^{+}} (fu)(x,y) = \lim\limits_{x \to \infty} (fu)(x,y) =0, & \forall y \in [0,\infty), \\ \lim\limits_{y \to 0^{+}} (fu)(x,y) = \lim\limits_{y \to \infty} (fu)(x,y) =0, & \forall x \in [0,\infty). \end{cases}$$

We endow the space Cu with the norm

$$\|f\|_{C_{u}}=\|fu\|_{\infty}=\sup_{x,y \in \mathbb{R}^{+}} \vert (fu)(x,y)\vert.$$

For smoother functions, we introduce the Sobolev-type space of index \(1 \leq r \in \mathbb {N}\)

$$W_{r}(u)= \left \{ f \in C_{u} : \|f\|_{W_{r}(u)}= \|fu\|_{\infty}+\max\{\|f^{(r)}_{y} {\varphi_{1}^{r}} u\|_{\infty}, \|f^{(r)}_{x} {\varphi_{2}^{r}} u\|_{\infty} \}< \infty \right \},$$

where \(\varphi _{1}(x)=\sqrt {x}\), \(\varphi _{2}(y)=\sqrt {y}\), and \(f_{y}\) means that f is regarded as a function of the variable x only, with y fixed. Similarly for \(f_{x}\).

In Cu, we define the error of best polynomial approximation by means of bivariate polynomials in \(\mathbb {P}_{m,n}\) as

$$E_{m,n}(f)_{u} = \inf_{P \in \mathbb{P}_{m,n}} \|(f-P)u\|_{\infty}$$

(see, e.g., [6, Theorem 2.1]). It is well known that [6, Theorem 2.1 and estimate (3.1)]

$$E_{m,n}(f)_{u} \leq \mathcal{C} \left[\frac{1}{m^{r/2}}+\frac{1}{n^{r/2}}\right] \|f\|_{W_{r}(u)},$$
(16)

where \(\mathcal {C}\) is a positive constant independent of m, n and f.

Theorem 1

For any \(f \in C_{u}\), if α > γ1 − 1 and β > γ2 − 1, the anti-Gauss cubature formula is stable and

$$\vert {R}^{A}_{m+1,n+1}(f) \vert \leq \mathcal{C} E_{2m-1,2n-1}(f)_{u},$$

where \(\mathcal {C}\) is a positive constant independent of m, n, and f.

By the previous theorem, we can deduce the following estimate for the error of the averaged rule (13)

$$R^{L}_{2m+1,2n+1}(f)= I(f)-G_{2m+1,2n+1}^{L}(f).$$

Corollary 2

For any \(f \in C_{u}\), if α > γ1 − 1 and β > γ2 − 1, then

$$|R^{L}_{2m+1,2n+1}(f)| \leq \frac{\mathcal{C}}{2} E_{2m-1,2n-1}(f)_{u}.$$

2.2 The reduced generalized averaged Gauss cubature rule

In this subsection, we want to extend to the bivariate case the one-dimensional quadrature rule \(G_{2m+1}^{S}\) having the highest possible degree of exactness 2m + 2. This rule has several formulations [16, 18, 22] and has been applied to the approximation of matrix functions with applications to complex network analysis [23, 24].

For our aims, we will consider the formulation presented in [18] and we will refer to it as the generalized averaged Gauss cubature rule \(G_{2m+1,2n+1}^{S}\). It is obtained as the tensor product of \(G_{2m+1}^{S}\) and \(G_{2n+1}^{S}\) and reads

$${G}^{S}_{2m+1,2n+1}(f) = \sum\limits_{k=1}^{2m+1} \sum\limits_{j=1}^{2n+1} \bar{\lambda}_{k}^{\alpha} \bar{\lambda}_{j}^{\beta} f(\bar{x}_{k},\bar{y}_{j}),$$
(17)

where \(\{\bar {\lambda }_{k}^{\alpha }\}_{k=1}^{2m+1}\) and \(\{\bar {\lambda }_{j}^{\beta }\}_{j=1}^{2n+1}\) are the weights, and \(\{\bar {x}_{k}\}_{k=1}^{2m+1}\) and \(\{\bar {y}_{j}\}_{j=1}^{2n+1}\) are the nodes. Specifically, the nodes \(\{\bar {x}_{k}\}_{k=1}^{2m+1}\) are the eigenvalues of the symmetric tridiagonal matrix

$$\bar{J}^{\alpha}_{2m+1}= \begin{bmatrix} J^{\alpha}_{m} & \sqrt{b^{\alpha}_{m}}\boldsymbol{e}_{m} & 0 \\ \sqrt{b^{\alpha}_{m}} \boldsymbol{e}^{T}_{m} & a^{\alpha}_{m} & \sqrt{b^{\alpha}_{m+1}}\boldsymbol{e}^{T}_{1} \\ 0 & \sqrt{b^{\alpha}_{m+1}} \boldsymbol{e}_{1} & Z_{m} J^{\alpha}_{m} Z_{m} \end{bmatrix},$$
(18)

where \(\boldsymbol{e}_{k}=[0,\dots ,0,1,0,\dots ,0]^{T}\), Zm is the m × m reversal matrix [25, Section 0.9.5], \(J^{\alpha }_{m}\) is given by (11) and \(a^{\alpha }_{m}\) and \(b^{\alpha }_{m}\) are the recursion coefficients of the Laguerre polynomial \(p_{m}^{\alpha }\). The weights \(\{\bar {\lambda }_{k}^{\alpha }\}_{k=1}^{2m+1}\) are the squares of the first components of the normalized real eigenvectors of (18) multiplied by \(b^{\alpha }_{0}\). The zeros \(\{\bar {y}_{j}\}_{j=1}^{2n+1}\) and the coefficients \(\{\bar {\lambda }_{j}^{\beta }\}_{j=1}^{2n+1}\) can be computed in the same way by replacing α with β and m with n.
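The matrix (18) can be assembled and diagonalized directly. The following univariate Python sketch (the function name is hypothetical) builds the (2m+1)-point rule for the weight \(w_\alpha\) from the monic recursion coefficients:

```python
import numpy as np
from math import gamma
from scipy.linalg import eigh_tridiagonal

def gen_averaged_laguerre(m, alpha=0.0):
    """(2m+1)-point generalized averaged Gauss-Laguerre rule G^S_{2m+1}:
    eigen-decomposition of the extended Jacobi matrix (18), built from
    a_j = 2j + alpha + 1 and b_j = j(j + alpha)."""
    a = lambda k: 2 * k + alpha + 1
    b = lambda k: k * (k + alpha)
    # diagonal: a_0..a_{m-1}, a_m, then the reversed block Z_m J_m Z_m: a_{m-1}..a_0
    diag = np.array([a(k) for k in range(m + 1)]
                    + [a(k) for k in range(m - 1, -1, -1)], dtype=float)
    # off-diagonal: sqrt(b_1..b_m), sqrt(b_{m+1}), then sqrt(b_{m-1}..b_1)
    off = np.sqrt(np.array([b(k) for k in range(1, m + 1)] + [b(m + 1)]
                           + [b(k) for k in range(m - 1, 0, -1)], dtype=float))
    nodes, V = eigh_tridiagonal(diag, off)
    return nodes, gamma(alpha + 1) * V[0] ** 2
```

For α = 0 and m = 3, for example, this sketch exhibits both features stated below: exactness up to degree 2m + 2 = 8, and one negative node, since α ∈ (−1,1).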

Formula (17) enjoys nice properties that can be deduced directly from the corresponding univariate rule [16, 18, 22]. We summarize them in the following proposition.

Proposition 2

The generalized averaged Gauss cubature rule (17) has the following properties.

  • \(I(f)=G^{S}_{2m+1,2n+1}(f)\) for each \(f \in \mathbb {P}_{2m+2,2n+2}\).

  • The nodes are all real. Specifically, when ordered increasingly, they satisfy

    $$\bar{x}_{k}=\begin{cases} x_{k}, \quad \textrm{if}\; \textit{k}\; \textrm{is even}\\ {x}^{*}_{k}, \quad \textrm{if}\; \textit{k}\; \textrm{is odd} \end{cases}$$

    where xk are the Gauss nodes and \(x^{*}_{k}\) are the zeros of the following polynomial

    $${p}^{*}_{m+1}(\alpha,x)=p_{m+1}^{\alpha}(x)- b_{m+1}^{\alpha} p_{m-1}^{\alpha}(x),$$

    i.e., the eigenvalues of the following matrix

    $$J^{*}_{m+1} = \begin{bmatrix} J^{\alpha}_{m} & \sqrt{b^{\alpha}_{m}+b^{\alpha}_{m+1} }\mathbf{e}_{m} \\ \sqrt{b^{\alpha}_{m}+b^{\alpha}_{m+1}} \mathbf{e}^{T}_{m} & a^{\alpha}_{m} \\ \end{bmatrix}.$$

    The same holds for the nodes \(\bar {y}_{j}\) with β and n in place of α and m, respectively.

  • The weights are all positive. In detail, assuming that they are ordered increasingly, one has

    $$\bar{\lambda}_{k}=\begin{cases} \dfrac{b_{m+1}^{\alpha}}{b_{m}^{\alpha}+b_{m+1}^{\alpha}} \lambda_{k}, \quad \textrm{if}\; \textit{k}\; \textrm{is even},\\ \dfrac{b_{m}^{\alpha}}{b_{m}^{\alpha}+b_{m+1}^{\alpha}} \lambda^{*}_{k}, \quad \textrm{if}\; \textit{k}\; \textrm{is odd} \end{cases}$$

    where \(\lambda ^{*}_{k}\) can be computed from the first components of the associated normalized real eigenvectors of the matrix \(J^{*}_{m+1}\).

  • If α,β ≥ 1 all the nodes \(\bar{x}_{k}, \bar{y}_{j}\) are positive. On the other hand, if α,β ∈ (− 1,1) then there exist negative zeros for any m,n ≥ 1.

According to the last property, for α,β ∈ (− 1,1) the cubature rule (17) has nodes outside the domain \((0,\infty )\). To overcome this problem, we follow what has been proposed in [15] for the univariate case: we consider the Jacobi matrix \(\bar {J}^{\alpha }_{2m+1}\) given in (18) (and the analogous \(\bar {J}^{\beta }_{2n+1}\)) and remove the last m − 1 (resp. n − 1) rows and columns.

Then, we define the reduced generalized averaged Gauss cubature rule as

$${G}^{T}_{m+2,n+2}(f) = \sum\limits_{k=1}^{m+2} \sum\limits_{j=1}^{n+2} \hat{\lambda}_{k}^{\alpha} \hat{\lambda}_{j}^{\beta} f(\hat{x}_{k},\hat{y}_{j}),$$
(19)

where the nodes \(\{\hat {x}_{k}\}_{k=1}^{m+2}\) and \(\{\hat {y}_{j}\}_{j=1}^{n+2}\) are the zeros of the polynomials \(\hat {p}_{m+2}^{\alpha }\) and \(\hat {p}_{n+2}^{\beta }\), respectively, defined by

$$\begin{aligned} \hat{p}_{m+2}^{\alpha}(x)&=(x-a_{m-1}^{\alpha})p_{m+1}^{\alpha}(x) -b_{m+1}^{\alpha} p_{m}^{\alpha}(x), \\ \hat{p}_{n+2}^{\beta}(y)&=(y-a_{n-1}^{\beta})p_{n+1}^{\beta}(y) -b_{n+1}^{\beta} p_{n}^{\beta}(y), \end{aligned}$$
(20)

i.e., the eigenvalues of the matrices

$$\hat{J}^{\alpha}_{m+2}{=} \begin{bmatrix} J^{\alpha}_{m} & \sqrt{b^{\alpha}_{m}}\boldsymbol{e}_{m} & 0 \\ \sqrt{b^{\alpha}_{m}} \boldsymbol{e}^{T}_{m} & a^{\alpha}_{m} & \sqrt{b^{\alpha}_{m+1}}\boldsymbol{e}^{T}_{1} \\ 0 & \sqrt{b^{\alpha}_{m+1}} \boldsymbol{e}_{1} & a^{\alpha}_{m-1} \end{bmatrix}\quad \text{and}\quad \hat{J}^{\beta}_{n+2}{=} \begin{bmatrix} J^{\beta}_{n} & \sqrt{b^{\beta}_{n}}\boldsymbol{e}_{n} & 0 \\ \sqrt{b^{\beta}_{n}} \boldsymbol{e}^{T}_{n} & a^{\beta}_{n} & \sqrt{b^{\beta}_{n+1}}\boldsymbol{e}^{T}_{1} \\ 0 & \sqrt{b^{\beta}_{n+1}} \boldsymbol{e}_{1} & a^{\beta}_{n-1} \end{bmatrix}\!.$$

The weights \(\{\hat {\lambda }_{k}^{\alpha }\}_{k=1}^{m+2}\) and \(\{\hat {\lambda }_{j}^{\beta }\}_{j=1}^{n+2}\) are given by \(b_{0}^{\alpha}\) and \(b_{0}^{\beta}\) times the squares of the first components of the normalized eigenvectors associated with the eigenvalues \(\hat {x}_{k}\) and \(\hat {y}_{j}\), respectively.
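The reduced matrices differ from (18) only in that everything beyond the trailing diagonal entry \(a^{\alpha}_{m-1}\) is discarded, so the univariate construction is short; a Python sketch (hypothetical name, for illustration):

```python
import numpy as np
from math import gamma
from scipy.linalg import eigh_tridiagonal

def reduced_gen_averaged_laguerre(m, alpha=0.0):
    """(m+2)-point reduced generalized averaged rule: keep only the leading
    (m+2) x (m+2) block of the extended Jacobi matrix; its trailing diagonal
    entry is a_{m-1}."""
    a = lambda k: 2 * k + alpha + 1
    b = lambda k: k * (k + alpha)
    diag = np.array([a(k) for k in range(m + 1)] + [a(m - 1)], dtype=float)
    off = np.sqrt(np.array([b(k) for k in range(1, m + 2)], dtype=float))
    nodes, V = eigh_tridiagonal(diag, off)
    return nodes, gamma(alpha + 1) * V[0] ** 2
```

For α = 0 and m = 3, this sketch reproduces the behaviour stated in Proposition 3: all nodes are positive (unlike the full rule from (18)), while the exactness degree 2m + 2 = 8 is retained.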

Proposition 3

The reduced formula \({G}^{T}_{m+2,n+2}(f)\) is still exact for polynomials belonging to \(\mathbb {P}_{2m+2,2n+2}\). Moreover, all the nodes are positive for any m,n ≥ 2 if \(\alpha,\beta \geq 0\) and for any m,n ≥ 3 if \(\alpha,\beta \in (-1,1)\).

The above proposition suggests that we can use \({G}^{T}_{m+2,n+2}(f)\) to estimate the cubature error Rm,n(f) by computing the following difference:

$$R_{m,n}(f)=I(f)-G_{m,n}(f) \simeq {G}^{T}_{m+2,n+2}(f)-G_{m,n}(f)=:R_{m,n}^{[2]}(f).$$
(21)

3 On the error of the truncated Gauss-Laguerre cubature rule

In this section, we develop truncated versions of both rules previously introduced, preserving the same accuracy as the original cubature formulae. We aim at reducing the number of function evaluations while producing, at the same time, an estimate for the error of the truncated Gauss-Laguerre rule (3).

Generally, truncated rules are very useful in the numerical solution of integral equations [6, 8, 9]. In fact, collocation methods lead to square linear systems whose order depends on the number of cubature points. If we approximate the integral operator by using a cubature scheme with m × n nodes, then the system will be of order mn. Moreover, if the kernel and the right-hand side of the equation are not smooth, to obtain a good accuracy, we need to take m,n ≫ 1, so that the system becomes very large. Truncated rules use only a fraction of the nodes in the cubature scheme, reaching the same precision by solving a smaller linear system, with a significant saving in computing time.

3.1 The truncated averaged Gauss-Laguerre cubature rule

Let us consider the rule (7). Among all the computed nodes \(\{\tilde {x}_{k}\}_{k=1}^{m+1}\) and \(\{\tilde {y}_{j}\}_{j=1}^{n+1}\), let us select the nodes \(\tilde {x}_{\kappa }\) and \(\tilde {y}_{\iota }\) such that

$$\tilde{x}_{\kappa} =\min \{ \tilde{x}_{k} | \tilde{x}_{k} \geq 4 m \theta_{1} \}, \quad \tilde{y}_{\iota} =\min \{ \tilde{y}_{j} | \tilde{y}_{j} \geq 4 n \theta_{2} \},$$

where \(\theta_{1},\theta_{2} \in (0,1)\) are two fixed parameters.

Then, we define the truncated anti-Gauss Laguerre cubature formula as

$$\mathring{G}^{A}_{m+1,n+1}(f) =\sum\limits_{k=1}^{\kappa} \sum\limits_{j=1}^{\iota} \tilde{\lambda}^{\alpha}_{k} \tilde{\lambda}^{\beta}_{j} f(\tilde{x}_{k},\tilde{y}_{j}).$$
(22)

Denoting

$$\mathring{R}^{A}_{m+1,n+1}(f)=I(f)-\mathring{G}^{A}_{m+1,n+1}(f),$$

the next proposition shows that, for a certain class of polynomials, formula (22) gives an error approximately opposite in sign to that of formula (3).

Proposition 4

For each polynomial \(p \in \mathbb {P}_{2m+1,2n-1} \cup \mathbb {P}_{2m-1,2n+1}\), one has

$$\mathring{R}^{A}_{m+1,n+1}(p)\approx -\mathring{R}_{m,n}(p).$$

Remark 1

Let \(\mathbb {P}^{*}_{m,n}\) be the set of all bivariate polynomials vanishing at the points \((\tilde {x}_{i},\tilde {y}_{j})\) with \(\tilde {x}_{i} \geq 4m \theta _{1}\) and \(\tilde {y}_{j} \geq 4n \theta _{2}\). Then, since \(\mathring{R}^{A}_{m+1,n+1}(p) = 0\) for all \(p \in \mathbb {P}^{*}_{2m-1,2n-1}\), it follows that

$$\int_{0}^{\infty} \int_{0}^{\infty} p(x,y) w(x,y) dx dy = \sum\limits_{k=1}^{\kappa} \sum\limits_{j=1}^{\iota} \tilde{\lambda}^{\alpha}_{k} \tilde{\lambda}^{\beta}_{j} p(\tilde{x}_{k},\tilde{y}_{j}).$$

According to Proposition 4, one has

$$\mathring{G}^{A}_{m+1,n+1}(f) = 2 I(f)- \mathring{G}_{m,n}(f), \qquad \forall f \in \mathbb{P}_{2m+1,2n-1} \cup \mathbb{P}_{2m-1,2n+1}.$$

This identity suggests that we define the truncated averaged Gauss-Laguerre cubature rule as

$$\begin{aligned} \mathring{G}^{L}_{2m+1,2n+1}(f)&=\frac{1}{2}[\mathring{G}_{m,n}(f)+ \mathring{G}^{A}_{m+1,n+1}(f)]. \end{aligned}$$

This formula can be used to estimate the cubature error of (3):

$$\begin{aligned} \mathring{R}_{m,n}(f) = I(f)- \mathring{G}_{m,n}(f) & \simeq \mathring{G}^{L}_{2m+1,2n+1}(f) - \mathring{G}_{m,n}(f) \\ & = \frac{1}{2}[ \mathring{G}^{A}_{m+1,n+1}(f)-\mathring{G}_{m,n}(f)]=:\mathring{R}_{m,n}^{[1]}(f). \end{aligned}$$
(23)

Let us note that \(\mathring {R}_{m,n}^{[1]}(f)\) provides an accurate numerical estimate of \(\mathring{R}_{m,n}(f)\) for each fixed m and n; see Table 9 for numerical evidence, which also confirms the lower bound given in [6].

3.2 The truncated reduced generalized averaged Gauss cubature rule

Similarly to what has been done for the anti-Gauss rule, let us consider now the formula (19) and introduce the nodes

$$\hat{x}_{\kappa} =\min \{ \hat{x}_{k} | \hat{x}_{k} \geq 4 m \theta_{1} \}, \quad \hat{y}_{\iota} =\min \{ \hat{y}_{j} | \hat{y}_{j} \geq 4 n \theta_{2} \},$$

where \(\theta_{1},\theta_{2} \in (0,1)\) are two fixed parameters. Then, we define the truncated reduced generalized averaged Gauss cubature rule as

$$\mathring{G}^{T}_{m+2,n+2}(f) = \sum\limits_{k=1}^{\kappa} \sum\limits_{j=1}^{\iota} \hat{\lambda}_{k}^{\alpha} \hat{\lambda}_{j}^{\beta} f(\hat{x}_{k},\hat{y}_{j}).$$

The next proposition shows that the error of the above formula

$$\mathring{R}^{T}_{m+2,n+2}(f)=I(f)- \mathring{G}^{T}_{m+2,n+2}(f)$$

is of the same order as the one given by the original formula (19). Hence, in order to use fewer points, we can use this formula to estimate the cubature error \(\mathring{R}_{m,n}(f)\), i.e.,

$$\mathring{R}_{m,n}(f) =I(f)-\mathring{G}_{m,n}(f) \simeq \mathring{G}^{T}_{m+2,n+2}(f)-\mathring{G}_{m,n}(f)=:\mathring{R}_{m,n}^{[2]}(f).$$
(24)

Proposition 5

Let \(f \in C_{u}\) where u is given by (15) with γ1 < α + 1 and γ2 < β + 1. Then,

$$|\mathring{R}^{T}_{m+2,n+2}(f)| \leq |{R}^{T}_{m+2,n+2}(f)|+ \mathcal{C} (E_{m}(f)_{u}+e^{-c(m+n)} \|fu\|_{\infty})$$

where \(\mathcal {C}\) and c are positive constants independent of m and n.

4 Numerical examples

The aim of this section is to show and compare the performance of the four cubature rules applied to several integrals of the form (1).

In each example, for increasing values of m and n, we compute the relative errors of the Gauss and anti-Gauss cubature rules, 𝜖m,n and \(\epsilon ^{A}_{m+1,n+1}\), together with the corresponding relative error \(\epsilon ^{L}_{2m+1,2n+1}\) furnished by the averaged formula (13). Then, we show the relative errors of the associated truncated rules, \(\mathring{\epsilon}_{m,n}\), \(\mathring {\epsilon }^{A}_{m+1,n+1}\), and \(\mathring {\epsilon }^{L}_{2m+1,2n+1}\), to highlight the advantage of such formulae in terms of accuracy and function evaluations. In addition, we test the performance of the reduced generalized averaged Gauss rule and its truncated version by showing the relative errors \({\epsilon }^{T}_{m+2,n+2}\) and \(\mathring {\epsilon }^{T}_{m+2,n+2}\).

In each example, we also compute the Estimated Order of Convergence (EOC), i.e.,

$$EOC_{m,n}=\frac{\log{\left(\frac{err_{m,n}}{err_{2m,2n}}\right)}}{\log{2}},$$

where \(err_{m,n}\) is the absolute error of the cubature rule under consideration.
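If the error behaves like \(\mathcal{C} m^{-p}\), the EOC recovers the exponent p. A minimal sketch of this computation (our own illustration, not the authors' software):

```python
import math

def eoc(err_mn, err_2m2n):
    """Estimated Order of Convergence from the absolute errors
    computed with (m, n) and (2m, 2n) points."""
    return math.log(err_mn / err_2m2n) / math.log(2.0)

# If err_{m,n} = C * m^{-2}, doubling m divides the error by 4,
# so the EOC recovers the exponent 2.
print(eoc(1.0e-2, 2.5e-3))  # 2.0
```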

All the computations were performed on an Intel Xeon E-2244G system with 16 GB RAM, running MATLAB 9.10. The software is a prototype implementation, available from the authors upon request.
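The Gauss and anti-Gauss nodes and weights used below can be generated from the Jacobi matrix of the Laguerre weight. The following double-precision sketch is our own illustration (not the authors' MATLAB software), assuming the standard Golub-Welsch procedure and Laurie's construction of the anti-Gauss rule, in which the last recurrence coefficient of the (m+1)-point Jacobi matrix is doubled:

```python
import numpy as np
from math import gamma

def laguerre_jacobi(m, alpha):
    """First m recurrence coefficients of the monic generalized
    Laguerre polynomials: a_k = 2k + alpha + 1, b_k = k (k + alpha)."""
    a = 2.0 * np.arange(m) + alpha + 1.0
    k = np.arange(1.0, m)
    return a, k * (k + alpha)

def gauss_laguerre(m, alpha):
    """Golub-Welsch: Gauss nodes are the eigenvalues of the Jacobi
    matrix; each weight is beta_0 = Gamma(alpha + 1) times the squared
    first component of the corresponding eigenvector."""
    a, b = laguerre_jacobi(m, alpha)
    J = np.diag(a) + np.diag(np.sqrt(b), 1) + np.diag(np.sqrt(b), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, gamma(alpha + 1.0) * V[0, :] ** 2

def anti_gauss_laguerre(m, alpha):
    """Laurie's (m+1)-point anti-Gauss rule: same Jacobi matrix as the
    (m+1)-point Gauss rule, but with the last off-diagonal coefficient
    b_m multiplied by 2."""
    a, b = laguerre_jacobi(m + 1, alpha)
    b[-1] *= 2.0
    J = np.diag(a) + np.diag(np.sqrt(b), 1) + np.diag(np.sqrt(b), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, gamma(alpha + 1.0) * V[0, :] ** 2

# Sanity check: for deg f <= 2m + 1 the Gauss and anti-Gauss errors
# cancel, so the averaged rule is exact.  With alpha = 0, m = 3 and
# f(x) = x^7 the exact integral is 7! = 5040.
x, lw = gauss_laguerre(3, 0.0)
xt, lwt = anti_gauss_laguerre(3, 0.0)
G = np.sum(lw * x ** 7)
A = np.sum(lwt * xt ** 7)
print((G + A) / 2.0)  # approximately 5040
```

The bivariate rules of the examples below are tensor products of these univariate nodes and weights.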

Example 1

Let us test the convergence of the considered cubature rules on the following integral

$$\int_{0}^{\infty} \int_{0}^{\infty} \sin(x+y) x^{3} y e^{-x-y} dx dy= -\frac{3}{4},$$

with \(f(x,y)=\sin(x+y)\, x^{3} y\) and \(w(x,y) = e^{-(x+y)}\). The integrand function is very smooth, so we expect fast convergence. The numerical tests are performed in high-precision arithmetic; specifically, to make the round-off errors introduced during the computations negligible, the computations are carried out with 110 significant decimal digits. Choosing m = n, Table 1 contains the relative errors of the classical Gauss-Laguerre formula Gm,m(f), the anti-Gauss rule \({G}^{A}_{m+1,m+1}(f)\), and the corresponding averaged rule \(G^{L}_{2m+1,2m+1}(f)\), together with the corresponding EOCm,m. By comparing the second and fourth columns, we can see that the anti-Gauss formula furnishes an error opposite in sign and of the same magnitude as that of the Gauss rule. Consequently, if we approximate the integral by the averaged rule, we obtain a better approximation, as highlighted by the sixth column.
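For completeness, the exact value \(-\tfrac{3}{4}\) can be verified through the one-dimensional integrals \(\int_{0}^{\infty} x^{s} e^{-(1-i)x}\, dx = s!/(1-i)^{s+1}\):

$$\begin{aligned} \int_{0}^{\infty} x^{3} e^{-x}\cos x \, dx &= \mathrm{Re}\,\frac{3!}{(1-i)^{4}} = -\frac{3}{2}, & \int_{0}^{\infty} x^{3} e^{-x}\sin x \, dx &= \mathrm{Im}\,\frac{3!}{(1-i)^{4}} = 0, \\ \int_{0}^{\infty} y\, e^{-y}\cos y \, dy &= \mathrm{Re}\,\frac{1}{(1-i)^{2}} = 0, & \int_{0}^{\infty} y\, e^{-y}\sin y \, dy &= \mathrm{Im}\,\frac{1}{(1-i)^{2}} = \frac{1}{2}, \end{aligned}$$

so that, by \(\sin(x+y)=\sin x \cos y + \cos x \sin y\), the double integral equals \(0 \cdot 0 + \left(-\tfrac{3}{2}\right) \cdot \tfrac{1}{2} = -\tfrac{3}{4}\).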

In Table 2, we collect the results obtained by approximating the integral with the truncated versions of the Gauss rule, the anti-Gauss rule, and the corresponding averaged formula, fixing 𝜃1 = 𝜃2 = 0.4. The truncated anti-Gauss rule retains the properties of its complete formulation, that is, it gives an error opposite in sign and of the same magnitude as that of the truncated Gauss rule. This implies that the truncated averaged rule attains the same accuracy as the complete rule but with fewer points. For instance, we get an error of the order 10− 9 with 27 × 2 = 54 nodes (instead of 33 × 2 = 66 nodes) and 365 function evaluations (instead of 545).
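In standard double precision, the complete and truncated Gauss rules on this example can be sketched as follows. This is our own illustration (not the paper's software): the algebraic factors \(x^{3}y\) are moved into the weight, i.e., α = 3, β = 1 and f(x,y) = sin(x+y), and the usual truncation criterion \(x_{k} \leq 4m\theta_{1}\), \(y_{j} \leq 4n\theta_{2}\) is assumed:

```python
import numpy as np
from scipy.special import roots_genlaguerre

def gauss_cubature(f, m, n, alpha, beta, theta1=None, theta2=None):
    """Tensor-product Gauss-Laguerre rule for the weight
    w(x, y) = x^alpha y^beta e^{-(x+y)}.  If theta1 and theta2 are
    given, only the nodes x_k <= 4 m theta1 and y_j <= 4 n theta2
    are kept (truncated rule)."""
    x, lx = roots_genlaguerre(m, alpha)
    y, ly = roots_genlaguerre(n, beta)
    if theta1 is not None:
        keep = x <= 4.0 * m * theta1
        x, lx = x[keep], lx[keep]
    if theta2 is not None:
        keep = y <= 4.0 * n * theta2
        y, ly = y[keep], ly[keep]
    X, Y = np.meshgrid(x, y, indexing="ij")
    return lx @ f(X, Y) @ ly, x.size * y.size

# Example 1 with the algebraic factors absorbed into the weight;
# the exact value is -3/4.
f = lambda x, y: np.sin(x + y)
full, n_full = gauss_cubature(f, 40, 40, 3.0, 1.0)
trunc, n_trunc = gauss_cubature(f, 40, 40, 3.0, 1.0, 0.4, 0.4)
print(full, n_full)    # close to -0.75 with 1600 evaluations
print(trunc, n_trunc)  # comparable accuracy with fewer evaluations
```

The discarded nodes carry exponentially small Laguerre weights, which is why the truncated value is essentially indistinguishable from the complete one.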

Table 3 contains the relative errors obtained with the reduced generalized averaged formula \(G^{T}_{m+2,m+2}\) and its truncated version \(\mathring {G}^{T}_{m+2,m+2}\). They are smaller than the errors given by the single rules Gm,m and \(G^{A}_{m+1,m+1}\), but the averaged rule \(G^{L}_{2m+1,2m+1}\) is more accurate. From the last column, we can see that, also in this case, the truncated form gives the same accuracy as the complete rule with fewer points.

In Table 4, we compare the performance of the reduced generalized averaged formula \(G^{T}_{m+2,m+2}\) with the complete generalized averaged formula \(G^{S}_{2m+1,2m+1}\) from which it derives, to show that the former reaches the same accuracy as the latter even though the number of points is reduced. To this end, in order to make the complete formula internal (i.e., with all its nodes inside the integration domain), we set the parameters of the weight function to α = 2 and β = 1 and take \(f(x,y)=\sin{(x+y)}\).

Table 1 Numerical results for Example 1
Table 2 Numerical results for Example 1 with 𝜃1 = 𝜃2 = 0.4
Table 3 Numerical results for Example 1 with 𝜃1 = 𝜃2 = 0.4
Table 4 Numerical results for Example 1 in the case α = 2 and β = 1

Example 2

The aim of this test is to show that in the two-dimensional case (and even more so in the multidimensional one) the averaged rules are more competitive than the simple Gauss formula. Let us consider the following integral

$$\int_{0}^{\infty} \int_{0}^{\infty} \frac{e^{-\frac{3}{4}x-y}}{1+y+2x} \frac{dx\; dy}{(x-2)^{2}+1},$$

which is of the form (1) with

$$w(x,y)=e^{-(x+y)} \quad f(x,y)=\frac{1}{1+y+2x} \frac{e^{x/4}}{(x-2)^{2}+1}.$$

The function f is smooth and, for each fixed y, it is bounded and does not increase too fast, so we need to use large values of m and n to get good accuracy [7]. The exact value of the integral is not known. Since the use of high precision considerably increases the computing time when m is large, we have computed the reference value numerically with Wolfram Mathematica. The computations are performed with 32 significant decimal digits.
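The slow convergence on this example can also be observed in plain double precision by comparing successive Gauss approximations; a sketch of our own (the integrand is positive, so the sums involve no cancellation):

```python
import numpy as np
from scipy.special import roots_laguerre

def f(x, y):
    # f(x, y) = e^{x/4} / ((1 + y + 2x) ((x - 2)^2 + 1))
    return np.exp(x / 4.0) / ((1.0 + y + 2.0 * x) * ((x - 2.0) ** 2 + 1.0))

def gauss(m):
    """Tensor-product Gauss-Laguerre rule with alpha = beta = 0."""
    x, w = roots_laguerre(m)
    X, Y = np.meshgrid(x, x, indexing="ij")
    return w @ f(X, Y) @ w

# The differences between successive approximations decrease slowly,
# in line with the large values of m needed in Table 5.
for m in (32, 64, 128):
    print(m, gauss(m))
```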

By inspecting Table 5 we can deduce, for instance, that to get an error of the order 10− 8 we have two options: using the averaged rule \(G^{L}_{2m+1,2n+1}\) with m = n = 64, which requires the computation of 2(2m + 1) = 258 nodes and \(64^{2} + 65^{2} = 8321\) function evaluations, or using the simple Gauss-Laguerre formula Gm,m with m = n = 128. In the latter case, however, we have to compute 256 nodes and perform \(128^{2} = 16384\) function evaluations.

If we move to the truncated averaged rules, the number of function evaluations is further reduced. As shown in Table 6, to reach an error of order 10− 8 by using the truncated averaged rule \(\mathring{G}^{L}_{2m+1,2m+1}\), we only need to compute the integrand function at \(36^{2} + 37^{2} = 2665\) points. By contrast, the computational cost of the truncated Gauss rule \(\mathring{G}_{m,m}\) is \(71^{2} = 5041\) evaluations.

Table 5 Numerical results for Example 2
Table 6 Numerical results for Example 2 with 𝜃1 = 𝜃2 = 0.2

Table 7 contains the results concerning the reduced generalized averaged rule, from which we can see that \(G^{T}_{m+2,m+2}\) is less accurate than the rule \(G^{L}_{2m+1,2m+1}\).

Table 7 Numerical results for Example 2 with 𝜃1 = 𝜃2 = 0.2

Example 3

Let us consider an integral whose integrand function is not as smooth as in the previous examples, that is

$$\int_{0}^{\infty} \int_{0}^{\infty} \frac{|y-1|^{5/2}}{25+x^{3}+y^{3}} \frac{e^{-(x+y)}}{\sqrt[5]{y} \sqrt[10]{x}} dx\; dy,$$

where we can fix \(w_{\alpha}(x) = x^{-1/10} e^{-x}\) and \(w_{\beta}(y) = y^{-1/5} e^{-y}\). Here, we carry out the tests in double-precision arithmetic and, since the exact value is not known, we consider as exact the approximate value obtained by the Gauss cubature formula with m = 1024. The integrand function \(f(x,y)=\frac {|y-1|^{5/2}}{25+x^{3}+y^{3}}\) is smooth with respect to the variable x but belongs to the Sobolev space \(W_{2}\) with respect to the variable y. Hence, we expect a theoretical convergence of order \(\mathcal {O}(m^{-1})\), according to Theorem 1 combined with (16) (see also [6, Proposition 3.1]). The performance of the rules is displayed in Tables 8, 9, and 10. Once again, the computational saving of the truncated averaged rule is confirmed.

In this example, we have also added a column to each table to show that the proposed formulae can be used to estimate the relative error given by the classical Gauss cubature formulae Gm,m and \(\mathring{G}_{m,m}\), as emphasized in (14), (21), (23), and (24).

Table 8 Numerical results for Example 3
Table 9 Numerical results for Example 3 with 𝜃1 = 𝜃2 = 0.2
Table 10 Numerical results for Example 3 with 𝜃1 = 𝜃2 = 0.2

5 Proofs

Proof of Proposition 1.

Property 1 Assume without loss of generality that \(\deg _{x} f\leq 2m+1\) and \(\deg _{y} f\leq 2n-1\). Then

$$g(x_{i}):=\int_{0}^{\infty} f(x_{i},y)w_{\beta}(y)dy=\sum\limits_{j=1}^{n}\lambda_{j}^{\beta}f(x_{i},y_{j})$$

is a polynomial in \(x_{i}\) of degree at most 2m + 1, and we similarly have

$$\sum\limits_{j=1}^{n+1}\tilde\lambda_{j}^{\beta}f(\tilde x_{i},\tilde y_{j})=g(\tilde x_{i}).$$

Thus,

$$G_{m,n}=\sum\limits_{i=1}^{m}\lambda_{i}^{\alpha} g(x_{i}),\qquad G_{m+1,n+1}^{A}=\sum\limits_{i=1}^{m+1}\tilde\lambda_{i}^{\alpha} g(\tilde x_{i})$$

are the Gauss and anti-Gauss rule applied on the polynomial g. Using the fact that these two rules give the opposite errors on all polynomials of degree at most 2m + 1, we conclude that

$$\frac12(G_{m,n}+G_{m+1,n+1}^{A})= \int_{0}^{\infty} g(x)w_{\alpha}(x)dx= \int_{0}^{\infty}\int_{0}^{\infty} f(x,y)w(x,y)\,dx\,dy.$$

Property 2 It is inherited from the one-variable anti-Gauss rule [17, 26].

Property 3 The positivity follows from the fact that the j-th weight is the square of the first component of the eigenvector associated with the j-th eigenvalue. Let us now prove the expression of the weights \(\tilde{\lambda }_{k}^{\alpha }\), essentially following the proof given in [26] for bounded intervals. The identity related to the weights \(\tilde{\lambda }_{j}^{\beta }\) can be proved in the same way. For each fixed k, let us consider the polynomial of degree 2m

$$f_{k}(x)=p_{m}^{\alpha}(x)\frac{\tilde{p}_{m+1}^{\alpha}(x)}{x-\tilde{x}_{k}}.$$

Since Gm(fk) = 0, the univariate averaged rule (4) gives the identity

$$\frac{1}{2} G_{m+1}^{A}(f_{k})= \frac{1}{2} \tilde\lambda_{k}^{\alpha} f_{k}(\tilde x_{k}) = \frac{1}{2} \tilde{\lambda}_{k}^{\alpha}\, p_{m}^{\alpha}(\tilde{x}_{k})\, \left(\tilde{p}_{m+1}^{\alpha}\right)^{\prime}(\tilde{x}_{k}).$$
(25)

On the other hand, for each k, there exists a polynomial \(q_{m-1,k}\) of degree m − 1 such that

$$\int_{0}^{\infty} f_{k}(x) w_{\alpha}(x) dx = \int_{0}^{\infty} p_{m}^{\alpha}(x) [x^{m}+q_{m-1,k}(x)] w_{\alpha}(x) dx = \|p_{m}^{\alpha}\|^{2} ,$$
(26)

where ∥⋅∥ denotes the norm of the weighted Hilbert space \(L^{2}_{w_{\alpha}}(\mathbb {R}^{+})\). Then, by combining (25) with (26), we get the assertion.

Property 4

Let P and Q be two polynomials of degree 2m − 2 (or 2m − 1 if k = 1 or k = m) such that

$$P(\tilde{x}_{i})=\begin{cases} g(\tilde{x}_{i}) ,& i\leq k-1,\\ 0,& i \geq k+1, \end{cases} \quad \text{and} \quad \frac{dP}{dx}(\tilde{x}_{i})=\begin{cases} g^{\prime}(\tilde{x}_{i}) ,& i\leq k-1,\\ 0,& i \geq k+2, \end{cases}$$

and

$$Q(\tilde{x}_{i})=\begin{cases} g(\tilde{x}_{i}) ,& i\leq k-2,\\ 0,& i \geq k, \end{cases}\quad\text{and}\quad \frac{dQ}{dx}(\tilde{x}_{i})= \begin{cases} g^{\prime}(\tilde{x}_{i}) ,& i\leq k-2,\\ 0,& i \geq k+1. \end{cases}$$

These polynomials are uniquely determined (see, for instance, [21, Lemma 1.3]).

The polynomial P satisfies P(x) < g(x) for \(x<\tilde {x}_{k}\), hits the x-axis at \(\tilde {x}_{k+1}\), and is negative thereafter, so

$$P(x)< \begin{cases} g(x), & x < \tilde{x}_{k+1}, \\ 0, & x > \tilde{x}_{k+1}. \end{cases}$$

Thus,

$$\begin{aligned} \sum\limits_{i=1}^{k-1}\tilde{\lambda}_{i}^{\alpha} g(\tilde x_{i}) = \sum\limits_{i=1}^{k-1}\tilde\lambda_{i}^{\alpha} P (\tilde x_{i}) & < \sum\limits_{i=1}^{m+1}\tilde\lambda_{i}^{\alpha} P (\tilde x_{i}) \\ &= \int_{0}^{\infty} P(x)w_{\alpha} (x)dx < \int_{0}^{\tilde x_{k+1}}g(x) w_{\alpha} (x)dx. \end{aligned}$$

Similarly,

$$Q(x)> \begin{cases} g(x), & x < \tilde{x}_{k-1}, \\ 0, & x > \tilde{x}_{k-1}, \end{cases}$$

and consequently, we can claim that

$$\begin{aligned} \sum\limits_{i=1}^{k}\tilde{\lambda}_{i}^{\alpha} g(\tilde x_{i}) > \sum\limits_{i=1}^{k}\tilde\lambda_{i}^{\alpha} Q(\tilde x_{i}) & =\sum\limits_{i=1}^{m+1}\tilde\lambda_{i}^{\alpha} Q(\tilde x_{i}) \\ & = \int_{0}^{\infty} Q(x)w_{\alpha} (x)dx > \int_{0}^{\tilde x_{k-1}}g(x) w_{\alpha} (x)dx. \end{aligned}$$

The last inequalities can be proved similarly. 

Proof of Corollary 1.

By Property 4 of Proposition 1 we can state

$$\sum\limits_{i=1}^{k}\tilde\lambda_{i}^{\alpha}g(\tilde x_{i}) < \int_{0}^{\tilde x_{k+2}}g(x)w_{\alpha}(x)dx$$

and

$$\sum\limits_{i=1}^{k-1}\tilde\lambda_{i}^{\alpha}g(\tilde x_{i}) > \int_{0}^{\tilde x_{k-2}}g(x)w_{\alpha}(x) dx,$$

so that 

$$\tilde{\lambda}^{\alpha}_{k} g(\tilde{x}_{k})= \sum\limits_{i=1}^{k} \tilde{\lambda}^{\alpha}_{i} g(\tilde{x}_{i})- \sum\limits_{i=1}^{k-1} \tilde{\lambda}^{\alpha}_{i} g(\tilde{x}_{i}) < \int_{\tilde{x}_{k-2}}^{\tilde{x}_{k+2}} g(x) w_{\alpha}(x) dx.$$

Proof of Theorem 1.

To prove the stability of the formula, it suffices to show that

$$\sup_{m,n} \left(\sum\limits_{k=1}^{m+1} \sum\limits_{j=1}^{n+1} \frac{\tilde{\lambda}_{k}^{\alpha}}{u_{1}(\tilde{x}_{k})} \frac{\tilde{\lambda}_{j}^{\beta}}{u_{2}(\tilde{y}_{j})} \right)<\infty.$$

Now, by Corollary 1 with \(g(x)=e^{\frac {x}{2}}\), we get

$$\begin{aligned} \sum\limits_{k=1}^{m+1} \frac{\tilde{\lambda}_{k}^{\alpha}}{u_{1}(\tilde{x}_{k})} & = \sum\limits_{k=1}^{m+1} \frac{\tilde{\lambda}_{k}^{\alpha} e^{\tilde{x}_{k}/2}}{(1+\tilde{x}_{k})^{\eta_{1}} \tilde{x}_{k}^{\gamma_{1}}} \\ &\leq \sum\limits_{k=1}^{m+1} \left[ \int_{\tilde{x}_{k-2}}^{\tilde{x}_{k}} \frac{e^{x/2} w_{\alpha}(x) }{(1+x)^{\eta_{1}} x^{\gamma_{1}}} dx + \left(\frac{1+\tilde{x}_{k+2}}{1+\tilde{x}_{k}} \right)^{\eta_{1}} \left(\frac{\tilde{x}_{k+2}}{\tilde{x}_{k}} \right)^{\gamma_{1}} \int_{\tilde{x}_{k}}^{\tilde{x}_{k+2}} \frac{e^{x/2} w_{\alpha}(x) }{(1+x)^{\eta_{1}} x^{\gamma_{1}}} dx\right] \\ &\leq \mathcal{C} \int_{0}^{\infty} \frac{x^{\alpha-\gamma_{1}}}{(1+x)^{\eta_{1}}} e^{-x/2} dx, \end{aligned}$$

where \(\mathcal {C}\) is a positive constant independent of m. Similarly, we deduce that

$$\sum\limits_{j=1}^{n+1} \frac{\tilde{\lambda}_{j}^{\beta}}{u_{2}(\tilde{y}_{j})} \leq \mathcal{C} \int_{0}^{\infty} \frac{y^{\beta-\gamma_{2}}}{(1+y)^{\eta_{2}}} e^{-y/2} dy.$$

Consequently, by the assumptions on α and β, we deduce the boundedness of the integrals and hence the stability of the formula. As for the convergence, for any \(P \in \mathbb {P}_{2m-1,2n-1}\) we have

$$\begin{aligned} |{R}^{A}_{m+1,n+1}(f)| & = |{R}^{A}_{m+1,n+1}(f-P)| =\left|I(f-P)-{G}^{A}_{m+1,n+1}(f-P) \right|\\ & \leq \|(f-P)u\|_{\infty} \left\{ I\left(\frac{1}{u}\right) + \left(\sum\limits_{k=1}^{m+1} \frac{\tilde{\lambda}_{k}^{\alpha}}{u_{1}(\tilde{x}_{k})} \right) \left(\sum\limits_{j=1}^{n+1} \frac{\tilde{\lambda}_{j}^{\beta}}{u_{2}(\tilde{y}_{j})} \right) \right\} \\ & \leq \mathcal{C} \|(f-P)u\|_{\infty}. \end{aligned}$$

Then, taking the infimum over \(\mathbb {P}_{2m-1,2n-1}\), the assertion follows by the assumptions. 

Proof of Corollary 2

By definition, taking Theorem 1 into account and recalling that \(|R_{m,n}(f)| \leq \mathcal {C} E_{2m-1,2n-1}(f)_{u}\), one gets 

$$|I(f)-G_{2m+1,2n+1}^{L}(f)| \leq \frac{1}{2}\left[|R_{m,n}(f)|+|{R}^{A}_{m+1,n+1}(f)|\right] \leq \frac{\mathcal{C}}{2} E_{2m-1,2n-1}(f)_{u}.$$

Proof of Proposition 3

Both assertions follow by the properties of the corresponding univariate quadrature rule \(G^{T}_{m+2}\) [14]. 

Proof of Proposition 4

By the first assertion of Proposition 1 we can write

$$\begin{aligned} & I(p)- \mathring{G}^{A}_{m+1,n+1}(p) \\ & = I(p)- G^{A}_{m+1,n+1}(p)+ \sum\limits_{k=\kappa+1}^{m+1} \sum\limits_{j=\iota+1}^{n+1} \tilde{\lambda}^{\alpha}_{k} \tilde{\lambda}^{\beta}_{j} p(\tilde{x}_{k},\tilde{y}_{j}) \\ & = -(I(p)-\mathring{G}_{m,n}(p))+ \sum\limits_{k=\kappa + 1}^{m} \sum\limits_{j=\iota + 1}^{n} \!\!\lambda_{k}^{\alpha} \lambda_{j}^{\beta} p(x_{k},y_{j})+\sum\limits_{k=\kappa + 1}^{m+1} \sum\limits_{j=\iota + 1}^{n+1} \!\!\tilde{\lambda}^{\alpha}_{k} \tilde{\lambda}^{\beta}_{j} p(\tilde{x}_{k},\tilde{y}_{j}). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \mathring{R}^{A}_{m+1,n+1}(p) & = -\mathring{R}_{m,n}(p)+ \sum\limits_{k=\kappa + 1}^{m} \sum\limits_{j=\iota + 1}^{n} \!\!\lambda_{k}^{\alpha} \lambda_{j}^{\beta} p(x_{k},y_{j}) + \sum\limits_{k=\kappa + 1}^{m+1} \sum\limits_{j=\iota + 1}^{n+1} \!\!\tilde{\lambda}^{\alpha}_{k} \tilde{\lambda}^{\beta}_{j} p(\tilde{x}_{k},\tilde{y}_{j}) \\ & = -\mathring{R}_{m,n}(p)+S_{1}+S_{2}. \end{aligned}$$

Let us now prove that S1 and S2 are negligible, so that they cannot influence the sign of the first term on the right-hand side.

Let u(x,y) = u1(x)u2(y) with \(u_{i}(x)=(1+x)^{\eta _{i}} x^{\gamma _{i}} e^{-x/2},\) where ηi ≥ 0 for each i = 1,2 and γ1 < α + 1, γ2 < β + 1. Then, setting \(\theta =\min \limits \{\theta _{1},\theta _{2} \}\) and \(M=\min \limits \{m,n \}\), we have S1 = sign(S1)|S1|, where [9, formula 4.2]

$$|S_{1}|\leq \|p\|_{C_{u}([4M\theta,+\infty))} \sum\limits_{k=\kappa+1}^{m} \sum\limits_{j=\iota+1}^{n} \frac{\lambda_{k}^{\alpha}}{u_{1}(x_{k})} \frac{\lambda_{j}^{\beta}}{u_{2}(y_{j})} \leq \mathcal{C} e^{-cM} I \left(\frac{1}{u}\right) \|pu\|_{\infty},$$

with \(\mathcal {C}\) and c two positive constants independent of M and p, and with c depending on 𝜃. Hence, S1 approaches zero and, whatever its sign, it cannot change the sign of \(\mathring{R}_{m,n}(p)\). A similar result can be proved for the sum S2, so we have the assertion. 

Proof of Proposition 5

Let \(P_{M}\) be the polynomial of best approximation of f. Then we can write

$$\begin{aligned} |\mathring{R}^{T}_{m+2,n+2}(f)| \leq |{R}^{T}_{m+2,n+2}(f)|& + \sum\limits_{k=\kappa+1}^{m+2} \sum\limits_{j=\iota+1}^{n+2} \hat{\lambda}_{k}^{\alpha} \hat{\lambda}_{j}^{\beta} |f(\hat{x}_{k},\hat{y}_{j})-P_{M}(\hat{x}_{k},\hat{y}_{j})| \\ & + \sum\limits_{k=\kappa+1}^{m+2} \sum\limits_{j=\iota+1}^{n+2} \hat{\lambda}_{k}^{\alpha} \hat{\lambda}_{j}^{\beta} |P_{M}(\hat{x}_{k},\hat{y}_{j})|. \end{aligned}$$

By virtue of the exactness of the formula \(G^{T}_{m+2,n+2}\) on constants, we have

$$\begin{aligned} &\sum\limits_{k=\kappa+1}^{m+2} \sum\limits_{j=\iota+1}^{n+2} \frac{\hat{\lambda}_{k}^{\alpha}}{u_{1}(\hat{x}_{k})} \frac{\hat{\lambda}_{j}^{\beta}}{u_{2}(\hat{y}_{j})} |(f(\hat{x}_{k},\hat{y}_{j})-P_{M}(\hat{x}_{k},\hat{y}_{j}))u(\hat{x}_{k},\hat{y}_{j})| \\ & \leq E_{M}(f)_{u} \sum\limits_{k=1}^{m+2} \sum\limits_{j=1}^{n+2} \frac{\hat{\lambda}_{k}^{\alpha}}{u_{1}(\hat{x}_{k})} \frac{\hat{\lambda}_{j}^{\beta}}{u_{2}(\hat{y}_{j})} \\ & \leq \mathcal{C} E_{M}(f)_{u} {\int}_{0}^{\infty} {\int}_{0}^{\infty} \frac{w(x,y)}{u(x,y)} dx dy. \end{aligned}$$

Moreover, by proceeding as done in Proposition 4 for the sums S1 and S2, we get

$$\sum\limits_{k=\kappa+1}^{m+2} \sum\limits_{j=\iota+1}^{n+2} \frac{\hat{\lambda}_{k}^{\alpha}}{u_{1}(\hat{x}_{k})} \frac{\hat{\lambda}_{j}^{\beta}}{u_{2}(\hat{y}_{j})} |P_{M}(\hat{x}_{k},\hat{y}_{j})| \leq \mathcal{C} e^{-c(m+n)} \|fu\|_{\infty}.$$

6 Conclusions and research perspectives

In this paper, we have proposed four cubature rules to approximate double integrals defined on the domain \([0,+\infty ) \times [0,+\infty )\). The first one is the averaged Gauss cubature formula, which is an average of the Gauss cubature formula and the anti-Gauss cubature rule, introduced here for the first time. Then, we have extended to the bivariate case the formula proposed in [15] by deriving the reduced generalized averaged Gauss cubature rule. Finally, we have investigated the corresponding truncated versions obtained by removing “large” nodes.

Our numerical results confirm the theoretical properties of these schemes. The averaged rule not only provides a good estimate for the remainder terms \({R}_{m,m}(f)\) and \(\mathring{R}_{m,m}(f)\) but also gives a good approximation of the integral with fewer function evaluations than the Gauss cubature formula G2m,2n. To this advantage is added the reduced computational cost for computing nodes and weights, compared with the cost required for the construction of G2m,2n. Moving to the truncated versions, the number of function evaluations is further reduced.

All the above formulae are constructed as tensor products of the corresponding univariate formulae and are therefore based on the zeros of polynomials orthogonal with respect to a weight function of the type \(u(x) = x^{\alpha} e^{-x}\).

Regarding research perspectives, we intend to construct these formulae for the more general weight \(u(x)=x^{\alpha } e^{-x^{\beta }}\) [27,28,29]. In this case, the coefficients of the recurrence relation are not known in closed form and must therefore be computed numerically. The Mathematica package “Orthogonal Polynomial” [30, 31] computes such coefficients (and then evaluates nodes and weights) for the Gauss and anti-Gauss rules, but not for the reduced generalized averaged formula. Hence, we plan to develop a software package, written in Matlab, which includes the computation of the recurrence coefficients for all the cubature formulae investigated here.

Our package will also include solvers for the numerical solution of integral equations defined on the set \((0,\infty ) \times (0,\infty )\) based on the cubature schemes presented in this paper [32]. The properties of these schemes will yield a considerable computational saving in the solution of the resulting linear systems.