1 Introduction

The paper deals with the numerical computation of integrals of the type

$$\begin{aligned} I(f,y):=\int _{-1}^1 f(x)K(x,y)w(x)dx, \quad y \in S\subset \mathbb {R}, \end{aligned}$$
(1)

where f is a sufficiently smooth function, \(w(x)=(1-x)^\alpha (1+x)^\beta \) is a Jacobi weight with parameters \(\alpha ,\beta >-1\), and the kernel \(K(x,y)\), defined in \(D=\{(x,y): x\in [-1,1], \ y\in S\}\), may exhibit some pathological behavior. The topic is of interest in many applications, and in particular in numerical methods for functional equations (see e.g. [8, 16, 26, 29]), which model problems arising in a large variety of fields: mathematical physics, electrochemistry, crystal growth, biophysics, viscoelasticity, heat transfer models, etc. Because of this large variety of applications, great attention has been paid to these equations in the literature, and several numerical methods for their solution have been proposed (see [1, 4] and the references therein).

Weakly singular functions, such as \(\log |x-y|, \ |x-y|^\mu , \mu >-1,\) or highly oscillating functions, such as \(\sin (yx),\) \(\cos (yx)\) for \(\ |y|\gg 1\), are only examples of possible kernels for which the accurate computation of I(f,y) can be successfully performed by means of the so-called product integration rules, i.e. formulae based on the approximation of the “smooth” function f and on the exact computation of the coefficients of the rule. Product rules of interpolation type based on the zeros of orthogonal polynomials are well known in the literature, also for the case of unbounded intervals and/or double integrals (see e.g. [15, 18, 21,22,23,24,25, 28, 30, 33]). These methods produce very satisfactory results, since the quadrature error depends essentially on the smoothness of the function f, and usually behaves like the error of best polynomial approximation of f. However, in many applications f is known only on a set of equally spaced points or, more generally, the integrals have to be computed starting from scattered data. In these cases other procedures involving composite quadrature rules can be used, but this approach leads to a low degree of approximation, exhibiting saturation phenomena. In this setting, we propose a product integration rule based on the approximation of f, known on a set of equispaced nodes, by the constrained mock-Chebyshev least squares linear operator \(\hat{P}_{r,n}(f)\)  [9, 12]. Such an operator has recently been used to derive an efficient quadrature rule [10, 11] based on equidistant points for integrals of the type (1) with \(K(x,y)\equiv 1\). As we will show, the product formula we introduce here is convergent in suitable subspaces of \(C([-1,1])\), and we provide estimates of the quadrature error in these cases.

The outline of the paper is the following. In Sect. 2 we briefly recall the main results concerning the product formula and the constrained mock-Chebyshev least squares interpolant. In Sect. 3 we introduce the constrained mock-Chebyshev product formula, with an estimate of the error in Sobolev-type subspaces, and provide some implementation details for a selection of kernels. In Sect. 4 we give some numerical tests and comparisons with the results achieved by the product rule on Chebyshev zeros.

2 Preliminaries

From now on, \(\mathcal {C}\) will denote any positive constant, which may take different values at different occurrences, and the notation \(\mathcal {C}\ne \mathcal {C}(a,b,\dots )\) means that \(\mathcal {C}\) does not depend on \(a,b,\dots \). If \(A,B >0\) are quantities depending on some parameters, we write \(A \sim B\) if there exists a constant \(\mathcal {C}\ne \mathcal {C}(A,B)\) such that \(\mathcal {C}^{-1} B \le A \le \mathcal {C} B.\) Furthermore, we denote by \(\varPi _m\) the space of algebraic polynomials of degree at most m. Finally, for any bivariate function h(x,y), we denote its projections on one variable by \(h_y(x)\) and \(h_x(y)\), respectively.

2.1 Function spaces and orthogonal basis

Let us denote by \(C^0([-1,1])\) the space of continuous functions in \([-1,1]\) equipped with the norm \( \Vert f\Vert _{\infty }:=\max \limits _{x\in [-1,1]}|f(x)|\). For any \(f\in C^0([-1,1])\)

$$\begin{aligned} \displaystyle E_r(f)=\inf _{Q_r\in \varPi _r}\Vert f-Q_r\Vert _\infty \end{aligned}$$
(2)

is the error of best polynomial approximation of f in uniform norm. As is well known, by the Weierstrass Theorem [6],

$$\begin{aligned} f\in C^0([-1,1])\Leftrightarrow \lim _{r\rightarrow \infty } E_r(f)=0. \end{aligned}$$

By denoting with \(\mathcal{A}\mathcal{C}(-1,1)\) the space of functions in \([-1,1]\) which are absolutely continuous on every closed subset of \((-1,1)\) and by setting \(\phi (x)=\sqrt{1-x^2}\), let

$$\begin{aligned} W_s=\left\{ f\in C^0([-1,1]): f^{(s-1)}\in \mathcal{A}\mathcal{C}(-1,1)\, \text { and } \, \Vert f^{(s)}\phi ^{s}\Vert _\infty <\infty \right\} , \end{aligned}$$

be the Sobolev space of order \(s\in \mathbb {N},\ s\ge 1\), endowed with the norm

$$\begin{aligned} \Vert f\Vert _{W_s}=\Vert f\Vert _\infty + \Vert f^{(s)}\phi ^{s}\Vert _\infty . \end{aligned}$$

To estimate the error of best polynomial approximation, we recall the Favard inequality [14], which holds for each \(f\in W_s,\)

$$\begin{aligned} E_r(f)\le \mathcal {C}\, \frac{\Vert f\Vert _{W_s}}{r^s}, \quad \mathcal {C}\ne \mathcal {C}(r,f). \end{aligned}$$
(3)

We also set

$$\begin{aligned} w^C(x)&:=\frac{1}{\sqrt{1-x^2}},\\ T_i(x)&:=\cos (i\arccos x), \quad i\in \mathbb {N}_0,\\ \Vert T_i \Vert _{2,\sqrt{w^C}}&:= \left( \int _{-1}^{1} T_i(x)^2 w^C(x)\,dx \right) ^{\frac{1}{2}}= {\left\{ \begin{array}{ll} \sqrt{\pi }, &{} i=0,\\ \sqrt{\frac{\pi }{2}}, &{} i \ne 0, \end{array}\right. } \end{aligned}$$

and we denote by \(\left\{ {p}_i(w^C)\right\} _{i\in \mathbb {N}_0}\) the sequence of orthonormal polynomials w.r.t. the 1-st kind Chebyshev weight \(w^C\)

$$\begin{aligned} p_i(w^C,x):=\frac{T_i(x)}{\Vert T_i \Vert _{2,\sqrt{w^C}}}, \quad i\in \mathbb {N}_0. \end{aligned}$$

2.2 A product integration rule on the zeros of 1-st kind Chebyshev polynomials

Let \(\{z_1,\dots , z_{r+1}\}\) be the zeros of the \((r+1)\)-th orthonormal polynomial \(p_{r+1}(w^C,\cdot )\) with respect to the 1-st kind Chebyshev weight \(w^C(x)=\frac{1}{\sqrt{1-x^2}}\) and denote by \(\{\lambda _i\}_{i=1}^{r+1}\) the corresponding Christoffel numbers. Let \(\mathcal {L}_{r}(w^C,f)\in \varPi _r\) be the interpolating polynomial of f at the zeros \(\{z_i\}_{i=1}^{r+1}\) of \(p_{r+1}(w^C)\), i.e.

$$\begin{aligned} \mathcal {L}_{r}(w^C,f,z_i)=f(z_i), \quad i=1,2,\dots ,r+1. \end{aligned}$$

It can be represented as (see e.g. [22], Chapter 4)

$$\begin{aligned} \mathcal {L}_{r}(w^C,f,x)=\sum _{i=1}^{r+1} f(z_i)\ell _i(w^C,x)=\sum _{i=1}^{r+1} f(z_i)\lambda _{i}\sum \limits _{k=0}^{r}p_{k}(w^C,x)p_{k}(w^C,z_i). \end{aligned}$$

A class of interpolating product rules is based on the approximation of the function f in (1) by \(\mathcal {L}_{r}(w^C,f)\) (see e.g. [22] and the references therein), i.e.

$$\begin{aligned} \int _{-1}^{1}f(x)K(x,y)w(x)\,dx&= \int _{-1}^{1}\mathcal {L}_r(w^C,f,x)K(x,y)w(x)\,dx+e_r(f,y)\\&=:\varSigma _r(f,y)+e_r(f,y), \end{aligned}$$
(4)

where

$$\begin{aligned} \varSigma _r(f,y) = \sum _{i=1}^{r+1}f(z_i) \lambda _{i} \sum \limits _{k=0}^{r}p_{k}(w^C,z_i)\int _{-1}^{1} p_{k}(w^C,x)K(x,y)w(x)dx, \end{aligned}$$

and

$$\begin{aligned} e_r(f,y)=\int _{-1}^{1}(f(x)-\mathcal {L}_r(w^C,f,x))K(x,y)w(x)dx. \end{aligned}$$

The rule is exact for polynomials of degree r, i.e.

$$\begin{aligned} e_r(Q_r,y)=0, \quad \forall \, Q_r\in \varPi _r, \ \forall \, y\in S. \end{aligned}$$

As is well known, the accuracy of the product rule relies on the “exact” evaluation of the modified moments

$$\begin{aligned} M_k(y):=\int _{-1}^{1} p_{k}(w^C,x)K(x,y)w(x)dx, \quad k=0,\dots , r. \end{aligned}$$
(5)

Depending on the kernel K and on the weight function w, a standard computation of \(M_k(y),\,k=0,\ldots ,r,\) can be carried out by recurrence relations (see, e.g., [32]). Besides this approach, other strategies can be devised, as long as the modified moments are computed with high accuracy. Concerning the error estimate, the following theorem holds, as a consequence of a more general result by Nevai [27].

Theorem 1

Let \(f \in C^0([-1,1])\). Under the assumption

$$\begin{aligned} \sup _{y \in S} \int _{-1}^{1} |K(x,y) |w(x) \, \log \left( 2+|K(x,y) |w(x)\right) \, dx < +\infty , \end{aligned}$$
(6)

the following estimate holds true

$$\begin{aligned} \sup _{y \in S} \, |e_{r}(f,y) |\le \mathcal {C} E_{r}(f), \quad \mathcal {C}\ne \mathcal {C}(r,f). \end{aligned}$$
(7)

2.3 Constrained mock-Chebyshev least squares interpolant

Let \(X_n=\{ \xi _i\}_{i=0}^n\) be the set of \(n+1\) equally spaced nodes of \([-1,1]\), i.e.

$$\begin{aligned} X_n=\left\{ \xi _i=-1+\frac{2}{n}i:\, i=0,\dots , n\right\} , \end{aligned}$$

and let the function f be known only at the points of \(X_n\). The main idea of the constrained mock-Chebyshev least squares method [9, 13] is to construct an interpolant of f on a proper subset of \(X_n\), formed by the \(m+1\) nodes closest to the Chebyshev–Lobatto nodes, and to use the remaining \(n-m\) points of \(X_n\) to improve the accuracy of the approximation by a process of simultaneous regression of degree \(p\ge 0\). To be more precise, let \(m=\left\lfloor \pi \sqrt{\frac{n}{2}} \right\rfloor \), and denote by \(X_m^{CL}\) the set of Chebyshev–Lobatto nodes of order \(m+1\)

$$\begin{aligned} X_m^{CL}=\left\{ \xi ^{CL}_i=-\cos \bigg (\frac{\pi }{m}i\bigg ):i=0,\dots , m\right\} . \end{aligned}$$

For any \(i=0,\dots ,m\), let us define the node \(\xi ^{\prime }_{i}\in X_n\) as

$$\begin{aligned} \left|\xi ^\prime _i-\xi ^{CL}_i\right|:=\min _{\xi _j\in X_n}\left|\xi _j-\xi ^{CL}_i\right|, \end{aligned}$$

then \(X^\prime _m:=\{\xi ^{\prime }_{i}:i=0,\dots ,m\}\subset X_n\) is the mock-Chebyshev set of order m related to \(X_n\) [2, 19, 20]. Although \( X_{n-m}^{\prime \prime }:=X_n\smallsetminus X^{\prime }_m\) is not an equispaced grid, in [9] it is proven that, for n sufficiently large, it is possible to approximate an equispaced grid of \(q=\lfloor n/6\rfloor \) internal nodes of \([-1,1]\) with nodes of \( X_{n-m}^{\prime \prime }\). We denote such a grid of q elements by \(\tilde{X}_{n-m}^{\prime \prime }\). The degree \(p=\left\lfloor \pi \sqrt{\frac{n}{12}}\right\rfloor \) of the least-squares polynomial is selected so that there is a subset of the equispaced set, of cardinality \(p+1\), which is close, in the mock-Chebyshev sense, to the grid of the \(p+1\) Chebyshev nodes

$$\begin{aligned} \cos \left( \frac{2k-1}{2p+2}\pi \right) , \quad k=1,\dots ,p+1. \end{aligned}$$
(8)
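
The node-selection step just described is straightforward to implement. The following Python snippet is a minimal sketch under our own naming conventions (the function mock_chebyshev_subset is hypothetical, not the authors' code): it exploits the fact that, since \(\xi _i=-1+2i/n\), the node of \(X_n\) closest to a given point \(\xi \) has index \(\textrm{round}((\xi +1)n/2)\).

```python
import numpy as np

def mock_chebyshev_subset(n):
    """Select the mock-Chebyshev subset X'_m of the equispaced grid X_n.

    Returns the grid X_n, the indices of X'_m in X_n, and the
    parameters m (mock-Chebyshev order) and p (regression degree).
    """
    m = int(np.floor(np.pi * np.sqrt(n / 2)))     # m = floor(pi*sqrt(n/2))
    p = int(np.floor(np.pi * np.sqrt(n / 12)))    # p = floor(pi*sqrt(n/12))
    xi = -1 + 2 * np.arange(n + 1) / n            # equispaced nodes X_n
    cl = -np.cos(np.pi * np.arange(m + 1) / m)    # Chebyshev-Lobatto nodes
    idx = np.round((cl + 1) * n / 2).astype(int)  # closest equispaced indices
    assert len(np.unique(idx)) == m + 1           # expected distinct for this m
    return xi, idx, m, p
```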

Set \(r=m+p+1\) and let \(\mathcal {B}_r=\{u_0(x),\dots ,u_r(x)\}\) be a basis of \(\varPi _r\). The constrained mock-Chebyshev least squares interpolant \(\hat{P}_{r,n}(f)\in \varPi _r\) is

$$\begin{aligned} \hat{P}_{r,n}(f,x)=\sum _{i=0}^{r}a_iu_i(x), \end{aligned}$$
(9)

where the vector \(\varvec{a}=[a_0,a_1,\dots ,a_r]^{T}\) is computed by solving the KKT linear system [3, 12]

$$\begin{aligned} \begin{bmatrix} 2 V^TV &{} C^T \\ C &{} 0 \\ \end{bmatrix} \begin{bmatrix} \varvec{a} \\ \varvec{z} \\ \end{bmatrix}= \begin{bmatrix} 2 V^T\varvec{b} \\ \varvec{d} \\ \end{bmatrix}, \end{aligned}$$
(10)

with

$$\begin{aligned} V=[u_j(\xi _i)]_{\begin{array}{c} i=0,\dots ,n\\ j=0,\dots ,r \end{array}}, \qquad C=[u_j(\xi _i)]_{\begin{array}{c} i=0,\dots ,m\\ j=0,\dots ,r \end{array}}, \end{aligned}$$
(11)

\(\varvec{b}=[f(\xi _0),\dots , f(\xi _n)]^T,\;\varvec{d}=[f(\xi _0),\dots , f(\xi _m)]^T\) and \(\varvec{z}=[\hat{z}_1,\dots ,\hat{z}_{m+1}]^T\) is the vector of Lagrange multipliers. In defining V and C in (11), we assume that the nodes \(\xi _i\) have been reordered so that \(\xi _i=\xi ^{\prime }_i,\,\) \(i=0,\dots ,m,\) and that the polynomials \(u_0,\dots ,u_m\) span \(\varPi _m\). In the following we denote by

$$\begin{aligned} M= \begin{bmatrix} 2 V^TV &{} C^T \\ C &{} 0 \\ \end{bmatrix}, \end{aligned}$$
(12)

the KKT matrix and by \(\kappa (M)=\Vert M \Vert _1 \Vert M^{-1} \Vert _1 \) its condition number in \(l_1\)-norm.
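
As an illustration, the assembly and solution of the KKT system (10) in the Chebyshev basis (20) might look as follows. This is a sketch with hypothetical names of our own (cmcls_coefficients is not the authors' routine), and it calls a dense solver for simplicity; a careful implementation would monitor \(\kappa (M)\).

```python
import numpy as np

def cmcls_coefficients(xi, idx, fvals, r):
    """Coefficients a of the constrained mock-Chebyshev least squares
    polynomial (9), obtained by solving the KKT system (10).

    xi    : the n+1 equispaced nodes X_n
    idx   : indices in xi of the mock-Chebyshev subset X'_m
    fvals : samples f(xi_i), i = 0, ..., n
    r     : degree of the approximant, r = m + p + 1
    """
    deg = np.arange(r + 1)
    V = np.cos(np.outer(np.arccos(xi), deg))  # V[i, j] = T_j(xi_i), cf. (11)
    C = V[idx, :]                             # interpolation constraints on X'_m
    k = C.shape[0]                            # k = m + 1
    M = np.block([[2 * V.T @ V, C.T],
                  [C, np.zeros((k, k))]])     # KKT matrix (12)
    rhs = np.concatenate([2 * V.T @ fvals, fvals[idx]])
    sol = np.linalg.solve(M, rhs)
    return sol[:r + 1]                        # a_0, ..., a_r; the rest is z
```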

Remark 1

We note that the approximant \(\hat{P}_{r,n}(f)\) is uniquely determined by the evaluations of the function f at the set of equispaced nodes \(X_n\). Consequently,

$$\begin{aligned} \hat{P}_{r,n}(f)=\hat{P}_{r,n}(L_n(f)) \end{aligned}$$
(13)

where

$$\begin{aligned} L_n(f,x)=\sum _{i=0}^n f(\xi _i)\ell _i(x), \qquad \ell _i(x)=\prod _{\begin{array}{c} j=0\\ j\ne i \end{array}}^{n}\frac{x-\xi _j}{\xi _i-\xi _j}, \qquad i=0,\ldots ,n. \end{aligned}$$
(14)

is the Lagrange polynomial interpolating f at the nodes of \(X_n\).

The constrained mock-Chebyshev least squares operator

$$\begin{aligned} \hat{P}_{r,n}: C^0([-1,1])&\rightarrow \varPi _r \\ f(x)&\mapsto \hat{P}_{r,n}(f,x) \end{aligned}$$

reproduces polynomials of degree \(\le r\) (cf. [9]) and interpolates the function f at the mock-Chebyshev subset of nodes, that is

$$\begin{aligned} \hat{P}_{r,n}(f,\xi ^{\prime }_i)=f(\xi ^{\prime }_i), \quad i=0,\dots ,m. \end{aligned}$$

Denoting by

$$\begin{aligned} \hat{R}_{r,n}(f,x):=f(x)-\hat{P}_{r,n}(f,x), \quad x\in [-1,1], \end{aligned}$$
(15)

the approximation error by means of the constrained mock-Chebyshev least squares interpolant and by setting

$$\begin{aligned} B_n=D\left( 2 (r+1)\kappa (M)+(m+1)\Vert M^{-1}\Vert _1\right) \end{aligned}$$
(16)

where \(D:=\max \limits _{j=0,\dots ,r}\left\Vert u_j \right\Vert _{\infty }\), the following theorem holds [13]

Theorem 2

Let \(f\in C^0([-1,1])\). Then

$$\begin{aligned} \left\Vert \hat{R}_{r,n}(f)\right\Vert _{\infty }\le \left( 1+B_n\right) E_r(f) \end{aligned}$$
(17)

where \(E_r(f)\), introduced in (2), is the error of best uniform approximation of f by polynomials of \(\varPi _r\).

Corollary 1

Let \(f\in C^k([-1,1])\), \(k=0,\dots ,r\). Then we have

$$\begin{aligned} \left\| \hat{R}_{r,n}(f)\right\| _{\infty }\le \left( 1+B_n\right) \omega _f\left( \frac{\pi }{r+1}\right) , \quad k=0, \end{aligned}$$
(18)
$$\begin{aligned} \left\| \hat{R}_{r,n}(f)\right\| _{\infty }\le \left( \frac{\pi }{2}\right) ^k\left( 1+B_n\right) \frac{\left\| f^{(k)}\right\| _{\infty }}{(r+1)r\cdots (r-k+2)}, \quad 0<k\le r, \end{aligned}$$
(19)

where \(\omega _f(\cdot )\) is the modulus of continuity of the function f (cf. [5]).

Proof

The proof follows by combining Theorem 2 and Jackson Theorem (see for instance [5, Ch. 4]). \(\square \)

In what follows we are going to choose the basis \(\mathcal {B}_r\) as

$$\begin{aligned} \left\{ T_i(x)\, :\, i=0,\dots ,r\right\} , \end{aligned}$$
(20)

and hence the constrained mock-Chebyshev least squares polynomial takes the form

$$\begin{aligned} \hat{P}_{r,n}^C(f,x)= \sum _{i=0}^{r}a_i \Vert T_i \Vert _{2,\sqrt{w^C}}\, p_i(w^C,x). \end{aligned}$$
(21)

Moreover, in [13] it has been shown that

$$\begin{aligned} B_n\approx e^{3.66}n^{2.03}. \end{aligned}$$
(22)

3 The main result

From now on we assume that the function f is known only on the set of equispaced nodes \(X_n\). As we announced in the introduction, the product integration rule we are going to introduce is based on the approximation of the function f by \(\hat{P}^C_{r,n}(f)\) expressed in the Chebyshev polynomial basis as in (21) and on the “exact” evaluation of the coefficients of the quadrature method. Indeed, by (1) we get

$$\begin{aligned} I(f,y)&= \int _{-1}^1 \hat{P}_{r,n}^C(f,x)K(x,y)w(x)\,dx+ \hat{e}_{r,n}(f,y) \end{aligned}$$
(23)
$$\begin{aligned} &= \int _{-1}^{1} \left( \sum _{i=0}^{r}a_i \Vert T_i \Vert _{2,\sqrt{w^C}}\, p_i(w^C,x) \right) K(x,y)w(x)\,dx+ \hat{e}_{r,n}(f,y) \end{aligned}$$
(24)
$$\begin{aligned} &=: \sum _{i=0}^{r}a_i\Vert T_i \Vert _{2,\sqrt{w^C}}\;M_i(y)+\hat{e}_{r,n}(f,y)=\varSigma _{r,n}(f,y)+\hat{e}_{r,n}(f,y), \end{aligned}$$
(25)

where \(w(x)=(1-x)^\alpha (1+x)^\beta ,\;\alpha ,\beta >-1\),

$$\begin{aligned} M_i(y)=\int _{-1}^1 p _i(w^C,x)K(x,y)w(x)dx \end{aligned}$$

are the modified moments defined in (5) and

$$\begin{aligned} \quad \hat{e}_{r,n}(f,y)=\int _{-1}^1 (f(x)- \hat{P}_{r,n}^C(f,x))K(x,y)w(x)dx \end{aligned}$$
(26)

is the quadrature error.
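
Once the coefficients \(a_i\) and the modified moments \(M_i(y)\) are available, the quadrature sum \(\varSigma _{r,n}(f,y)\) in (25) is a single inner product. A minimal sketch (product_rule is a hypothetical name of ours):

```python
import numpy as np

def product_rule(a, moments):
    """Quadrature sum (25): sum_i a_i * ||T_i||_{2,sqrt(w^C)} * M_i(y)."""
    norms = np.full(len(a), np.sqrt(np.pi / 2))  # ||T_i|| = sqrt(pi/2), i >= 1
    norms[0] = np.sqrt(np.pi)                    # ||T_0|| = sqrt(pi)
    return float(np.dot(a * norms, moments))
```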

Theorem 3

The quadrature sum \(\varSigma _{r,n}(f,y)\) in (25) takes the following expression

$$\begin{aligned} \varSigma _{r,n}(f,y)=\sum _{i=0}^n \hat{w}_i(y)f(\xi _i), \end{aligned}$$
(27)

where

$$\begin{aligned} \hat{w}_i(y)= \int _{-1}^{1} \hat{P}_{r,n}^{C}(\ell _i,x)K(x,y)w(x)\,dx, \quad i=0,\dots , n. \end{aligned}$$
(28)

Proof

By the property (13) we get

$$\begin{aligned} \hat{P}_{r,n}^{C}(f)=\hat{P}_{r,n}^{C}(L_n(f))=\hat{P}_{r,n}^{C}\left( \sum _{i=0}^n f(\xi _i)\ell _i \right) =\sum _{i=0}^n f(\xi _i) \hat{P}_{r,n}^{C}(\ell _i), \end{aligned}$$
(29)

where \(L_n(f)\) is defined as in (14). By substituting (29) in (23)–(24), we obtain

$$\begin{aligned} \varSigma _{r,n}(f,y)&= \int _{-1}^{1} \hat{P}_{r,n}^{C}(f,x)K(x,y)w(x)\,dx\\&= \int _{-1}^{1} \hat{P}_{r,n}^{C}(L_n(f),x)K(x,y)w(x)\,dx\\&= \sum _{i=0}^{n} f(\xi _i)\int _{-1}^{1} \hat{P}_{r,n}^{C}(\ell _i,x)K(x,y)w(x)\,dx\\&= \sum _{i=0}^n \hat{w}_{i}(y)f(\xi _i). \end{aligned}$$

\(\square \)

Remark 2

Equations (27) and (28) show that the weights of the quadrature rule \(\varSigma _{r,n}(f,y)\) depend on y and must therefore be recomputed whenever y changes. This dependence on y is typical of product integration rules which, in exchange for fast convergence, require more computational effort. The same happens in the case of product rules based on the zeros of orthogonal polynomials.

Remark 3

When \(K(x,y)\equiv 1\), the quadrature rule (27) with weights (28) reduces to the stable quadrature rule on \(n+1\) equispaced nodes with degree of exactness r already introduced in [11]. This formula is based on the approximation of the function f by \(\hat{P}^C_{r,n}(f)\), expressed in the Chebyshev polynomial basis as in (21), and on the exact evaluation of the coefficients of the quadrature method by a Gauss–Christoffel quadrature formula of order m [17, Ch. 3]. It is worth noting that the use of Clenshaw–Curtis quadrature rules with algebraic degree of precision equal to r, for which fast algorithms for computing the weights are well known [34, 35], would produce another kind of quadrature formula from equispaced nodes, which is worthy of investigation.

We observe that the construction of the proposed quadrature rule requires the same modified moments as the product rule (4), i.e. it requires the same computational effort. However, differently from formula (4), the new rule (25) has the main advantage of using samples of f at equally spaced nodes. Concerning the convergence of the product rule, we are able to prove the following.

Theorem 4

Under the assumption

$$\begin{aligned} U=\sup _{y \in S} \Vert K_y w \Vert _1 < +\infty , \end{aligned}$$
(30)

for any \(y\in S\) the following error estimate holds true

$$\begin{aligned} \left|\hat{e}_{r,n}(f,y)\right|\le U\left( 1+B_n\right) E_{r}(f). \end{aligned}$$

Proof

From Theorem 2, in view of (26) and under the assumption (30), we have

$$\begin{aligned} |\hat{e}_{r,n}(f,y) |&\le \int _{-1}^{1} |f(x)-\hat{P}_{r,n}^C(f,x) ||K(x,y) |w(x)\,dx\\&\le \left\Vert f-\hat{P}_{r,n}^C(f)\right\Vert _{\infty }\int _{-1}^{1} \left|K(x,y)\right|w(x)\,dx\\&\le U (1+B_n) E_{r}(f). \end{aligned}$$

\(\square \)

Remark 4

Since \(B_n \approx \mathcal {C} n^{2.03}\), taking into account (3), the convergence of the rule is assured for functions \(f\in W_3\). As one can deduce from (7), the “classical” product formula (4) converges under less restrictive assumptions on the function f, which is only required to be continuous in the integration interval. However, such a rule requires the samples of f at the zeros of the Chebyshev polynomial \(T_{r+1}\), and hence is not viable if one works with experimental data, usually given at equally spaced nodes. For instance, in evolution equations of nonlocal diffusion type, the data can be given at equally spaced points. To solve this kind of equation, in [26] the authors proposed a discretization procedure based on the application of the method of lines and on quadrature formulae over equally spaced points.

3.1 Implementation details

Now we provide some details about the effective computation of the coefficients of the product rule (25), for the following choices of kernels:

$$\begin{aligned} K_1(x,y)&= |x-y |^\lambda , \quad \lambda>-1, \ |y |< 1,\\ K_2(x,y)&= g(yx), \quad g(\cdot )=\sin (\cdot ) \, \text { or } \, g(\cdot )=\cos (\cdot ), \quad |y |\gg 1, \\ K_3(x,y)&= \frac{1}{(x^2+y^2)^\mu }, \quad 0< |y |\ll 1,\ \mu >0, \end{aligned}$$

and \(w(x)=(1-x)^\alpha (1+x)^\beta .\)

Let us focus on the case \(K_1(x,y)\) in order to compute the modified moments \(\left\{ M_{i}^{(K_1)}(y)\right\} _{i=0}^r\), where

$$\begin{aligned} M_{i}^{(K_1)}(y)=\int _{-1}^{1} p_{i}(w^C,x)|x-y |^\lambda w(x) \, dx. \end{aligned}$$
(31)

To this purpose we first split the integral as follows

$$\begin{aligned} M_{i}^{(K_1)}(y)= \int _{-1}^{y} (y-x)^\lambda p_i(w^C,x)w(x)\,dx+\int _{y}^{1}(x-y)^\lambda p_i(w^C,x)w(x)\,dx. \end{aligned}$$

Introducing the linear transformations

$$\begin{aligned} \varPhi _1(x):=2 \frac{1+x}{1+y}-1, \qquad \varPhi _2(x):=2 \frac{x-y}{1-y}-1, \end{aligned}$$

and setting \(z=\varPhi _j(x),\;j=1,2,\) we have

$$\begin{aligned} M_{i}^{(K_1)}(y)&= \left( \frac{1+y}{2} \right) ^{\lambda +\beta +1} \int _{-1}^{1} p_i(w^C,\varPhi _1^{-1}(z))(1-\varPhi _1^{-1}(z))^\alpha (1-z)^\lambda (1+z)^\beta \,dz\\&\quad + \left( \frac{1-y}{2} \right) ^{\lambda +\alpha +1} \int _{-1}^{1} p_i(w^C,\varPhi _2^{-1}(z))(1+\varPhi _2^{-1}(z))^\beta (1-z)^\alpha (1+z)^\lambda \,dz, \end{aligned}$$

where both the above integrals can be evaluated with high precision by means of \(r\)-point Gaussian rules w.r.t. the weights \((1-z)^\lambda (1+z)^\beta \) and \((1-z)^\alpha (1+z)^\lambda \) in the first and second integral, respectively. Note that the transformed integrands contain factors of the type \((1\pm \varPhi _j^{-1}(z))^\rho , \ \rho >-1\), which are analytic functions, so that the error of the Gaussian rule goes to zero geometrically.
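
This computation translates directly into code. The following sketch (a hypothetical routine of ours, relying on scipy.special.roots_jacobi for the Gauss–Jacobi nodes and weights) evaluates the moments (31) through the splitting and the maps \(\varPhi _1,\varPhi _2\):

```python
import numpy as np
from scipy.special import roots_jacobi

def moments_K1(y, lam, alpha, beta, r):
    """Modified moments (31) for K_1(x,y) = |x-y|^lam, |y| < 1."""
    def pC(i, x):
        # orthonormal Chebyshev polynomial p_i(w^C, x)
        c = np.sqrt(np.pi) if i == 0 else np.sqrt(np.pi / 2)
        return np.cos(i * np.arccos(np.clip(x, -1.0, 1.0))) / c
    z1, w1 = roots_jacobi(r, lam, beta)    # weight (1-z)^lam (1+z)^beta
    x1 = (1 + y) * (z1 + 1) / 2 - 1        # x = Phi_1^{-1}(z), in [-1, y]
    z2, w2 = roots_jacobi(r, alpha, lam)   # weight (1-z)^alpha (1+z)^lam
    x2 = (1 - y) * (z2 + 1) / 2 + y        # x = Phi_2^{-1}(z), in [y, 1]
    M = np.empty(r + 1)
    for i in range(r + 1):
        M[i] = (((1 + y) / 2) ** (lam + beta + 1)
                * np.sum(w1 * pC(i, x1) * (1 - x1) ** alpha)
                + ((1 - y) / 2) ** (lam + alpha + 1)
                * np.sum(w2 * pC(i, x2) * (1 + x2) ** beta))
    return M
```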

Let us consider the modified moments

$$\begin{aligned} M_{i}^{(K_2)}(y)=\int _{-1}^{1} p_{i}(w^C,x)g(yx)w(x) \, dx. \end{aligned}$$
(32)

In this case the main problem is the oscillation of the integrand. In order to mitigate this phenomenon, we propose to use a dilation technique introduced in [7, 30]. Setting \(z=yx\), we have

$$\begin{aligned} M_{i}^{(K_2)}(y)=\dfrac{1}{y} \int _{-y}^{y} p_{i} \left( w^C,\dfrac{z}{y} \right) g(z)w \left( \dfrac{z}{y} \right) \, dz. \end{aligned}$$

Moreover, we consider the following partition of the integration interval \([-y,y]\) into \(s:=\lfloor y \rfloor \) subintervals of size \(d:=\frac{2y}{s}\)

$$\begin{aligned}{}[-y,y]=\bigcup _{j=1}^{s} \, [-y+d(j-1),-y+dj]. \end{aligned}$$

Hence, the modified moments (32) take the following expression

$$\begin{aligned} M_{i}^{(K_2)}(y)=\dfrac{1}{y} \sum _{j=1}^{s} \int _{-y+d(j-1)}^{-y+dj} p_{i} \bigg ( w^C,\dfrac{z}{y} \bigg ) g(z)w\bigg ( \dfrac{z}{y} \bigg ) \, dz. \end{aligned}$$

By setting

$$\begin{aligned} \varphi _j(z):=\frac{2}{d} (z+y-d(j-1))-1, \quad j=1,\ldots ,s, \end{aligned}$$

and with the change of variable \(t=\varphi _j(z)\), we get

$$\begin{aligned} M_{i}^{(K_2)}(y)&= \dfrac{d}{2y} \bigg \{ c_1 \int _{-1}^{1} p_{i} \bigg ( w^C,\dfrac{\varphi _1^{-1}(t)}{y} \bigg ) g(\varphi _1^{-1}(t)) \bigg ( 1-\dfrac{\varphi _1^{-1}(t)}{y} \bigg )^{\alpha } (1+t)^\beta \, dt\\&\quad + \sum _{j=2}^{s-1} \int _{-1}^{1} p_{i} \bigg ( w^C,\dfrac{\varphi _j^{-1}(t)}{y} \bigg ) g(\varphi _j^{-1}(t))w\bigg ( \dfrac{\varphi _j^{-1}(t)}{y} \bigg ) \, dt\\&\quad + c_2 \int _{-1}^{1} p_{i} \bigg ( w^C,\dfrac{\varphi _s^{-1}(t)}{y} \bigg ) g(\varphi _s^{-1}(t))\bigg ( 1+\dfrac{\varphi _s^{-1}(t)}{y} \bigg )^{\beta } (1-t)^\alpha \, dt \bigg \}, \end{aligned}$$

where

$$\begin{aligned} c_1 = \bigg ( \dfrac{d}{2y} \bigg )^{\beta }, \qquad c_2 = \bigg ( \dfrac{d}{2y} \bigg )^{\alpha }. \end{aligned}$$

The above integrals are less oscillatory and can be evaluated with high precision by means of \(r\)-point Gaussian rules w.r.t. the weights \((1+t)^\beta \) and \((1-t)^\alpha \) in the first and last integral, respectively, and w.r.t. the Legendre weight in the others. Note that the transformed integrands contain factors of the type \(g(\varphi _j^{-1}(t))\), \(w\left( \frac{\varphi _j^{-1}(t)}{y} \right) \) and \(\left( 1\pm \frac{\varphi _j^{-1}(t)}{y}\right) ^\rho , \ \rho >-1\), which are analytic functions, so that the error of the Gaussian rule goes to zero geometrically.
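
A sketch of the corresponding moment computation (moments_K2 is a hypothetical routine of ours; for simplicity it assumes \(y\ge 2\), so that \(s\ge 2\)):

```python
import numpy as np
from scipy.special import roots_jacobi, roots_legendre

def moments_K2(y, g, alpha, beta, r):
    """Modified moments (32) for K_2(x,y) = g(yx) via dilation."""
    s = int(np.floor(y))                  # number of subintervals
    d = 2 * y / s                         # subinterval length
    def pC(i, x):
        c = np.sqrt(np.pi) if i == 0 else np.sqrt(np.pi / 2)
        return np.cos(i * np.arccos(np.clip(x, -1.0, 1.0))) / c
    phi_inv = lambda t, j: d * (t + 1) / 2 - y + d * (j - 1)
    tb, wb = roots_jacobi(r, 0.0, beta)   # weight (1+t)^beta, first piece
    ta, wa = roots_jacobi(r, alpha, 0.0)  # weight (1-t)^alpha, last piece
    tl, wl = roots_legendre(r)            # interior pieces
    M = np.empty(r + 1)
    for i in range(r + 1):
        zb, za = phi_inv(tb, 1), phi_inv(ta, s)
        total = ((d / (2 * y)) ** beta    # c_1
                 * np.sum(wb * pC(i, zb / y) * g(zb) * (1 - zb / y) ** alpha)
                 + (d / (2 * y)) ** alpha # c_2
                 * np.sum(wa * pC(i, za / y) * g(za) * (1 + za / y) ** beta))
        for j in range(2, s):             # j = 2, ..., s-1
            zj = phi_inv(tl, j)
            total += np.sum(wl * pC(i, zj / y) * g(zj)
                            * (1 - zj / y) ** alpha * (1 + zj / y) ** beta)
        M[i] = d / (2 * y) * total
    return M
```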

The same dilation technique has been applied to “nearly” singular kernels of the type \(K_3(x,y)\) [30, 31]. In this case, we consider the modified moments

$$\begin{aligned} M_{i}^{(K_3)}(y)=\int _{-1}^{1} \dfrac{p_{i}(w^C,x)}{(x^2+y^2)^\mu }w(x) \, dx, \end{aligned}$$
(33)

and we set \(z=\frac{x}{y}\), obtaining

$$\begin{aligned} M_{i}^{(K_3)}(y)=\dfrac{1}{y^{2\mu -1}} \int _{-1/y}^{1/y} \dfrac{p_{i}(w^C,yz)}{(z^2+1)^\mu }w(yz) \, dz. \end{aligned}$$
(34)

Also in this case we fix the parameter \(s:=\left\lfloor \frac{1}{y} \right\rfloor \), in order to have \(d \sim 2\), and divide the interval \(\left[ -\frac{1}{y},\frac{1}{y}\right] \) into s subintervals of size d. Hence, we get

$$\begin{aligned} \bigg [-\dfrac{1}{y},\dfrac{1}{y} \bigg ]=\bigcup _{j=1}^{s} \, \bigg [-\dfrac{1}{y}+d(j-1),-\dfrac{1}{y}+dj \bigg ], \end{aligned}$$

and

$$\begin{aligned} M_{i}^{(K_3)}(y)=\dfrac{1}{y^{2\mu -1}} \sum _{j=1}^{s} \int _{-1/y+d(j-1)}^{-1/y+dj} \dfrac{p_{i}(w^C,yz)}{(z^2+1)^\mu }w(yz) \, dz. \end{aligned}$$

Defining

$$\begin{aligned} \psi _j(z):=\frac{2}{d}\left( z+\frac{1}{y}-d(j-1)\right) -1, \quad j=1,\ldots ,s, \end{aligned}$$

and setting \(t=\psi _j(z)\) in each integral, we have

$$\begin{aligned} M_{i}^{(K_3)}(y)&= \dfrac{d}{2y^{2\mu -1}} \bigg \{ c_3 \int _{-1}^{1} \dfrac{p_{i}(w^C,y\psi _1^{-1}(t))\left( 2-\left( \frac{t+1}{2}\right) yd\right) ^\alpha }{((\psi _1^{-1}(t))^2+1)^\mu } (1+t)^\beta \, dt\\&\quad + \sum _{j=2}^{s-1} \int _{-1}^{1} \dfrac{p_{i}(w^C,y\psi _j^{-1}(t))w(y\psi _j^{-1}(t))}{((\psi _j^{-1}(t))^2+1)^\mu } \, dt\\&\quad + c_4 \int _{-1}^{1} \dfrac{p_{i}(w^C,y\psi _s^{-1}(t))\left( 2+\left( \frac{t-1}{2}\right) yd\right) ^\beta }{((\psi _s^{-1}(t))^2+1)^\mu } (1-t)^\alpha \, dt \bigg \}, \end{aligned}$$

with

$$\begin{aligned} c_3 = \bigg ( \dfrac{yd}{2} \bigg )^{\beta }, \qquad c_4 = \bigg ( \dfrac{yd}{2} \bigg )^{\alpha }. \end{aligned}$$

Since the poles of the integrand functions are far from the real axis, the above integrals can be evaluated by means of \(r\)-point Gaussian rules w.r.t. the weights \((1+t)^\beta \) and \((1-t)^\alpha \) in the first and last integral, respectively, and w.r.t. the Legendre weight in the others. Note that the transformed integrands contain factors of the type \(\frac{1}{((\psi _j^{-1}(t))^2+1)^\mu }\), \(w(y\psi _j^{-1}(t))\) and \((2\pm (\frac{t\mp 1}{2})yd)^\rho , \ \rho >-1\), which are analytic functions, so that the error of the Gaussian rule goes to zero geometrically.
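
The implementation mirrors moments_K2; the only changes are the substitution \(z=x/y\), the smooth factor \((z^2+1)^{-\mu }\) and the prefactor. A sketch (again with hypothetical names of ours):

```python
import numpy as np
from scipy.special import roots_jacobi, roots_legendre

def moments_K3(y, mu, alpha, beta, r):
    """Modified moments (33)-(34) for K_3(x,y) = (x^2+y^2)^(-mu)."""
    s = int(np.floor(1 / y))              # chosen so that d ~ 2
    d = 2 / (y * s)                       # subinterval length
    def pC(i, x):
        c = np.sqrt(np.pi) if i == 0 else np.sqrt(np.pi / 2)
        return np.cos(i * np.arccos(np.clip(x, -1.0, 1.0))) / c
    psi_inv = lambda t, j: d * (t + 1) / 2 - 1 / y + d * (j - 1)
    ker = lambda z: 1.0 / (z ** 2 + 1) ** mu   # smooth after z = x/y
    tb, wb = roots_jacobi(r, 0.0, beta)
    ta, wa = roots_jacobi(r, alpha, 0.0)
    tl, wl = roots_legendre(r)
    M = np.empty(r + 1)
    for i in range(r + 1):
        zb, za = psi_inv(tb, 1), psi_inv(ta, s)
        total = ((y * d / 2) ** beta      # c_3
                 * np.sum(wb * pC(i, y * zb) * ker(zb) * (1 - y * zb) ** alpha)
                 + (y * d / 2) ** alpha   # c_4
                 * np.sum(wa * pC(i, y * za) * ker(za) * (1 + y * za) ** beta))
        for j in range(2, s):
            zj = psi_inv(tl, j)
            total += np.sum(wl * pC(i, y * zj) * ker(zj)
                            * (1 - y * zj) ** alpha * (1 + y * zj) ** beta)
        M[i] = d / (2 * y ** (2 * mu - 1)) * total
    return M
```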

Remark 5

The computation of the modified moments for the considered kernels is carried out by means of Gaussian rules. In doing so we introduce an error of the order of the machine precision, a choice which does not affect the performance of our rule (25). For other choices of kernels the modified moments can be evaluated exactly by means of recurrence relations, which however can become progressively unstable as r increases.

4 Numerical experiments

In this section we present some numerical experiments to analyze the performance of the constrained mock-Chebyshev product rule and compare it with other procedures. More precisely, we perform a direct comparison between the constrained mock-Chebyshev product rule (25) and the classical product formula (4). To this purpose, in each example we consider the following functions

$$\begin{aligned} f_1(x)=\frac{1}{1+8x^2}, \quad f_2(x)=\sin (x), \quad f_3(x)=\log (x+3), \quad f_4(x)=e^{x}, \end{aligned}$$

and we focus on a particular kernel for different choices of \(y\in S\).

Tables 1, 2, 3 display the absolute errors

$$\begin{aligned} |\hat{e}_{r,n}(f_i,y)|&=|I(f_i,y)-\varSigma _{r,n}(f_i,y)|, \quad y \in S, \quad i=1,2,3,4, \\ |e_{m}(f_i,y) |&=|I(f_i,y)-\varSigma _m(f_i,y)|, \quad y \in S, \quad i=1,2,3,4, \end{aligned}$$

where the constrained mock-Chebyshev product rule is based on a grid of 1001 uniformly distributed nodes in the interval \([-1,1]\). In this setting, we have \(n=1000,\,r=98,\,m=70\). All the tests are carried out in Matlab R2022a on a MacBook Pro under the MacOS operating system, and in each table we report the CPU time required to evaluate the considered integral at four different values of y. Moreover, since the exact value of the integrals is not known, we assume as exact the values provided in quadruple working precision by the built-in function NIntegrate of the Wolfram Mathematica 13 software.

Table 1 Numerical results for integrals \(I(f_i,y)\) by means of the constrained mock-Chebyshev product rule (top) and the classical product rule (bottom) with kernel function \(K(x,y)=|x-y |^{\frac{3}{10}}\) and weight function \(w(x)=w^C(x)\)
Table 2 Numerical results for integrals \(I(f_i,y)\) by means of the constrained mock-Chebyshev product rule (top) and the classical product rule (bottom) with kernel function \(K(x,y)=\frac{1}{(x^2+y^2)^2}\) and weight function \(w(x)=w^C(x)\)

In our numerical tests we considered the following integrals:

Example 1

$$\begin{aligned} I(f_i,y)=\int _{-1}^{1} \dfrac{f_i(x)|x-y |^{\frac{3}{10}}}{\sqrt{1-x^2}}\,dx, \quad y \in (-1,1), \quad i=1,2,3,4, \end{aligned}$$

in which \(K(x,y)=|x-y |^{\frac{3}{10}}\) is a weakly singular kernel function and \(w(x)=w^C(x)\).
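
As a usage illustration, an end-to-end computation of \(I(f_1,y)\) with the sketch routines introduced above (all names are hypothetical; the parameters are chosen for illustration, not to reproduce Table 1) would read:

```python
import numpy as np

# Example 1: f_1, K_1 with lambda = 3/10, w = w^C (alpha = beta = -1/2)
n, y, lam = 1000, 0.5, 0.3
f1 = lambda x: 1.0 / (1.0 + 8.0 * x ** 2)

xi, idx, m, p = mock_chebyshev_subset(n)     # node selection (Sect. 2.3)
r = m + p + 1
a = cmcls_coefficients(xi, idx, f1(xi), r)   # KKT system (10)
M = moments_K1(y, lam, -0.5, -0.5, r)        # modified moments (31)
print(product_rule(a, M))                    # quadrature sum (25)
```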

Example 2

$$\begin{aligned} I(f_i,y)=\int _{-1}^{1} \dfrac{f_i(x)}{\sqrt{1-x^2} (x^2+y^2)^{2}}\,dx, \quad y \in (-1,1), \quad i=1,2,3,4, \end{aligned}$$

involving a nearly singular kernel function \(K(x,y)=\frac{1}{(x^2+y^2)^2}\) and \(w(x)=w^C(x).\)

Example 3

$$\begin{aligned} I(f_i,y)=\int _{-1}^{1} \dfrac{f_i(x)\sin (yx)}{\sqrt{1-x^2}}\,dx, \quad y \gg 1, \quad i=1,2,3,4, \end{aligned}$$

with the highly oscillating kernel \(K(x,y)=\sin (yx)\) and \(w(x)=w^C(x)\).

Example 4

$$\begin{aligned} I(f,y)=\int _{-1}^{1} \frac{\sqrt{1-x^2}\cos (yx)}{1+25x^2}\,dx, \quad y \gg 1, \end{aligned}$$

where \(f(x)=\frac{1}{1+25x^2}\) is the Runge function, \(K(x,y)=\cos (yx)\) is a highly oscillating kernel and \(w(x)=\sqrt{1-x^2}\) is the 2-nd kind Chebyshev weight.

Table 3 Numerical results for integrals \(I(f_i,y)\) by means of the constrained mock-Chebyshev product rule (top) and the classical product rule (bottom) with kernel function \(K(x,y)=\sin (yx)\) and weight function \(w(x)=w^C(x)\)
Table 4 Numerical results for integral I(fy) by means of the constrained mock-Chebyshev product rule based on different equispaced grids with \(f(x)=\frac{1}{1+25x^2}\), \(K(x,y)=\cos (yx)\) and \(w(x)=\sqrt{1-x^2}\)

The results in Tables 1, 2, 3 highlight that the product rules (25) and (4) have comparable performances. Nevertheless, the constrained mock-Chebyshev product formula (25) is based on equally spaced data, and this makes it the adequate tool in many practical applications, differently from the classical product rule (4), which requires that the function f is known analytically, or at least at the zeros of the orthonormal polynomials w.r.t. the weight function w.

In Example 4 we test the performance of the constrained mock-Chebyshev rule for different choices of equispaced grids in the integration interval \([-1,1]\). In this context, Table 4 reports the relative errors

$$\begin{aligned} \hat{e}_{r,n}^{rel}(f,y):=\dfrac{|I(f,y)-\varSigma _{r,n}(f,y)|}{|I(f,y) |}, \quad y \in S, \end{aligned}$$

attained by the rule (25) for increasing values of \(n,\,m\) and p, namely for gradually denser equispaced grids of \(n+1\) points in \([-1,1]\), and for different values of y.

Fig. 1

Benchmark analysis of the relative errors attained approximating the integral I(fy) by means of the constrained mock-Chebyshev product rule based on different equispaced grids with \(f(x)=\frac{1}{1+25x^2}\) and \(K(x,y)=\cos (yx)\)

Table 4 and Fig. 1 show that the denser the equispaced grid is, the more accurate the constrained mock-Chebyshev rule (25) becomes. However, we also observe a slight loss of accuracy for increasing values of y. This can be attributed to the high oscillations of the kernel \(\cos (yx)\) and to the Runge phenomenon that occurs when using high-degree polynomial interpolation on a set of equispaced points.

5 Conclusions and future works

In this paper we proposed a new product rule, based on equispaced points, through the constrained mock-Chebyshev least squares operator. Moreover, we provided an error estimate and several numerical examples which confirm the accuracy of the proposed formula. The numerical results show that the rule (27) on \(n+1\) equispaced nodes achieves the same accuracy as the product rule (4) on \(m=\left\lfloor \pi \sqrt{\frac{n}{2}} \right\rfloor \) nodes. Based on these satisfactory results, it is our aim to extend this formula to the case of unbounded intervals and/or bivariate domains.