Product integration rules by the constrained mock-Chebyshev least squares operator

In this paper we consider the problem of approximating definite integrals on finite intervals for integrand functions showing some kind of "pathological" behavior, e.g. "nearly" singular functions, highly oscillating functions, or weakly singular functions. In particular, we introduce and study a product rule based on equally spaced nodes and on the constrained mock-Chebyshev least squares operator. Like other polynomial or rational approximation methods, this operator was recently introduced in order to defeat the Runge phenomenon that occurs when using polynomial interpolation on large sets of equally spaced points. Unlike methods based on piecewise approximants, which are mainly used in the case of equally spaced nodes, our product rule offers high efficiency, with performance only slightly below that of global methods based on orthogonal polynomials in the same function spaces. We study the convergence of the product rule and provide error estimates in subspaces of continuous functions. We test the effectiveness of the formula by means of several examples, which confirm the theoretical estimates.


Introduction
The paper deals with the numerical computation of integrals of the type
$$I(f, y) = \int_{-1}^{1} f(x)\, K(x, y)\, w(x)\, dx, \qquad y \in S, \qquad (1)$$
where $K(x, y)$ is a kernel depending on a parameter $y$ ranging in a set $S$, $w$ is a Jacobi-type weight and the integrand $f$ may show some "pathological" behavior. We introduce a product rule based on the constrained mock-Chebyshev least squares operator, with an estimate of the error in Sobolev-type subspaces, and provide some implementation details for a selection of kernels. In Sect. 4 some numerical tests are given, together with comparisons with the results achieved by the product rule on Chebyshev zeros.

Preliminaries
From now on, $C$ will denote a positive constant which may assume different values at different occurrences, and the writing $C \neq C(a, b, \dots)$ has to be understood as $C$ not depending on $a, b, \dots$. If $A, B > 0$ are quantities depending on some parameters, we write $A \sim B$ if there exists a constant $C \neq C(A, B)$ such that $C^{-1} B \le A \le C B$. Furthermore, we denote by $\Pi_m$ the space of the algebraic polynomials of degree at most $m$. Finally, for any bivariate function $h(x, y)$, we denote by $h_y(x)$ and $h_x(y)$ the functions of one variable obtained by fixing the second and the first variable, respectively.

Function spaces and orthogonal basis
For any $f \in C^0([-1, 1])$,
$$E_m(f) := \inf_{P \in \Pi_m} \|f - P\|_\infty \qquad (2)$$
is the error of best polynomial approximation of $f$ in the uniform norm. As is well known, by the Weierstrass Theorem [6], $\lim_{m \to \infty} E_m(f) = 0$. By denoting with $AC(-1, 1)$ the space of functions defined in $[-1, 1]$ which are absolutely continuous on every closed subset of $(-1, 1)$ and by setting $\varphi(x) = \sqrt{1 - x^2}$, let
$$W_s := \left\{ f \in C^0([-1, 1]) \,:\, f^{(s-1)} \in AC(-1, 1), \ \|f^{(s)} \varphi^s\|_\infty < \infty \right\}$$
be the Sobolev space of order $s \in \mathbb{N}$, $s \ge 1$, endowed with the norm
$$\|f\|_{W_s} := \|f\|_\infty + \|f^{(s)} \varphi^s\|_\infty.$$
To estimate the error of best polynomial approximation, we recall the Favard inequality [14], which holds for each $f \in W_s$:
$$E_m(f) \le \frac{C}{m^s}\, \|f^{(s)} \varphi^s\|_\infty, \qquad C \neq C(m, f). \qquad (3)$$
We also set $T_i(x) := \cos(i \arccos x)$, $i \in \mathbb{N}_0$, and we denote by $\{p_i(w_C)\}_{i \in \mathbb{N}_0}$ the sequence of orthonormal polynomials w.r.t. the 1-st kind Chebyshev weight $w_C(x) = \frac{1}{\sqrt{1 - x^2}}$, namely
$$p_0(w_C, x) = \frac{1}{\sqrt{\pi}}, \qquad p_i(w_C, x) = \sqrt{\frac{2}{\pi}}\, T_i(x), \quad i \ge 1.$$

A product integration rule on the zeros of 1-st kind Chebyshev polynomials
Let $\{z_1, \dots, z_{r+1}\}$ be the zeros of the $(r+1)$-th orthonormal polynomial $p_{r+1}(w_C, \cdot)$ with respect to the 1-st kind Chebyshev weight $w_C$, and let $L_r(w_C, f)$ denote the Lagrange polynomial interpolating $f$ at these zeros. It can be represented as (see e.g. [22], Chapter 4)
$$L_r(w_C, f; x) = \sum_{k=1}^{r+1} l_k(x)\, f(z_k), \qquad l_k(x) = \frac{p_{r+1}(w_C, x)}{p'_{r+1}(w_C, z_k)\,(x - z_k)}.$$
A class of interpolating product rules is based on the approximation of the function $f$ in (1) by $L_r(w_C, f)$ (see e.g. [22] and the references therein), i.e.
$$I(f, y) = \sum_{k=1}^{r+1} A_k(y)\, f(z_k) + e_r(f, y), \qquad (4)$$
where
$$A_k(y) = \int_{-1}^{1} l_k(x)\, K(x, y)\, w(x)\, dx$$
and $e_r(f, y)$ denotes the quadrature error. The rule is exact for polynomials of degree $r$, i.e. $e_r(f, y) = 0$ for every $f \in \Pi_r$.
As is well known, the accuracy of the product rule relies on the "exact" evaluation of the modified moments
$$M_k(y) = \int_{-1}^{1} p_k(w_C, x)\, K(x, y)\, w(x)\, dx, \qquad k = 0, \dots, r. \qquad (5)$$
Depending on the kernel $K$ and on the weight function $w$, a standard computation of $M_k(y)$, $k = 0, \dots, r$, can be obtained by recurrence relations (see, e.g., [32]). Besides this approach, other strategies can be adopted, as long as the modified moments are computed with high accuracy. About the error estimate, from a more general result by Nevai [27], the following theorem holds.

Theorem 1  For every $f \in C^0([-1, 1])$ and for any $y \in S$, the estimate
$$|e_r(f, y)| \le C\, E_r(f), \qquad C \neq C(r, f), \qquad (7)$$
holds true.
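As a concrete illustration of how a rule of the form (4) can be assembled in practice, the following Python sketch (not part of the paper; the quadrature-based moment computation and all function names are illustrative assumptions) expands the fundamental Lagrange polynomials through the Christoffel–Darboux kernel, so that the coefficients $A_k(y)$ are obtained from the modified moments (5):

```python
import numpy as np
from scipy.integrate import quad

def chebyshev_product_rule(f, K, y, r):
    """Sketch of a product rule of the form (4) on the zeros of T_{r+1}, w = w_C.

    The moments are computed here by adaptive quadrature for simplicity;
    recurrence relations or tailored Gaussian rules are preferable in practice.
    """
    k = np.arange(1, r + 2)
    z = np.cos((2 * k - 1) * np.pi / (2 * (r + 1)))   # zeros of T_{r+1}
    lam = np.pi / (r + 1)                             # Christoffel numbers for w_C

    def p(j, x):
        # orthonormal Chebyshev polynomials p_j(w_C, x)
        c = 1.0 / np.sqrt(np.pi) if j == 0 else np.sqrt(2.0 / np.pi)
        return c * np.cos(j * np.arccos(np.clip(x, -1.0, 1.0)))

    # modified moments M_j(y) = int_{-1}^{1} p_j(w_C, x) K(x, y) w_C(x) dx
    M = np.array([quad(lambda x: p(j, x) * K(x, y), -1, 1,
                       weight='alg', wvar=(-0.5, -0.5))[0] for j in range(r + 1)])

    # A_k(y) = lam * sum_j p_j(w_C, z_k) M_j(y), from l_k(x) = lam * sum_j p_j(z_k) p_j(x)
    P = np.array([[p(j, zk) for j in range(r + 1)] for zk in z])
    A = lam * (P @ M)
    return float(np.dot(A, f(z)))

# usage: weakly singular kernel |x - y|^{3/10}, f(x) = exp(x), y = 0.4
value = chebyshev_product_rule(np.exp, lambda x, y: np.abs(x - y) ** 0.3, 0.4, 30)
```

In practice, as recalled above, the moments would be computed by recurrence relations or by tailored Gaussian rules rather than by a generic adaptive routine.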
The constrained mock-Chebyshev least squares operator

Let $X_n = \left\{ x_i = -1 + \frac{2i}{n} : i = 0, \dots, n \right\}$ be the set of $n + 1$ equally spaced nodes of $[-1, 1]$, and let the function $f$ be known only at the points of $X_n$. The main idea of the constrained mock-Chebyshev least squares method [9,13] is to construct an interpolant of $f$ on a proper subset of $X_n$, formed by $m + 1$ nodes chosen as "closest" to the Chebyshev–Lobatto nodes, and to use the remaining $n - m$ points of $X_n$ to improve the accuracy of the approximation by a process of simultaneous regression of degree $p \ge 0$. To be more precise, let $m = \lfloor \pi \sqrt{n/2} \rfloor$ and denote by $X_m^{CL}$ the set of Chebyshev–Lobatto nodes of order $m + 1$,
$$X_m^{CL} = \left\{ x_i^{CL} = -\cos\left(\frac{i\pi}{m}\right) : i = 0, \dots, m \right\}.$$
For any $i = 0, \dots, m$, let us define the node $\xi_i \in X_n$ as the node of $X_n$ closest to $x_i^{CL}$; then $X_m := \{\xi_i : i = 0, \dots, m\} \subset X_n$ is the mock-Chebyshev set of order $m$ related to $X_n$ [2,19,20]. Although $X_{n-m} := X_n \setminus X_m$ is not an equispaced grid, in [9] it is proven that, for $n$ sufficiently large, it is possible to approximate an equispaced grid of $q = \lfloor n/6 \rfloor$ internal nodes of $[-1, 1]$ with nodes of $X_{n-m}$. We denote such a grid of $q$ elements by $\hat X_{n-m}$. The degree $p = \lfloor \pi \sqrt{n/12} \rfloor$ of the least squares polynomial is selected so that there is a subset of cardinality $p + 1$ of this equispaced set which is close, in the mock-Chebyshev sense, to the grid of $p + 1$ Chebyshev nodes. Set $r = m + p + 1$ and let $B_r = \{u_0(x), \dots, u_r(x)\}$ be a basis of $\Pi_r$. The constrained mock-Chebyshev least squares interpolant $\hat P_{r,n}(f) \in \Pi_r$ is
$$\hat P_{r,n}(f; x) = \sum_{j=0}^{r} a_j\, u_j(x),$$
where the vector $a = [a_0, a_1, \dots, a_r]^T$ is computed by solving the KKT linear system [3,12]
$$\begin{bmatrix} 2 V^T V & C^T \\ C & O \end{bmatrix} \begin{bmatrix} a \\ z \end{bmatrix} = \begin{bmatrix} 2 V^T \hat f \\ f_m \end{bmatrix}, \qquad (11)$$
with $V$ the Vandermonde-type matrix of the basis $B_r$ at the nodes used for the regression, $C$ the Vandermonde-type matrix of $B_r$ at the nodes of $X_m$, $\hat f$ the vector of the corresponding values of $f$ at the regression nodes, $f_m = [f(\xi_0), \dots, f(\xi_m)]^T$ and $z = [\hat z_1, \dots, \hat z_{m+1}]^T$ the vector of the Lagrange multipliers. In defining $V$ and $C$ in (11) it is assumed that the nodes $\xi_i$ have been suitably reordered and that the polynomials $u_0, \dots, u_m$ span $\Pi_m$. In the following we denote by $M$ the KKT matrix in (11) and by $\kappa(M) = \|M\|_1 \|M^{-1}\|_1$ its condition number in the $l_1$-norm.
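To make the construction above more tangible, here is a minimal Python sketch of the operator (not taken from the paper: the node counts $m = \lfloor \pi\sqrt{n/2}\rfloor$ and $p = \lfloor \pi\sqrt{n/12}\rfloor$ follow our reading of the text, the regression is performed on all the remaining nodes rather than on the subgrid $\hat X_{n-m}$, and the routine name is invented). It selects the mock-Chebyshev subset and solves the constrained least squares problem through a KKT system of the form (11), using the Chebyshev basis:

```python
import numpy as np

def constrained_mock_chebyshev_fit(x_nodes, f_vals, m, p):
    """Sketch: interpolate on the mock-Chebyshev subset of the equispaced grid and
    regress (degree r = m + p + 1) on the remaining nodes via a KKT system."""
    n = len(x_nodes) - 1
    r = m + p + 1

    # mock-Chebyshev subset: for each Chebyshev-Lobatto node take the closest grid node
    cl = -np.cos(np.arange(m + 1) * np.pi / m)
    idx = np.unique([np.argmin(np.abs(x_nodes - c)) for c in cl])
    mask = np.ones(n + 1, dtype=bool)
    mask[idx] = False
    xi, fi = x_nodes[idx], f_vals[idx]      # interpolation (constraint) data
    xr, fr = x_nodes[mask], f_vals[mask]    # regression data

    def cheb_vandermonde(x):
        # columns T_0(x), ..., T_r(x)
        return np.cos(np.outer(np.arccos(np.clip(x, -1.0, 1.0)), np.arange(r + 1)))

    V, C = cheb_vandermonde(xr), cheb_vandermonde(xi)

    # KKT system for: minimize ||V a - fr||_2^2  subject to  C a = fi
    nc = C.shape[0]
    M = np.block([[2 * V.T @ V, C.T], [C, np.zeros((nc, nc))]])
    rhs = np.concatenate([2 * V.T @ fr, fi])
    a = np.linalg.solve(M, rhs)[:r + 1]     # Chebyshev coefficients of the fit
    return a

# usage: Runge function sampled at 1001 equispaced nodes
x = np.linspace(-1, 1, 1001)
f = 1.0 / (1.0 + 25.0 * x ** 2)
m, p = int(np.pi * np.sqrt(1000 / 2)), int(np.pi * np.sqrt(1000 / 12))
a = constrained_mock_chebyshev_fit(x, f, m, p)
```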

Remark 1
We note that the approximant $\hat P_{r,n}(f)$ is uniquely determined by the evaluations of the function $f$ at the set of equispaced nodes $X_n$. Consequently,
$$\hat P_{r,n}(f) = \hat P_{r,n}(L_n(f)), \qquad (13)$$
where
$$L_n(f; x) = \sum_{i=0}^{n} \ell_{n,i}(x)\, f(x_i) \qquad (14)$$
is the Lagrange polynomial interpolating $f$ at the nodes of $X_n$ and $\ell_{n,i}$, $i = 0, \dots, n$, are the corresponding fundamental Lagrange polynomials.
The constrained mock-Chebyshev least squares operator reproduces polynomials of degree $\le r$ (cf. [9]) and interpolates the function $f$ at the mock-Chebyshev subset of nodes, that is
$$\hat P_{r,n}(P) = P \quad \forall\, P \in \Pi_r, \qquad \hat P_{r,n}(f; \xi_i) = f(\xi_i), \quad i = 0, \dots, m.$$
Denoting by $\hat R_{r,n}(f) := f - \hat P_{r,n}(f)$ the approximation error of the constrained mock-Chebyshev least squares interpolant, and denoting by $B_n$ a constant depending on $D := \max_{j=0,\dots,r} \|u_j\|_\infty$ and on $\kappa(M)$, the following theorem holds [13].

Theorem 2  Let $f \in C^0([-1, 1])$. Then
$$\|\hat R_{r,n}(f)\|_\infty \le C\, (1 + B_n)\, E_r(f),$$
where $E_r(f)$, introduced in (2), is the error of best uniform approximation of $f$ by polynomials of $\Pi_r$.
Corollary 1  Let $f \in C^0([-1, 1])$. Then
$$\|\hat R_{r,n}(f)\|_\infty \le C\, (1 + B_n)\, \omega_f\!\left(\frac{1}{r}\right),$$
where $\omega_f(\cdot)$ is the modulus of continuity of the function $f$ (cf. [5]).

Proof
The proof follows by combining Theorem 2 and the Jackson theorem (see for instance [5, Ch. 4]).
In what follows we choose the basis $B_r$ as the Chebyshev basis
$$B_r = \{T_0(x), T_1(x), \dots, T_r(x)\},$$
and hence the constrained mock-Chebyshev least squares polynomial takes the form
$$\hat P^C_{r,n}(f; x) = \sum_{j=0}^{r} a_j\, T_j(x). \qquad (21)$$
Moreover, the properties of this choice of basis have been investigated in [13].
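For example (a small usage sketch, with a made-up coefficient vector), the expansion (21) can be evaluated directly with NumPy's Chebyshev module:

```python
import numpy as np

# hypothetical coefficients a_0, ..., a_r of the expansion (21)
a = np.array([1.0, -0.5, 0.25, -0.125])
xx = np.linspace(-1, 1, 7)
vals = np.polynomial.chebyshev.chebval(xx, a)   # evaluates sum_j a_j T_j(x) at xx
```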

The main result
From now on we assume that the function $f$ is known only on the set of equispaced nodes $X_n$. As announced in the introduction, the product integration rule we are going to introduce is based on the approximation of the function $f$ by $\hat P^C_{r,n}(f)$, expressed in the Chebyshev polynomial basis as in (21), and on the "exact" evaluation of the coefficients of the quadrature method. Indeed, by (1) we get
$$I(f, y) = \int_{-1}^{1} \hat P^C_{r,n}(f; x)\, K(x, y)\, w(x)\, dx + e_{r,n}(f, y) =: \Sigma_{r,n}(f, y) + e_{r,n}(f, y), \qquad (23)$$
where the integrals of the Chebyshev basis elements against $K(\cdot, y)\, w$ appearing in the quadrature sum
$$\Sigma_{r,n}(f, y) = \sum_{j=0}^{r} a_j \int_{-1}^{1} T_j(x)\, K(x, y)\, w(x)\, dx \qquad (25)$$
are, up to normalization constants, the modified moments defined in (5), and
$$e_{r,n}(f, y) := I(f, y) - \Sigma_{r,n}(f, y) \qquad (26)$$
is the quadrature error.

Theorem 3
The quadrature sum $\Sigma_{r,n}(f, y)$ in (25) takes the following expression:
$$\Sigma_{r,n}(f, y) = \sum_{i=0}^{n} f(x_i)\, D_i(y), \qquad (27)$$
with
$$D_i(y) = \int_{-1}^{1} \hat P^C_{r,n}(\ell_{n,i}; x)\, K(x, y)\, w(x)\, dx, \qquad i = 0, \dots, n. \qquad (28)$$

Proof  By the property (13) we get
$$\hat P^C_{r,n}(f; x) = \hat P^C_{r,n}(L_n(f); x) = \sum_{i=0}^{n} f(x_i)\, \hat P^C_{r,n}(\ell_{n,i}; x), \qquad (29)$$
where $L_n(f)$ is defined as in (14). By substituting (29) in (23), we obtain (27) with the weights (28).

Formulae (27) and (28) show that the weights of the quadrature rule $\Sigma_{r,n}(f, y)$ depend on $y$ and therefore, as $y$ changes, they must be recomputed. Note that the dependence on $y$ is typical of product integration rules, which, in exchange for fast convergence, require a greater computational effort. The same happens in the case of product rules based on the zeros of orthogonal polynomials.
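As a minimal sketch of how the rule could be evaluated in practice (assuming the routine constrained_mock_chebyshev_fit from the earlier sketch, and computing the moments of the Chebyshev basis by generic adaptive quadrature, which may differ from the paper's normalization in (5) by constant factors), one can exploit the coefficient form (25), $\Sigma_{r,n}(f, y) = \sum_{j=0}^{r} a_j \int_{-1}^{1} T_j(x) K(x, y) w_C(x)\, dx$, instead of assembling the weights (28) explicitly:

```python
import numpy as np
from scipy.integrate import quad

def mock_chebyshev_product_rule(f_vals, x_nodes, K, y, m, p):
    """Sketch of the rule (25): integrate the constrained mock-Chebyshev least
    squares polynomial of f against K(., y) w_C via moments of the Chebyshev basis."""
    a = constrained_mock_chebyshev_fit(x_nodes, f_vals, m, p)  # coefficients of (21)
    T = lambda j, x: np.cos(j * np.arccos(np.clip(x, -1.0, 1.0)))
    M = np.array([quad(lambda x: T(j, x) * K(x, y), -1, 1,
                       weight='alg', wvar=(-0.5, -0.5))[0] for j in range(len(a))])
    return float(np.dot(a, M))

# usage: a modest grid keeps the illustrative moment quadratures cheap
x = np.linspace(-1, 1, 101)
f = 1.0 / (1.0 + 25.0 * x ** 2)
m, p = int(np.pi * np.sqrt(100 / 2)), int(np.pi * np.sqrt(100 / 12))
I_approx = mock_chebyshev_product_rule(f, x, lambda x, y: np.abs(x - y) ** 0.3, 0.4, m, p)
```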

Remark 3
When $K(x, y) \equiv 1$ the quadrature rule (27) with weights (28) reduces to the stable quadrature rule on $n + 1$ equispaced nodes with degree of exactness $r$ already introduced in [11]. As is well known, this formula is based on the approximation of the function $f$ by $\hat P^C_{r,n}(f)$, expressed in the Chebyshev polynomial basis as in (21), and on the exact evaluation of the coefficients of the quadrature method by a Gauss–Christoffel quadrature formula of order $m$ [17, Ch. 3]. It is worth noting that the use of Clenshaw–Curtis quadrature rules with algebraic degree of precision equal to $r$, for which fast algorithms for computing the weights are well known [34,35], would produce another kind of quadrature formula on equispaced nodes, which is worthy of investigation.
We observe that the construction of the proposed quadrature rule requires the same modified moments as the product rule (4), i.e. it involves the same computational effort. However, differently from formula (4), the new rule (25) has the main advantage of using samples of $f$ at equally spaced nodes. About the convergence of the product rule, we are able to prove the following theorem.

Theorem 4  Under the assumption
$$\sup_{y \in S} \int_{-1}^{1} |K(x, y)|\, w(x)\, dx < \infty, \qquad (30)$$
for any $y \in S$ the following error estimate holds true:
$$|e_{r,n}(f, y)| \le C\, (1 + B_n)\, E_r(f), \qquad C \neq C(n, f).$$

Proof  From Theorem 2, in view of (26) and under the assumption (30), we have
$$|e_{r,n}(f, y)| = \left| \int_{-1}^{1} \big( f(x) - \hat P^C_{r,n}(f; x) \big)\, K(x, y)\, w(x)\, dx \right| \le \|\hat R_{r,n}(f)\|_\infty \int_{-1}^{1} |K(x, y)|\, w(x)\, dx \le C\, (1 + B_n)\, E_r(f).$$

Remark 4
Since $B_n \approx C\, n^{2.03}$, taking into account (3), the convergence of the rule is assured for functions $f \in W_3$. As one can deduce from (7), the "classical" product formula (4) converges under less restrictive assumptions on the function $f$, which is required to be only continuous in the integration interval. However, such a rule requires the samples of $f$ at the zeros of the Chebyshev polynomial $T_{r+1}$, and hence it is not reliable if one works on experimental data, usually obtained at equally spaced nodes. For instance, in evolution equations of nonlocal diffusion type, the data can be given at equally spaced points. To solve such a kind of equation, in [26] the authors proposed a discretization procedure based on the application of the method of lines and on quadrature formulae over equally spaced points.

Implementation details
Now we provide some details about the effective computation of the coefficients of the product rule (25), for the following choices of kernels:
$$K_1(x, y) = |x - y|^{\lambda}, \ \lambda > -1, \qquad K_2(x, y) = g(xy), \ g(\cdot) = \sin(\cdot) \ \text{or} \ g(\cdot) = \cos(\cdot), \ |y| \gg 1, \qquad K_3(x, y) = \frac{1}{(x^2 + y^2)^2}, \ |y| \ll 1,$$
with $w(x) = (1 - x)^{\alpha} (1 + x)^{\beta}$, $\alpha, \beta > -1$.

Let us focus on the case $K_1(x, y)$, in order to compute the modified moments
$$M_k(y) = \int_{-1}^{1} p_k(w_C, x)\, |x - y|^{\lambda}\, w(x)\, dx.$$
To this purpose we first split the integral as follows:
$$M_k(y) = \int_{-1}^{y} p_k(w_C, x)\, (y - x)^{\lambda}\, w(x)\, dx + \int_{y}^{1} p_k(w_C, x)\, (x - y)^{\lambda}\, w(x)\, dx.$$
Introducing the linear transformations
$$\Phi_1(x) = \frac{2x - y + 1}{y + 1}, \qquad \Phi_2(x) = \frac{2x - y - 1}{1 - y},$$
which map $[-1, y]$ and $[y, 1]$ onto $[-1, 1]$, respectively, and setting $z = \Phi_j(x)$, $j = 1, 2$, we have
$$M_k(y) = \left(\frac{y + 1}{2}\right)^{\lambda + \beta + 1} \int_{-1}^{1} p_k\big(w_C, \Phi_1^{-1}(z)\big)\, \big(1 - \Phi_1^{-1}(z)\big)^{\alpha}\, (1 - z)^{\lambda} (1 + z)^{\beta}\, dz + \left(\frac{1 - y}{2}\right)^{\lambda + \alpha + 1} \int_{-1}^{1} p_k\big(w_C, \Phi_2^{-1}(z)\big)\, \big(1 + \Phi_2^{-1}(z)\big)^{\beta}\, (1 - z)^{\alpha} (1 + z)^{\lambda}\, dz,$$
where both the above integrals can be evaluated with high precision by means of $r$-point Gaussian rules w.r.t. the weights $(1 - z)^{\lambda} (1 + z)^{\beta}$ and $(1 - z)^{\alpha} (1 + z)^{\lambda}$ in the first and second integral, respectively (an illustrative sketch of this splitting strategy is given at the end of this section). Note that the transformed integrands contain factors of the type $\big(1 \pm \Phi_j^{-1}(z)\big)^{\rho}$, $\rho > -1$, which are analytic functions, so that the error of the Gaussian rule goes to zero geometrically.

Let us now consider the modified moments corresponding to $K_2$,
$$M_k(y) = \int_{-1}^{1} p_k(w_C, x)\, g(xy)\, w(x)\, dx. \qquad (32)$$
In this case the main problem is the oscillation of the integrand. In order to mitigate this phenomenon, we propose to use a dilation technique introduced in [7,30]. Setting $z = yx$ and assuming, for simplicity, $y > 0$, we have
$$M_k(y) = \frac{1}{y} \int_{-y}^{y} p_k\!\left(w_C, \frac{z}{y}\right) g(z)\, w\!\left(\frac{z}{y}\right) dz.$$
Hence, fixing $s := \lceil y \rceil$ so that $d := \frac{2y}{s} \sim 2$, and dividing the interval $[-y, y]$ into $s$ subintervals of size $d$, the modified moments (32) take the following expression:
$$M_k(y) = \frac{1}{y} \sum_{j=1}^{s} \int_{-y + (j-1)d}^{-y + jd} p_k\!\left(w_C, \frac{z}{y}\right) g(z)\, w\!\left(\frac{z}{y}\right) dz.$$
By setting
$$\varphi_j(z) = \frac{2z + 2y - (2j - 1)d}{d}, \qquad j = 1, \dots, s,$$
and with the change of variable $t = \varphi_j(z)$ in the $j$-th integral, we get a sum of integrals over $[-1, 1]$. These integrals are far less oscillating and can be evaluated with high precision by means of $r$-point Gaussian rules w.r.t. the weights $(1 + t)^{\beta}$ and $(1 - t)^{\alpha}$ in the first and last integrals, and w.r.t. the Legendre weight in the others, respectively. Note that the transformed integrands contain factors of the type $g\big(\varphi_j^{-1}(t)\big)$, which are analytic functions, so that the error of the Gaussian rule goes to zero geometrically.

The same dilation technique has been applied for "nearly" singular kernels of the type $K_3(x, y)$ [30,31]. In this case, we consider the modified moments
$$M_k(y) = \int_{-1}^{1} p_k(w_C, x)\, \frac{1}{(x^2 + y^2)^2}\, w(x)\, dx$$
and we set $z = \frac{x}{y}$, obtaining
$$M_k(y) = \frac{1}{y^3} \int_{-1/y}^{1/y} p_k(w_C, yz)\, \frac{1}{(z^2 + 1)^2}\, w(yz)\, dz.$$
We fix the parameter $s := \left\lceil \frac{1}{y} \right\rceil$ also in this case, in order to have $d := \frac{2}{sy} \sim 2$, and divide the interval $\left[-\frac{1}{y}, \frac{1}{y}\right]$ into $s$ subintervals of size $d$. Hence, we get
$$M_k(y) = \frac{1}{y^3} \sum_{j=1}^{s} \int_{-\frac{1}{y} + (j-1)d}^{-\frac{1}{y} + jd} p_k(w_C, yz)\, \frac{1}{(z^2 + 1)^2}\, w(yz)\, dz.$$
Denoting by $\psi_j$, $j = 1, \dots, s$, the linear transformation mapping the $j$-th subinterval onto $[-1, 1]$ and by setting $t = \psi_j(z)$ in each integral, we obtain a sum of integrals over $[-1, 1]$. Since the poles of the integrand functions are far away from the real axis, the above integrals can be evaluated by means of $r$-point Gaussian rules w.r.t. the weights $(1 + t)^{\beta}$ and $(1 - t)^{\alpha}$ in the first and last integrals and w.r.t. the Legendre weight in the others, respectively. Note that the transformed integrands contain factors of the type $\left(2 \pm \frac{t - 1}{2}\, y\, d\right)^{\rho}$, $\rho > -1$, which are analytic functions, so that the error of the Gaussian rule goes to zero geometrically.
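The splitting strategy described above for $K_1$ can be sketched in a few lines of Python (an illustrative sketch, not the paper's code: the routine name and the choice of 40 Gauss–Jacobi points are arbitrary assumptions):

```python
import numpy as np
from scipy.special import roots_jacobi

def moments_K1(poly, y, lam, alpha, beta, ngauss=40):
    """Sketch of the splitting strategy for the modified moments with
    K_1(x, y) = |x - y|^lam and w(x) = (1 - x)^alpha (1 + x)^beta.

    `poly` is the polynomial factor of the moment (e.g. a Chebyshev polynomial).
    Each half-interval is mapped to [-1, 1] and integrated with a Gauss-Jacobi
    rule that absorbs the algebraic factor at x = y and the endpoint weight,
    so only a smooth factor remains in the integrand.
    """
    # left piece [-1, y]:  x = ((y + 1) z + y - 1) / 2,  dx = (y + 1)/2 dz
    zl, wl = roots_jacobi(ngauss, lam, beta)          # weight (1 - z)^lam (1 + z)^beta
    xl = ((y + 1) * zl + y - 1) / 2
    jl = ((y + 1) / 2) ** (lam + beta + 1)            # Jacobian and weight scalings
    left = jl * np.sum(wl * poly(xl) * (1 - xl) ** alpha)

    # right piece [y, 1]:  x = ((1 - y) z + y + 1) / 2,  dx = (1 - y)/2 dz
    zr, wr = roots_jacobi(ngauss, alpha, lam)         # weight (1 - z)^alpha (1 + z)^lam
    xr = ((1 - y) * zr + y + 1) / 2
    jr = ((1 - y) / 2) ** (lam + alpha + 1)
    right = jr * np.sum(wr * poly(xr) * (1 + xr) ** beta)

    return left + right

# usage: moment of T_5 with |x - y|^{3/10} and the Chebyshev weight (alpha = beta = -1/2)
M5 = moments_K1(lambda x: np.cos(5 * np.arccos(x)), 0.4, 0.3, -0.5, -0.5)
```

The same pattern, with Gauss–Legendre rules on the interior subintervals, applies to the dilation technique used for the kernels $K_2$ and $K_3$.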

Remark 5
The computation of the modified moments for the considered kernels is carried out by means of Gaussian rules. In doing so we introduce an error of the order of the machine precision, and this choice does not affect the performance of our rule (25). For other choices of kernels the modified moments can be evaluated exactly by means of recurrence relations, which, however, can become progressively unstable as r increases.

Numerical experiments
In this section we present some numerical experiments to analyze the performance of the constrained mock-Chebyshev product rule and compare it with other procedures. More in detail, we perform a direct comparison between the constrained mock-Chebyshev product rule (25) and the classical product formula (4). To this purpose, in each example we consider the same set of test functions $f_i$ and we focus on a particular kernel for different choices of $y \in S$. Tables 1, 2 and 3 display the absolute errors of the two rules, the constrained mock-Chebyshev product rule being based on a grid of 1001 uniformly distributed nodes in the interval $[-1, 1]$. In this setting we have $n = 1000$, with $m$, $p$ and $r$ fixed accordingly.

Table 1: Numerical results for the integrals $I(f_i, y)$ by means of the constrained mock-Chebyshev product rule (top) and the classical product rule (bottom), with kernel function $K(x, y) = |x - y|^{3/10}$ and weight function $w(x) = w_C(x)$.

Table 2: Numerical results for the integrals $I(f_i, y)$ by means of the constrained mock-Chebyshev product rule (top) and the classical product rule (bottom), with kernel function $K(x, y) = \frac{1}{(x^2 + y^2)^2}$.

Example 1  We consider the integral (1) in which $K(x, y) = |x - y|^{3/10}$ is a weakly singular kernel function and $w(x) = w_C(x)$.

Example 3
We consider the integral (1) with the highly oscillating kernel $K(x, y) = \sin(yx)$ and $w(x) = w_C(x)$.
The results in Tables 1, 2 and 3 highlight that the product rules (25) and (4) have comparable performances. Nevertheless, the constrained mock-Chebyshev product formula (25) is based on equally spaced data, and this makes it the adequate tool in many practical applications, differently from the classical product rule (4), which requires the function $f$ to be known analytically or at least at the zeros of the orthonormal polynomials w.r.t. the weight function $w$.

In Example 4 we test the performance of the constrained mock-Chebyshev rule for different choices of equispaced grids in the integration interval $[-1, 1]$. In this context, Table 4 reports the relative errors attained by the rule. Table 4 and Fig. 1 show that the denser the equispaced grid is, the more accurate the constrained mock-Chebyshev rule (25) becomes. However, we also observe a slight loss of accuracy for increasing values of $y$. This can be attributed to the high oscillations of the kernel $\cos(yx)$ and to the Runge phenomenon that occurs when using polynomial interpolation with polynomials of high degree on a set of equispaced interpolation points.

Conclusions and future works
In this paper we proposed a new product rule, based on equispaced points, through the constrained mock-Chebyshev least squares operator. Moreover, we provided an error estimate and many numerical examples which confirm the accuracy of the proposed formula. By the numerical results, the rule (27) on n nodes shows the same accuracy of the product rule (4) on m = π n 2 nodes. Based on these satisfactory results, it is our aim to extend this formula to the case of unbounded intervals or/and bivariate domains. 35. Trefethen, L.N.: Is Gauss quadrature better than Clenshaw-Curtis? SIAM Rev. 50, 67-87 (2008) Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.