Solution of second kind Fredholm integral equations by means of Gauss and anti-Gauss quadrature rules

This paper is concerned with the numerical approximation of Fredholm integral equations of the second kind. A Nyström method based on the anti-Gauss quadrature formula is developed and investigated in terms of stability and convergence in appropriate weighted spaces. The Nyström interpolants corresponding to the Gauss and the anti-Gauss quadrature rules are proved to furnish upper and lower bounds for the solution of the equation, under suitable assumptions which are easily verified for a particular weight function. Hence, an error estimate is available, and the accuracy of the solution can be improved by approximating it by an averaged Nyström interpolant. The effectiveness of the proposed approach is illustrated through different numerical tests.


Introduction
Let us consider the following Fredholm integral equation of the second kind, where f is the unknown function, k and g are two given functions, and w is the Jacobi weight with parameters α, β > −1. Several numerical methods have been described for the numerical approximation of the solution of Eq. (1) (collocation methods, projection methods, Galerkin methods, etc.) and have been extensively investigated in terms of stability and convergence in suitable function spaces, also according to the smoothness properties of the kernel k and of the right-hand side g; see [2,6,7,9,17,25,29-32].
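For readability, we recall the likely form of the displays referenced above. The following is a reconstruction consistent with the operator form (I − K) f = g used in Sect. 4; the equation numbering, and the absence of a multiplicative parameter on the integral, are assumptions:

```latex
f(y) - \int_{-1}^{1} k(x, y)\, f(x)\, w(x)\, \mathrm{d}x = g(y),
\qquad y \in [-1, 1],
\tag{1}
```

```latex
w(x) = (1 - x)^{\alpha} (1 + x)^{\beta},
\qquad \alpha, \beta > -1.
\tag{2}
```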
Most of these methods are based on the approximation of the integral appearing in (1) by means of the well-known Gauss quadrature formula, introduced by C. F. Gauss at the beginning of the nineteenth century [10] and considered one of the most significant discoveries in the field of numerical integration and in all of numerical analysis. As is well known, it is an interpolatory formula having maximal algebraic degree of exactness, it is stable and convergent, and it provides one of the most important applications of orthogonal polynomials. Gauss's discovery inspired his contemporaries, such as Jacobi and Christoffel, who developed Gauss's method in new directions, and Heun, who generalized Gauss's idea to ordinary differential equations, opening the way to the discovery of Runge-Kutta methods. Since then, several other generalizations and extensions have been introduced, such as the Lobatto and Radau quadrature formulae, Gauss-Kronrod quadrature rules, optimal rules with multiple nodes, the anti-Gauss quadrature formula, etc. [11,18].
Gauss-Kronrod quadrature formulae were introduced in 1964 in order to economically estimate the error term for the n-point Gauss quadrature rule for the Legendre weight. Their main advantage is that the degree of exactness is (at least) 3n + 1, by means of 2n + 1 evaluations of the integrand function. However, they fail to exist for some particular weight functions (Hermite and Laguerre measures, Gegenbauer and Jacobi measures for certain values of the parameters) because some of the quadrature nodes may be complex.
To overcome this problem, Laurie [18] constructed in 1996 an alternative interpolatory formula, the anti-Gauss quadrature rule. It always has positive coefficients and distinct real nodes and is designed to have an error of the same magnitude as the error of the Gauss formula and opposite in sign, when applied to polynomials of certain degrees. Consequently, coupled to a Gauss rule, it provides a bound for the quadrature error, while an average of the Gauss and anti-Gauss formulae sometimes produces significantly more accurate results. In particular, it has been proved that for some weight functions the averaged formula has a higher degree of exactness [21,23,34]. Several researchers investigated and generalized the anti-Gauss formula in relation to the approximation of integrals; see [1,3,15,19,22,28,33].
This paper aims to take advantage of anti-Gauss formulae in the numerical solution of a Fredholm integral equation of the second kind, including the case in which the unknown solution may have algebraic singularities at the endpoints of the integration interval.
Following [20], we develop a global approximation method of Nyström type for Eq. (1) based on the anti-Gauss quadrature formula, and we prove stability and convergence results by exploiting two novel properties of the nodes and weights of the anti-Gauss rule. Under suitable assumptions, we show that the Nyström interpolants based on the Gauss and the anti-Gauss formulae bracket the solution of the equation. Such assumptions are not easily verified in general, but we prove that this happens for a particular weight function, and we conjecture that this result can be extended to a broad class of weight functions. The availability of upper and lower bounds for the solution makes it possible to estimate the approximation error for a given number of quadrature nodes, allowing one to improve the accuracy by refining the discretization, if required, or to accept the current approximation. In particular situations, the Nyström interpolant obtained by averaging the two bounds produces much better results than both the Gauss and the anti-Gauss approximations.
The paper is structured as follows. Section 2 provides preliminary definitions, notations, and well-known results concerning orthogonal polynomials, and Gauss and anti-Gauss quadrature formulae. Section 3 contains new theoretical results on the nodes and coefficients of the anti-Gauss quadrature rule, and provides an error estimate in suitable weighted spaces. Section 4 introduces a numerical method to approximate the solution of the integral equation, whose accuracy is investigated in Sect. 5 through some numerical tests. Finally, the "Appendix" reports the proof of a rather technical lemma.

Function spaces
Let us denote by C^q([−1, 1]), q = 0, 1, . . ., the set of all continuous functions on [−1, 1] having q continuous derivatives, and by L^p the space of all measurable functions f with finite L^p norm. Let us introduce the Jacobi weight u(x) = (1 − x)^γ (1 + x)^δ with γ, δ > −1/p. Then, f ∈ L^p_u if and only if f u ∈ L^p, and we endow the space L^p_u with the norm ‖f‖_{L^p_u} = ‖f u‖_p. If p = ∞ and γ, δ > 0, the space L^∞_u of weighted continuous functions consists of the functions which are continuous on (−1, 1) and such that lim_{x→±1} (f u)(x) = 0. If γ = 0 (respectively, δ = 0), L^∞_u consists of all functions which are continuous on (−1, 1] (respectively, [−1, 1)) and such that lim_{x→−1^+} (f u)(x) = 0 (respectively, lim_{x→1^−} (f u)(x) = 0). We equip the space L^∞_u with the weighted uniform norm ‖f‖_{L^∞_u} = ‖f u‖_∞, and we remark that L^∞_u endowed with such a weighted norm is a Banach space. The definition of L^∞_u ensures the validity of the Weierstrass theorem, i.e., the weighted error of best polynomial approximation E_n(f)_u = inf_{P ∈ P_n} ‖(f − P) u‖_∞ tends to zero as n → ∞. For smoother functions, we introduce the weighted Sobolev-type space W^p_r(u).
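The weighted norms and the Sobolev-type space follow the standard conventions of the weighted-approximation literature (see, e.g., the monograph of Mastroianni and Milovanović cited as [20]); the following displays are a sketch of those conventions, not a verbatim copy of the paper's formulas:

```latex
\|f\|_{L^p_u} = \|f u\|_p
  = \left( \int_{-1}^{1} |f(x)\, u(x)|^p \, \mathrm{d}x \right)^{1/p},
  \qquad 1 \le p < \infty,

\|f\|_{L^\infty_u} = \|f u\|_\infty
  = \max_{x \in [-1,1]} |f(x)\, u(x)|,

E_n(f)_u = \inf_{P \in \mathbb{P}_n} \|(f - P)\, u\|_\infty
  \longrightarrow 0 \quad (n \to \infty),

W^p_r(u) = \Bigl\{ f \in L^p_u \,:\, f^{(r-1)} \in AC(-1, 1),\
  \|f^{(r)} \varphi^r u\|_p < \infty \Bigr\},
  \qquad \varphi(x) = \sqrt{1 - x^2}.
```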

Monic orthogonal polynomials
Let {p_j}_{j=0}^∞ be the sequence of monic orthogonal polynomials on (−1, 1) with respect to the Jacobi weight defined in (2), where Γ is the Gamma function. It is well known (see, for instance, [12]) that such a sequence satisfies the following three-term recurrence relation, where the coefficients α_j and β_j are given explicitly. Equivalently, by virtue of the Stieltjes process, the recursion coefficients can be written as
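The recursion coefficients admit well-known closed forms. The following sketch (our own code, based on standard formulae for the monic Jacobi polynomials; the function name and conventions are not from the paper) computes α_j and β_j, with β_0 set to the integral of the weight:

```python
import math

def jacobi_coeffs(alpha, beta, n):
    """First n recursion coefficients (a_j, b_j), j = 0..n-1, of the monic
    Jacobi polynomials p_{j+1}(x) = (x - a_j) p_j(x) - b_j p_{j-1}(x),
    with the convention b_0 = integral of the weight w over (-1, 1)."""
    s = alpha + beta
    a = [0.0] * n
    b = [0.0] * n
    # b_0 = 2^(s+1) Gamma(alpha+1) Gamma(beta+1) / Gamma(s+2)
    b[0] = 2.0 ** (s + 1) * math.gamma(alpha + 1) * math.gamma(beta + 1) / math.gamma(s + 2)
    a[0] = (beta - alpha) / (s + 2)
    for j in range(1, n):
        a[j] = (beta ** 2 - alpha ** 2) / ((2 * j + s) * (2 * j + s + 2))
        if j == 1:
            # the general formula degenerates at j = 1 when s = -1
            b[j] = 4 * (1 + alpha) * (1 + beta) / ((2 + s) ** 2 * (3 + s))
        else:
            b[j] = (4 * j * (j + alpha) * (j + beta) * (j + s)
                    / ((2 * j + s) ** 2 * ((2 * j + s) ** 2 - 1)))
    return a, b
```

For the Legendre weight this gives b_j = j²/(4j² − 1); for the first kind Chebychev weight, b_1 = 1/2 and b_j = 1/4 for j ≥ 2, in agreement with Sect. 2.3.2.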

Quadrature formulae
In this subsection, we recall two quadrature rules which will be useful for our aims. The first one is the classical Gauss-Jacobi quadrature rule [10], whereas the second one is the anti-Gauss quadrature rule, developed by Laurie in [18]; see also [19].

The Gauss-Jacobi quadrature formula
Let f be defined in (−1, 1), let w be the Jacobi weight given in (2), and let us express the integral as I(f) = G_n(f) + e_n(f), where G_n(f) = Σ_{j=1}^{n} λ_j f(x_j) is the well-known n-point Gauss-Jacobi quadrature rule and e_n(f) stands for the quadrature error. The quadrature nodes {x_j}_{j=1}^{n} are the zeros of the Jacobi orthogonal polynomial p_n(x), and the weights or coefficients {λ_j}_{j=1}^{n} are the so-called Christoffel numbers (see [20, p. 235]). The Gauss-Jacobi quadrature rule is an interpolatory formula having optimal algebraic degree of exactness 2n − 1, namely I(P) = G_n(P), or equivalently e_n(P) = 0, for any P ∈ P_{2n−1}, where P_{2n−1} is the set of the algebraic polynomials of degree at most 2n − 1. The coefficients λ_j are all positive, and the formula is stable in the sense of [20, Definition 5.1.1], as Σ_{j=1}^{n} λ_j = ∫_{−1}^{1} w(x) dx. Moreover, the above condition, together with (14), guarantees the convergence of the quadrature rule (see, for instance, [27,35]), that is lim_{n→∞} e_n(f) = 0.
If f ∈ C^{2n}([−1, 1]), the error e_n(f) of the Gauss quadrature formula has the following analytical expression [5], where ξ ∈ (−1, 1) depends on n and f. If we consider functions belonging to the Sobolev-type spaces W^1_r(w), it is possible to estimate e_n(f) (see, e.g., [20]) in terms of the weighted error of best polynomial approximation, i.e., where C ≠ C(n, f) and φ(x) = √(1 − x²). Here and in the sequel, C denotes a positive constant which may take a different value in different formulas. We write C ≠ C(a, b, . . .) to say that C is independent of the parameters a, b, . . ., and C = C(a, b, . . .) to say that C depends on them.
Concerning the computation of the nodes x_j and of the weights λ_j of the Gauss-Jacobi quadrature rule, in 1962 Wilf observed (see also [14]) that they can be obtained by solving the eigenvalue problem for the Jacobi matrix of order n associated to the coefficients α_j and β_j defined in (6) and (8), respectively. Specifically, the nodes x_j are the eigenvalues of the symmetric tridiagonal matrix J_n, and the weights are determined as λ_j = β_0 v_{j,1}², where β_0 is defined as in (7) and v_{j,1} is the first component of the normalized eigenvector corresponding to the eigenvalue x_j.
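The eigenvalue procedure described above can be sketched as follows (a reconstruction of the standard Wilf/Golub-Welsch approach, not the authors' code; the helper names are ours):

```python
import math
import numpy as np

def jacobi_coeffs(alpha, beta, n):
    # recursion coefficients of the monic Jacobi polynomials (cf. Sect. 2.2)
    s = alpha + beta
    a = np.zeros(n)
    b = np.zeros(n)
    b[0] = 2.0 ** (s + 1) * math.gamma(alpha + 1) * math.gamma(beta + 1) / math.gamma(s + 2)
    a[0] = (beta - alpha) / (s + 2)
    for j in range(1, n):
        a[j] = (beta ** 2 - alpha ** 2) / ((2 * j + s) * (2 * j + s + 2))
        if j == 1:
            b[j] = 4 * (1 + alpha) * (1 + beta) / ((2 + s) ** 2 * (3 + s))
        else:
            b[j] = (4 * j * (j + alpha) * (j + beta) * (j + s)
                    / ((2 * j + s) ** 2 * ((2 * j + s) ** 2 - 1)))
    return a, b

def gauss_jacobi(alpha, beta, n):
    """Nodes and weights of the n-point Gauss-Jacobi rule: the nodes are the
    eigenvalues of the symmetric tridiagonal Jacobi matrix J_n, and the
    weights are beta_0 times the squared first components of the normalized
    eigenvectors."""
    a, b = jacobi_coeffs(alpha, beta, n)
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    x, V = np.linalg.eigh(J)          # eigenvalues in ascending order
    w = b[0] * V[0, :] ** 2
    return x, w
```

For the first kind Chebychev weight this reproduces the classical nodes cos((2j − 1)π/(2n)) and the constant weights π/n.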

The anti-Gauss quadrature formula
Let us approximate the integral I(f) defined in (12) by I(f) = G̃_{n+1}(f) + ẽ_{n+1}(f), where G̃_{n+1}(f) = Σ_{j=1}^{n+1} λ̃_j f(x̃_j) is the (n + 1)-point anti-Gauss quadrature formula and ẽ_{n+1}(f) is the corresponding remainder term. Such a rule is an interpolatory formula designed to have the same degree of exactness as the Gauss-Jacobi formula G_n(f) in (13) and an error of the same magnitude and opposite in sign with respect to the error of G_n(f), when applied to polynomials of degree at most 2n + 1, namely ẽ_{n+1}(P) = −e_n(P), i.e., G̃_{n+1}(P) = 2I(P) − G_n(P), for any P ∈ P_{2n+1}. This quadrature formula was developed with the aim of estimating the error term e_n(f) of the Gauss rule G_n(f), especially when the Gauss-Kronrod formula fails in this intent. This happens, for instance, when we deal with a Jacobi weight with parameters α and β such that min{α, β} ≥ 0 and max{α, β} > 5/2; see [26].
If f is a polynomial of degree at most 2n + 1, the Gauss and the anti-Gauss quadrature rules provide an interval containing the exact integral I(f), an interval which gets smaller as n increases. Indeed, either G_n(f) ≤ I(f) ≤ G̃_{n+1}(f) or G̃_{n+1}(f) ≤ I(f) ≤ G_n(f) holds. If, on the contrary, f is a general function, it is still possible to prove, under suitable assumptions (see [4, Equations (26)-(28)], [8, p. 1664], and [28, Theorem 3.1]), that the Gauss and the anti-Gauss quadrature rules bracket the integral I(f), and that the error of the averaged Gaussian quadrature formula [18] (19), G^{AvG}_{2n+1}(f) = (G_n(f) + G̃_{n+1}(f))/2, is bounded by half of the difference |G̃_{n+1}(f) − G_n(f)|. The above bound allows one to choose the integer n so that the averaged Gaussian formula reaches a prescribed accuracy. It is also worth noting that, while the averaged rule (19) has, in general, degree of exactness 2n + 1, under particular conditions it has been proved to have degree of exactness 4n − 2ℓ + 2 for a fixed (and usually small) integer value of ℓ [21,23,34]. An anti-Gauss quadrature formula can easily be constructed [18]. The key to such a construction is relation (17), which characterizes the anti-Gauss quadrature formula as an (n + 1)-point Gauss rule for the functional ℐ(f) = 2I(f) − G_n(f). If q ∈ P_{2n−1}, by virtue of (14), then ℐ(q) = I(q), which is relation (20), while for the Jacobi polynomial p_n and any integrable function f it holds ℐ(p_n f) = 2I(p_n f), which is relation (21), since the Gauss nodes are zeros of p_n. By using (20) and (21), we can compute the recursion coefficients {α̃_j}_{j=0}^{n} and {β̃_j}_{j=1}^{n} of the recurrence relation defining the sequence {p̃_j}_{j=0}^{n+1} of monic polynomials orthogonal with respect to the functional ℐ.
The following theorem holds.

Theorem 1 The recursion coefficients for the polynomials orthogonal with respect to the functional ℐ are related to the recursion coefficients for the Jacobi polynomials as follows: α̃_j = α_j for j = 0, . . . , n, β̃_j = β_j for j = 0, . . . , n − 1, and β̃_n = 2β_n.
Proof The theorem was proved by Laurie in [18]. For its relevance, we report here the scheme of the proof. The fact that α̃_0 = α_0 and β̃_0 = β_0 is trivial. Then, the recurrence relations for the two families of orthogonal polynomials imply that p̃_1 = p_1. Let us proceed by induction. Let p̃_j = p_j for any 1 ≤ j ≤ n − 1. Taking into account (9), (11), and (20), we have α̃_j = α_j and β̃_j = β_j, so that p̃_{j+1} = p_{j+1}. In particular, p̃_n = p_n. To conclude the proof, by applying (21) and again (9), (11), and (20), we obtain β̃_n = 2β_n. The previous theorem implies that the sequence of polynomials {p̃_j}_{j=0}^{n+1} is defined by p̃_j = p_j for j ≤ n and by the recurrence (22), p̃_{n+1}(x) = (x − α_n) p_n(x) − 2β_n p_{n−1}(x) = p_{n+1}(x) − β_n p_{n−1}(x). Since the polynomials {p̃_j}_{j=0}^{n+1} satisfy a recurrence relation, the nodes x̃_j and the weights λ̃_j of the associated anti-Gauss quadrature formula can be computed by solving the eigenvalue problem for the modified Jacobi matrix of order n + 1, obtained by bordering J_n with the diagonal entry α_n and the off-diagonal entries √(2β_n) e_n in the last row and column, with e_n = (0, 0, . . . , 1)^T ∈ R^n. In fact, the n + 1 nodes are the eigenvalues of the above matrix and the weights are determined as λ̃_j = β_0 ṽ_{j,1}², where β_0 is defined by (7) and ṽ_{j,1} is the first component of the normalized eigenvector associated to the eigenvalue x̃_j. The anti-Gauss quadrature rule has nice properties: the weights {λ̃_j}_{j=1}^{n+1} are strictly positive and the nodes {x̃_j}_{j=1}^{n+1} interlace with the Gauss nodes {x_j}_{j=1}^{n}, i.e., x̃_1 < x_1 < x̃_2 < · · · < x_n < x̃_{n+1}. Thus, we can deduce that the anti-Gauss nodes x̃_j with j = 2, . . . , n belong to the interval (−1, 1), whereas the first and the last node may be outside of it. Specifically, bounds for x̃_1 and x̃_{n+1} were proved in [18]. More in detail [18, Theorem 4], if the conditions (24) are satisfied, then all the anti-Gauss nodes belong to [−1, 1]. From now on, we will assume that the parameters of the weight function w satisfy (24). Let us remark that some classical Jacobi weights, such as the Legendre weight (α = β = 0) and the Chebychev weights of the first (α = β = −1/2), second (α = β = 1/2), third (α = −1/2, β = 1/2), and fourth (α = 1/2, β = −1/2) kind, satisfy conditions (24).
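Theorem 1 translates directly into an algorithm: the anti-Gauss rule is obtained from the Jacobi matrix of order n + 1 by doubling the last recursion coefficient β_n. The sketch below (our own code, under the conventions above, not the authors' implementation) builds both rules and lets one check the sign-flip property ẽ_{n+1}(P) = −e_n(P) for polynomials of degree at most 2n + 1:

```python
import math
import numpy as np

def jacobi_coeffs(alpha, beta, n):
    # recursion coefficients of the monic Jacobi polynomials (cf. Sect. 2.2)
    s = alpha + beta
    a = np.zeros(n); b = np.zeros(n)
    b[0] = 2.0 ** (s + 1) * math.gamma(alpha + 1) * math.gamma(beta + 1) / math.gamma(s + 2)
    a[0] = (beta - alpha) / (s + 2)
    for j in range(1, n):
        a[j] = (beta ** 2 - alpha ** 2) / ((2 * j + s) * (2 * j + s + 2))
        b[j] = (4 * (1 + alpha) * (1 + beta) / ((2 + s) ** 2 * (3 + s)) if j == 1
                else 4 * j * (j + alpha) * (j + beta) * (j + s)
                     / ((2 * j + s) ** 2 * ((2 * j + s) ** 2 - 1)))
    return a, b

def rule_from_matrix(diag, off, b0):
    # nodes = eigenvalues, weights = b0 * (first eigenvector components)^2
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    x, V = np.linalg.eigh(J)
    return x, b0 * V[0, :] ** 2

def gauss(alpha, beta, n):
    a, b = jacobi_coeffs(alpha, beta, n)
    return rule_from_matrix(a, np.sqrt(b[1:]), b[0])

def anti_gauss(alpha, beta, n):
    """(n+1)-point anti-Gauss rule: same Jacobi matrix of order n + 1,
    except that the last recursion coefficient beta_n is doubled."""
    a, b = jacobi_coeffs(alpha, beta, n + 1)   # a_0..a_n, b_0..b_n
    off = np.sqrt(b[1:])
    off[-1] = math.sqrt(2.0 * b[n])
    return rule_from_matrix(a, off, b[0])
```

For the Legendre weight and n = 3, for instance, the two errors on x^6 cancel in the average, and the interlacing property (23) can be verified directly on the computed nodes.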
The next theorem gives the anti-Gauss rule for the first kind Chebychev weight. It will be useful in Sect. 4. Let us denote by T_n(x) = cos(n arccos(x)) = 2^{n−1} p_n(x), n ≥ 1, the first kind Chebychev polynomial of degree n in trigonometric form, where p_n(x) is the monic polynomial of the same degree; see Sect. 2.2.

Theorem 2
If α = β = −1/2, then the nodes and the weights of the anti-Gauss quadrature formula (16) are given by x̃_j = cos θ̃_j, with θ̃_j = (n − j + 1)π/n, j = 1, . . . , n + 1, and λ̃_1 = λ̃_{n+1} = π/(2n), λ̃_j = π/n, j = 2, . . . , n. Proof From recurrence (22), being β_n = 1/4, we have p̃_{n+1}(x) = p_{n+1}(x) − (1/4) p_{n−1}(x) = 2^{−n}(T_{n+1}(x) − T_{n−1}(x)) = −2^{1−n}(1 − x²) U_{n−1}(x), where the U_j denote the Chebychev polynomials of the second kind. This proves the expression for the nodes. Now, let us apply (16) to a first kind Chebychev polynomial of degree k = 0, 1, . . . , n. We have Σ_{j=1}^{n+1} λ̃_j cos(k θ̃_j) = π δ_{k,0}, where δ_{k,0} is the Kronecker symbol and θ̃_j = (n − j + 1)π/n. Multiplying both terms by cos(k θ̃_r), and summing over k, we obtain the stated expression for the weights.
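Theorem 2 can be checked numerically. The sketch below (our own code, with helper names of our choosing) implements the closed-form Gauss- and anti-Gauss-Chebychev rules and verifies both the common degree of exactness and the opposite-sign errors on T_{2n}:

```python
import math

def gauss_cheb1(n):
    # classical n-point Gauss-Chebychev rule for w(x) = 1/sqrt(1 - x^2)
    nodes = [math.cos((2 * j - 1) * math.pi / (2 * n)) for j in range(1, n + 1)]
    return nodes, [math.pi / n] * n

def anti_gauss_cheb1(n):
    """Closed-form (n+1)-point anti-Gauss rule of Theorem 2: nodes
    cos(j*pi/n), j = 0..n (the endpoints -1 and 1 are included), weights
    pi/(2n) at the two endpoints and pi/n at the interior nodes."""
    nodes = [math.cos(j * math.pi / n) for j in range(n + 1)]
    return nodes, [math.pi / (2 * n)] + [math.pi / n] * (n - 1) + [math.pi / (2 * n)]

def quad(rule, f):
    nodes, weights = rule
    return sum(w * f(x) for x, w in zip(nodes, weights))

def T(k):
    # first kind Chebychev polynomial in trigonometric form
    return lambda x: math.cos(k * math.acos(max(-1.0, min(1.0, x))))
```

Both rules return π δ_{k,0} for k ≤ 2n − 1, while on T_{2n} the Gauss rule gives −π and the anti-Gauss rule +π, i.e., errors of equal size and opposite sign, as predicted.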

Convergence results for the anti-Gauss rule in weighted spaces
This section aims to provide an error estimate for the anti-Gauss rule in weighted Sobolev spaces. Such an estimate, which will be useful for our aims, is similar to inequality (15); see (28) in Proposition 1. To prove it, we need two additional properties of the nodes and weights appearing in (16), which are stated in the following lemma. Let A, B > 0 be quantities depending on some parameters; then, we write A ∼ B if there exists a constant C > 1, with C ≠ C(A, B), such that B/C ≤ A ≤ C B for any value of the parameters.
Lemma 1 Let {x̃_j}_{j=1}^{n+1} and {λ̃_j}_{j=1}^{n+1} be the quadrature nodes and the coefficients, respectively, of the anti-Gauss quadrature formula G̃_{n+1}(f) defined in (16). Then, setting Δx̃_j = x̃_{j+1} − x̃_j, for j = 1, . . . , n, we have Δx̃_j ∼ Δx̃_{j+1}. Moreover, if Δx̃_j ∼ √(1 − x̃_j²)/n (26) holds, then λ̃_j ∼ w(x̃_j) Δx̃_j (27), where the constants in ∼ are independent of n and j.
We were not able to prove that (26) is always true, but we conjecture it is. Indeed, the nodes x̃_j interlace with the zeros of the Jacobi polynomial of degree n; see (23). Since (26) holds for such zeros, the anti-Gauss nodes should have the same asymptotic distribution. The validity of (26) would imply that the nodes {x̃_j}_{j=1}^{n+1} have an arcsine distribution [20], that is, setting x̃_j = cos θ̃_j, it holds θ̃_j − θ̃_{j+1} ∼ 1/n. Relations (26) and (27) are essential in the proof of the next proposition.

Proof
The proof can be obtained, mutatis mutandis, from the proof of [20, Theorem 5.1.8] by using Lemma 1.

The numerical method
We propose a solution method for a second kind Fredholm integral equation, based on the quadrature rules introduced in Sect. 2. To this end, we rewrite Eq. (1) in the operator form (29), (I − K) f = g, where I is the identity operator and (K f)(y) = ∫_{−1}^{1} k(x, y) f(x) w(x) dx. Let us approximate the integral operator K by means of the Gauss-Jacobi quadrature formula (13) and by the anti-Gauss quadrature rule (16), obtaining the discrete operators (30) and (31), (K_n f)(y) = Σ_{j=1}^{n} λ_j k(x_j, y) f(x_j) and (K̃_{n+1} f)(y) = Σ_{j=1}^{n+1} λ̃_j k(x̃_j, y) f(x̃_j). Then, we consider the equations (I − K_n) f_n = g (32) and (I − K̃_{n+1}) f̃_{n+1} = g (33), where f_n and f̃_{n+1} are two unknown functions. By evaluating (32) at the nodes {x_i}_{i=1}^{n}, and multiplying the equations by the weight function u evaluated at x_i (see (3)), we obtain the system (34), where a_j = u(x_j) f_n(x_j) are the entries of the solution vector a. Analogously, a simple collocation of Eq. (33) at the knots {x̃_i}_{i=1}^{n+1}, and a multiplication of both sides by u(x̃_i), leads to the square system (35) of order n + 1, where ã_j = u(x̃_j) f̃_{n+1}(x̃_j) are the entries of the solution vector ã. A compact representation of systems (34) and (35) is given by (36), A a = h and Ã ã = h̃, where A and Ã denote the coefficient matrices, a and ã the solution vectors, and h and h̃ the right-hand sides with entries h_i = u(x_i) g(x_i) and h̃_i = u(x̃_i) g(x̃_i). As remarked at the end of Sect. 2.3.2, in some situations the anti-Gauss nodes might include ±1. To avoid that (35) loses significance, in the weight u(x) we set γ = 0 whenever x̃_{n+1} = 1, and δ = 0 when x̃_1 = −1.
Once systems (34) and (35) have been solved, we can compute the corresponding weighted Nyström interpolants (37) and (38). Thus, if systems (34) and (35) have a unique solution for n large enough, then (37) and (38) provide a natural interpolation formula for obtaining f_n(y) and f̃_{n+1}(y) for each y ∈ [−1, 1]. Conversely, if (37)-(38) are solutions of (32)-(33), then the coefficients a_j and ã_j are solutions of systems (34) and (35), respectively. This is the well-known Nyström method, developed for the first time in 1930 [24] and widely analyzed in terms of convergence and stability in different function spaces, according to the smoothness properties of the known functions; see [2,6,9,13,17].
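To make the construction concrete, here is a minimal Nyström solver for the Legendre weight (u ≡ 1), with a manufactured kernel and right-hand side of our own choosing, not one of the paper's test equations; the right-hand side is computed from a prescribed solution by a fine quadrature rule:

```python
import math
import numpy as np

def nystrom_solve(kernel, g, n):
    """Nystrom method for f(y) - int_{-1}^{1} k(x, y) f(x) dx = g(y) with the
    Legendre weight (u = 1). Collocating at the Gauss-Legendre nodes gives
    the linear system (I - A) a = h, with A[i, j] = lam_j * k(x_j, x_i) and
    h_i = g(x_i); the returned closure is the Nystrom interpolant (37)."""
    x, lam = np.polynomial.legendre.leggauss(n)
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = lam[j] * kernel(x[j], x[i])
    a = np.linalg.solve(np.eye(n) - A, np.array([g(t) for t in x]))
    def interpolant(y):
        return g(y) + sum(lam[j] * kernel(x[j], y) * a[j] for j in range(n))
    return interpolant

# Manufactured example: kernel k(x, y) = exp(x*y)/4 and prescribed solution
# f*(y) = cos(y); the right-hand side g = f* - K f* is evaluated by a fine
# 64-point Gauss-Legendre rule.
kernel = lambda x, y: math.exp(x * y) / 4.0
xf, lf = np.polynomial.legendre.leggauss(64)
def g(y):
    return math.cos(y) - float(np.sum(lf * np.exp(xf * y) / 4.0 * np.cos(xf)))
```

With n = 10 the interpolant already matches the prescribed solution to high accuracy; repeating the construction with the anti-Gauss nodes and weights of Sect. 2.3.2 yields the second interpolant (38), and their average gives (42).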
In the next theorem, by exploiting the results introduced in Sect. 3, we extend the well-known stability and convergence results, valid for the Nyström method based on the Gauss rule [6,9,20], to the Nyström method based on the anti-Gauss quadrature formula.

Theorem 3 Assume that Ker(I − K) = {0} in L^∞_u, and let f* be the unique solution of Eq. (29) for a given right-hand side g ∈ L^∞_u. Moreover, let us assume that, for an integer r, the kernel k and the right-hand side g satisfy the smoothness assumptions (39). Then, for n sufficiently large, systems (34) and (35) are uniquely solvable and well-conditioned, where cond_∞(A) denotes the condition number of A in the matrix ∞-norm and C is independent of n. Finally, if (26) holds, the error estimates (40) hold true, where the constants in O are independent of n and f*.
Proof The proof follows the lines of the corresponding theorem for Gauss quadrature [9, Theorem 3.1].
According to the previous theorem, both Nyström interpolants (37) and (38) furnish a good approximation for the unique solution f * of Eq. (29).
At this point, our goal is to prove that the unique solution f* of the equation is bracketed by the two Nyström interpolants for any y ∈ [−1, 1], namely that either f_n(y) ≤ f*(y) ≤ f̃_{n+1}(y) or f̃_{n+1}(y) ≤ f*(y) ≤ f_n(y) holds (41). This allows us to obtain a better approximation of the solution by the averaged Nyström interpolant (42), f̄_n(y) = (f_n(y) + f̃_{n+1}(y))/2. Let us note that, to prove (41), taking into account (29), (32), and (33), it is sufficient to prove that the discrete operators K_n f_n and K̃_{n+1} f̃_{n+1} provide an interval containing the exact value of the integral operator K, namely that either (43) or (44) holds. As already mentioned in Sect. 2.3.2, inequalities similar to (43) and (44) have already been proved for the integral I(f). Here the situation is different, as the quadrature formulae do not act on a fixed function f, as in (18), but on its approximations. Therefore, before proving (43) and (44), where such approximations f_n and f̃_{n+1} appear, we need the following further result.

Theorem 4 Let us express the integrand function k(x, y) f*(x), and its approximations k(x, y) f_n(x) and k(x, y) f̃_{n+1}(x), in terms of the Jacobi polynomials {π_i} orthonormal with respect to the weight w defined in (2), as in (45)-(47). Then, under the assumptions of Theorem 3, the limit relations (48) hold. Indeed, the convergence of f_n to f* implies the convergence of the corresponding coefficients, which implies the first relation in (48). We remark that condition (39) ensures the boundedness of the integral in the right-hand side. A similar procedure is applied to show the second relation in (48).
In the following theorem, we give a sufficient condition for the bracketing (41) of the solution to hold. The condition is similar to those given in [4, Equations (26)-(28)], [8, p. 1664], and [28, Theorem 3.1] in different contexts. Such a condition is not easily verified in practice without an assumption on the asymptotic behavior of the Gauss and anti-Gauss quadrature formulae when applied to polynomials of increasing degree. We will later prove a stronger result, valid for a particular weight function.

Theorem 5 Let the assumptions of Theorem 3 be satisfied, so that (48) is verified.
Moreover, let us assume that, for any y ∈ [−1, 1], the terms {α_i(y)} introduced in (45) converge to zero sufficiently rapidly, and that relation (49) holds true for n large enough. Then, the bracketing (41) holds, i.e., either f_n(y) ≤ f*(y) ≤ f̃_{n+1}(y) or f̃_{n+1}(y) ≤ f*(y) ≤ f_n(y). Proof Taking into account (29), (32), and (33), it is sufficient to prove either (43) or (44). Let {π_i} denote the Jacobi orthonormal polynomials. Then, by (45), we can expand K f*(y) in the basis {π_i}. Moreover, by (30) and (46), we obtain an analogous expansion for K_n f_n(y). In the first summation π_i ∈ P_{2n−1}, so, by the exactness of the Gauss rule and by (4), the corresponding terms simplify; hence, by (50), we obtain (51). Similarly, by (31) and (47), we expand K̃_{n+1} f̃_{n+1}(y), from which (52) follows by applying (17). Let us now focus on the first term in the right-hand side. By the exactness of the Gauss quadrature rule and the orthogonality of the polynomials π_i(x), it can be rewritten as a Gauss quadrature sum. By replacing this equality in (52), and taking (50) into account, we obtain (53). For n sufficiently large, by using (48) from Theorem 4, equalities (51) and (53) hold up to remainders ε_n → 0 and ε̃_n → 0 as n → ∞. Now, by assumption (49), both required inequalities hold, which shows that either (43) or (44) is satisfied.
We now consider the special case of Chebychev polynomials of the first kind, and show that in this case assumption (49) reduces to a much simpler one.

Corollary 1 Under the assumptions of Theorem 5, if condition (54) holds for n large enough, then the bracketing (41) of the solution holds. Proof For the Chebychev polynomials of the first kind, we have β_0 = π and c_i = 2^{1−2i} π, i ≥ 1; see [12].
To begin with, G_n(π_0) = G̃_{n+1}(π_0) = √π. Let us initially consider the first summation in (49). From the expression of the nodes and weights of the Gauss-Chebychev quadrature formula, we can write G_n(π_i) as a sum of cosines. If i is not a multiple of 2n, [16, Formula 1.342.4] implies that G_n(π_i) = 0. On the contrary, by applying standard trigonometric identities, we obtain the value of G_n(π_{2nk}). Now, let us consider the second summation in (49). For i ≥ 1, from Theorem 2 it follows that G̃_{n+1}(π_i) is a cosine sum over the anti-Gauss nodes. It is immediate to verify that G̃_{n+1}(π_i) = 0 when i is odd. For i even and not a multiple of 2n, a similar trigonometric identity implies that G̃_{n+1}(π_i) = 0. Finally, G̃_{n+1}(π_{2nk}) = √(2π), k = 1, 2, . . .. Thanks to the above relations, (49) becomes (54), and the Corollary is proved.
To illustrate the effectiveness of condition (54), let us assume that the Fourier coefficients (45) exhibit a moderate decay rate, e.g., α_i(y) ∼ 1/i². Then, (54) immediately follows from the classical identities for the series Σ 1/i². On the contrary, assuming for a general weight function that |G_n(π_i)|, |G̃_{n+1}(π_i)| ≤ M_n for i = 2n, 2n + 1, . . ., and that the coefficients α_i(y) decay as above, it is easy to verify that (49) does not hold. We notice that results of this kind are important, in general, for many applications of anti-Gauss quadrature rules. We numerically observed, for other classes of Gegenbauer weight functions (α = β), a behaviour of G_n(π_i) and G̃_{n+1}(π_i) similar to the one proved for first kind Chebychev polynomials. We conjecture that it is possible to prove conditions analogous to (54) also in these cases. This aspect will be studied in further research.

Numerical tests
The goal of this section is to illustrate, by numerical experiments, the performance of the method described in the paper. We consider three second kind Fredholm integral equations, having a different degree of regularity in suitable weighted spaces. For each test equation, we solve systems (34) and (35), we compute the Nyström interpolants f_n and f̃_{n+1}, defined in (37) and (38), respectively, as well as the averaged Nyström interpolant f̄_n given by (42). Then, we compare the absolute errors with respect to the exact solution f* at different points y ∈ [−1, 1]. When the exact solution is not available, we consider the approximation obtained by Gauss quadrature with n = 512 points to be exact.
All the numerical experiments were performed in double precision on an Intel Core i7-2600 system (8 cores), running the Debian GNU/Linux operating system and Matlab R2019a.

Example 1 Let us consider the equation
where g(y) = (1/16)(8 cos 2 − 4 cos 4 − 4 sin 2 + sin 4) e^y cos y + cos(3y), in the space indicated. The exact solution is f*(y) = cos(3y). We report in Table 1 the approximation errors at two points of the solution domain, produced by the Gauss and anti-Gauss quadrature formulae, as well as by the averaged formula f̄_n, for n = 4, 8, 16. Since the kernel and the right-hand side are analytic functions, the Gauss and the anti-Gauss rules lead to errors of opposite sign and roughly the same absolute value. For this reason, the accuracy of the approximation furnished by the averaged formula greatly improves: three digits for n = 4 and five digits for n = 8. The machine precision is attained for n ≥ 16; when this happens, rounding errors may prevent the error from changing sign. Table 2 reports the condition number in infinity norm of the matrices A_n and Ã_{n+1} of linear systems (34) and (35), showing that they are extremely well-conditioned.
The graph on the left hand side of Fig. 1 displays the exact weighted solution and the Gauss, anti-Gauss, and averaged interpolants, when n = 2. With a larger number of nodes, the approximations are too close to the solution for the graph to be significant. It can be observed that, in this example, the Gauss error is positive on the whole interval, while the anti-Gauss one is negative. This fact is confirmed by the graph on the right hand side in the same figure, which reports a plot of the errors for n = 8. The averaged rule produces a solution which is very close to the exact solution even with such a small number of nodes.

Example 2 The second test integral equation is the following
which has a unique solution f* ∈ L^∞. As theoretically expected, the convergence is slower than in the previous case, because of the non-smoothness of the right-hand side. Nevertheless, Tables 3 and 4 numerically confirm the final statement in Theorem 3, as well as the fact that the condition number does not grow significantly with n. Moreover, the last column of Table 3 shows that the averaged formula provides up to 2 additional correct digits with respect to the approximations obtained by the Gauss and anti-Gauss rules. Figure 2 compares the three approximations obtained for n = 2 to the exact solution in the left-hand side graph, and reports the plot of the errors for n = 8 on the right. The last graph shows that, in this particular example, the errors corresponding to the Gauss and the anti-Gauss rules are always opposite in sign, but they do not keep a constant sign. The fact that the order of convergence is at least O(1/n^4), as predicted by Theorem 3 since the right-hand side belongs to W^∞_4, is illustrated in Fig. 3. The graph on the left shows the decay of the weighted infinity norm error (40) for the three quadrature methods, compared to the curve 1/n^4. The graph shows that the infinity norm errors of the Gauss and the anti-Gauss rules are almost coincident, and they decay faster than 1/n^4. The averaged rule is more accurate, but the order of convergence is the same.
To give numerical evidence for Theorem 5 and Corollary 1, we illustrate in Fig. 4 the assumptions (49) and (54), which in this case coincide. In the integral equation (55), we set the sample solution f*(x) = cos(x), and compute the coefficients α_i(y) in (45) by a high precision Gauss quadrature rule G_n(f) with n = 128. The coefficients, depicted in the graph on the left of Fig. 4, decay exponentially. For this reason, only those above machine precision were displayed, that is, α_i(y) with i = 0, 1, . . . , 33. Then, for fixed y = 0.3, the three summations in (49) were computed for n = 1, . . . , 15. We denoted them by R_n, R^a_n, and S_n, respectively. The graph on the right-hand side of Fig. 4 clearly shows that R_n and R^a_n are both smaller than S_n, and that the difference between these quantities increases with n, showing that the assumption of Theorem 5 is valid in this example. The situation is similar for other values of y in [−1, 1].

Example 3
In the final example, we apply our approach to the integral equation (56), to approximate the unique solution f* ∈ L^∞_u, with u(x) = (1 − x²)^{1/4}. From the non-smoothness of the kernel, it follows that the approximate solutions f_n and f̃_{n+1} converge to the exact solution f* with order at least O(1/n³). The theoretical expectation is confirmed by the numerical results reported in Tables 5 and 6. The order of convergence is illustrated by the graph on the right-hand side of Fig. 3.
In Fig. 5 we report for the integral equation (56) the coefficients α i (y) defined in (45) (graph on the left), as well as the summations from the assumption (49) of Theorem 5 (graph on the right), similarly to what we did in Fig. 4 for Example 2. Like in the previous example, we set the sample solution f * (x) = cos(x) in (56), and y = 0.3.
In this case, the coefficients α_i(y) decay slowly. The first 500 coefficients, computed by a Gauss quadrature rule with 1024 nodes, are displayed in the graph on the left-hand side of Fig. 5. From the graph on the right of the same figure, representing the summations from (49), it is clear that the assumption of Theorem 5 is not verified for each index n. For the sake of clarity, we reported only the first 20 values of R_n, R^a_n, and S_n, but the situation is similar for the remaining 228 we computed. Even if the assumption of the sufficient condition proved in the theorem is not valid here, Table 5 shows that the error of the Gauss and the anti-Gauss rules changes sign as well, for all the tests performed.
Thus, we can assert the stated equivalence. Then, setting x̃ = cos θ̃ ∈ [x_{j−1}, x_{j+1}], by applying (57) and the following relation from [20], we obtain the claim. We recall that C denotes a positive constant which may have a different value in different formulas. In order to prove (27), we start from the following expression for the weights [22, Theorem 2.1]: λ̃_j = 2⟨p_n, p_n⟩_w / (p_n(x̃_j) p̃′_{n+1}(x̃_j)).
Let us investigate the asymptotic behavior of the denominator. By (22), taking into account (59), we have, where the symbol denotes asymptotic equivalence, and β_n