Numerical treatment of the generalized Love integral equation

In this paper, the generalized Love integral equation is considered. In order to approximate its solution, a Nyström method based on a mixed quadrature rule is proposed. Such a rule is a combination of a product quadrature formula and a “dilation” quadrature formula. The stability and convergence of the described numerical procedure are discussed in suitable weighted spaces, and the efficiency of the method is shown by several numerical tests.


Introduction
In 1949, Love investigated for the first time a mathematical model describing the capacity of a circular plane condenser consisting of two identical coaxial discs, where f is the solution of the integral equations of the second kind (1.1), with ω = q/r a real positive parameter. He then proved that (1.1) has a unique, continuous, real, and even solution, which can be written analytically as in (1.2) in terms of suitable iterated kernels. From a numerical point of view, the methods [7,8,12,15,17] developed for the undisputedly most interesting case (i.e., when ω⁻¹ → 0) followed the very first methods [4,5,16,19,20] and the most recent ones [11], proposed for the case when ω = 1.
If ω⁻¹ → 0, the kernel function is "close" to being singular on the bisector x = y; such kernels belong to the so-called class of nearly singular kernels. Moreover, as Phillips noted in [16], for ω sufficiently large the left-hand side of (1.1), when the "−" sign is considered, becomes approximately zero, and therefore cannot coincide with the right-hand side of (1.1).
In [12], the authors presented a numerical approach based on a suitable transformation that moves the poles x = y ± iω⁻¹ away from the real axis. The numerical method produces very accurate results when ω⁻¹ is not too small, but poor ones as ω⁻¹ → 0.
Then, in order to get satisfactory errors also in this latter case, in [15] the author proposed to dilate the integration interval and to decompose it into N subintervals. Hence, the equation was reduced to an equivalent system of N integral equations, and a Nyström method based on a Gauss-Legendre quadrature formula was proposed for its numerical approximation. The approach produces a satisfactory order of convergence even if ω⁻¹ is small. However, the dimension of the structured linear system one needs to solve grows very large as ω⁻¹ decreases.
In [7], the authors improved the results given in [15] by using the same transformation as in [8], which takes into account the behavior of the unknown function shown in (1.2). Then, they followed the approach given in [15]; i.e., they wrote the integral as the sum of m new integrals, each approximated by means of an n-point Gauss-Legendre quadrature rule. In this way, they got a linear system of size nm which, multiplied by suitable diagonal matrices, is equivalent to a new linear system solved by a preconditioned conjugate gradient method, the coefficient matrix being symmetric, positive definite, and with a Toeplitz block structure.
In this paper, we consider the more general equation (1.3), where g is a known right-hand side, μ ∈ ℝ, and ω ∈ ℝ is positive. Such an equation includes (1.1) (in the case when g ≡ w ≡ 1 and μ = ±1/π) and, at the same time, the presence of the weight w covers the case when the unknown function has algebraic singularities at the endpoints of [−1, 1]. So, the method we propose and the theoretical results we prove can be applied not only in the special case of the Love equation but also in the general one in which the involved functions are singular at ±1. This general case could be of interest in the future for other possible physical models.
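The displayed equations are not reproduced in this copy. Based on the surrounding discussion (the poles x = y ± iω⁻¹ mentioned above, and the fact that (1.1) is recovered for g ≡ w ≡ 1 and μ = ±1/π), the generalized equation (1.3) presumably has the following form, recorded here only as a hedged reconstruction:

```latex
% Hedged reconstruction of (1.3); the kernel form is inferred from the
% stated pole locations x = y \pm i\omega^{-1}, not taken verbatim from
% the original.
f(y) + \mu \int_{-1}^{1} \frac{\omega}{1+\omega^{2}(x-y)^{2}}\,
       f(x)\, w(x)\, dx = g(y), \qquad y \in [-1,1],
```

with w a Jacobi weight; choosing w ≡ g ≡ 1 and μ = ±1/π would give back the classical Love equation (1.1).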
The method we propose is a Nyström method based on a mixed quadrature formula, namely a product rule whose coefficients are computed by means of a quadrature scheme. In fact, following an idea presented in [2,15], we approximate such coefficients by using a "dilation" quadrature formula that we prove to be stable and convergent. This idea, which consists of a preliminary dilation of the domain that in some sense "relaxes" the pathological behavior of the kernel of the integral, allows us to get very accurate results when ω is large, better than those in [7,15]. The proposed method, whose convergence and stability are proved in suitable weighted spaces, leads to a well-conditioned linear system whose dimension is greatly reduced with respect to the ones involved in [7,15]. The size of the system we solve does not depend on the magnitude of the parameter ω, and the error is of the order of the best polynomial approximation of the unknown function, independently of the value of ω. Moreover, we underline that, mutatis mutandis, the numerical approach could be applied to different Fredholm integral equations with other types of "nearly singular" kernels.
The outline of the paper is as follows. At first, in Section 2, we introduce the functional spaces in which we will analyze our method. In Section 3, we recall some well-known quadrature rules (Section 3.1), we study a "dilation" quadrature formula (Section 3.2), and we propose a new mixed quadrature scheme (Section 3.3) which is used in the Nyström method presented in Section 4. Section 5 shows the efficiency of the proposed procedure by means of several numerical tests, and finally, Section 6 contains the proofs of our main results.

Functional spaces and notations
Let us consider the Jacobi weight function σ with parameters γ, δ ≥ 0 and, in the case when γ, δ > 0, let us define C_σ as the space of those functions, continuous on (−1, 1), such that lim_{x→±1}(fσ)(x) = 0. In the case γ = 0 (respectively, δ = 0), C_σ consists of all functions which are continuous on (−1, 1] (respectively, [−1, 1)) and such that lim_{x→−1⁺}(fσ)(x) = 0 (respectively, lim_{x→1⁻}(fσ)(x) = 0). We equip the space C_σ with the weighted uniform norm ‖f‖_{C_σ} = ‖fσ‖_∞, and we remark that C_σ endowed with such a weighted norm is a Banach space. For smoother functions, we introduce the following Sobolev-type space W^r_σ, where the superscript (r) denotes the rth derivative of the function f, r is a positive integer, and ϕ(x) = √(1 − x²). We equip W^r_σ with the norm ‖f‖_{W^r_σ} = ‖fσ‖_∞ + ‖f^{(r)} ϕ^r σ‖_∞. Finally, for any p ∈ N₀ ∪ {∞}, we denote by C^p([−1, 1]) the set of all continuous functions having p continuous derivatives. Throughout the whole paper, we will denote by C a positive constant which may have different values in different formulas. We will write C ≠ C(a, b, . . .) in order to say that C is a positive constant independent of the parameters a, b, . . ., and C = C(a, b, . . .) to say that C depends on a, b, . . . . Moreover, if A, B > 0 are quantities depending on some parameters, we write A ∼ B if there exists a constant C ≠ C(A, B) such that C⁻¹ B ≤ A ≤ C B.

Classical quadrature formulae
In this subsection, we recall two well-known quadrature rules [10] and we mention the related convergence results which will be essential for our aims. Let us introduce the Jacobi weight of parameters α, β > −1, i.e., w(x) = v^{α,β}(x) = (1 − x)^α (1 + x)^β (3.1). Then, we recall the Gauss-Jacobi quadrature rule (3.3) [10], whose nodes are the zeros of the orthonormal Jacobi polynomial p_m(w, x) and where R_m stands for the remainder term. The latter can be estimated (see, for instance, [10]) in terms of the weighted error of best polynomial approximation of f ∈ C_σ, i.e., E_m(f)_σ = inf_{P ∈ P_m} ‖(f − P)σ‖_∞, where P_m denotes the set of all algebraic polynomials of degree at most m. In fact, for each f ∈ C_σ, if the parameters of the weights w and σ are such that 0 ≤ γ < α + 1 and 0 ≤ δ < β + 1, then (3.5) holds, where γ_m(w) is the positive leading coefficient of p_m(w, x) introduced in (3.2). Now, we introduce a product rule in order to evaluate integrals of the type ∫_{−1}^{1} f(x) h(x, y) w(x) dx, where f ∈ C_σ, h is a known kernel function, and w is the weight defined in (3.1). Such a rule reads as (3.6) [10], where E_m(f, y) denotes the remainder term and the coefficients {A_j}_{j=1}^m are defined as in (3.7), with l_j the jth fundamental Lagrange polynomial. About the stability and the convergence of the above formula, it is possible to prove, by using a well-known Nevai theorem (see, for instance, [10,13]), that if h satisfies condition (3.9) and the weights w and σ are such that their exponents satisfy the inequalities (3.10), then the rule (3.6) is stable and convergent and the convergence estimate (3.11) holds true.

Remark 3.1 Let us remark that the kernel function appearing in the Love equation (1.3) satisfies (3.9). Moreover, if α, β < 3/2, then the parameters γ and δ can also be chosen equal to zero.
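As a concrete illustration of the Gauss rule (3.3) in the Legendre case w = v^{0,0} ≡ 1 (a minimal Python sketch, not the authors' code; for general Jacobi parameters α, β one would use dedicated Gauss-Jacobi routines):

```python
import numpy as np

def gauss_legendre(f, m):
    """m-point Gauss rule for the weight w = v^{0,0} = 1, i.e., the
    Legendre instance of the Gauss-Jacobi formula (3.3).  The nodes are
    the zeros of the m-th Legendre polynomial; lam are the Christoffel
    numbers."""
    x, lam = np.polynomial.legendre.leggauss(m)
    return np.dot(lam, f(x))

# Smooth integrand: the error behaves like the best-approximation error
# E_{2m-1}(f)_sigma, hence decays faster than any power of 1/m.
exact = np.exp(1.0) - np.exp(-1.0)   # int_{-1}^{1} e^x dx
approx = gauss_legendre(np.exp, 10)
```

Already with m = 10 the rule reproduces this integral essentially to machine precision, in line with estimate (3.5).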

A dilation formula
In this subsection, we present a quadrature formula to approximate integrals of the type (3.13), where F is a given function, w is as in (3.1), and k(x, y, ω) is a known kernel which is close to being singular when ω⁻¹ → 0. This is the case of the kernel function appearing in the Love equation (1.3).
In order to construct such kind of formula, we follow the approach proposed in [2,15] for the unweighted case.
First, in order to "relax" the "too fast" behavior of the kernel function when ω grows, we introduce in (3.13) the change of variables x = η/ω, y = θ/ω, with η, θ ∈ [−ω, ω]. In this way, (3.13) is equivalent to an integral having the dilated domain of integration [−ω, ω]. Then, we split the new integration interval [−ω, ω] into S subintervals of size d = 2ω/S, getting (3.14). Now, we want to remap each integral into [−1, 1]. To this end, we introduce the invertible linear maps from [−1, 1] onto each subinterval and in (3.14) we make the corresponding change of variable. In this way, we get (3.16), where the u_i are the transformed weight functions and the k_i are the new kernel functions given in (3.18) or, equivalently, expressed in terms of the original kernel k. Let us underline the advantage of the change of variable introduced in (3.13) and of the splitting of the dilated interval [−ω, ω] into S parts of length d: the computation of the integral is reduced to the computation of a sum of integrals, and in each of them the kernel function k_i(x, ωy, ω) has complex poles sufficiently far from the real axis. In fact, the distance of these poles from the real axis is greater than or equal to 2/d.
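The steps above can be sketched numerically. The block below is a Python sketch under the assumption, used throughout our illustrations, that the kernel is k(x, y, ω) = ω/(1 + ω²(x − y)²) (consistent with poles at x = y ± iω⁻¹; the paper's kernel is not reproduced in this copy). It dilates the domain, splits [−ω, ω] into S = 2ω/d pieces, remaps each piece to [−1, 1], and applies a Gauss-Legendre rule on each piece:

```python
import numpy as np

def k(x, y, omega):
    # Assumed nearly singular model kernel, poles at x = y ± i/omega.
    return omega / (1.0 + omega**2 * (x - y)**2)

def dilation_rule(x0, omega, d=2.0, n=20):
    """Approximate int_{-1}^{1} k(x0, y, omega) dy via y = theta/omega,
    splitting [-omega, omega] into S = 2*omega/d pieces of length d and
    applying an n-point Gauss rule on each remapped piece."""
    S = int(round(2 * omega / d))
    t, lam = np.polynomial.legendre.leggauss(n)
    total = 0.0
    for i in range(S):
        a = -omega + i * d                  # left endpoint of the i-th piece
        theta = a + d * (t + 1.0) / 2.0     # affine map [-1,1] -> [a, a+d]
        total += (d / (2.0 * omega)) * np.dot(lam, k(x0, theta / omega, omega))
    return total

omega, x0 = 100.0, 0.3
# Closed form for this model kernel: arctan(w(1+x0)) + arctan(w(1-x0)).
exact = np.arctan(omega * (1 + x0)) + np.arctan(omega * (1 - x0))
approx = dilation_rule(x0, omega)
```

After the remapping, each integrand has poles at distance 2/d from [−1, 1], so with d = 2 a modest n per piece suffices even for large ω.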
By approximating each integral appearing in (3.16) by means of the Gauss-Jacobi quadrature rule (3.3), with u_i in place of w and k_i F_i instead of f, we obtain the "dilation" quadrature formula (3.19), where the last term is the remainder.
The next results state the stability of the previous formula and give an error estimate for its remainder term. We have experimentally found that an optimal choice of the subinterval size is d ∼ 2. We also remark that the rth derivative of the kernel function appearing in assumption (3.21) depends on the real parameter ω, and its asymptotic behavior (with respect to such a parameter) is like ω⁻¹.

A mixed quadrature formula
In this subsection, we propose the mixed quadrature rule which is essential for our method. It consists in applying the m-point product rule (3.6) to approximate the integral under consideration and in approximating the coefficients A_j of such a product rule (defined in (3.7) with h(x, y) = k(x, y, ω)) by means of the n-point "dilation" quadrature formula (3.19). The mixed quadrature formula then reads as (3.24), where E_m^n is the remainder term and the approximated coefficients are expressed in terms of k_i and u_i as in (3.18).

Theorem 3.2 If f ∈ C_σ and the kernel function k satisfies the conditions (3.9) and the assumptions given in Theorem 3.1, then the following error estimate holds true:
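A Python sketch of the mixed rule (again assuming the model kernel ω/(1 + ω²(x − y)²) and taking w ≡ 1, neither of which is taken verbatim from the paper): the product-rule coefficients A_j(y) = ∫ k(x, y, ω) l_j(x) dx are computed with the dilation rule, and, for a polynomial f of degree less than m, the result should match the dilation rule applied to the whole integrand:

```python
import numpy as np

def k(x, y, omega):
    # Assumed model kernel with poles at x = y ± i/omega.
    return omega / (1.0 + omega**2 * (x - y)**2)

def dilation_integral(g, omega, d=2.0, n=20):
    """'Dilation' rule for int_{-1}^{1} g(x) dx (cf. (3.19), with w = 1)."""
    S = int(round(2 * omega / d))
    t, lam = np.polynomial.legendre.leggauss(n)
    total = 0.0
    for i in range(S):
        eta = (-omega + i * d) + d * (t + 1.0) / 2.0  # i-th piece of [-omega, omega]
        total += (d / (2.0 * omega)) * np.dot(lam, g(eta / omega))
    return total

def mixed_rule(f, y0, omega, m=8):
    """Mixed rule (cf. (3.24)) with w = 1: each coefficient A_j(y0) is
    computed by the dilation rule, then sum_j A_j(y0) f(x_j) is formed."""
    xj, _ = np.polynomial.legendre.leggauss(m)
    def ell(j, x):  # j-th fundamental Lagrange polynomial on the nodes xj
        idx = [i for i in range(m) if i != j]
        num = np.prod([x - xj[i] for i in idx], axis=0)
        den = np.prod([xj[j] - xj[i] for i in idx])
        return num / den
    A = [dilation_integral(lambda x, j=j: k(x, y0, omega) * ell(j, x), omega)
         for j in range(m)]
    return sum(A[j] * f(xj[j]) for j in range(m))

omega, y0 = 100.0, 0.25
via_mixed = mixed_rule(lambda x: x**2, y0, omega)
direct = dilation_integral(lambda x: k(x, y0, omega) * x**2, omega)
```

Since f(x) = x² has degree ≤ m − 1, the only discrepancy between the two values comes from the (exponentially small) dilation-quadrature errors in the coefficients.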

Remark 3.3
Let us remark that if α, β < −1/2, then the parameters of the weight σ can also be chosen equal to zero. Moreover, in Theorem 3.2, for the sake of simplicity, we considered the case m = n. Nevertheless, in practice, in the numerical tests we can keep n fixed. Indeed, according to (3.23), the error decreases exponentially and, for instance, for n = 20, d = 2, and ω = 10², the quantity before the square brackets is of the order of 10⁻⁹⁸. Hence, the error of the mixed quadrature formula is, in practice, of the same order as the error of best approximation of f. We emphasize that the larger ω is, the faster the second term goes to zero. In any case, whatever the magnitude of ω, the first term of the estimate stated in Theorem 3.2 prevails over the second.
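The 10⁻⁹⁸ figure can be checked in logarithms. The sketch below assumes the reading γ_n(w) ∼ 2ⁿ together with a leading factor γ_n(w) (2d/ω)^{2n} / (2n)! (our reconstruction of the partially garbled estimate, not a formula taken verbatim from the paper):

```python
import math

# log10 of gamma_n(w) * (2d/omega)^(2n) / (2n)!  for n = 20, d = 2, omega = 100.
# gamma_n(w) ~ 2^n is the asymptotic of the leading coefficient recalled in
# the proofs; lgamma(2n + 1) = log((2n)!).
n, d, omega = 20, 2.0, 1.0e2
log10_factor = (n * math.log10(2.0)                       # gamma_n(w) ~ 2^n
                + 2 * n * math.log10(2.0 * d / omega)     # (2d/omega)^{2n}
                - math.lgamma(2 * n + 1) / math.log(10.0))  # divided by (2n)!
```

This evaluates to about −97.8, i.e., the factor is of the order of 10⁻⁹⁸, matching the remark.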

The numerical method
In this section, we propose a numerical method for the generalized Love integral equation (1.3), which can be rewritten in operator form as (4.1), where I is the identity operator and K is the integral operator defined in (4.2)-(4.3). The next proposition shows the mapping properties of the operator K. The proposed numerical strategy is a Nyström method based on the mixed quadrature formula K_m^n f introduced in (3.24). Then, we consider the functional equation (I + μK_m^n) f_m^n = g, (4.4) where f_m^n is unknown. We multiply both sides of (4.4) by the weight function σ and we collocate each equation at the points {ξ_i^w}_{i=1}^m. In this way, the quantities a_i = (f_m^n σ)(ξ_i^w) are the unknowns of the m × m linear system (4.5), where δ_ij is the Kronecker symbol. In terms of matrices, the system is A_m a = b, where A_m = I + μB_m, with I the identity matrix of order m. Once (4.5) is solved, its solution a* = [a_i*]_{i=1}^m allows us to construct the weighted Nyström interpolant (4.6), which approximates the unknown solution f ∈ C_σ. The next theorem states that the above-described Nyström method is stable and convergent, as well as that the condition number in the infinity norm of the matrix A_m, i.e., cond(A_m) = ‖A_m‖_∞ ‖A_m⁻¹‖_∞, is bounded by a constant which does not depend on m.
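To make the structure of system (4.5) concrete, here is a minimal classical Nyström sketch in Python for the model problem w ≡ σ ≡ 1, g ≡ 1, μ = 1/π, with the assumed kernel ω/(1 + ω²(x − y)²). For brevity it uses plain Gauss-Legendre coefficients λ_j k(x_j, y) in place of the paper's mixed-rule coefficients, which is adequate only for moderate ω:

```python
import numpy as np

def nystrom_love(omega, mu, m):
    """Assemble and solve the m x m collocation system (cf. (4.5)):
    a_i + mu * sum_j lam_j * k(x_j, x_i) * a_j = 1, i.e., A a = b with
    A = I + mu * B.  Plain Gauss-Legendre coefficients stand in for the
    paper's mixed-rule coefficients (adequate only for moderate omega)."""
    x, lam = np.polynomial.legendre.leggauss(m)  # quadrature = collocation nodes
    K = omega / (1.0 + omega**2 * (x[:, None] - x[None, :])**2)
    A = np.eye(m) + mu * K * lam[None, :]
    a = np.linalg.solve(A, np.ones(m))           # right-hand side g = 1
    return x, lam, A, a

omega, mu, m = 4.0, 1.0 / np.pi, 40              # classical Love, "+" sign
x, lam, A, a = nystrom_love(omega, mu, m)
```

The Nyström interpolant (4.6) is then recovered from a. Since the row sums μ Σ_j λ_j k(x_j, x_i) stay below 1 for this kernel, the matrix is well-conditioned and the computed values satisfy 0 < a_i < 1, with a even in x because g ≡ 1 and the kernel is symmetric.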
Remark 4.2 Let us remark that the first step of our method is the natural and immediate approximation of the integral by means of the mixed quadrature formula. In doing this, we "isolate" in some sense the parameter ω in the computation of the coefficients of the rule. The collocation is independent of ω, which produces the advantage that the size of the system does not depend on it, as instead happens in other methods available in the literature [7,15]. Moreover, as stated in estimate (4.7), the proposed global approximation method allows us to find the solution of the equation with a convergence order which is again independent of the magnitude of ω. Indeed, the error is of the order of the best polynomial approximation of f and, f being in W_σ^r, one has E_m(f)_σ ≤ (C/m^r) ‖f^{(r)} ϕ^r σ‖_∞. However, the function f naturally depends on ω, being the solution of an equation in which such a parameter appears (see also the analytical expression of the solution recalled in the "Introduction"). Consequently, the norm of the rth derivative (appearing on the right-hand side of the above estimate) depends on ω and becomes large for increasing values of ω. This is the only reason why, for a large value of ω, we need to increase the dimension of the system in order to get high precision (see Table 1).

Remark 4.3
Let us underline that the numerical method we propose leads to an unstructured linear system of order m in which each entry of the matrix involves S × n evaluations of a polynomial of degree m − 1. However, the fact that the matrix is unstructured does not cause any problem. In fact, as we can see in Section 5, the method produces very accurate results for small values of m and, as stated in the previous theorem, the matrix is also well-conditioned. This is also the reason why we do not propose a more sophisticated and better-performing numerical method for solving the linear system.

Numerical tests
In this section, we show by some numerical tests the performance of the method described in the previous section. Specifically, we first test the proposed approach on the classical Love integral equation (Example 5.1) and then we show its effectiveness on two other generalized Love equations (Examples 5.2 and 5.3). In all the numerical tests, the solution f is very smooth and we expect fast convergence according to estimate (4.7). We approximate the solutions of the test equations by means of the Nyström interpolants given by (4.6) and we compute the absolute errors (5.1). Moreover, in the tables, the average CPU time of the complete procedure (construction and solution of the linear system plus construction and evaluation of the Nyström interpolant) is given. The numerical evidence is that the needed CPU time depends linearly on m and ω. Indeed, when m is doubled, so is the CPU time. And if we look at the CPU times for ω = 10², 10³, 10⁴, it is clear that, for the same value of m, if t is the time spent in the case ω = 10², then it will be about 10t for ω = 10³ and 100t for ω = 10⁴.
All the numerical experiments were performed in double precision arithmetic on an Intel Core i7 system (4 cores), running the macOS operating system and using Matlab R2018a.
Example 5.1 Let us consider the classical Love integral equation (1.3) in the space C_σ with σ ≡ 1 and μ = 1/π. In Table 1, we report the results we get for different choices of ω. By comparing them with those presented in [7, Table 1, Table 3, and Table 5], we can see that, in the case when ω = 10², by solving a square system of m = 256 equations we get an error of the order of the machine precision (M.P.), instead of 10⁻⁵ as shown in [7, Table 1]. If ω = 10³, by solving a system of order 700, we get the machine precision, an accuracy that in [7, Table 3] is reached with a system of 16384 equations. Similarly, the method gives accurate results also in the case when ω = 10⁴. The first graph of Fig. 1 shows the approximated solution f_700^20 with ω = 10².
Example 5.2 Let us test our method on a generalized Love integral equation with ω = 10² and μ = 1/π. Table 2 shows the errors (5.1) that we get with σ(x) = v 4 (x) and μ = 1/π. Table 3 contains the accurate results we get also in this case, and the last graph of Fig. 1 shows the approximated solution f_512^20 σ.

Proofs
Proof of Theorem 3.1 First, let us prove the stability of the formula, i.e., estimate (3.20). Estimate (3.20) follows by taking into account the definition of k_i given in (3.18) and the first assumption on the kernel, and by considering that, by virtue of the assumptions on the parameters of the weights, the involved quantities are uniformly bounded. In order to prove (3.22), we first use (3.4) together with a well-known estimate for the Gauss-Jacobi remainder [10]. Then, taking into account that, by the assumptions, ‖k_i(·, ωy, ω)‖_∞ ≤ C, and by applying the Favard inequality (6.2) [10], once with the Jacobi weight v = 1 and then with v = σ, we deduce the desired bound. Now, let us note that the functions k_i defined in (3.18) can be rewritten in terms of the functions U_i defined in (6.3). Then, by applying the Leibniz rule, we get an expansion of the (2n)th derivative of k_i F_i in terms of the derivatives ∂^j/∂x^j k_i(x, ωy, ω), j = 0, . . . , 2n.

Numerical Algorithms
By the definitions (3.18) of the kernels k_i and taking into account the form of the functions U_i given in (6.3), we can write the required derivatives explicitly and, by [3] and the assumptions on the kernel k, bound their suprema over |x| ≤ 1. Thus, by replacing the above estimate in (6.4), we deduce that the remainder term of (3.19) is bounded by (1/(2n)!) γ_n(w) (2d/ω)^{2n} (‖F‖_∞ + ‖F^{(2n)}‖_∞). Therefore, by using the well-known Stirling bounds (n/e)^n √(2πn) e^{1/(12n+1)} ≤ n! ≤ (n/e)^n √(2πn) e^{1/(12n)}, and taking into account that γ_n(w) ∼ 2^n [10], we get the thesis. Finally, using the bound of order σ(x) log m stated in (6.5), the proof is completed.
By the definition (4.2) and taking into account the conditions on the parameters of the weights, we can bound ‖k(x, ·, ω)σ‖_∞, from which, by using (6.6), we deduce that the operator K is continuous and bounded. In order to prove its compactness, we recall [18] that if K satisfies the condition lim_{m→∞} sup_{‖f‖_{C_σ} ≤ 1} E_m(Kf)_σ = 0, (6.7) then K is compact. We note that |(Kf)^{(r)}(y)| (ϕ^r σ)(y) ≤ ‖fσ‖_∞ max_{|x|≤1} ‖∂^r/∂y^r k(x, ·, ω) ϕ^r σ‖. Hence, Kf ∈ W_σ^r for each f ∈ C_σ and, by using the Favard inequality (6.2), with m instead of n, Kf in place of H, and σ in place of v, we deduce (6.7).
Proof of Theorem 4.1. The goal of the proof is to show the invertibility of the operators I + μK_m^m and the uniform boundedness of their inverses. To this end, we can use the same arguments as in [1, p. 113], only replacing the usual infinity norm with the weighted uniform norm of C_σ. Finally, estimate (4.7) follows by taking into account that ‖(f − f_m^m)σ‖_∞ ≤ ‖(I + μK_m^m)^{−1}‖ ‖((K − K_m^m)f)σ‖_∞ and by applying Theorem 3.2 to the last term.