1 Introduction

Various problems of mathematical physics and engineering can be described by differential equations, which can often be reformulated as equivalent integral equations. For example, integral equations with singular kernels arise in practical applications such as potential problems, the Dirichlet problem, and radiative equilibrium [1]. In general, such equations can be written as

$$ au(x)+\frac{b}{\pi}\int_{-1}^{1} \frac{u(y)}{y-x}\,dy+\lambda\int_{-1}^{1}k(y,x)u(y) \,dy=f(x), \quad x\in(-1,1), $$
(1)

where a, b are real constants, \(f(x)\) and \(k(y,x)\) are given functions, the first integral in (1) is defined in the sense of the Cauchy principal value, and λ is not an eigenvalue. Many numerical methods have been developed to solve SIE (1); collocation methods, Galerkin methods, spectral methods, etc. [2-11] have been widely used for these kinds of problems for many years. Recently, Xiang and Brunner [12, 13] introduced collocation and discontinuous Galerkin methods for Volterra integral equations with highly oscillatory Bessel kernels, and they concluded that the collocation methods are much more easily implemented and achieve higher accuracy than discontinuous Galerkin methods on the same piecewise polynomial space. Volterra integral equations with oscillatory trigonometric kernels are studied in [14, 15], which show that the convergence order of collocation on graded meshes is not necessarily better than that on uniform meshes when the kernels are oscillatory trigonometric. Numerical solutions of Fredholm integral equations are discussed in [16, 17].

In general, approximating the solution of an SIE by numerical methods gives rise to a standard linear system of equations. In particular, collocation methods lead to systems with dense, nonsymmetric coefficient matrices in which \(O(N^{2})\) elements must be stored, with N being the number of degrees of freedom. Solving such systems directly requires \(O(N^{3})\) arithmetic operations. Fortunately, Rokhlin and Greengard introduced the fast multipole method (FMM), which has been widely used for solving large scale engineering problems such as potential, elastostatic, Stokes flow, and acoustic wave problems. In one dimension (SIE (1)), the interval \([-1,1]\) is not a closed curve; nevertheless, we can still use the BEM to solve the SIE and accelerate it by the FMM when the kernel \(k(y,x)\) admits a multipole expansion or \(k(y,x)= 0\); for details, see [18-20].

In this paper, we are concerned with the evaluation of SIE

$$ u(x)+\lambda\int_{-1}^{1} \frac{u(y)}{(y-x)^{m}}\,dy=f(x),\quad x\in(-1,1), $$
(2)

where \(m\in\mathbb{Z}\), \(m\geq1\), \(u(x)\) is an unknown function, and \(f(x)\) is a given function. The integral in (2) is defined in the sense of the Cauchy principal value for \(m=1\) and of the Hadamard finite-part integral for \(m>1\). For simplicity, we write SIE (2) as

$$(I+\lambda K)u=f. $$

We approximate the solution of the SIE by collocation methods and utilize the FMM to improve the efficiency of the algorithm. The paper is organized as follows. In Section 2 we give a brief description of the FMM: the multipole expansion of the kernel is introduced, together with the moment-to-moment (M2M), moment-to-local (M2L), and local-to-local (L2L) translations. In Section 3 we give the convergence analysis of the proposed method. In Section 4 we give preliminary numerical examples to illustrate the effectiveness and accuracy of the proposed method.

2 Fast multipole boundary element method for the solution of (2)

In this section, we recall some basic formulations of the fast multipole boundary element method. In order to solve an SIE numerically for the unknown function, we first discretize it. We divide the interval \((-1,1)\) into N segments \((x_{j-1},x_{j})\), \(j=1,\ldots, N\), with \(-1=x_{0}<x_{1}<\cdots<x_{N}=1\), and use the piecewise constant collocation method [16, 21]; then SIE (2) becomes

$$ u(\tilde{x}_{i})+\lambda\sum _{j=1}^{N}\int_{x_{j-1}}^{x_{j}} \frac {u(y)}{(y-\tilde{x}_{i})^{m}}\,dy=f(\tilde{x}_{i}),\quad \tilde{x}_{i} \in(-1,1). $$
(3)

Denoting \(A=(a_{ij})\) with \(a_{ij}=\lambda\int_{x_{j-1}}^{x_{j}}\frac {1}{(y-\tilde{x}_{i})^{m}}\,dy+\delta_{ij}\), \(\mathbf{b}=[f(\tilde {x}_{1}),\ldots,f(\tilde{x}_{N})]^{T}\), and \(\mathbf{u}=[u(\tilde{x}_{1}),\ldots, u(\tilde{x}_{N})]^{T}\), we obtain a standard linear system of equations

$$A\mathbf{u}=\mathbf{b}. $$

Because of the dense matrix-vector multiplication, solving this system by iterative solvers such as the generalized minimal residual (GMRES) method requires \(O(N^{2})\) operations per iteration, and direct solvers such as Gaussian elimination are even more expensive at \(O(N^{3})\). Based on the multipole expansion of the kernel, the FMM can be used to accelerate the matrix-vector multiplication.
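To make the cost concrete, here is a minimal Python sketch of the dense collocation system for the case \(m=1\) on a uniform midpoint mesh, solved by GMRES; the right-hand side \(f\equiv1\) and all names are illustrative choices of ours, not part of the method description.

```python
# Minimal sketch: dense piecewise constant collocation for SIE (2) with
# m = 1 on a uniform mesh, solved by GMRES. Illustrative f; names are ours.
import numpy as np
from scipy.sparse.linalg import gmres

def assemble(N, lam, f):
    """Build A u = b for u(x) + lam * PV int_{-1}^{1} u(y)/(y-x) dy = f(x)."""
    x = np.linspace(-1.0, 1.0, N + 1)         # segment endpoints x_0,...,x_N
    xt = 0.5 * (x[:-1] + x[1:])               # midpoint collocation points
    A = np.eye(N)                             # O(N^2) storage
    for i in range(N):
        for j in range(N):
            if j != i:
                A[i, j] = lam * np.log(abs((x[j + 1] - xt[i]) / (x[j] - xt[i])))
            # j == i: the principal value over the own segment vanishes,
            # because xt[i] is the midpoint of (x[i], x[i+1])
    return A, f(xt)

A, b = assemble(200, -1.0 / (2.0 * np.pi), lambda x: np.ones_like(x))
u, info = gmres(A, b)                         # each iteration costs O(N^2)
```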

2.1 Multipole expansion of the kernel

To derive the multipole expansion of kernel \(G(y,x):=\frac {1}{(y-x)^{m}}\), we need the following formulation.

Lemma 1

([22])

Let a be any real or complex constant and let \(\vert z\vert <1\). Then we have

$$ (1+z)^{a}=1+\frac{a}{1!}z+\frac{a(a-1)}{2!}z^{2}+ \frac {a(a-1)(a-2)}{3!}z^{3}+\cdots. $$
(4)

Applying Lemma 1 with \(z=\frac{y-y_{c}}{y_{c}-x}\), the kernel \(G(y,x)\) can be expressed by the following multipole expansion, valid whenever \(\vert y-y_{c}\vert <\vert y_{c}-x\vert \):

$$\begin{aligned} G(y,x) =&(y_{c}-x)^{-m}\biggl(1+\frac{y-y_{c}}{y_{c}-x} \biggr)^{-m} \\ =& (y_{c}-x)^{-m} \biggl(1+\frac{-m}{1!}z+ \frac{-m(-m-1)}{2!}z^{2}+\frac {-m(-m-1)(-m-2)}{3!}z^{3}+\cdots\biggr) \\ =&\sum_{k=0}^{\infty} O_{k}(y_{c}-x)I_{k}(y-y_{c}), \end{aligned}$$
(5)

where

$$\begin{aligned}& I_{k}(x)=\frac{x^{k}}{k!} , \quad k\geq0, \end{aligned}$$
(6)
$$\begin{aligned}& O_{k}(x)= \textstyle\begin{cases} x^{-m},&k=0,\\ x^{-m-k}(-m)(-m-1)\cdots(-m-k+1),&k\geq1. \end{cases}\displaystyle \end{aligned}$$
(7)

In addition, we have the following two results:

$$\begin{aligned} I_{k}(x_{1}+x_{2}) =&\sum _{l=0}^{k} I_{l}(x_{1})I_{k-l}{(x_{2})}= \sum_{l=0}^{k}I_{l}{(x_{2})}I_{k-l}(x_{1}), \end{aligned}$$
(8)
$$\begin{aligned} O_{k}(x_{1}+x_{2}) =&\sum _{l=0}^{\infty}O_{k+l}(x_{1})I_{l}(x_{2}), \quad \vert x_{1}\vert >\vert x_{2}\vert . \end{aligned}$$
(9)
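As a quick numerical sanity check of the expansion (5)-(7), the following short sketch (ours; the sample values of m, y, x, \(y_{c}\) are arbitrary points satisfying \(\vert y-y_{c}\vert <\vert y_{c}-x\vert \)) compares the truncated series with the kernel evaluated directly.

```python
# Check of the multipole expansion (5): for |y - y_c| < |y_c - x| the
# truncated series converges to 1/(y - x)^m. Sample points are illustrative.
import math

def I(k, x):                        # I_k(x) = x^k / k!, eq. (6)
    return x**k / math.factorial(k)

def O(k, x, m):                     # O_k(x), eq. (7)
    c = 1.0
    for j in range(k):              # (-m)(-m-1)...(-m-k+1)
        c *= -(m + j)
    return c * x**(-m - k)

m, y, x, yc, p = 2, 0.30, 2.0, 0.25, 20
approx = sum(O(k, yc - x, m) * I(k, y - yc) for k in range(p + 1))
print(approx, 1.0 / (y - x)**m)     # agree to machine precision here
```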

The integral in (3) is now evaluated as follows:

$$\begin{aligned} \int_{x_{j-1}}^{x_{j}} G(y, \tilde{x}_{i})u(y)\,dy = & \sum_{k=0}^{\infty }O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c}), \end{aligned}$$
(10)

where \(M_{k}(y_{c})=\int_{x_{j-1}}^{x_{j}}I_{k}(y-y_{c})u(y)\,dy\) are called moments about \(y_{c}\), which are independent of the collocation point \(\tilde{x}_{i}\).

If the expansion point \(y_{c}\) is moved to a new location \(y_{c'}\), the new moments are defined by (6) as

$$\begin{aligned} M_{k}(y_{c'})=\int_{x_{j-1}}^{x_{j}}I_{k}(y-y_{c'})u(y) \,dy, \end{aligned}$$

which, by the identity (8), leads to

$$\begin{aligned} M_{k} (y_{c'})=\sum _{l=0}^{k} I_{k-l}(y_{c}-y_{c'})M_{l} (y_{c}). \end{aligned}$$
(11)

We call this formula moment-to-moment translation (M2M).
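The following sketch (ours; the segment, density, and expansion centers are illustrative) verifies the M2M formula (11) numerically; since (11) is an exact finite identity, the two sides agree to roundoff.

```python
# Numerical check of the M2M translation (11): moments shifted from y_c to
# y_c' agree with moments computed directly about y_c'.
import math
import numpy as np

def I(k, x):                                  # I_k(x) = x^k / k!, eq. (6)
    return x**k / math.factorial(k)

a, b, yc, ycp = 0.1, 0.3, 0.2, 0.5            # segment (a, b), old/new centers
y = np.linspace(a, b, 4001)
ym, w = 0.5 * (y[:-1] + y[1:]), np.diff(y)    # midpoint quadrature rule
u = np.exp(ym)                                # illustrative density

def moment(k, c):                             # M_k(c) by quadrature
    return np.sum(I(k, ym - c) * u * w)

p = 8
M_old = [moment(k, yc) for k in range(p + 1)]
M2M = [sum(I(k - l, yc - ycp) * M_old[l] for l in range(k + 1))
       for k in range(p + 1)]
err = max(abs(M2M[k] - moment(k, ycp)) for k in range(p + 1))
print(err)                                    # near machine precision
```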

On the other hand, if \(\vert y_{c}-x_{l}\vert >\vert x_{l}-\tilde{x}_{i}\vert \), then (10) can be rewritten as

$$\begin{aligned} \int_{x_{j-1}}^{x_{j}} G(y, \tilde{x}_{i})u(y)\,dy = & \sum_{k=0}^{\infty }O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c}) \\ =&\sum_{k=0}^{\infty}O_{k}(y_{c}-x_{l}+x_{l}- \tilde{x}_{i})M_{k}(y_{c}) \\ =&\sum_{k=0}^{\infty}L_{k}(x_{l})I_{k}(x_{l}- \tilde{x}_{i}) \end{aligned}$$
(12)

with

$$\begin{aligned} L_{k}(x_{l})=\sum _{n=0}^{\infty}O_{k+n}(y_{c}-x_{l})M_{n}(y_{c}), \end{aligned}$$
(13)

where \(L_{k}(x_{l})\) is the k-th local expansion coefficient about \(x_{l}\). We call formula (13) the moment-to-local translation (M2L).

If the center of the local expansion is moved from \(x_{l}\) to \(x_{l'}\), then, using a local expansion with p terms, we obtain the translation from

$$\begin{aligned} \int_{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y) \,dy =&\sum_{k=0}^{p}L_{k}(x_{l})I_{k}(x_{l}- \tilde{x}_{i}) \\ =&\sum_{k=0}^{p} L_{k}(x_{l})I_{k}(x_{l}-x_{l'}+x_{l'}- \tilde{x}_{i}) \\ =&\sum_{k=0}^{p} L_{k}(x_{l}) \sum_{j=0}^{k} I_{k-j}(x_{l}-x_{l'})I_{j}(x_{l'}- \tilde{x}_{i}), \end{aligned}$$

which, after collecting the coefficients of \(I_{k}(x_{l'}-\tilde{x}_{i})\), leads to

$$\begin{aligned} L_{k} (x_{l'})=\sum _{j=k}^{p} I_{j-k}(x_{l}-x_{l'})L_{j} (x_{l}). \end{aligned}$$
(14)

We call this formula local-to-local translation (L2L).
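The following sketch (ours; all points are illustrative and satisfy the separation conditions) chains the M2L formula (13) and the L2L formula (14) for \(m=1\) and compares the result with direct evaluation of the far-field integral.

```python
# Check of M2L (13) and L2L (14) for m = 1: the far-field integral over a
# source segment (a, b) is reproduced via moments -> local coefficients at
# x_l -> shifted local coefficients at x_l'. All points are illustrative.
import math
import numpy as np

m, p = 1, 25
def I(k, x): return x**k / math.factorial(k)
def O(k, x):
    c = 1.0
    for j in range(k): c *= -(m + j)
    return c * x**(-m - k)

a, b, yc = -0.6, -0.4, -0.5                   # source segment and its center
xl, xlp, xt = 0.5, 0.55, 0.6                  # local centers and target point
y = np.linspace(a, b, 4001)
ym, w = 0.5 * (y[:-1] + y[1:]), np.diff(y)    # midpoint quadrature
u = 1.0 + ym**2                               # illustrative density

M = [np.sum(I(k, ym - yc) * u * w) for k in range(p + 1)]
L_old = [sum(O(k + n, yc - xl) * M[n] for n in range(p + 1))
         for k in range(p + 1)]               # M2L, eq. (13)
L_new = [sum(I(j - k, xl - xlp) * L_old[j] for j in range(k, p + 1))
         for k in range(p + 1)]               # L2L, eq. (14)
local = sum(L_new[k] * I(k, xlp - xt) for k in range(p + 1))
print(local, np.sum(u * w / (ym - xt)**m))    # agree up to truncation error
```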

2.2 Evaluation of the integrals

The multipole expansion is used to evaluate the integrals over segments far away from the collocation point, whereas the direct approach is applied to the integrals over the remaining segments close to the collocation point. In the following, we are concerned with the piecewise constant collocation method for (2), for which we can evaluate the moments analytically:

$$\begin{aligned} M_{k}(y_{c}) =&\int_{x_{j-1}}^{x_{j}}I_{k}(y-y_{c})u(y) \,dy \\ =&u_{j}\int_{x_{j-1}}^{x_{j}} \frac{(y-y_{c})^{k}}{k!}\,dy \\ =&u_{j}\bigl(I_{k+1}(x_{j}-y_{c})-I_{k+1}(x_{j-1}-y_{c}) \bigr). \end{aligned}$$
(15)
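A short check of the closed form (15) (ours, with \(u_{j}=1\) and an illustrative segment and center):

```python
# Check of the closed form (15) for the moments of a piecewise constant
# density (u_j = 1 here; the segment and center are illustrative).
import math
import numpy as np

def I(k, x): return x**k / math.factorial(k)

a, b, yc, k = 0.1, 0.3, 0.2, 5
closed = I(k + 1, b - yc) - I(k + 1, a - yc)  # eq. (15) with u_j = 1
y = np.linspace(a, b, 20001)
ym, w = 0.5 * (y[:-1] + y[1:]), np.diff(y)
print(closed, np.sum(I(k, ym - yc) * w))      # agree to quadrature accuracy
```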

When the integration interval \((x_{j-1},x_{j})\) is close to the collocation point \(\tilde{x}_{i}\) but does not coincide with the interval \((x_{i-1},x_{i})\), the integral is not singular and can be evaluated analytically:

$$\begin{aligned} \int_{x_{j-1}}^{x_{j}}\frac{u_{j}}{(y-\tilde{x}_{i})^{m}}\,dy= \textstyle\begin{cases} u_{j}\ln \vert y-\tilde{x}_{i}\vert \big|_{x_{j-1}}^{x_{j}},& m=1,\\ \frac{u_{j}}{1-m}(y-\tilde{x}_{i})^{1-m}\big|_{x_{j-1}}^{x_{j}},&m>1. \end{cases}\displaystyle \end{aligned}$$
(16)

For the integration interval \((x_{i-1},x_{i})\), where \(\tilde{x}_{i}\in (x_{i-1}, x_{i})\), we can evaluate the singular integral analytically by

$$\begin{aligned} \int_{x_{i-1}}^{x_{i}}\frac{u_{i}}{(y-c)^{m}}\,dy= \textstyle\begin{cases} u_{i}\ln(\frac{x_{i}-c}{c-x_{i-1}}),& m=1,\\ u_{i}\frac{1}{(m-1)!}\frac{d^{m-1}}{dc^{m-1}}\int_{x_{i-1}}^{x_{i}}\frac{1}{y-c}\,dy,&m> 1, \end{cases}\displaystyle \end{aligned}$$
(17)

where \(x_{i-1}< c< x_{i}\).
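The \(m=1\) case of (17) can be checked numerically (a sketch of ours, with an illustrative segment): the part of the interval symmetric about c cancels exactly in the principal value, leaving a regular integral.

```python
# Quick check of the Cauchy principal value formula in (17) for m = 1
# over a singular segment (a, b) containing c, with u_i = 1.
import math
import numpy as np

a, b, c = 0.0, 0.2, 0.05
analytic = math.log((b - c) / (c - a))        # = ln 3 here

# numerical PV: the part of (a, b) symmetric about c cancels exactly,
# leaving a regular integral over (2c - a, b)
y = np.linspace(2 * c - a, b, 100001)
ym, w = 0.5 * (y[:-1] + y[1:]), np.diff(y)
print(analytic, np.sum(w / (ym - c)))         # agree to ~1e-10
```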

3 Convergence analysis

In this section, we derive the error bound for (10) when \(m=1\).

Lemma 2

([23])

The Cauchy integral operator \(K: C^{0,\alpha}(-1,1)\rightarrow C^{0,\alpha}(-1,1)\) defined by

$$ (Ku) (z):=\frac{1}{\pi i}\int_{-1}^{1} \frac{u(y)}{y-z}\,dy,\quad z\in(-1,1) $$
(18)

is bounded, where \(C^{0,\alpha}(-1,1)\) denotes the space of Hölder continuous functions on \((-1,1)\) with exponent α.

Lemma 3

([24])

Suppose that \(f(z)\) is analytic in \(\vert z\vert < R\); then the Taylor series of \(f(z)\) converges absolutely in \(\vert z\vert < R\), and we have

$$\begin{aligned} \vert R_{n}\vert \leq& \frac{1}{2\pi}\oint_{c} \frac{\vert f(t)\vert \vert \,dt\vert }{\vert t\vert (1-\vert \frac {z}{t}\vert )}\biggl\vert \frac{z}{t}\biggr\vert ^{n} \\ \leq& M_{1}\biggl(\frac{r}{R}\biggr)^{n}, \end{aligned}$$

where \(r=\vert z\vert \), c is a circle centered at the origin with radius R, and \(R_{n}\) is the remainder of the Taylor series.

Theorem 1

If we apply a multipole expansion with p terms and suppose that \(\vert y_{c}-\tilde{x}_{i}\vert /h\geq2\), we have the following error bound:

$$\begin{aligned} E_{M}^{p} =&\Biggl\vert \int_{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y)\,dy- \sum_{k=0}^{p}O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c})\Biggr\vert \\ \leq&C \frac{1}{h}\biggl(\frac{1}{2}\biggr)^{p+1}, \end{aligned}$$
(19)

where \(h=\max_{y\in(x_{j-1},x_{j})}\vert y-y_{c}\vert \), \(C=\int_{x_{j-1}}^{x_{j}}\vert u(y)\vert \,dy\).

Proof

$$\begin{aligned} E_{M}^{p} =&\Biggl\vert \int_{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y)\,dy- \sum_{k=0}^{p}O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c})\Biggr\vert \\ =& \Biggl\vert \sum_{k=p+1}^{\infty}O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c})\Biggr\vert \\ \leq&C\sum_{k=p+1}^{\infty} \bigl\vert O_{k}(y_{c}-\tilde{x}_{i})\bigr\vert \frac{h^{k}}{k!} \\ \leq&C\sum_{k=p+1}^{\infty}{C_{m+k-1}^{k}} \frac{h^{k}}{\vert y_{c}-\tilde{x}_{i}\vert ^{m+k}}, \end{aligned}$$
(20)

where \(\vert y-y_{c}\vert \leq h\) and \(C_{m+k-1}^{k}\) denotes the binomial coefficient.

For \(m=1\), we have \(C_{m+k-1}^{k}=1\), so that

$$\begin{aligned} E_{M}^{p}\leq C \frac{h^{p+1}}{\vert y_{c}-\tilde{x}_{i}\vert ^{p+2}}\frac {1}{1-h/\vert y_{c}-\tilde{x}_{i}\vert }. \end{aligned}$$
(21)

Let \(\eta=\vert y_{c}-\tilde{x}_{i}\vert /h\), then the preceding error bound can be written as

$$\begin{aligned} E_{M}^{p}\leq C \frac{1}{h(\eta-1)}\biggl(\frac{1}{\eta} \biggr)^{p+1}, \end{aligned}$$
(22)

Since \(\eta\geq2\), we have \(\frac{1}{\eta-1}\leq1\) and \(\frac{1}{\eta}\leq\frac{1}{2}\), which establishes the desired error bound. □
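The decay rate of Theorem 1 can be observed numerically; in the following sketch (ours, with \(m=1\) and illustrative points giving \(\eta=4\)) the truncation error shrinks by roughly \(\eta^{2}=16\) each time p increases by 2, until the quadrature floor is reached.

```python
# Illustration of Theorem 1 for m = 1: the truncation error of the p-term
# expansion decays like (1/eta)^(p+1), eta = |y_c - x_i|/h.
import math
import numpy as np

def I(k, x): return x**k / math.factorial(k)
def O1(k, x): return (-1)**k * math.factorial(k) * x**(-k - 1)   # m = 1

a, b, yc, xt = -0.1, 0.1, 0.0, 0.4            # h = 0.1, so eta = 4
y = np.linspace(a, b, 20001)
ym, w = 0.5 * (y[:-1] + y[1:]), np.diff(y)
u = np.cos(ym)                                # illustrative density
direct = np.sum(u * w / (ym - xt))
for p in (2, 4, 8, 16):
    M = [np.sum(I(k, ym - yc) * u * w) for k in range(p + 1)]
    approx = sum(O1(k, yc - xt) * M[k] for k in range(p + 1))
    print(p, abs(approx - direct))            # shrinks roughly like (1/4)^p
```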

Theorem 2

If we apply the multipole expansion and the local expansion with p terms and suppose that \(\vert y_{c}-\tilde{x}_{i}\vert /h\geq2\), then we have the following error bound:

$$\begin{aligned} E_{L}^{p} =&\Biggl\vert \int_{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y)\,dy- \sum_{k=0}^{p}L_{k}(x_{l})I_{k}(x_{l}- \tilde{x}_{i})\Biggr\vert \\ \leq& C M\biggl(\frac{1}{2}\biggr)^{p+1}, \end{aligned}$$
(23)

where C is the constant of Theorem 1 and M is specified at the end of the proof.

Proof

From (12) and (13), we have

$$\begin{aligned} E_{L}^{p} =&\Biggl\vert \int _{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y)\,dy- \sum_{k=0}^{p}L_{k}(x_{l})I_{k}(x_{l}- \tilde{x}_{i})\Biggr\vert \\ \leq&\Biggl\vert \int_{x_{j-1}}^{x_{j}} G(y, \tilde{x}_{i})u(y)\,dy-\sum_{k=0}^{p}O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c})\Biggr\vert \\ &{}+\Biggl\vert \sum_{k=0}^{p} \bigl(O_{k}(y_{c}-\tilde{x}_{i})- \tilde{O}_{k}(y_{c}-\tilde {x}_{i}) \bigr) M_{k}(y_{c})\Biggr\vert , \end{aligned}$$
(24)

where \(\tilde{O}_{k}(y_{c}-\tilde{x}_{i})=\sum_{n=0}^{p}O_{k+n}(y_{c}-x_{l})I_{n}(x_{l}-\tilde{x}_{i})\).

Since, for \(m=1\),

$$\begin{aligned} O_{k}(y_{c}-\tilde{x}_{i})= \frac{(-1)^{k}k!}{(y_{c}-\tilde{x}_{i})^{k+1}}=\frac {(-1)^{k}k!}{(y_{c}-x_{l})^{k+1}}\biggl(1+\frac{x_{l}-\tilde{x}_{i}}{y_{c}-x_{l}} \biggr)^{-k-1}, \end{aligned}$$
(25)

it follows that

$$\begin{aligned} \vert R_{p+1}\vert =&\bigl\vert O_{k}(y_{c}- \tilde{x}_{i})-\tilde{O}_{k}(y_{c}-\tilde {x}_{i})\bigr\vert =\Biggl\vert \sum_{n=p+1}^{\infty}O_{k+n}(y_{c}-x_{l})I_{n}(x_{l}- \tilde{x}_{i}) \Biggr\vert , \end{aligned}$$
(26)

which is the remainder of the Taylor series expansion in (25).

Define \(g(z)=(1+z)^{-k-1}\); then \(g(z)\) is analytic in \(\vert z\vert <1\). Since \(x_{l}\) and \(y_{c}\) are well-separated points, we may assume \(\vert \frac{x_{l}-\tilde{x}_{i}}{y_{c}-x_{l}}\vert <1/2\). By Lemma 3 we can estimate the remainder \(R_{p+1}\) of the Taylor series as follows:

$$\begin{aligned} \vert R_{p+1}\vert \leq M_{1}\biggl\vert \frac{k!}{|(y_{c}-x_{l})|^{k+1}}\biggr\vert \biggl(\frac{1}{2}\biggr)^{p+1}, \end{aligned}$$
(27)

where \(M_{1}\) is a constant determined by the function \(g(z)\).

By Theorem 1, (27) and (24), we have

$$\begin{aligned} E_{L}^{p}\leq C \frac{1}{h}\biggl(\frac{1}{2} \biggr)^{p+1}+(p+1)CM_{1}C_{1}\biggl( \frac {1}{2}\biggr)^{p+1}, \end{aligned}$$
(28)

where \(C_{1}=\max\{ \frac{h^{k}}{\vert y_{c}-x_{l}\vert ^{k+1}}, k=0,\ldots,p\}\).

The desired error bound is established by setting \(M= \frac{1}{h}+(p+1)M_{1}C_{1}\). □

Remark 1

In the FMM, \(x_{l}\) and \(y_{c}\) are well-separated points, so that \(\vert y_{c}-x_{l}\vert > 2h\), which implies that \(C_{1}\) is bounded.

When we use the FMM to solve SIE (3), the integral operator K is approximated by its multipole expansion. If we denote by \(K_{p}\) the approximate operator used in the collocation method, then Theorems 1 and 2 indicate that

$$\begin{aligned} \lim_{p\rightarrow\infty} \Vert K_{p}-K\Vert =0. \end{aligned}$$
(29)

It follows from Lemma 2 and (29) that the operator \(I+\lambda K_{p}\) is bounded.

4 Numerical examples

We illustrate the efficiency and accuracy of the methods described in this paper by numerical examples. Here \(\hat{u}_{N}\) denotes the piecewise constant collocation solution and N is the number of collocation points. We choose a uniform mesh, take \(\tilde{x}_{i}\) as the midpoint of \((x_{i-1},x_{i})\), and denote by \(I_{h}^{N}=\{\tilde{x}_{i},i=1,\ldots,N\}\) the set of collocation points.

Example 4.1

We consider

$$u(x)-\frac{1}{2\pi}\int_{-1}^{1} \frac{u(y)}{(y-x)^{m}}\,dy=f(x),\quad \vert x\vert < 1, $$

where

$$f(x)=1-\frac{1}{2\pi}\int_{-1}^{1} \frac{1}{(y-x)^{m}}\,dy, $$

and for \(m=1\) or \(m=2\),

$$u(x)=1 $$

is the exact solution of the equation.
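For reference, a direct dense collocation solver (without the FMM acceleration) for this example with \(m=1\) can be sketched as follows; since the exact solution \(u\equiv1\) lies in the piecewise constant trial space, the computed error is near roundoff.

```python
# Direct dense collocation (no FMM) for Example 4.1 with m = 1, checking
# the computed solution against the exact solution u = 1.
import numpy as np

N = 100
x = np.linspace(-1.0, 1.0, N + 1)
xt = 0.5 * (x[:-1] + x[1:])
lam = -1.0 / (2.0 * np.pi)
# f(x) = 1 - (1/2pi) PV int dy/(y-x) = 1 - (1/2pi) ln((1-x)/(1+x))
b = 1.0 + lam * np.log((1.0 - xt) / (1.0 + xt))
A = np.eye(N)                                 # PV over the own segment is 0
for i in range(N):
    for j in range(N):
        if j != i:
            A[i, j] = lam * np.log(abs((x[j + 1] - xt[i]) / (x[j] - xt[i])))
u = np.linalg.solve(A, b)
# u = 1 lies in the piecewise constant space, so the error is near roundoff
print(np.max(np.abs(u - 1.0)))
```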

Tables 1-4 illustrate the efficiency and accuracy of the methods.

Table 1 Approximations at \(\pmb{x=-0.9,-0.5,-0.1}\) for \(\pmb{u(x)-\frac{1}{2\pi}\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)
Table 2 Approximations at \(\pmb{x=0.1,0.5,0.9}\) for \(\pmb{u(x)-\frac{1}{2\pi}\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)
Table 3 Approximations at \(\pmb{x=-0.9,-0.5,-0.1}\) for \(\pmb{u(x)-\frac{1}{2\pi}\int_{-1}^{1}\frac{u(y)}{(y-x)^{2}}\,dy=f(x)}\)
Table 4 Approximations at \(\pmb{x=0.1,0.5,0.9}\) for \(\pmb{u(x)-\frac{1}{2\pi}\int_{-1}^{1}\frac{u(y)}{(y-x)^{2}}\,dy=f(x)}\)

From Tables 1-4, it is easy to see that the proposed method is effective. Note also that the evaluation points \(\{-0.9,-0.5,-0.1,0.1, 0.5,0.9 \}\) form a subset of \(I_{h}^{10}\) but not of \(I_{h}^{100}\) or \(I_{h}^{1000}\); that is, they are collocation points only for \(N=10\).

Example 4.2

Let us consider

$$u(x)-\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy =f(x), \quad \vert x\vert < 1, $$

where

$$f(x)=x-\bigl(2-x\ln(x+1)+x\ln(1-x)\bigr), $$

then

$$u(x)=x, $$

is the exact solution of the equation.

Tables 5-6 also show that the proposed method is effective and that the convergence order is \(O(1/N)\), which coincides with the classical theory of collocation methods; see the sketch after the tables.

Table 5 Approximations at \(\pmb{x=-0.9,-0.5,-0.1}\) for \(\pmb{u(x)-\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)
Table 6 Approximations at \(\pmb{x=0.1,0.5,0.9}\) for \(\pmb{u(x)-\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)
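The reported \(O(1/N)\) order can be checked with a direct dense collocation sketch (ours, without the FMM):

```python
# Direct dense collocation (no FMM) for Example 4.2, measuring the maximum
# error at the collocation points against the exact solution u(x) = x.
import numpy as np

def max_error(N, lam=-1.0):
    x = np.linspace(-1.0, 1.0, N + 1)
    xt = 0.5 * (x[:-1] + x[1:])
    A = np.eye(N)
    for i in range(N):
        for j in range(N):
            if j != i:
                A[i, j] = lam * np.log(abs((x[j + 1] - xt[i]) / (x[j] - xt[i])))
    f = xt - (2.0 - xt * np.log(xt + 1.0) + xt * np.log(1.0 - xt))
    return np.max(np.abs(np.linalg.solve(A, f) - xt))

for N in (10, 20, 40, 80):
    print(N, max_error(N))        # expected to shrink roughly like 1/N
```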

Example 4.3

For the more general case, we consider

$$u(x) \bigl(1+x^{2}\bigr)-\bigl(1+x^{2}\bigr)\int _{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x),\quad \vert x\vert < 1, $$

where

$$f(x)=x^{3}- \biggl(\frac{4 - \pi}{2} + 2x^{2} + x^{3} \ln\frac{1-x}{1+x} \biggr) $$

and the exact solution is

$$u(x)=\frac{x^{3}}{1+x^{2}}. $$

Tables 7-8 also illustrate the efficiency and accuracy of the methods.

Table 7 Approximations at \(\pmb{x=-0.9,-0.5,-0.1}\) for \(\pmb{u(x)(1+x^{2})-(1+x^{2})\int_{-1}^{1}\frac{u(y)}{y-x}\,dy=f(x)}\)
Table 8 Approximations at \(\pmb{x=0.1,0.5,0.9}\) for \(\pmb{u(x)(1+x^{2})-(1+x^{2})\int_{-1}^{1}\frac{u(y)}{y-x}\,dy=f(x)}\)

5 Conclusion

In this paper, we explore collocation methods for a singular Fredholm integral equation of the second kind and utilize the FMM to improve the efficiency of the algorithm. Based on the multipole expansion of the kernel, we demonstrate that the approximate operator used in the collocation equation converges to the original operator. Numerical examples demonstrate the performance of the proposed algorithm.