1 Introduction

The concept of the principal value of a Cauchy type singular integral is well known. Equations of this kind are applied in many branches of engineering and science, such as fracture mechanics [1] and aerodynamics [2], and they occur in a variety of mixed boundary value problems of mathematical physics [24]. The general form of a Cauchy singular integral equation is

$$ \int_{-1}^{1}\frac{y(t)}{t-s}\,dt+ \int _{-1}^{1}k(s,t)y(t)\,dt=g(s), \quad -1< s< 1, $$
(1)

where \(y(t)\) is an unknown function and \(g(s)\) is a given function [5]. When \(k(s,t)=0\), equation (1) is reduced to the following equation:

$$ \int_{-1}^{1}\frac{y(t)}{t-s}\,dt=g(s),\quad -1< s< 1. $$
(2)

Equation (2) is known as the airfoil equation in aerodynamics. The complete analytical solution of (2) for all four cases is presented in [6]. We denote it by

$$ y(s)=y_{j}(s), $$
(3)

where \(j =1,2,3,4\) corresponds to Cases (i), (ii), (iii), (iv), respectively.

Case (i). The solution is bounded at both end points \(s = \pm1\),

$$ y_{1}(s)=-\frac{\sqrt{1-s^{2}}}{\pi^{2} } \int_{-1}^{1}\frac{g(t)}{\sqrt {1-t^{2}}(t-s)}\,dt, $$
(4)

provided that

$$\int_{-1}^{1}\frac{g(t)}{\sqrt{1-t^{2}}}\,dt=0. $$

Case (ii). The solution is unbounded at both end points \(s = \pm1\),

$$ y_{2}(s)=-\frac{1}{\pi^{2} \sqrt{1-s^{2}}} \int_{-1}^{1}\frac{\sqrt {1-t^{2}}}{t-s}g(t)\,dt+ \frac{\omega}{\sqrt{1-s^{2}}}, $$
(5)

where

$$\omega= \int_{-1}^{1}y_{2}(t)\,dt. $$

Case (iii). The solution is bounded at the end point \(s = 1\), but unbounded at \(s = -1\),

$$ y_{3}(s)=-\frac{1}{\pi^{2}}\sqrt{\frac{1-s}{1+s}} \int_{-1}^{1}\sqrt {\frac{1+t}{1-t}} \frac{g(t)}{(t-s)}\,dt. $$
(6)

Case (iv). The solution is bounded at the end point \(s = -1\), but unbounded at \(s = 1\),

$$ y_{4}(s)=-\frac{1}{\pi^{2}}\sqrt{\frac{1+s}{1-s}} \int_{-1}^{1}\sqrt {\frac{1-t}{1+t}} \frac{g(t)}{(t-s)}\,dt. $$
(7)

Due to the singularity of their integrands, Cauchy singular Fredholm integral equations (CSFIEs) are difficult to solve analytically. However, the wide variety of applications of these equations shows that the CSFIEs have a special significance. On the other hand, the volume of work done in the area of the CSFIEs is relatively small. Hence, it is important to obtain approximate solutions of the CSFIEs by numerical methods.

In previous work, several numerical methods have been introduced to solve the Cauchy singular integral equation using various techniques, such as the cubic spline method [7], Gaussian quadrature with an overdetermined system [4], the generalized inverses technique [8], the iteration method [9], Bernstein polynomials [10], the Jacobi polynomials technique [11], the rational functions method [12], Chebyshev polynomials [13] and the Nyström method [14]. Liu et al. [15] devised a collocation scheme for a certain Cauchy singular integral equation based on superconvergence analysis. Panja and Mandal [16] used the Daubechies scale function to solve a second kind integral equation with a Cauchy type kernel. Recently, other effective numerical methods have been proposed for the approximate solution of Cauchy type singular integral equations, such as the Bernstein polynomials method [17], Legendre polynomials [18], the differential transform technique [19] and the reproducing kernel method [20]. Also, Dezhbord et al. [21] presented the reproducing kernel Hilbert space method for solving equation (2).

In recent years, several types of matrix collocation methods have been proposed for solving singular integral and singular integro-differential equations (see [22, 23]). In the present paper, we use a different collocation method for CSFIEs. Since the Bernstein polynomials have many good properties, such as positivity, continuity, a recursion relation, symmetry, forming a partition of unity over \([a,b]\), uniform approximation, differentiability and integrability, these polynomials have been applied in collocation methods [24–27]. Moreover, expanding functions in Bernstein polynomials leads to good numerical results and high efficiency in convergence theorems.

In this work, we use a matrix collocation method based on Bernstein polynomials for solving CSFIEs of the first kind. The rest of this paper is organized as follows: In Section 2, we recall some definitions of the Bernstein polynomials and of the collocation method as used for solving CSFIEs. In Section 3, the main equation is transformed into equivalent integral equations by reducing the singularity. The next section is devoted to a description of a numerical method based on Bernstein polynomials. In Section 5, an error analysis of the proposed method is discussed. In Section 6, numerical results for some examples are compared with the exact solutions.

2 Preliminaries

2.1 The Bernstein polynomials

The Bernstein polynomials of degree n are defined by

$$ B_{i,n}(s)=\binom{n}{i}s^{i} (1-s)^{n-i}, \quad s \in[0,1]. $$
(8)

By using the binomial expansion, they can be written as

$$ B_{i,n}(s)=\sum_{k=0}^{n-i} (-1)^{k} \binom{n}{i} \binom {n-i}{k}s^{i+k}, \quad s \in[0,1]. $$
(9)

Also, the Bernstein polynomials of the nth degree on the interval \([a,b]\) are [25]

$$ B_{i,n}(s)=\binom{n}{i}\frac{(s-a)^{i}(b-s)^{n-i}}{(b-a)^{n}} \quad\mbox{for } i=0,1,2,\dots,n. $$
(10)
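For illustration, definitions (8)–(10) can be implemented in a few lines. The following Python sketch (the function names are ours) checks that the binomial-expansion form (9) agrees with (8) on \([0,1]\) and that the basis forms a partition of unity on \([a,b]\):

```python
from math import comb

def bernstein(i, n, s, a=-1.0, b=1.0):
    """Bernstein basis polynomial B_{i,n}(s) on [a, b], as in eq. (10)."""
    return comb(n, i) * (s - a)**i * (b - s)**(n - i) / (b - a)**n

def bernstein_series(i, n, s):
    """Binomial-expansion form of eq. (9), valid on [0, 1]."""
    return sum((-1)**k * comb(n, i) * comb(n - i, k) * s**(i + k)
               for k in range(n - i + 1))

# Eqs. (8) and (9) agree on [0, 1] ...
for s in [0.0, 0.25, 0.5, 0.8, 1.0]:
    assert all(abs(bernstein(i, 5, s, 0.0, 1.0) - bernstein_series(i, 5, s)) < 1e-12
               for i in range(6))
# ... and the basis on [-1, 1] is a partition of unity.
for s in [-0.9, -0.25, 0.0, 0.5, 1.0]:
    assert abs(sum(bernstein(i, 5, s) for i in range(6)) - 1.0) < 1e-12
```

The partition-of-unity property checked here is exactly the identity \(\sum_{i=0}^{n} B_{i,n}(s)=1\) used later in the proof of Theorem 2.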

2.2 Collocation method

We use the truncated Bernstein polynomial series, based on Cases (i), (ii), (iii) and (iv) of equation (2), to obtain approximate solutions as follows:

$$\begin{aligned} &x(s)\simeq x_{n}(s)=\sum _{i=0}^{n} x_{1,i} B_{i,n}(s),\quad \mbox{for Case } (\text{i}), \\ \begin{aligned} &x(s)\simeq x_{n}(s)=\sum_{i=0}^{n} x_{2,i} B_{i,n}(s),\quad \mbox{for Case } (\text{ii}), \\ &x(s)\simeq x_{n}(s)=\sum_{i=0}^{n} x_{3,i} B_{i,n}(s), \quad \mbox{for Case }(\text{iii}), \end{aligned} \\ &x(s)\simeq x_{n}(s)=\sum_{i=0}^{n} x_{4,i} B_{i,n}(s),\quad \mbox{for Case }(\text{iv}), \end{aligned}$$
(11)

where \(x_{j,i}\), \(i=0,1,\dots,n\), \(j=1,2,3,4\), are the unknown Bernstein coefficients.

3 Removing singularity of equation (2)

It is clear that the approximate solutions based on the analytical solutions of equation (2) can be represented by the following relations [21]:

$$ y_{j}(s)=\omega_{j} (s) x(s),\quad j=1,2,3,4, $$
(12)

where \(x(s)\) is a well-behaved function on \([-1,1]\) and the weight functions for the corresponding cases are, respectively:

$$ \begin{aligned} &\omega_{1}(s)= \frac{1-s^{2}}{\sqrt{1-s^{2}}}, \qquad\omega_{2}(s)= \frac {1}{\sqrt{1-s^{2}}}, \\ &\omega_{3}(s)= \frac{1-s}{\sqrt{1-s^{2}}},\qquad \omega_{4}(s)= \frac {1+s}{\sqrt{1-s^{2}}}. \end{aligned} $$
(13)

Now, in order to remove the singular term, we convert equation (2) into equivalent integral equations. The unknown function \(x(s)\) of equation (12) is treated in the following cases:

Case (i). By using (3), (12) and (13), \(y_{1}(s)\) can be represented in the form

$$ y_{1}(s)=\frac{1-s^{2}}{\sqrt{1-s^{2}}}x(s)=\sqrt{1-s^{2}} x(s), \quad s\in(-1,1). $$
(14)

So by substituting (14) into equation (2), we have

$$ \begin{aligned} &\int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)}{t-s} \,dt= g(s), \\ &\int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)}{t-s} \,dt= \int_{-1}^{1}\sqrt {1-t^{2}} \frac{x(t)-x(s)}{t-s} \,dt+ \int_{-1}^{1}\sqrt{1-t^{2}} \frac {x(s)}{t-s} \,dt. \end{aligned} $$
(15)

Note that the singular term is integrable in the sense of the Cauchy principal value. We have

$$ \int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(s)}{t-s} \,dt= x(s) \int _{-1}^{1}\sqrt{1-t^{2}} \frac{1}{t-s} \,dt=-\pi s x(s), \quad s \in[-1,1]. $$
(16)

Thus, the singular term has been removed and equation (15) is transformed into

$$ \int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)-x(s)}{t-s} \,dt - \pi s x(s)=g(s), \quad s \in(-1,1). $$
(17)
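The key identity (16) can be verified numerically. A standard way to evaluate a Cauchy principal value is to fold the integral symmetrically about the singular point, so that the pairs \(t=s\pm u\) cancel the singularity; the following Python sketch (the folding scheme, midpoint rule and function name are our own illustrative choices, not part of the paper's method) confirms that the principal value equals \(-\pi s\):

```python
from math import sqrt, pi

def pv_airfoil_weight(s, m=100_000):
    """PV of ∫_{-1}^{1} sqrt(1-t^2)/(t-s) dt via symmetric folding about t = s."""
    f = lambda t: sqrt(max(1.0 - t * t, 0.0)) / (t - s)
    d = min(1.0 - s, 1.0 + s)        # half-width of the symmetric window around s
    h = d / m
    total = 0.0
    for k in range(m):               # folded pairs t = s ± u: singularity cancels
        u = (k + 0.5) * h
        total += (f(s + u) + f(s - u)) * h
    lo, hi = ((-1.0, s - d) if s > 0.0 else (s + d, 1.0))
    h2 = (hi - lo) / m               # leftover one-sided piece, regular integrand
    for k in range(m):
        total += f(lo + (k + 0.5) * h2) * h2
    return total

print(pv_airfoil_weight(0.3), -pi * 0.3)    # both ≈ -0.9425
```

Because the folded integrand is the finite difference quotient of \(\sqrt{1-t^{2}}\), this is the same regularization idea used to pass from (15) to (17).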

Case (ii). The solution \(y_{2}(s)\) can be represented in the form

$$ y_{2}(s)=\frac{1}{\sqrt{1-s^{2}}}x(s),\quad s\in(-1,1). $$
(18)

Also, by substituting (18) into equation (2), we have

$$ \begin{aligned} &\int_{-1}^{1}\frac{1}{\sqrt{1-t^{2}}}\frac{x(t)}{t-s} \,dt= g(s),\quad s\in (-1,1) \\ &\int_{-1}^{1}\frac{1}{\sqrt{1-t^{2}}}\frac{x(t)}{t-s} \,dt= \int _{-1}^{1}\frac{1}{\sqrt{1-t^{2}}}\frac{x(t)-x(s)}{t-s} \,dt+ \int _{-1}^{1}\frac{1}{\sqrt{1-t^{2}}}\frac{x(s)}{t-s} \,dt. \end{aligned} $$
(19)

In the sense of the Cauchy principal value,

$$ \int_{-1}^{1}\frac{1}{\sqrt{1-t^{2}}}\frac{x(s)}{t-s} \,dt= x(s) \int _{-1}^{1}\frac{1}{\sqrt{1-t^{2}}}\frac{1}{t-s} \,dt=0,\quad s \in[-1,1]. $$
(20)

Thus, the singular term has been removed and equation (19) is transformed into

$$ \int_{-1}^{1}\frac{1}{\sqrt{1-t^{2}}}\frac{x(t)-x(s)}{t-s} \,dt =g(s), \quad s \in(-1,1). $$
(21)

Case (iii). The solution \(y_{3}(s)\) can be represented in the form

$$ y_{3}(s)=\sqrt{\frac{1-s}{1+s}}x(s),\quad s\in(-1,1]. $$
(22)

We substitute (22) into equation (2) and we get

$$ \begin{aligned} \int_{-1}^{1}\sqrt{\frac{1-t}{1+t}} \frac{x(t)}{t-s} \,dt= {}&g(s), \\ \int_{-1}^{1}\sqrt{\frac{1-t}{1+t}} \frac{x(t)}{t-s} \,dt={}& \int _{-1}^{1}\frac{\sqrt{1-t^{2}}}{1+t}\frac{x(t)}{t-s} \,dt \\ ={}& \int_{-1}^{1}\frac{\sqrt{1-t^{2}}}{t-s} \biggl( \biggl( \frac {x(t)}{1+t}-\frac{x(s)}{1+s} \biggr)+\frac{x(s)}{1+s} \biggr) \,dt \\ ={}& \int_{-1}^{1}\frac{\sqrt{1-t^{2}}}{t-s} \biggl( \frac{ (x(t)-x(s) )(1+t)-x(t)(t-s)}{(1+t)(1+s)} \biggr)\,dt \\ &{}+ \int_{-1}^{1}\frac{\sqrt{1-t^{2}}x(s)}{(t-s)(1+s)}\,dt \\ ={}&\frac{1}{1+s} \biggl( \biggl( \int_{-1}^{1}\sqrt{1-t^{2}} \frac {x(t)-x(s)}{t-s}\,dt- \int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)}{1+t}\,dt \biggr) \\ &{}+ \int_{-1}^{1}\frac{\sqrt{1-t^{2}}x(s)}{t-s}\,dt \biggr), \end{aligned} $$
(23)

also, from (16), we get

$$ \int_{-1}^{1}\frac{\sqrt{1-t^{2}}x(s)}{(t-s)}\,dt=- \pi s x(s), $$
(24)

so, equation (23) is converted into

$$ \frac{1}{1+s} \biggl( \int_{-1}^{1}\sqrt{1-t^{2}} \frac {x(t)-x(s)}{t-s}\,dt- \int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)}{1+t}\,dt \biggr)-\frac{\pi s x(s)}{1+s} =g(s). $$
(25)

Case (iv). The solution \(y_{4}(s)\) can be represented in the form

$$ y_{4}(s)=\sqrt{\frac{1+s}{1-s}}x(s),\quad s\in[-1,1). $$
(26)

By substituting (26) into equation (2), we have

$$ \begin{aligned} \int_{-1}^{1}\sqrt{\frac{1+t}{1-t}} \frac{x(t)}{t-s} \,dt={}& g(s), \\ \int_{-1}^{1}\sqrt{\frac{1+t}{1-t}} \frac{x(t)}{t-s} \,dt={}& \int _{-1}^{1}\frac{\sqrt{1-t^{2}}}{1-t}\frac{x(t)}{t-s} \,dt \\ ={}& \int _{-1}^{1}\frac{\sqrt{1-t^{2}}}{t-s} \biggl( \biggl( \frac {x(t)}{1-t}-\frac{x(s)}{1-s} \biggr)+\frac{x(s)}{1-s} \biggr) \,dt \\ ={}& \int_{-1}^{1}\frac{\sqrt{1-t^{2}}}{t-s} \biggl( \frac{ (x(t)-x(s) )(1-t)+x(t)(t-s)}{(1-t)(1-s)} \biggr)\,dt \\ &{}+ \int_{-1}^{1}\frac{\sqrt{1-t^{2}}x(s)}{(t-s)(1-s)}\,dt \\ ={}&\frac{1}{1-s}\biggl( \biggl( \int_{-1}^{1}\sqrt{1-t^{2}} \frac {x(t)-x(s)}{t-s}\,dt+ \int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)}{1-t}\,dt \biggr) \\ &{}+ \int_{-1}^{1}\frac{\sqrt{1-t^{2}}x(s)}{t-s}\,dt\biggr), \end{aligned} $$
(27)

also, from (16), we get

$$ \int_{-1}^{1}\frac{\sqrt{1-t^{2}}x(s)}{(t-s)}\,dt=- \pi s x(s), $$
(28)

so, equation (27) is transformed into

$$ \frac{1}{1-s} \biggl( \int_{-1}^{1}\sqrt{1-t^{2}} \frac {x(t)-x(s)}{t-s}\,dt+ \int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)}{1-t}\,dt \biggr)-\frac{\pi s x(s)}{1-s} =g(s). $$
(29)

In all of equations (17), (21), (25) and (29), the difference quotient \(\frac{x(t)-x(s)}{t-s}\) is interpreted as \(x'(s)\) when \(t=s\), which means that the singularity of equation (2) has been removed. Finally, the integrals are computed with the Gauss-Chebyshev quadrature rule.

4 Description of the technique

First, we rewrite equations (17), (21), (25), and (29) in the following form:

$$ F_{j}(s)+G_{j}(s)=g(s),\quad j=1,2,3,4, $$
(30)

where

$$\begin{aligned} & F_{1}(s)= \int_{-1}^{1} \sqrt{1-t^{2}} \frac{x(t)-x(s)}{t-s}\,dt, \\ &G_{1}(s)=-\pi s x(s), \quad\mbox{for Case }(\text{i}), \\ &F_{2}(s)= \int_{-1}^{1} \frac{1}{\sqrt{1-t^{2}}} \frac {x(t)-x(s)}{t-s}\,dt, \\ \begin{aligned} &G_{2}(s)=0, \quad\mbox{for Case }(\text{ii}), \\ &F_{3}(s)=\frac{1}{1+s} \biggl( \int_{-1}^{1}\sqrt{1-t^{2}} \frac {x(t)-x(s)}{t-s}\,dt- \int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)}{1+t}\,dt \biggr), \end{aligned} \\ &G_{3}(s)=-\frac{\pi s x(s)}{1+s}, \quad\mbox{for Case }(\text{iii}), \\ &F_{4}(s)=\frac{1}{1-s} \biggl( \int_{-1}^{1}\sqrt{1-t^{2}} \frac {x(t)-x(s)}{t-s}\,dt+ \int_{-1}^{1}\sqrt{1-t^{2}} \frac{x(t)}{1-t}\,dt \biggr), \\ &G_{4}(s)=-\frac{\pi s x(s)}{1-s}, \quad\mbox{for Case }(\text{iv}), \end{aligned}$$
(31)

and \(\frac{x(t)-x(s)}{t-s}\) is interpreted as \(x'(s)\) when \(t=s\). According to equations (31), by substituting the collocation points \(s_{m}\) defined by

$$ s_{m}=-1+\frac{2}{n+2}(m+1),\quad m=0,1,\dots,n, $$
(32)

into (30), we get

$$ F_{j}(s_{m})+G_{j}(s_{m})=g(s_{m}),\quad j=1,2,3,4. $$
(33)

Using (11) and (31), we obtain

$$ \begin{aligned}& F_{1}(s_{m})\simeq \sum_{i=0}^{n}x_{1,i} \int_{-1}^{1} \sqrt{1-t^{2}} \frac {B_{i,n}(t)-B_{i,n}(s_{m})}{t-s_{m}}\,dt, \quad\mbox{for Case }(\text{i}), \\ &F_{2}(s_{m})\simeq\sum_{i=0}^{n}x_{2,i} \int_{-1}^{1} \frac{1}{\sqrt {1-t^{2}}} \frac{B_{i,n}(t)-B_{i,n}(s_{m})}{t-s_{m}}\,dt,\quad \mbox{for Case }(\text{ii}), \\ &F_{3}(s_{m})\simeq\frac{1}{1+s_{m}} \Biggl( \sum _{i=0}^{n}x_{3,i} \int _{-1}^{1}\sqrt{1-t^{2}} \frac{B_{i,n}(t)-B_{i,n}(s_{m})}{t-s_{m}}\,dt \\ &\phantom{F_{3}(s_{m})\simeq}{}- \sum_{i=0}^{n}x_{3,i} \int_{-1}^{1}\sqrt{1-t^{2}} \frac {B_{i,n}(t)}{1+t}\,dt \Biggr), \quad\mbox{for Case }(\text{iii}), \\ &F_{4}(s_{m})\simeq\frac{1}{1-s_{m}} \Biggl( \sum _{i=0}^{n}x_{4,i} \int _{-1}^{1}\sqrt{1-t^{2}} \frac{B_{i,n}(t)-B_{i,n}(s_{m})}{t-s_{m}}\,dt \\ &\phantom{F_{4}(s_{m})\simeq}{}+ \sum_{i=0}^{n}x_{4,i} \int_{-1}^{1}\sqrt{1-t^{2}} \frac {B_{i,n}(t)}{1-t}\,dt \Biggr), \quad\mbox{for Case }(\text{iv}) \end{aligned} $$
(34)

and

$$\begin{aligned} & G_{1}(s_{m})\simeq-\sum _{i=0}^{n}x_{1,i} B_{i,n}(s_{m}) \pi s_{m}, \quad\mbox{for Case }(\text{i}), \\ &G_{2}(s_{m})=0, \quad\mbox{for Case }(\text{ii}), \\ &G_{3}(s_{m})\simeq-\sum_{i=0}^{n}x_{3,i} B_{i,n}(s_{m})\frac{\pi s_{m} }{1+s_{m}}, \quad\mbox{for Case }(\text{iii}), \\ &G_{4}(s_{m})\simeq-\sum_{i=0}^{n}x_{4,i} B_{i,n}(s_{m})\frac{\pi s_{m} }{1-s_{m}}, \quad \mbox{for Case }(\text{iv}). \end{aligned}$$

We use the Gauss-Chebyshev quadrature rule of the first kind for computing the integral part in Case (ii) and select the Gauss-Chebyshev quadrature rule of the second kind for computing the integral part in the other cases. So, we can rewrite (33) for all cases as follows:

$$ \begin{aligned} & \sum_{i=0}^{n} \Biggl[\sum_{k=1}^{N} \omega_{k} \frac {B_{i,n}(t_{k})-B_{i,n}(s_{m})}{t_{k}-s_{m}}- B_{i,n}(s_{m})\pi s_{m} \Biggr]x_{1,i}=g(s_{m}), \quad\mbox{for Case }(\text{i}), \\ & \sum_{i=0}^{n} \Biggl[\sum _{k=1}^{N} \omega_{k} \frac {B_{i,n}(t_{k})-B_{i,n}(s_{m})}{t_{k}-s_{m}} \Biggr]x_{2,i}=g(s_{m}),\quad \mbox{for Case }(\text{ii}), \\ & \sum_{i=0}^{n} \biggl[ \frac{ \sum_{k=1}^{N} \omega_{k} \frac {B_{i,n}(t_{k})-B_{i,n}(s_{m})}{t_{k}-s_{m}}- \sum_{k=1}^{N} \omega_{k} \frac {B_{i,n}(t_{k})}{1+t_{k}} }{1+s_{m}}-\frac{\pi s_{m} B_{i,n}(s_{m})}{1+s_{m}} \biggr]x_{3,i}=g(s_{m}), \\ &\quad\mbox{for Case }(\text{iii}), \\ & \sum_{i=0}^{n} \biggl[ \frac{ \sum_{k=1}^{N} \omega_{k} \frac {B_{i,n}(t_{k})-B_{i,n}(s_{m})}{t_{k}-s_{m}}+ \sum_{k=1}^{N} \omega_{k} \frac {B_{i,n}(t_{k})}{1-t_{k}} }{1-s_{m}}-\frac{\pi s_{m} B_{i,n}(s_{m})}{1-s_{m}} \biggr]x_{4,i}=g(s_{m}), \\ &\quad\mbox{for Case }(\text{iv}), \end{aligned} $$
(35)

where

$$ \begin{aligned}& t_{k}=\cos \biggl( \frac{k\pi}{N+1} \biggr), \\ &\omega_{k}=\frac{\pi }{N+1}{ \biggl(\sin \biggl( \frac{k\pi}{N+1} \biggr) \biggr)}^{2}\quad \mbox{for Cases } \text{(i), (iii), (iv)}, \\ & t_{k}=\cos \biggl(\frac{(2k-1)\pi}{2N} \biggr),\\ & \omega_{k}= \frac{\pi }{N},\quad \mbox{for Case }(\text{ii}), k=1,2,\dots,N. \end{aligned} $$
(36)
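The rules in (36) are the standard Gauss-Chebyshev formulas of the second and first kind. A small Python sketch (the function name is ours) tabulating them and checking weighted moments they reproduce exactly:

```python
from math import cos, sin, pi

def gauss_chebyshev(N, kind):
    """Nodes and weights of eq. (36): kind 2 serves Cases (i), (iii), (iv);
    kind 1 serves Case (ii)."""
    if kind == 2:   # weight sqrt(1 - t^2)
        return [(cos(k * pi / (N + 1)), pi / (N + 1) * sin(k * pi / (N + 1))**2)
                for k in range(1, N + 1)]
    # kind 1: weight 1 / sqrt(1 - t^2)
    return [(cos((2 * k - 1) * pi / (2 * N)), pi / N) for k in range(1, N + 1)]

# ∫ sqrt(1-t^2) dt = pi/2 and ∫ t^2 sqrt(1-t^2) dt = pi/8 (second kind);
# ∫ dt / sqrt(1-t^2) = pi (first kind).
assert abs(sum(w for t, w in gauss_chebyshev(4, 2)) - pi / 2) < 1e-12
assert abs(sum(w * t * t for t, w in gauss_chebyshev(4, 2)) - pi / 8) < 1e-12
assert abs(sum(w for t, w in gauss_chebyshev(4, 1)) - pi) < 1e-12
```

With \(N\) nodes these rules integrate weighted polynomials of degree up to \(2N-1\) exactly, which is why the regularized integrands of (35), polynomials of degree \(n-1\) in \(t\), are handled accurately even for small \(N\).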

For simplicity, we write equations (35) as follows:

$$ \sum_{i=0}^{n} \bigl[ \mathcal{F}_{j}(s_{m})+\mathcal{G}_{j}(s_{m}) \bigr]x_{j,i}=g(s_{m}),\quad j=1,2,3,4. $$
(37)

Hence, the main matrix form (37) corresponding to all cases of (2) can be written separately in the form

$$ \mathbf{A}_{j}\mathbf{X}_{j}=\mathbf{G},\quad j=1,2,3,4, $$
(38)

where

$$\begin{aligned} &[\mathbf{A}_{j}]_{(m+1)(i+1)}= \mathcal{F}_{j}(s_{m})+\mathcal{G}_{j}(s_{m}),\quad m,i=0,1,\dots,n, \\ &\mathbf{X}_{j}=[x_{j,0},x_{j,1}, \dots,x_{j,n}]^{T},\quad j=1,2,3,4, \end{aligned}$$

and

$$\mathbf{G}=\bigl[g(s_{0}),g(s_{1}),\dots,g(s_{n}) \bigr]^{T},\quad \mbox{for Cases }(\text{i}), (\text{ii}), (\text{iii}), (\text{iv}). $$

After solving equations (38) for Cases (i), (ii), (iii) and (iv), the unknown coefficients \(x_{j,i}\) are determined and we can approximate the solutions of (17), (21), (25) and (29) by substituting \(x_{j,i}\), \(i=0,1,\dots,n\), \(j=1,2,3,4\), into (11). The approximate solutions of (2) in all cases then follow:

$$ \begin{aligned} &y_{n}(s)=\frac{1-s^{2}}{\sqrt{1-s^{2}}}x_{n}(s)= \frac{1-s^{2}}{\sqrt {1-s^{2}}}\sum_{i=0}^{n} x_{1,i} B_{i,n}(s), \quad\mbox{for Case }(\text{i}), \\ &y_{n}(s)=\frac{1}{\sqrt{1-s^{2}}}x_{n}(s)=\frac{1}{\sqrt{1-s^{2}}}\sum _{i=0}^{n} x_{2,i} B_{i,n}(s), \quad\mbox{for Case }(\text{ii}), \\ &y_{n}(s)=\sqrt{\frac{1-s}{1+s}}x_{n}(s)=\sqrt{ \frac{1-s}{1+s}}\sum_{i=0}^{n} x_{3,i} B_{i,n}(s), \quad\mbox{for Case }(\text{iii}), \\ &y_{n}(s)=\sqrt{\frac{1+s}{1-s}}x_{n}(s)=\sqrt{ \frac{1+s}{1-s}}\sum_{i=0}^{n} x_{4,i} B_{i,n}(s), \quad\mbox{for Case } (\text{iv}). \end{aligned} $$
(39)
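To make the pipeline concrete, the following Python sketch assembles and solves the Case (i) system (35) with \(n=5\), \(N=4\) for the right-hand side \(g(s)=-s^{4}+\frac{3}{2}s^{2}-\frac{3}{8}\) of Example 2 below; from the exact \(y(s)\) listed there and \(\omega_{1}(s)=\sqrt{1-s^{2}}\), the regular part is \(x(s)=(s^{3}-s)/\pi\). The helper names, the plain Gaussian elimination and the numerical-derivative guard for \(t=s\) are our own illustrative choices, not part of the paper's implementation:

```python
from math import comb, cos, sin, pi

n, N = 5, 4                                   # degrees used in Section 6

def B(i, s):
    """Bernstein basis on [-1, 1], eq. (10)."""
    return comb(n, i) * (s + 1)**i * (1 - s)**(n - i) / 2**n

def diffq(i, t, s):
    """(B_i(t) - B_i(s)) / (t - s), read as B_i'(s) when t == s."""
    if abs(t - s) < 1e-12:
        h = 1e-6
        return (B(i, s + h) - B(i, s - h)) / (2 * h)
    return (B(i, t) - B(i, s)) / (t - s)

# Second-kind Gauss-Chebyshev nodes and weights, eq. (36)
quad = [(cos(k * pi / (N + 1)), pi / (N + 1) * sin(k * pi / (N + 1))**2)
        for k in range(1, N + 1)]
S = [-1 + 2 * (m + 1) / (n + 2) for m in range(n + 1)]   # collocation points, eq. (32)

g = lambda s: -s**4 + 1.5 * s**2 - 0.375                 # Example 2 right-hand side

# Assemble the Case (i) system of eq. (35): A X = G
A = [[sum(w * diffq(i, t, s) for t, w in quad) - pi * s * B(i, s)
      for i in range(n + 1)] for s in S]
G = [g(s) for s in S]

# Gaussian elimination with partial pivoting (illustrative solver)
for c in range(n + 1):
    p = max(range(c, n + 1), key=lambda r: abs(A[r][c]))
    A[c], A[p], G[c], G[p] = A[p], A[c], G[p], G[c]
    for r in range(c + 1, n + 1):
        f = A[r][c] / A[c][c]
        A[r] = [arc - f * acc for arc, acc in zip(A[r], A[c])]
        G[r] -= f * G[c]
X = [0.0] * (n + 1)
for r in range(n, -1, -1):
    X[r] = (G[r] - sum(A[r][c] * X[c] for c in range(r + 1, n + 1))) / A[r][r]

x5 = lambda s: sum(X[i] * B(i, s) for i in range(n + 1))
exact_x = lambda s: (s**3 - s) / pi          # exact regular part for this g
print(x5(0.5), exact_x(0.5))                 # both ≈ -0.11937
```

Since \(x(s)\) is a cubic, it is represented exactly in the degree-5 Bernstein basis and the quadrature is exact for the degree-4 integrands, so the collocation solution recovers it to machine precision; \(y_{5}(s)=\sqrt{1-s^{2}}\,x_{5}(s)\) then gives the Case (i) approximation of (39).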

5 Error estimation analysis

In the current section, we intend to give an error analysis based on the Bernstein polynomials for the presented method by using an interpolation polynomial [26].

Theorem 1

Let f be a function in \(C^{n+1}[-1,1]\) and let \(p_{n}\) be a polynomial of degree \(n\) that interpolates the function f at \(n+1\) distinct points \(s_{0},s_{1},\dots,s_{n} \in[-1,1]\). Then for each \(s\in[-1,1]\) there exists a point \(\xi_{s} \in[-1,1]\) such that

$$f(s)-p_{n}(s)=\frac{\prod_{i=0}^{n} (s-s_{i})}{(n+1)!}{f}^{(n+1)}( \xi_{s}). $$

Let \(\omega_{j} f\), \(j=1,2,3,4\) be the exact solution of equation (2) and \(p_{n}\) be the interpolation polynomial of f. Now, if f is sufficiently smooth, we can write f as \(f=p_{n}+R_{n}\) where \(R_{n}\) is the error function

$$R_{n}(s)=\frac{(s-s_{0})(s-s_{1})\dots(s-s_{n})}{(n+1)!}{f}^{(n+1)}(\xi _{s}),\quad \xi_{s} \in(-1,1). $$

If \(y_{n}=\omega_{j} x_{n}\), \(j=1,2,3,4\) is the Bernstein polynomial series solution of (2), given by Cases (i), (ii), (iii) and (iv) of (11), then \(y_{n}\) satisfies (2) on the nodes. So, \(x_{n}\) and \(p_{n}\) are the solutions of \(\mathbf{A}_{j} \mathbf{X}_{j}=\mathbf{G}\) and \(\mathbf {A}_{j} \bar{\mathbf{X}}_{j}=\mathbf{G}+\triangle{\mathbf{G}}\) where

$$\begin{aligned} &[\triangle{\mathbf{G}}]_{(i+1)1}= \int_{-1}^{1} \sqrt{1-t^{2}} \frac {R_{n}(t)-R_{n}(s_{i})}{t-s_{i}}\,dt-\pi s_{i} R_{n}(s_{i}),\quad \mbox{for Case }(\text{i}), \\ &[\triangle{\mathbf{G}}]_{(i+1)1}= \int_{-1}^{1}\frac{1}{\sqrt {1-t^{2}}}\frac{R_{n}(t)-R_{n}(s_{i})}{t-s_{i}}\,dt, \quad\mbox{for Case }(\text{ii}), \\ &[\triangle{\mathbf{G}}]_{(i+1)1}=\frac{1}{1+s_{i}} \biggl( \int _{-1}^{1}\sqrt{1-t^{2}} \frac{R_{n}(t)-R_{n}(s_{i})}{t-s_{i}}\,dt- \int _{-1}^{1}\sqrt{1-t^{2}} \frac{R_{n}(t)}{1+t}\,dt \biggr) \\ &\phantom{[\triangle{\mathbf{G}}]_{(i+1)1}=}{}-\frac{\pi s_{i} R_{n}(s_{i})}{1+s_{i}}, \quad\mbox{for Case }(\text{iii}), \\ &[\triangle{\mathbf{G}}]_{(i+1)1}=\frac{1}{1-s_{i}} \biggl( \int _{-1}^{1}\sqrt{1-t^{2}} \frac{R_{n}(t)-R_{n}(s_{i})}{t-s_{i}}\,dt+ \int _{-1}^{1}\sqrt{1-t^{2}} \frac{R_{n}(t)}{1-t}\,dt \biggr) \\ &\phantom{[\triangle{\mathbf{G}}]_{(i+1)1}=}{}-\frac{\pi s_{i} R_{n}(s_{i})}{1-s_{i}},\quad \mbox{for Case }(\text{iv}), \end{aligned}$$

where \(i=0,1,\dots,n\).

In the following theorem, we present an upper bound of the absolute errors for our method.

Theorem 2

Assume that \(x(s)\) and \(f(s)\) are the Bernstein polynomial series solution and the exact solution of (17), respectively, so that \(y(s)=\omega_{1}(s) x(s)\) and \(\omega_{1}(s) f(s)\) are the Bernstein polynomial series solution and the exact solution of equation (2). Let \(p_{n}(s)\) denote the interpolation polynomial of \(f(s)\). If \(\mathbf{A}_{1}\), \(\mathbf{X}_{1}\), \(\bar{\mathbf{X}}_{1}\), \(\mathbf{G}\) and \(\triangle{\mathbf{G}}\) are defined as above, and \(f(s)\) is sufficiently smooth, then

$$\bigl\vert \omega_{1}(s)f(s)-y_{n}(s) \bigr\vert \leq M_{1} \bigl( \bigl\vert R_{n}(s) \bigr\vert + M_{2} \bigr), $$

where \(\smash{ \max_{-1 \leq s \leq1} \vert \omega_{1}(s) \vert } \leq M_{1}\) and \(\smash{ \max_{0 \leq i \leq n} \vert x_{1,i}-\bar{x}_{1,i} \vert }\leq M_{2}\).

Proof

Taking into account the given assumptions, we have

$$\begin{aligned} \bigl\vert \omega_{1}(s)f(s)-y_{n}(s) \bigr\vert & = \bigl\vert \omega _{1}(s)f(s)-\omega_{1}(s)p_{n}(s)+ \omega_{1}(s)p_{n}(s)-y_{n}(s) \bigr\vert \\ & \leq \bigl\vert \omega_{1}(s)f(s)-\omega_{1}(s)p_{n}(s) \bigr\vert + \bigl\vert y_{n}(s)-\omega_{1}(s)p_{n}(s) \bigr\vert \\ & \leq \bigl\vert \omega_{1}(s)R_{n}(s) \bigr\vert + \Biggl\vert \omega _{1}(s)\sum_{i=0}^{n} x_{1,i} B_{i,n}(s)-\omega_{1}(s)\sum _{i=0}^{n}\bar {x}_{1,i} B_{i,n}(s) \Biggr\vert \\ & \leq \bigl\vert \omega_{1}(s) \bigr\vert \bigl\vert R_{n}(s) \bigr\vert + \bigl\vert \omega_{1}(s) \bigr\vert \Biggl\vert \sum_{i=0}^{n}(x_{1,i}- \bar{x}_{1,i})B_{i,n}(s) \Biggr\vert \\ & \leq M_{1} \bigl\vert R_{n}(s) \bigr\vert + M_{1} M_{2} \Biggl\vert \sum_{i=0}^{n}B_{i,n}(s) \Biggr\vert , \end{aligned}$$
(40)

where

$$\sum_{i=0}^{n} B_{i,n}(s)=1,\quad -1 \leq s \leq1. $$

This completes the proof. □

The same reasoning yields analogous theorems for Cases (ii), (iii) and (iv).

6 Numerical experiments

In this section, the following examples are given to illustrate the performance and accuracy of the presented method in solving CSFIEs. These examples have been solved by our method with \(n=5\), \(N=4\), and the results are shown in the tables and figures. In Example 1, the results are computed by a program written in Mathematica 11.0 and are compared with the computed solutions of another well-known method.

Example 1

[13, 21]

Let us consider the Cauchy type singular Fredholm integral equation of the first kind given by

$$ \int_{-1}^{1} \frac{y(t)}{t-s} \,dt= g(s),\quad -1< s < 1, $$
(41)

where \(g(s)=s^{4} + 5s^{3} + 2s^{2} +s - \frac{11}{8}\).

This equation has an exact solution for all following cases:

$$\begin{aligned} &\text{Case } (\text{i}): \quad y(s)=-\frac{1}{\pi} \sqrt{1-s^{2}} \biggl(s^{3}+5s^{2}+ \frac{5}{2}s+\frac{7}{2} \biggr), \\ &\text{Case } (\text{ii}):\quad y(s)=\frac{1}{\pi\sqrt{1-s^{2}}} \biggl(s^{5}+5s^{4}+ \frac{3}{2} \bigl(s^{3}-s^{2} \bigr)- \frac{5}{2}s-\frac {7}{2} \biggr), \\ &\text{Case } (\text{iii}):\quad y(s)=-\frac{1}{\pi} \sqrt{\frac {1-s}{1+s}} \biggl(s^{4}+6s^{3}+\frac{15}{2}s^{2}+6s+ \frac{7}{2} \biggr), \\ &\text{Case } (\text{iv}):\quad y(s)=\frac{1}{\pi} \sqrt{\frac {1+s}{1-s}} \biggl(s^{4}+4s^{3}-\frac{5}{2}s^{2}+s- \frac{7}{2} \biggr). \end{aligned}$$

In Figures 1, 3, 5 and 7, we plot the exact solutions and the approximate solutions for \(n=5\) and \(N=4\) for all cases; it is clear that the approximate solutions are in good agreement with the exact solutions. Also, the errors of the presented method are plotted in Figures 2, 4, 6 and 8 and are compared with Chebyshev polynomial approximations in all cases in Tables 1, 2, 3 and 4.

Figure 1
figure 1

\(\pmb{x(s)}\) and \(\pmb{x_{5}(s)}\) for Case (i).

Figure 2
figure 2

Absolute error \(\pmb{\vert y(s)-y_{5}(s) \vert }\) for Case (i).

Figure 3
figure 3

\(\pmb{x(s)}\) and \(\pmb{x_{5}(s)}\) for Case (ii).

Figure 4
figure 4

Absolute error \(\pmb{\vert y(s)-y_{5}(s) \vert }\) for Case (ii).

Figure 5
figure 5

\(\pmb{x(s)}\) and \(\pmb{x_{5}(s)}\) for Case (iii).

Figure 6
figure 6

Absolute error \(\pmb{\vert y(s)-y_{5}(s) \vert }\) for Case (iii).

Figure 7
figure 7

\(\pmb{x(s)}\) and \(\pmb{x_{5}(s)}\) for Case (iv).

Figure 8
figure 8

Absolute error \(\pmb{\vert y(s)-y_{5}(s) \vert }\) for Case (iv).

Table 1 Numerical results of Example 1 for Case (i)
Table 2 Numerical results of Example 1 for Case (ii)
Table 3 Numerical results of Example 1 for Case (iii)
Table 4 Numerical results of Example 1 for Case (iv)

Example 2

Suppose we have the following CSFIE:

$$ \int_{-1}^{1} \frac{y(t)}{t-s} \,dt= g(s),\quad -1< s < 1, $$
(42)

where \(g(s)=-s^{4} + \frac{3}{2}s^{2} - \frac{3}{8}\).

This equation has an exact solution for all of the following cases:

$$\begin{aligned} &\text{Case } (\text{i}):\quad y(s)=-\frac{1}{\pi} \sqrt{1-s^{2}} \bigl(-s^{3}+s \bigr), \\ &\text{Case } (\text{ii}):\quad y(s)=\frac{1}{\pi\sqrt{1-s^{2}}} \bigl(-s^{5}+2s^{3}-s \bigr), \\ &\text{Case } (\text{iii}):\quad y(s)=-\frac{1}{\pi} \sqrt{\frac {1-s}{1+s}} \bigl(-s^{4}-s^{3}+s^{2}+s \bigr), \\ &\text{Case } (\text{iv}):\quad y(s)=\frac{1}{\pi} \sqrt{\frac {1+s}{1-s}} \bigl(-s^{4}+s^{3}+s^{2}-s \bigr). \end{aligned}$$

In Figures 9, 11, 13 and 15, we plot the exact solutions and the approximate solutions for \(n=5\) and \(N=4\) for all cases; it is clear that the approximate solutions are in good agreement with the exact solutions. Also, the errors of the presented method are plotted in Figures 10, 12, 14 and 16 and are presented for all cases in Tables 5, 6, 7 and 8.

Figure 9
figure 9

\(\pmb{x(s)}\) and \(\pmb{x_{5}(s)}\) for Case (i).

Figure 10
figure 10

Absolute error \(\pmb{\vert y(s)-y_{5}(s) \vert }\) for Case (i).

Figure 11
figure 11

\(\pmb{x(s)}\) and \(\pmb{x_{5}(s)}\) for Case (ii).

Figure 12
figure 12

Absolute error \(\pmb{\vert y(s)-y_{5}(s) \vert }\) for Case (ii).

Figure 13
figure 13

\(\pmb{x(s)}\) and \(\pmb{x_{5}(s)}\) for Case (iii).

Figure 14
figure 14

Absolute error \(\pmb{\vert y(s)-y_{5}(s) \vert }\) for Case (iii).

Figure 15
figure 15

\(\pmb{x(s)}\) and \(\pmb{x_{5}(s)}\) for Case (iv).

Figure 16
figure 16

Absolute error \(\pmb{\vert y(s)-y_{5}(s) \vert }\) for Case (iv).

Table 5 Numerical results of Example 2 for Case (i)
Table 6 Numerical results of Example 2 for Case (ii)
Table 7 Numerical results of Example 2 for Case (iii)
Table 8 Numerical results of Example 2 for Case (iv)

7 Conclusion

Various numerical methods for solving the singular integral equation with Cauchy kernel have been introduced previously; in this work we have presented a new, efficient approach. The present technique is a simple method for obtaining approximate solutions of the different cases of the singular integral equation with Cauchy kernel. It combines Bernstein polynomials with a collocation method. The singularity of the equations was removed in all cases, and the integrals in the resulting system of linear equations were approximated by Gauss-Chebyshev quadrature. Comparison of the approximate solutions with the exact solutions in the numerical results shows that the approximate solutions are close to the exact ones in the various cases.