1 Introduction

Special functions and orthogonal polynomials occupy a central position in numerical analysis. Common techniques for solving differential equations with these polynomials can be found in [1-12]. Among these special functions, the Chebyshev polynomials are particularly important. The well-known Chebyshev polynomials of the first kind [1] are orthogonal with respect to the weight function $w_{c}(x)=\frac{1}{\sqrt{1-x^{2}}}$ on the interval $[-1,1]$. These polynomials have many applications in different areas of interest, and many studies have been devoted to demonstrating their merits in various ways. One application field of Chebyshev polynomials is the solution of differential equations. For example, Chebyshev polynomial approximations have been used to solve ordinary differential equations with boundary conditions in [1], with collocation points in [13], the general class of linear differential equations in [14, 15], linear integro-differential equations with collocation points in [16], systems of high-order linear differential and integral equations with variable coefficients in [17, 18], and Sturm-Liouville problems in [19].

Some of the fundamental ideas of one-variable Chebyshev techniques have been extended and developed to multi-variable cases in the studies of Fox et al. [1], Basu [20], Doha [21] and Mason et al. [5]. In recent years, Chebyshev matrix methods for the solution of partial differential equations (PDEs) have been proposed by Kesan [22] and Akyuz-Dascioglu [23] as well.

On the other hand, all of the above studies are restricted to the interval $[-1,1]$ on which Chebyshev polynomials are defined. This limitation causes the Chebyshev approach to fail for problems naturally defined on larger domains, especially those involving infinity. Guo et al. [24] therefore proposed a modified type of Chebyshev polynomials as an alternative for problems posed on a nonnegative real domain. In that study, the basis functions, called rational Chebyshev polynomials, are orthogonal in $L^{2}(0,\infty)$ and are defined by

$$R_{n}(x)=T_{n}\left(\frac{x-1}{x+1}\right).$$

Parand et al. and Sezer et al. successfully applied spectral methods to solve problems on semi-infinite intervals [25, 26]; these approaches are known as the rational Chebyshev Tau and rational Chebyshev collocation methods, respectively. However, this kind of extension still cannot handle problems posed over the whole real line. More recently, we have introduced a new modification of Chebyshev polynomials, called exponential Chebyshev (EC) polynomials, developed to handle problems on the whole real range [27].

In this study, we extend the EC polynomial method to the multi-variable case, in particular to two-variable problems.

2 Properties of double EC polynomials

The well-known Chebyshev polynomials of the first kind are orthogonal on the interval $[-1,1]$ with respect to the weight function $w_{c}(x)=\frac{1}{\sqrt{1-x^{2}}}$ and can be determined with the help of the recurrence formula [1]

$$T_{0}(x)=1,\qquad T_{1}(x)=x,\qquad T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x),\quad n\geq 1.$$
(2.1)

Accordingly, the exponential Chebyshev (EC) functions have recently been defined in a similar fashion as follows [27].

Let

$$L^{2}(\varphi)=\left\{f:\|f\|_{L^{2}}^{2}=\int_{-\infty}^{\infty}|f(x)|^{2}w_{e}(x)\,dx<\infty\right\}$$

be a function space with the weight function $w_{e}(x)=\frac{\sqrt{e^{x}}}{e^{x}+1}$. We also assume that, for a nonnegative integer $n$, the $n$th derivative of a function $f\in L^{2}$ is also in $L^{2}$. Then an EC polynomial can be given by

$$E_{n}(x)=T_{n}(y),$$

where $y=\frac{e^{x}-1}{e^{x}+1}$.

This definition leads to the three-term recurrence equation for EC polynomials

$$E_{0}(x)=1,\qquad E_{1}(x)=\frac{e^{x}-1}{e^{x}+1},\qquad E_{n+1}(x)=2\left(\frac{e^{x}-1}{e^{x}+1}\right)E_{n}(x)-E_{n-1}(x),\quad n\geq 1.$$
(2.2)
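As a quick illustration, the recurrence (2.2) is straightforward to evaluate numerically. The sketch below (ours, in Python; it uses the identity $(e^{x}-1)/(e^{x}+1)=\tanh(x/2)$, which is numerically stable for large $|x|$) computes $E_{n}(x)$ and can be checked against the closed form $E_{n}(x)=\cos(n\arccos(\tanh(x/2)))$.

```python
import math

def ec(n, x):
    """E_n(x) via the three-term recurrence (2.2).

    Equivalent to T_n((e^x - 1)/(e^x + 1)); note (e^x - 1)/(e^x + 1) = tanh(x/2).
    """
    t = math.tanh(x / 2.0)  # = (e^x - 1)/(e^x + 1), finite for all real x
    e_prev, e_curr = 1.0, t  # E_0 and E_1
    if n == 0:
        return e_prev
    for _ in range(n - 1):
        e_prev, e_curr = e_curr, 2.0 * t * e_curr - e_prev
    return e_curr
```

For instance, `ec(3, 0.7)` agrees with `math.cos(3 * math.acos(math.tanh(0.35)))` to machine precision.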

These functions also satisfy the orthogonality condition [27]

$$\int_{-\infty}^{\infty}E_{n}(x)E_{m}(x)w_{e}(x)\,dx=\frac{c_{m}\pi}{2}\delta_{mn},$$
(2.3)

where $c_{m}=\begin{cases}2, & m=0,\\ 1, & m\neq 0\end{cases}$ and $\delta_{mn}$ is the Kronecker delta.
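The orthogonality relation (2.3) can be verified by quadrature. The following sketch (ours; the function names are illustrative) approximates the weighted inner product with a trapezoid rule; since the integrand decays like $e^{-|x|/2}$, truncating the real line to $[-80,80]$ is harmless. Here $w_{e}$ is written as $1/(2\cosh(x/2))$, which equals $\sqrt{e^{x}}/(e^{x}+1)$.

```python
import numpy as np

def ec_vals(n, x):
    """E_n on an array of points, via E_n(x) = cos(n * arccos(tanh(x/2)))."""
    return np.cos(n * np.arccos(np.tanh(x / 2.0)))

def ec_inner(n, m, a=-80.0, b=80.0, pts=40001):
    """Trapezoid approximation of the inner product in (2.3),
    with weight w_e(x) = sqrt(e^x)/(e^x + 1) = 1/(2 cosh(x/2))."""
    x = np.linspace(a, b, pts)
    f = ec_vals(n, x) * ec_vals(m, x) / (2.0 * np.cosh(x / 2.0))
    h = x[1] - x[0]
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
```

With these definitions, `ec_inner(0, 0)` approximates $\pi$, `ec_inner(2, 2)` approximates $\pi/2$, and `ec_inner(2, 3)` approximates $0$, matching $c_{m}\pi/2\,\delta_{mn}$.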

Double EC functions

Basu [20] has given the product $T_{r,s}(x,y)=T_{r}(x)T_{s}(y)$, which is a form of bivariate Chebyshev polynomials. Mason et al. [5] and Doha [21] have also mentioned a Chebyshev polynomial expansion of an infinitely differentiable function $u(x,y)$ defined on the square $S$ ($-1\leq x,y\leq 1$) by

$$u(x,y)\approx{\sum_{r=0}^{\infty}}''\,{\sum_{s=0}^{\infty}}''\,a_{rs}T_{r}(x)T_{s}(y),$$

where $T_{r}(x)$ and $T_{s}(y)$ are Chebyshev polynomials of the first kind, and the double primes indicate that the first term is $\frac{1}{4}a_{0,0}$, while $a_{m,0}$ and $a_{0,n}$ are to be taken as $\frac{1}{2}a_{m,0}$ and $\frac{1}{2}a_{0,n}$ for $m,n\neq 0$, respectively.

Definition

Based on Basu’s study, now we introduce double EC polynomials in the following form:

$$E_{r,s}(x,y)=E_{r}(x)E_{s}(y),$$
(2.4)

where $E_{r}(x)$ and $E_{s}(y)$ are EC polynomials defined by

$$E_{r}(x)=T_{r}\left(\frac{e^{x}-1}{e^{x}+1}\right),\qquad E_{s}(y)=T_{s}\left(\frac{e^{y}-1}{e^{y}+1}\right).$$

Recurrence relation The polynomial $E_{r,s}(x,y)$ satisfies the recurrence relations

$$E_{r+1,s}(x,y)=2\left(\frac{e^{x}-1}{e^{x}+1}\right)E_{r,s}(x,y)-E_{r-1,s}(x,y),$$
(2.5)

$$E_{r,s+1}(x,y)=2\left(\frac{e^{y}-1}{e^{y}+1}\right)E_{r,s}(x,y)-E_{r,s-1}(x,y).$$
(2.6)

If the function $f(x,y)$ is continuous on the whole infinite domain $-\infty<x,y<\infty$, then the $E_{r,s}(x,y)$ are orthogonal with respect to the weight function

$$w_{e}(x,y)=\frac{\sqrt{e^{x+y}}}{(e^{x}+1)(e^{y}+1)},$$
(2.7)

and we have

$$\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}E_{i,j}(x,y)E_{k,l}(x,y)w_{e}(x,y)\,dx\,dy=
\begin{cases}
\pi^{2}, & i=j=k=l=0,\\[2pt]
\dfrac{\pi^{2}}{4}, & i=k\neq 0,\ j=l\neq 0,\\[4pt]
\dfrac{\pi^{2}}{2}, & i=k=0,\ j=l\neq 0 \ \text{or}\ i=k\neq 0,\ j=l=0,\\[4pt]
0, & \text{for all other values of } i,j,k,l.
\end{cases}$$
(2.8)
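Because the weight (2.7) is a tensor product, the double integral in (2.8) factors into two one-dimensional weighted integrals. The sketch below (ours, using NumPy; names are illustrative) exploits this factorization to check the $\pi^{2}$, $\pi^{2}/4$, $\pi^{2}/2$ and $0$ cases numerically.

```python
import numpy as np

def ec_1d_inner(a, b, lo=-80.0, hi=80.0, pts=20001):
    """Trapezoid value of the 1D integral of E_a E_b with weight 1/(2 cosh(x/2))."""
    x = np.linspace(lo, hi, pts)
    t = np.arccos(np.tanh(x / 2.0))
    f = np.cos(a * t) * np.cos(b * t) / (2.0 * np.cosh(x / 2.0))
    h = x[1] - x[0]
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def ec_2d_inner(i, j, k, l):
    """The double integral in (2.8); the tensor-product weight makes it
    factor into two one-dimensional integrals."""
    return ec_1d_inner(i, k) * ec_1d_inner(j, l)
```

For example, `ec_2d_inner(0, 0, 0, 0)` approximates $\pi^{2}$ and `ec_2d_inner(1, 2, 1, 2)` approximates $\pi^{2}/4$.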

Multiplication $E_{i,j}(x,y)$ is said to be of higher order than $E_{m,n}(x,y)$ if $i+j>m+n$. Then the following linearization formula holds:

$$E_{i,j}(x,y)E_{m,n}(x,y)=\frac{1}{4}\bigl[E_{i+m,j+n}(x,y)+E_{i+m,|j-n|}(x,y)+E_{|i-m|,j+n}(x,y)+E_{|i-m|,|j-n|}(x,y)\bigr].$$
(2.9)
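This formula follows from applying the one-variable product rule $T_{a}T_{b}=\frac{1}{2}(T_{a+b}+T_{|a-b|})$ in each variable, and is easy to test pointwise. A minimal sketch (ours; helper names are illustrative):

```python
import math

def ec1(r, x):
    # E_r(x) = T_r(tanh(x/2)) = cos(r * arccos(tanh(x/2)))
    return math.cos(r * math.acos(math.tanh(x / 2.0)))

def ec2(r, s, x, y):
    # double EC polynomial E_{r,s}(x,y) = E_r(x) E_s(y), Eq. (2.4)
    return ec1(r, x) * ec1(s, y)

def product_rhs(i, j, m, n, x, y):
    # right-hand side of the linearization formula (2.9)
    return 0.25 * (ec2(i + m, j + n, x, y) + ec2(i + m, abs(j - n), x, y)
                   + ec2(abs(i - m), j + n, x, y) + ec2(abs(i - m), abs(j - n), x, y))
```

Evaluating both sides of (2.9) at a few random points confirms that they agree to machine precision.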

Function approximation

Let $u(x,y)$ be an infinitely differentiable function defined on $S$ ($-\infty<x,y<\infty$). Then it may be expressed in the form

$$u(x,y)=\sum_{r=0}^{\infty}\sum_{s=0}^{\infty}a_{r,s}E_{r,s}(x,y),$$
(2.10)

where

$$a_{r,s}=\frac{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}u(x,y)E_{r,s}(x,y)w_{e}(x,y)\,dx\,dy}{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}E_{r,s}^{2}(x,y)w_{e}(x,y)\,dx\,dy}.$$
(2.11)

If u(x,y) in Eq. (2.10) is truncated up to the m th and n th terms, then it can be written in the matrix form

$$u(x,y)\approx\sum_{r=0}^{m}\sum_{s=0}^{n}a_{r,s}E_{r,s}(x,y)=E(x,y)A$$
(2.12)

where $E(x,y)$ is a $1\times(m+1)(n+1)$ row vector of EC polynomials with entries $E_{r,s}(x,y)$,

$$E(x,y)=\bigl[E_{0,0}(x,y)\ \cdots\ E_{0,n}(x,y)\quad E_{1,0}(x,y)\ \cdots\ E_{1,n}(x,y)\quad \cdots\quad E_{m,0}(x,y)\ \cdots\ E_{m,n}(x,y)\bigr]$$
(2.13)

and A is an unknown coefficient vector,

$$A=[a_{0,0}\ \cdots\ a_{0,n}\quad a_{1,0}\ \cdots\ a_{1,n}\quad \cdots\quad a_{m,0}\ \cdots\ a_{m,n}]^{T}.$$
(2.14)

Matrix relations of the derivatives of a function

The $(i+j)$th-order partial derivative of $u(x,y)$ can be written as

$$u^{(i,j)}(x,y)\approx\sum_{r=0}^{m}\sum_{s=0}^{n}a_{r,s}E_{r,s}^{(i,j)}(x,y)$$
(2.15)

and its matrix form is

$$u^{(i,j)}(x,y)\approx E^{(i,j)}(x,y)A,$$
(2.16)

where $E_{r,s}^{(0,0)}=E_{r,s}$, $u^{(0,0)}(x,y)=u(x,y)$ and

$$E^{(i,j)}(x,y)=\bigl[E_{0,0}^{(i,j)}(x,y)\ \cdots\ E_{0,n}^{(i,j)}(x,y)\quad E_{1,0}^{(i,j)}(x,y)\ \cdots\ E_{1,n}^{(i,j)}(x,y)\quad \cdots\quad E_{m,0}^{(i,j)}(x,y)\ \cdots\ E_{m,n}^{(i,j)}(x,y)\bigr].$$

Proposition 1 Let $u(x,y)$ and its $(i,j)$th-order derivative be given by (2.12) and (2.16), respectively. Then there is a relation between the double EC polynomial row vector $E(x,y)$ and the row vector $E^{(i,j)}(x,y)$ of its $(i+j)$th-order partial derivatives, both of size $1\times(m+1)(n+1)$:

$$E^{(i,j)}(x,y)=E(x,y)(D_{x})^{i}(D_{y})^{j},$$
(2.17)

where $D_{x}$ and $D_{y}$ are $(m+1)(n+1)\times(m+1)(n+1)$ operational matrices of partial differentiation, given in block form by

$$D_{x}=[c_{\alpha,\beta}]^{T},\qquad c_{\alpha,\beta}=\begin{cases}\frac{\alpha}{4}I, & \beta=\alpha-1,\\[2pt] -\frac{\alpha}{4}I, & \beta=\alpha+1,\\[2pt] O, & \text{otherwise},\end{cases}\qquad \alpha,\beta=0,1,\ldots,m,$$

and

$$D_{y}=\operatorname{diag}(B,B,\ldots,B),\qquad B=[d_{\alpha,\beta}]^{T},\qquad d_{\alpha,\beta}=\begin{cases}\frac{\alpha}{4}, & \beta=\alpha-1,\\[2pt] -\frac{\alpha}{4}, & \beta=\alpha+1,\\[2pt] 0, & \text{otherwise},\end{cases}\qquad \alpha,\beta=0,1,\ldots,n,$$

where the block $B$ is repeated $(m+1)$ times along the diagonal. Here, $I$ and $O$ are the $(n+1)\times(n+1)$ identity and zero matrices, respectively, and $T$ denotes the usual matrix transpose.
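A concrete way to build $D_{x}$ and $D_{y}$ is via Kronecker products of a one-dimensional differentiation block with identity matrices, which reproduces the block structure above. The sketch below (ours, in NumPy; the row-vector ordering puts $E_{r,s}$ at index $r(n+1)+s$, as in (2.13)) also constructs $E(x,y)$, so the relation $E^{(1,0)}=E\,D_{x}$ can be checked against finite differences.

```python
import numpy as np

def d_block(size):
    """(size+1)x(size+1) block with (E D)_beta = (beta/4)(E_{beta-1} - E_{beta+1});
    the E_{size+1} term is dropped by the truncation."""
    D = np.zeros((size + 1, size + 1))
    for b in range(1, size + 1):
        D[b - 1, b] = b / 4.0
        if b + 1 <= size:
            D[b + 1, b] = -b / 4.0
    return D

def operational_matrices(m, n):
    """D_x and D_y of size (m+1)(n+1), acting on the row vector E(x, y)
    ordered as E_{r,s} at index r*(n+1) + s."""
    Dx = np.kron(d_block(m), np.eye(n + 1))
    Dy = np.kron(np.eye(m + 1), d_block(n))
    return Dx, Dy

def ec_row(m, n, x, y):
    """Row vector E(x, y) of Eq. (2.13), in the same ordering."""
    ex = np.cos(np.arange(m + 1) * np.arccos(np.tanh(x / 2.0)))
    ey = np.cos(np.arange(n + 1) * np.arccos(np.tanh(y / 2.0)))
    return np.kron(ex, ey)
```

Entries with $r=m$ (for $D_{x}$) and $s=n$ (for $D_{y}$) carry a truncation error, since the $E_{m+1,s}$ and $E_{r,n+1}$ terms of (2.21) and (2.25) are dropped; all other entries reproduce the exact partial derivatives.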

Proof Taking the partial derivative with respect to $x$ of $E_{0,s}$, $E_{1,s}$ and of both sides of the recurrence relation (2.5), we get

$$E_{0,s}^{(1,0)}(x,y)=0,$$
(2.18)

$$E_{1,s}^{(1,0)}(x,y)=\frac{2e^{x}}{(e^{x}+1)^{2}}E_{0,s}(x,y)=\frac{1}{4}E_{0,s}(x,y)-\frac{1}{4}E_{2,s}(x,y)$$
(2.19)

and

$$\begin{aligned}E_{r+1,s}^{(1,0)}(x,y)&=\frac{\partial}{\partial x}\bigl[2E_{1,0}^{(0,0)}(x,y)E_{r,s}^{(0,0)}(x,y)-E_{r-1,s}^{(0,0)}(x,y)\bigr]\\&=2E_{1,0}^{(1,0)}(x,y)E_{r,s}^{(0,0)}(x,y)+2E_{1,0}^{(0,0)}(x,y)E_{r,s}^{(1,0)}(x,y)-E_{r-1,s}^{(1,0)}(x,y),\qquad s\geq 0.\end{aligned}$$
(2.20)

By using the relations (2.18)-(2.20) for r=0,1,2,,m the elements c α , β of the matrix of partial derivatives D x can be obtained from the following equalities:

$$\begin{cases}E_{0,s}^{(1,0)}(x,y)=0,\\[2pt] E_{1,s}^{(1,0)}(x,y)=\frac{1}{4}E_{0,s}^{(0,0)}(x,y)-\frac{1}{4}E_{2,s}^{(0,0)}(x,y),\\[2pt] E_{2,s}^{(1,0)}(x,y)=\frac{1}{2}E_{1,s}^{(0,0)}(x,y)-\frac{1}{2}E_{3,s}^{(0,0)}(x,y),\\ \quad\vdots\\ E_{m,s}^{(1,0)}(x,y)=\frac{m}{4}E_{m-1,s}^{(0,0)}(x,y)-\frac{m}{4}E_{m+1,s}^{(0,0)}(x,y),\end{cases}\qquad s\geq 0.$$
(2.21)

Similarly, taking the partial derivative with respect to $y$ of $E_{r,0}$, $E_{r,1}$ and of both sides of the recurrence relation (2.6), we write

$$E_{r,0}^{(0,1)}(x,y)=0,$$
(2.22)

$$E_{r,1}^{(0,1)}(x,y)=\frac{2e^{y}}{(e^{y}+1)^{2}}E_{r,0}(x,y)=\frac{1}{4}E_{r,0}(x,y)-\frac{1}{4}E_{r,2}(x,y)$$
(2.23)

and

$$\begin{aligned}E_{r,s+1}^{(0,1)}(x,y)&=\frac{\partial}{\partial y}\bigl[2E_{0,1}^{(0,0)}(x,y)E_{r,s}^{(0,0)}(x,y)-E_{r,s-1}^{(0,0)}(x,y)\bigr]\\&=2E_{0,1}^{(0,1)}(x,y)E_{r,s}^{(0,0)}(x,y)+2E_{0,1}^{(0,0)}(x,y)E_{r,s}^{(0,1)}(x,y)-E_{r,s-1}^{(0,1)}(x,y),\qquad r\geq 0.\end{aligned}$$
(2.24)

Then, with the help of the relations (2.22)-(2.24), the elements $d_{\alpha,\beta}$ of the partial-derivative matrix $D_{y}$ can be obtained from

$$\begin{cases}E_{r,0}^{(0,1)}(x,y)=0,\\[2pt] E_{r,1}^{(0,1)}(x,y)=\frac{1}{4}E_{r,0}^{(0,0)}(x,y)-\frac{1}{4}E_{r,2}^{(0,0)}(x,y),\\ \quad\vdots\\ E_{r,n}^{(0,1)}(x,y)=\frac{n}{4}E_{r,n-1}^{(0,0)}(x,y)-\frac{n}{4}E_{r,n+1}^{(0,0)}(x,y),\end{cases}\qquad r\geq 0.$$
(2.25)

Note that, by truncation, we set $E_{r,s}^{(1,0)}(x,y)=E_{r,s}^{(0,0)}(x,y)=0$ for $r>m$ and $E_{r,s}^{(0,1)}(x,y)=E_{r,s}^{(0,0)}(x,y)=0$ for $s>n$.

From (2.21) and (2.25), the following equalities hold for $i=0,1,2,\ldots$ and $j=0,1,2,\ldots$:

$$\begin{aligned}E^{(1,0)}(x,y)&=E(x,y)D_{x},\\ E^{(2,0)}(x,y)&=E^{(1,0)}(x,y)D_{x}=\bigl(E(x,y)D_{x}\bigr)D_{x}=E(x,y)(D_{x})^{2},\\ &\ \,\vdots\\ E^{(i,0)}(x,y)&=E^{(i-1,0)}(x,y)D_{x}=E(x,y)(D_{x})^{i}\end{aligned}$$
(2.26)

and

$$\begin{aligned}E^{(0,1)}(x,y)&=E(x,y)D_{y},\\ E^{(0,2)}(x,y)&=E^{(0,1)}(x,y)D_{y}=\bigl(E(x,y)D_{y}\bigr)D_{y}=E(x,y)(D_{y})^{2},\\ &\ \,\vdots\\ E^{(0,j)}(x,y)&=E^{(0,j-1)}(x,y)D_{y}=E(x,y)(D_{y})^{j},\end{aligned}$$
(2.27)

where $E^{(0,0)}(x,y)=E(x,y)$ and $(D_{x})^{0}=(D_{y})^{0}=I$, with $I$ here denoting the $(m+1)(n+1)\times(m+1)(n+1)$ identity matrix.

Then, utilizing the equalities in (2.26) and (2.27), the explicit relation between the double EC polynomial row vector and those of its derivatives follows:

$$E^{(i,j)}(x,y)=E^{(i,0)}(x,y)(D_{y})^{j}=\bigl(E^{(0,0)}(x,y)(D_{x})^{i}\bigr)(D_{y})^{j}=E(x,y)(D_{x})^{i}(D_{y})^{j}$$

or

$$E^{(i,j)}(x,y)=E^{(0,j)}(x,y)(D_{x})^{i}=\bigl(E^{(0,0)}(x,y)(D_{y})^{j}\bigr)(D_{x})^{i}=E(x,y)(D_{y})^{j}(D_{x})^{i}.$$

 □

Remark $(D_{x})^{i}(D_{y})^{j}=(D_{y})^{j}(D_{x})^{i}$.

Corollary From Eqs. (2.16) and (2.17), it is clear that the derivatives of the function are expressed in terms of double EC coefficients as follows:

$$u^{(i,j)}(x,y)\approx E(x,y)(D_{x})^{i}(D_{y})^{j}A.$$
(2.28)

3 Collocation method with double EC polynomials

In obtaining numerical solutions of partial differential equations with the double EC method, the main step is to evaluate the necessary EC coefficients of the unknown function. For this purpose, Section 2 gives the explicit relations between the polynomials $E_{r,s}(x,y)$ of an unknown function and those of its derivatives $E_{r,s}^{(i,j)}(x,y)$ for different nonnegative integer values of $i$ and $j$.

In this section, we consider the higher-order linear PDE with variable coefficients of a general form

$$\sum_{i=0}^{p}\sum_{j=0}^{r}q_{i,j}(x,y)u^{(i,j)}(x,y)=f(x,y),\qquad -\infty<x,y<\infty$$
(3.1)

with the conditions mentioned in [23] as three possible cases:

$$\sum_{t=1}^{\rho}\sum_{i=0}^{p}\sum_{j=0}^{r}b_{t}^{i,j}u^{(i,j)}(\omega_{t},\eta_{t})=\lambda$$
(3.2)

and/or

$$\sum_{t=1}^{\upsilon}\sum_{i=0}^{p}\sum_{j=0}^{r}c_{t}^{i,j}(x)u^{(i,j)}(x,\gamma_{t})=g(x)$$
(3.3)

and/or

$$\sum_{t=1}^{\vartheta}\sum_{i=0}^{p}\sum_{j=0}^{r}d_{t}^{i,j}(y)u^{(i,j)}(\varepsilon_{t},y)=h(y).$$
(3.4)

Here, $u^{(0,0)}(x,y)=u(x,y)$, $u^{(i,j)}(x,y)=\frac{\partial^{i+j}}{\partial x^{i}\,\partial y^{j}}u(x,y)$, and $q_{i,j}(x,y)$, $f(x,y)$, $c_{t}^{i,j}(x)$, $g(x)$, $d_{t}^{i,j}(y)$, $h(y)$ are known functions on $S$ ($-\infty<x,y<\infty$). We now describe an approximate solution of this problem by means of the double EC series defined in (2.10). Our aim is to find the EC coefficients in the vector $A$. To this end, we represent the given problem and its conditions by a system of linear algebraic equations using collocation points.

Now, the collocation points can be determined in the inner domain as

$$x_{k}=\ln\left(\frac{1+\cos(k\pi/m)}{1-\cos(k\pi/m)}\right),\qquad y_{l}=\ln\left(\frac{1+\cos(l\pi/n)}{1-\cos(l\pi/n)}\right)\qquad (k=1,2,\ldots,m-1;\ l=1,2,\ldots,n-1)$$
(3.5)

and at the boundaries

  (i) $x_{m}\to-\infty$ and $y_{n}\to-\infty$,

  (ii) $x_{0}\to+\infty$ and $y_{0}\to+\infty$.

Since the EC polynomials converge at both boundaries, namely their limiting values there are $1$ or $-1$, the appearance of infinity among the collocation points causes no difficulty for the method.
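The collocation points (3.5) are simply the Chebyshev nodes $\cos(k\pi/m)$ pulled back through the map $y=(e^{x}-1)/(e^{x}+1)$, so that $\tanh(x_{k}/2)=\cos(k\pi/m)$; the endpoints $k=0$ and $k=m$ land at $\pm\infty$. A small sketch (ours, in NumPy):

```python
import numpy as np

def collocation_points(m):
    """x_k = ln((1 + cos(k*pi/m)) / (1 - cos(k*pi/m))), k = 0..m.

    Since tanh(x_k/2) = cos(k*pi/m), these are Chebyshev nodes pulled back
    through the map y = (e^x - 1)/(e^x + 1); the endpoints are +inf and -inf.
    """
    k = np.arange(m + 1)
    c = np.cos(k * np.pi / m)
    with np.errstate(divide="ignore"):
        return np.log((1.0 + c) / (1.0 - c))  # x_0 = +inf, x_m = -inf
```

IEEE infinities propagate harmlessly here, consistent with the remark above that the EC values at the infinite points are just $\pm 1$.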

Therefore, when we substitute the collocation points into the problem (3.1), we get

$$\sum_{i=0}^{p}\sum_{j=0}^{r}q_{i,j}(x_{k},y_{l})u^{(i,j)}(x_{k},y_{l})=f(x_{k},y_{l})\qquad(k=0,1,\ldots,m;\ l=0,1,\ldots,n).$$
(3.6)

The system (3.6) can be written in the matrix form as follows:

$$\sum_{i=0}^{p}\sum_{j=0}^{r}Q_{i,j}U^{(i,j)}=F,\qquad p\leq m,\ r\leq n,$$
(3.7)

where Q i , j denotes the diagonal matrix with the elements q i , j ( x k , y l ) (k=0,1,,m; l=0,1,,n) and F denotes the column matrix with the elements f( x k , y l ) (k=0,1,,m; l=0,1,,n).

Putting the collocation points into derivatives of the unknown function as in Eq. (2.28) yields

$$u^{(i,j)}(x_{k},y_{l})=E(x_{k},y_{l})(D_{x})^{i}(D_{y})^{j}A,\qquad U^{(i,j)}=\begin{bmatrix}u^{(i,j)}(x_{0},y_{0})\\ \vdots\\ u^{(i,j)}(x_{0},y_{n})\\ u^{(i,j)}(x_{1},y_{0})\\ \vdots\\ u^{(i,j)}(x_{1},y_{n})\\ \vdots\\ u^{(i,j)}(x_{m},y_{n})\end{bmatrix}=\mathbf{E}^{(i,j)}A=\mathbf{E}(D_{x})^{i}(D_{y})^{j}A,$$
(3.8)

where E is the block matrix given by

$$\mathbf{E}=\begin{bmatrix}E(x_{0},y_{0})\\ E(x_{0},y_{1})\\ \vdots\\ E(x_{0},y_{n})\\ E(x_{1},y_{0})\\ \vdots\\ E(x_{m},y_{n})\end{bmatrix}$$

and for i=j=0, we see

$$U=\mathbf{E}A.$$
(3.9)

Therefore, from Eq. (3.7), we get a system of the matrix equation for the PDE

$$\left(\sum_{i=0}^{p}\sum_{j=0}^{r}Q_{i,j}\mathbf{E}(D_{x})^{i}(D_{y})^{j}\right)A=F,$$
(3.10)

which corresponds to a system of (m+1)(n+1) linear algebraic equations with unknown double EC coefficients a r , s .

It is also noted that the structures of the matrices $Q_{i,j}$ and $F$ vary with the number of collocation points and with the structure of the problem. By contrast, for fixed truncation limits $m$ and $n$ of the EC series, $\mathbf{E}$, $D_{x}$ and $D_{y}$ do not change their nature; they depend only on the number of collocation points.

Briefly, we can denote the expression in parentheses in (3.10) by $W$ and write

WA=F.
(3.11)

Then the augmented matrix of Eq. (3.11) becomes

[W:F].
(3.12)

Applying the same procedure to the given conditions (3.2)-(3.4), we have

$$\sum_{t=1}^{\rho}\sum_{i=0}^{p}\sum_{j=0}^{r}b_{t}^{i,j}E(\omega_{t},\eta_{t})(D_{x})^{i}(D_{y})^{j}A=\lambda$$
(3.13)

$$\sum_{t=1}^{\upsilon}\sum_{i=0}^{p}\sum_{j=0}^{r}c_{t}^{i,j}(x_{k})E(x_{k},\gamma_{t})(D_{x})^{i}(D_{y})^{j}A=g(x_{k}),\qquad k=0,1,\ldots,m,$$
(3.14)

$$\sum_{t=1}^{\vartheta}\sum_{i=0}^{p}\sum_{j=0}^{r}d_{t}^{i,j}(y_{l})E(\varepsilon_{t},y_{l})(D_{x})^{i}(D_{y})^{j}A=h(y_{l}),\qquad l=0,1,\ldots,n.$$
(3.15)

Then these can be written in a compact form

VA=R,
(3.16)

where $V$ is an $h\times(m+1)(n+1)$ matrix and $R$ is an $h\times 1$ matrix, $h$ being the total number of rows generated by the given conditions. The augmented matrix of the conditions becomes

[V:R].
(3.17)

Consequently, (3.12) together with (3.17) can be written in a new augmented matrix form

$$[\widetilde{W}:\widetilde{F}].$$
(3.18)

This form is obtained by replacing some rows of (3.12) by rows of (3.17) accordingly, or by adding those rows to the matrix (3.12), provided that $\det\widetilde{W}\neq 0$. Then the system can be written in the following compact form:

$$\widetilde{W}A=\widetilde{F}.$$
(3.19)

Finally, the vector $A$ (and thereby the coefficients $a_{r,s}$) is determined by applying a numerical method (e.g., Gaussian elimination) designed for systems of linear equations, and the approximate solution is obtained. In other words, this gives the double EC series solution of problem (3.1) with the given conditions.

4 Illustration

Now, we give an example to show the ability and efficiency of the double EC polynomial approximation method.

Example

Let us consider the linear partial differential equation

$$u_{xy}-\frac{2}{e^{x}+1}u_{y}=\frac{4e^{y}}{(e^{x}+1)^{2}(e^{y}+1)^{2}}$$

with the conditions

$$u_{y}(0,y)=0,\qquad u(x,0)=0.$$

It is known that the exact solution of the problem is $u(x,y)=\frac{e^{x+y}-e^{x}-e^{y}+1}{(e^{x}+1)(e^{y}+1)}$.
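Note that the exact solution factors as $\frac{e^{x}-1}{e^{x}+1}\cdot\frac{e^{y}-1}{e^{y}+1}=E_{1,1}(x,y)$, which makes this example convenient for checking the machinery of Section 3. The sketch below (ours, in NumPy) assembles the collocated system (3.10) for this PDE; for brevity it only verifies that the exact coefficient vector (with $a_{1,1}=1$ and all other entries zero) satisfies the system, rather than carrying out the full solve with condition rows. It works entirely in the variables $t=\tanh(x/2)$ and $\tau=\tanh(y/2)$, in which $E_{r,s}(x_{k},y_{l})=\cos(rk\pi/m)\cos(sl\pi/n)$, the coefficient $-\frac{2}{e^{x}+1}$ becomes $-(1-t)$, and $f=(1-\tau^{2})\bigl(\frac{1-t}{2}\bigr)^{2}$; this avoids evaluating anything at the infinite boundary points directly.

```python
import numpy as np

m, n = 6, 6
tk = np.cos(np.arange(m + 1) * np.pi / m)   # tanh(x_k / 2) at the collocation points
tl = np.cos(np.arange(n + 1) * np.pi / n)   # tanh(y_l / 2)

def d_block(size):
    # operational block: (E D)_b = (b/4) E_{b-1} - (b/4) E_{b+1}, truncated at `size`
    D = np.zeros((size + 1, size + 1))
    for b in range(1, size + 1):
        D[b - 1, b] = b / 4.0
        if b + 1 <= size:
            D[b + 1, b] = -b / 4.0
    return D

Dx = np.kron(d_block(m), np.eye(n + 1))
Dy = np.kron(np.eye(m + 1), d_block(n))

rows, rhs = [], []
for k in range(m + 1):
    for l in range(n + 1):
        # E(x_k, y_l): entry E_{r,s} = T_r(t_k) T_s(tau_l)
        ex = np.cos(np.arange(m + 1) * np.arccos(np.clip(tk[k], -1, 1)))
        ey = np.cos(np.arange(n + 1) * np.arccos(np.clip(tl[l], -1, 1)))
        E = np.kron(ex, ey)
        q01 = -(1.0 - tk[k])                       # -2/(e^x + 1) in terms of t
        rows.append(E @ Dx @ Dy + q01 * (E @ Dy))  # u_xy - (2/(e^x+1)) u_y
        rhs.append((1.0 - tl[l] ** 2) * ((1.0 - tk[k]) / 2.0) ** 2)  # f(x_k, y_l)
W, F = np.array(rows), np.array(rhs)

# exact solution u = E_{1,1}(x, y): coefficient a_{1,1} = 1, all others 0
A = np.zeros((m + 1) * (n + 1))
A[1 * (n + 1) + 1] = 1.0
residual = np.max(np.abs(W @ A - F))
```

Since the exact solution lies in the truncated basis (and its low-order derivatives are unaffected by the truncation of $D_{x}$, $D_{y}$), the residual is zero up to rounding.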

Absolute errors of the proposed procedure at the grid points are tabulated for m=n=15 in Table 1.

Table 1 Absolute errors of Example at different points

Contour plots of the exact and approximate solutions are given for the region $-2\leq x,y\leq 10$ in (a) and (b), and for the region $-3\leq x\leq 3$, $-5\leq y\leq 5$ in (c) and (d) of Figure 1, respectively. Figure 2 shows a graphical representation of the exact solution and, for $m=n=15$, of the approximate solution of the example.

Figure 1

Contour plots of exact and approximate solutions.

Figure 2

Exact and approximate solution of the example.

5 Conclusion

In this article, a new solution scheme for partial differential equations with variable coefficients defined on unbounded domains has been investigated, and EC polynomials have been extended to double EC polynomials to solve multi-variable problems. The double EC collocation method is very effective and directly applicable to multi-variable (especially two-variable) problems on the infinite domain. For computational purposes, this approach also avoids extra computation by using sparse operational matrices, and so saves memory. Moreover, the double EC polynomial approach deals directly with infinite boundaries, and its operational matrices have few nonzero entries, which lie along two subdiagonals.