1 Introduction

Thermal problems generally have three main components: (i) the material parameters involved in the solution, such as the heat conduction coefficient and the specific heat capacity of the medium; (ii) the initial state of the system under consideration; and (iii) the boundary conditions accompanying the set of governing equations. Inverse thermal problems have interesting applications in many branches of science as well as in mechanical, thermal, and chemical engineering. Such problems may be classified into three main categories, depending on the required outcome. One may be interested in experiments to determine some of the material coefficients, see, e.g., [3, 4, 7, 12, 20,21,22, 27], to reconstruct the initial conditions, see, e.g., [6, 9, 13, 18, 24], or to reconstruct the boundary conditions, see, e.g., [8, 11, 14,15,16, 26], which may be difficult to determine otherwise.

The main mathematical difficulty facing inverse thermal problems is the presence of noise in the data observed during experimental measurements, which is tightly related to the stability of the boundary-value problem under consideration. Such measurements usually involve temperatures and heat fluxes at the boundaries, obtained using thermometers and thermocouples. Additional data may be available through quantities measured inside the medium by means of sensors; this is one reason why numerical solutions are sometimes preferable to exact ones.

In recent times, many methods have been used to solve inverse heat problems, including the sinc methods, see, e.g., [1, 9, 21, 22, 26, 27]. In this work, we are interested in solving inverse heat problems in which the sinc function is replaced by a Taylor approximation polynomial, see [2].

In this paper, we consider the inverse heat conduction problem

$$\begin{aligned} \begin{array}{cccc} \displaystyle \partial _{t} u(x,t) &{}=&{} \displaystyle \partial _{xx} u(x,t),&{} x\in \mathbb R,\, t>0, \\ \displaystyle u(x,0) &{}=&{} \displaystyle f(x),\,\,\,\,&{} x\in \mathbb R, \end{array} \end{aligned}$$
(1.1)

to determine an initial condition f from a known solution u(x, t) at a specific time \(t_0\). In the direct problem of (1.1), the temperature u(x, t) is obtained when f is given. If, for instance, the initial data \(f\in L^{2} (\mathbb R)\), cf. [17, 25], then the solution of (1.1) is obtained in terms of the heat kernel:

$$\begin{aligned} u(x,t)= \frac{1}{\sqrt{4\pi t}} \int _{-\infty }^{\infty } \exp \left( \frac{-(x-y)^{2}}{4 t}\right) f(y) dy. \end{aligned}$$
(1.2)

In [9], Gilliam et al. described a numerical technique for approximating the initial data, within a special class of analytic functions, for solving the inverse heat problem (1.1). This procedure, which is based on the sinc method, requires samples of the temperature at a single time and at a set of equally spaced spatial points; it is stable, and it is exact when the full infinite set of samples is used. In more detail, let \(\textbf{B} (\mathcal {S}_d), d > 0\), be the class of analytic functions defined on the infinite strip \(\mathcal {S}_d:= \{z=x+iy \in \mathbb C: |y| < d\} \subset \mathbb C\), such that

$$\begin{aligned} \int _{-d}^{d} |f(x+iy)|dy = O\left( x^\gamma \right) ,\quad |x|\rightarrow \infty ,\, \gamma <1, \end{aligned}$$
(1.3)

and such that \(N(f,\mathcal {S}_d) < \infty \), where

$$\begin{aligned} N (f,\mathcal {S}_d ):= \lim _{y\rightarrow d^{-}} \int _{\mathbb R }(|f(x+iy)|+|f(x-iy)|) dx. \end{aligned}$$
(1.4)

The solution of the inverse problem introduced in [9] depends on the expandability of f in the sinc-interpolation series

$$\begin{aligned} f(y)\simeq \sum _{n=-\infty }^{\infty } f\left( n h\right) \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) , \end{aligned}$$
(1.5)

where \(h > 0\) is a fixed step-size and

$$\begin{aligned} \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) :=\left\{ \begin{array}{ll} \displaystyle \frac{\sin \pi \left( \frac{y-nh}{h}\right) }{\pi \left( \frac{y-nh}{h}\right) }, &{}\displaystyle y\ne n h, \\ \displaystyle 1, &{}\displaystyle y=n h. \end{array}\right. \end{aligned}$$
(1.6)

The aliasing error of (1.5) is defined for \( y\in \mathbb R\) by

$$\begin{aligned} \mathcal {E}(y)=f(y)-\sum _{n=-\infty }^{\infty } f\left( n h\right) \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) . \end{aligned}$$
(1.7)

If \(f\in \textbf{B} (\mathcal {S}_d)\), then we have, see [23, p. 177]

$$\begin{aligned} \left\| \mathcal {E} \right\| _{\infty }:= \sup _{y\in \mathbb R} |\mathcal {E}(y)| \le \frac{N (f,\mathcal {S}_d )}{2\pi d \sinh (\pi d/h)}. \end{aligned}$$
(1.8)

For \(N \in \mathbb N\), the truncation error of (1.5) is defined by

$$\begin{aligned} \left( T_{N}f\right) (y):= & {} f(y)-\sum _{\vert n \vert \le N} f\left( n h\right) \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) \nonumber \\= & {} \sum _{\vert n \vert > N} f\left( n h\right) \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) , \quad y\in \mathbb R. \end{aligned}$$
(1.9)

If f satisfies the decay condition

$$\begin{aligned} |f(y)|\le K e^{-\alpha |y|}, \quad y\in \mathbb R, \end{aligned}$$
(1.10)

where K and \(\alpha \) are positive constants, then by choosing \(\displaystyle h =\left( \frac{\pi d}{\alpha N}\right) ^{1/2}\), we have, see [23, p. 178]

$$\begin{aligned} \left\| T_{N} f \right\| _{\infty }:= \sup _{y\in \mathbb R} |\left( T_{N}f\right) (y)| \le C \sqrt{N} e^{-(\pi d \alpha N)^{1/2}}, \end{aligned}$$
(1.11)

where C is a constant depending only on \(f, \alpha \) and d. The solution of [9] is obtained by solving a linear system of equations which results from substituting the truncated sinc series of \(f(\cdot )\) into (1.2). However, due to the presence of \(\exp \left( \frac{-(x-y)^2}{4t}\right) \) in this integral, the coefficients of this system cannot be computed explicitly. Instead, an approximate linear system is obtained. Hence, an amplitude-type error arises, which is not considered in [9]. In [1], Annaby and Asharabi investigated the amplitude error which results when the exact samples f(nh) are replaced by approximate ones \(\widetilde{f}(n h)\), such that there is a sufficiently small \(\varepsilon >0\) satisfying \(\varepsilon _{n}:=\left| f\left( n h\right) -\widetilde{f}\left( n h\right) \right| <\varepsilon \) for all \(n\in \mathbb Z\). The amplitude error is defined, for \( y\in \mathbb {R}\), to be

$$\begin{aligned} \textrm{A}(\varepsilon ,f;y):= \sum _{n=-\infty }^{\infty } \left\{ f\left( n h\right) -\widetilde{f}\left( n h\right) \right\} \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) . \end{aligned}$$
(1.12)

If f satisfies the condition

$$\begin{aligned} |f(y)|\le \frac{M_f}{|y|^{\gamma +1}}, \quad \gamma \in ]0,1],\, |y|>1, \end{aligned}$$
(1.13)

where \(M_f\) is a positive constant that depends only on f, then, see [1]

$$\begin{aligned} |\textrm{A}(\varepsilon ,f;y)|\le \frac{4}{\gamma +1}\left( 3^{(\gamma +1)/2} e+ M_f 2^{(\gamma +1)/2} e^{1/4}\right) \varepsilon \log (1/\varepsilon ). \end{aligned}$$
(1.14)

Hence, the error caused by using the sinc technique for solving the inverse problem is a combination of both truncation and amplitude errors.
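As a quick numerical sanity check (our sketch, not code from [1]), the bound (1.14) can be compared with the amplitude error produced by uniformly perturbed samples of \(f(y)=1/(1+y^2)\), which satisfies (1.13) with \(\gamma =1\) and \(M_f=1\). The finite window \(|n|\le 200\) stands in for the full bilateral sum, and the step size \(h=1/2\) is an arbitrary illustrative choice:

```python
import numpy as np

# f(y) = 1/(1 + y^2) satisfies (1.13) with gamma = 1 and M_f = 1.
rng = np.random.default_rng(0)
h, eps = 0.5, 1e-3                           # step size and noise level (both arbitrary)
n = np.arange(-200, 201)                     # finite window standing in for the full sum
pert = rng.uniform(-eps, eps, size=n.size)   # the differences f(nh) - f~(nh), |pert| < eps

def amp_err(y):
    """Amplitude error (1.12) for the perturbed samples."""
    # np.sinc uses the normalized convention sin(pi x)/(pi x) of (1.6)
    return np.sum(pert * np.sinc((y - n * h) / h))

# Right-hand side of (1.14) with gamma = 1, M_f = 1.
bound = 2.0 * (3.0 * np.e + 2.0 * np.exp(0.25)) * eps * np.log(1.0 / eps)
worst = max(abs(amp_err(y)) for y in np.linspace(-5, 5, 101))
```

For \(\varepsilon =10^{-3}\) the observed amplitude error stays well below the right-hand side of (1.14), consistent with the bound being uniform in y.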

In this work, we implement the Shannon-Taylor approximation approach introduced by Butzer and Engels in [2] to solve the inverse heat problem (1.1). In this approach the sinc function of (1.6) is replaced by

$$\begin{aligned} \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) \simeq \sum _{j=0}^{m} \frac{(-1)^j }{(2j+1)!}\left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j},\quad m\in \mathbb N,\, y\in \mathbb R. \end{aligned}$$
(1.15)
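To see how (1.15) behaves, the following sketch (our illustration, not code from [2]) compares sinc with its degree-2m Taylor polynomial on a moderate interval:

```python
import numpy as np
from math import factorial

def sinc_taylor(x, m):
    """Degree-2m Taylor polynomial (1.15) of sinc(x) = sin(pi x)/(pi x)."""
    return sum((-1)**j * (np.pi * x)**(2 * j) / factorial(2 * j + 1)
               for j in range(m + 1))

x = np.linspace(-3, 3, 601)
# np.sinc uses the same normalized convention sin(pi x)/(pi x) as (1.6).
err = np.max(np.abs(np.sinc(x) - sinc_taylor(x, 20)))
```

The agreement is excellent for \(|x|\le 3\) with \(m=20\), but it degrades rapidly once \(\pi |x|\) exceeds roughly 2m, which is why m must grow with N in (1.16).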

In [2], the authors gave conditions on m to guarantee that

$$\begin{aligned} f(y)=\lim _{N\rightarrow \infty }\sum _{n=-N}^{N}f\left( n h \right) \sum _{j=0}^{m(N)} \frac{(-1)^j }{(2j+1)!}\left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}. \end{aligned}$$
(1.16)

Here m(N) is a function of N, normally taken to be a multiple of N. They also investigated the truncation error and established precise error estimates for band-limited functions. The following integral representation, cf. [10, p. 365], will be required in the sequel:

$$\begin{aligned} \int _{-\infty }^{\infty } u^{n} e^{-(u - \beta )^2} du = \frac{\sqrt{\pi }}{(2i)^{n} } H_{n}(i \beta ),\quad n=0,1,2,\ldots ,\, \beta \in \mathbb C, \end{aligned}$$
(1.17)

where \(H_{n}(\cdot )\) is the Hermite polynomial of degree n and \(i=\sqrt{-1}\).
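The identity (1.17) is easy to verify numerically. The sketch below (illustrative only) evaluates \(H_n\) by the standard three-term recurrence and approximates the integral by a Riemann sum on a wide grid, here for the sample values \(n=3\), \(\beta =0.7\):

```python
import numpy as np

def hermite(n, z):
    """Physicists' Hermite polynomial via H_{k+1}(z) = 2 z H_k(z) - 2 k H_{k-1}(z)."""
    h_prev, h_cur = 1.0 + 0 * z, 2 * z   # H_0 and H_1
    if n == 0:
        return h_prev
    for m in range(1, n):
        h_prev, h_cur = h_cur, 2 * z * h_cur - 2 * m * h_prev
    return h_cur

n, beta = 3, 0.7
u, du = np.linspace(-12, 12, 200001, retstep=True)
lhs = du * np.sum(u**n * np.exp(-(u - beta)**2))         # left-hand side of (1.17)
rhs = np.sqrt(np.pi) / (2j)**n * hermite(n, 1j * beta)   # right-hand side of (1.17)
```

For real \(\beta \) the right-hand side is real up to rounding, as it must be, since the left-hand side is a real integral.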

In the next section, we establish a uniform error estimate for the truncated series (1.16) when \(f\in \textbf{B} (\mathcal {S}_d)\). In Sects. 3 and 4, we present the method used to solve the problem of the heat equation on \(\mathbb R\) and \(\mathbb R^{+}\), respectively. In Sect. 5, numerical experiments illustrating the derived results are presented.

2 Shannon-Taylor interpolation error

Let \(N\in \mathbb {N}\), \(f\in \textbf{B} (\mathcal {S}_d)\). The truncation error associated with the Taylor sampling series (1.16) is defined for \( y\in \mathbb R\) by

$$\begin{aligned} T_{N,\rho }(f)(y):=f(y)-\sum _{\vert n \vert \le N}f\left( n h\right) \sum _{j=0}^{\rho N} \frac{(-1)^j}{(2j+1)!} \left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}, \end{aligned}$$
(2.1)

where m(N) is taken as \(\rho N\), and \(\rho \in \mathbb {N},\rho \ge 5\), see [2].

In the following, we establish a uniform error estimate for the truncation error.

Theorem 2.1

Let \(f\in \textbf{B} (\mathcal {S}_d)\), and satisfy the condition (1.10). Then, by choosing

$$\begin{aligned} h =\left( \frac{\pi d}{\alpha N}\right) ^{1/2}, \end{aligned}$$
(2.2)

and \(\rho =5\), we have for \(|y|< N h/7\), \(N\in \mathbb {N}\),

$$\begin{aligned} |T_{N,\rho }(f)(y)| \le \frac{N (f,\mathcal {S}_d )}{2\pi d \sinh (\pi d/h)}+ C \sqrt{N} e^{-(\pi d \alpha N)^{1/2}}+\frac{3\Vert f\Vert _{\infty }}{10\sqrt{20\pi }}\frac{1}{\sqrt{N}}(0.785)^{N}, \end{aligned}$$
(2.3)

where \({\displaystyle \Vert f\Vert _{\infty }:=\sup _{y\in \mathbb R}\left| f(y)\right| }\) and C is a constant depending only on \(f, \alpha \) and d.

Proof

Let \(f\in \textbf{B} (\mathcal {S}_d)\) and \(N\in \mathbb {N}\), then

$$\begin{aligned} \nonumber { |T_{N,\rho }(f)(y)|}&{ =}&{ \left| f(y)-\sum _{\vert n \vert \le N}f\left( n h\right) \sum _{j=0}^{\rho N} \frac{(-1)^j}{(2j+1)!} \left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}\right| } \\ \nonumber&{ \le }&{ \left| f(y)-\sum _{n=-\infty }^{\infty } f\left( n h\right) \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) \right| }\\ \nonumber{} & {} { +\left| \sum _{\vert n \vert > N} f\left( n h\right) \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) +\sum _{\vert n \vert \le N} f\left( n h\right) \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) \right. } \\{} & {} {\left. -\sum _{\vert n \vert \le N}f\left( n h\right) \sum _{j=0}^{\rho N} \frac{(-1)^j}{(2j+1)!} \left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}\right| .} \end{aligned}$$
(2.4)

Since

$$\begin{aligned} \nonumber \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right)= & {} \sum _{j=0}^{\rho N} \frac{(-1)^j }{(2j+1)!}\left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}\\{} & {} + \sum _{j=\rho N+1}^{\infty } \frac{(-1)^j }{(2j+1)!}\left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}, \end{aligned}$$
(2.5)

then

$$\begin{aligned} |T_{N,\rho }(f)(y)| \le \left| \mathcal {E}(y)\right| +\left| \left( T_{N}f\right) (y)\right| +\left| E_{N,\rho } (f)(y)\right| , \end{aligned}$$
(2.6)

where

$$\begin{aligned} \left| E_{N,\rho } (f)(y)\right| =\left| \sum _{\vert n \vert \le N}f\left( n h\right) \sum _{j=\rho N+1}^{\infty } \frac{(-1)^j}{(2j+1)!} \left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}\right| . \end{aligned}$$
(2.7)

Let \(\rho =5\), then we have for \(|y|< N h/7\), see [2]

$$\begin{aligned} \left| E_{N,5} (f)(y)\right| \le \frac{3\Vert f\Vert _{\infty }}{10\sqrt{20\pi }}\frac{1}{\sqrt{N}}(0.785)^{N}. \end{aligned}$$
(2.8)

Substituting from (1.8), (1.11) and (2.8) in (2.6) we get (2.3).

3 Shannon-Taylor inversion problem on \(\mathbb R\)

In this section, we establish the Shannon-Taylor technique to solve the inverse heat conduction problem (1.1). First of all, assume that the initial data, which is to be recovered, satisfies \(f\in \textbf{B}(\mathcal {S}_d)\), and it is approximated via the truncated Shannon-Taylor series

$$\begin{aligned} f(y)\simeq \sum _{n=-N}^{N}f\left( n h \right) \sum _{j=0}^{5N} \frac{(-1)^j }{(2j+1)!}\left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}. \end{aligned}$$
(3.1)
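As a small illustration (our sketch, using the Gaussian data of Example 2 and the admissible choices \(\alpha =1/4\), \(d=1\) in (2.2), which are ours), the truncated series (3.1) can be evaluated directly:

```python
import numpy as np
from math import factorial

def shannon_taylor(f, y, N, h):
    """Truncated Shannon-Taylor series (3.1) with m(N) = 5N."""
    total = 0.0
    for n in range(-N, N + 1):
        z = np.pi * (y - n * h) / h
        poly = sum((-1)**j * z**(2 * j) / factorial(2 * j + 1)
                   for j in range(5 * N + 1))
        total += f(n * h) * poly
    return total

f = lambda y: np.exp(-y**2 / 4)          # Gaussian initial data of Example 2
N = 4
h = np.sqrt(np.pi / (0.25 * N))          # step size (2.2) with alpha = 1/4, d = 1
err = max(abs(f(y) - shannon_taylor(f, y, N, h)) for y in np.linspace(-1, 1, 21))
```

Even for \(N=4\), the error on \([-1,1]\) (inside the region \(|y|<Nh/7\) of Theorem 2.1) is already of the order \(10^{-2}\) or better. In double precision, N cannot be taken very large, since the inner polynomial suffers cancellation once \(\pi |y-nh|/h\) is large.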

Substituting from (3.1) into (1.2), we obtain

$$\begin{aligned} u(x,t)\simeq \frac{1}{\sqrt{4\pi t}} \sum _{n=-N}^{N} f (n h) \sum _{j=0}^{5N} \frac{(-1)^j \pi ^{2j}}{(2j+1)!} \int _{-\infty }^{\infty } e^{\frac{-(x-y)^{2}}{4 t}} \left( \frac{y-nh}{h}\right) ^{2j} dy. \end{aligned}$$
(3.2)

Choosing \(x_{k} = kh\), \(\displaystyle s =\frac{ y-kh}{h}\) and \(l = n - k\) in (3.2) yields

$$\begin{aligned} u(x_k,t)\simeq \frac{h}{\sqrt{4\pi t}} \sum _{n=-N}^{N} f (n h) \sum _{j=0}^{5N} \frac{(-1)^j \pi ^{2j}}{(2j+1)!} \int _{-\infty }^{\infty } e^{\frac{-(sh)^{2}}{4 t}} \left( s-l\right) ^{2j} ds. \end{aligned}$$
(3.3)

Taking \(\displaystyle \tilde{t}=\left( \frac{h}{2\pi }\right) ^2\) and from (1.17), one obtains

$$\begin{aligned} \nonumber u(x_k,\tilde{t})\simeq & {} \sqrt{\pi } \sum _{n=-N}^{N} f (n h) \sum _{j=0}^{5N} \frac{(-1)^j \pi ^{2j}}{(2j+1)!} \int _{-\infty }^{\infty } e^{-(\pi s)^{2}} \left( s-l\right) ^{2j} ds\\ \nonumber= & {} \sqrt{\pi } \sum _{n=-N}^{N} f (n h) \sum _{j=0}^{5N} \frac{(-1)^j \pi ^{2j}}{(2j+1)!} \left[ \pi ^{-1-2j}\int _{-\infty }^{\infty } e^{-u^{2}} (u-\pi l)^{2j} du\right] \\ \nonumber= & {} \sum _{n=-N}^{N} f (n h) \sum _{j=0}^{5N} \frac{(-1)^j }{(2j+1)!\sqrt{\pi }} \int _{-\infty }^{\infty } e^{-(u+\pi l)^{2}} u^{2j} du\\= & {} \sum _{n=-N}^{N} f (n h) \sum _{j=0}^{5N} \frac{1 }{(2j+1)!\, 2^{2j}} H_{2j}(i \pi l), \end{aligned}$$
(3.4)

where \(H_{2j}(\cdot )\) is an even function, see [19]. Thus, we end with the linear system

$$\begin{aligned} u(x_k,\tilde{t})= \sum _{n=-N}^{N} f (n h) \beta _{n-k},\quad k=-N,\ldots ,N, \end{aligned}$$
(3.5)

where

$$\begin{aligned} \beta _{l}= \sum _{j=0}^{5N} \frac{ 1}{(2j+1)!\, 2^{2j}} H_{2j}(i \pi l). \end{aligned}$$
(3.6)

The system (3.5) can be written in a more compact form as

$$\begin{aligned} \textbf{B}_{N} \vec {f} = \vec {u}, \end{aligned}$$
(3.7)

where \(\textbf{B}_N\) is the \((2N+1)\times (2N+1)\) symmetric Toeplitz matrix

$$\begin{aligned} \left( \begin{array}{ccccc} \beta _{0} &{} \beta _{1} &{} \beta _{2} &{} \cdots &{} \beta _{2N} \\ \beta _{1} &{} \beta _{0} &{} \beta _{1} &{} \cdots &{} \beta _{2N-1} \\ \beta _{2} &{} \beta _{1} &{} \beta _{0} &{} \cdots &{} \beta _{2N-2} \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ \beta _{2N} &{} \beta _{2N-1} &{} \beta _{2N-2} &{} \cdots &{} \beta _{0} \end{array}\right) , \end{aligned}$$
(3.8)

with entries \(\displaystyle \beta _{i,j}=\beta _{i-j}\) and \(\displaystyle \beta _{-l}=\beta _{l}\). The \((2N + 1)\)-vectors \(\displaystyle \vec {f}\) and \(\displaystyle \vec {u}\) are given by

$$\begin{aligned} \vec {f}= & {} (f (-N h), \cdots , f (N h))^{T}, \\ \vec {u}= & {} (u (-N h,\tilde{t}), \cdots , u (N h,\tilde{t}))^{T}, \end{aligned}$$

where \(\displaystyle A^{T}\) denotes the transpose of a matrix A. Assuming that \(\vec {u}\) is known, we determine \(\vec {f}\) from (3.7) and use (3.1) to approximate f. Note that, in spite of the \((2N+1)^2\) nonzero entries of the matrix \(\textbf{B}_N\), we only need to compute the \(2N+1\) terms \(\beta _{0},\ldots ,\beta _{2N}\). Although the matrix (3.8) is known explicitly, since its entries are computed from (3.6), it is a tough task to verify whether it is invertible. Moreover, due to the factorials, the computation of the entries requires fast machines. We checked invertibility for \(1\le N\le 12\), i.e., for matrices up to \(25\times 25\), and found that the system is solvable.
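The construction (3.5)-(3.8) can be sketched as follows (illustrative code, not the authors' implementation): the entries \(\beta _l\) are computed from (3.6) via the Hermite recurrence, the samples f(nh) are taken from the Gaussian data of Example 2, the data vector \(\vec u\) is synthesized by the forward map, and the samples are then recovered by solving (3.7); the choices \(N=2\), \(\alpha =1/4\), \(d=1\) are ours:

```python
import numpy as np
from math import factorial

def hermite(n, z):
    """Physicists' Hermite polynomial via the three-term recurrence."""
    h_prev, h_cur = 1.0 + 0 * z, 2 * z
    if n == 0:
        return h_prev
    for m in range(1, n):
        h_prev, h_cur = h_cur, 2 * z * h_cur - 2 * m * h_prev
    return h_cur

def beta(l, N):
    """beta_l from (3.6); H_{2j}(i*pi*l) is real because H_{2j} is even, and beta_{-l} = beta_l."""
    return sum(hermite(2 * j, 1j * np.pi * abs(l)).real / (factorial(2 * j + 1) * 4**j)
               for j in range(5 * N + 1))

N, alpha, d = 2, 0.25, 1.0
h = np.sqrt(np.pi * d / (alpha * N))                     # step size (2.2)
k = np.arange(-N, N + 1)
B = np.array([[beta(a - b, N) for b in k] for a in k])   # symmetric Toeplitz matrix (3.8)

f = lambda y: np.exp(-y**2 / 4)                          # initial data of Example 2
u = B @ f(k * h)                                         # synthetic data u(x_k, t~), cf. (3.5)
f_rec = np.linalg.solve(B, u)                            # recover the samples f(nh) from (3.7)
```

Since \(\beta _{-l}=\beta _l\), only \(\beta _0,\ldots ,\beta _{2N}\) are actually computed, and the recovery of the samples is exact up to the conditioning of \(\textbf{B}_N\).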

Suppose that \(f_{app}\) denotes the approximation of f resulting from the Shannon-Taylor technique. The next corollary is a direct consequence of Theorem 2.1.

Corollary 1

Let \(f\in \textbf{B} (\mathcal {S}_d)\) satisfy condition (1.10). Then, by choosing \(\displaystyle h =\left( \frac{\pi d}{\alpha N}\right) ^{1/2}\), \(\rho =5\), we have for \(|y|< N h/7\), \(N\in \mathbb {N}\),

$$\begin{aligned} \Vert f-f_{app}\Vert _{\infty } \le \frac{N (f,\mathcal {S}_d )}{2\pi d \sinh (\pi d/h)}+C \sqrt{N} e^{-(\pi d \alpha N)^{1/2}}+\frac{3\Vert f\Vert _{\infty }}{10\sqrt{20\pi }}\frac{1}{\sqrt{N}}(0.785)^{N},\nonumber \\ \end{aligned}$$
(3.9)

where C is a constant depending only on \(f, \alpha \) and d.

4 Shannon-Taylor inversion problem on \(\mathbb R^{+}\)

In this section, we consider the problem (1.1) when \( x\in \mathbb R^{+}\) and u(x, t) satisfies the boundary condition

$$\begin{aligned} u(0,t)=0, \quad t>0. \end{aligned}$$
(4.1)

For initial data \(f\in L^{2}(\mathbb R^{+})\), the solution is given by, cf. [1, 9],

$$\begin{aligned} u(x,t)= \frac{1}{\sqrt{4\pi t}} \int _{0}^{\infty } \left\{ \exp \left( \frac{-(x-y)^{2}}{4 t}\right) -\exp \left( \frac{-(x+y)^{2}}{4 t}\right) \right\} f(y) dy. \end{aligned}$$
(4.2)

Let F be the odd extension of f on \(\mathbb {R}\) defined by

$$\begin{aligned} F(y)=\left\{ \begin{array}{ll} f(y), &{} \quad y\ge 0, \\ -f(-y), &{} \quad y<0. \end{array} \right. \end{aligned}$$
(4.3)

Then we can write (4.2) as

$$\begin{aligned} u(x,t)= \frac{1}{\sqrt{4\pi t}} \int _{-\infty }^{\infty } \exp \left( \frac{-(x-y)^{2}}{4 t}\right) F(y) dy. \end{aligned}$$
(4.4)

For continuous initial data f, the normalization \(f(0) = 0\) is necessary for compatibility with the boundary condition (4.1) at the origin. If \(F\in \textbf{B} (\mathcal {S}_d )\), then the solution in (4.4) is well defined and

$$\begin{aligned} F(y) \simeq \sum _{n=-N}^{N} F\left( n h \right) \sum _{j=0}^{5N} \frac{(-1)^j }{(2j+1)!}\left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}. \end{aligned}$$
(4.5)

Combining (4.4) and (4.5) and using the same technique as in the previous section, we obtain the system of equations

$$\begin{aligned} u(x_k,\tilde{t})= \sum _{n=-N}^{N} F (n h) \beta _{n-k},\quad k=-N,\ldots ,N, \end{aligned}$$
(4.6)

where \(\displaystyle \tilde{t}=\left( \frac{h}{2\pi }\right) ^2\) and \(\displaystyle \beta _l\) is defined in (3.6). Recall that \(\displaystyle \beta _{-l}=\beta _{l}\) for all \(-N \le l \le N\), and that, since F is odd, \(F(-nh) = -F(nh)\) and \(\displaystyle u\left( -nh, \tilde{t}\right) = -u\left( nh, \tilde{t}\right) \) for all \(-N \le n \le N\). Hence the system (4.6) has the block form

$$\begin{aligned} {\left( \begin{array}{ccc} A_1 &{} \left( \begin{array}{c} \beta _N \\ \beta _{N-1} \\ \vdots \\ \beta _{1} \end{array}\right) &{} A_2 \\ \left( \begin{array}{cccc} \beta _{-N} &{} \beta _{-N+1} &{} \cdots &{} \beta _{-1} \end{array}\right)&\beta _0&\left( \begin{array}{cccc} \beta _{1} &{} \beta _{2} &{} \cdots &{} \beta _{N} \end{array}\right) \\ A_{2}^{T} &{} \left( \begin{array}{c} \beta _{-1} \\ \beta _{-2} \\ \vdots \\ \beta _{-N} \end{array}\right)&A_1 \end{array}\right) \left( \begin{array}{c} \\ \\ \vec {f_1} \\ \\ \\ f(0) \\ \\ \\ \vec {f_2} \\ \\ \end{array}\right) = \left( \begin{array}{c} \\ \\ \vec {u_1} \\ \\ \\ u(0,\tilde{t}) \\ \\ \\ \vec {u_2} \\ \\ \end{array}\right) ,}\nonumber \\ \end{aligned}$$
(4.7)

where

$$\begin{aligned} { A_1 =\left( \begin{array}{cccc} \beta _{0} &{} \beta _{1} &{} \cdots &{} \beta _{N-1} \\ \beta _{-1} &{} \beta _{0} &{} \cdots &{} \beta _{N-2} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \beta _{-N+1} &{} \beta _{-N+2} &{} \cdots &{} \beta _{0} \end{array}\right) ,\quad A_2 =\left( \begin{array}{cccc} \beta _{N+1} &{} \beta _{N+2} &{} \cdots &{} \beta _{2N} \\ \beta _{N} &{} \beta _{N+1} &{} \cdots &{} \beta _{2N-1} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \beta _{2} &{} \beta _{3} &{} \cdots &{} \beta _{N+1} \end{array}\right) .} \end{aligned}$$
(4.8)

The vectors \(\vec {f}_j\) and \(\vec {u}_j, j = 1, 2\) are given by

$$\begin{aligned} \vec {f}_1= & {} \left( f(-Nh),f((-N+1)h),\ldots , f(-h)\right) ^{T}, \\ \vec {f}_2= & {} \left( f(h),f(2h),\ldots ,f((N-1)h), f(Nh)\right) ^{T}, \\ \vec {u}_1= & {} \left( u(-Nh,\tilde{t}),u((-N+1)h,\tilde{t}),\ldots , u(-h,\tilde{t})\right) ^{T}, \\ \vec {u}_2= & {} \left( u(h,\tilde{t}),u(2h,\tilde{t}),\ldots ,u((N-1)h,\tilde{t}), u(Nh,\tilde{t})\right) ^{T}. \end{aligned}$$

We notice that \(f(0)=0\) and \(u(0, \tilde{t})=0\) because F is an odd function. As a result, the system (4.7) reduces to the \(2N\times 2N\) system of equations

$$\begin{aligned} A_1 \vec {f}_1+A_2 \vec {f}_2= & {} \vec {u}_1, \nonumber \\ A_{2}^{T} \vec {f}_1+A_1 \vec {f}_2= & {} \vec {u}_2. \end{aligned}$$
(4.9)

Let \(J_N\) be the \(N\times N\) exchange matrix defined as

$$\begin{aligned} J_N = \left( \begin{array}{ccccc} 0 &{} 0 &{} \cdots &{} 0 &{} 1 \\ 0 &{} 0 &{} \cdots &{} 1 &{} 0 \\ \vdots &{} \vdots &{} &{} \vdots &{} \vdots \\ 0 &{} 1 &{} \cdots &{} 0 &{} 0 \\ 1 &{} 0 &{} \cdots &{} 0 &{} 0 \end{array}\right) , \end{aligned}$$
(4.10)

then, by simple calculations we get

$$\begin{aligned} J_N A_1 J_N=A_1,\quad J_N A_{2}^{T}=A_2 J_N,\quad \vec {f}_2=-J_N \vec {f}_1, \quad \vec {u}_1=-J_N \vec {u}_2. \end{aligned}$$
(4.11)

Using (4.11), the system (4.9) reduces to

$$\begin{aligned} \left( A_1 -A_2 J_N\right) \vec {f}_1= \vec {u}_1, \end{aligned}$$
(4.12)

which is an \(N\times N\) system. If \(\vec {u}\) is known, we determine \(\vec {f}_1\) and \(\vec {f}_2\) from (4.12) and (4.11), and then use (4.5) to approximate F.
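The reduction (4.9)-(4.12) can be checked numerically with a small sketch (our code; the odd data \(F(y)=y e^{-y^{2}/4}\) and the choices \(N=2\), \(\alpha =1/4\), \(d=1\) are illustrative). The blocks \(A_1, A_2\) are built from the \(\beta _l\) of (3.6), the identities (4.11) are verified, and \(\vec f_1\) is recovered from the reduced system (4.12):

```python
import numpy as np
from math import factorial

def hermite(n, z):
    """Physicists' Hermite polynomial via the three-term recurrence."""
    h_prev, h_cur = 1.0 + 0 * z, 2 * z
    if n == 0:
        return h_prev
    for m in range(1, n):
        h_prev, h_cur = h_cur, 2 * z * h_cur - 2 * m * h_prev
    return h_cur

def beta(l, N):
    """beta_l from (3.6), using beta_{-l} = beta_l."""
    return sum(hermite(2 * j, 1j * np.pi * abs(l)).real / (factorial(2 * j + 1) * 4**j)
               for j in range(5 * N + 1))

N = 2
# Blocks (4.8): (A1)_{ab} = beta_{b-a}, (A2)_{ab} = beta_{N+1+b-a}
A1 = np.array([[beta(b - a, N) for b in range(N)] for a in range(N)])
A2 = np.array([[beta(N + 1 + b - a, N) for b in range(N)] for a in range(N)])
J = np.fliplr(np.eye(N))                    # exchange matrix (4.10)

h = np.sqrt(np.pi / (0.25 * N))             # step size (2.2), alpha = 1/4, d = 1
F = lambda y: y * np.exp(-y**2 / 4)         # odd extension (4.3) of illustrative data
f1 = F(np.arange(-N, 0) * h)
f2 = -J @ f1                                # third identity of (4.11)
u1 = A1 @ f1 + A2 @ f2                      # first block row of (4.9)
f1_rec = np.linalg.solve(A1 - A2 @ J, u1)   # reduced N x N system (4.12)
```

The identities \(J_N A_1 J_N = A_1\) and \(J_N A_2^{T} = A_2 J_N\) hold exactly here, since the products are merely permutations of equal entries.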

Let \(F_{app}\) denote the approximation of F resulting from the Shannon-Taylor technique. In the following corollary, which is a direct consequence of Theorem 2.1, we estimate the error \(\Vert F-F_{app}\Vert _{\infty }\).

Corollary 2

Suppose that \(F\in \textbf{B} (\mathcal {S}_d)\) satisfies (1.10). Let \(h =\left( \frac{\pi d}{\alpha N}\right) ^{1/2}\), \(\rho =5\). Then, for \(|y|< N h/7\), \(N\in \mathbb {N}\), we have the following estimate

$$\begin{aligned} \Vert F-F_{app}\Vert _{\infty } \le \frac{N (F,\mathcal {S}_d )}{2\pi d \sinh (\pi d/h)}+ C \sqrt{N} e^{-(\pi d \alpha N)^{1/2}}+\frac{3\Vert F\Vert _{\infty }}{10\sqrt{20\pi }}\frac{1}{\sqrt{N}}(0.785)^{N},\nonumber \\ \end{aligned}$$
(4.13)

where C is a constant depending only on \(F, \alpha \) and d.

5 Numerical examples

In the following examples, we compare the results obtained by the sinc-interpolation method \(f_{N,\varepsilon }^{C}(y)\) with the Shannon-Taylor method \(f_{N,\rho }^{T}(y) \), where \(N\in \mathbb N, \varepsilon > 0\), \(\rho \ge 5\) and

$$\begin{aligned} f_{N,\varepsilon }^{C}(y):= & {} \sum _{n=-N}^{N} \tilde{f}\left( n h\right) \mathrm {\,sinc\,}\left( \frac{y-nh}{h}\right) , \end{aligned}$$
(5.1)
$$\begin{aligned} f_{N,\rho }^{T} (y):= & {} \sum _{n=-N}^{N}f\left( n h \right) \sum _{j=0}^{\rho N} \frac{(-1)^j }{(2j+1)!}\left\{ \pi \left( \frac{y-nh}{h}\right) \right\} ^{2j}. \end{aligned}$$
(5.2)

Example 1

Let the initial data of problem (1.1) be \(\displaystyle f(x)=\mathrm {\,sech}\left( \frac{\pi x}{2}\right) \in \textbf{B}(\mathcal {S}_1)\). The solution (1.2) for this initial data is given by

$$\begin{aligned} u(x,t)=\frac{1}{\pi } \int _{-\infty }^{\infty } \frac{\cos (s x) e^{-s^2 t}}{\cosh (s)}ds. \end{aligned}$$
(5.3)

We may select \(\alpha =1\) in (1.10), so the step size h in (2.2) is \(\displaystyle h=\sqrt{\pi /N}\). Tables 1 and 2 exhibit comparisons between \( f_{N,10^{-5}}^{C} (y)\) and \(f_{N,5}^{T}(y)\) for \(N=6, 12\), respectively. Figure 1 illustrates the absolute errors \(\left| f(y)-f_{N,10^{-5}}^{C}(y)\right| \) and \(\left| f(y)-f_{N,5}^{T}(y)\right| \) for \(y\in [-1,1]\) and \(N=6,12\), respectively. We notice from Tables 1 and 2 and Fig. 1 that for \(N=6\) the approximations obtained by the classical sinc method are closer to f(y) than those obtained by Taylor interpolation, while for \(N=12\) the Taylor interpolation approximations are the closer ones. This occurs because the error caused by truncating the Taylor series decreases as N increases; hence, for large N, this error tends to zero and more accurate results are obtained. Figure 2 shows the graphs of \(f(y), f_{2,5}^{T}(y),\) and \(f_{4,5}^{T}(y)\) on the interval \([-2,2]\). Note that in Fig. 2, as N increases, the gap between f(y) and \( f_{N,5}^{T}(y)\) narrows noticeably.

Table 1 Comparison between \( f_{6,10^{-5}}^{C}(y)\) and \(f_{6,5}^{T}(y)\) of Example 1
Table 2 Comparison between \( f_{12,10^{-5}}^{C}(y)\) and \(f_{12,5}^{T}(y)\) of Example 1
Fig. 1
figure 1

Illustrations associated with Example 1. a The blue continuous line is the absolute error \(\left| f(y)-f_{6,10^{-5}}^{C}(y)\right| \), while the red dashed line is the absolute error \(\left| f(y)-f_{6,5}^{T}(y)\right| \). b The blue continuous line is the absolute error \(\left| f(y)-f_{12,10^{-5}}^{C}(y)\right| \), while the red dashed line is the absolute error \(\left| f(y)-f_{12,5}^{T}(y)\right| \)

Fig. 2
figure 2

The green continuous line is f(y), while the blue and the red dashed lines are \(f_{2,5}^{T}(y) \) and \(f_{4,5}^{T}(y)\) respectively (Colour figure online)

Example 2

In this example, we consider (1.1) with the initial data \(f(x)=e^{-x^{2}/4}\). The solution (1.2) for this initial data is given by

$$\begin{aligned} u(x,t)=\frac{e^{-x^2/4(t+1)}}{\sqrt{t+1}}. \end{aligned}$$
(5.4)

Since f is entire, \(f\in \textbf{B}(\mathcal {S}_d)\) for every d. Note that the inequality (1.10) is satisfied for all \(\alpha >0\), so that h is not uniquely determined. Here, we choose \(\alpha =1/4\) and \(d=1\). Tables 3 and 4 show some numerical results with both techniques, where \(\varepsilon =10^{-5}, \rho =5\) and \(N=6,12\), respectively. We notice from Tables 3 and 4 that for \(N=6\), \(\left| f(y)-f_{6,10^{-5}}^{C}(y)\right| , |f(y)-f_{6,5}^{T}(y)| \sim \mathcal {O} \left( 10^{-4}\right) \), and the results improve for \(N=12\) to \(\left| f(y)-f_{12,10^{-5}}^{C}(y)\right| ,\left| f(y)-f_{12,5}^{T}(y)\right| \sim \mathcal {O} \left( 10^{-6}\right) \). Figure 3 illustrates comparisons between f(y) and its approximations \( f_{N,10^{-5}}^{C} (y)\) and \(f_{N,5}^{T}(y)\) on the interval \([-2,2]\) for \(N=2,4\), respectively. Figure 4 exhibits the errors \(\left| f(y)-f_{N,10^{-5}}^{C}(y)\right| \) and \( \left| f(y)-f_{N,5}^{T}(y)\right| \) for \(y\in [-2,2]\) and \(N=8,10,12\), respectively. As Figs. 3 and 4 indicate, the precision improves as N increases.
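The closed form (5.4) can be checked against the heat-kernel representation (1.2) by direct quadrature (a quick sketch; the values \(t=0.8\), \(x=1.3\) are arbitrary test values of our choosing):

```python
import numpy as np

# Gaussian initial data f(y) = exp(-y^2/4) of Example 2.
t, x = 0.8, 1.3
y, dy = np.linspace(-40, 40, 400001, retstep=True)
u_quad = dy / np.sqrt(4 * np.pi * t) * np.sum(
    np.exp(-(x - y)**2 / (4 * t)) * np.exp(-y**2 / 4))     # heat-kernel integral (1.2)
u_exact = np.exp(-x**2 / (4 * (t + 1))) / np.sqrt(t + 1)   # closed form (5.4)
```

The equidistant sum converges extremely fast here, since the integrand is smooth and decays like a Gaussian, so the two values agree to high accuracy.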

Table 3 Comparison between \( f_{6,10^{-5}}^{C}(y)\) and \(f_{6,5}^{T}(y)\) of Example 2
Table 4 Comparison between \( f_{12,10^{-5}}^{C}(y)\) and \(f_{12,5}^{T}(y)\) of Example 2
Fig. 3
figure 3

The green continuous line is f(y), while the blue and the red dashed lines are \(f_{N,10^{-5}}^{C}(y) \) and \(f_{N,5}^{T}(y)\) respectively where \(N=2\) (a), and \(N=4\) (b) (Colour figure online)

Fig. 4
figure 4

Illustrations associated with Example 2. a \(\left| f(y)-f_{N,10^{-5}}^{C}(y) \right| \) where \(N=8,10,12\) respectively. b \(\left| f(y)-f_{N,5}^{T}(y)\right| \) where \(N=8,10,12\) respectively

Example 3

Consider problem (1.1) with the initial data \(\displaystyle f(x)=\left( 1+x^{2}\right) ^{-1}\). In this example, we compare the absolute errors \( \left| f(y)-f_{N,10^{-5}}^{C}(y)\right| \) and \(\left| f(y)-f_{N,\rho }^{T} (y)\right| \) for \(\rho =10, 15\) and \(N=6, 12\), respectively. It can be seen from Tables 5 and 6 that for \(N=6\), \( \left| f(y)-f_{6,10^{-5}}^{C}(y)\right| \sim \mathcal {O} \left( 10^{-3}\right) \) and \(\left| f(y)-f_{6,\rho }^{T} (y)\right| \sim \mathcal {O} \left( 10^{-3}\right) \) for \( \rho = 10, 15\), and that the results of the Shannon-Taylor technique improve as N increases. For example, for \(N=12\), \( \left| f(y) - f_{12,10^{-5}}^{C}(y) \right| \sim \mathcal {O} \left( 10^{-3}\right) \), while \(\left| f(y)-f_{12,\rho }^{T} (y)\right| \sim \mathcal {O} \left( 10^{-4}\right) \) for \( \rho = 10, 15\). Graphs of \(\left| f(y)-f_{12,10^{-5}}^{C}(y)\right| \) and \(\left| f(y)-f_{12,\rho }^{T} (y)\right| \) for \(\rho =10, 15\) are exhibited in Fig. 5.

Table 5 Comparison between \(\left| f(y)-f_{6,10^{-5}}^{C}(y)\right| \) and \(\left| f(y)-f_{6,\rho }^{T} (y)\right| \)
Table 6 Comparison between \( \left| f(y)-f_{12,10^{-5}}^{C}(y)\right| \) and \(\left| f(y)-f_{12,\rho }^{T} (y)\right| \)
Fig. 5
figure 5

Illustrations associated with Example 3. The blue line is the absolute error \(\left| f(y)-f_{12,10^{-5}}^{C}(y)\right| \), while the green and the red lines are the absolute errors \(\left| f(y)-f_{12,\rho }^{T} (y)\right| \) for \(\rho =10, 15\), respectively (Colour figure online)

6 Conclusions

In this paper, the inverse heat problem was solved by implementing the Shannon-Taylor approximation, in which the sinc function is replaced by a Taylor approximation polynomial. The error caused by truncating the Taylor series was investigated, and rigorous uniform error estimates were established. This approach is novel and has not been implemented before. It leads to explicit representations of the linear systems resulting from the approximation procedure. Numerical examples were presented to demonstrate the efficiency and accuracy of the method. Our technique could be extended to higher dimensions, cf. [5], provided that the Shannon-Taylor technique is extended to higher dimensions.