1 Introduction

Many phenomena in physics, engineering equipment, and living organisms, such as the diffusion of gas, the infiltration of liquid, the conduction of heat, and the spread of impurities in semiconductor materials, can be described by parabolic equations [1,2,3]. In addition, systems related to parabolic equations can describe phase separation in materials science [4], and a class of abstract control systems governed by parabolic equations has been developed [5]. In particular, linear and nonlinear boundary value problems of this kind have been analyzed [6]. Thus, it is of great value to study their exact solutions. In practical engineering calculations, however, exact solutions are rarely available, and one has to compute numerical solutions by various numerical methods. Therefore, the study of numerical solutions of parabolic equations is of great theoretical and practical value. Until now, several types of numerical methods have been developed for the numerical simulation of parabolic problems. For example, the authors in [7] used two-level finite difference schemes to solve the one-dimensional parabolic equation. Zhou et al. proposed finite element methods to solve parabolic equations [8]. In [9], a new finite volume method for parabolic equations is presented. Zhang et al. provided a Crank–Nicolson-type difference scheme to solve equations with a spatially variable coefficient [10].

Among the methods mentioned above, the finite difference scheme is regarded as one of the most effective methods for solving partial differential equations because of its simple form and the ease with which it can be understood and programmed. In the last decade, high-order compact finite difference schemes (CFDS) have been applied to the numerical solution of various types of partial differential equations, such as the Rosenau-regularized long wave (RLW) equation [11], integro-differential equations [12], Burgers’ equation [13], the Helmholtz equation [14], the Navier–Stokes equations [15], Schrödinger equations [16], the Poisson equation [17], and the sine-Gordon equation [18]. Although a high-order CFDS can produce highly accurate solutions, it usually requires either a small spatial discretization or extended finite difference stencils together with a small time step, which makes the calculations computationally expensive. In general, computational accuracy and computational efficiency are the two important factors used to evaluate a numerical algorithm. Although the CFDS can achieve satisfactory numerical solutions and possesses a fast rate of convergence, it also brings practical difficulties such as a heavy computational load. Therefore, it is necessary to develop a high-order CFDS that reduces the computational load and saves computing time in actual engineering problems while still providing sufficiently accurate solutions. In recent years, reduced models based on proper orthogonal decomposition (POD) have attracted more and more attention in the field of computational mechanics [19, 20]. POD, also known as the Karhunen–Loève decomposition (KLD), principal component analysis (PCA), or singular value decomposition (SVD), provides a powerful technique to reduce a large number of interdependent variables to a much smaller number of uncorrelated variables while retaining as much as possible of the variation in the original variables [21,22,23,24]. Thus, we can use the POD technique to greatly reduce the computational cost.

In the past decades, POD has been widely used to construct reduced-order models for numerical solutions. For example, Luo and Sun combined POD with the finite difference method, the finite element method, and the finite volume method to solve parabolic equations [25,26,27,28], the Navier–Stokes equations [29, 30], Sobolev equations [31], the viscoelastic wave equation [32], hyperbolic equations [33], and Burgers’ equation [34]. Dehghan and Abbaszadeh combined POD with an upwind local radial basis functions-differential quadrature method to solve the compressible Euler equations [35], applied POD within a Galerkin method to simulate two-dimensional solute transport problems [36], and combined POD with the empirical interpolation method [37]. Zhang proposed a fast and efficient meshless method based on POD for transient heat conduction problems [38] and convection-diffusion problems [19]. In addition, the authors in [39] gave Carleman estimates for a singular parabolic equation. However, to the best of our knowledge, there are no published results in which POD is used to reduce the CFDS4 for parabolic equations. The main goal of this paper is to construct a numerical algorithm with high computational accuracy and efficiency for solving parabolic equations. Thus, the focus of this paper is on combining the CFDS4 and the POD method, namely the R-CFDS4, to solve parabolic equations.

The paper is organized as follows. The CFDS4 for 1D parabolic equations is derived in Sect. 2. In Sect. 3, the R-CFDS4 based on POD for 1D parabolic equations is developed. In Sect. 4, error estimates between the R-CFDS4 solutions and the exact solutions are given. In Sect. 5, a reduced alternating direction implicit fourth-order compact finite difference scheme (R-ADI-CFDS4) based on POD for 2D parabolic equations is developed. In Sect. 6, some numerical examples are presented to demonstrate the efficiency and reliability of the algorithm. Section 7 provides the main conclusions.

2 The fourth-order compact FDS for 1D parabolic equation

Firstly, we consider the following 1D initial-boundary value problem \(\mathrm{P} _{1}\):

$$ \textstyle\begin{cases} \frac{{\partial u}}{{\partial t}} - a \frac{{{\partial ^{2}}u}}{{\partial {x^{2}}}} = {f(x,t)}, & 0 < x < L, \\ u(0,t) = u(L,t) = 0, & 0 \le t \le T, \\ u(x,0) = \varphi (x), & 0 \le x \le L, \end{cases} $$
(1)

where a is a positive constant, and \(f(x,t)\) and \(\varphi (x)\) are two given sufficiently smooth functions. Let h be the spatial step in the x-direction and τ be the time step, and then write \({x_{j}} = jh \) (\(j = 0,1,2, \ldots ,J\)), \({t_{n}} = n\tau \) (\(n = 0,1,2, \ldots ,N\)), and \(u_{j}^{n} \approx u({x_{j}},{t_{n}})\).

Next, we derive the CFDS4 for problem \(\mathrm{P}_{1}\), noting the following Taylor formulas:

$$\begin{aligned}& \begin{aligned}[b] u({x_{j - 1}},t) &= u({x_{j}},t) - { {h \frac{{\partial u}}{{\partial x}}} \bigg\vert _{({x_{j}}, t)}} + \frac{{{h^{2}}}}{2} { {\frac{{{\partial ^{2}}u}}{{\partial {x^{2}}}}} \bigg\vert _{({x_{j}}, t)}} \\ &\quad{}- \frac{{{h^{3}}}}{6} { {\frac{{{\partial ^{3}}u}}{ {\partial {x^{3}}}}} \bigg\vert _{({x_{j}},t)}} + \frac{{{h^{4}}}}{ {24}} { {\frac{{{\partial ^{4}}u}}{{\partial {x^{4}}}}} \bigg\vert _{({x_{j}},t)}} - \cdots, \end{aligned} \end{aligned}$$
(2)
$$\begin{aligned}& \begin{aligned}[b] u({x_{j + 1}},t) &= u({x_{j}},t) + { {h \frac{{\partial u}}{{\partial x}}} \bigg\vert _{({x_{j}},t)}} + \frac{{{h^{2}}}}{2} { {\frac{{{\partial ^{2}}u}}{{\partial {x^{2}}}}} \bigg\vert _{({x_{j}}, t)}} \\ &\quad{}+ \frac{{{h^{3}}}}{6} { {\frac{{{\partial ^{3}}u}}{ {\partial {x^{3}}}}} \bigg\vert _{({x_{j}},t)}} + \frac{{{h^{4}}}}{ {24}} { {\frac{{{\partial ^{4}}u}}{{\partial {x^{4}}}}} \bigg\vert _{({x_{j}},t)}} + \cdots. \end{aligned} \end{aligned}$$
(3)

Adding Eq. (2) to Eq. (3), we get

$$ \frac{{u({x_{j - 1}},t) - 2u({x_{j}},t) + u({x_{j + 1}},t)}}{{{h^{2}}}} = { {\frac{{{\partial ^{2}}u}}{{\partial {x^{2}}}}} \bigg\vert _{({x_{j}},t)}} + \frac{{{h^{2}}}}{{12}} { {\frac{ {{\partial ^{4}}u}}{{\partial {x^{4}}}}} \bigg\vert _{({x_{j}},t)}} + O \bigl( {h^{4}} \bigr). $$
(4)

Let \(v(x,t) = \frac{{{\partial ^{2}}u}}{{\partial {x^{2}}}}\), then Eq. (4) can be rewritten as follows:

$$ \begin{aligned}[b] & \frac{{u({x_{j - 1}},t) - 2u({x_{j}},t) + u({x_{j + 1}},t)}}{{{h^{2}}}} \\ &\quad = v({x_{j}},t) + \frac{{{h^{2}}}}{{12}} { { \frac{ {{\partial ^{2}}v}}{{\partial {x^{2}}}}} \bigg\vert _{({x_{j}},t)}} + O \bigl( {h^{4}} \bigr) \\ &\quad = v({x_{j}},t) + \frac{{{h^{2}}}}{{12}}\frac{{v({x_{j - 1}},t) - 2v( {x_{j}},t) + v({x_{j + 1}},t)}}{{{h^{2}}}} + O \bigl({h^{4}} \bigr) \\ &\quad = \frac{1}{{12}}v({x_{j - 1}},t) + \frac{{10}}{{12}}v({x_{j}},t) + \frac{1}{ {12}}v({x_{j + 1}},t) + O \bigl({h^{4}} \bigr). \end{aligned} $$
(5)

In Eq. (5), if we choose \(t = {t_{n + {\frac{1 }{2}}}}\), then we obtain

$$ \begin{aligned}[b] &\frac{{u({x_{j - 1}},{t_{n + {\frac{1 }{2}}}}) - 2u({x _{j}},{t_{n + { \frac{1 }{2}}}}) + u({x_{j + 1}},{t_{n + { \frac{1 }{2}}}})}}{{{h^{2}}}} \\ &\quad = \frac{1}{{12}} \bigl( {v({x_{j - 1}},{t_{n + { \frac{1 }{2}}}}) + 10v({x_{j}},{t_{n + { \frac{1 }{2}}}}) + v({x_{j + 1}},{t_{n + { \frac{1 }{2}}}})} \bigr) + O \bigl( {h^{4}} \bigr). \end{aligned} $$
(6)

Noting Eq. (1) and the definition \(v(x,t) = \frac{{{\partial ^{2}}u}}{ {\partial {x^{2}}}}\), we get

$$ v(x,t) = \frac{1}{a} \biggl( {\frac{{\partial u}}{{\partial t}} - f} \biggr). $$
(7)

For the first-order time derivative, using the central difference quotient formula, we get

$$ { {\frac{{\partial u}}{{\partial t}}} \bigg\vert _{({x_{j}},{t _{n + { \frac{1 }{2}}}})}} = \frac{{u({x_{j}},{t_{n + 1}}) - u({x_{j}},{t_{n}})}}{\tau } + O \bigl({\tau ^{2}} \bigr). $$
(8)

Thus, substituting Eq. (8) into Eq. (7), we obtain

$$ \begin{aligned}[b] v({x_{j}},{t_{n + { \frac{1 }{2}}}}) &= \frac{1}{a}\biggl( {{{ {\frac{{\partial u}}{{\partial t}}} \bigg|}_{({x_{j}}, {t_{n + { \frac{1 }{2}}}})}} - f({x_{j}},{t_{n + { \frac{1 }{2}}}})} \biggr) \\ &= \frac{1}{a} \biggl( {\frac{{u({x_{j}},{t_{n + 1}}) - u({x_{j}}, {t_{n}})}}{\tau } - f({x_{j}},{t_{n + { \frac{1 }{2}}}})} \biggr) + O \bigl({\tau ^{2}} \bigr). \end{aligned} $$
(9)

Similarly to Eq. (2) and Eq. (3), we have the following formulas:

$$ u(x,{t_{n + 1}}) = u(x,{t_{n + 0.5}}) + \frac{\tau }{2}{ {\frac{ {\partial u}}{{\partial t}}} \bigg\vert _{(x,{t_{n + 0.5}})}} + O \bigl({\tau ^{2}} \bigr) $$
(10)

and

$$ u(x,{t_{n}}) = u(x,{t_{n + 0.5}}) - { { \frac{\tau }{2}\frac{ {\partial u}}{{\partial t}}} \bigg\vert _{(x,{t_{n + 0.5}})}} + O \bigl({\tau ^{2}} \bigr). $$
(11)

By adding Eq. (10) to Eq. (11), we can get the following formula:

$$ u(x,{t_{n + 0.5}}) = \frac{{u(x,{t_{n}}) + u(x,{t_{n + 1}})}}{2} + O \bigl( { \tau ^{2}} \bigr). $$
(12)

Substituting Eq. (9) and Eq. (12) into Eq. (6), we get the formula

$$ \begin{aligned}[b] &\frac{{u({x_{j - 1}},{t_{n}}) + u({x_{j - 1}},{t_{n + 1}}) - 2u({x_{j}},{t_{n}}) - 2u({x_{j}},{t_{n + 1}}) + u({x_{j + 1}},{t _{n}}) + u({x_{j + 1}},{t_{n + 1}})}}{{2{h^{2}}}} \\ &\quad = \frac{1}{{12a}}\biggl( {\frac{{u({x_{j - 1}},{t_{n + 1}}) - u( {x_{j - 1}},{t_{n}})}}{\tau } - f({x_{j - 1}},{t_{n + { \frac{1 }{2}}}}) + 10\frac{{u({x_{j}},{t_{n + 1}}) - u( {x_{j}},{t_{n}})}}{\tau }} \\ &\qquad{}- 10f({x_{j}},{t_{n + { \frac{1 }{2}}}}) { + \frac{ {u({x_{j + 1}},{t_{n + 1}}) - u({x_{j + 1}},{t_{n}})}}{\tau } - f( {x_{j + 1}},{t_{n + { \frac{1 }{2}}}})} \biggr) + O \bigl( {\tau ^{2}} + {h^{4}} \bigr). \end{aligned} $$
(13)

Let \(r = \frac{{a\tau }}{{{h^{2}}}}\); then, replacing the exact solution \(u({x_{j}}, {t_{n}})\) by the numerical solution \(u_{j}^{n}\) and ignoring the higher-order terms, we get

$$ \begin{aligned}[b] &\frac{r}{2} \bigl( {u_{j - 1}^{n} + u_{j - 1}^{n + 1} - 2u_{j}^{n} - 2u_{j}^{n + 1} + u_{j + 1}^{n} + u_{j + 1}^{n + 1}} \bigr) \\ &\quad = \frac{1}{{12}} \bigl( {u_{j - 1} ^{n + 1} - u_{j - 1}^{n} + 10u_{j}^{n + 1} - 10u_{j}^{n} + u_{j + 1} ^{n + 1} - u_{j + 1}^{n}} \bigr) \\ &\qquad{}- \frac{\tau }{{12}} \bigl( {f({x_{j - 1}}, {t_{n + { \frac{1 }{2}}}}) + 10 f({x_{j}},{t_{n + { \frac{1 }{2}}}}) + f({x_{j + 1}},{t_{n + { \frac{1 }{2}}}})} \bigr). \end{aligned} $$
(14)

Finally, considering the initial and boundary conditions Eq. (1), we obtain the compact difference formula for problem \(\mathrm{P}_{1}\):

$$ \begin{aligned}[b] & \biggl(\frac{1}{{12}} - \frac{r}{2} \biggr)u_{j - 1}^{n + 1} + \biggl( \frac{5}{6} + r \biggr)u _{j}^{n + 1} + \biggl( \frac{1}{{12}} - \frac{r}{2} \biggr)u_{j + 1} ^{n + 1} \\ &\quad = \biggl(\frac{1}{{12}} + \frac{r}{2} \biggr)u_{j - 1}^{n} + \biggl(\frac{5}{6} - r \biggr)u _{j}^{n} + \biggl(\frac{1}{{12}} + \frac{r}{2} \biggr)u_{j + 1}^{n} \\ &\qquad{}+ \frac{\tau }{{12}} \bigl( {f({x_{j - 1}},{t_{n + { \frac{1 }{2}}}}) + 10 f({x_{j}},{t_{n + { \frac{1 }{2}}}}) + f({x_{j + 1}},{t_{n + { \frac{1 }{2}}}})} \bigr), \\ &\quad 1 \le j \le J - 1,\qquad 0 \le n \le N - 1, \\ &u_{j}^{0} = \varphi ({x_{j}}),\quad 0 \le j \le J, \\ &u_{0}^{n} = u_{J}^{n} = 0,\quad 1 \le n \le N. \end{aligned} $$
(15)

The scheme (15) is unconditionally stable [40], and the error of the above compact difference scheme is as follows:

$$ \bigl\vert {u({x_{j}}, {t_{n}}) - u_{j}^{n}} \bigr\vert = O \bigl( {\tau ^{2}}, {h^{4}} \bigr). $$
(16)
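As an illustration of how the scheme (15) can be implemented, the following Python sketch pre-factorizes the tridiagonal system and advances the interior values by one time step. It assumes the homogeneous Dirichlet boundary conditions of problem \(\mathrm{P}_{1}\); the function name cfds4_step_factory and the use of SciPy are illustrative choices (the computations in Sect. 6 were carried out in MATLAB).

```python
# A minimal sketch of one CFDS4 time step, Eq. (15), assuming
# homogeneous Dirichlet boundary values u(0,t) = u(L,t) = 0.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

def cfds4_step_factory(a, h, tau, J):
    """Pre-factorize the (J-1)x(J-1) tridiagonal system of Eq. (15)."""
    r = a * tau / h**2
    m = J - 1
    # Implicit (left-hand side) coefficients: (1/12 - r/2, 5/6 + r, 1/12 - r/2).
    A = diags([np.full(m - 1, 1/12 - r/2), np.full(m, 5/6 + r),
               np.full(m - 1, 1/12 - r/2)], offsets=[-1, 0, 1], format='csc')
    # Explicit (right-hand side) coefficients: (1/12 + r/2, 5/6 - r, 1/12 + r/2).
    B = diags([np.full(m - 1, 1/12 + r/2), np.full(m, 5/6 - r),
               np.full(m - 1, 1/12 + r/2)], offsets=[-1, 0, 1], format='csc')
    lu = splu(A)

    def step(u, f_half):
        """Advance the interior values u (length J-1) by one step;
        f_half holds f(x_j, t_{n+1/2}) at all nodes j = 0,...,J."""
        rhs = B @ u + tau / 12 * (f_half[:-2] + 10 * f_half[1:-1] + f_half[2:])
        return lu.solve(rhs)

    return step
```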

3 A reduced fourth-order compact FDS for 1D parabolic equation

The scheme above needs only three spatial points per equation to achieve fourth-order accuracy. Nevertheless, for large real-life engineering problems the CFDS4 in Eq. (15) still requires a fine spatial grid and a small time step, which leads to very long computational times. Hence, a key issue for engineering problems is how to build a reduced model with high efficiency that still holds sufficiently high accuracy compared with the scheme in Eq. (15).

POD is an effective approach which can not only reduce the computational time significantly, but also guarantee the high accuracy of the algorithm. The main idea of POD is to generate a group of optimal bases from the ensemble of snapshots; the reduced CFDS4 is then constructed with this optimal basis. In this paper, the snapshots are collected from some numerical solutions of the CFDS4. In practical computations, since the future evolution of many natural phenomena is closely related to previous results, one may also choose numerical experiments or observation data as the snapshots. Therefore, in this section, we first obtain the optimal basis from the snapshots and then use this basis to derive the R-CFDS4 for parabolic equations.

The set of snapshots \(\{ {u_{j}^{{n_{i}}}} \}_{i = 1} ^{d}\) (\(j = 1, 2, \ldots ,J - 1\), \(1 \le {n_{1}} < {n_{2}} < \cdots < {n_{d}} \le N \)) can be written as a \(( {J - 1} ) \times d\) matrix A as follows:

$$ {{\boldsymbol{A}}_{ ( {J - 1} ) \times d}} = \begin{pmatrix} u_{1}^{{n_{1}}} & u_{1}^{{n_{2}}} & \cdots & u_{1}^{{n_{d}}} \\ u_{2}^{{n_{1}}} & u_{2}^{{n_{2}}} & \cdots & u_{2}^{{n_{d}}} \\ \vdots & \vdots & \ddots & \vdots \\ u_{J - 1}^{{n_{1}}} & u_{J - 1}^{{n_{2}}} & \cdots & u_{J - 1}^{{n_{d}}} \end{pmatrix}. $$
(17)

Applying the singular value decomposition (SVD) to the matrix A, we obtain

$$ {{\boldsymbol{A}}_{ ( {J - 1} ) \times d}} = {\boldsymbol{U}} \begin{pmatrix} {{\boldsymbol{D}}_{r}} & {\boldsymbol{0}} \\ {\boldsymbol{0}} & {\boldsymbol{0}} \end{pmatrix} {{\boldsymbol{V}}^{\mathrm{T}}}, $$
(18)

where \({\boldsymbol{U}} = {{\boldsymbol{U}}_{ ( {J - 1} ) \times ( {J - 1} )}}\) and \({\boldsymbol{V}} = {{\boldsymbol{V}}_{d \times d}}\) are orthogonal matrices, and \({{\boldsymbol{D}}_{r}} = \operatorname{diag}({\lambda _{1}},{\lambda _{2}}, \ldots ,{\lambda _{r}})\). The matrix \({\boldsymbol{U}} = ( {{\boldsymbol{\theta }}_{1}},{{\boldsymbol{\theta }}_{2}}, \ldots ,{{\boldsymbol{\theta }} _{J - 1}})\) consists of the orthonormal eigenvectors of \({\boldsymbol{A}}{{\boldsymbol{A}} ^{\mathrm{T}}}\). The singular values \({\lambda _{i}} \) (\({i = 1,2, \ldots ,r} \)) satisfy \({\lambda _{1}} \ge {\lambda _{2}} \ge \cdots \ge {\lambda _{r}} > 0\). Denote the d columns of \({{\boldsymbol{A}} _{ ( {J-1} ) \times d}}\) by \({{\boldsymbol{\alpha }} ^{i}}={ ( {u_{1}^{{n_{i}}},u_{2}^{{n_{i}}}, \ldots ,u_{J-1}^{{n_{i}}}} )^{\mathrm{T}}}\) (\(i = 1,2, \ldots ,d\)), and define the projection \({P_{M}}\) by

$$ {P_{M}} \bigl({{\boldsymbol{\alpha }}^{i}} \bigr) = \sum_{j = 1}^{M} { \bigl({{\boldsymbol{\theta }} _{j}},{{\boldsymbol{\alpha }}^{i}} \bigr){{\boldsymbol{\theta }}_{j}}}, $$
(19)

where \(0 < M \le d\) and \(( \cdot , \cdot )\) is the inner product of vectors. Then the following result holds [41, 42]:

$$ { \bigl\Vert {{{\boldsymbol{\alpha }}^{i}} - {P_{M}} \bigl({{\boldsymbol{\alpha }}^{i}} \bigr)} \bigr\Vert _{2}} \le {\lambda _{M + 1}}, $$
(20)

where \(\| \cdot \|_{2}\) is the standard Euclidean norm of a vector. Thus \(\{ {{\boldsymbol{{\theta }}_{l}}} \}_{l = 1}^{M}\) is a group of optimal bases, and \({\boldsymbol{\theta }} = ({{\boldsymbol{\theta }}_{1}},{{\boldsymbol{\theta }} _{2}}, \ldots ,{{\boldsymbol{\theta }}_{M}})\) is the matrix formed by these orthonormal eigenvectors, so that \({{\boldsymbol{\theta }}^{T}}{\boldsymbol{\theta }} = {\boldsymbol{I}}\) (I is the identity matrix of order M).
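For illustration, the optimal basis θ can be obtained directly from the SVD of the snapshot matrix A. The following Python sketch (the function name pod_basis and the tolerance are ours, not taken from the paper) keeps either a prescribed number M of modes or all modes whose singular values exceed a tolerance, in line with the bound in Eq. (20).

```python
# A minimal sketch of extracting the POD basis of Eqs. (17)-(20).
import numpy as np

def pod_basis(snapshots, tol=1e-12, M=None):
    """snapshots: (J-1) x d array whose columns are u^{n_1}, ..., u^{n_d}.
    Returns (theta, s) with theta.T @ theta = I_M and s the singular values."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    if M is None:
        # Keep the modes whose singular values exceed tol, so that the
        # truncation bound lambda_{M+1} of Eq. (20) stays below tol.
        M = int(np.sum(s > tol))
    return U[:, :M], s
```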

In the following, we construct a reduced CFDS4 for problem \(\mathrm{P}_{1}\). Equation (15) can be rewritten as follows:

$$ \begin{aligned}[b] &u_{j}^{n + 1} - u_{j}^{n} \\ &\quad = \biggl(\frac{3}{5}r - \frac{1}{{10}} \biggr)u_{j - 1} ^{n + 1} - \frac{6}{5}ru_{j}^{n + 1} + \biggl( \frac{3}{5}r - \frac{1}{{10}} \biggr)u _{j + 1}^{n + 1} + \biggl(\frac{3}{5}r + \frac{1}{{10}} \biggr)u_{j - 1}^{n} \\ &\qquad {}- \frac{6}{5}ru_{j}^{n} + \biggl( \frac{3}{5}r + \frac{1}{{10}} \biggr)u_{j + 1} ^{n} + \frac{\tau }{{10}} \bigl( {f({x_{j - 1}},{t_{n + { \frac{1 }{2}}}}) + 10f({x_{j}},{t_{n + { \frac{1 }{2}}}}) + f({x_{j + 1}},{t_{n + { \frac{1 }{2}}}})} \bigr). \end{aligned} $$
(21)

Let \({{\boldsymbol{u}}^{n}} = { ( {u_{1}^{n},u_{2}^{n}, \ldots ,u_{J - 1} ^{n}} )^{\mathrm{T}}}\), then we can rewrite Eq. (21) as the following matrix form:

$$ \boldsymbol{u^{n + 1}} = \boldsymbol{u^{n}} + \boldsymbol{K_{1}}\boldsymbol{u^{n + 1}} + \boldsymbol{K_{2}} \boldsymbol{u^{n}} + \frac{\tau }{{10}}\boldsymbol{F^{n}} ,\quad n = 0,1, \ldots ,N - 1, $$
(22)

where

$$ {{\boldsymbol{K}}_{1}} = \begin{pmatrix} - \frac{6}{5}r & \frac{3}{5}r - \frac{1}{10} & & & 0 \\ \frac{3}{5}r - \frac{1}{10} & - \frac{6}{5}r & \frac{3}{5}r - \frac{1}{10} & & \\ & \ddots & \ddots & \ddots & \\ & & \frac{3}{5}r - \frac{1}{10} & - \frac{6}{5}r & \frac{3}{5}r - \frac{1}{10} \\ 0 & & & \frac{3}{5}r - \frac{1}{10} & - \frac{6}{5}r \end{pmatrix},\qquad {{\boldsymbol{K}}_{2}} = \begin{pmatrix} - \frac{6}{5}r & \frac{3}{5}r + \frac{1}{10} & & & 0 \\ \frac{3}{5}r + \frac{1}{10} & - \frac{6}{5}r & \frac{3}{5}r + \frac{1}{10} & & \\ & \ddots & \ddots & \ddots & \\ & & \frac{3}{5}r + \frac{1}{10} & - \frac{6}{5}r & \frac{3}{5}r + \frac{1}{10} \\ 0 & & & \frac{3}{5}r + \frac{1}{10} & - \frac{6}{5}r \end{pmatrix} $$

and

$$ {{\boldsymbol{F}}^{n}} = \begin{pmatrix} f_{1}^{n} + (\frac{3}{5}r + \frac{1}{10})u_{0}^{n} + (\frac{3}{5}r - \frac{1}{10})u_{0}^{n + 1} \\ f_{2}^{n} \\ \vdots \\ f_{J - 2}^{n} \\ f_{J - 1}^{n} + (\frac{3}{5}r + \frac{1}{10})u_{J}^{n} + (\frac{3}{5}r - \frac{1}{10})u_{J}^{n + 1} \end{pmatrix}, $$

\(f_{j}^{n} = f({x_{j - 1}},{t_{n + { \frac{1 }{2}}}}) + 10f( {x_{j}},{t_{n + { \frac{1 }{2}}}}) + f({x_{j + 1}},{t_{n + { \frac{1 }{2}}}})\) (\(j = 1,2, \ldots ,J - 1\)).
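The matrices \(\boldsymbol{K_{1}}\), \(\boldsymbol{K_{2}}\) and the source part of \(\boldsymbol{F^{n}}\) in Eq. (22) can be assembled as in the following Python sketch; it again assumes the homogeneous boundary values of problem \(\mathrm{P}_{1}\), so that the boundary corrections in \(\boldsymbol{F^{n}}\) vanish, and the function names assemble_K and assemble_F are illustrative.

```python
# A minimal sketch of assembling K1, K2, and the source part of F^n
# in Eq. (22), assuming zero boundary values.
import numpy as np
from scipy.sparse import diags

def assemble_K(r, J):
    m = J - 1
    K1 = diags([np.full(m - 1, 3/5*r - 1/10), np.full(m, -6/5*r),
                np.full(m - 1, 3/5*r - 1/10)], offsets=[-1, 0, 1]).toarray()
    K2 = diags([np.full(m - 1, 3/5*r + 1/10), np.full(m, -6/5*r),
                np.full(m - 1, 3/5*r + 1/10)], offsets=[-1, 0, 1]).toarray()
    return K1, K2

def assemble_F(f_half):
    """f_half holds f(x_j, t_{n+1/2}) at all nodes j = 0,...,J."""
    return f_half[:-2] + 10 * f_half[1:-1] + f_half[2:]
```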

Substituting

$$ {{\boldsymbol{u}}^{*n}} = {\boldsymbol{\theta }} {{ \boldsymbol{\beta }}^{n}} = {{\boldsymbol{\theta }} _{ ( {J - 1} ) \times M}} { \bigl({{\boldsymbol{\beta }}^{n}} \bigr)_{M \times 1}},\quad n = 0,1, \ldots ,N, $$
(23)

for \({{\boldsymbol{u}}^{n}}\) in Eq. (22), left-multiplying by \(\boldsymbol{\theta ^{\mathrm{T}}}\), and noting that \(\boldsymbol{\theta ^{T}}\boldsymbol{\theta } = \boldsymbol{I}\), we obtain the R-CFDS4 as follows:

$$ \boldsymbol{\beta ^{n + 1}} = \boldsymbol{\beta ^{n}} + \boldsymbol{\theta ^{\mathrm{T}}} \boldsymbol{K_{1}}\boldsymbol{\theta } \boldsymbol{\beta ^{n + 1}} + \boldsymbol{\theta ^{\mathrm{T}}} \boldsymbol{K_{2}}\boldsymbol{\theta } \boldsymbol{\beta ^{n}} + \frac{\tau }{{10}}\boldsymbol{\theta ^{\mathrm{T}}} \boldsymbol{F^{n}},\quad n = 0,1, \ldots ,N - 1, $$
(24)

where

$$ \boldsymbol{\beta ^{0}} = \boldsymbol{\theta ^{T}}\boldsymbol{u^{0}} = \boldsymbol{\theta ^{T}} { \bigl(u _{1}^{0},u_{2}^{0}, \ldots ,u_{J - 1}^{0} \bigr)^{T}}. $$
(25)

After \(\boldsymbol{\beta ^{n}}\) is obtained from Eq. (24), one gets the POD optimal solution \(\boldsymbol{u^{*n}} = \boldsymbol{\theta }\boldsymbol{\beta ^{n}}\). From the above derivation it can be seen that the CFDS4 requires solving a \(( {J - 1} ) \times ( {J - 1} )\) linear system at each time step, whereas the R-CFDS4 in Eq. (24) only requires an \(M \times M\) system. Since usually \(J \gg M\), the R-CFDS4 requires much less computational work than the CFDS4 within each time step, and thus saves a lot of computational time during the whole process of solving parabolic equations.
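A minimal Python sketch of the reduced time stepping of Eqs. (24)–(25) is given below; here θ is the POD basis, \(\boldsymbol{K_{1}}\) and \(\boldsymbol{K_{2}}\) are the matrices of Eq. (22), and the function name r_cfds4 is ours. All linear solves take place in the M-dimensional coordinates \(\boldsymbol{\beta ^{n}}\), and the full-order approximation is recovered as \(\boldsymbol{u^{*n}} = \boldsymbol{\theta }\boldsymbol{\beta ^{n}}\).

```python
# A minimal sketch of the reduced time stepping of Eqs. (24)-(25).
import numpy as np

def r_cfds4(theta, K1, K2, u0, F_list, tau):
    """theta: (J-1) x M POD basis; K1, K2: matrices of Eq. (22);
    u0: initial interior values; F_list: the vectors F^0, ..., F^{N-1}."""
    M = theta.shape[1]
    K1r = theta.T @ (K1 @ theta)            # reduced M x M matrices
    K2r = theta.T @ (K2 @ theta)
    A = np.eye(M) - K1r                     # implicit part of Eq. (24)
    beta = theta.T @ u0                     # Eq. (25)
    history = [theta @ beta]
    for Fn in F_list:
        rhs = beta + K2r @ beta + tau / 10 * (theta.T @ Fn)
        beta = np.linalg.solve(A, rhs)
        history.append(theta @ beta)        # lift back to the full space
    return history
```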

4 The error analysis of a reduced fourth-order FDS for 1D parabolic problem

Theorem 1

Let \(\boldsymbol{u^{n}}\) be the solution vector of Eq. (15) and \(\boldsymbol{u^{*n}} = \boldsymbol{\theta }\boldsymbol{\beta ^{n}}\) be the solution vector obtained from Eq. (24). If the snapshots \(\{ {u_{i}^{{n_{l}}}} \}_{l = 1}^{d}\) are uniformly chosen from \(\{ {u_{i}^{n}} \}_{n = 1}^{N}\) and \({ \Vert \boldsymbol{{K_{1}}} \Vert _{2}} \le \frac{1}{2}\), \({ \Vert \boldsymbol{{K_{2}}} \Vert _{2}} \le \frac{1}{2}\), then

$$ { \bigl\Vert {\boldsymbol{u^{*n}} - \boldsymbol{u^{n}}} \bigr\Vert _{2}} \le {\lambda _{M + 1}}\exp (N/d),\quad n = 1,2, \ldots ,N, $$
(26)

where \({ \Vert {\boldsymbol{K_{1}}} \Vert _{2}}\) and \({ \Vert {\boldsymbol{K_{2}}} \Vert _{2}}\) are the spectral norms of the matrices \(\boldsymbol{K_{1}}\) and \(\boldsymbol{K_{2}}\), respectively.

Proof

There are two cases to consider in Eq. (26). The first case is \(n = {n_{l}}\) (\(l = 1,2, \ldots ,d\)). Using the definition in Eq. (19),

$$ {P_{M}} \bigl({{\boldsymbol{u}}^{{n_{l}}}} \bigr) = \sum_{j = 1}^{M} { \bigl({ \boldsymbol{{\theta }} _{j}},{{\boldsymbol{u}}^{{n_{l}}}} \bigr){ \boldsymbol{{\theta }}_{j}}} $$
(27)

and Eq. (23) and Eq. (24), we have that \({{\boldsymbol{u}}^{*}} ^{{n_{l}}} = {P_{M}}({{\boldsymbol{u}}^{{n_{l}}}})\). Applying Eq. (20), we have the following conclusion:

$$ { \bigl\Vert {\boldsymbol{u^{{n_{l}}}} - \boldsymbol{u^{*{n_{l}}}}} \bigr\Vert _{2}} = { \bigl\Vert { \boldsymbol{u^{{n_{l}}}} - {P_{M}} \bigl( \boldsymbol{u^{{n_{l}}}} \bigr)} \bigr\Vert _{2}} \le { \lambda _{M + 1}} \quad (l = 1,2, \ldots ,d). $$
(28)

The second case is \(n \ne {n_{l}}\). Let \({t_{n}} \in ( {{t _{{n_{l}}}},{t_{{n_{l}} + 1}}} )\). Multiplying Eq. (24) by θ and comparing with Eq. (22), Eq. (24) can be written in a form similar to Eq. (22):

$$ \boldsymbol{u^{{*}n + 1}} = \boldsymbol{u^{{*}n}} + \boldsymbol{K_{1}} \boldsymbol{u^{{*}n + 1}} + \boldsymbol{K_{2}}\boldsymbol{u^{{*}n}} + \frac{ \tau }{{10}}\boldsymbol{F^{n}},\quad n = 0,1, \ldots ,N - 1. $$
(29)

Subtracting Eq. (29) from Eq. (22) and taking norms, we obtain

$$ { \bigl\Vert {\boldsymbol{u^{n + 1}} - \boldsymbol{u^{*n + 1}}} \bigr\Vert _{2}} \le { \bigl\Vert {\boldsymbol{u^{n}} - \boldsymbol{u^{*n}}} \bigr\Vert _{2}} + { \Vert {\boldsymbol{K_{1}}} \Vert _{2}} { \bigl\Vert {\boldsymbol{u^{n + 1}} - \boldsymbol{u^{*n + 1}}} \bigr\Vert _{2}} + { \Vert { \boldsymbol{K_{2}}} \Vert _{2}} { \bigl\Vert { \boldsymbol{u^{n}} - \boldsymbol{u^{*n}}} \bigr\Vert _{2}}. $$
(30)

We let \(\frac{c}{2} = \max \{ {{{ \Vert {{{\boldsymbol{K}}_{1}}} \Vert } _{2}},{{ \Vert {{{\boldsymbol{K}}_{2}}} \Vert }_{2}}} \}\), then

$$ { \bigl\Vert {\boldsymbol{u^{n + 1}} - \boldsymbol{u^{*n + 1}}} \bigr\Vert _{2}} \le { \bigl\Vert {\boldsymbol{u^{n}} - \boldsymbol{u^{*n}}} \bigr\Vert _{2}} + \frac{c}{2} \bigl[ {{{ \bigl\Vert { \boldsymbol{u^{n + 1}} - \boldsymbol{u^{*n + 1}}} \bigr\Vert }_{2}} + {{ \bigl\Vert {\boldsymbol{u^{n}} - \boldsymbol{u^{*n}}} \bigr\Vert }_{2}}} \bigr]. $$
(31)

Summing Eq. (31) from \({n_{l}}\), \({n_{l}} + 1, \ldots ,n - 1\), we get

$$ { \bigl\Vert {\boldsymbol{u^{n + 1}} - \boldsymbol{u^{*n + 1}}} \bigr\Vert _{2}} \le { \bigl\Vert {\boldsymbol{u^{{n_{l}}}} - \boldsymbol{u^{*{n_{l}}}}} \bigr\Vert _{2}} + c\sum_{j = {n_{l}}}^{n - 1} {{{ \bigl\Vert {\boldsymbol{u^{j}} - \boldsymbol{u^{*j}}} \bigr\Vert }_{2}}}. $$
(32)

By the discrete Gronwall lemma, we get

$$ { \bigl\Vert {\boldsymbol{u^{n + 1}} - \boldsymbol{u^{*n + 1}}} \bigr\Vert _{2}} \le { \bigl\Vert {\boldsymbol{u^{{n_{l}}}} - \boldsymbol{u^{*{n_{l}}}}} \bigr\Vert _{2}} \exp \bigl(c(n - {n_{l}}) \bigr). $$
(33)
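For the reader's convenience, the discrete Gronwall lemma is used here in the following standard form: if nonnegative numbers \(a_{k}\) and constants \(b \ge 0\), \(c \ge 0\) satisfy

$$ a_{n} \le b + c\sum_{k = {n_{l}}}^{n - 1} {a_{k}},\quad n \ge {n_{l}}, $$

then

$$ a_{n} \le b\exp \bigl(c(n - {n_{l}}) \bigr),\quad n \ge {n_{l}}; $$

Eq. (33) follows by taking \(a_{k} = { \Vert {\boldsymbol{u^{k}} - \boldsymbol{u^{*k}}} \Vert _{2}}\) and \(b = { \Vert {\boldsymbol{u^{{n_{l}}}} - \boldsymbol{u^{*{n_{l}}}}} \Vert _{2}}\).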

Since \({ \Vert {\boldsymbol{K_{1}}} \Vert _{2}} \le \frac{1}{2}\) and \({ \Vert {\boldsymbol{K_{2}}} \Vert _{2}} \le \frac{1}{2}\) imply \(c \le 1\), and since the snapshots \(\{ {u_{i}^{{n_{l}}}} \}_{l = 1}^{d}\) are uniformly chosen from \(\{ {u_{i}^{n}} \}_{n = 1}^{N}\), which implies \(n - {n_{l}} \le N/d\), we obtain from Eq. (33) that

$$ { \bigl\Vert {\boldsymbol{u^{n + 1}} - \boldsymbol{u^{*n + 1}}} \bigr\Vert _{2}} \le { \bigl\Vert {\boldsymbol{u^{{n_{l}}}} - \boldsymbol{u^{*{n_{l}}}}} \bigr\Vert _{2}} \exp (N/d) \le {\lambda _{M + 1}}\exp (N/d). $$

Combining the above two cases yields Eq. (26), which completes the proof of Theorem 1. □

Theorem 2

Under the assumptions of Theorem 1, let \({u_{j} ^{*n}}\) be the solution of the R-CFDS4 and \(u ( {{x_{j}},{t_{n}}} )\) be the exact solution of problem (\(\mathrm{P}_{1}\)). Then the following error estimate holds:

$$ \begin{aligned}[b] \bigl\vert {u_{j}^{*n} - u ( {{x_{j}},{t_{n}}} )} \bigr\vert = O \bigl({ \lambda _{M + 1}}\exp (N/d),{\tau ^{2}},{h^{4}} \bigr), \quad n = 1,2, \ldots ,N. \end{aligned} $$
(34)

Proof

Let \(u_{j}^{n}\) be the solution of the CFDS4, then

$$ \begin{aligned}[b] & \bigl\vert {u_{j}^{*n} - u ( {{x_{j}},{t_{n}}} )} \bigr\vert = \bigl\vert { \bigl(u_{j}^{*n} - u_{j}^{n} \bigr) + u_{j}^{n} - u ( {{x_{j}}, {t_{n}}} )} \bigr\vert \le \bigl\vert {u_{j}^{*n} - u_{j}^{n}} \bigr\vert + \bigl\vert {u_{j}^{n} - u ( {{x_{j}},{t_{n}}} )} \bigr\vert , \\ & j = 1,2, \ldots ,J - 1, \end{aligned} $$
(35)

since, by Eq. (16), the second term satisfies

$$ \bigl\vert {u_{j}^{n} - u ( {{x_{j}},{t_{n}}} )} \bigr\vert = O \bigl( {\tau ^{2}} , {h^{4}} \bigr). $$
(36)

Meanwhile, according to Theorem 1 and the fact that the magnitude of each entry of a vector is bounded by its 2-norm, the first term satisfies

$$ \bigl\vert {u_{j}^{*n} - u_{j}^{n}} \bigr\vert \le { \bigl\Vert { \boldsymbol{u^{*n}} - \boldsymbol{u^{n}}} \bigr\Vert _{2}} \le {\lambda _{M + 1}} \exp (N/d). $$
(37)

Thus, we can obtain Eq. (34). □

5 A reduced fourth-order compact FDS for 2D parabolic problem and its error analysis

In this section, we first construct an alternating direction implicit fourth-order compact finite difference scheme (ADI-CFDS4) for 2D parabolic equations, then develop the R-ADI-CFDS4 for 2D parabolic equations and provide error estimates between the solutions of the R-ADI-CFDS4 and the exact solutions, in analogy with the 1D parabolic problem.

Let \(\varOmega \subset {R^{2}}\) be a bounded domain. Consider the following initial-boundary value problem (\(\mathrm{P}_{2}\)):

$$ \textstyle\begin{cases} \frac{{\partial u ( {x,y,t} )}}{{\partial t}} - ( { \frac{{{\partial ^{2}}u ( {x,y,t} )}}{{\partial {x^{2}}}} + \frac{{{\partial ^{2}}u ( {x,y,t} )}}{{\partial {y^{2}}}}} ) = f ( {x,y,t} ), & (x,y,t) \in \varOmega \times (0,T), \\ u(x,y,0) = \varphi (x,y),& (x,y) \in \varOmega , \\ u(x,y,t) = g(x,y,t),& (x,y,t) \in \partial \varOmega \times (0,T). \end{cases} $$
(38)

Partition the domain \(\varOmega \times (0,T)\) with \({x_{i}} = i{h_{x}} \) (\(i = 0,1,2, \ldots ,L\)), \({y_{j}} = j{h_{y}}\) (\(j = 0,1,2, \ldots ,J\)), and \({t_{n}} = n\tau \) (\(n = 0,1,2, \ldots ,N\)), where τ is the temporal step, and \({h_{x}}\) and \({h_{y}}\) are the spatial steps in the x-direction and y-direction, respectively.

Let \(u_{i,j}^{n} \approx u({x_{i}},{y_{j}},{t_{n}})\), \({r_{1}} = \frac{ \tau }{{h_{x}^{2}}}\), \({r_{2}} = \frac{\tau }{{h_{y}^{2}}}\), and define the compact average operators

$$ \textstyle\begin{cases} \varepsilon _{x}^{2}u_{i,j}^{n} = \frac{1}{{12}}(u_{i - 1,j}^{n} + 10u _{i,j}^{n} + u_{i + 1,j}^{n}), \\ \varepsilon _{y}^{2}u_{i,j}^{n} = \frac{1}{{12}}(u_{i,j - 1}^{n} + 10u _{i,j}^{n} + u_{i,j + 1}^{n}) \end{cases} $$
(39)

and the second-order central difference operators

$$ \textstyle\begin{cases} \delta _{x}^{2}u_{i,j}^{n} = u_{i - 1,j}^{n} - 2u_{i,j}^{n} + u_{i + 1,j} ^{n}, \\ \delta _{y}^{2}u_{i,j}^{n} = u_{i,j - 1}^{n} - 2u_{i,j}^{n} + u_{i,j + 1}^{n} . \end{cases} $$
(40)

Then the ADI-CFDS4 for problem \(\mathrm{P}_{2}\) can be described as follows:

$$ \textstyle\begin{cases} (\varepsilon _{x}^{2} - \frac{{{r_{1}}}}{2}\delta _{x}^{2})\bar{u}_{i,j} ^{n + 1} = ({r_{1}}\varepsilon _{y}^{2}\delta _{x}^{2} + {r_{2}} \varepsilon _{x}^{2}\delta _{y}^{2})u_{i,j}^{n} + \tau \varepsilon _{x} ^{2}\varepsilon _{y}^{2}f_{i,j}^{n + \frac{1}{2}}, \\ (\varepsilon _{y}^{2} - \frac{{{r_{2}}}}{2}\delta _{y}^{2})u_{i,j}^{n + 1} = \bar{u}_{i,j}^{n + 1} + (\varepsilon _{y}^{2} - \frac{{{r_{2}}}}{2}\delta _{y}^{2})u_{i,j}^{n}. \end{cases} $$
(41)

In Eq. (41), \(1 \le i \le L - 1\), \(1 \le j \le J - 1 \), \(0 \le n \le N - 1\).

The difference scheme is unconditionally stable [40], and the error is as follows:

$$ \bigl|u_{i,j}^{n} - u({x_{i}},{y_{j}},{t_{n}})\bigr| = O \bigl({\tau ^{2}},h_{x}^{4},h _{y}^{4} \bigr). $$
(42)
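To make the structure of Eq. (41) concrete, the following Python sketch performs one ADI-CFDS4 step on the interior unknowns under homogeneous Dirichlet boundary values (as in the examples of Sect. 6). Dense 1D operator matrices are used purely for readability, and the function names compact_ops and adi_cfds4_step are illustrative; this is a sketch, not the implementation used for the results below.

```python
# A minimal sketch of one ADI-CFDS4 step, Eq. (41), with zero boundary values.
# U is an (L-1) x (J-1) array (rows index x, columns index y), and Fh holds
# f(x_i, y_j, t_{n+1/2}) at the interior nodes.
import numpy as np

def compact_ops(m):
    """1D compact-average (eps^2) and second-difference (delta^2) matrices
    of Eqs. (39)-(40) acting on m interior nodes."""
    E = (10 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)) / 12.0
    D = -2 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    return E, D

def adi_cfds4_step(U, Fh, tau, hx, hy):
    r1, r2 = tau / hx**2, tau / hy**2
    Ex, Dx = compact_ops(U.shape[0])        # act along x (left multiplication)
    Ey, Dy = compact_ops(U.shape[1])        # act along y (right multiplication)
    # x-sweep: (eps_x^2 - r1/2 delta_x^2) Ubar = (r1 eps_y^2 delta_x^2
    #          + r2 eps_x^2 delta_y^2) U^n + tau eps_x^2 eps_y^2 f^{n+1/2}.
    rhs = r1 * (Dx @ U @ Ey) + r2 * (Ex @ U @ Dy) + tau * (Ex @ Fh @ Ey)
    Ubar = np.linalg.solve(Ex - 0.5 * r1 * Dx, rhs)
    # y-sweep: (eps_y^2 - r2/2 delta_y^2)(U^{n+1} - U^n) = Ubar.
    Ay = Ey - 0.5 * r2 * Dy
    return U + np.linalg.solve(Ay, Ubar.T).T
```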

Let \(u_{l}^{n} = u_{i,j}^{n}\), \(f_{l}^{n} = f_{i,j}^{n}\) (\(l = ( {i - 1} ) \cdot J + j\), \(m = L \cdot J\), \(1 \le l \le m\), \(1 \le j \le J\), \(1 \le i \le L\), \(1 \le n \le N\)). Using similar methods to those in Sect. 3, we obtain the POD basis \(\boldsymbol{\theta } = {\boldsymbol{\theta } _{m \times M}}\) such that \(\boldsymbol{\theta ^{T}}\boldsymbol{\theta } = {\boldsymbol{I}_{M \times M}}\). Then the corresponding vector formulation of Eq. (41) can be written as follows:

$$ \textstyle\begin{cases} ({{\boldsymbol{K}}_{\mathbf{{1}}}} - \frac{{{r_{1}}}}{2}{{\boldsymbol{K}}_{ \mathbf{{2}}}}){\bar{\boldsymbol{u}}^{n}} = ({r_{1}}{{\boldsymbol{K}}_{\mathbf{{2}}}} {{\boldsymbol{K}}_{\mathbf{{1}}}} + {r_{2}}{{\boldsymbol{K}}_{\mathbf{{2}}}}{{\boldsymbol{K}} _{\mathbf{{1}}}})\boldsymbol{u^{n}} + \tau {{\boldsymbol{K}}_{1}}{{\boldsymbol{K}}_{ \mathbf{{1}}}}\boldsymbol{F^{n}}, \\ ({{\boldsymbol{K}}_{\mathbf{{2}}}} - \frac{{{r_{2}}}}{2}{{\boldsymbol{K}}_{ \mathbf{{1}}}})\boldsymbol{u^{n + 1}} = {\bar{\boldsymbol{u}}^{n}} + ({{\boldsymbol{K}}_{ \mathbf{{1}}}} - \frac{{{r_{2}}}}{2}{{\boldsymbol{K}}_{\mathbf{{2}}}}) \boldsymbol{u^{n}}, \end{cases} $$
(43)

where \(\boldsymbol{u^{n}} = {(u_{1}^{n},u_{2}^{n},u_{3}^{n}, \ldots ,u_{m} ^{n})^{T}}\), \(\boldsymbol{F^{n}} = {(f_{1}^{n},f_{2}^{n},f_{3}^{n}, \ldots ,f _{m}^{n})^{T}}\), and \(\bar{\boldsymbol{u}}^{\boldsymbol{n}} = {(\bar{u}_{1}^{n},\bar{u} _{2}^{n},\bar{u}_{3}^{n}, \ldots ,\bar{u}_{m}^{n})^{T}}\). The forms of \(\boldsymbol{K_{1}}\) and \(\boldsymbol{K_{2}}\) are the same as in Eq. (22) except for their order. If \(\boldsymbol{u^{n}}\), \(\bar{\boldsymbol{u}}^{\boldsymbol{n}}\), and \(\boldsymbol{F^{n}}\) in Eq. (43) are replaced by \(\boldsymbol{u^{*n}} = {\boldsymbol{\theta } _{m \times M}}{( \boldsymbol{\alpha ^{n}})_{M \times 1}}\), \(\bar{\boldsymbol{u}}^{\boldsymbol{n}} = {\boldsymbol{\theta } _{m \times M}}{(\bar{\boldsymbol{\alpha }}^{\boldsymbol{n}})_{M \times 1}}\), and \(\boldsymbol{F^{n}} = {\boldsymbol{\theta } _{m \times M}}(\bar{\boldsymbol{F}}^{\boldsymbol{n}})_{M \times 1}\) (\(n = 0,1,2, \ldots ,N\)), respectively, then we obtain the R-ADI-CFDS4 as follows:

$$ \textstyle\begin{cases} \boldsymbol{\theta ^{T}}(\boldsymbol{K_{1}} - \frac{{{r_{1}}}}{2}\boldsymbol{K_{2}}) \boldsymbol{\theta } \bar{\boldsymbol{\alpha }}^{\boldsymbol{n}} = \boldsymbol{\theta ^{T}}({r_{1}} \boldsymbol{K_{2}}\boldsymbol{K_{1}} + {r_{2}}\boldsymbol{K_{2}}\boldsymbol{K_{1}})\boldsymbol{\theta } \boldsymbol{\alpha ^{n}} + \tau \boldsymbol{\theta ^{T}}\boldsymbol{K_{1}}\boldsymbol{K_{1}} \boldsymbol{\theta } \bar{\boldsymbol{F}}^{\boldsymbol{n}}, \\ \boldsymbol{\theta ^{T}}(\boldsymbol{K_{2}} - \frac{{{r_{2}}}}{2}\boldsymbol{K_{1}}) \boldsymbol{\theta } \boldsymbol{\alpha ^{n+1}} = \bar{\boldsymbol{\alpha }}^{\boldsymbol{n}} + \boldsymbol{\theta ^{T}}(\boldsymbol{K_{2}} - \frac{{{r _{2}}}}{2}\boldsymbol{K_{1}})\boldsymbol{\theta } \boldsymbol{\alpha ^{n}} , \end{cases} $$
(44)

where \(n = 0,1,2, \ldots ,N-1\) and \(\boldsymbol{\alpha ^{0}} = \boldsymbol{\theta ^{T}} \boldsymbol{u^{0}} = \boldsymbol{\theta ^{T}} \boldsymbol{(u_{1}^{0},u_{2}^{0}, \ldots ,u_{m}^{0})^{T}}\).

After solving \({\alpha ^{n}}\) from Eq. (44), one gets the POD optimal solutions \(\boldsymbol{u^{*n}} = \boldsymbol{\theta } \boldsymbol{\alpha ^{n}} \) (\(n = 0,1,2, \ldots ,N\)) of problem (\(\mathrm{P}_{2}\)).

Theorem 3

Let \(u_{i,j}^{*n}\) denote the entries of the POD solutions \(\boldsymbol{u^{*n}} = \boldsymbol{\theta }\boldsymbol{\alpha ^{n}} \) (\(n = 0,1,2, \ldots ,N\)) of Eq. (44), \(u_{i,j}^{n}\) the solutions of Eq. (43), and \(u ( {{x_{i}},{y_{j}},{t_{n}}} )\) the exact solution of problem (\(\mathrm{P}_{2}\)). If \({ \Vert {\boldsymbol{K_{1}}} \Vert _{2}} \le \frac{1}{2}\), \({ \Vert {\boldsymbol{K_{2}}} \Vert _{2}} \le \frac{1}{2}\), and the snapshots are uniformly chosen from \(\{ {u_{i,j}^{n}} \} _{n = 1}^{N}\), we conclude that

$$ \bigl\vert {u_{i,j}^{*n} - u ( {{x_{i}},{y_{j}},{t_{n}}} )} \bigr\vert = O \bigl( {{\lambda _{M + 1}}\exp (N/d),{\tau ^{2}},h_{x}^{4},h _{y}^{4}} \bigr). $$
(45)

6 Numerical examples

In this section, we use four test problems to demonstrate the advantages of the R-CFDS4 for 1D parabolic equations and of the R-ADI-CFDS4 for 2D parabolic equations. Our algorithm is implemented in MATLAB R2017a running on a desktop with an Intel Core i7 4790 CPU at 2.93 GHz and 7.98 GB of memory.

Example 1

Consider 1D parabolic equation (\(\mathrm{SP} _{1}\))

$$ \textstyle\begin{cases} \frac{{\partial {u}}}{{\partial t}} - \frac{{{\partial ^{2}}u}}{{\partial {x^{2}}}} = 0, & 0 < t < 2, 0 < x < \pi , \\ u(0,t) = u(\pi ,t) = 0,& 0 \le t \le 2, \\ u(x,0) = \sin (x), & 0 \le x \le \pi . \end{cases} $$

The exact solution is \(u(x,t) = \exp ( - t)\sin (x)\). We take the numerical solutions of the CFDS4 with \(h=0.02\pi \), \(\tau =0.01\) at \(t = 0.1,0.2,0.3, \ldots ,2\) as snapshots. Computation shows that the singular value \({\lambda _{6}} \le 6 \times {10^{ - 16}}\). The number of optimal basis vectors is chosen as in [31]. By the estimate in Theorem 1, as long as the first five or more eigenvectors of the matrix \({\boldsymbol{A}}{{\boldsymbol{A}}^{\mathrm{T}}}\) are chosen as the optimal basis, the R-CFDS4 satisfies the desired accuracy requirement. Numerical solutions of the R-CFDS4 and the difference between the CFDS4 and the R-CFDS4 at \(t=0.4,0.8,1.2,1.6,2.0\) are drawn in Fig. 1, which shows that the results of the CFDS4 are in very good agreement with those of the R-CFDS4. The errors of the CFDS4 and the R-CFDS4 at several points and the computational times are reported in Table 1 and Table 2. This implies that the R-CFDS4 is more efficient than the CFDS4 while guaranteeing sufficiently accurate numerical solutions of the 1D parabolic equation. In Fig. 2, the errors of the CFDS4 and the R-CFDS4 are almost identical. Moreover, the difference between the CFDS4 and the R-CFDS4 is drawn, which means that the R-CFDS4 achieves almost the same accuracy as the CFDS4 with the same nodes and steps. The differences between the CFDS4 and the R-CFDS4 with five optimal bases are no more than \(3 \times {10^{ - 15}}\) in Fig. 1. In Fig. 3 and Table 3, the R-CFDS4 solution based on five POD bases at different moments and the details of the computing time at \(t=6\) are given, which shows that the R-CFDS4 can be extended to a time interval longer than the one on which the snapshots were collected.

Figure 1
figure 1

The figures of solutions of R-CFDS4 (left) and difference (right) between R-CFDS4 and CFDS4 with \(h=0.02\pi \) and \(\tau =0.01\) for \(\mathrm{SP} _{1}\)

Figure 2
figure 2

The errors of solutions of R-CFDS4 (left) and CFDS4 (right) with \(h=0.02\pi \) and \(\tau =0.01\) for \(\mathrm{SP} _{1}\)

Figure 3
figure 3

The figures of solutions of R-CFDS4 (left) and difference (right) between R-CFDS4 and CFDS4 with \(h=0.02\pi \), \(\tau =0.01\) for \(\mathrm{SP} _{1}\)

Table 1 Numerical results of \(\mathrm{SP} _{1}\) at \(t = 2\), with \(h=0.02\pi \), \(\tau =0.01\)
Table 2 Numerical results of \(\mathrm{SP} _{1}\) at \(t = 2\), with \(h=0.01\pi \), \(\tau =0.0025\)
Table 3 The computing time of the two schemes for \(\mathrm{SP} _{1}\) at \(t=6\), with \(h=0.02\pi \), \(\tau =0.01\)
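For completeness, the sketches of Sects. 2 and 3 can be combined into a small driver reproducing the setting of this example; the function names cfds4_step_factory, pod_basis, assemble_K, and r_cfds4 refer to the illustrative Python fragments given earlier, and the results reported in the tables were obtained with the MATLAB implementation, not with this sketch.

```python
# A driver sketch for the setting of Example 1 (SP_1): f = 0, five POD bases.
import numpy as np

L_dom, T, a = np.pi, 2.0, 1.0
h, tau = 0.02 * np.pi, 0.01
J, N = round(L_dom / h), round(T / tau)
x = np.linspace(0.0, L_dom, J + 1)
u = np.sin(x[1:-1])                      # interior initial values
zero_f = np.zeros(J + 1)                 # source term f = 0 for SP_1

step = cfds4_step_factory(a, h, tau, J)
snapshots = []
for n in range(N):
    u = step(u, zero_f)
    if (n + 1) % 10 == 0:                # snapshots at t = 0.1, 0.2, ..., 2
        snapshots.append(u.copy())

theta, s = pod_basis(np.column_stack(snapshots), M=5)
K1, K2 = assemble_K(a * tau / h**2, J)
history = r_cfds4(theta, K1, K2, np.sin(x[1:-1]), [np.zeros(J - 1)] * N, tau)

u_exact = np.exp(-T) * np.sin(x[1:-1])
print(np.max(np.abs(history[-1] - u_exact)))   # maximal R-CFDS4 error at t = 2
```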

To show that the R-CFDS4 is fourth-order accurate in space and second-order accurate in time, as stated in Eq. (34), we first fix \(\tau =0.0002\) and then reduce h by a factor of 2 each time. The data in Table 4 clearly show the fourth-order accuracy in space, since the maximal error is reduced by a factor of about \(2^{4}\) each time. In a similar way, the second-order accuracy in time of Eq. (34) is confirmed, because the maximal error is reduced by a factor of about \(2^{2}\) each time with \(h=0.0025\pi \) in Table 5.

Table 4 The convergence order for R-CFDS4 at \(\tau = 0.0002\)
Table 5 The convergence order for R-CFDS4 at \(h=0.0025\pi \)
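The convergence orders in Tables 4 and 5 (and in the corresponding tables of the later examples) are obtained as base-2 logarithms of the ratios of maximal errors on successively halved steps; a minimal sketch is given below, where the error values in the comment are purely hypothetical and serve only to illustrate the calculation.

```python
# A minimal sketch of computing observed convergence orders.
import numpy as np

def observed_orders(errors):
    """errors: maximal errors obtained while halving the step each time.
    Returns log2(E_k / E_{k+1}), which should approach 4 when refining in
    space and 2 when refining in time for the R-CFDS4."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Hypothetical values: observed_orders([1.6e-4, 1.0e-5, 6.3e-7]) -> about [4., 4.]
```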

Example 2

Consider 1D parabolic equation (\(\mathrm{SP} _{2}\))

$$ \textstyle\begin{cases} \frac{{\partial {u}}}{{\partial t}} - \frac{{{ \partial ^{2}}u}}{{\partial {x^{2}}}} = x{e^{t}} - 6x, & 0 < t \le 1, 0 < x < 1, \\ u(0,t) = 0,\qquad u(1,t) = 1 + {e^{t}}, & 0 \le t \le 1, \\ u(x,0) = {x^{3}} + x, & 0 \le x \le 1. \end{cases} $$

The exact solution is \(u(x,t) = x({x^{2}} + {e^{t}})\). We take the numerical solutions of the CFDS4 with \(h=0.02\), \(\tau =0.01\) at \(t = 0.05,0.1,0.15, \ldots ,1\) as snapshots. We choose the first ten columns of U as the optimal basis to ensure the desired accuracy; computation shows that the singular value \({\lambda _{11}} \le 2 \times {10^{ - 13}}\). Similarly to Example 1, numerical solutions of the R-CFDS4 and the difference between the R-CFDS4 and the CFDS4 at \(t=0.2,0.4,0.6,0.8,1.0\) are drawn in Fig. 4. It is not difficult to see that the results of the CFDS4 are in excellent agreement with those of the R-CFDS4. The errors of the CFDS4 and the R-CFDS4 are reported in Table 6 and exhibited in Fig. 5; the two schemes are basically identical. Furthermore, comparing the CPU time expended by the R-CFDS4 with that of the CFDS4 in Table 6 and Table 7 clearly reveals the advantage of the R-CFDS4 in computational efficiency. The differences between the CFDS4 and the R-CFDS4 with ten optimal bases are no more than \(8 \times {10^{ - 10}}\) in Fig. 4. In Fig. 6 and Table 8, the R-CFDS4 solution based on ten POD bases at different moments and the details of the computing time at \(t=5\) are given, which shows that the R-CFDS4 can be extended to a time interval longer than the one on which the snapshots were collected.

Figure 4
figure 4

The figures of solutions of R-CFDS4 (left) and difference (right) between R-CFDS4 and CFDS4 with \(h=0.02\) and \(\tau =0.01\) for \(\mathrm{SP} _{2}\)

Figure 5
figure 5

The errors of solutions of R-CFDS4 (left) and CFDS4 (right) with \(h=0.02\) and \(\tau =0.01\) for \(\mathrm{SP} _{2}\)

Figure 6
figure 6

The figures of solutions of R-CFDS4 (left) and difference (right) between R-CFDS4 and CFDS4 with \(h=0.02\) and \(\tau =0.01\) for \(\mathrm{SP} _{2}\)

Table 6 Numerical results of \(\mathrm{SP} _{2}\) at \(t = 1\), with \(h=0.02\), \(\tau =0.01\)
Table 7 Numerical results of \(\mathrm{SP} _{2}\) at \(t = 1\), with \(h=0.01\), \(\tau =0.0025\)
Table 8 The computing time of the two schemes for \(\mathrm{SP} _{2}\) at \(t=5\), with \(h=0.02\), \(\tau =0.01\)

In order to check whether the convergence order is consistent with the theoretical results, τ is fixed at 0.0025 and h is reduced by a factor of 2 each time. The maximal error in Table 9 is reduced by a factor of about \(2^{4}\) each time, which confirms the fourth-order spatial accuracy of Eq. (34). In a similar way, we fix h at 0.0025; the maximal error is then reduced by a factor of about \(2^{2}\) each time in Table 10, which indicates that the R-CFDS4 is second-order accurate in time.

Table 9 The convergence order for R-CFDS4 at \(\tau =0.0025\)
Table 10 The convergence order for R-CFDS4 at \(h=0.0025\)

Example 3

Consider the 2D parabolic equation (\(\mathrm{SP} _{3}\))

$$ \textstyle\begin{cases} \frac{{\partial u}}{{\partial t}} = \frac{{{\partial ^{2}}u}}{{\partial {x^{2}}}} + \frac{{{\partial ^{2}}u}}{{\partial {y^{2}}}}, & (x,y) \in \varOmega ,0 < t \le 2, \\ u(x,y,0) = \sin x\sin y, & (x,y) \in \varOmega , \\ u(x,y,t) = 0, & (x,y) \in \partial \varOmega , 0 < t \le 2, \end{cases} $$

where \(\varOmega = \{ (x,y);0 \le x \le \pi ,0 \le y \le \pi \} \) and ∂Ω denotes the boundary of Ω.

In this example, the exact solution is \(u(x,y,t) = {e^{ - 2t}}\sin x \sin y\). We take the numerical solutions of the ADI-CFDS4 with \({h_{x}}= {h_{y}}= 0.02\pi \), \(\tau = 0.02\) at \(t = 0.1,0.2,0.3, \ldots ,2\) as snapshots. Computation shows that the singular value \({\lambda _{6}} \le 2 \times {10^{ - 15}}\). Similarly to Example 1, we also choose five POD bases for our computation of the 2D parabolic equation. Numerical solutions of the R-ADI-CFDS4 and the difference between the R-ADI-CFDS4 and the ADI-CFDS4 are drawn in Fig. 7 and Fig. 9. We can clearly see that the numerical solutions of the R-ADI-CFDS4 are in excellent agreement with those of the ADI-CFDS4. In Table 13 and Table 14, Error 1 is the error of the R-ADI-CFDS4 and Error 2 is the error of the ADI-CFDS4. The errors of the ADI-CFDS4 and the R-ADI-CFDS4 at several points and the computational times are compared in Table 13 and Table 14, which indicates that the R-ADI-CFDS4 saves considerable computational time compared with the ADI-CFDS4 while guaranteeing its accuracy. In Fig. 9 and Fig. 7, it can be seen that the numerical solution of the R-ADI-CFDS4 is almost identical to that of the ADI-CFDS4. In Fig. 8, the error of the ADI-CFDS4 is depicted, which shows its sufficient accuracy. The differences between the ADI-CFDS4 and the R-ADI-CFDS4 with five optimal bases are no more than \(3 \times {10^{ - 14}}\) in Fig. 9 and Fig. 7. In Table 15, the details of the computing time at \(t=4\) are given, which shows the advantage of the R-ADI-CFDS4.

Figure 7
figure 7

The figures of solutions of R-ADI-CFDS4 (left) and difference (right) between R-ADI-CFDS4 and ADI-CFDS4 at \(t=4\), with \({h_{x}}={h_{y}}=0.02\pi \) and \(\tau =0.02\) for \(\mathrm{SP} _{3}\)

Figure 8
figure 8

The error of ADI-CFDS4 at \(t=2\), with \({h_{x}}={h_{y}}=0.02\pi \) and \(\tau =0.02\) for \(\mathrm{SP} _{3}\)

Figure 9
figure 9

The figures of solutions of R-ADI-CFDS4 (left) and difference (right) between R-ADI-CFDS4 and ADI-CFDS4 at \(t=2\), with \({h_{x}}={h_{y}}=0.02\pi \) and \(\tau =0.02\) for \(\mathrm{SP} _{3}\)

In order to verify the convergence order of Eq. (42), we fix τ at 0.0001 in Table 11. The maximal error is reduced by a factor of about \(2^{4}\) each time, which shows that the R-ADI-CFDS4 is fourth-order accurate in space. Similarly, the data in Table 12 confirm the second-order accuracy in time of the R-ADI-CFDS4.

Table 11 The convergence order for R-ADI-CFDS4 at \(\tau =0.0001\)
Table 12 The convergence order for R-ADI-CFDS4 at \({h_{x}}={h_{y}}= 0.02\pi \)
Table 13 Numerical results of \(\mathrm{SP} _{3}\) at time \(t = 2\), with \({h_{x}}={h_{y}}= 0.02\pi \), \(\tau = 0.02\)
Table 14 Numerical results of \(\mathrm{SP} _{3}\) at \(t=2\), with \({h_{x}}={h_{y}}= 0.01\pi \), \(\tau =0.005\)
Table 15 The computing time of the two schemes for \(\mathrm{SP} _{3}\) at \(t=4\), with \({h_{x}}={h_{y}}= 0.02\pi \), \(\tau = 0.02\)
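Analogously, a small driver sketch for the setting of this example, built on the adi_cfds4_step fragment from Sect. 5, is given below; as before, the reported results were produced by the MATLAB implementation, and this sketch is only illustrative.

```python
# A driver sketch for the setting of Example 3 (SP_3): f = 0,
# hx = hy = 0.02*pi, tau = 0.02.
import numpy as np

h, tau, T = 0.02 * np.pi, 0.02, 2.0
Lx = round(np.pi / h)                       # number of intervals in x and y
x = np.linspace(0.0, np.pi, Lx + 1)
X, Y = np.meshgrid(x[1:-1], x[1:-1], indexing='ij')
U = np.sin(X) * np.sin(Y)                   # interior initial values
F0 = np.zeros_like(U)                       # source term is zero for SP_3

for n in range(round(T / tau)):
    U = adi_cfds4_step(U, F0, tau, h, h)

U_exact = np.exp(-2 * T) * np.sin(X) * np.sin(Y)
print(np.max(np.abs(U - U_exact)))          # maximal ADI-CFDS4 error at t = 2
```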

Example 4

Consider the 2D parabolic equation (\(\mathrm{SP} _{4}\))

$$ \textstyle\begin{cases} \frac{{\partial u}}{{\partial t}} - ( { \frac{{{\partial ^{2}}u}}{{\partial {x^{2}}}} + \frac{{{\partial ^{2}}u}}{{\partial {y^{2}}}}} ) = {e^{ - t}} \sin (x)\sin (y), & (x,y) \in \varOmega , 0 < t \le 2, \\ u(x,y,0) = \sin (x)\sin (y), & (x,y) \in \varOmega , \\ u(x,y,t) = 0, & (x,y) \in \partial \varOmega , 0 < t \le 2. \end{cases} $$

\(\varOmega = \{ (x,y);0 \le x \le \pi ,0 \le y \le \pi \} \) and ∂Ω denotes the boundary of Ω. The exact solution is \(u(x,y,t) = {e^{ - t}}\sin x\sin y\). We take the numerical solutions of the ADI-CFDS4 with \({h_{x}}={h_{y}}= 0.02\pi \), \(\tau = 0.02\) at \(t = 0.1,0.2,0.3, \ldots ,2\) as snapshots. Computation shows that the singular value \({\lambda _{6}} \le 7 \times {10^{ - 15}}\). Numerical solutions of the R-ADI-CFDS4 and the difference between the R-ADI-CFDS4 and the ADI-CFDS4 are drawn in Fig. 10. We can clearly see that the approximate solutions of the R-ADI-CFDS4 are in excellent agreement with those of the ADI-CFDS4. Error 1 and Error 2 are defined as in Example 3. The errors at several points and the computational times are given in Table 16 and Table 17, which shows the high efficiency and reliability of the R-ADI-CFDS4 compared with the ADI-CFDS4. The differences between the ADI-CFDS4 and the R-ADI-CFDS4 with five optimal bases are no more than \(4 \times {10^{ - 14}}\) in Fig. 10 and Fig. 11. The R-ADI-CFDS4 solution based on five POD bases at \(t=4\) is given in Fig. 11, while the details of the computing time at \(t=4\) are listed in Table 18. Meanwhile, we also show the error of the ADI-CFDS4 at \(t=2\) in Fig. 12.

Figure 10
figure 10

The figures of solutions of R-ADI-CFDS4 (left) and difference (right) between R-ADI-CFDS4 and ADI-CFDS4 at \(t=2\), with \({h_{x}}={h_{y}}=0.02\pi \) and \(\tau =0.02\) for \(\mathrm{SP} _{4}\)

Figure 11
figure 11

The figures of solutions of R-ADI-CFDS4 (left) and difference (right) between R-ADI-CFDS4 and ADI-CFDS4 at \(t=4\), with \({h_{x}}={h_{y}}=0.02\pi \) and \(\tau =0.02\) for \(\mathrm{SP} _{4}\)

Figure 12
figure 12

The error of ADI-CFDS4 at \(t=2\), with \({h_{x}}={h_{y}}=0.02\pi \) and \(\tau =0.02\) for \(\mathrm{SP} _{4}\)

Table 16 Numerical results of \(\mathrm{SP} _{4}\) at \(t=2\), with \(h = 0.02\pi \), \(\tau = 0.02\)
Table 17 Numerical results of \(\mathrm{SP} _{4}\) at \(t=2\), with \(h=0.01\pi \), \(\tau =0.005\)
Table 18 The computing time of the two schemes for \(\mathrm{SP} _{4}\) at \(t=4\), with \({h_{x}}={h_{y}}= 0.02\pi \), \(\tau = 0.02\)

The convergence order of the R-ADI-CFDS4 is also verified. In Table 19, we first fix τ at 0.0001; \(h_{x}\) and \(h_{y}\) are then reduced by a factor of 2 each time, and the R-ADI-CFDS4 is fourth-order accurate in space since the maximal error is reduced by a factor of about \(2^{4}\) each time. Then, in Table 20, we take \(h_{x}=h_{y}\) as 0.02π and reduce τ by a factor of 2 each time; the R-ADI-CFDS4 is second-order accurate in time since the maximal error is reduced by a factor of about \(2^{2}\) each time.

Table 19 The convergence order for R-ADI-CFDS4 at \(\tau = 0.0001\)
Table 20 The convergence order for R-ADI-CFDS4 at \({h_{x}}={h_{y}}=0.02\pi \)

7 Conclusions

In this paper, we developed a reduced fourth-order compact difference scheme for solving parabolic equations. The efficiency and accuracy of the proposed algorithm were examined on two 1D problems and two 2D problems. The numerical examples illustrate that the fourth-order compact difference scheme coupled with the POD technique not only keeps high computational accuracy, but also brings significant savings in computational time for solving parabolic equations. In the future, we plan to extend our algorithm to more complicated parabolic equations in three dimensions.