1 Introduction

Let \(\varOmega \subset \mathbb{R}^{2}\) be a bounded region with boundary ∂Ω. We are concerned with the following telegraph equation:

$$ \textstyle\begin{cases} u_{tt}-\triangle u+\alpha u_{t}+\beta u=f(t,x,y), \quad t\in (0,T], (x,y) \in \varOmega , \\ u(t,x,y)=0, \quad t\in (0,T], (x,y)\in \partial \varOmega , \\ u(0,x,y)=G_{0}(x,y),\qquad u_{t}(0,x,y)=G_{1}(x,y),\quad (x,y)\in \varOmega , \end{cases} $$
(1)

where u represents the unknown voltage or current, \(u_{t}=\partial u/\partial t\), \(u_{tt}=\partial ^{2}u/\partial t^{2}\), \(\triangle = \partial ^{2}/\partial x^{2} +\partial ^{2}/\partial y^{2}\), \(\alpha =(R\hat{C}+GL)(L\hat{C})^{-1}\), \(\beta =GR(L\hat{C})^{-1}\), G is the known conductance of the dielectric material, R is the known distributed resistance of the conductors, L is the known distributed inductance, Ĉ is the known capacitance between the two conductors, T is the final time, and \(G_{0}(x,y)\), \(G_{1}(x,y)\), and \(f(t,x,y)\) are three known functions.

The telegraph equation has significant physical meaning. It can be used to simulate the propagation of electric signals in transmission cables as well as the interaction between diffusion and reaction in biology and physics (see [1, 2]). However, the telegraph equation arising in the real world usually involves complex initial and boundary values, source functions, or discontinuous coefficients. Thus, it generally has no analytical solution, and one has to rely on numerical solutions.

The accuracy of the spectral method (see [1–11]) is far higher than that of the finite element (FE), finite difference (FD), and finite volume element (FVE) methods (see [12–18]) because its unknown functions are approximated with smooth functions such as trigonometric functions or Legendre, Jacobi, and Chebyshev polynomials, whereas the unknown functions in the FE and FVE models are approximated with general polynomials, and the derivatives of the unknown functions in the FD method are approximated by difference quotients. In particular, the CCS model for the telegraph equation in [18] possesses super-convergence with respect to the spatial variables, but it involves a great many unknowns. As a result, the round-off errors in the calculations accumulate very rapidly, which may cause floating point overflow after a number of time steps and prevent the desired results from being obtained. Hence, how to reduce the number of unknowns in the CCS format so as to slow the accumulation of round-off errors is an urgent issue that needs to be solved in practical applications (such as mechanical engineering), and it is the main objective of this paper.

Many numerical experiments (see, e.g., [19–29]) have verified that the proper orthogonal decomposition (POD) technique can immensely lessen the number of unknowns in numerical methods, thereby slowing the accumulation of round-off errors and reducing the computational load. It has been successfully used for order reduction in the Galerkin, FE, FD, and FVE methods as well as in the parametric problems just mentioned.

Unfortunately, as far as we know, there have so far been no reports on a ROECS model for the telegraph equation based on POD. Hence, we here set up a matrix-form ROECS model for the coefficient vectors of the CCS solutions such that the ROECS model possesses the same basis functions as the CCS one and simultaneously combines two merits: the high accuracy of the CCS model and the reduction of unknowns afforded by the POD method. In addition, we employ matrix theory to demonstrate the existence, convergence, and stability of the ROECS solutions, so that the theoretical argumentation becomes very succinct. In this way, the ROECS model is entirely different from the existing POD-based reduced-order models mentioned above.

The paper is organized as follows. In Sect. 2, the CCS method for the telegraph equation is presented. Based on it, in Sect. 3, we assemble a snapshot matrix from the first few CCS coefficient vectors, produce a set of POD bases from the snapshot matrix, build the matrix-form ROECS format via the POD bases, and prove the existence, convergence, and stability of the ROECS solutions by matrix theory. Section 4 supplies two sets of numerical experiments simulating the magnetic field produced by two parallel wires carrying the same voltage, which confirm that the numerical results are consistent with the theoretical ones and that the ROECS format is very efficient for solving the telegraph equation. Section 5 summarizes the main conclusions of this study.

2 The CCS method of the telegraph equation

2.1 The CCS method

Since the closed bounded region Ω̅ can be approximately covered by several rectangles \([a_{i}, b_{i}]\times [c_{i}, d_{i}]\) (\(1\leqslant i\leqslant I\)), and the maps \(x' = {2(x-a_{i})}/{(b_{i}-a_{i})}-1\) and \(y' = {2(y-c_{i})}/{(d_{i}-c_{i})}-1\) carry \([a_{i},b_{i}]\) and \([c_{i},d_{i}]\) onto \([-1,1]\), respectively, we may assume that \(\overline{\varOmega }=[-1, 1]\times [-1, 1]\), i.e., \({\varOmega }=(-1, 1)\times (-1, 1)\).

Let \(P_{N}\) be an interpolation subspace. For convenience, let \(\{\omega _{k}\}^{N}_{k=0}\) be a set of weights, and let \(\{y_{k}\}_{k=0}^{N}\) and \(\{x_{k}\}_{k=0}^{N}\) be two sets of Chebyshev–Gauss–Lobatto (CGL) quadrature points in the y and x directions, respectively (see [4]), with the same number of points, given by

$$ y_{k} = -\cos \frac{\pi k}{N},\qquad x_{k} = -\cos \frac{k\pi }{N}, \qquad \omega _{k} = \frac{\pi }{c_{k}N},\quad 0\leqslant k\leqslant N, $$
(2)

where \(c_{0} = c_{N} = 2\) and \(c_{k} = 1\) (\(1\leqslant k\leqslant N-1\)). The sets \(\{y_{j}\}_{j=0}^{N}\), \(\{x_{k}\}_{k=0}^{N}\), and \(\{\omega _{k}\}^{N}_{k=0}\) have the following property (see [4, p. 44]).

Theorem 1

The sets of CGL quadrature points \(\{x_{k}\}^{N}_{k = 0}\) and \(\{y_{k}\}_{k=0}^{N}\) and the set of weights \(\{\omega _{k}\}^{N}_{k=0}\) satisfy

$$ \int ^{1}_{-1} \int ^{1}_{-1}\omega (x,y)q(x,y)\,\mathrm{d}x\,\mathrm{d}y= \sum ^{N}_{j=0}\sum ^{N}_{k=0}q(x_{j},y_{k}) \omega _{j}\omega _{k},\quad \forall q(x,y)\in P_{2N-1}, $$

where \(\omega (x,y)=\omega (x)\omega (y)\), \(\omega (x) = {1}/{\sqrt{1-x^{2}}}\), and \(\omega (y) = {1}/{\sqrt{1-y^{2}}}\).
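
As an illustration (not part of the original analysis), the quadrature identity of Theorem 1 can be checked numerically in a few lines of Python; the test polynomial \(q(x,y)=x^{2}y^{2}\) and the value of N below are arbitrary choices for the check.

```python
# A minimal sketch: verify the CGL quadrature of Theorem 1 for q(x, y) = x^2 y^2,
# whose degree (2 in each variable) is well below 2N - 1.
import numpy as np

N = 8
k = np.arange(N + 1)
x = -np.cos(np.pi * k / N)                   # CGL nodes from (2); the same nodes are used in y
c = np.ones(N + 1); c[0] = c[N] = 2.0
w = np.pi / (c * N)                          # CGL weights from (2)

quad = sum((x[j] ** 2 * w[j]) * (x[l] ** 2 * w[l])
           for j in range(N + 1) for l in range(N + 1))
exact = (np.pi / 2) ** 2                     # since int_{-1}^{1} x^2 / sqrt(1 - x^2) dx = pi / 2
print(abs(quad - exact))                     # zero up to round-off
```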

In fact, the CCS method seeks an approximate solution of \(u(x,y)\) via the following formula:

$$ u_{N}(x,y) = \sum_{j=0}^{N} \sum_{k=0}^{N}u_{N}(x_{j},y_{k})h_{j}(x)h _{k}(y)=\boldsymbol{U}_{N}\cdot \boldsymbol{H}(x,y), \quad u_{N}(x,y)\in P_{N}, $$
(3)

in which \(h_{j}(x)\) and \(h_{k}(y)\) are the Lagrange interpolation polynomials associated with the CGL quadrature points, \(\boldsymbol{U}_{N} = (u_{N_{0,0}},u_{N_{1,0}},\ldots ,u_{N_{N,0}},u_{N_{0,1}},u_{N_{1,1}},\ldots ,u_{N_{N,1}},\ldots ,u_{N_{0,N}},\ldots ,u_{N_{N,N}})^{T}\), and \(\boldsymbol{H}(x, y)=(h_{0}(x)h_{0}(y), h_{1}(x)h_{0}(y),\ldots ,h_{N}(x)h_{0}(y),h_{0}(x)h_{1}(y),h_{1}(x)h_{1}(y),\ldots ,h_{N}(x)h_{1}(y), \ldots , h_{0}(x)h_{N}(y), h_{1}(x)h_{N}(y), \ldots , h_{N}(x)h_{N}(y))^{T}\). Furthermore, the derivatives of \(u_{N}(x,y)\) with respect to x at \(x_{k}\) are given by

$$ \frac{\partial u_{N}(x_{k},y)}{\partial x} = \sum^{N}_{j=0} \sum^{N} _{l=0}u_{N}(x_{j},y_{l})h^{\prime }_{j}(x_{k})h_{l}(y){= \boldsymbol{U}_{N}\cdot \frac{ \partial }{\partial x}\boldsymbol{H}(x_{k},y)}, \quad 0\leqslant k\leqslant N, $$
(4)

where

$$ h_{i}^{\prime }(x_{k}) = \textstyle\begin{cases} -\frac{2N^{2}+1}{6}, &i =k = 0, \\ \frac{c_{k}}{c_{i}}\frac{(-1)^{k+i}}{x_{k}-x_{i}}, &i \neq k, 0 \leqslant i,k \leqslant N, \\ -\frac{x_{k}}{2(1-x^{2}_{k})}, &1 \leqslant i = k \leqslant N-1, \\ \frac{2N^{2}+1}{6}, &i = k = N. \end{cases} $$
(5)

In the above formulas, note again that \(c_{0} = c_{N} = 2\) and \(c_{k} = 1\) (\(1\leqslant k\leqslant N-1\)). By replacing x with y in (5) and (4), one immediately obtains the corresponding formulas for \({\partial u_{N}(x,y_{k})}/{\partial y}\).
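
For reference, a minimal NumPy sketch (ours, not the authors' code) of the CGL nodes and weights of (2) and of the derivative values \(h_{i}'(x_{k})\) of (5) reads as follows; the function names are ours.

```python
import numpy as np

def cgl_nodes_weights(N):
    """CGL nodes x_k = -cos(k*pi/N) and weights w_k = pi/(c_k*N) from (2)."""
    k = np.arange(N + 1)
    x = -np.cos(np.pi * k / N)
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    return x, np.pi / (c * N)

def cgl_diff_matrix(N):
    """Matrix D with D[k, i] = h_i'(x_k), the entries given by (5)."""
    x, _ = cgl_nodes_weights(N)
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    D = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        for i in range(N + 1):
            if i != k:
                D[k, i] = (c[k] / c[i]) * (-1) ** (k + i) / (x[k] - x[i])
    D[0, 0] = -(2 * N ** 2 + 1) / 6.0                 # i = k = 0
    D[N, N] = (2 * N ** 2 + 1) / 6.0                  # i = k = N
    for k in range(1, N):
        D[k, k] = -x[k] / (2.0 * (1.0 - x[k] ** 2))   # 1 <= i = k <= N - 1
    return D
```

With this matrix, formula (4) reduces to a matrix–vector operation on the grid values along each line \(y=y_{l}\).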

2.2 Some useful Sobolev spaces

The Sobolev spaces and norms used in the sequel are classical (see [30, 31]). For instance, \(L^{2}(\varOmega )\) denotes the space of square-integrable functions on Ω, endowed with the following norm and inner product:

$$ \Vert v \Vert _{0} = \biggl( \int _{\varOmega } \vert v \vert ^{2}\,\mathrm{d}x\,\mathrm{d}y\biggr)^{1/2},\qquad (u,v)= \int _{\varOmega }uv\,\mathrm{d}x\,\mathrm{d}y, \quad \forall u, v\in L^{2}( \varOmega ). $$

For a multi-index \(\boldsymbol{\alpha }=(\alpha _{1},\alpha _{2})\) (with integers \(\alpha _{i}\geqslant 0\)) and an integer \(m\geqslant 0\), the norm and semi-norm in \(H^{m}(\varOmega )= \{ v\in L^{2}(\varOmega ): D^{\alpha }v\in L^{2}(\varOmega ), 0\leqslant |\alpha |\leqslant m \} \) are defined as

$$ \Vert v \Vert _{m} = \Biggl(\sum_{ \vert \alpha \vert =0}^{m} \bigl\Vert D^{\alpha }v \bigr\Vert ^{2}_{0} \Biggr) ^{1/2},\qquad \vert v \vert _{m} = \biggl(\sum _{ \vert \alpha \vert = m} \bigl\Vert D^{\alpha }v \bigr\Vert ^{2}_{0} \biggr) ^{1/2},\quad \forall v\in H^{m}(\varOmega ), $$

where \(D^{\alpha }v=\frac{\partial ^{\alpha _{1}+\alpha _{2}}v}{\partial x^{\alpha _{1}}\partial y^{\alpha _{2}}}\).

Additionally, let \(\omega = {1}/{\sqrt{(1-x^{2})(1-y^{2})}}\), let \(\varOmega = (-1,1)^{2}\), and let \(L^{2}_{\omega }(\varOmega )\) stand for the space of functions that are square-integrable on Ω with respect to the weight ω, equipped with the following norm and inner product:

$$ \Vert u \Vert _{0,\omega } = \biggl( \int _{\varOmega } \vert u \vert ^{2}\omega \,\mathrm{d}x\,\mathrm{d}y\biggr) ^{1/2}, \qquad (u,v)_{\omega }= \int _{\varOmega }uv\omega \,\mathrm{d}x\,\mathrm{d}y,\quad \forall u, v\in L^{2}_{\omega }(\varOmega ), $$

and let \(H^{m}_{\omega }(\varOmega )=\{v\in L^{2}_{\omega }(\varOmega ): D^{\alpha }v\in L^{2}_{\omega }(\varOmega ),0\leqslant |\alpha |\leqslant m\}\) stand for the weighted Sobolev space on Ω with respect to ω, equipped with the norm

$$ \Vert u \Vert _{m,\omega } = \Biggl(\sum_{ \vert \alpha \vert =0}^{ m} \bigl\Vert D^{\alpha }u \bigr\Vert ^{2}_{0,\omega } \Biggr)^{\frac{1}{2}}. $$

Moreover, let \(H_{0,\omega }^{1}(\varOmega ) = \{u\in H^{1}_{\omega }( \varOmega ): u|_{\partial \varOmega } = 0\}\), let \(L^{2}_{\omega }(\varOmega )=H ^{0}_{\omega }(\varOmega )\), and let

$$ H^{l} \bigl(0, T; H^{m}_{\omega }(\varOmega ) \bigr) \equiv \bigl\{ v(t)\in H^{m} _{\omega }(\varOmega ): \Vert v \Vert _{H^{l}(H^{m}_{\omega })}^{2}< \infty \bigr\} , $$

where \(\|v\|_{H^{l}(H^{m}_{\omega })}^{2}=\int _{0}^{T}\sum_{i=0}^{l} \Vert {\,\mathrm{d}^{i}}v(t)/{\mathrm{d}t ^{i}} \Vert _{m,\omega }^{2} \,\mathrm{d}t\).

In addition, define the \(H_{\omega }^{1}\)-orthogonal projection \(R_{N}: H_{0,\omega }^{1}(\varOmega )\rightarrow P_{N}\) such that, for any \(u\in H_{0,\omega }^{1}(\varOmega )\), there holds

$$ \bigl(\nabla (R_{N}u - u),\nabla v_{N} \bigr)_{\omega } = 0,\quad \forall v_{N}\in P_{N}, $$
(6)

equivalently,

$$ R_{N}u(x,y)= \sum_{j=0}^{N} \sum_{k=0}^{N}R_{N}u(x_{j},y_{k})h_{j}(x)h _{k}(y), $$

where the \(R_{N}u(x_{j},y_{k})\) are the values of the solution \(R_{N}u(x,y)\) of (6) at the points \((x_{j},y_{k})\).

Thus, a function \(u(x,y)\) may also be approximated by \(R_{N}u(x,y)\). Furthermore, \(R_{N}\) possesses the following properties (see [4, Theorems 2.16–2.18]).

Theorem 2

For any \(w\in H^{k}_{\omega }(\varOmega )\) (\(k\geqslant 1\)), the projection \(R_{N}\) satisfies

$$ \Vert \nabla R_{N}w \Vert _{0,\omega }\leqslant \Vert \nabla w \Vert _{0,\omega }; \qquad \bigl\Vert \partial ^{m}(R_{N}w-w) \bigr\Vert _{0,\omega } = O \bigl(N^{m-k} \bigr),\quad 0\leqslant m \leqslant k \leqslant N+1. $$

2.3 The CCS method of the telegraph equation

Consider the following variational form of the telegraph equation.

Problem 1

Seek \(u\in H_{0,\omega }^{1}(\varOmega )\) such that, \(\forall t\in (0,T)\),

$$ \textstyle\begin{cases} (u_{tt} + \alpha u_{t}+\beta u, v)_{\omega } + (\nabla u,\nabla v)_{ \omega } = (f,v)_{\omega }, \quad \forall v\in H_{0,\omega }^{1}(\varOmega ), \\ u(0,x,y)=G_{0}(x,y),\qquad u_{t}(0,x,y)=G_{1}(x,y), \quad (x,y)\in \varOmega . \end{cases} $$
(7)

The following result on the existence and stability of the solution to Problem 1 was given in [18, Theorem 4].

Theorem 3

When \(f\in L^{2}(0,T;L^{2}_{\omega }(\varOmega ))\), \(G_{1}\in L^{2}_{\omega }(\varOmega )\), and \(G_{0}\in H^{1}_{\omega }(\varOmega )\), Problem 1 has a unique solution satisfying the following stability estimate:

$$ \Vert u \Vert _{1,\omega }+ \Vert u_{t} \Vert _{0,\omega } \leqslant \tilde{\sigma }\bigl( \Vert f \Vert _{L^{2}(H^{-1}_{\omega })}+ \Vert G_{1} \Vert _{0,\omega }+ \Vert G_{0} \Vert _{1, \omega }\bigr), $$
(8)

where \(\tilde{\sigma }=2\sqrt{\max \{\beta , 1, 0.5\alpha ^{-1}\}/\min \{1, \beta \}}\).

In order to solve Problem 1 with the CCS method, one needs to discretize \(u_{tt}\) and \(u_{t}\) with second-order time difference quotients and the spatial variables with the CCS technique. The aim of the CCS method is to seek approximate solutions at the CGL quadrature points and at the time nodes \(t_{n}=n\Delta t\) (where \(\Delta t=T/K\) is the time step and \(K>0\) is an integer), so that \(u(x,y,n\Delta t)\), \(u_{t}\), \(u_{tt}\), and \(u^{n}(x,y)\) are approximated with \(u^{n}\), \({(u^{n+1}-u^{n-1})}/{(2\Delta t)}\), \({(u^{n+1}-2u^{n}+u^{n-1})}/{\Delta t^{2}}\), and \(u_{N}^{n}(x,y)\), respectively, namely

$$ u^{n}(x,y) \approx u_{N}^{n}(x,y) = \sum _{j=0}^{N}\sum _{k=0}^{N}u _{N}^{n}(x_{j},y_{k})h_{k}(y)h_{j}(x),\quad 0\leqslant n\leqslant K. $$

Then the CCS model of the telegraph equation can be established as follows.

Problem 2

Seek \(u_{N}^{n} \in U_{N} \equiv H_{0,\omega }^{1}(\varOmega )\cap P _{N}\) such that

$$ \textstyle\begin{cases} (u^{n+1}_{N}-2u^{n}_{N}+u^{n-1}_{N},v_{N})_{\omega } +\frac{\Delta t ^{2}}{2}(\nabla u_{N}^{n+1}+\nabla u_{N}^{n-1},\nabla v_{N})_{\omega } \\ \qquad {}+\frac{\alpha \Delta t}{2} (u_{N}^{n+1}-u_{N}^{n-1},v_{N})_{\omega } +\frac{\beta \Delta t^{2}}{2}(u_{N}^{n+1}+u_{N}^{n-1}, v_{N})_{\omega } \\ \quad = \Delta t^{2}(f(x,y,t_{n}),v_{N})_{\omega },\quad \forall v_{N}\in U_{N}, 1\leqslant n\leqslant K-1, \\ u^{0}_{N}(x,y) =R_{N} G_{0}(x,y),\qquad u^{1}_{N}(x,y) =2\Delta tR_{N} G_{1}(x,y)+u_{N}^{0},\quad (x,y)\in \varOmega . \end{cases} $$
(9)

The following results on the existence, stability, and convergence of the CCS solutions of Problem 2 were proven in [18, Theorems 6 and 8].

Theorem 4

When \(f\in L^{2}(0,T;L^{2}_{\omega }(\varOmega ))\), \(G_{0}\in H^{1}_{\omega }(\varOmega )\), and \(G_{1}\in H^{1}_{\omega }(\varOmega )\), Problem 2 has a unique series of solutions \(u_{N}^{n}\in U_{N}\) (\(n=1, 2, \ldots, K\)) that satisfy the following stability estimate:

$$\begin{aligned} \bigl\Vert u_{N}^{n} \bigr\Vert _{1,\omega } \leqslant& \biggl(\frac{8+C_{p}^{2}+\beta }{C_{p}^{2}\min \{1,\beta \}} \biggr)^{1/2} \bigl( \Vert \nabla G_{0} \Vert _{0, \omega }+ \Vert \nabla G_{1} \Vert _{0,\omega }\bigr) \\ &{} + \Biggl(\frac{\Delta t}{\alpha \min \{1,\beta \}}\sum_{j=1}^{n} \bigl\Vert f(t _{j}) \bigr\Vert _{0,\omega }^{2} \Biggr)^{1/2},\quad n=1, 2, \ldots, K. \end{aligned}$$
(10)

In addition, if the solution of Problem 1 satisfies \(u(t_{n})\in H^{m}_{\omega }(\varOmega )\) (\(2\leqslant m\leqslant N+1\)), then the error estimates between the solution of Problem 1 and the solutions of Problem 2 are as follows:

$$ \bigl\Vert u(t_{n})-u_{N}^{n} \bigr\Vert _{1,\omega } = O \bigl(\Delta t^{2},N^{-m} \bigr), \quad 2 \leqslant m\leqslant N+1, 1\leqslant n\leqslant K. $$
(11)

2.4 The matrix-form of the CCS model

In the following, we rewrite the CCS model in matrix form. To this end, let

$$ u_{N}^{n} = \sum _{k=0}^{N}\sum_{i=0}^{N}u_{N_{k,i}}^{n}h_{k}(y)h _{i}(x). $$
(12)

By taking \(v_{N} =h_{m}(y)h_{l}(x)\in U_{N}\) (\(0\leqslant l, m\leqslant N\)) in Problem 2, we obtain

$$\begin{aligned}& \bigl(u_{N}^{n+1},v_{N} \bigr)_{\omega } = \sum_{k=0}^{N} \sum_{i=0}^{N}u_{N _{k,i}}^{n+1} \bigl(h_{k}(y)h_{i}(x),h_{m}(y)h_{l}(x) \bigr)_{\omega }, \\& \bigl(\nabla u_{N}^{n+1},\nabla v_{N} \bigr)_{\omega } = \sum_{k=0}^{N} \sum_{i=0}^{N}u_{N_{k,i}}^{n+1} \bigl[ \bigl(h_{k}(y)h^{\prime }_{i}(x),h _{m}(y)h^{\prime }_{l}(x) \bigr)_{\omega } + \bigl(h^{\prime }_{k}(y)h_{i}(x),h^{\prime }_{m}(y)h _{l}(x) \bigr)_{\omega } \bigr]. \end{aligned}$$

Let

$$\begin{aligned}& \begin{aligned}[b] A_{jm,kl}&= \bigl(h_{j}(x)h_{k}(y),h_{m}(x)h_{l}(y) \bigr)_{\omega } \\ &=\sum_{p=0}^{N}\sum _{q=0}^{N}h_{j}(x_{p})h_{m}(x_{p})\omega _{p}h_{k}(y_{q})h_{l}(y_{q}) \omega _{q},\quad 0\leqslant j,k,l,m \leqslant N, \end{aligned} \end{aligned}$$
(13)
$$\begin{aligned}& \begin{aligned}[b] B_{jm,kl}&= \bigl(h_{j}^{\prime }(x)h_{k}(y),h_{m}^{\prime }(x)h_{l}(y) \bigr)_{\omega } + \bigl(h _{j}(x)h_{k}^{\prime }(y),h_{m}(x)h_{l}^{\prime }(y) \bigr)_{\omega } \\ &=\sum_{p=0}^{N}\sum _{q=0}^{N}h^{\prime }_{j}(x _{p})h^{\prime }_{m}(x_{p})\omega _{p}h_{k}(y_{q})h_{l}(y_{q}) \omega _{q} \\ &\quad {}+\sum_{p=0}^{N}\sum _{q=0}^{N}h_{j}(x_{p})h _{m}(x_{p})\omega _{p}h^{\prime }_{k}(y_{q})h^{\prime }_{l}(y_{q}) \omega _{q}, \quad 0 \leqslant j,k,l,m\leqslant N, \end{aligned} \end{aligned}$$
(14)

and let

$$\begin{aligned}& \boldsymbol{B} =(B_{jm,kl})_{(N+1)^{2}\times (N+1)^{2}},\qquad \boldsymbol{A} = (A_{jm,kl})_{(N+1)^{2}\times (N+1)^{2}}, \\& \boldsymbol{U}_{N}^{n} = \bigl(u_{N_{0,0}}^{n},u_{N_{1,0}}^{n}, \ldots ,u_{N_{N,0}}^{n},u_{N_{0,1}}^{n}, u_{N_{1,1}}^{n}, \ldots , u_{N_{N,1}}^{n}, \ldots , u_{N_{0,N}}^{n}, \ldots , u_{N_{N,N}}^{n} \bigr)^{T}, \\& \boldsymbol{F}^{n} = \bigl(F^{n}_{0,0}, F^{n}_{1,0}, \ldots , F^{n}_{N,0}, F^{n}_{0,1}, \ldots , F^{n}_{N,1}, \ldots ,F^{n}_{0,N}, \ldots ,F^{n}_{N,N} \bigr)^{T},\qquad F^{n}_{m,l} = f(x_{m},y_{l}, n\Delta t). \end{aligned}$$

Therefore, Problem 2 may be rewritten in the following matrix form.

Problem 3

Seek \(\boldsymbol{U}_{N}^{n}\in \mathbb{R}^{(N+1)^{2}}\) (\(n=2,3, \ldots, K\)) such that

$$\begin{aligned} & \bigl[ \bigl(2+\alpha \Delta t+\beta \Delta t^{2} \bigr)\boldsymbol{A} +\Delta t^{2} \boldsymbol{B} \bigr] \boldsymbol{U}_{N}^{n+1} \\ &\quad =4\boldsymbol{A} \boldsymbol{U}_{N}^{n}+2\Delta t^{2} \boldsymbol{F}^{n}- \bigl[ \bigl(2-\alpha \Delta t+\beta \Delta t^{2} \bigr) \boldsymbol{A} +\Delta t^{2} \boldsymbol{B} \bigr] \boldsymbol{U}_{N}^{n-1} ,\quad 1\leqslant n\leqslant K-1. \end{aligned}$$
(15)
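
To make the structure of (15) concrete, the following sketch (assuming the helpers above, a modest N, and dense matrices; the boundary conditions and any exploitation of the tensor-product structure are omitted) assembles A and B from (13)–(14), using the cardinality \(h_{j}(x_{p})=\delta _{jp}\) of the Lagrange polynomials at the CGL points, and advances one step of (15).

```python
import numpy as np

def assemble_A_B(N):
    """Assemble A and B of (13)-(14) in the ordering of U_N (x-index fast, y-index slow)."""
    x, w = cgl_nodes_weights(N)
    D = cgl_diff_matrix(N)             # D[p, j] = h_j'(x_p)
    W = np.diag(w)                     # (h_j, h_m)_omega = omega_j * delta_{jm} at the CGL points
    K = D.T @ W @ D                    # (h_j', h_m')_omega at the CGL points
    A = np.kron(W, W)                  # A_{jm,kl} = omega_j delta_{jm} omega_k delta_{kl}
    B = np.kron(W, K) + np.kron(K, W)  # x-stiffness plus y-stiffness as in (14)
    return A, B

def ccs_step(A, B, Un, Unm1, Fn, alpha, beta, dt):
    """Return U_N^{n+1} from U_N^n and U_N^{n-1} by solving (15) as written."""
    lhs = (2 + alpha * dt + beta * dt ** 2) * A + dt ** 2 * B
    rhs = (4 * A @ Un + 2 * dt ** 2 * Fn
           - ((2 - alpha * dt + beta * dt ** 2) * A + dt ** 2 * B) @ Unm1)
    # In a real code the rows/columns of boundary nodes would first be eliminated
    # to enforce the homogeneous Dirichlet condition.
    return np.linalg.solve(lhs, rhs)
```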

Remark 1

Although the accuracy of the CCS model is higher than that of other numerical models, such as the FE, FD, and FVE models, its coefficient matrices are built from trigonometric values, so the CCS model is more intricate and carries a heavier computational burden. Therefore, reducing the order of the CCS model is even more important than reducing that of other numerical models. To this end, we take the first L vectors \({\boldsymbol{U}}_{N}^{1}, {\boldsymbol{U}}_{N}^{2},\ldots, {\boldsymbol{U}}_{N}^{L}\) (\(L\ll K\)) from the set of coefficient vectors \(\{{\boldsymbol{U}}_{N}^{n}\}_{n=1}^{K}\) of the CCS matrix format (15) to form an \((N+1)^{2}\times L\) snapshot matrix \(\boldsymbol{Q}=({\boldsymbol{U}}_{N}^{1}, {\boldsymbol{U}}_{N}^{2}, \ldots, {\boldsymbol{U}}_{N}^{L})\).

3 The ROECS method for the telegraph equation

3.1 Formulation of POD basis

For the snapshot matrix \(\boldsymbol{Q}=({\boldsymbol{U}}_{N}^{1}, {\boldsymbol{U}}_{N}^{2}, \ldots, {\boldsymbol{U}}_{N}^{L})\) constituted in Sect. 2.4, let \(\lambda _{1}\geqslant \lambda _{2}\geqslant \cdots\geqslant \lambda _{\gamma }>0\) (\(\gamma := \operatorname{rank} (\boldsymbol{Q})\)) stand for all positive eigenvalues of \(\boldsymbol{Q}\boldsymbol{Q}^{T}\), and let \(\boldsymbol{U}=(\boldsymbol{\phi }_{1},\boldsymbol{\phi }_{2}, \ldots , \boldsymbol{\phi }_{\gamma })\in \mathbb{R}^{(N+1)^{2}\times \gamma }\) stand for the matrix formed by the associated orthonormal eigenvectors of \(\boldsymbol{Q}\boldsymbol{Q}^{T}\). Then a set of POD bases \(\boldsymbol{\varPhi }=(\boldsymbol{\phi }_{1},\boldsymbol{\phi }_{2},\ldots , \boldsymbol{\phi }_{d})\) (\(d\leqslant \gamma \)) is obtained from the first d vectors in U and possesses the following property (see [21, 24]):

$$ \bigl\Vert \boldsymbol{Q}-\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T} \boldsymbol{Q} \bigr\Vert _{2,2}= \sqrt{\lambda _{d+1}}, $$
(16)

where \(\|{\boldsymbol{Q}}\|_{2,2}=\sup_{\boldsymbol{v}\neq\boldsymbol{0}}{\|{\boldsymbol{Q}}\boldsymbol{v} \|_{2}}/{\| \boldsymbol{v} \|_{2}}\) and \(\|\boldsymbol{v}\|_{2}\) stands for the Euclidean norm of the vector v. It follows that

$$\begin{aligned} \bigl\Vert \boldsymbol{U}_{N}^{n}- \boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n} \bigr\Vert _{2} =& \bigl\Vert \bigl( \boldsymbol{Q}-\boldsymbol{ \varPhi }\boldsymbol{\varPhi }^{T}\boldsymbol{Q} \bigr)\boldsymbol{e}_{n} \bigr\Vert _{2} \\ \leqslant& \bigl\Vert \boldsymbol{Q}-\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T} \boldsymbol{Q} \bigr\Vert _{2,2} \Vert \boldsymbol{e}_{n} \Vert _{2}\leqslant \sqrt{\lambda _{d+1}}, \quad 1 \leqslant n \leqslant L, \end{aligned}$$
(17)

where \(\boldsymbol{e}_{n}\) (\(1\leqslant n\leqslant L\)) stands for the unit vector whose nth component is 1.

Remark 2

Since the order L of \({\boldsymbol{Q}}^{T}{\boldsymbol{Q}}\) is far smaller than the order \((N+1)^{2}\) of \({\boldsymbol{Q}}{\boldsymbol{Q}}^{T}\), while both matrices have the same positive eigenvalues \(\lambda _{i}\) (\(1\leqslant i\leqslant \gamma \)), one may first compute the eigenvalues \(\lambda _{i}\) (\(1\leqslant i\leqslant \gamma \)) of \({\boldsymbol{Q}}^{T}{\boldsymbol{Q}}\) and the associated eigenvectors \({\boldsymbol{\varphi }}_{i}\) (\(1\leqslant i\leqslant \gamma \)), and then, by the formula \({\boldsymbol{\phi }}_{i}={\boldsymbol{Q}}{\boldsymbol{\varphi }}_{i}/\sqrt{{\lambda _{i}}}\) (\(1\leqslant i\leqslant \gamma \)), easily obtain the eigenvectors \({\boldsymbol{\phi }}_{i}\) (\(1\leqslant i\leqslant \gamma \)) of \({\boldsymbol{Q}}{\boldsymbol{Q}}^{T}\).
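
A minimal sketch of Remark 2 (the method of snapshots) might read as follows; the function name is ours, and the optional tolerance-based choice of d follows the criterion \(\sqrt{\lambda _{d+1}}N^{-1/2}\Delta t^{-1}\leqslant \mathrm{tol}\) used later in Sect. 4.

```python
import numpy as np

def pod_basis(Q, d=None, tol=None, N=None, dt=None):
    """Q is the (N+1)^2 x L snapshot matrix. Returns (Phi, eigenvalues).
    If d is None, the smallest d with sqrt(lambda_{d+1}) * N**-0.5 / dt <= tol is used."""
    G = Q.T @ Q                                    # small L x L matrix
    lam, V = np.linalg.eigh(G)                     # eigenvalues in ascending order
    lam, V = np.clip(lam[::-1], 0.0, None), V[:, ::-1]
    if d is None:
        indicator = np.sqrt(lam) * N ** -0.5 / dt  # sqrt(lambda_{i+1}) N^{-1/2} dt^{-1}
        d = int(np.argmax(indicator <= tol))       # first index meeting the tolerance
                                                   # (assumes at least one index satisfies it)
    Phi = (Q @ V[:, :d]) / np.sqrt(lam[:d])        # phi_i = Q varphi_i / sqrt(lambda_i)
    return Phi, lam
```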

3.2 The ROECS model

From (17) in Sect. 3.1 one may obtain the first L coefficient vectors of the ROECS solutions \(\boldsymbol{U}_{d}^{n} =\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n}=:\boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n}\) (\(n=1,2,\ldots, L\), \(L\leqslant K\)), where \(\boldsymbol{U}_{d}^{n} = (u_{d,0,0}^{n}, u_{d,1,0}^{n}, \ldots , u_{d, N,0}^{n}, u_{d,0,1}^{n}, u_{d,1,1}^{n}, \ldots , u_{d,N,1}^{n}, \ldots , u_{d,0,N}^{n}, u_{d,1,N}^{n}, \ldots , u_{d,N,N}^{n})^{T}\) and \(\boldsymbol{\beta }_{d}^{n}=(\beta _{1}^{n}, \beta _{2}^{n},\ldots ,\beta _{d}^{n})^{T}\). Since the matrix \([(2+\alpha \Delta t+\beta \Delta t^{2})\boldsymbol{A} +\Delta t^{2} \boldsymbol{B}]\) is invertible, when the coefficient vectors \(\boldsymbol{U}_{N}^{n}\) in (15) are replaced with \(\boldsymbol{U}_{d}^{n}=\boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n}\) (\(L+1\leqslant n\leqslant K\)), one gets the following ROECS model:

$$ \textstyle\begin{cases} \boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n} =\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n},\quad 1\leqslant n\leqslant L; \\ \boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n+1}=4\boldsymbol{\mathbb{A}} \boldsymbol{A}\boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n} -\boldsymbol{\mathbb{A}}[(2-\alpha \Delta t+ \beta \Delta t^{2})\boldsymbol{A} +\Delta t^{2} \boldsymbol{B}]\boldsymbol{\varPhi }\boldsymbol{\beta } _{d}^{n-1}+2\Delta t^{2} \boldsymbol{\mathbb{A}}\boldsymbol{F}^{n}, \\ \quad L\leqslant n \leqslant K-1, \\ \boldsymbol{U}_{d}^{n}=\boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n},\quad 1\leqslant n\leqslant K, \end{cases} $$
(18)

where \(\boldsymbol{\mathbb{A}}=[(2+\alpha \Delta t+\beta \Delta t^{2})\boldsymbol{A} + \Delta t^{2} \boldsymbol{B}]^{-1}\), \(\boldsymbol{U}_{N}^{n}\) (\(1\leqslant n\leqslant L\)) are the first L coefficient vectors in (15), and A and B are defined by (13) and (14).

Furthermore, model (18) can be simplified as

$$ \textstyle\begin{cases} \boldsymbol{\beta }^{n}_{d}=\boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n},\quad n=1, 2, \ldots, L; \\ \boldsymbol{\beta }_{d}^{n+1}=4\boldsymbol{\varPhi }^{T} \boldsymbol{\mathbb{A}}\boldsymbol{A}\boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n} -\boldsymbol{\varPhi }^{T} \boldsymbol{\mathbb{A}}[(2-\alpha \Delta t+\beta \Delta t^{2})\boldsymbol{A} +\Delta t^{2} \boldsymbol{B}]\varPhi \boldsymbol{\beta }_{d}^{n-1} \\ \hphantom{\boldsymbol{\beta }_{d}^{n+1}={}}{}+2\Delta t^{2}\boldsymbol{\varPhi }^{T}\boldsymbol{\mathbb{A}}\boldsymbol{F}^{n},\quad n= L, L+1, \ldots, K-1, \\ \boldsymbol{U}_{d}^{n}=\boldsymbol{\varPhi }\boldsymbol{\beta }_{d}^{n},\quad n=1, 2, \ldots, K. \end{cases} $$
(19)

Remark 3

CCS model (15) includes \((N+1)^{2}\) unknowns at each time node, whereas ROECS model (19) has only d unknowns at the same node (where \(d\leqslant L\ll (N+1)^{2}\); for instance, in the numerical experiments of Sect. 4, \((N+1)^{2}=40401\), whereas \(d=6\)). It follows that ROECS model (19) is clearly superior to CCS model (15). After having obtained \(\boldsymbol{U}_{d}^{n}=(u_{d,0,0}^{n}, u_{d,1,0}^{n}, \ldots ,u_{d, N,0}^{n},u_{d,0,1}^{n}, u_{d,1,1}^{n},\ldots ,u_{d,N,1}^{n},\ldots ,u_{d,0,N}^{n},u_{d,1,N}^{n},\ldots , u_{d,N,N}^{n})^{T}\) from (19), we may reconstruct the ROECS solutions \(u_{d}^{n}(x,y) = \sum_{j=0}^{N}\sum_{k=0}^{N}u_{d,j,k}^{n}h_{j}(x)h_{k}(y)\) (\(1\leqslant n\leqslant K\)).
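
The reduced recurrence (19) can be sketched as follows (our notation; A, B, Φ, the first L CCS coefficient vectors, and the nodal source vectors \(\boldsymbol{F}^{n}\) are assumed to be available, and the reduced d × d matrices are precomputed once).

```python
import numpy as np

def roecs_solve(A, B, Phi, U_snap, F, alpha, beta, dt, K):
    """U_snap: list of the first L CCS coefficient vectors U_N^1..U_N^L;
    F: callable n -> F^n.  Returns the ROECS coefficient vectors U_d^1..U_d^K."""
    L = len(U_snap)
    Abb = np.linalg.inv((2 + alpha * dt + beta * dt ** 2) * A + dt ** 2 * B)   # the matrix in (18)
    M1 = 4 * Phi.T @ Abb @ A @ Phi                                             # d x d
    M2 = Phi.T @ Abb @ ((2 - alpha * dt + beta * dt ** 2) * A + dt ** 2 * B) @ Phi
    beta_d = [Phi.T @ U for U in U_snap]          # beta_d^n = Phi^T U_N^n, n = 1..L
    for n in range(L, K):                         # the recurrence of (19) for n = L..K-1
        beta_d.append(M1 @ beta_d[-1] - M2 @ beta_d[-2]
                      + 2 * dt ** 2 * Phi.T @ Abb @ F(n))
    return [Phi @ b for b in beta_d]              # U_d^n = Phi beta_d^n, n = 1..K
```

Only the d-dimensional vectors \(\boldsymbol{\beta }_{d}^{n}\) are marched in time; the full vectors \(\boldsymbol{U}_{d}^{n}\) are reconstructed only when needed.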

3.3 The existence, convergence, and stability of the ROECS solutions

Analyzing the existence, convergence, and stability of the ROECS solutions requires the following max-norms for matrices and vectors:

$$\begin{aligned}& \Vert {\boldsymbol{D}} \Vert _{\infty }=\max_{1\leqslant i\leqslant m} \sum_{j=1}^{l} \vert d_{ij} \vert ,\quad \forall {\boldsymbol{D}}=(d_{ij})_{m\times l} \in \mathbb{R}^{m\times l}, \\& \Vert {\boldsymbol{\chi }} \Vert _{\infty }=\max _{1\leqslant j\leqslant m} \vert \chi _{j} \vert ,\quad \forall { \boldsymbol{\chi }}=(\chi _{1}, \chi _{2}, \ldots, \chi _{m})^{T}\in \mathbb{R}^{m}. \end{aligned}$$

The existence, stability, and convergence of the ROECS solutions are given by the following theorem.

Theorem 5

Under the same assumptions as in Theorem 4, ROECS model (19) has a unique series of solutions \(\{ u_{d}^{n} \} _{n=1}^{K}\) satisfying the following stability estimate:

$$ \bigl\Vert \nabla u_{d}^{n} \bigr\Vert _{0,\omega }\leqslant C(G_{0},G_{1},f),\quad 1\leqslant n\leqslant K, $$
(20)

where \(C(G_{0},G_{1},f)\) is a positive constant depending only on \(G_{0}\), \(G_{1}\), and f. Furthermore, when \(u(t_{n})\in H_{\omega }^{m}(\varOmega )\) (\(2\leqslant m\leqslant N+1\)), the error estimates between the solution of Problem 1 and the ROECS solutions are as follows:

$$ \bigl\Vert u(t_{n})-u_{d}^{n} \bigr\Vert _{0,\omega } \leqslant {C} \bigl(\Delta t ^{2}+N^{-m}+ \sqrt{\lambda _{d+1}}N^{-1/2}\Delta t^{-1} \bigr),\quad n=1,2,\ldots, K. $$
(21)

Proof

(1) Existence and stability of the ROECS solutions.

Since \([(2+\alpha \Delta t+\beta \Delta t^{2})\boldsymbol{A} +\Delta t^{2} \boldsymbol{B}]\) is an invertible matrix, it follows from ROECS model (19) and Remark 3 that (19) has a unique series of ROECS solutions.

Using (18), one may rewrite ROECS model (19) as the following scheme:

$$\begin{aligned}& \boldsymbol{U}_{d}^{n} = \boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n},\quad 1\leqslant n\leqslant L; \end{aligned}$$
(22)
$$\begin{aligned}& \boldsymbol{U}_{d}^{n+1}=4 \boldsymbol{\mathbb{A}}\boldsymbol{A}\boldsymbol{U}_{d}^{n}+2 \Delta t^{2} \boldsymbol{\mathbb{A}}\boldsymbol{F}^{n} - \boldsymbol{\mathbb{A}} \bigl[ \bigl(2-\alpha \Delta t+\beta \Delta t^{2} \bigr)\boldsymbol{A} +\Delta t^{2} \boldsymbol{B} \bigr]\boldsymbol{U}_{d}^{n-1}, \\& \quad L\leqslant n \leqslant K-1. \end{aligned}$$
(23)

Note that \(\boldsymbol{H}=(h_{0}(x)h_{0}(y), h_{1}(x)h_{0}(y), \ldots ,h _{N}(x)h_{0}(y), h_{0}(x)h_{1}(y), h_{1}(x)h_{1}(y), \ldots , h_{N}(x) h_{1}(y), \ldots , h_{0}(x)h_{N}(y), h_{1}(x)h_{N}(y), \ldots , h_{N}(x)h_{N}(y))^{T}\). Then the solutions to Problem 2 may be denoted by \(u_{N}^{n} =\boldsymbol{H}^{T}\boldsymbol{U}_{N}^{n}= \boldsymbol{H}\cdot \boldsymbol{U}_{N}^{n}\). Similarly, \(u_{d}^{n} =\boldsymbol{H}^{T}\boldsymbol{U} _{d}^{n} =\boldsymbol{H}\cdot \boldsymbol{U}_{d}^{n}\).

When \(1\leqslant n\leqslant L\), we get

$$\begin{aligned} \bigl\Vert u_{d}^{n} \bigr\Vert _{0,\omega } =& \bigl\Vert \boldsymbol{H}^{T}\boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T} \boldsymbol{U}_{N}^{n} \bigr\Vert _{0,\omega } \\ \leqslant& \bigl\Vert \boldsymbol{\varPhi }\boldsymbol{\varPhi }^{T} \bigr\Vert _{ \infty } \bigl\Vert \boldsymbol{H}^{T} \boldsymbol{U}_{N}^{n} \bigr\Vert _{0,\omega } \leqslant \bigl\Vert u_{N} ^{n} \bigr\Vert _{0,\omega },\quad 1\leqslant n\leqslant L. \end{aligned}$$
(24)

Thus, combining Theorem 4 with (24), we conclude that (20) holds when \(1\leqslant n\leqslant L\).

When \(L+1\leqslant n\leqslant K\), due to the positive definiteness of the matrix B, one can rewrite (23) as

$$\begin{aligned}& \boldsymbol{B}^{-1}\boldsymbol{A} \bigl( \boldsymbol{U}_{d}^{n+1}-2\boldsymbol{U}_{d}^{n}+ \boldsymbol{U}_{d}^{n-1} \bigr) +\frac{\alpha \Delta t\boldsymbol{B}^{-1}\boldsymbol{A}}{2} \bigl(\boldsymbol{U}_{d}^{n+1}-\boldsymbol{U} _{d}^{n-1} \bigr) \\& \qquad {} + \frac{\beta \Delta t^{2}\boldsymbol{B}^{-1}\boldsymbol{A}}{2} \bigl(\boldsymbol{U}_{d}^{n-1}+ \boldsymbol{U} _{d}^{n+1} \bigr) + \frac{\Delta t^{2}}{2} \bigl(\boldsymbol{U}_{d}^{n-1}+\boldsymbol{U}_{d}^{n+1} \bigr) \\& \quad =\Delta t^{2}\boldsymbol{B}^{-1}\boldsymbol{F}^{n},\quad L\leqslant n \leqslant K-1. \end{aligned}$$
(25)

Moreover, the following inequalities can be obtained from the FE theory in [31], the spectral theory in [4], and matrix properties:

$$ \bigl\Vert {\boldsymbol{A}}^{-1} \bigr\Vert _{\infty } \leqslant C;\qquad \bigl\Vert {\boldsymbol{B}}^{-1} \bigr\Vert _{\infty }\leqslant CN^{-1};\qquad \Vert { \boldsymbol{A}} \Vert _{\infty }\leqslant C;\qquad \Vert { \boldsymbol{B}} \Vert _{\infty }\leqslant CN. $$
(26)

Because of the positive definiteness of the matrix \(\boldsymbol{B}^{-1}\boldsymbol{A}\), there exist an orthogonal matrix \(\boldsymbol{B}_{1}\) and a diagonal matrix \(\boldsymbol{D}_{1}\) such that

$$\begin{aligned}& \bigl(\boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n-1} \bigr)^{T} \boldsymbol{B}^{-1}\boldsymbol{A} \bigl(\boldsymbol{U} _{d}^{n+1}-\boldsymbol{U}_{d}^{n-1} \bigr) \\& \quad = \bigl(\boldsymbol{U}_{d}^{n+1}-\boldsymbol{U}_{d}^{n-1} \bigr)^{T}( \boldsymbol{B}_{1}\boldsymbol{D}_{1})^{T} (\boldsymbol{B}_{1}\boldsymbol{D}_{1}) \bigl( \boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n-1} \bigr) \\& \quad = \bigl\Vert (\boldsymbol{B}_{1}\boldsymbol{D}_{1}) \bigl( \boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n-1} \bigr) \bigr\Vert _{2}^{2}, \end{aligned}$$
(27)
$$\begin{aligned}& \bigl(\boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n-1} \bigr)^{T} \boldsymbol{B}^{-1}\boldsymbol{A} \bigl(\boldsymbol{U} _{d}^{n+1}+\boldsymbol{U}_{d}^{n-1} \bigr) \\& \quad = \bigl(\boldsymbol{U}_{d}^{n+1}-\boldsymbol{U}_{d}^{n-1} \bigr)^{T}( \boldsymbol{B}_{1}\boldsymbol{D}_{1})^{T} (\boldsymbol{B}_{1}\boldsymbol{D}_{1}) \bigl( \boldsymbol{U}_{d}^{n+1}+ \boldsymbol{U}_{d}^{n-1} \bigr) \\& \quad = \bigl\Vert \boldsymbol{B}_{1}\boldsymbol{D}_{1} \boldsymbol{U}_{d}^{n+1} \bigr\Vert _{2}^{2}- \bigl\Vert \boldsymbol{B}_{1}\boldsymbol{D}_{1} \boldsymbol{U}_{d}^{n-1} \bigr\Vert _{2}^{2}. \end{aligned}$$
(28)

Taking the scalar product of (25) with \((\boldsymbol{U}_{d}^{n+1}-\boldsymbol{U}_{d}^{n-1})^{T}\) and using the Cauchy–Schwarz inequality, (26), and (27), we have

$$\begin{aligned}& \bigl(\boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n} \bigr)^{T} \boldsymbol{B}^{-1}\boldsymbol{A} \bigl(\boldsymbol{U}_{d} ^{n+1}-\boldsymbol{U}_{d}^{n} \bigr) - \bigl( \boldsymbol{U}_{d}^{n}-\boldsymbol{U}_{d}^{n-1} \bigr)^{T}\boldsymbol{B} ^{-1}\boldsymbol{A} \bigl( \boldsymbol{U}_{d}^{n}-\boldsymbol{U}_{d}^{n-1} \bigr) \\& \qquad {} +\frac{\alpha \Delta t}{2} \bigl\Vert (\boldsymbol{B}_{1} \boldsymbol{D}_{1}) \bigl(\boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n-1} \bigr) \bigr\Vert _{2}^{2} +\frac{ \beta \Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{B}_{1}\boldsymbol{D}_{1}\boldsymbol{U}_{d}^{n+1} \bigr\Vert _{2} ^{2}- \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{U}_{d}^{n-1} \bigr\Vert _{2}^{2} \bigr) \\& \qquad {} + \frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{U}_{d}^{n+1} \bigr\Vert _{2}^{2}- \bigl\Vert \boldsymbol{U}_{d}^{n-1} \bigr\Vert _{2}^{2} \bigr) \\& \quad =\Delta t^{2} \bigl( \boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n-1} \bigr)^{T}\boldsymbol{B}^{-1}\boldsymbol{F}^{n} \\& \quad \leqslant \frac{\alpha \Delta t}{2} \bigl\Vert ( \boldsymbol{B}_{1} \boldsymbol{D}_{1}) \bigl(\boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n-1} \bigr) \bigr\Vert _{2}^{2} +C \Delta t^{3}N^{-2} \bigl\Vert \boldsymbol{F}^{n} \bigr\Vert _{2}^{2}, \end{aligned}$$
(29)

where \(n=L, L+1, \ldots, K-1\). Simplifying (29) and summing from L to n, we have

$$\begin{aligned}& \bigl(\boldsymbol{U}_{d}^{n+1}- \boldsymbol{U}_{d}^{n} \bigr)^{T} \boldsymbol{B}^{-1}\boldsymbol{A} \bigl(\boldsymbol{U}_{d} ^{n+1}-\boldsymbol{U}_{d}^{n} \bigr) + \frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{U}_{d}^{n+1} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{U}_{d}^{n} \bigr\Vert _{2}^{2} \bigr) \\& \qquad {} +\frac{\beta \Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{U}_{d}^{n+1} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{U}_{d} ^{n} \bigr\Vert _{2}^{2} \bigr) \\& \quad \leqslant \frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{U}_{d}^{L} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{U}_{d}^{L-1} \bigr\Vert _{2}^{2} \bigr) + \bigl(\boldsymbol{U}_{d}^{L}-\boldsymbol{U}_{d}^{L-1} \bigr)^{T} \boldsymbol{B}^{-1}\boldsymbol{A} \bigl( \boldsymbol{U}_{d}^{L}-\boldsymbol{U}_{d}^{L-1} \bigr) \\& \qquad {}+\frac{\beta \Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{U}_{d}^{L} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{U}_{d}^{L-1} \bigr\Vert _{2}^{2} \bigr) \\& \qquad {} +CN^{-2}\Delta t^{3}\sum _{i=L}^{n} \bigl\Vert \boldsymbol{F} ^{i} \bigr\Vert _{2}^{2} \leqslant \frac{C\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{U}_{d}^{L} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{U}_{d}^{L-1} \bigr\Vert _{2}^{2} \bigr) \\& \qquad {} +CN^{-1} \bigl\Vert \boldsymbol{U}_{d}^{L}- \boldsymbol{U}_{d}^{L-1} \bigr\Vert _{2}^{2} +CN^{-2}\Delta t^{3} \sum_{i=L}^{n} \bigl\Vert \boldsymbol{F}^{i} \bigr\Vert _{2}^{2} ,\quad L \leqslant n \leqslant K-1. \end{aligned}$$
(30)

Due to the orthogonality of the components of H, using the Taylor formula and Theorem 4, we obtain

$$\begin{aligned} \bigl\Vert \boldsymbol{U}_{d}^{L}- \boldsymbol{U}_{d}^{L-1} \bigr\Vert _{2} =& \bigl( \bigl( \boldsymbol{U}_{d}^{L}-\boldsymbol{U}_{d}^{L-1} \bigr)\cdot\boldsymbol{H}, \bigl( \boldsymbol{U}_{d}^{L}-\boldsymbol{U}_{d}^{L-1} \bigr)\cdot\boldsymbol{H} \bigr)_{\omega }^{1/2} \\ =& \bigl(u_{d}^{L}-u_{d}^{L-1},u_{d}^{L}-u_{d}^{L-1} \bigr)_{\omega }^{1/2} = \bigl\Vert u_{d}^{L}-u_{d}^{L-1} \bigr\Vert _{0,\omega } \\ \leqslant& \bigl\Vert u_{d}^{L}-u_{N}^{L} \bigr\Vert _{0,\omega }+ \bigl\Vert u_{N}^{L}-u(t_{L}) \bigr\Vert _{0,\omega } + \bigl\Vert u(t_{L})-u(t_{L-1}) \bigr\Vert _{0,\omega } \\ &{} + \bigl\Vert u(t_{L-1})-u_{N}^{L-1} \bigr\Vert _{0,\omega }+ \bigl\Vert u_{N}^{L-1}-u_{d}^{L-1} \bigr\Vert _{0,\omega } \\ \leqslant& C(G_{0},G_{1},f) \bigl(\sqrt{\lambda _{d+1}}+\Delta t+N^{-1} \bigr). \end{aligned}$$
(31)

We deduce \((\boldsymbol{U}_{d}^{n+1}-\boldsymbol{U}_{d}^{n})^{T}\boldsymbol{B}^{-1}\boldsymbol{A}( \boldsymbol{U}_{d}^{n+1}-\boldsymbol{U}_{d}^{n})\geqslant 0\) from the positive definiteness of \(\boldsymbol{B}^{-1}\boldsymbol{A}\). Hence, if \(\max \{N^{-2}, N^{-1}{\lambda _{d+1}}\}\leqslant C\Delta t^{2}\), then combining (31) with (30), we get

$$\begin{aligned} \bigl\Vert \boldsymbol{U}_{d}^{n+1} \bigr\Vert _{2}^{2} \leqslant& C \bigl( \bigl\Vert \boldsymbol{U}_{d}^{L} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{U}_{d}^{L-1} \bigr\Vert _{2}^{2} \bigr) + C(G_{0},G_{1},f)+CN^{-2} \Delta t\sum_{i=L}^{n} \bigl\Vert \boldsymbol{F}^{i} \bigr\Vert _{2}^{2} \\ \leqslant& C(G_{0},G_{1},f),\quad n=L,L+1,\ldots,K-1, \end{aligned}$$
(32)

where \(C(G_{0},G_{1},f)\) is a positive constant depending only on \(G_{0}\), \(G_{1}\), and f. Thus, we have

$$\begin{aligned} & \bigl\Vert u_{d}^{n} \bigr\Vert _{0,\omega } = \bigl\Vert \boldsymbol{H}(x,y)\cdot \boldsymbol{U}_{d}^{n} \bigr\Vert _{0, \omega } \leqslant C(G_{0},G_{1},f),\quad n= L+1, L+2, \ldots, K, \end{aligned}$$
(33)

which shows that (20) holds when \(L+1\leqslant n\leqslant K\).

(2) Error estimates for the ROECS solutions.

Let \(\boldsymbol{E}^{n}=\boldsymbol{U}_{N}^{n}-\boldsymbol{U}_{d}^{n}\). For \(n=1, 2, \ldots, L\), by (17) we get

$$ \bigl\Vert \boldsymbol{E}^{n} \bigr\Vert _{2}= \bigl\Vert \boldsymbol{U}_{N}^{n}- \boldsymbol{U}_{d}^{n} \bigr\Vert _{2} = \bigl\Vert \boldsymbol{U} _{N}^{n}- \boldsymbol{\varPhi } \boldsymbol{\varPhi }^{T}\boldsymbol{U}_{N}^{n} \bigr\Vert _{2}\leqslant \sqrt{ \lambda _{d+1}}. $$
(34)

When \(L+1\leqslant n\leqslant K\), using (15) as well as (25), we have

$$\begin{aligned}& \frac{\alpha \Delta t\boldsymbol{B}^{-1}\boldsymbol{A}}{2} \bigl(\boldsymbol{E}^{n+1}- \boldsymbol{E}^{n-1} \bigr) +\boldsymbol{B}^{-1} \boldsymbol{A} \bigl(\boldsymbol{E}^{n-1}-2\boldsymbol{E}^{n}+ \boldsymbol{E}^{n+1} \bigr) \\& \quad {} + \frac{\beta \Delta t^{2}\boldsymbol{B}^{-1} \boldsymbol{A}}{2} \bigl(\boldsymbol{E}^{n-1}+ \boldsymbol{E}^{n+1} \bigr) + \frac{\Delta t^{2}}{2} \bigl( \boldsymbol{E} ^{n-1}+\boldsymbol{E}^{n+1} \bigr) = \boldsymbol{0},\quad L \leqslant n \leqslant K-1. \end{aligned}$$
(35)

Taking the scalar product of (35) with \((\boldsymbol{E}^{n+1}-\boldsymbol{E}^{n-1})^{T}\) and using the Cauchy–Schwarz inequality as well as (26)–(28), we have

$$\begin{aligned}& \bigl(\boldsymbol{E}^{n+1}-\boldsymbol{E}^{n} \bigr)^{T}\boldsymbol{B}^{-1}\boldsymbol{A} \bigl( \boldsymbol{E}^{n+1}-\boldsymbol{E} ^{n} \bigr) - \bigl( \boldsymbol{E}^{n}-\boldsymbol{E}^{n-1} \bigr)^{T}\boldsymbol{B}^{-1}\boldsymbol{A} \bigl( \boldsymbol{E}^{n}- \boldsymbol{E}^{n-1} \bigr) \\& \quad {} +\frac{\alpha \Delta t}{2} \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1} \bigl(\boldsymbol{E}^{n+1}- \boldsymbol{E}^{n-1} \bigr) \bigr\Vert _{2}^{2} + \frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{E}^{n+1} \bigr\Vert _{2}^{2}- \bigl\Vert \boldsymbol{E}^{n-1} \bigr\Vert _{2} ^{2} \bigr) \\& \quad {} + \frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{E}^{n+1} \bigr\Vert _{2}^{2}- \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1} \boldsymbol{E}^{n-1} \bigr\Vert _{2}^{2} \bigr) =0,\quad L \leqslant n \leqslant K-1. \end{aligned}$$
(36)

Summing (36) from L to n and simplifying it, we have

$$\begin{aligned}& \bigl(\boldsymbol{E}^{n+1}-\boldsymbol{E}^{n} \bigr)^{T}\boldsymbol{B}^{-1}\boldsymbol{A} \bigl( \boldsymbol{E}^{n+1}-\boldsymbol{E} ^{n} \bigr) \\& \qquad {} +\frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{E}^{n+1} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{E}^{n} \bigr\Vert _{2}^{2} + \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{E}^{n+1} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{E}^{n} \bigr\Vert _{2}^{2} \bigr) \\& \quad \leqslant \frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{E} ^{L} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{E}^{L-1} \bigr\Vert _{2}^{2} \bigr) + \bigl(\boldsymbol{E}^{L}- \boldsymbol{E}^{L-1} \bigr)^{T} \boldsymbol{B}^{-1} \boldsymbol{A} \bigl( \boldsymbol{E}^{L}-\boldsymbol{E}^{L-1} \bigr) \\& \quad \leqslant \frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{E} ^{L} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{E}^{L-1} \bigr\Vert _{2}^{2} \bigr)+CN^{-1} \bigl\Vert \boldsymbol{E}^{L}- \boldsymbol{E} ^{L-1} \bigr\Vert _{2}^{2} \\& \qquad {} + \frac{\Delta t^{2}}{2} \bigl( \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1}\boldsymbol{E}^{L} \bigr\Vert _{2}^{2}+ \bigl\Vert \boldsymbol{B}_{1} \boldsymbol{D}_{1} \boldsymbol{E}^{L-1} \bigr\Vert _{2}^{2} \bigr),\quad n=L,L+1, \ldots, K-1. \end{aligned}$$
(37)

Using (34) and (37), we have

$$\begin{aligned} \bigl\Vert \boldsymbol{E}^{n} \bigr\Vert _{2} \leqslant& C \bigl( \bigl\Vert \boldsymbol{E}^{L} \bigr\Vert _{2}+ \bigl\Vert \boldsymbol{E}^{L-1} \bigr\Vert _{2} +N^{-1/2}\Delta t^{-1} \bigl\Vert \boldsymbol{E}^{L}-\boldsymbol{E}^{L-1} \bigr\Vert _{2} \bigr) \\ \leqslant& C\sqrt{\lambda _{d+1}}N^{-1/2} \Delta t^{-1} ,\quad n=L+1,L+2,\ldots, K. \end{aligned}$$
(38)

Then, using \(u_{N}^{n}=\boldsymbol{H}\cdot {\boldsymbol{U}}_{N}^{n}\), \(u_{d}^{n}= \boldsymbol{H}\cdot {\boldsymbol{U}}_{d}^{n}\), \(\|{\boldsymbol{H}}\|_{0,\omega }\leqslant 1\), the inverse estimate theorem, and the orthogonality of the components of \(\boldsymbol{H}(x,y)\), we get

$$ \bigl\Vert u_{N}^{n}-u_{d}^{n} \bigr\Vert _{0,\omega } = \bigl\Vert \boldsymbol{H}\cdot \boldsymbol{E}^{n} \bigr\Vert _{0,\omega } \leqslant C\sqrt{ \lambda _{d+1}}N^{-1/2} \Delta t^{-1},\quad n=1,2, \ldots, K. $$
(39)

Combining Theorem 4 with (39) yields (21), which completes the proof of Theorem 5. □

Remark 4

Theorem 5 can be interpreted in two ways.

(1) The error term \(\sqrt{\lambda _{d+1}}N^{-1/2}\Delta t^{-1}\) in Theorem 5 arises from the order-reduction procedure applied to the CCS model, and it can be used to guide the choice of the number of POD basis vectors: as long as we choose d such that \({\lambda _{d+1}}N^{-1}\Delta t^{-2} \leqslant \max \{\Delta t^{4},N^{-2m}\}\), the optimal-order error estimates are attained.

(2) Theorem 5 shows that when the solution of Problem 1 satisfies \(u(t_{n}) \in H_{\omega }^{m}(\varOmega )\) (\(3\leqslant m\leqslant N+1\)), the ROECS solutions possess super-convergence with respect to the spatial variables. Even when the solution only satisfies \(u(t_{n})\in H_{\omega }^{2}(\varOmega )\), the error estimates for the ROECS solutions still attain optimal order, which shows that ROECS model (19) is feasible and effective for solving the telegraph equation.

4 Two sets of numerical experimentations

In this section, we present two sets of numerical experiments to demonstrate the advantage of the ROECS method for the telegraph equation.

To show the variation, from top to bottom, of the magnetic field produced by two parallel wires carrying the same voltage, in telegraph equation (1) we take \(\alpha =\beta =1\), the computational region \(\varOmega =(-1, 1)\times (-1, 1)\), \(\Delta t = 0.01\), \(N = 200\) nodes in each of the x and y directions, \(f(x,y,t) =0\), and

$$\begin{aligned}& H(x,y)=-H^{0}(x,y)\sqrt{ \bigl\vert y^{2}-0.25 \bigr\vert },\quad (x,y)\in [-1, 1]\times [-1, 1], \\& G(x,y)=H^{0}(x,y) \vert x \vert ,\quad (x,y)\in [-1, 1]\times [-1, 1], \end{aligned}$$

where \(H^{0}(x,y)=\frac{1}{36} [12+15\cos (\frac{10r\pi }{3} )+6\cos (\frac{20r\pi }{3} )+3 \cos (10r\pi ) ]\) if \(r\leqslant 0.3\), \(H^{0}(x,y)=0\) if \(r> 0.3\), and \(r=\sqrt{|x^{2}-0.25|+|y^{2}-0.25|}\).

We first compute the first 20 CCS coefficient vectors \(\boldsymbol{U}_{N}^{n}\) with the CCS model, i.e., Problem 2, at the initial 20 time nodes \(t_{n}\) (\(1\leqslant n\leqslant 20\)), forming the snapshot matrix \(\boldsymbol{Q}= [\boldsymbol{U}_{N}^{1},\boldsymbol{U}_{N}^{2},\ldots,\boldsymbol{U}_{N}^{20} ]\). Next, we compute the eigenvalues \(\lambda _{1}\geqslant \lambda _{2} \geqslant \cdots \geqslant \lambda _{20}\geqslant 0\) and the corresponding eigenvectors \({\boldsymbol{\varphi }}_{i}\) (\(1\leqslant i\leqslant 20\)) of the matrix \(\boldsymbol{Q}^{T}\boldsymbol{Q}\). It is found that \(d=6\) already satisfies \(\sqrt{\lambda _{d+1}}N^{-1/2}\Delta t^{-1}\leqslant 10^{-4}\). Therefore, we only need to form the first six POD basis vectors \(\boldsymbol{\varPhi }= \{\boldsymbol{\phi }_{1},\boldsymbol{\phi }_{2},\ldots ,\boldsymbol{\phi }_{6} \}\) via the formula \({\boldsymbol{\phi }}_{i}={\boldsymbol{Q}}{\boldsymbol{\varphi }}_{i}/\sqrt{{\lambda _{i}}}\) (\(1\leqslant i\leqslant 6\)). Lastly, the ROECS solutions at \(t =1.0\) and \(t=2.0\) are computed with the ROECS model, as depicted in Figs. 1 and 3, respectively.
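
Schematically, and assuming the helpers sketched above, the experimental procedure can be summarized as follows. The routine initial_coefficient_vectors below is hypothetical and stands for the evaluation of the start-up values \(u_{N}^{0}\) and \(u_{N}^{1}\) in (9); a smaller N is used because the dense assembly of the sketch would be impractical at \(N=200\).

```python
import numpy as np

alpha, beta, dt, L, T = 1.0, 1.0, 0.01, 20, 2.0
K = int(round(T / dt))                         # K = 200 time steps up to t = 2.0
N = 32                                         # N = 200 in the paper; reduced for this dense sketch
A, B = assemble_A_B(N)
zero_F = lambda n: np.zeros((N + 1) ** 2)      # f = 0 in this experiment

# 1. March the CCS model (15) for the first L steps to collect the snapshots.
U0, U1 = initial_coefficient_vectors(N)        # hypothetical helper built from G_0 and G_1
snapshots, Un, Unm1 = [U1], U1, U0
for n in range(1, L):
    Un, Unm1 = ccs_step(A, B, Un, Unm1, zero_F(n), alpha, beta, dt), Un
    snapshots.append(Un)
Q = np.column_stack(snapshots)

# 2. Build the POD basis with the tolerance 1e-4 used in the text (d = 6 at N = 200).
Phi, lam = pod_basis(Q, tol=1e-4, N=N, dt=dt)

# 3. Run the ROECS model (19) up to t = 2.0; U_d[n-1] holds the coefficients of u_d^n.
U_d = roecs_solve(A, B, Phi, snapshots, zero_F, alpha, beta, dt, K)
```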

Figure 1

The ROECS solution when \(t=1.0\)

For comparison, we also compute the CCS solutions at \(t =1.0\) and \(t=2.0\) with the CCS model, i.e., Problem 2, as depicted in Figs. 2 and 4, respectively.

Figure 2

The CCS solution when \(t=1.0\)

The pairs of Figs. 1 and 2 and of Figs. 3 and 4 are nearly indistinguishable, but the results of the ROECS method are better. In particular, in the above computation the ROECS method has only six degrees of freedom at each time node, whereas the CCS model has 39,601 degrees of freedom. Comparing the computational records of the CCS and ROECS models on the same laptop, the CPU time needed to solve the CCS model on \(0\leqslant t\leqslant 2\) is about 5800 seconds, whereas the ROECS model takes only about 46 seconds; that is, the CCS model requires about 126 times as much CPU time as the ROECS model. It follows that the ROECS method can not only alleviate the computational burden and reduce the accumulation of round-off errors, but also greatly save CPU time and storage. Figure 5 exhibits the errors between the CCS solution and the ROECS solutions with different numbers of POD basis vectors at \(t=2.0\); they are consistent with the theoretical results, as both errors reach \(O(10^{-4})\). This fully validates the correctness of the theoretical results and shows that the ROECS model far outperforms the CCS one.

Figure 3

The ROECS solution when \(t=2.0\)

Figure 4

The CCS solution when \(t=2.0\)

Figure 5

The errors between the ROECS solutions with different numbers of POD bases and the CCS solution when \(t=2.0\)

5 Conclusions

In this paper, the order reduction of the coefficient vectors of the CCS solutions for the telegraph equation has been studied. After building the POD-based ROECS model for the telegraph equation, the existence, stability, and convergence of the ROECS solutions have been proven. Moreover, two sets of numerical experiments have verified the correctness of the theoretical results and illustrated that the ROECS model far outperforms the CCS one. Since the ROECS model has far fewer unknowns than the CCS one, it can vastly decrease the computational burden as well as the accumulation of round-off errors and save CPU time in the calculations. Most importantly, the ROECS model for the telegraph equation is proposed here for the first time and represents a completely new development of existing POD-based reduced-order techniques, since its accuracy is far higher than that of other POD reduced-order models, such as the POD-based reduced-order FD, FE, and FVE models.

So far, we have only considered the ROECS model for the telegraph equation on the rectangular region \({\varOmega }=(a, b)\times (c, d)\). Nevertheless, the approach presented here may be extended to more intricate engineering problems; hence, it has broad application prospects in engineering-related fields.