1 Introduction

In this paper, we study the controllability of space semi-discretizations of the internally controlled one-dimensional wave equation. Let us first introduce some notation and state the controlled system under consideration. Let \(\omega=(a,b)\) be a nonempty open subset of \((0,1)\), and let \(\chi_{\omega}\) denote the characteristic function of ω. The controlled wave equation reads as follows:

$$ \left \{ \textstyle\begin{array}{l@{\quad}l} \partial_{tt}y(x,t)-\partial_{xx}y(x,t)=\chi_{\omega}u(x,t), & (x,t)\in(0,1)\times(0,T),\\ y(0,t)=y(1,t)=0, & t\in(0,T),\\ y(x,0)=y_{0}(x),\qquad \partial_{t}y(x,0)=y_{1}(x),& x\in(0,1), \end{array}\displaystyle \right . $$
(1.1)

where \(T>0\), the initial value \((y_{0},y_{1})\) belongs to \(H^{1}_{0}(0,1)\times L^{2}(0,1)\) and \(u(\cdot)\) is a control function taken from the space \(L^{2}(0,T;L^{2}(0,1))\).

Problem (1.1) is said to be exactly controllable from the initial value \((y_{0},y_{1})\in H^{1}_{0}(0,1)\times L^{2}(0,1)\) in time T if there exists a control function \(u(\cdot)\in L^{2}(0,T;L^{2}(0,1))\) such that the solution of (1.1) satisfies \((y(T),\partial_{t}y(T))=(0,0)\). The controllability of wave equations has been the object of numerous studies; extensive related references can be found in [1–3] and the rich literature cited therein.

In this work, we mainly focus on whether the controllability property is preserved under numerical approximation. We now introduce the approximation scheme based on the finite difference method. Given \(N\in\mathbb{N}\), we define \(h=\frac{1}{N+1}\). We consider the nodal points

$$x_{0}=0;\qquad x_{j}=jh, \quad j=1,\ldots,N; \qquad x_{N+1}=1, $$

which divide \([0,1]\) into \(N+1\) subintervals \(I_{j}=[x_{j},x_{j+1}]\), \(j=0,1,\ldots,N\). We suppose that the nodal points \(x_{k+1},\ldots,x_{k+p}\) lie in ω and that \(x_{1},\ldots,x_{k},x_{k+p+1},\ldots,x_{N}\) lie in \((0,1)\setminus\omega\), for some \(k, p \in\mathbb{N}\) with \(k + p\leq N\). For instance, if \(N=9\) (so \(h=0.1\)) and \(\omega=(0.25,0.65)\), then \(x_{3},\ldots,x_{6}\in\omega\), so that \(k=2\) and \(p=4\).

Now, we consider the following finite difference semi-discretization of (1.1):

$$ \left \{ \textstyle\begin{array}{l@{\quad}l} y_{j}''(t)- \frac{1}{h^{2}}[y_{j+1}(t)+y_{j-1}(t)-2y_{j}(t)]=(\chi_{\omega}u)_{j}(t), & t\in(0,T),\\ y_{0}(t)=y_{N+1}(t)=0, & t\in(0,T),\\ y_{j}(0)=y^{0}_{j},\qquad y_{j}'(0)=y_{j}^{1}, & j=1,\ldots,N, \end{array}\displaystyle \right . $$
(1.2)

where

$$\begin{aligned} (\chi_{\omega}u)_{j}(t)= \left \{ \textstyle\begin{array}{l@{\quad}l} u(x_{j},t), & \mbox{if } x_{j}\in\omega,\\ 0, & \mbox{if } x_{j}\in(0,1)\setminus\omega. \end{array}\displaystyle \right . \end{aligned}$$

The conditions \(y_{0}(t)=y_{N+1}(t)=0\), \(t\in(0,T)\), are the Dirichlet boundary conditions in the semi-discrete setting. Next, we rewrite equation (1.2) in vector form. Let

$$\begin{aligned}& \vec{y}_{h}(t)=\bigl(y_{1}(t),\ldots,y_{N}(t) \bigr)^{T},\\& \vec{y}_{h}^{0}=\bigl(y_{0}(x_{1}), \ldots,y_{0}(x_{N})\bigr)^{T},\\& \vec{y}_{h}^{1}=\bigl(y_{1}(x_{1}), \ldots,y_{1}(x_{N})\bigr)^{T}, \end{aligned}$$

and

$$\begin{aligned} \vec{u}_{h}(t)=\bigl(u_{1}(t),\ldots,u_{N}(t) \bigr)^{T}. \end{aligned}$$

Define the two \(N\times N\) matrices

$$\begin{aligned} A_{h}=\frac{1}{h^{2}} \begin{bmatrix} 2 & -1 & \cdots& 0 & 0 \\ -1 & 2 & \cdots& 0 & 0 \\ \vdots& \vdots& \vdots& \vdots& \vdots\\ 0 & 0 & \cdots& 2& -1 \\ 0 & 0 & \cdots& -1 & 2 \end{bmatrix} \end{aligned}$$

and

$$\begin{aligned} B_{h}= \begin{bmatrix} O_{k\times k} & \cdots& O\\ \vdots & I_{p\times p} & \vdots\\ O& \cdots& O_{(N-k-p)\times(N-k-p)} \end{bmatrix} , \end{aligned}$$

where \(I_{p\times p}\) is the \(p\times p\) identity matrix and \(k, p\in\mathbb{N}\) are the integers introduced before (1.2). The system (1.2) can then be rewritten as follows:

$$ \left \{ \textstyle\begin{array}{l@{\quad}l} \vec{y}''_{h}(t)+A_{h}\vec{y}_{h}(t)=B_{h}\vec{u}_{h}(t), & t\in(0,T),\\ y_{0}(t)=y_{N+1}(t)=0, & t\in(0,T),\\ \vec{y}_{h}(0)=\vec{y}^{0}_{h}, \qquad\vec{y}_{h}'(0)=\vec{y}^{1}_{h}, \end{array}\displaystyle \right . $$
(1.3)

where ′ denotes differentiation with respect to time. In fact, system (1.3) is a system of linear ordinary differential equations for the unknown vector function \(\vec{y}_{h}(t)= (y_{1}(t),\ldots,y_{N}(t) )^{T}\), with the boundary conditions \({y}_{0}(t)={y}_{N+1}(t)=0\), in which \(\vec{u}_{h}(t)\) plays the role of the control. The adjoint system associated with (1.3) can be written as

$$ \left \{ \textstyle\begin{array}{l@{\quad}l} \vec{\phi}''_{h}(t)+A_{h}\vec{\phi}_{h}(t)=0, & t\in(0,T),\\ \phi_{0}(t)=\phi_{N+1}(t)=0, & t\in(0,T),\\ \vec{\phi}_{h}(0)=\vec{\phi}_{0}^{h}, \qquad\vec{\phi}_{h}'(0)=\vec{\phi}_{1}^{h}, \end{array}\displaystyle \right . $$
(1.4)

where the initial data are \(\vec{\phi}_{0}^{h}=(\phi^{0}_{1},\ldots,\phi ^{0}_{N})^{T}\) and \(\vec{\phi}_{1}^{h}=(\phi^{1}_{1},\ldots,\phi^{1}_{N})^{T}\). It is easy to check that \(\phi^{0}_{0}=\phi^{0}_{N+1}=0\) and \(\phi^{1}_{0}=\phi^{1}_{N+1}=0\) are the corresponding compatibility conditions. Throughout the paper, we suppose that these compatibility conditions hold for any initial value.
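For readers who wish to experiment numerically with (1.3) and (1.4), the following minimal Python sketch (NumPy is assumed; the function and variable names are ours and do not come from any reference implementation) assembles the matrices \(A_{h}\) and \(B_{h}\) for a given N and \(\omega=(a,b)\):

```python
import numpy as np

def semi_discrete_matrices(N, a, b):
    """Assemble A_h (discrete Laplacian) and B_h (localization on omega) for omega = (a, b)."""
    h = 1.0 / (N + 1)
    x = h * np.arange(1, N + 1)          # interior nodal points x_1, ..., x_N

    # A_h = (1/h^2) * tridiag(-1, 2, -1), cf. the definition above
    A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

    # B_h is the diagonal 0/1 matrix selecting the nodal points lying in omega
    B = np.diag(((x > a) & (x < b)).astype(float))
    return A, B, x

# Example: N = 99 interior nodes, omega = (0.3, 0.6)
A_h, B_h, x = semi_discrete_matrices(99, 0.3, 0.6)
```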

With the boundary conditions \({\phi}_{0}(t)={\phi}_{N+1}(t)=0\) for all \(t\in[0,T]\), we define the energy of the semi-discrete system (1.4) as

$$\begin{aligned} E_{h}(t)=\frac{h}{2}\sum_{i=0}^{N} \biggl(\bigl|\phi'_{i}(t)\bigr|^{2}+\biggl| \frac{\phi _{i+1}(t)-\phi_{i}(t)}{h}\biggr|^{2}\biggr), \quad \forall t\in[0,T]. \end{aligned}$$

Since \(B_{h}\) is a symmetric matrix, the observability inequality for (1.4) can be formulated as follows: find a constant \(C(T,h)\) such that (see [4–6])

$$\begin{aligned} E_{h}(0)\leq C(T,h) \int_{0}^{T}\bigl\| B_{h}\vec{ \phi}_{h}(t)\bigr\| _{\mathbb{R}^{N}}^{2}\,dt, \end{aligned}$$
(1.5)

where

$$E_{h}(0)=\frac{h}{2}\sum_{i=0}^{N} \biggl(\bigl|\phi^{1}_{i}\bigr|^{2}+\biggl| \frac{\phi ^{0}_{i+1}-\phi^{0}_{i}}{h}\biggr|^{2} \biggr). $$

Remark 1.1

The energy \(E_{h}(t)\) is conserved along solutions of (1.4). Namely, for any \(h>0\) and any solution \(\vec{\phi}_{h}(t)\) of (1.4), we have (see [5])

$$\begin{aligned} E_{h}(t)=E_{h}(0), \quad\text{for any } t\in[0,T]. \end{aligned}$$
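For the reader's convenience, we sketch the standard computation behind this conservation law (a routine semi-discrete energy identity; see [5] for details): differentiating \(E_{h}(t)\) along a solution of (1.4) and using a discrete integration by parts together with \(\phi_{0}(t)=\phi_{N+1}(t)=0\) gives

$$\begin{aligned} \frac{d}{dt}E_{h}(t)=h\sum_{i=1}^{N}\phi_{i}'(t)\phi_{i}''(t)+\frac{1}{h}\sum_{i=0}^{N}\bigl(\phi_{i+1}(t)-\phi_{i}(t)\bigr) \bigl(\phi_{i+1}'(t)-\phi_{i}'(t)\bigr) =h\sum_{i=1}^{N}\phi_{i}'(t)\biggl(\phi_{i}''(t)-\frac{\phi_{i+1}(t)+\phi_{i-1}(t)-2\phi_{i}(t)}{h^{2}}\biggr)=0. \end{aligned}$$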

In this paper, we study whether inequality (1.5) holds for the adjoint system (1.4). We are also interested in whether the constant \(C(T,h)\) remains bounded as \(h\rightarrow0\). The main results of the paper are as follows.

Theorem 1.1

For any \(T>0\), the observability estimate (1.5) for the adjoint system (1.4) holds.

Remark 1.2

Theorem 1.1 shows that the semi-discrete system (1.2) or (1.3) is controllable for any time \(T>0\).

Theorem 1.2

For any \(T>0\), we have

$$\begin{aligned} \sup_{\textit{solution of }(1.4)}\frac{E_{h}(0)}{\int_{0}^{T}\| B_{h}\vec{\phi}_{h}(t)\|_{\mathbb{R}^{N}}^{2}\,dt}\rightarrow+\infty, \quad\textit{as } h\rightarrow0. \end{aligned}$$
(1.6)

To the best of our knowledge, Infante and Zuazua made the first study of this topic in [5], where they established a controllability result for the semi-discrete 1-D wave equation with boundary control. However, the controllability of the semi-discrete systems in [5] is not uniform as the discretization parameter \(h\rightarrow0\). The main differences between [5] and the present paper are as follows. In [5], the authors considered a one-dimensional wave equation with boundary control, whereas we study the internally controlled 1-D wave equation, for which the control operator is more involved than in the boundary control case. Regarding other works on this subject, we refer to [4, 7, 8] and [6].

The paper is organized as follows: Section 2 briefly describes some preliminary results on the finite difference scheme. The proofs of Theorem 1.1 and Theorem 1.2 are provided in Section 3.

2 The finite difference scheme

In this section, we collect some preliminary results on the finite difference approximation of (1.1). We consider the numerical problem for (1.1) in the state space \(\mathbb{R}^{N}\), with the usual Euclidean norm and inner product denoted by \(\|\cdot\| _{\mathbb{R}^{N}}\) and \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{N}}\), respectively. To this end, we first recall some properties of the eigenvalues and eigenvectors of the matrix \(A_{h}\). The spectrum of \(A_{h}\) can be computed explicitly (see [9]). The eigenvalues \(\lambda_{i}(h)\) (\(i=1,\ldots, N\)) are given by

$$\begin{aligned} \lambda_{i}(h)=\frac{4}{h^{2}}\sin^{2} \biggl(\frac{i\pi h}{2}\biggr), \end{aligned}$$
(2.1)

and the corresponding unit eigenvectors in \(\mathbb{R}^{N}\) are

$$\begin{aligned} \vec{w}_{i}^{h}=\bigl(w_{i,1}^{h}, \ldots,w_{i,N}^{h}\bigr)^{T}, \quad \text{and}\quad w_{i,j}^{h}=\sqrt{\frac{2}{N+1}}\sin(i\pi jh), \quad j=1, \ldots,N. \end{aligned}$$
(2.2)

Clearly, the family of eigenvectors

$$\begin{aligned} \bigl\{ \vec{w}_{1}^{h},\vec{w}_{2}^{h}, \ldots,\vec{w}_{N}^{h}\bigr\} \text{ forms an orthonormal basis of } \mathbb{R}^{N}. \end{aligned}$$
(2.3)
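As a quick numerical sanity check of (2.1)-(2.3) (a hedged sketch, not used in the analysis; NumPy is assumed and the variable names are ours), one can compare the closed-form eigenpairs with a direct diagonalization of \(A_{h}\):

```python
import numpy as np

N = 50
h = 1.0 / (N + 1)
j = np.arange(1, N + 1)

# Closed-form eigenvalues (2.1) and unit eigenvectors (2.2); column i-1 of W is w_i^h
lam = (4.0 / h**2) * np.sin(j * np.pi * h / 2.0)**2
W = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * h * np.outer(j, j))

# Discrete Laplacian A_h and its numerically computed spectrum
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
lam_num = np.linalg.eigvalsh(A)

print(np.max(np.abs(lam - lam_num)))        # (2.1) vs. numerical eigenvalues
print(np.max(np.abs(A @ W - W * lam)))      # residual of A_h w_i^h = lambda_i(h) w_i^h
print(np.max(np.abs(W.T @ W - np.eye(N))))  # orthonormality (2.3)
```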

Moreover, these eigenvectors enjoy the following properties.

Lemma 2.1

  1. (i)

    For any eigenvector \(w=(w_{1},w_{2},\ldots,w_{N})^{T}\) with eigenvalue \(\lambda(h)\) of matrix \(A_{h}\), the following identity holds:

    $$\begin{aligned} \sum_{j=0}^{N}\biggl|\frac{w_{j}-w_{j+1}}{h}\biggr|^{2}= \lambda(h)\sum_{j=1}^{N}|w_{j}|^{2}, \end{aligned}$$
    (2.4)

    where \(w_{0}=w_{N+1}=0\).

  2. (ii)

    If \(w_{k}=(w_{k,1},w_{k,2},\ldots,w_{k,N})^{T}\) and \(w_{l}=(w_{l,1},w_{l,2},\ldots,w_{l,N})^{T}\) are eigenvectors associated with the eigenvalues \(\lambda_{k}\) and \(\lambda_{l}\), respectively, and \(\lambda_{k}\neq\lambda_{l}\), then we have

    $$\begin{aligned} \sum_{j=0}^{N}(w_{k,j}-w_{k,j+1}) (w_{l,j}-w_{l,j+1})=0, \end{aligned}$$
    (2.5)

    where \(w_{k,0}=w_{k,N+1}=0\), and \(w_{l,0}=w_{l,N+1}=0\).

This lemma is quoted from [5].
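For the reader's convenience, we sketch the standard argument behind it: for any vectors \(\vec{v},\vec{w}\in\mathbb{R}^{N}\) extended by \(v_{0}=v_{N+1}=w_{0}=w_{N+1}=0\), a discrete integration by parts yields

$$\begin{aligned} \frac{1}{h^{2}}\sum_{j=0}^{N}(w_{j}-w_{j+1}) (v_{j}-v_{j+1})=\sum_{j=1}^{N}\frac{2w_{j}-w_{j-1}-w_{j+1}}{h^{2}}v_{j}=\langle A_{h}\vec{w},\vec{v}\rangle_{\mathbb{R}^{N}}. \end{aligned}$$

Taking \(\vec{v}=\vec{w}\) with \(A_{h}\vec{w}=\lambda(h)\vec{w}\) gives (2.4), while taking two eigenvectors associated with distinct eigenvalues and using their orthogonality gives (2.5).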

Lemma 2.2

Assume N is large enough so that ω contains at least two consecutive nodal points. Then

$$B_{h}\vec{w}_{i}^{h}\neq0, \quad\textit{for all } i=1,2,\ldots,N. $$

Proof

Since \(\omega=(a,b)\) contains at least two consecutive nodal points, we may let \(l(h)\) denote the first natural number such that \(l(h)h\in(a,b)\), and \(m(h)\) the last natural number such that \(m(h)h\in(a,b)\). Then we have

$$\begin{aligned} B_{h}\vec{w}_{i}^{h}=\bigl(0,\ldots,0, w_{i,l(h)}^{h},\ldots,w_{i,m(h)}^{h}, 0,\ldots,0 \bigr)^{T}. \end{aligned}$$
(2.6)

Here,

$$w_{i,l(h)}^{h}=\sqrt{\frac{2}{N+1}}\sin\bigl(i\pi l(h)h \bigr)=\sqrt{\frac {2}{N+1}}\sin\biggl(\frac{il(h)\pi}{N+1}\biggr), $$

and

$$w_{i,l(h)+1}^{h}=\sqrt{\frac{2}{N+1}}\sin\bigl(i\pi \bigl(l(h)+1\bigr)h\bigr)=\sqrt{\frac {2}{N+1}}\sin\biggl(\frac{i(l(h)+1)\pi}{N+1} \biggr) $$

cannot vanish simultaneously. Indeed, if both vanished, then \(\frac{il(h)}{N+1}\) and \(\frac{i(l(h)+1)}{N+1}\) would both be integers, and hence so would their difference \(\frac{i}{N+1}\), which is impossible for \(1\leq i\leq N\). This completes the proof of the lemma. □

In particular, we obtain the following property of the eigenvectors of the matrix \(A_{h}\), which will play a key role in the proof of the main results.

Proposition 2.1

Assume N is large enough so that ω contains at least two consecutive nodal points. Then there exists a positive constant L, independent of h, such that

$$\bigl\| B_{h}\vec{w}_{i}^{h}\bigr\| _{\mathbb{R}^{N}}>L $$

holds for the unit eigenvector \(\vec{w}_{i}^{h}\) (\(i=1,\ldots, N\)) of the matrix \(A_{h}\).

Proof

First of all, we claim that there exists a positive number M, independent of r, such that

$$\int_{a}^{b}\sin^{2}(r\pi t)\,dt>M,\quad \text{for } r=1,2,\ldots. $$

To this end, we compute the following integrals:

$$\begin{aligned} \int_{a}^{b}\sin^{2}(r\pi t)\,dt =& \frac{b-a}{2}-\frac{\sin(2r\pi b)-\sin (2r\pi a)}{4r\pi} \\ \rightarrow& \frac{b-a}{2}, \quad\text{as } r\rightarrow\infty. \end{aligned}$$
(2.7)

On the one hand, there exists a positive number \(N_{1}\) depending only on \(b-a\), such that

$$\begin{aligned} \int_{a}^{b}\sin^{2}(r\pi t)\,dt> \frac{b-a}{4}, \quad\text{for } r>N_{1}. \end{aligned}$$

On the other hand, there exists a positive number \(M_{1}\), depending only on a, b, and \(N_{1}\), such that

$$\begin{aligned} \int_{a}^{b}\sin^{2}(r\pi t)\,dt>M_{1}, \quad\text{for } 0< r\leq N_{1}. \end{aligned}$$

Taking \(M=\min\{\frac{b-a}{4}, M_{1}\}\), we see that the claim is correct.

A direct calculation gives

$$\begin{aligned} \bigl\| B_{h}\vec{w}_{i}^{h} \bigr\| _{\mathbb{R}^{N}}^{2} =&\sum_{jh\in(a,b)} \frac {2}{N+1}\bigl[\sin(i\pi jh)\bigr]^{2} \\ =&\sum_{jh\in(a,b)}\frac{2}{N+1}\frac{1-\cos(2i\pi jh)}{2} \\ =&\sum_{jh\in(a,b)}\frac{1}{N+1}-\frac{1}{N+1}\sum _{jh\in (a,b)}\cos(2i\pi jh). \end{aligned}$$
(2.8)

As in the proof of Lemma 2.2, let \(l(h)\) denote the first natural number such that \(l(h)h\in(a,b)\), and \(m(h)\) the last natural number such that \(m(h)h\in(a,b)\). Obviously, we have

$$\begin{aligned} l(h)=\biggl[\frac{a}{h}\biggr]+1 \end{aligned}$$

and

$$\begin{aligned} m(h)=\left \{ \textstyle\begin{array}{l@{\quad}l} [\frac{b}{h}], & \text{if } \frac{b}{h}\notin\mathbb{N},\\ \frac{b}{h}-1, & \text{if } \frac{b}{h}\in\mathbb{N}. \end{array}\displaystyle \right . \end{aligned}$$

Using the summation formula \(\sum_{j=l}^{m}\cos(j\theta)=\frac{\sin((m+\frac{1}{2})\theta)-\sin((l-\frac{1}{2})\theta)}{2\sin(\frac{\theta}{2})}\) with \(\theta=2i\pi h\), we note that

$$\begin{aligned} \frac{1}{N+1}\sum_{jh\in(a,b)}\cos(2i\pi jh)=\frac{1}{2(N+1)}\frac {\sin([2m(h)+1]i\pi h)-\sin([2l(h)-1]i\pi h)}{\sin(i\pi h)}. \end{aligned}$$
(2.9)

Hence, there exists a natural number \(I_{1}\), depending only on \(b-a\), such that \(\frac{1}{i}<\frac{b-a}{4}\) whenever \(i>I_{1}\). Recall the elementary inequality

$$\begin{aligned} \frac{2}{\pi}\leq\frac{\sin x}{x}\leq1, \quad \text{for } x\in \biggl(0, \frac{\pi}{2}\biggr], \end{aligned}$$

i.e.,

$$\begin{aligned} \sin x\geq\frac{2}{\pi}x, \quad \text{for } x\in\biggl(0,\frac{\pi}{2}\biggr]. \end{aligned}$$

Combining the above inequality with (2.9), we see that

$$\begin{aligned} \biggl|\frac{1}{N+1}\sum_{jh\in(a,b)}\cos(2i\pi jh)\biggr|\leq\frac {1}{2(N+1)}\cdot\frac{1}{ih}=\frac{1}{2i}< \frac{b-a}{4}, \quad\text{for } i>I_{1}. \end{aligned}$$
(2.10)

If \(0< i\leq I_{1}\), then, by the convergence of Riemann sums,

$$\begin{aligned} \sum_{jh\in(a,b)}\frac{2}{N+1}\bigl[\sin(i\pi jh) \bigr]^{2}\rightarrow2 \int _{a}^{b}\sin^{2}(i\pi t)\,dt, \end{aligned}$$

for \(i=1,2,\ldots,I_{1}\), as \(N\rightarrow\infty\), and

$$\sum_{jh\in(a,b)}\frac{1}{N+1}=\frac{m(h)-l(h)+1}{N+1} \rightarrow b-a, \quad\text{as } N\rightarrow\infty. $$

Thus, there exists a positive number \(N_{2}>I_{1}\), depending only on a and b, such that

$$\begin{aligned} \sum_{jh\in(a,b)}\frac{1}{N+1}> \frac{b-a}{2} \end{aligned}$$
(2.11)

and

$$\begin{aligned} \sum_{jh\in(a,b)}\frac{2}{N+1}\bigl[ \sin(i\pi jh)\bigr]^{2}>M, \end{aligned}$$
(2.12)

for \(i=1,2,\ldots,I_{1}\), when \(N>N_{2}\).

Case I: \(N>N_{2}\), \(N\geq i>I_{1}\). From (2.8), (2.10), and (2.11), we can easily see that

$$\begin{aligned} \bigl\| B_{h}\vec{w}_{i}^{h}\bigr\| _{\mathbb{R}^{N}}^{2} =& \sum_{jh\in(a,b)}\frac {1}{N+1}-\frac{1}{N+1}\sum _{jh\in(a,b)}\cos(2i\pi jh) >\frac{b-a}{4}. \end{aligned}$$

Case II: \(N>N_{2}\), \(0< i\leq I_{1}\). According to (2.12), we can derive

$$\begin{aligned} \bigl\| B_{h}\vec{w}_{i}^{h}\bigr\| _{\mathbb{R}^{N}}^{2}= \sum_{jh\in(a,b)}\frac {2}{N+1}\bigl[\sin(i\pi jh) \bigr]^{2}>M. \end{aligned}$$

Case III: \(N\leq N_{2}\). Since only finitely many values of N (hence of h) and of i are involved, and since \(\|B_{h}\vec{w}_{i}^{h}\|_{\mathbb{R}^{N}}>0\) for each of them by Lemma 2.2, there exists a positive constant \(L_{1}\), independent of h, such that

$$\bigl\| B_{h}\vec{w}_{i}^{h}\bigr\| _{\mathbb{R}^{N}}> L_{1}, $$

for any unit eigenvector \(\vec{w}_{i}^{h}\) (\(i=1,\ldots, N\)).

In summary, since \(b-a<1\) and \(M<1\), taking \(L=\min\{\frac{b-a}{4}, \frac{M}{2}, L_{1}\}\) completes the proof of the proposition. □

Remark 2.1

This proposition gives a fundamental property of the unit eigenvectors \(\vec{w}_{i}^{h}\) (\(i=1,\ldots,N\)) of the discrete Laplacian. It shows that the portion of these unit eigenvectors supported in the nonempty open subset \(\omega\subset(0,1)\) has a uniform positive lower bound that does not depend on h.
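As a numerical illustration of Proposition 2.1 (a sketch only, with \(\omega=(0.3,0.6)\) chosen for definiteness; NumPy is assumed and the names are ours), one can tabulate \(\min_{1\leq i\leq N}\|B_{h}\vec{w}_{i}^{h}\|_{\mathbb{R}^{N}}\) for increasing N and observe that it stays bounded away from zero:

```python
import numpy as np

def min_localized_mass(N, a=0.3, b=0.6):
    """Return min over i of ||B_h w_i^h|| for the unit eigenvectors (2.2)."""
    h = 1.0 / (N + 1)
    j = np.arange(1, N + 1)
    inside = (j * h > a) & (j * h < b)                      # nodal points lying in omega
    W = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * h * np.outer(j, j))
    return np.sqrt((W[inside, :]**2).sum(axis=0)).min()

for N in (50, 100, 200, 400, 800):
    print(N, min_localized_mass(N))
```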

We now introduce some further notation. Let \(X_{0}^{h}\) denote the space \(\mathbb{R}^{N}\) equipped with the norm \(\|\cdot\|_{0}\) defined by

$$\begin{aligned} \|u\|_{0}^{2}=h\sum_{j=1}^{N}|u_{j}|^{2}, \quad\text{for any } u=(u_{1},u_{2},\ldots ,u_{N})^{T} \in\mathbb{R}^{N}. \end{aligned}$$

Let \(X_{1}^{h}\) denote the space \(\mathbb{R}^{N}\) equipped with the norm \(\|\cdot\|_{1}\),

$$\begin{aligned} \|u\|_{1}^{2}=h\sum_{j=0}^{N}\biggl| \frac{u_{j}-u_{j+1}}{h}\biggr|^{2}, \quad\text{for any } u=(u_{1},u_{2}, \ldots,u_{N})^{T}\in\mathbb{R}^{N}, \end{aligned}$$

where \(u_{0}=u_{N+1}=0\). With these discrete norms, the energy can be written as \(E_{h}(t)=\frac{1}{2}(\|\vec{\phi}'_{h}(t)\|_{0}^{2}+\|\vec {\phi}_{h}(t)\|_{1}^{2})\), where \(\vec{\phi}_{h}(t)\) is the solution of equation (1.4).
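In terms of code, the discrete norms and the energy may be evaluated as follows (a minimal sketch consistent with the definitions above; NumPy is assumed and the function names are ours):

```python
import numpy as np

def norm0_sq(u, h):
    """Squared discrete L^2-type norm: h * sum_j |u_j|^2."""
    return h * np.sum(u**2)

def norm1_sq(u, h):
    """Squared discrete H_0^1-type norm: h * sum_j |(u_j - u_{j+1})/h|^2, with u_0 = u_{N+1} = 0."""
    v = np.concatenate(([0.0], u, [0.0]))
    return np.sum(np.diff(v)**2) / h

def energy(phi, dphi, h):
    """Discrete energy E_h(t) = (1/2) * (||phi'||_0^2 + ||phi||_1^2)."""
    return 0.5 * (norm0_sq(dphi, h) + norm1_sq(phi, h))
```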

Lemma 2.3

For any vector \(u=(u_{1},u_{2},\ldots,u_{N})^{T}\in\mathbb{R}^{N}\), we have the following inequality:

$$\begin{aligned} \lambda_{1}(h)\|u\|_{0}^{2}\leq\|u\|_{1}^{2}. \end{aligned}$$

This lemma follows easily from Lemma 2.1: expanding u in the orthonormal basis (2.3) as \(u=\sum_{i=1}^{N}c_{i}\vec{w}_{i}^{h}\) and using (2.4)–(2.5), we get \(\|u\|_{1}^{2}=h\sum_{i=1}^{N}\lambda_{i}(h)c_{i}^{2}\geq\lambda_{1}(h)h\sum_{i=1}^{N}c_{i}^{2}=\lambda_{1}(h)\|u\|_{0}^{2}\).

Remark 2.2

  1. (i)

    According to Lemma 2.3, \(\|\cdot\|_{1}\) is indeed a norm, so the spaces \(X_{0}^{h}\) and \(X_{1}^{h}\) are both Banach spaces. In fact, \(X_{0}^{h}\) and \(X_{1}^{h}\) can be regarded as discrete versions of the spaces \(L^{2}(0,1)\) and \(H_{0}^{1}(0,1)\), respectively. Thus, Lemma 2.3 can be regarded as a discrete version of Poincaré’s inequality.

  2. (ii)

    Since \(\mathbb{R}^{N}\times\mathbb{R}^{N}\) is a finite-dimensional space, all norms on it are equivalent. In particular, there exist positive numbers \(C_{1}\), \(C_{2}\) (possibly depending on h), such that

    $$\begin{aligned} C_{1}\bigl\| (z_{1},z_{2}) \bigr\| _{X_{0}^{h}\times X_{1}^{h}}\leq\bigl\| (z_{1},z_{2})\bigr\| _{\mathbb{R}^{N}\times\mathbb{R}^{N}}\leq C_{2}\bigl\| (z_{1},z_{2})\bigr\| _{X_{0}^{h}\times X_{1}^{h}} \end{aligned}$$
    (2.13)

    hold for any \((z_{1},z_{2})\in\mathbb{R}^{N}\times\mathbb{R}^{N}\).

3 The proof of Theorem 1.1 and Theorem 1.2

3.1 The proof of Theorem 1.1

Proof

First of all, we will prove that there exists a positive constant \(C(T,h)\) such that the inequality

$$\begin{aligned} \bigl\| \bigl(\vec{\phi}_{0}^{h}, \vec{ \phi}_{1}^{h}\bigr)\bigr\| ^{2}_{\mathbb{R}^{N}\times \mathbb{R}^{N}}\leq C(T,h) \int_{0}^{T}\bigl\| B_{h}\vec{ \phi}_{h}(t)\bigr\| _{\mathbb{R}^{N}}^{2}\,dt \end{aligned}$$
(3.1)

holds, where \(\vec{\phi}_{h}(t)\) is the solution of (1.4) with initial data \((\vec{\phi}_{0}^{h},\vec{\phi}_{1}^{h})\).

Let \(T>0\). We first define a function \(F: \mathbb{R}^{N}\times\mathbb{R}^{N}\rightarrow\mathbb {R}\) as

$$F\bigl(\vec{\phi}_{0}^{h},\vec{\phi}_{1}^{h} \bigr)= \int_{0}^{T}\bigl\| B_{h}\vec{ \phi}_{h}(t)\bigr\| _{\mathbb{R}^{N}}^{2}\,dt, $$

where \(\vec{\phi}_{h}(t)\) is the solution of (1.4) with initial data \((\vec{\phi}_{0}^{h},\vec{\phi}_{1}^{h})\). Obviously, F is continuous. Now, we will prove that

$$\begin{aligned} \min\bigl\{ F\bigl(\vec{\phi}_{0}^{h},\vec{ \phi}_{1}^{h}\bigr) ; \bigl\| \bigl(\vec{\phi }_{0}^{h},\vec{\phi}_{1}^{h}\bigr) \bigr\| _{\mathbb{R}^{N}\times\mathbb{R}^{N}}=1\bigr\} \geq L(h,T) \end{aligned}$$
(3.2)

holds for some positive constant \(L(h,T)\) depending only on h and T.

Since F is continuous and the unit sphere of \(\mathbb{R}^{N}\times\mathbb{R}^{N}\) is compact, the minimum in (3.2) is attained; it therefore suffices to show that it is positive. Suppose, to the contrary, that there exists a unit vector \((\vec{\varphi}_{0}^{h},\vec{\varphi}_{1}^{h})\) in \(\mathbb{R}^{N}\times \mathbb{R}^{N}\) such that \(F(\vec{\varphi}_{0}^{h},\vec{\varphi}_{1}^{h})=0\). Since \(\|(\vec{\varphi}_{0}^{h},\vec{\varphi}_{1}^{h})\|_{\mathbb{R}^{N}\times \mathbb{R}^{N}}=1\), the vectors \(\vec{\varphi}_{0}^{h}\) and \(\vec{\varphi}_{1}^{h}\) cannot both be zero. Without loss of generality, we assume that \(\vec{\varphi}_{0}^{h}\neq0\). According to (2.3), we have

$$\begin{aligned} \vec{\varphi}_{0}^{h}=\sum_{j=1}^{N} \varphi_{0}^{j} \vec{w}_{j}^{h} \end{aligned}$$

and

$$\begin{aligned} \vec{\varphi}_{1}^{h}=\sum_{j=1}^{N} \varphi_{1}^{j} \vec{w}_{j}^{h}, \end{aligned}$$

where \(\sum_{j=1}^{N}|\varphi_{0}^{j}|^{2}=\|\vec{\varphi}_{0}^{h}\|_{\mathbb {R}^{N}}^{2}\neq0\). Solving (1.4), we can deduce that

$$\begin{aligned} \vec{\phi}_{h}(t)=\sum_{j=1}^{N} \beta _{j}(t)\vec{w}_{j}^{h}, \end{aligned}$$
(3.3)

where \(\beta_{j}(t)=\varphi_{0}^{j}\cos(\sqrt{\lambda_{j}(h)}t)+\frac {\varphi_{1}^{j}}{\sqrt{\lambda_{j}(h)}}\sin(\sqrt{\lambda_{j}(h)}t)\).

From the definition of the function F and the assumption that \(F(\vec{\varphi}_{0}^{h},\vec{\varphi}_{1}^{h})=0\), we have

$$\begin{aligned} 0=F\bigl(\vec{\varphi}_{0}^{h},\vec{\varphi}_{1}^{h} \bigr)= \int_{0}^{T}\bigl\| B_{h}\vec{\phi }_{h}(t)\bigr\| _{\mathbb{R}^{N}}^{2}\,dt = \int_{0}^{T}\Biggl\| \sum_{j=1}^{N}\beta_{j}(t)B_{h}\vec{w}_{j}^{h}\Biggr\| _{\mathbb{R}^{N}}^{2}\,dt. \end{aligned}$$

Since \(t\mapsto B_{h}\vec{\phi}_{h}(t)\) is continuous, it follows that

$$\begin{aligned} \sum_{j=1}^{N} \beta_{j}(t)B_{h}\vec{w}_{j}^{h}=0, \quad\text{for any } t\in[0,T]. \end{aligned}$$
(3.4)

It follows from Lemma 2.2 (or Proposition 2.1) that \(B_{h}\vec{w}_{j}^{h}\neq0\) for every \(j=1,2,\ldots,N\). On the other hand, the dimension of the subspace spanned by \(\{B_{h}\vec{w}_{1}^{h}, \ldots,B_{h}\vec{w}_{N}^{h}\}\) is less than N. Therefore, after relabeling if necessary, we may assume that \(B_{h}\vec{w}_{1}^{h}, \ldots,B_{h}\vec{w}_{\alpha}^{h}\), with \(1\leq\alpha< N\), are linearly independent in \(\mathbb{R}^{N}\), and

$$\operatorname{span}\bigl\{ B_{h}\vec{w}_{1}^{h}, \ldots,B_{h}\vec{w}_{\alpha}^{h}\bigr\} =\operatorname{span} \bigl\{ B_{h}\vec{w}_{1}^{h}, \ldots,B_{h} \vec{w}_{N}^{h}\bigr\} . $$

Hence,

$$\begin{aligned} B_{h}\vec{w}_{q}^{h}=\sum _{j=1}^{\alpha}b_{qj}B_{h} \vec{w}_{j}^{h},\quad\text{for any } q=\alpha+1,\ldots,N. \end{aligned}$$

Since \(B_{h}\vec{w}_{q}^{h}\neq0\), for each \(q=\alpha+1,\ldots,N\) there exists at least one scalar \(b_{qj(q)}\) (\(1\leq j(q)\leq\alpha\)) such that \({b_{qj(q)}\neq0}\). This, together with (3.4), indicates that

$$\begin{aligned} 0 =&\sum_{j=1}^{\alpha}\beta_{j}(t)B_{h} \vec{w}_{j}^{h}+\sum_{q=\alpha +1}^{N} \beta_{q}(t) \Biggl(\sum_{j=1}^{\alpha}b_{qj}B_{h}\vec{w}_{j}^{h} \Biggr) \\ =&\sum_{j=1}^{\alpha} \Biggl( \beta_{j}(t)+\sum_{q=\alpha+1}^{N}\beta _{q}(t)b_{qj} \Biggr)B_{h}\vec{w}_{j}^{h}, \quad\text{for any } t\in[0,T]. \end{aligned}$$
(3.5)

According to the linear independence of \(\{B_{h}\vec{w}_{j}^{h}\}_{j=1}^{\alpha}\), we can deduce that

$$\begin{aligned} \beta_{j}(t)+\sum_{q=\alpha+1}^{N} \beta_{q}(t)b_{qj}=0, \quad\text{for any } j=1,2,\ldots,\alpha, \text{and for any } t\in[0,T]. \end{aligned}$$
(3.6)

Taking \(t=0\), we get

$$\begin{aligned} \varphi_{0}^{j}+\sum_{q=\alpha+1}^{N} \varphi_{0}^{q}b_{qj}=0, \quad\text{for any } j=1,2, \ldots,\alpha. \end{aligned}$$

Differentiating (3.6) twice with respect to t (note that \(\beta_{j}''(t)=-\lambda_{j}(h)\beta_{j}(t)\)) and taking \(t=0\), we have

$$\begin{aligned} \lambda_{j}(h)\varphi_{0}^{j}+\sum _{q=\alpha+1}^{N}\lambda_{q}(h)\varphi _{0}^{q}b_{qj}=0, \quad \text{for any } j=1,2,\ldots, \alpha. \end{aligned}$$

Differentiating (3.6) \(2m\) times and arguing by induction, we obtain

$$\begin{aligned} \lambda_{j}^{m}(h)\varphi_{0}^{j}+\sum _{q=\alpha+1}^{N}\lambda _{q}^{m}(h) \varphi_{0}^{q}b_{qj}=0, \quad\text{for any } j=1,2, \ldots,\alpha, \text{and } m\in \mathbb{N}^{+}. \end{aligned}$$
(3.7)

It follows from (2.1) that the eigenvalues \(\{\lambda_{j}(h)\}_{j=1}^{N}\) are pairwise distinct. For each fixed \(j\in\{1,\ldots,\alpha\}\), the identity obtained at \(t=0\) (the case \(m=0\)) together with (3.7) forms a Vandermonde-type linear system in the unknowns \(\varphi_{0}^{j}\) and \(\varphi_{0}^{q}b_{qj}\) (\(q=\alpha+1,\ldots,N\)), built on the distinct nodes \(\lambda_{j}(h),\lambda_{\alpha+1}(h),\ldots,\lambda_{N}(h)\); since the corresponding Vandermonde matrix is invertible, we deduce that

$$\begin{aligned} \varphi_{0}^{j}=0, \quad\text{for any } j=1,2, \ldots,\alpha, \end{aligned}$$
(3.8)

and

$$\begin{aligned} \varphi_{0}^{q}b_{qj}=0, \quad\text{for any } q= \alpha+1,\ldots,N, \text{and } j=1,2,\ldots,\alpha. \end{aligned}$$

Taking \(j=j(q)\), we have

$$\begin{aligned} \varphi_{0}^{q}=0, \quad\text{for any } q=\alpha+1,\ldots,N. \end{aligned}$$

This, together with (3.8), contradicts the assumption that \(\vec{\varphi}_{0}^{h}\neq0\). Thus, (3.2) holds. Note that (3.3) implies \(F(\mu\upsilon_{1}, \mu\upsilon_{2})=\mu^{2}F(\upsilon_{1}, \upsilon_{2})\) for every \((\upsilon_{1}, \upsilon_{2})\in{\mathbb{R}^{N}}\times{\mathbb {R}^{N}}\) and \(\mu\in\mathbb{R}\). Hence inequality (3.2) yields (3.1) with \(C(T,h)=1/L(h,T)\).

Combining (3.1) with (2.13), we obtain the observability inequality (1.5) for the semi-discrete system (1.4). This completes the proof of the theorem. □

3.2 The proof of Theorem 1.2

Proof

Given the initial data \(\vec{\phi}_{0}^{h}=\vec{w}_{N}^{h}\) and \(\vec{\phi}_{1}^{h}=0\), the solution of equation (1.4) can be represented as

$$\begin{aligned} \vec{\phi}_{h}(t)=\cos\bigl(\sqrt{\lambda_{N}(h)}t\bigr) \vec{w}_{N}^{h}. \end{aligned}$$

By Lemma 2.1(i), we obtain

$$\begin{aligned} E_{h}(0)=\frac{h}{2}\sum _{i=0}^{N}\biggl|\frac{w_{N,i+1}-w_{N,i}}{h}\biggr|^{2}= \frac {h}{2}\lambda_{N}(h), \end{aligned}$$
(3.9)

where \(w_{N,0}=w_{N,N+1}=0\). Then

$$\begin{aligned} \int_{0}^{T}\bigl\| B_{h}\vec{ \phi}_{h}(t)\bigr\| _{\mathbb{R}^{N}}^{2}\,dt =& \int_{0}^{T}\bigl\| \cos \bigl(\sqrt{\lambda_{N}(h)}t \bigr)B_{h}\vec{w}_{N}^{h}\bigr\| _{\mathbb{R}^{N}}^{2}\,dt \end{aligned}$$
(3.10)
$$\begin{aligned} =&\bigl\| B_{h}\vec{w}_{N}^{h}\bigr\| _{\mathbb{R}^{N}}^{2} \int_{0}^{T}\bigl|\cos\bigl(\sqrt{\lambda _{N}(h)}t\bigr)\bigr|^{2}\,dt\leq T, \end{aligned}$$
(3.11)

since \(\|B_{h}\vec{w}_{N}^{h}\|_{\mathbb{R}^{N}}\leq\|\vec{w}_{N}^{h}\|_{\mathbb{R}^{N}}=1\). It follows from (3.9) and (3.11) that

$$\begin{aligned} \frac{E_{h}(0)}{\int_{0}^{T}\|B_{h}\vec{\phi}_{h}(t)\|_{\mathbb {R}^{N}}^{2}\,dt}\geq \frac{\frac{h}{2}\lambda_{N}(h)}{T}=\frac{h\lambda_{N}(h)}{2T}. \end{aligned}$$
(3.12)

By (2.1), we derive that

$$\begin{aligned} \frac{h\lambda_{N}(h)}{2T}=\frac{2\sin^{2}(\frac{\pi Nh}{2})}{Th}\rightarrow\infty, \quad \text{as } h \rightarrow0. \end{aligned}$$
(3.13)

This completes the proof of this theorem. □
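The blow-up in (3.12)-(3.13) can also be observed numerically. The following sketch (with \(\omega=(0.3,0.6)\) and \(T=1\) chosen for definiteness; NumPy is assumed and the names are ours) evaluates the quotient in (3.12) for the initial data \((\vec{w}_{N}^{h},0)\) used in the proof:

```python
import numpy as np

def observability_quotient(N, a=0.3, b=0.6, T=1.0):
    """E_h(0) / int_0^T ||B_h phi_h(t)||^2 dt for the initial data (w_N^h, 0)."""
    h = 1.0 / (N + 1)
    lam_N = (4.0 / h**2) * np.sin(N * np.pi * h / 2.0)**2
    x = h * np.arange(1, N + 1)
    w_N = np.sqrt(2.0 / (N + 1)) * np.sin(N * np.pi * x)

    mass = np.sum(w_N[(x > a) & (x < b)]**2)                      # ||B_h w_N^h||^2
    freq = np.sqrt(lam_N)
    time_int = T / 2.0 + np.sin(2.0 * freq * T) / (4.0 * freq)    # int_0^T cos^2(freq*t) dt
    return 0.5 * h * lam_N / (mass * time_int)                    # E_h(0) = (h/2) * lambda_N(h)

for N in (50, 100, 200, 400, 800):
    print(N, observability_quotient(N))    # blows up as h -> 0, in line with (3.13)
```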