1 Introduction

We consider the model of a Timoshenko beam slowly rotating in a horizontal plane, where the left end of the beam is clamped to the disk of a driving motor [1]. A number of works concerning various controllability problems for this model have been published. In particular, Krabs and Sklyar [2] analyzed the appropriate non-Fourier trigonometric moment problem in 1999, inspired by Russel (e.g., [3]). They showed that the system is rest-to-rest controllable, under some conditions on the physical properties of the material of the beam, if the time of steering is strictly greater than a certain critical value (the minimal controllability time). The more general problem of describing all reachable states of the system of the beam and the disk was solved in [4, 5]. Those results were used to obtain a new controllability condition for the beam in the form of smoothness of the end states [6]. A similar model of a non-homogeneous beam was considered in [7], where the minimal time for rest-to-rest controllability was found.

It is worth mentioning that there exists at most one control function steering the beam from a preassigned position of rest to another if the time of steering equals exactly the minimal value. If we allow the time to be greater than the minimal controllability time, then there exists an infinite number of such controls. Therefore, it is natural to consider only intervals larger than the minimal time and to search for controls satisfying additional special requirements. A numerical construction of a control function steering the beam from one position of rest to another, for a sufficiently large time of movement, was given in 2000 in the form of a piecewise constant function [8]. In the present work, we consider another problem. Namely, we want to construct a control that is optimal in the sense of minimal energy (i.e., fuel) consumption during the movement, steering the beam from one assigned position of rest to another, provided the time is greater than the minimal controllability time. The idea of the framework is to observe that the optimal function lies in the closure of a specific linear span, which allows us to rewrite the control as a solution of an appropriate moment problem. The moment problem obtained in this way is non-Fourier: the eigenvalues of a related operator are simple, but they come in pairs that are asymptotically close to each other. It nevertheless turns out to be possible to approximate the pairs of moment generating functions corresponding to close eigenvalues by certain divided differences. Besides, those functions are close to periodic ones, and this fact allows us to approximate the moment problem by another infinite moment problem of a special type. Such an idea was used for solving a certain problem of optimal control of a string in [9]. In the end we obtain a finite set of linear equations. The approach of replacing the infinite set of equations of some non-Fourier moment problem by a finite set is used, for example, in [10, 11]. A similar idea for a non-homogeneous string was recently presented in [12]. We also show that the speed of convergence of the presented method is essentially better than the one obtained with the classical truncation method [13].

2 Optimal Control Problem as a Moment Problem

We consider the linearized model from [1] of a Timoshenko beam slowly rotating in a horizontal plane. The left end of the beam is clamped to the disk of a driving motor. We denote by \(r>0\) the radius of that disk and let \(\theta =\theta (t)\) be the rotation angle considered as a function of time \((t \ge 0)\). Further on, we denote some of the physical properties of the beam: \(E\), the flexural rigidity; \(K\), the shear stiffness; and \(A\), the cross-section area; the length of the beam is assumed to be \(1\). We denote by \(w(x,t)\) the deflection of the center line of the beam and by \(\xi (x,t)\) the rotation angle of the cross section at the location \(x\) and at the time \(t\). Then the linearized model of the behavior of \(w\) and \(\xi \) is given (see [14] and the references therein) in the form of the following system of two partial differential equations:

$$\begin{aligned} \begin{array}{l} \ddot{w}(x,t)-w''(x,t)-\xi '(x,t)=-\ddot{\theta }(t)(r+x),\\ \ddot{\xi }(x,t)-\gamma ^2\xi ''(x,t)+w'(x,t)+\xi (x,t)=\ddot{\theta }(t) \end{array} \end{aligned}$$
(1)

with \(x\in ]0,1[\) and \(t>0\), where \(\gamma ^2=\frac{EA}{K}\), \(\dot{y}(x,t)=\frac{\partial }{\partial t}y(x,t)\), \(y'(x,t)=\frac{\partial }{\partial x}y(x,t)\). In addition to (1), we impose boundary conditions given by

$$\begin{aligned} w(0,t)=\xi (0,t)= w'(1,t)+\xi (1,t)=\xi '(1,t)=0 \end{aligned}$$
(2)

for \(t\ge 0\), which mean that there is no deformation at the clamped end and the energy balance law holds on the other end. We also assume that the radius \(r\) of the disk is non-singular [14]. The beam will be controlled by the angular acceleration of the motor disk, that is by \(\ddot{\theta }\).

From now on we assume that \(\gamma =1\). Let a time \(T>0\) (large enough) and a position \(\theta _T\in \mathbb {R}\) be given. For convenience, let us denote \(u(t)=\ddot{\theta }(T-t)\) and assume that \(T=2M\), \(M\in \mathbb {N}\), \(M>2\). We want to find a control \(u\in L^2[0,T]\) that moves the beam from the position of rest at time \(t=0\) and angle \(\theta =0\), i.e.,

$$\begin{aligned} w(x,0)&=\dot{w}(x,0)=0, \nonumber \\ \xi (x,0)&=\dot{\xi }(x,0)=0, \nonumber \\ \theta (0)&=\dot{\theta }(0)=0, \end{aligned}$$
(3)

\(x\in [0,1]\), to the position of rest at time \(t=T\) and angle \(\theta _T\), i.e.,

$$\begin{aligned} w(x,T)&=\dot{w}(x,T)=0, \nonumber \\ \xi (x,T)&=\dot{\xi }(x,T)=0, \nonumber \\ \dot{\theta }(T)&=0,\quad \theta (T)=\theta _T, \end{aligned}$$
(4)

\(x\in [0,1]\). Moreover, we want \(u\) to be optimal in the following sense:

$$\begin{aligned} \min _u\int \nolimits _0^Tu^2(t)\mathrm{{d}}t. \end{aligned}$$
(5)

It is shown in [1] that if \(T\ge 4\), the problem of finding a rest-to-rest control (1)–(4) is solvable, and the solution is a real function. It is also proven that this problem is equivalent to solving the following moment problem:

$$\begin{aligned} \begin{array}{rrl} &{}&{}\displaystyle \int \nolimits _0^T\mathrm{{e}}^{i\sqrt{\lambda _n}t}u(t)\mathrm{{d}}t=0, \quad n\in \mathbb {Z},\\ &{}&{}\displaystyle \int \nolimits _0^Tu(t)\mathrm{{d}}t=0,\\ &{}&{}\displaystyle \int \nolimits _0^Ttu(t)\mathrm{{d}}t=\theta _T, \end{array} \end{aligned}$$
(6)

where for \(n>0\), \(\lambda _n\) is an eigenvalue of the operator connected with the beam equation (1), and for \(n\le 0\) we put \(\lambda _{-n}=-\lambda _{n+1}\). In [2], an analysis of the values \(\lambda _n\) is given; in particular, it is shown that for \(n>0\) we have

$$\begin{aligned} \sqrt{\lambda _n}=\left\{ \begin{array}{rcll} \frac{2k-1}{2}\pi -\varepsilon _n &{} \quad \hbox {for}&{}n=2k-1,&{}\\ \frac{2k-1}{2}\pi +\varepsilon _n &{} \quad \hbox {for}&{}n=2k, &{}\varepsilon _n=\mathcal {O}(\frac{1}{n}).\\ \end{array} \right. \end{aligned}$$
(7)
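
For orientation, the pairing described by (7) can be visualized with a tiny script. A minimal sketch follows; the constant in the \(\mathcal {O}(\frac{1}{n})\) term depends on the beam and is not given here, so the value of \(c\) below is a purely hypothetical stand-in.

```python
import numpy as np

# Illustration of the asymptotics (7): the square roots of the eigenvalues come
# in pairs clustered around (2k - 1) * pi / 2. The true eps_n are only known to
# be O(1/n); the constant c below is a hypothetical stand-in, not a real value.
c = 0.3
for k in range(1, 6):
    n_odd, n_even = 2 * k - 1, 2 * k
    center = (2 * k - 1) * np.pi / 2
    sqrt_lam_odd = center - c / n_odd        # model for sqrt(lambda_{2k-1})
    sqrt_lam_even = center + c / n_even      # model for sqrt(lambda_{2k})
    print(k, sqrt_lam_odd, sqrt_lam_even,
          sqrt_lam_even - sqrt_lam_odd)      # the gap within a pair shrinks like O(1/k)
```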

The moment problem (6) has a major disadvantage: its generating family contains two families of exponential functions with exponents that approach one another, and therefore it does not constitute a Riesz basis. We will circumvent this by considering an equivalent moment problem. It is easy to see that each pair of moment equations from (6) corresponding to eigenvalues close to each other, that is

$$\begin{aligned} \int \nolimits _0^T\mathrm{{e}}^{i\sqrt{\lambda _{2k-1}}t}u(t)\mathrm{{d}}t=0, \quad \int \nolimits _0^T\mathrm{{e}}^{i\sqrt{\lambda _{2k}}t}u(t)\mathrm{{d}}t=0, \end{aligned}$$

is equivalent to a pair of equations

$$\begin{aligned} \int \nolimits _0^T\mathrm{{e}}^{i\sqrt{\lambda _{2k-1}}t}u(t)\mathrm{{d}}t=0, \quad \int \nolimits _0^T\frac{\mathrm{{e}}^{i\sqrt{\lambda _{2k}}t}-\mathrm{{e}}^{i\sqrt{\lambda _{2k-1}}t}}{\sqrt{\lambda _{2k}}t-\sqrt{\lambda _{2k-1}}t}u(t)\mathrm{{d}}t=0, \end{aligned}$$

which allows us to rewrite the original moment problem (6) as

$$\begin{aligned} \begin{array}{rrl} &{}&{}\displaystyle \int \nolimits _0^T \mathrm{{e}}^{i\sqrt{\lambda _k}t} u(t)\mathrm{{d}}t=0,\quad -N+1 \le 2k-1 < N,\\ &{}&{}\displaystyle \int \nolimits _0^T\mathrm{{e}}^{i\sqrt{\lambda _{2k-1}}t}u(t)\mathrm{{d}}t=0,\quad -N+1 > 2k-1 \hbox { or } 2k-1 > N, \\ &{}&{}\displaystyle \int \nolimits _0^T\frac{\mathrm{{e}}^{i\sqrt{\lambda _{2k}}t}-\mathrm{{e}}^{i\sqrt{\lambda _{2k-1}}t}}{\sqrt{\lambda _{2k}}t-\sqrt{\lambda _{2k-1}}t}u(t)\mathrm{{d}}t=0,\quad -N+1 > 2k-1 \hbox { or } 2k-1 > N, \\ &{}&{}\displaystyle \int \nolimits _0^T u(t)\mathrm{{d}}t=0,\\ &{}&{}\displaystyle \int \nolimits _0^T tu(t)\mathrm{{d}}t=\theta _T, \end{array} \end{aligned}$$
(8)

for a fixed even integer \(N\), with generating functions constituting a Riesz basis (see [5]).
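
To give a rough numerical feel for this degeneracy, the short sketch below (not part of the original argument) compares the \(L^2(0,T)\) Gram matrices of a pair of exponentials with nearly equal exponents and of a pair of the limiting type \(\mathrm{e}^{iat}\), \(t\mathrm{e}^{iat}\) that will reappear in the approximation of Section 4; the values of \(T\), the carrier \(a\) and the gap used below are arbitrary illustrative choices.

```python
import numpy as np

# A rough numerical illustration (not taken from the paper): the Gram matrix of
# a pair of exponentials with nearly equal exponents is almost singular, while
# a pair of the type e^{iat}, t e^{iat} is well conditioned. T, a, eps arbitrary.
T = 8.0
t = np.linspace(0.0, T, 40001)

def gram(funcs):
    """Matrix of L^2(0,T) inner products of the sampled functions."""
    return np.array([[np.trapz(u * np.conj(v), t) for v in funcs] for u in funcs])

a = 7 * np.pi / 2
for eps in (1e-1, 1e-2, 1e-3):
    b = a + 2 * eps
    close_pair = [np.exp(1j * a * t), np.exp(1j * b * t)]
    limit_pair = [np.exp(1j * a * t), t * np.exp(1j * a * t)]
    print(eps,
          np.linalg.cond(gram(close_pair)),   # grows roughly like 1 / eps**2
          np.linalg.cond(gram(limit_pair)))   # stays bounded, independent of eps
```

The first condition number blows up as the gap closes, which is the quantitative form of the disadvantage mentioned above, while the second remains moderate.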

3 Classical Truncation Method

In a general separable Hilbert space setting, solving an infinite moment problem of the form

$$\begin{aligned} \langle u, z_j \rangle = c_j, \quad j \in \mathbb {N} \end{aligned}$$
(9)

(of which (8) is a special case) is not an easy task. One of the most common approaches consists in truncating the system to a finite number of equations (see, for example, [13] or the monograph [15]). Let us discuss it in detail.

Following the notation from [15], consider the truncated moment problem

$$\begin{aligned} \langle u, z_j \rangle = c_j, \quad j \in \{1, \ldots , N\} \end{aligned}$$
(10)

for an arbitrary \(N \in \mathbb {N}\). Provided the sequence \((z_j)\) is linearly independent, there exists exactly one solution \(u^N\) of (10) of the form

$$\begin{aligned} u^N:=\sum _{j=1}^N \xi _j^N z_j, \end{aligned}$$

and \(u^N\) has the smallest possible norm \(\Vert u^N\Vert \) among all solutions of (10). Moreover, if we assume that

$$\begin{aligned} \lim _{N\rightarrow \infty }\inf \Big \{\Vert u\Vert \, \big | \, u\,\mathrm{{solves\,(10)}}\Big \} < \infty , \end{aligned}$$

then there is exactly one solution \(u = u^\infty \) of (9) with the least norm, given by

$$\begin{aligned} u^\infty :=\lim _{N\rightarrow \infty }u^N. \end{aligned}$$

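For concreteness, here is a minimal sketch of this truncation step in a discretized setting, assuming the functionals \(z_j\) are represented as vectors of a Euclidean space; the dimensions and the data below are arbitrary placeholders, not related to the beam problem. The least-norm solution of (10) is obtained from the Gram system for the coefficients \(\xi _j^N\).

```python
import numpy as np

# A minimal sketch of the truncation step in a discretized setting: the
# functionals z_j are represented as vectors in R^m (the dimensions and data
# below are arbitrary placeholders).
rng = np.random.default_rng(0)
m, N = 200, 10
Z = rng.standard_normal((N, m))            # rows play the role of z_1, ..., z_N
c = rng.standard_normal(N)                 # prescribed moments c_1, ..., c_N

# The least-norm solution of <u, z_j> = c_j, j = 1, ..., N, has the form
# u^N = sum_j xi_j z_j, where the coefficients solve the Gram system G xi = c.
G = Z @ Z.T                                # Gram matrix, G_jk = <z_j, z_k>
xi = np.linalg.solve(G, c)
uN = Z.T @ xi

print(np.allclose(Z @ uN, c))              # u^N solves the truncated problem (10)
u_lstsq, *_ = np.linalg.lstsq(Z, c, rcond=None)
print(np.allclose(uN, u_lstsq))            # and matches numpy's minimum-norm solution
```
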
The truncation method, however, provides no bound on the speed of convergence of the solutions \(u^N\) of (10) to the solution \(u^\infty \) of (9), and this speed can be made arbitrarily slow. Let us consider the following example. Let \(e_0, e_1, \ldots \) be an orthonormal basis and choose \(z_0 \ne e_0\) such that \(\langle e_0, z_0 \rangle =1\). Now consider the following moment problem:

$$\begin{aligned} \begin{array}{l} \langle u, e_0 \rangle = 1, \qquad \langle u, \tilde{e}_k \rangle = 0, \quad k \in \mathbb {N}, \end{array} \end{aligned}$$
(11)

where \(\tilde{e}_k=e_k - \langle z_0, e_k \rangle e_0\). The generating family of (11), that is \(e_0, \tilde{e}_1, \tilde{e}_2, \ldots \), is in fact a biorthogonal set to \(z_0, e_1, e_2, \ldots \), and therefore it constitutes a Riesz basis. One easily observes that this system has a unique solution, namely \(u^\infty =z_0\). Now consider a truncation of (11), that is

$$\begin{aligned} \begin{array}{l} \langle u, e_0 \rangle = 1,\qquad \langle u, \tilde{e}_k \rangle = 0, \quad k \in \{1, \ldots , N\}. \end{array} \end{aligned}$$
(12)

Observe that the solution \(u^N\) of (12) satisfies the following:

$$\begin{aligned} u^N&= z_0+\sum _{k=N+1}^{\infty } \Big \langle u^N, \tilde{e}_k \Big \rangle e_k = z_0 - \sum _{k=N+1}^{\infty } \langle z_0, e_k \rangle e_k,\\ \end{aligned}$$

so \(u^N\) converges to \(z_0\), but the speed of convergence can be arbitrarily slow, depending on the choice of \(z_0\).
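
The following small sketch quantifies this effect for one hypothetical choice of \(z_0\) (the coefficient sequence \(\langle z_0,e_k\rangle =k^{-0.6}\) is ours, not taken from any concrete moment problem): by the last display, the error \(\Vert u^N-z_0\Vert \) is simply the tail of a slowly convergent series.

```python
from math import sqrt
from scipy.special import zeta

# Sketch of how slowly u^N may approach z_0 in the example above: by the last
# display, ||u^N - z_0||^2 = sum_{k>N} |<z_0, e_k>|^2. As a hypothetical choice
# take <z_0, e_k> = k**(-0.6), which is square summable; the tail is then the
# Hurwitz zeta value zeta(1.2, N + 1).
for N in (10, 100, 1_000, 10_000, 100_000):
    err = sqrt(zeta(1.2, N + 1))   # ||u^N - z_0|| for this particular z_0
    print(N, err)                  # decays only like ~ N**(-0.1)
```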

In our case, the moment problem (8) is very similar to (11): we also need to find a vector \(z_0\) biorthogonal to \(e_0(t)=t\). We deal, however, with a much more complicated set of generating vectors, which are not biorthogonal to each other. The classical general methods, like truncation or Galerkin approximation, do not use any specific properties of the generating vectors of a moment problem. When the generating vectors are not biorthogonal, the resulting speed of convergence may be even less predictable. Here we propose another approach. The moment problem in question, (8), is generated by special functions. We exploit the fact that the exponents \(\sqrt{\lambda _n}\) have the special asymptotic behavior given by (7). This means that those exponential functions are in fact close to trigonometric functions, at least for large indices. Knowing this, we do not truncate the problem; instead, we find the orthogonal complement subspace and seek an approximation of the optimal solution in this subspace. The speed of convergence of the obtained approximations is significantly better than that of the approximations obtained by general methods.

4 Approximation of a Moment Problem

In this section, we show that the optimal solution of the moment problem (8) can be approximated by the solution, optimal in the sense of (5), of

$$\begin{aligned} \begin{array}{rrl} &{}&{}\displaystyle \int \nolimits _0^T \mathrm{{e}}^{i\sqrt{\lambda _n}t} u(t)\mathrm{{d}}t=0,\quad \left\lfloor \frac{-N-2}{2} \right\rfloor \le n \le \left\lfloor \frac{N+1}{2} \right\rfloor ,\\ &{}&{}\displaystyle \int \nolimits _0^T \mathrm{{e}}^{i\frac{2n+1}{2}\pi t} u(t)\mathrm{{d}}t=0,\quad \left\lfloor \frac{-N-2}{2} \right\rfloor \ge n \hbox { or } n \ge \left\lfloor \frac{N+1}{2} \right\rfloor , \\ &{}&{}\displaystyle \int \nolimits _0^T t\mathrm{{e}}^{i\frac{2n+1}{2}\pi t} u(t)\mathrm{{d}}t=0,\quad \left\lfloor \frac{-N-2}{2} \right\rfloor \ge n \hbox { or } n \ge \left\lfloor \frac{N+1}{2} \right\rfloor , \\ &{}&{}\displaystyle \int \nolimits _0^T u(t)\mathrm{{d}}t=0,\\ &{}&{}\displaystyle \int \nolimits _0^T tu(t)\mathrm{{d}}t=\theta _T, \end{array} \end{aligned}$$
(13)

where \(N\in 2\mathbb {N}\) is large enough.

To do this, we will use the notion of quadratically close Riesz bases and some results from [12], which we briefly recall here.

We say that two families \((\varphi _k)\) and \((\varphi '_k)\) in a separable Hilbert space are quadratically \(\varepsilon \)-close if

$$\begin{aligned} \sum _k\Vert \varphi _k-\varphi '_k\Vert ^2<\varepsilon . \end{aligned}$$

We will use the following two lemmas later; the first one is essentially a theorem from [16].

Lemma 4.1

(cf. [16]) Assume \((\psi _k)\) and \((\psi '_k)\) are Riesz bases biorthogonal to \((\varphi _k)\) and \((\varphi '_k)\) respectively. If \((\varphi _k)\) and \((\varphi '_k)\) are quadratically \(\varepsilon \)-close then \((\psi _k)\) and \((\psi '_k)\) are quadratically \(\varepsilon \)-close.

The next lemma was stated in [12].

Lemma 4.2

Denote by \(\mathcal {L}\) and \(\mathcal {L}_N\) the closures of the linear spans of \((\varphi _k)\) and \((\varphi ^N_k)\), respectively, and by \(\mathcal {L}^\bot \) and \(\mathcal {L}_N^\bot \) the orthogonal complements of \(\mathcal {L}\) and \(\mathcal {L}_N\). Let \((x_k)\) be an orthonormal basis of \(\mathcal {L}^\bot \) and let \(P:\mathcal {L}^\bot \rightarrow \mathcal {L}_N^\bot \) be the orthogonal projection. There exists a positive constant \(C\) such that for any sufficiently small \(\varepsilon >0\) the systems \((x_k)\) and \((Px_k)\) are quadratically \(C\varepsilon \)-close, provided \((\varphi _k)\) and \((\varphi ^N_k)\) are quadratically \(\varepsilon \)-close.

Proof

Given \(x \in \mathcal {L}^\bot \) we write \(x = Px + Qx\), where \(Q : \mathcal {L}^\bot \rightarrow \mathcal {L}_N\) is the projection. Then, because \((\varphi ^N_n)\) is a Riesz basis, there exists a constant \(C\) such that for any \(y \in \mathcal {L}_N\) the inequality

$$\begin{aligned} \Vert y\Vert ^2 \le C \sum _n \Big |\langle y,\varphi ^N_n\rangle \Big |^2 \end{aligned}$$

holds [17], and because \((\varphi ^N_n)\) is quadratically \(\varepsilon \)-close to \((\varphi _n)\), for sufficiently small \(\varepsilon \) the constant \(C\) can be chosen universally for all \(N \ge N_0\). Then we have

$$\begin{aligned} \sum _k\Vert x_k-Px_k\Vert ^2&= \sum _k\Vert Qx_k\Vert ^2 \le C \sum _{k,n} |\langle Qx_k,\varphi ^N_n\rangle |^2\\&= C \sum _{k,n} \Big |\langle x_k,\varphi ^N_n\rangle \Big |^2 = C \sum _{k,n} \Big |\langle x_k,\varphi ^N_n-\varphi _n\rangle \Big |^2. \end{aligned}$$

Because \((x_k)\) constitutes an orthonormal set, the last expression is less than or equal to

$$\begin{aligned} C\sum _n\Vert \varphi _n^N-\varphi _n\Vert ^2. \end{aligned}$$

It follows that

$$\begin{aligned} \sum _k\Big \Vert x_k-Px_k\Big \Vert ^2 \le C\sum _n\Big \Vert \varphi _n^N-\varphi _n\Big \Vert ^2<C\varepsilon . \end{aligned}$$

The proof is complete. \(\square \)

Given a fixed even integer \(N\), define \((\varphi _k)\) to be the sequence of functions generating the moment problem (8), i.e., \(\mathrm{{e}}^{i\sqrt{\lambda _{k}}t} (|k| \le N)\), \(\mathrm{{e}}^{i\sqrt{\lambda _{2k-1}}t} (|k|> N)\), \(\frac{\mathrm{{e}}^{i\sqrt{\lambda _{2k}}t}-\mathrm{{e}}^{i\sqrt{\lambda _{2k-1}}t}}{\sqrt{\lambda _{2k}}t-\sqrt{\lambda _{2k-1}}t}\) \((|k|> N)\), \(1\) and \(t\), with \(\varphi _0(t)=t\), and define \((\varphi ^N_k)\) to be the sequence of functions generating the moment problem (13), i.e., \(\mathrm{{e}}^{i\sqrt{\lambda _k}t}\) \((|k| \le N)\), \(\mathrm{{e}}^{i\frac{2k+1}{2}\pi t}\) \((|k|>N)\), \(t\mathrm{{e}}^{i\frac{2k+1}{2}\pi t}\) \((|k|>N)\), \(1\) and \(t\), with \(\varphi _0^N(t)=t\). The family of sequences \((\varphi ^N_k)\) approximates the initial one, \((\varphi _k)\), in the sense of the following theorem.

Theorem 4.1

For any \(\varepsilon >0\) there exists \(N_0\) such that for \(N\ge N_0\) the families \((\varphi _k)\) and \((\varphi ^N_k)\) are quadratically \(\varepsilon \)-close.

Proof

It suffices to prove that the two series,

$$\begin{aligned} \displaystyle \sum _{|n|>N}\left\| \mathrm{{e}}^{i\sqrt{\lambda _{2n-1}}t}-\mathrm{{e}}^{i\frac{2n-1}{2}\pi t}\right\| ^2 \end{aligned}$$

and

$$\begin{aligned} \displaystyle \sum _{|n|>N}\left\| \frac{\mathrm{{e}}^{i\sqrt{\lambda _{2n}}t}-\mathrm{{e}}^{i\sqrt{\lambda _{2n-1}}t}}{\sqrt{\lambda _{2n}}t-\sqrt{\lambda _{2n-1}}t}-t\mathrm{{e}}^{i\frac{2n+1}{2}\pi t}\right\| ^2, \end{aligned}$$

can be made as small as needed for sufficiently large \(N\). We know that \(\varepsilon _{2n-1}=\frac{2n-1}{2}\pi - \sqrt{\lambda _{2n-1}} = \mathcal {O}(\frac{1}{n})\) by (7), and therefore we can rewrite the \(n\)-th term of the first series as

$$\begin{aligned} \left\| \mathrm{{e}}^{i\sqrt{\lambda _{2n-1}}t}-\mathrm{{e}}^{i\frac{2n-1}{2}\pi t}\right\| ^2&= \left\| \mathrm{{e}}^{i\left( \frac{2n-1}{2}\pi -\varepsilon _{2n-1}\right) t}-\mathrm{{e}}^{i\frac{2n-1}{2}\pi t}\right\| ^2\\&= 2\int \nolimits _0^T\mathrm{{d}}t - 2\int \nolimits _0^T\cos (\varepsilon _{2n-1}t)\mathrm{{d}}t = 2T-\frac{2\sin (\varepsilon _{2n-1}T)}{\varepsilon _{2n-1}} =\mathcal {O}\left( \frac{1}{n^2}\right) , \end{aligned}$$

which means that for an arbitrary \(\varepsilon >0\) we can find \(N_1\) so large that for any \(N>N_1\) the following inequality holds:

$$\begin{aligned} \displaystyle \sum _{|n|>N}\left\| \mathrm{{e}}^{i\sqrt{\lambda _{2n-1}}t}-\mathrm{{e}}^{i\frac{2n-1}{2}\pi t}\right\| ^2<\frac{\varepsilon }{2}. \end{aligned}$$

Using the same method, one can see that there exists \(N_2\) so large that for any \(N>N_2\),

$$\begin{aligned} \displaystyle \sum _{|n|>N}\left\| \frac{\mathrm{{e}}^{i\sqrt{\lambda _{2n}}t}-\mathrm{{e}}^{i\sqrt{\lambda _{2n-1}}t}}{\sqrt{\lambda _{2n}}t-\sqrt{\lambda _{2n-1}}t}-t\mathrm{{e}}^{i\frac{2n+1}{2}\pi t}\right\| ^2<\frac{\varepsilon }{2}. \end{aligned}$$

Hence we obtain

$$\begin{aligned} \sum _k\Big \Vert \varphi _k-\varphi ^N_k\Big \Vert ^2<\varepsilon \end{aligned}$$

for any \(N>\max \{N_1,N_2\}\), which completes the proof. \(\square \)
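
As a side remark, the \(\mathcal {O}(\frac{1}{n^2})\) estimate used in the proof can be checked numerically; the sketch below evaluates \(\Vert \mathrm{e}^{i(a-\varepsilon )t}-\mathrm{e}^{iat}\Vert ^2\) on \([0,T]\) for a few values of \(\varepsilon \) and compares it with the closed form \(2T-2\sin (\varepsilon T)/\varepsilon \approx \varepsilon ^2T^3/3\). The values of \(T\), \(a\) and \(\varepsilon \) are arbitrary illustrative choices.

```python
import numpy as np

# A quick numerical check (not part of the proof) of the estimate used above:
# for a frequency shift eps, ||e^{i(a - eps)t} - e^{iat}||^2 over [0, T] equals
# 2T - 2 sin(eps T)/eps, which behaves like (T^3 / 3) eps^2; with eps = eps_n
# of order 1/n this gives the O(1/n^2) terms of the series. T, a, eps arbitrary.
T = 8.0
t = np.linspace(0.0, T, 100001)
a = 11 * np.pi / 2                            # a hypothetical carrier frequency
for eps in (1e-1, 1e-2, 1e-3):
    diff = np.exp(1j * (a - eps) * t) - np.exp(1j * a * t)
    sq_norm = np.trapz(np.abs(diff) ** 2, t)  # ||e^{i(a-eps)t} - e^{iat}||^2
    closed_form = 2 * T - 2 * np.sin(eps * T) / eps
    print(eps, sq_norm, closed_form, (T ** 3 / 3) * eps ** 2)
```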

Lemma 4.1 implies that not only the families \((\varphi _k)\) and \((\varphi ^N_k)\) are quadratically \(\varepsilon \)-close, but their biorthogonal families \((\psi _k)\) and \((\psi ^N_k)\) are also quadratically \(\varepsilon \)-close. Now we are ready to state the following

Theorem 4.2

Let

$$\begin{aligned} u_N:=\sum _k\Big \langle u_N,\varphi _k^N \Big \rangle \psi _k^N, \qquad u_0:=\sum _k \Big \langle u_0,\varphi _k \Big \rangle \psi _k, \end{aligned}$$

where \(\varphi _k^N\), \(\psi _k^N\), \(\varphi _k\) and \(\psi _k\) are defined above. Then

$$\begin{aligned} \lim _{N\rightarrow \infty }\Vert u_N-u_0\Vert =0. \end{aligned}$$

As a consequence, the sequence of optimal solutions of the moment problems (13) converges to the optimal solution of the moment problem (6).

Proof

Define \(\left( \psi _{\xi }\right) :=\left( \psi _{k}\right) \cup \left( x_{k}\right) \) and \(\left( \psi _{\xi }^N\right) :=\left( \psi _{k}^N\right) \cup \left( Px_{k}\right) \). Because \(\left( \psi _{\xi }\right) \) and \(\left( \psi _{\xi }^N\right) \) are Riesz bases in \(H=L^2[0,T]\) we may write

$$\begin{aligned} u_N=\sum _{\xi }\Big \langle u_N,\varphi _{\xi }^N\Big \rangle \psi _{\xi }^N,\qquad u_0=\sum _{\xi }\Big \langle u_0,\varphi _{\xi }\Big \rangle \psi _{\xi }. \end{aligned}$$

Let \(\varepsilon >0\) be given. By Theorem 4.1, there exists \(N_1\) such that for any \(N>N_1\) the families \((\varphi _k)\) and \((\varphi ^N_k)\) are quadratically (\(\frac{\varepsilon ^2}{2\theta _T^2}\))-close; from Lemma 4.1 we see that the families \((\psi _k)\) and \((\psi ^N_k)\) are also quadratically (\(\frac{\varepsilon ^2}{2\theta _T^2}\))-close. Lemma 4.2 implies that there exists \(N_2\) such that for any \(N>N_2\) the families \((x_k)\) and \((Px_k)\) are quadratically (\(\frac{\varepsilon ^2}{2\theta _T^2}\))-close, too; therefore, we see that the inequality

$$\begin{aligned} \sum _{n}\Big \Vert \psi _n^N-\psi _n\Big \Vert ^2<\frac{\varepsilon ^2}{\theta _T^2} \end{aligned}$$

holds for \(N>N_0=\max \{N_1,N_2\}\), which in particular means that

$$\begin{aligned} \Big \Vert \psi _0^N-\psi _0\Big \Vert <\frac{\varepsilon }{|\theta _T|}. \end{aligned}$$

Then for \(N>N_0\) we have

$$\begin{aligned} \Vert u_N-u_0\Vert&=\left\| \sum _n\langle u_N,\varphi _n^N\rangle \psi _n^N-\sum _n\langle u_0,\varphi _n\rangle \psi _n\right\| \\&=\left\| \theta _T(\psi ^N_0-\psi _0)\right\| <\varepsilon .\\ \end{aligned}$$

Therefore \((u_N)\) converges to \(u_0\). \(\square \)

We have shown that \(\lim _{N\rightarrow \infty }\Vert u_N-u_0\Vert =0\). Using the following theorem one can see that even a stronger claim

$$\begin{aligned}\lim _{N\rightarrow \infty }N\Vert u_N-u_0\Vert =0\end{aligned}$$

holds. To this end, observe that the proof of Theorem 4.1 shows not only that the families \((\varphi _k)\) and \((\varphi ^N_k)\) are quadratically \(\varepsilon \)-close, but also that their differences admit the following representation:

$$\begin{aligned} \varphi _k(x)-\varphi ^N_k(x)=\frac{1}{k}\varphi _k(x)\left( g_k(x)+\frac{1}{k}h_k(x)\right) , \end{aligned}$$

where \((g_k)\) and \((h_k)\) are uniformly bounded families of functions. Now we can proceed with

Theorem 4.3

Let \(u_N\), \(u_0\), \(\varphi _k\), \(\psi _k\), \(\varphi ^N_k\) and \(\psi ^N_k\) be as in Theorem 4.2. Then we have not only \(\lim _{N\rightarrow \infty }\Vert u_N-u_0\Vert =0\) as before, but even

$$\begin{aligned} \lim _{N\rightarrow \infty }N\Vert u_N-u_0\Vert =0. \end{aligned}$$

Proof

For simplicity of the proof, we assume that the family \((g_k)\) consists of a single function, \(g_k(x)=g(x)\). From the previous considerations we know that

$$\begin{aligned} u_N=u_0+\sum _{k=N+1}^\infty \Big \langle u_N,\varphi _k\Big \rangle \psi _k + \sum _m\Big \langle u_N, Px_m\Big \rangle x_m, \end{aligned}$$

because \(u_N\) is a solution of (13), and

$$\begin{aligned} u_N=\sum _{k=0}^N\alpha _k\varphi _k + \sum _{k=N+1}^\infty \alpha ^N_k \varphi ^N_k \end{aligned}$$

for some \(\alpha _k\), \(\alpha ^N_k\), because we assumed that \(u_N\) belongs to \(\hbox {cl}\left( {\hbox {Lin}\{(\varphi ^N_k)\}}\right) \). Thus we can write

$$\begin{aligned} N(u_N-u_0)=N\sum _{k=N+1}^\infty \Big \langle u_N,\varphi _k-\varphi ^N_k\Big \rangle \psi _k + N\sum _{k=N+1}^\infty \sum _m\Big \langle \alpha ^N_k(\varphi ^N_k-\varphi _k), Px_m\Big \rangle x_m, \end{aligned}$$

where again we used the fact that \(u_N\) is a solution of (13). We will estimate the first sum only; the second one can be handled in a similar way. We have

$$\begin{aligned} N^2\left\| \sum _{k=N+1}^\infty \langle u_N,\varphi _k-\varphi ^N_k\rangle \psi _k\right\| ^2&\le M_1\sum _{k=N+1}^\infty \frac{N^2}{k^2}\left| \langle u_N,g\varphi _k\rangle \right| ^2\\&+ M_1\sum _{k=N+1}^\infty \frac{N^2}{k^2}\left| \langle u_N,\frac{1}{k} h_k\varphi _k\rangle \right| ^2. \end{aligned}$$

Now we observe that both summands tend to zero (when \(N\rightarrow \infty \)), namely

$$\begin{aligned} \sum _{k=N+1}^\infty \frac{N^2}{k^2}\Big |\langle u_N,g\varphi _k\rangle \Big |^2&\le \sum _{k=N+1}^\infty \Big |\langle u_N,g\varphi _k\rangle \Big |^2\\&\le \sum _{k=N+1}^\infty \Big |\langle u_0\overline{g},\varphi _k\rangle \Big |^2 + \sum _{k=N+1}^\infty \Big |\langle (u_N-u_0)\overline{g},\varphi _k\rangle \Big |^2\\&\le \sum _{k=N+1}^\infty \Big |\langle u_0\overline{g},\varphi _k\rangle \Big |^2 +\Big \Vert (u_N-u_0)\overline{g}\Big \Vert ^2\quad \rightarrow 0 \end{aligned}$$

because the tail of a convergent series (of the coefficients of \(u_0\overline{g}\) in the basis \((\psi _k)\)) tends to zero and \(u_N\rightarrow u_0\), and

$$\begin{aligned} \sum _{k=N+1}^\infty \frac{N^2}{k^2}\left| \langle u_N,\frac{1}{k} h_k\varphi _k\rangle \right| ^2&\le \sum _{k=N+1}^\infty \frac{1}{k^2}\left| \langle u_N,h_k\varphi _k\rangle \right| ^2\\&\le \Vert u_N\Vert \sup \Vert h_k\Vert \sup \Vert \varphi _k\Vert \sum _{k=N+1}^\infty \frac{1}{k^2}\quad \rightarrow 0 \end{aligned}$$

by similar arguments. Summarizing, we obtain

$$\begin{aligned} \lim _{N\rightarrow \infty }N\Vert u_N-u_0\Vert =0. \end{aligned}$$

\(\square \)

We have shown not only that the sequence of solutions \(u_N\) of the approximate moment problems (13) converges to the solution \(u_0\) of the original moment problem (6), but also that the speed of convergence is considerably faster than what general methods can provide without taking into account the essential, individual properties of the families appearing in the moment problem in question.

5 Numerical Solution of an Approximated Moment Problem

Having established that the original moment problem (6) can be approximated by another moment problem, (13), we will now find an equivalent formulation of the latter in the form of a finite number of equations. This will ultimately enable the numerical construction of the optimal control.

First, let us observe that the set

$$\begin{aligned}S_N:=\left\{ 1, t, \mathrm{{e}}^{i\sqrt{\lambda _n}t} (|n|\le N), \mathrm{{e}}^{i\frac{2n+1}{2}\pi t} (|n|> N), t\mathrm{{e}}^{i\frac{2n+1}{2}\pi t} (|n|> N)\right\} \end{aligned}$$

is an \(\mathcal {L}\)-basis of \(L^2[0,T]=L^2[0,2M]\), that is, it is a Riesz basis in the closure of its linear span \(V=\hbox {cl}\left( \hbox {Lin} S_N\right) \). Any control \(u\in L^2[0,2M]\) can be written in the unique form \(u = u_1 + u_2\), where \(u_1 \in V\) and \(u_2 \in V^\perp \). Since \(\Vert u\Vert ^2 = \Vert u_1\Vert ^2 + \Vert u_2\Vert ^2\), and since our moment problem (13) is generated by elements of \(S_N\), a solution \(u\) has the least norm (i.e., it fulfills (5)) if and only if \(\Vert u_2\Vert = 0\), that is, if and only if \(u \in V\). This allows us to express the (unique) optimal solution \(u\) as

$$\begin{aligned} u(t) = \sum _{|n|\le N}\alpha _n \mathrm{{e}}^{i\sqrt{\lambda _n}t} + \sum _{|n|> N}\beta _n \mathrm{{e}}^{i\frac{2n+1}{2}\pi t} + \sum _{|n|> N}\gamma _n t\mathrm{{e}}^{i\frac{2n+1}{2}\pi t} + A + Bt, \end{aligned}$$

where \(\alpha _n\), \(\beta _n\), \(\gamma _n\), \(A\), \(B \in \mathbb {C}\) are (unknown) constants. Further, we rewrite

$$\begin{aligned} \sum _{|n|> N}\beta _n \mathrm{{e}}^{i\frac{2n+1}{2}\pi t} = \mathrm{{e}}^{i\frac{\pi }{2}t}\zeta (t), \qquad \sum _{|n|> N}\gamma _n t\mathrm{{e}}^{i\frac{2n+1}{2}\pi t} = \mathrm{{e}}^{i\frac{\pi }{2}t}t\eta (t), \end{aligned}$$

where \(\zeta \), \(\eta \in L^2[0,2M]\) are (unknown) \(2\)-periodic functions. Thus

$$\begin{aligned} u(t) = \sum _{|n|\le N}\alpha _n \mathrm{{e}}^{i\sqrt{\lambda _n}t} + \mathrm{{e}}^{i\frac{\pi }{2}t}\zeta (t) + \mathrm{{e}}^{i\frac{\pi }{2}t}t\eta (t) + A + Bt. \end{aligned}$$
(14)
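
To fix ideas, the following sketch assembles a control of the form (14) on a grid; all coefficients, the placeholder frequencies standing in for \(\sqrt{\lambda _n}\), and the particular \(2\)-periodic choices of \(\zeta \) and \(\eta \) are hypothetical and serve only to show the structure of the ansatz.

```python
import numpy as np

# Illustrative sketch of the ansatz (14) on [0, 2M]: a finite exponential part,
# two 2-periodic modulations zeta and eta carried by e^{i pi t / 2}, and an
# affine part A + B t. All coefficients, the stand-in frequencies replacing
# sqrt(lambda_n), and the particular 2-periodic zeta, eta are hypothetical.
M, N = 4, 2
T = 2 * M
t = np.linspace(0.0, T, 4001)

sqrt_lam = np.array([-2.4, -1.6, 0.9, 1.7, 2.6])    # placeholders for sqrt(lambda_n), |n| <= N
alpha = np.array([0.10, -0.20, 0.30, 0.05, -0.10])  # placeholders for alpha_n
A, B = 0.2, -0.03

def zeta(s):                     # built from e^{i n pi s} with |n| > N, cf. (15)
    return 0.10 * np.exp(3j * np.pi * s)

def eta(s):
    return -0.05 * np.exp(-4j * np.pi * s)

u = (np.exp(1j * np.outer(t, sqrt_lam)) @ alpha     # sum_n alpha_n e^{i sqrt(lambda_n) t}
     + np.exp(1j * np.pi * t / 2) * zeta(t)         # e^{i pi t / 2} zeta(t)
     + np.exp(1j * np.pi * t / 2) * t * eta(t)      # e^{i pi t / 2} t eta(t)
     + A + B * t)

print(np.trapz(np.abs(u) ** 2, t))  # cost (5) for these placeholders (|u|^2, since u is complex here)
```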

Directly from the definitions of \(\zeta , \eta \) we see that

$$\begin{aligned} \begin{array}{rrl} &{}&{}\displaystyle \int \nolimits _0^2 \zeta (t)\mathrm{{e}}^{in\pi t}\mathrm{{d}}t=0, \quad |n|\le N,\\ &{}&{}\displaystyle \int \nolimits _0^2 \eta (t)\mathrm{{e}}^{in\pi t}\mathrm{{d}}t=0, \quad |n|\le N. \end{array} \end{aligned}$$
(15)

Now we want to change our time interval from \([0,2M]\) to \([0,2]\), where selected summands of (14) are either periodic or close to periodic. To this end we define

$$\begin{aligned} \widehat{f}(s) := \sum _{k=0}^{M-1}f(s+2k), \quad s\in [0,2] \end{aligned}$$

for any \(f\in L^2[0,2M]\). In long formulas we will write \( [f(s)]^{ \wedge }\) instead of \(\widehat{f}(s)\). Substituting (14) into the second line of (13) and using (15), we obtain

$$\begin{aligned} 0&= \int \nolimits _0^{2M}\mathrm{{e}}^{i\frac{2n+1}{2}\pi t}u(t)\mathrm{{d}}t = \sum _{k=0}^{M-1}\int \nolimits _{2k}^{2k+2}\mathrm{{e}}^{i\frac{2n+1}{2}\pi t}u(t)\mathrm{{d}}t\\&= \int \nolimits _0^2\mathrm{{e}}^{in\pi s}\left[ \mathrm{{e}}^{i\frac{\pi }{2}s}M\left( \frac{1}{M}\sum _{|m|\le N}\alpha _m\left[ {(-1)^k\mathrm{{e}}^{i\sqrt{\lambda _m}s}}\right] ^{ \wedge }\right. \right. \\&+\mathrm{{e}}^{i\frac{\pi }{2}s}\Bigl (\zeta (s)+s\eta (s)+(M-1)\eta (s)\Bigr )+\widehat{(-1)^k}\Bigl (A+Bs+(M-1)B\Bigr )\Biggr )\Biggr ]\mathrm{{d}}s \end{aligned}$$

for all \(|n|>N\); therefore, we can rewrite the expression in the square brackets as \(\displaystyle \sum _{|p|\le N}a_p\mathrm{{e}}^{ip\pi s}\) with some unknown constants \(a_p\in \mathbb {C}\), \(|p|\le N\). Proceeding in the same way with the third line of (13), we express a similar term as \(\displaystyle \sum _{|p|\le N}b_p\mathrm{{e}}^{ip\pi s}\) with some unknown constants \(b_p\in \mathbb {C}\), \(|p|\le N\). Using these, we can derive formulas for \(\zeta \) and \(\eta \), namely

$$\begin{aligned} \eta (s)&= \displaystyle \frac{3}{M(M^2-1)}\left( -\sum \limits _{|m|\le N}\alpha _m\left[ {(-1)^ks\mathrm{{e}}^{i\sqrt{\lambda _m}s}}\right] ^{ \wedge }\mathrm{{e}}^{\frac{-i\pi s}{2}}\right. \\&+(M+s-1)\sum _{|m|\le N}\alpha _m\left[ {(-1)^k\mathrm{{e}}^{i\sqrt{\lambda _m}s}}\right] ^{ \wedge }\mathrm{{e}}^{\frac{-i\pi s}{2}}\\&+A\mathrm{{e}}^{\frac{-i\pi }{2}}\left( -(-1)^{M-1}\left( M-\frac{1}{2}\right) +\frac{1}{2}+\frac{1-(-1)^M}{2}(M-1)\right) \\&+B\mathrm{{e}}^{\frac{-i\pi s}{2}}\left( -\frac{1-(-1)^M}{2}-2M(M-1)s-4(-M+1)\frac{M}{2}\right. \\&\left. +(M+s-1)\left( \frac{1-(-1)^M}{2}s+(-1)^{M-1}\left( M-\frac{1}{2}\right) -\frac{1}{2}\right) \right) \\&\left. +\sum _{|p|\le N}b_p\mathrm{{e}}^{ip\pi s}\mathrm{{e}}^{-i\pi s}-\sum _{|p|\le N}a_p(M+s-1)\mathrm{{e}}^{ip\pi s}\mathrm{{e}}^{-is\pi }\right) , \end{aligned}$$

and

$$\begin{aligned} \zeta (s)&= -\frac{1}{M}\sum _{|m|\le N}\left[ {(-1)^k\mathrm{{e}}^{i\sqrt{\lambda _m}s}}\right] ^{ \wedge }\mathrm{{e}}^\frac{-is\pi }{2}-\frac{1}{M}A\frac{1-(-1)^M}{2}\mathrm{{e}}^\frac{-is\pi }{2}\\&-\frac{1}{M}B\mathrm{{e}}^\frac{-is\pi }{2}\left( \frac{1-(-1)^M}{2}s+(-1)^{M-1}\left( M-\frac{1}{2}\right) -\frac{1}{2}\right) \\&\quad +\frac{1}{M}\sum _{|p|\le N}a_p\mathrm{{e}}^{ip\pi s}\mathrm{{e}}^{-is\pi }-(M+s-1)\eta (s). \end{aligned}$$
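
Before assembling the final system, we illustrate the folding operator \(f\mapsto \widehat{f}\) numerically. The sketch below (with arbitrary choices of \(M\), \(f\) and a \(2\)-periodic \(g\)) checks the identity \(\int _0^{2M}g(t)f(t)\mathrm{d}t=\int _0^2 g(s)\widehat{f}(s)\mathrm{d}s\); the derivation above uses the analogous identity with the factor \((-1)^k\) placed under the hat.

```python
import numpy as np

# A sketch of the folding operator f -> f_hat on a uniform grid:
# f_hat(s) = sum_{k=0}^{M-1} f(s + 2k), s in [0, 2). For a 2-periodic g one has
#   int_0^{2M} g(t) f(t) dt = int_0^2 g(s) f_hat(s) ds.
# The particular f, g and M below are arbitrary illustrative choices.
M = 4
n_per_period = 4000
t = np.linspace(0.0, 2 * M, M * n_per_period, endpoint=False)   # grid of [0, 2M)
s = t[:n_per_period]                                            # grid of [0, 2)
dt = t[1] - t[0]

def fold(values):
    """Fold samples of f on [0, 2M) into samples of f_hat on [0, 2)."""
    return values.reshape(M, n_per_period).sum(axis=0)

f = np.exp(1j * np.sqrt(2.0) * t) + t                  # some function on [0, 2M)
lhs = np.sum(np.exp(3j * np.pi * t) * f) * dt          # Riemann sum of int_0^{2M} g f dt
rhs = np.sum(np.exp(3j * np.pi * s) * fold(f)) * dt    # Riemann sum of int_0^2 g f_hat ds
print(np.allclose(lhs, rhs))                           # True: the two sums coincide
```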

Now, using all of the above and the remaining (first, fourth and fifth) lines of (13), we finally obtain \(6N+2\) linear equations of the following form:

$$\begin{aligned}&\displaystyle \int \nolimits _0^2\mathrm{{e}}^{-in\pi s}\left( -\sum _{|m|\le N}\alpha _m\left[ {(-1)^ks\mathrm{{e}}^{i\sqrt{\lambda _m}s}}\right] ^{ \wedge }\mathrm{{e}}^{\frac{-i\pi s}{2}}\right. \\&\qquad \qquad \qquad \quad +\sum _{|m|\le N}\alpha _m\left[ {(-1)^k\mathrm{{e}}^{i\sqrt{\lambda _m}s}}\right] ^{ \wedge }\mathrm{{e}}^{\frac{-i\pi s}{2}}(M+s-1)\\&\qquad \qquad \qquad \quad +A\mathrm{{e}}^{\frac{-i\pi }{2}}\left( -(-1)^{M-1}\left( M-\frac{1}{2}\right) +\frac{1}{2}+\frac{1-(-1)^M}{2}(M-1))\right) \\&\qquad \qquad \qquad \quad +B\mathrm{{e}}^{\frac{-i\pi s}{2}}\left( -\frac{1-(-1)^M}{2}-2M(M-1)s-4(-M+1)\frac{M}{2}\right. \\&\qquad \qquad \qquad \quad \qquad \qquad \qquad \left. +(M\!+\!s\!-\!1)\left( \frac{1\!-\!(-1)^M}{2}s\!+\!(-1)^{M-1}\left( M\!-\!\frac{1}{2}\right) \!-\!\frac{1}{2}\right) \right) \\&\qquad \qquad \qquad \quad \left. +\sum _{|p|\le N}b_p\mathrm{{e}}^{ip\pi s}\mathrm{{e}}^{-i\pi s}\right) \mathrm{{d}}s=0, \quad |n|\le N,\\ \end{aligned}$$
$$\begin{aligned}&\displaystyle \int \nolimits _0^2\left( -\frac{1}{M}\sum _{|m|\le N}\left[ {(-1)^k\mathrm{{e}}^{i\sqrt{\lambda _m}s}}\right] ^{ \wedge }\mathrm{{e}}^\frac{-is\pi }{2} -\frac{1}{M}A\frac{1-(-1)^M}{2}\mathrm{{e}}^\frac{-is\pi }{2}\right. \\&-\frac{1}{M}B\mathrm{{e}}^\frac{-is\pi }{2}\left( \frac{1-(-1)^M}{2}s+(-1)^{M-1}\left( M-\frac{1}{2}\right) -\frac{1}{2}\right) \\&\left. -(M+s-1)\eta (s)+\frac{1}{M}\sum _{|p|\le N}a_p\mathrm{{e}}^{ip\pi s}\mathrm{{e}}^{-is\pi }\right) \mathrm{{d}}s=0, \quad |n|\le N, \end{aligned}$$
$$\begin{aligned}&\displaystyle \sum _{|m|\le N}\alpha _m\int \nolimits _0^{2M}\mathrm{{e}}^{i(\sqrt{\lambda _m}+\sqrt{\lambda _n})t}\mathrm{{d}}t + \int \nolimits _0^{2M}\mathrm{{e}}^{i\frac{\pi }{2}t}\zeta (t)\mathrm{{e}}^{i\sqrt{\lambda _n}t}\mathrm{{d}}t\\&+ \int \nolimits _0^{2M}\mathrm{{e}}^{i\frac{\pi }{2}t}t\eta (t)\mathrm{{e}}^{i\sqrt{\lambda _n}t}\mathrm{{d}}t + A \int \nolimits _0^{2M}\mathrm{{e}}^{i\sqrt{\lambda _n}t}\mathrm{{d}}t + B \int \nolimits _0^{2M}t\mathrm{{e}}^{i\sqrt{\lambda _n}t}\mathrm{{d}}t = 0, \quad |n|\le N, \end{aligned}$$
$$\begin{aligned}&\displaystyle \sum _{|m|\le N}\alpha _m\int \nolimits _0^{2M}\mathrm{{e}}^{i\sqrt{\lambda _m}t}\mathrm{{d}}t + \int \nolimits _0^{2M}\mathrm{{e}}^{i\frac{\pi }{2}t}\zeta (t)\mathrm{{d}}t + \int \nolimits _0^{2M}\mathrm{{e}}^{i\frac{\pi }{2}t}t\eta (t)\mathrm{{d}}t +A2M + B2M^2\\&=0, \end{aligned}$$

and

$$\begin{aligned}&\displaystyle \sum _{|m|\le N}\alpha _m\int \nolimits _0^{2M}t\mathrm{{e}}^{i\sqrt{\lambda _m}t}\mathrm{{d}}t + \int \nolimits _0^{2M}\mathrm{{e}}^{i\frac{\pi }{2}t}t\zeta (t)\mathrm{{d}}t + \int \nolimits _0^{2M}\mathrm{{e}}^{i\frac{\pi }{2}t}t^2\eta (t)\mathrm{{d}}t\\&\qquad +A2M^2 + B\frac{8}{3}M^3=\theta _T. \end{aligned}$$

After a careful investigation (which we do not present here due to the sheer complexity of its form in the general case), one can eliminate \(\zeta \) and \(\eta \) from the above equations and in the end obtain a Cramer system of \(6N+2\) linear equations in exactly \(6N+2\) unknowns, namely \(\alpha _m \; (|m|\le N)\), \(a_p \; (|p|\le N)\), \(b_p \; (|p|\le N)\), \(A\) and \(B\).

In this way, we have shown that solving the approximate optimal moment problem (13) reduces to a finite system of linear equations, which in turn can be solved numerically. Note that the entries of the Cramer matrix of this system depend on the first \(N\) eigenvalues \(\lambda _n\), which also have to be found numerically; using the analytic formulas from [2], this can be done with any required level of precision. Thus we have reduced the infinite-dimensional optimal control problem (1)–(5) to solving a finite system of linear equations.

Remark 5.1 One should notice that the method presented herein uses only one approximation step—replacing the moment problem (8) by a close one (13)—and the latter can be solved without any further discretization errors.

6 Conclusions

In this paper, we constructed the optimal control for a slowly rotating Timoshenko beam. Based on the specific character of the moment problem corresponding to our optimal control problem, we presented a new method of approximating the original problem by another, special moment problem. We also showed that our approximation converges to the optimal solution essentially faster than the one obtained by the classical truncation method. Finally, using special periodicity properties of the family of functions appearing in the approximating moment problem, we presented a method of reducing the infinite system of equations to an equivalent Cramer system of finitely many equations. The whole process of constructing the optimal control function uses only one approximation step, namely replacing the moment problem in question by a close one, and the latter can be solved without any further discretization errors.

The methods introduced in the present paper can also be used in the analysis of other control problems for models whose spectral functions are close to periodic, e.g., vibrating strings, beams, and plates.