1 Introduction

To analyze the vibration behavior of a system completely, one has to consider all its components individually. For large-scale systems, such a detailed analysis is often not feasible; hence, all system components are combined into a single scalar quantity, e.g., the Euclidean norm or any other norm of the state. This simplification is a rough measure of the vibration behavior of the system and therefore does not show its exact behavior. In this paper, we derive bounds on the norm of the solution of linear time-periodic systems. We investigate various norms, and with the respective bounds on the solution, the vibration behavior of the system and its transient analysis can be supported and, e.g., stability and robustness can be analyzed.

Linear time-invariant systems arise in many fields of application, e.g., via linearization of vibrational systems [33], and have been an active area of research. Their solution is defined via the matrix exponential, and its numerical evaluation can be done by methods for ordinary differential equations, e.g., Runge–Kutta methods [12], or by computing the matrix exponential [13, 21]. Two-sided bounds for the solution of linear time-invariant systems have been investigated in a series of papers [17, 18]. A time-varying system in general does not possess a closed-form solution (unless the system matrix commutes with itself at any two times). Hence, the theory derived in [17, 18] cannot easily be extended to a general linear time-varying system with an infinite time horizon. In this paper, we investigate linear time-periodic systems and generalize the theory of bounds to their solutions while using the solution structure given by Floquet's theory [8]. In general, Floquet's normal form is non-constructive; hence an approximation by numerical methods is needed, e.g., as in [30, 31]. As long as the approximation is not exact, it involves an error, and the bounds on the solution of the approximated system may then not be valid w.r.t. the solution of the original linear time-periodic system. In [29], the stability of a linear time-periodic system is analyzed by an approximation approach with quadratic polynomials. We generalize this idea in three different ways.

Firstly, we use trigonometric splines [27, 28], which can be seen as a natural choice for time-periodic systems, since they mimic the time-periodicity of the system. Here, we show bounds on the solution for quadratic trigonometric splines. In principle, bounds can be derived for higher orders as well, as long as the method converges, see e.g., [25]; however, a spline approximation of order 4 or larger is divergent [20].

Secondly, we do not limit ourselves to quadratic polynomials but use a general framework such that the polynomial approximation can be performed with any desired degree by Chebyshev projections. In [30, 31], numerical methods based on Chebyshev projection have already been considered to solve linear time-periodic systems. We generalize the integration [30] and differentiation [31] schemes within a general framework. Here, we do not approximate the solution but the time-varying system matrix by Chebyshev polynomials [5]. We use results from approximation theory [36] in order to obtain bounds on the approximated system. The solution of the approximated system is then entire and has an infinite series representation. Hence, it can be truncated, and a bound on the truncation error is derived. Within this approximation framework, we show that the truncated solution of the approximated system converges to the original solution of the linear time-periodic system, which is an important extension of the work in [30, 31].

Thirdly, and most importantly, the trigonometric splines and the Chebyshev approximation framework yield rigorous bounds on the solution of a linear time-periodic system, i.e., we do not only approximate the solution by the aforementioned methods but obtain bounds on the solution as well. These bounds essentially behave like the approximated solutions, i.e., they converge to the original solution at the same rate as the approximated solution.

Transient analysis of the original linear time-periodic system can be supported by stability and robustness analysis of the aforementioned bounds due to their rigorousness. The ideas and bounds based on trigonometric splines and Chebyshev projections can also be applied to general time-varying systems over a finite time interval.

The paper is structured as follows. In Sect. 3, rigorous bounds are obtained from the structure of the solution. In Sect. 3.1 we summarize results for linear time-invariant systems [18, 19]. Two-sided bounds on the solution derived with the differential calculus of norms, e.g., in [15–17], are shown. In Sect. 3.2 we generalize the results from time-invariant to time-periodic systems. Here, the matrix logarithm of the monodromy matrix w.r.t. the length of the period takes the role of the time-invariant coefficient matrix. A newly defined time-dependent norm yields two-sided bounds and properties such as decoupling, vibration suppression and monotonicity of the solution as well. This is a generalization of the time-invariant case in [19]. We use and explain two methods to solve the linear time-periodic system. The first one is described in Sect. 4, where we approximate the solution of the system by trigonometric splines following ideas in [20, 24, 25] and then establish bounds on the quality of the approximation. The second method is the so-called spectral method [11, 26], which is explained in the setting of polynomial approximation of linear ordinary differential equations [10] in Sect. 5. We derive an upper bound based on the approximation quality and show its convergence to the solution of the linear time-periodic system. We conclude our theory on rigorous bounds for time-periodic systems with some remarks about convergence and computational complexity and show its effectiveness in Sect. 6 on various examples, which include an anisotropic rotor-bearing system and a parametrically excited cantilever beam.

2 Preliminaries

A linear time-periodic system is a set of linear ordinary differential equations (ODEs) with time-periodic coefficients of period T and a given initial condition

$$\begin{aligned} \dot{x}(t)= & {} A(t)x(t)\quad \forall t \in {\mathbb {R}}, \nonumber \\ A(t)= & {} A(t+T) \quad \forall t \in {\mathbb {R}}, \\ x(0)= & {} x_0,\nonumber \end{aligned}$$
(1)

where \(x :{\mathbb {R}}\rightarrow {\mathbb {R}}^{n}\) and \(A:{\mathbb {R}} \rightarrow {\mathbb {R}}^{n \times n}\).
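To fix ideas, the following minimal sketch integrates a system of the form (1) numerically. The Mathieu-type coefficient matrix is a hypothetical example chosen purely for illustration; it is not one of the systems studied later in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical illustration of (1): a Mathieu-type system
# x1' = x2, x2' = -(a - 2 q cos(2 t)) x1, which is T-periodic with T = pi.
a, q, T = 1.0, 0.2, np.pi

def A(t):
    return np.array([[0.0, 1.0],
                     [-(a - 2.0 * q * np.cos(2.0 * t)), 0.0]])

x0 = np.array([1.0, 0.0])
sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, 3 * T), x0,
                rtol=1e-10, atol=1e-12, dense_output=True)
```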

Throughout this paper, we denote with \({\mathcal {C}}(X,Y)\) the space of continuous functions and with \({\mathcal {C}}^k(X,Y)\) the space of k-times continuously differentiable functions that map the domain \(X \subseteq {\mathbb {R}}\) to its range \(Y\subseteq {\mathbb {R}}^{n\times n}\).

2.1 Existence and uniqueness of a solution

First of all, we pose the question whether a solution to (1) exists and, if it does, whether it is unique. We therefore cite a global existence and uniqueness result from [6] in the context of linear systems. Here, the periodicity of the system matrix can be omitted.

Proposition 1

Let \(A\in {\mathcal {C}}({\mathbb {R}},{\mathbb {R}}^{n\times n})\). Then there exists a unique solution x(t) of (1).

2.2 Floquet’s Theorem

The most fundamental result in the setting of linear time-periodic systems is Floquet's Theorem [8]. Originally, it was stated for a scalar ordinary differential equation of order \(m>1\), but here we follow the formulation for a linear system of ordinary differential equations, e.g., given in [22].

Proposition 2

(Floquet’s Theorem 1883) Let \(\varPhi (t)\) be a principal fundamental matrix of (1). Then

$$\begin{aligned} \varPhi (t+T) = \varPhi (t)C \qquad \forall t\in {\mathbb {R}}, \end{aligned}$$
(2)

where \(C=\varPhi (T)\) is a constant nonsingular matrix which is known as the monodromy matrix. In addition, for a matrix L such that

$$\begin{aligned} e^{LT}=\varPhi (T), \end{aligned}$$
(3)

there is a periodic matrix function \(t \mapsto Z(t)\) such that

$$\begin{aligned} \varPhi (t) = Z(t) e^{L t} \qquad \forall t\in {\mathbb {R}}. \end{aligned}$$
(4)

Equation (4) is called Floquet normal form since the structure of the solution to (1) is given by Floquet’s Theorem as

$$\begin{aligned} x(t) = Z(t) e^{Lt}x_0, \end{aligned}$$

where \(L,\, Z(t) \in {\mathbb {C}}^{n\times n}\) and \(Z(t)=Z(t+T)\) is nonsingular \(\forall t \in {\mathbb {R}}\). The eigenvalues of the matrix L, also known as Floquet exponents, determine the asymptotic behavior of the system. The real parts of the Floquet exponents are called Lyapunov exponents. The zero solution is asymptotically stable if all Lyapunov exponents are negative. It is stable if all Lyapunov exponents are non-positive and, whenever a Lyapunov exponent vanishes, the geometric and algebraic multiplicities of the corresponding eigenvalue coincide. Otherwise the zero solution is unstable.

The proof of Floquet's Theorem is non-constructive; hence one needs other methods and/or bounds to approximate the solution. Nevertheless, determining the fundamental solution (4) on the interval [0, T] is sufficient due to the semigroup property given in Eq. (2).
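In practice, the monodromy matrix \(\varPhi (T)\) can be approximated by integrating (1) once per canonical basis vector, and a matrix logarithm then yields an L satisfying (3). The sketch below, which reuses the hypothetical Mathieu-type A and T from the earlier example, is one straightforward way to do this; it is not the method proposed in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import logm

def monodromy(A, T, n):
    """Principal fundamental matrix Phi(T): integrate (1) column by column,
    starting from the canonical unit vectors."""
    cols = [solve_ivp(lambda t, x: A(t) @ x, (0.0, T), e,
                      rtol=1e-10, atol=1e-12).y[:, -1] for e in np.eye(n)]
    return np.column_stack(cols)

Phi_T = monodromy(A, T, 2)                        # A, T from the sketch above
L = logm(Phi_T) / T                               # e^{LT} = Phi(T), cf. (3)
lyapunov_exponents = np.linalg.eigvals(L).real    # real parts of Floquet exponents
asymptotically_stable = bool(np.all(lyapunov_exponents < 0))
```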

3 Bounds for time-dependent norm

We generalize results obtained for constant linear systems in [18] and [19] to time-periodic systems. First, we recall the obtained results in order to base our generalization on them. We consider the general case where the constant coefficient matrix is non-diagonalizable. The results for a diagonalizable matrix are stated in [18] and [19]. Essentially, the difference for a diagonalizable matrix is that the algebraic and geometric multiplicity of each eigenvalue coincide; hence, each Jordan block has size one.

Our generalization is based on Floquet’s Theorem that yields the so-called Floquet-Lyapunov coordinate transformation \(z(t)=Z^{-1}(t)x(t)=e^{Lt}x_0\) such that the original problem (1) is transformed into a linear system with constant coefficients

$$\begin{aligned} \begin{array}{lcl} \dot{z}(t) &{}=&{} Lz(t) \qquad \forall t \in {\mathbb {R}},\\ z(0) &{}=&{} x_0. \end{array} \end{aligned}$$
(5)

The solution of the transformed system (5) is \(z(t)=e^{Lt}x_0\).

3.1 Time-invariant setting

For \(u \in {\mathbb {C}}^{n}\) and \(A\in {\mathbb {C}}^{n\times n}\) in the following let \(u^*\) and \(A^*\) denote the conjugate transpose of u and A, respectively. Let \(v_k^{(i)}\) for \(k=1,\ldots ,m_i\) be the chain of right principal vectors, i.e.

$$\begin{aligned} L^*v_k^{(i)}=\lambda _i(L^*) v_k^{(i)} + v_{k-1}^{(i)} \end{aligned}$$

and \(v_0^{(i)}=0\) for \(i=1,\ldots ,r\), corresponding to an eigenvalue \(\lambda _i\) of \(L^*\). Let r be the number of Jordan blocks and \(m_i\) the algebraic multiplicity of the eigenvalue \(\lambda _i\). Then define the following matrices:

$$\begin{aligned} R_i^{(k,k)}:= & {} v_k^{(i)}v_k^{(i)^*} \qquad \text{ for }\quad k=1,\ldots ,m_i ,\quad i=1,\ldots ,r,\\ R_i:= & {} \sum _{k=1}^{m_i}{R_i^{(k,k)}},\\ R:= & {} \sum _{i=1}^r{R_i}. \end{aligned}$$

The matrices \(R_i\) are eigenmatrices of the matrix eigenvalue problem \(R_iL+L^*R_i = 2 \hbox {Re}({\lambda _i}) R_i\). Here, L replaces the time-invariant system matrix in [18]. We recall the following results, given as Propositions 3 and 4 and Lemma 1, from [18] for the time-invariant system (5) and a possibly non-diagonalizable system matrix L.
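For the diagonalizable case, where every Jordan chain reduces to a single eigenvector, the matrices \(R_i\) and R can be assembled directly from the eigenvectors of \(L^*\). A minimal sketch under that assumption (the general case would require Jordan chains):

```python
import numpy as np

def weight_matrices(L):
    """R_i = v_i v_i^* from normalized eigenvectors v_i of L^*, and R = sum_i R_i.
    Assumes L is diagonalizable; Jordan chains would be needed otherwise."""
    _, V = np.linalg.eig(L.conj().T)       # columns are eigenvectors of L^*
    V = V / np.linalg.norm(V, axis=0)      # normalize each eigenvector
    R_list = [np.outer(v, v.conj()) for v in V.T]
    return R_list, sum(R_list)             # R is Hermitian positive definite
```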

Proposition 3

For \(k=1,\ldots ,m_i,\, i=1,\ldots ,r\), \(R_i^{(k,k)}\) and \(R_i\) are positive semi-definite and R is positive definite.

Hence, \(\Vert \cdot \Vert _R\) is a norm defined by \(\Vert v\Vert _R^2 = (Rv,v),\, v \in {\mathbb {C}}^n\) and \(\Vert \cdot \Vert _{R_i}\) is a semi-norm defined by \(\Vert v\Vert _{R_i}^2 = (R_iv,v),\, v\in {\mathbb {C}}^n\). In general, \(\Vert \cdot \Vert _{R_i}\) does not fulfill the definiteness property. Furthermore, the square of the semi-norm \(\Vert \cdot \Vert _{R_i}^2\) has a decoupling and filter effect shown by the next proposition [18].

Proposition 4

Let z(t) be the solution to the IVP (5) and

$$\begin{aligned} p_{x_0,k-1}^{(i)}(t):=\left( x_0, v_1^{(i)} \frac{t^{k-1}}{(k-1)!}+\cdots +v_{k-1}^{(i)}t+v_{k}^{(i)}\right) , \end{aligned}$$
(6)

for \(k=1,\ldots , m_i, \, i=1,\ldots ,r\). Then

$$\begin{aligned} \Vert z(t) \Vert _{R_i^{(k,k)}}^2 = \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t \hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}, \end{aligned}$$
(7)

and

$$\begin{aligned} \Vert z(t) \Vert _{R}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \Vert z(t) \Vert _{R_i^{(k,k)}}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t \hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}. \end{aligned}$$

The polynomials in \(p_{x_0,k-1}^{(i)}(t)\) of Eq. (6) are due to the Jordan blocks, hence to the non-diagonalizability of the matrix L, i.e., if the matrix L is diagonalizable, then all polynomials in (6) are constant.

Lemma 1

Let

$$\begin{aligned} \psi _k^{(i)} (t) := p_{x_0,k-1}^{(i)}(t) e^{t \hbox {Re}\lambda _i} \quad \text{ for } t \in {\mathbb {R}}, \end{aligned}$$
(8)

\(\psi ^{(i)}(t)=[\psi _1^{(i)},\ldots ,\psi _{k}^{(i)},\ldots \psi _{m_i}^{(i)}]^T\) for \(i=1,\ldots ,r\) and \(k=1,\ldots ,m_i\) and \(\psi (t) = [\psi ^{(1)}(t)^T,\ldots ,\psi ^{(i)}(t)^{T},\ldots ,\psi ^{(r)}(t)^T]^T\). Then

$$\begin{aligned} \Vert z(t) \Vert _{{R}} = \Vert {\psi }(t) \Vert _2 \qquad \text{ for } t \in {\mathbb {R}}. \end{aligned}$$
(9)

Lemma 1 shows the connection to the Euclidean norm of the function \(\psi \). By the equivalence of norms in finite-dimensional vector spaces, a two-sided bound \(c\Vert \psi (t)\Vert _p\le \Vert x(t)\Vert _R \le C\Vert \psi (t)\Vert _p\) with \(1\le p \le \infty \) can be derived. For \(p=2\), the constants c and C can be chosen as unity by Lemma 1.

3.2 Time-periodic setting

In the following we denote by \(B^{-*}\) the inverse of the conjugate transpose of B, i.e., \(B^{-*}=(B^*)^{-1}=(B^{-1})^*\). First, we show that the matrix \(\tilde{R}(t)\) is Hermitian, positive definite and bounded for any \(t\in {\mathbb {R}}\) under suitable assumptions on R. For the definition of a more general time-dependent norm, see [32].

Lemma 2

Let R be Hermitian and positive definite and \(\tilde{R}(t)=Z^{-*}(t)RZ^{-1}(t)\), where Z(t) is defined by Floquet’s normal form (4). Then

1. \(\tilde{R}(t)\) is positive definite for all \(t \in {\mathbb {R}}\),

2. \(\tilde{R}(t)\) is Hermitian for all \(t\in {\mathbb {R}}\),

3. \(\tilde{R}(t)\) is T-periodic, i.e. \(\tilde{R}(t)=\tilde{R}(T+t)\) for all \(t\in {\mathbb {R}}\), and

4. \(\tilde{R}(t)\) is bounded, i.e., there exist \(c,C>0: c\le \Vert \tilde{R}(t)\Vert \le C\) for all \(t\in {\mathbb {R}}\).

Proof

1. Choose u and t arbitrary but fixed and let \(\tilde{u}=Z^{-1}(t)u\); then

$$\begin{aligned} u^*\tilde{R}(t)u = u^*Z^{-*}(t)RZ^{-1}(t)u=\tilde{u}^*R\tilde{u} \ge 0, \end{aligned}$$

for all \(\tilde{u} \in {\mathbb {C}}^{n}\) since R is positive definite. Now,

$$\begin{aligned} \tilde{u}^*R\tilde{u}=0 \Leftrightarrow \tilde{u}=0 \Leftrightarrow \tilde{u}=Z^{-1}(t)u=0 \Leftrightarrow u=0, \end{aligned}$$

since Z(t) has full rank and is invertible for all t.

2. \(\tilde{R}(t)\) is Hermitian, since R is Hermitian.

3. \(\tilde{R}(t)\) is T-periodic, since Z(t) is T-periodic.

4. \(Z^{-1}(t)=e^{Lt}\varPhi ^{-1}(t)\) and \(Z^{-*}(t)=\varPhi ^{-*}(t)e^{L^*t}\) are continuous and periodic with period T; note that \(\varPhi ^{-1}(t)\) exists for all t since \(\varPhi (t)\) is a fundamental matrix [22]. Hence \(\tilde{R}(t)\) and \(p: t \mapsto \Vert \tilde{R}(t) \Vert \) are continuous and periodic as well. Due to the extreme value theorem [9], p attains its minimum c and maximum C at some \(t_c \in \left[ 0,T \right] \) and \(t_C \in \left[ 0,T \right] \), respectively. Since p is periodic, it can be bounded globally: \(c \le \Vert \tilde{R}(t)\Vert \le C\). Since \(\tilde{R}(t)\) has full rank for all \(t\in {\mathbb {R}}\), \(\tilde{R}(t_c)\) has full rank; hence \(\tilde{R}(t_c) \ne 0\) and therefore \(c>0\), i.e.

$$\begin{aligned} \exists c,C>0: c \le \Vert \tilde{R}(t)\Vert \le C \quad \forall t \in {\mathbb {R}}. \end{aligned}$$

\(\square \)

Let \(\Vert \cdot \Vert _R\) be a global norm. Then we define a local (time-dependent) norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\), see e.g., [32], as

$$\begin{aligned} \Vert u \Vert _{\tilde{R}(t)} := (Z^{-*}(t)RZ^{-1}(t) u, u)^\frac{1}{2}. \end{aligned}$$
(10)

By Lemma 2, \(\Vert \cdot \Vert _{\tilde{R}(t)}\) is well defined and fulfills the axioms of a norm. Furthermore,

$$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}= & {} (Z^{-*}(t)RZ^{-1}(t) x(t), x(t))^\frac{1}{2}\\= & {} (RZ^{-1}(t) x(t),Z^{-1}(t) x(t))^\frac{1}{2} \\= & {} \Vert Z^{-1}(t) x(t) \Vert _R = \Vert z(t)\Vert _R = \Vert e^{Lt}x_0 \Vert _R \end{aligned}$$

holds. In the following we generalize results from the previous Sect. 3.1 to the norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\).
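Numerically, evaluating \(\Vert x(t)\Vert _{\tilde{R}(t)}\) only requires one linear solve with Z(t). A small sketch, assuming Z and R have been computed elsewhere (e.g., from the Floquet normal form and the construction above):

```python
import numpy as np

def norm_R(u, R):
    """||u||_R = (R u, u)^{1/2} for Hermitian positive definite R."""
    return float(np.sqrt(np.real(np.vdot(u, R @ u))))

def norm_R_tilde(u, t, Z, R):
    """||u||_{R~(t)} = ||Z(t)^{-1} u||_R, cf. (10); Z maps t to the Floquet factor Z(t)."""
    return norm_R(np.linalg.solve(Z(t), u), R)
```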

Theorem 1

(Decoupling and filter effect of the norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\)) Let L be a complex matrix such that it fulfills (3) and z(t) be the solution to the IVP (5). Then

$$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}^2 = \Vert z(t) \Vert _{R}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t\hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}, \end{aligned}$$
(11)

where \(p_{x_0,k-1}^{(i)}(t)\) for \(k=1,\ldots , m_i\) and \(i=1,\ldots ,r\) are defined in (6).

Proof

The relation \(\Vert x(t) \Vert _{\tilde{R}(t)}^2 = \Vert z(t) \Vert _{R}^2\) is given by (10) and \(\Vert z(t) \Vert _{R}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t \hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}\) is given by Proposition 4. \(\square \)

Proposition 4 shows a decoupling and filter effect of the semi-norms \(\Vert \cdot \Vert _{R_i^{(k,k)}}^2\) for \(k=1,\ldots ,m_i\) and \(i=1,\ldots ,r\), which carries over to the norm \(\Vert \cdot \Vert _R^2\) by Proposition 4 and to \(\Vert \cdot \Vert _{\tilde{R}(t)}^2\) by Theorem 1. Decoupling and filtering are meant in the sense that we obtain a system of decoupled differential equations, where only the real parts of the eigenvalues are passed and the imaginary parts are suppressed. The semi-norms suppress vibration in this sense of decoupling and filtering, as stated in Corollary 1.

Corollary 1

(Vibration-suppression property of \(\Vert x(t) \Vert _{\tilde{R}(t)}\))

  • If L is diagonalizable, then

    $$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}^2 = \sum _{i=1}^{n} \left\| x_0\right\| _{R_i}^2 e^{2t\hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}. \end{aligned}$$
  • If L is non-diagonalizable, then

    $$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t\hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}. \end{aligned}$$

If the spectral abscissa \(\nu [L]=\max _{i=1,\ldots ,r} \hbox {Re}\lambda _i \) is negative, i.e., \(\nu [L]<0\), and \( d = \max _{i=1,\ldots ,r}\max _{k=1,\ldots ,m_i} {\text {degree}}(p_{x_0,k-1}^{(i)}(t)),\) then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) behaves essentially like \(t^d e^{-t}\), i.e., there exists \(t_1>0\) such that \(\Vert x(t) \Vert _{\tilde{R}(t)}\searrow 0\) (monotonic decrease) for \(t\ge t_1\) as \(t \rightarrow \infty \). If the matrix L is diagonalizable and the spectral abscissa is nonzero, then one can conclude a monotonic behavior in \(\Vert \cdot \Vert _{\tilde{R}(t)}\), since no Jordan block occurs.

Corollary 1 does not state that the vibrations of the linear time-periodic system (1) are suppressed, but that they are suppressed in the \(\tilde{R}(t)\)-norm of its solution due to the decoupling and filtering effect of the norm. We would like to mention the following two cases of monotonic behavior:

1. If the spectral abscissa \(\nu [L]=\max _{i=1,\ldots ,n} \hbox {Re}\lambda _i <0\) for a diagonalizable matrix L, then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) tends monotonically to zero, i.e., \(\Vert x(t) \Vert _{\tilde{R}(t)} \searrow 0\) as \(t \rightarrow \infty \).

2. If all eigenvalues have positive real part, i.e., \(\hbox {Re}\lambda _i >0\) for \(i=1,\ldots ,r\), then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) tends monotonically to infinity, i.e., \(\Vert x(t) \Vert _{\tilde{R}(t)} \nearrow \infty \) as \(t \rightarrow \infty \). In general, if a mechanical system is vibrating with an increasing amplitude, then the system will eventually collapse.

The monotonic behavior of \(\Vert x(t) \Vert _{\tilde{R}(t)}\) can be used to derive upper bounds on the amplitude of \(\Vert x(t)\Vert _\infty \).

4 Trigonometric spline bound

In [20], the authors introduced a method of spline approximation in order to solve ODEs. This idea was further developed by many other researchers, see e.g., [23, 24] and [25], who used trigonometric B-splines of second and third order to solve nonlinear ODEs. We use a modified approach in order to apply it to a linear system of ODEs and, furthermore, equip the computation with rigorous bounds [4]. The unknown quantities are the coefficients of the trigonometric splines. While in the nonlinear approach one has to solve a series of nonlinear systems, this simplifies here to a series of structured linear systems; hence, a decrease in computational complexity and an effective speed-up is achieved. For further details on trigonometric splines we refer the interested reader to [27] and [28].

First, we need some mathematical basics. Let \(({\mathbb {R}}^n,\Vert \cdot \Vert _\infty )\) be a normed vector space and \({\mathcal {L}}^\infty ([0,T],{\mathbb {R}}^n)\) be the space of measurable and essentially bounded functions from [0, T] to \({\mathbb {R}}^n\). For a function \(x\in {\mathcal {L}}^\infty ([0,T],{\mathbb {R}}^n)\), its essential supremum serves as an appropriate norm:

$$\begin{aligned} \Vert x\Vert _\infty := \inf \{ C\ge 0 : \Vert x(t)\Vert \le C \text{ for } \text{ almost } \text{ every } t \in [0,T] \}. \end{aligned}$$

As a reminder, \(\Vert x(t)\Vert _\infty \) denotes the maximum norm of a vector, i.e., its maximal absolute component,

$$\begin{aligned} \Vert x(t) \Vert _\infty := \max \left\{ |x_1(t)|,\ldots ,|x_n(t)|\right\} . \end{aligned}$$
(12)
Fig. 1: Quadratic trigonometric B-splines at equidistant nodes

Here, the idea is that the solution x(t) to (1) is approximated by splines. Due to the periodicity of our initial problem (1), trigonometric splines are chosen which mimic the behavior of the periodic system matrix A(t). In order to perform a spline interpolation, we need a node sequence and for the sake of simplicity we choose \(r+1\) equidistant nodes \(\varOmega _r=\left\{ t_0,\ldots ,t_r\right\} \) in the interval [0, T] with \(t_0=0\) and \(t_r=T\), i.e., \(t_i=ih\) for \(i=0,1,\ldots ,r\) with \(h=\frac{T}{r}\). The restriction of the quadratic trigonometric splines to any subinterval \([t_i, t_{i+1}]\) is a linear combination of \(\left\{ 1,\cos (t),\sin (t)\right\} \). Trigonometric B-splines \(S_i(t)\) are defined by

$$\begin{aligned} S_i(t) = \theta {\left\{ \begin{array}{ll} \sin ^2 \big ({\frac{t-t_{i-1}}{2}}\big ) &{}\text{ if } t\in \left[ t_{i-1}, t_{i} \right) , \\ \sin \big ({\frac{t-t_{i-1}}{2}}\big ) \sin \big ({\frac{t_{i+1}-t}{2}}\big ) +\sin \big ({\frac{t_{i+2}-t}{2}}\big )\sin \big ({\frac{t-t_{i}}{2}}\big ) &{}\text{ if } t\in \left[ t_{i}, t_{i+1} \right) , \\ \sin ^2\big ({\frac{t_{i+2}-t}{2}}\big ) &{}\text{ if } t\in [ t_{i+1}, t_{i+2} ),\\ 0 &{}\text{ if } t\notin \left[ t_{i-1}, t_{i+2} \right] , \end{array}\right. } \end{aligned}$$

with \(\theta = \frac{1}{\sin (h) \sin \left( \frac{h}{2}\right) }\), see [23, 24, 28].
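A direct transcription of this piecewise definition, assuming the equidistant nodes \(t_k = kh\) from above:

```python
import numpy as np

def trig_bspline(t, i, h):
    """Quadratic trigonometric B-spline S_i(t) on the nodes t_k = k*h."""
    theta = 1.0 / (np.sin(h) * np.sin(h / 2.0))
    tm1, ti, tp1, tp2 = (i - 1) * h, i * h, (i + 1) * h, (i + 2) * h
    if tm1 <= t < ti:
        return theta * np.sin((t - tm1) / 2.0) ** 2
    if ti <= t < tp1:
        return theta * (np.sin((t - tm1) / 2.0) * np.sin((tp1 - t) / 2.0)
                        + np.sin((tp2 - t) / 2.0) * np.sin((t - ti) / 2.0))
    if tp1 <= t <= tp2:
        return theta * np.sin((tp2 - t) / 2.0) ** 2
    return 0.0  # compact support: S_i vanishes outside [t_{i-1}, t_{i+2}]
```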

A trigonometric B-spline \(S_i(t)\) is shown in Fig. 1. As can be seen there, for any inner subinterval \([t_i,t_{i+1}]\) with \(0<i<r\), the spline \(S_i(t)\) is fully determined. For the intervals \([t_0,t_1]\) and \([t_{r-1},t_r]\), artificial intervals \([t_{-1},t_0]\) and \([t_r,t_{r+1}]\) have to be included in the definition of \(S_i(t)\) such that the restriction to the respective subinterval is still a linear combination of the functions \(1,\cos (t)\) and \(\sin (t)\). If we denote by \(S_2(\varOmega _r)\) the space of quadratic trigonometric splines on [0, T] w.r.t. the nodes \(\varOmega _r\), then \(S_2(\varOmega _r) = \mathop {\mathrm {span}} \left\{ S_i\right\} _{i=-1}^r\). Hence every quadratic trigonometric spline can be expressed in the form \(\sum _{i=-1}^r{\alpha _i S_i(t)}\). The summation index i runs from \(-1\) to r; it does not count the nodes but the intervals \([t_i,t_{i+1}]\) for \(i=-1,\ldots ,r\), which include the aforementioned artificial intervals. In our case, the coefficients \(\alpha _i\) are unknown and have to be determined.

Now, we describe in more detail a method to compute the coefficients \(\alpha _i\). First, let us collect quadratic trigonometric splines \(s_j(t)\) with compact support into a vector \(s(t)=[s_1(t),\ldots ,s_n(t)]^T\) such that each \(s_j(t)\) approximates \(x_j(t)\) for \(j=1,\ldots ,n\), i.e.

$$\begin{aligned} x(t)= \left[ \begin{array}{c} x_1(t)\\ \vdots \\ x_n(t) \end{array}\right] \approx \left[ \begin{array}{c} s_1(t)\\ \vdots \\ s_n(t)\\ \end{array}\right] = \sum _{i=-1}^r{\alpha ^{(i)}S_i(t)} = s(t) \qquad \text{ for } t \in [0,T], \end{aligned}$$

where the unknown coefficients of the trigonometric B-splines are given by the coefficient vectors \(\alpha ^{(i)}\in {\mathbb {R}}^n\) for \(i=-1,0,\ldots ,r\). By demanding that the spline s fulfills the ODE (1), i.e., \(\dot{s}(t_i)=A(t_i)s(t_i)\) at the nodes \(t_i\) for \(i=0,\ldots ,r\), one obtains a sequence of \(r+1\) linear systems

$$\begin{aligned} A^{(i)} \alpha ^{(i)} = b^{(i)} \end{aligned}$$

for the coefficient vector \(\alpha ^{(i)}\). It is a sequence since the coefficient matrix \(A^{(i)}\) and the right-hand side \(b^{(i)}\) change w.r.t. the i-th node \(t_i\)

$$\begin{aligned} \begin{aligned} A^{(i)}&= I_n - \tan {\left( \frac{h}{2}\right) }A(t_{i}) \qquad&\text{ for } i=0,\ldots ,r,\\ b^{(i)}&= \left( I_n+\tan {\left( \frac{h}{2}\right) }A(t_{i}) \right) \alpha ^{(i-1)} \qquad&\text{ for } i=0,\ldots ,r, \end{aligned} \end{aligned}$$

where \(I_n\) is the n-dimensional identity matrix and the initial condition \(s(t_0)=x_0\) yields \(\alpha ^{(-1)} = \cos {\left( \frac{h}{2}\right) }x_0 - \sin {\left( \frac{h}{2}\right) }A(t_0)x_0\).
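The whole spline solution thus follows from one sweep over the nodes, solving one \(n\times n\) linear system per node. A sketch of this recursion:

```python
import numpy as np

def spline_coefficients(A, T, r, x0):
    """Coefficient vectors alpha^(-1), alpha^(0), ..., alpha^(r) from the
    recursion A^(i) alpha^(i) = b^(i) at the equidistant nodes t_i = i*h."""
    n, h = len(x0), T / r
    I = np.eye(n)
    alphas = [np.cos(h / 2.0) * x0 - np.sin(h / 2.0) * (A(0.0) @ x0)]  # alpha^(-1)
    for i in range(r + 1):
        Ai = I - np.tan(h / 2.0) * A(i * h)
        bi = (I + np.tan(h / 2.0) * A(i * h)) @ alphas[-1]
        alphas.append(np.linalg.solve(Ai, bi))
    return alphas
```

The spline solution is then \(s(t)=\sum _{i=-1}^r{\alpha ^{(i)}S_i(t)}\), with the B-splines \(S_i\) evaluated as in the sketch after their definition.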

Nikolis investigated this procedure for nonlinear systems [23], where one does not solve a sequence of linear systems but a sequence of nonlinear systems by an iterative method such as Newton's method. In fact, trigonometric splines are L-splines [28], where L corresponds to a certain linear differential operator, which in our case is \(L_3 x:=x'''+x'\), with x the solution of (1). The convergence result for nonlinear systems carries over to the linear case and is stated in Proposition 5.

Proposition 5

(Nikolis [23]) For \(A\in {\mathcal {C}}^2([0,T],{\mathbb {R}}^{n \times n})\), the quadratic trigonometric spline converges quadratically to the solution, more precisely \(\Vert x-s \Vert _{\infty } = {\mathcal O}(\Vert L_3 x \Vert _\infty r^{-2})\).

The following rigorous upper bound is based on Proposition 5, see [4].

Theorem 2

Let \(A\in {\mathcal {C}}^2([0,T],{\mathbb {R}}^{n\times n})\). Then, \(L_3x\in {\mathcal {L}}^\infty ([0,T],{\mathbb {R}}^n)\) and

$$\begin{aligned} \Vert x(t)\Vert _\infty \le \Vert s(t)\Vert _\infty + \left\| L_3 x \right\| _{\infty } \left( \varTheta _1\varTheta _2(t) + \varTheta _3(t) + \varTheta _4(t)\right) , \end{aligned}$$
(13)

where

$$\begin{aligned} \varTheta _1&= \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{L|\sin (h)|+L\left| \tan {\left( \frac{h}{2}\right) }\right| } \left[ \left( \frac{1+L\left| \sin (h)\right| }{1-L\left| \tan \left( \frac{h}{2}\right) \right| }\right) ^i-1\right] \\ \varTheta _{2}(t)&= 1+L|\sin (t-t_i)|+L\left| \cot (h)(1-\cos (t-t_i))\right| \\ \varTheta _{3}(t)&= L \frac{ \left| 2\tan {\left( \frac{h}{2}\right) }-h\right| }{1-L \left| \tan {\left( \frac{h}{2}\right) } \right| } \left| \frac{1-\cos (t-t_i)}{\sin (h)}\right| \\ \varTheta _{4}(t)&= \left| \frac{(1-\cos (h))(1-\cos (t-t_i))}{\sin (h)}\right| + |t-t_i-\sin (t-t_i)| \end{aligned}$$

for \(t \in (t_i,t_{i+1}]\), where h is sufficiently small such that \(L|\tan {\left( \frac{h}{2}\right) }|<1\), L is the Lipschitz constant of the ODE (1), and \(L_3 x=x'''+x'\).

Since the proof of Theorem 2 is lengthy, it is given in the Appendix. As \(h\rightarrow 0\), the spline converges to the solution by Proposition 5, and the upper bound converges to the norm of the solution by Theorem 2.
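The Θ terms of Theorem 2 are explicit, so the error part of the bound (13) can be evaluated directly once L and \(\Vert L_3 x\Vert _\infty \) are available. A direct, unoptimized transcription (the endpoint handling at the nodes is a simplification):

```python
import numpy as np

def spline_bound_error_term(t, h, L, L3x_norm):
    """Error term of (13): ||L3 x||_inf * (Theta1*Theta2(t) + Theta3(t) + Theta4(t)).
    t: time in (t_i, t_{i+1}]; h: node spacing; L: Lipschitz constant of (1);
    L3x_norm: ||L3 x||_inf. Requires L*|tan(h/2)| < 1, i.e., h small enough."""
    i = max(int(np.ceil(t / h)) - 1, 0)   # index with t in (t_i, t_{i+1}]
    d = t - i * h                         # local coordinate t - t_i
    th = abs(np.tan(h / 2.0))
    assert L * th < 1.0, "h is not small enough for Theorem 2"
    growth = (1.0 + L * abs(np.sin(h))) / (1.0 - L * th)
    theta1 = (abs(2.0 * np.tan(h / 2.0) - h) / (L * abs(np.sin(h)) + L * th)
              * (growth ** i - 1.0))
    theta2 = 1.0 + L * abs(np.sin(d)) + L * abs((1.0 - np.cos(d)) / np.tan(h))
    theta3 = (L * abs(2.0 * np.tan(h / 2.0) - h) / (1.0 - L * th)
              * abs((1.0 - np.cos(d)) / np.sin(h)))
    theta4 = (abs((1.0 - np.cos(h)) * (1.0 - np.cos(d)) / np.sin(h))
              + abs(d - np.sin(d)))
    return L3x_norm * (theta1 * theta2 + theta3 + theta4)
```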

5 Spectral bound by Chebyshev projections

The key idea is to replace the system (1) by an approximation. We use the spectral method [11, 34] in the setting of polynomial approximation of linear ordinary differential equations [3, 10]. The solution of the approximated system is entire, and hence a bound on the truncation error of the approximated solution can be given. Here, we approximate the system matrix by Chebyshev polynomials [5] and use results from approximation theory [36] in order to derive rigorous bounds on the original solution x(t). As preliminaries, we need some results from approximation theory; we focus on Chebyshev polynomials, which were introduced in [5], and on Chebyshev projections, following the presentation in [36]. Any approximation could be used to replace the original system, but our focus is on Chebyshev polynomials since they nearly minimize the maximal error, which is exactly the quantity controlled by the previously introduced bounds. In Sect. 5.2 we explain the general idea of the spectral method and how we use the results from approximation theory in order to derive bounds. The bound depends heavily on how well the original system is approximated.

5.1 Chebyshev polynomials and projections

Chebyshev polynomials of the first kind can be defined by the three-term recurrence relation

$$\begin{aligned} T_{k+1}(t) = 2t T_{k}(t) - T_{k-1}(t), \end{aligned}$$
(14)

for \(k = 1,2,3, \ldots \), where \(T_0(t)=1\) and \(T_1(t)=t\).

Chebyshev polynomials are orthogonal over the interval \([-1,1]\):

$$\begin{aligned} (T_i,T_j)_\omega := \int _{-1}^1{T_i(t)T_j(t) \omega (t) \mathrm{d}t} = \left\{ \begin{array}{ll} 0 &{}\quad \text{ for } i \ne j \\ \pi &{}\quad \text{ for } i=j=0 \\ \frac{\pi }{2} &{}\quad \text{ for } i=j\ne 0 \\ \end{array} \right. \end{aligned}$$
(15)

with the weight function \(\omega (t)=\frac{1}{\sqrt{1-t^2}}\). In the following, we state results only for the interval \([-1,1]\), but they can be generalized to any interval, since by an affine time transformation the Chebyshev polynomials can be mapped to an arbitrary interval. A Lipschitz continuous function f on \([-1,1]\) has a unique representation as a Chebyshev series [36],

$$\begin{aligned} f(t) = \sum _{k=0}^\infty {c_k T_k(t)}, \end{aligned}$$

which is absolutely and uniformly convergent. The coefficients \(c_k\) are given by the orthogonality relationship (15),

$$\begin{aligned} c_0 = \frac{1}{\pi }(f,T_0)_\omega \qquad \text{ and } \text{ for } k>0: \qquad c_k = \frac{2}{\pi }(f,T_k)_\omega . \end{aligned}$$
(16)
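Under the substitution \(t=\cos \theta \), the weight \(\omega \) disappears and the coefficients (16) reduce to plain cosine integrals, which a simple midpoint (Gauss–Chebyshev) rule approximates well. A minimal sketch, with the number of quadrature nodes N as an assumption:

```python
import numpy as np

def cheb_coeffs(f, m, N=2000):
    """Approximate c_0, ..., c_m of (16): with t_j = cos(theta_j),
    theta_j = (j + 1/2) pi / N, the weighted integrals (f, T_k)_w
    become cosine sums over the midpoint grid."""
    theta = (np.arange(N) + 0.5) * np.pi / N
    fvals = f(np.cos(theta))
    c = np.array([(2.0 / N) * np.sum(fvals * np.cos(k * theta))
                  for k in range(m + 1)])
    c[0] /= 2.0  # c_0 carries the factor 1/pi instead of 2/pi
    return c
```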

The m-truncated Chebyshev series is defined as

$$\begin{aligned} f_m(t) := (P_m f)(t):= \sum _{k=0}^m{c_k T_k(t)}. \end{aligned}$$
(17)

For \(m\in {\mathbb {N}}\), let \({\mathcal {P}}_m\) be the space of polynomials of degree at most m. Clearly, the Chebyshev polynomials \(T_k\), \(k=0,1,\ldots ,m\), are a basis of \({\mathcal {P}}_m\). Let \({\mathcal {C}}\) be the space of continuous functions. Then \(P_m:{\mathcal {C}}\rightarrow {\mathcal {P}}_m\) defined by (17) is a linear operator and it is also called Chebyshev projection since \(P_m p = p\) for any \(p \in {\mathcal {P}}_m\) and \(P_m T_k = 0\) for \(k>m\). We recall the following two propositions given in [35, 36] which are essential for the derivation of our spectral bounds.

Proposition 6

If f and its derivatives through \(f^{(\nu -1)}\) are absolutely continuous on \([-1,1]\) and if the \(\nu \)-th derivative \(f^{(\nu )}\) is of bounded variation V for some \(\nu \ge 1\), then for any \(m>\nu \), the Chebyshev projection satisfies

$$\begin{aligned} \Vert f-f_m \Vert _{\infty } \le \frac{2V}{\pi \nu (m-\nu )^\nu }. \end{aligned}$$

For \(\rho >1\) let the Bernstein ellipse \({\mathcal {E}}_\rho \) be defined as

$$\begin{aligned} {\mathcal {E}}_\rho := \left\{ \frac{r e^{i\theta }+r^{-1} e^{-i\theta }}{2} \in {\mathbb {C}}:\, -\pi \le \theta \le \pi ,\, 1\le r < \rho \right\} . \end{aligned}$$

Since \(\rho e^{i\theta }+\rho ^{-1} e^{-i\theta } = (\rho +\rho ^{-1})\cos (\theta )+(\rho -\rho ^{-1})i \sin (\theta )\) for \(-\pi \le \theta \le \pi \), the boundary of the Bernstein ellipse \(\partial {\mathcal {E}}_\rho \) can be written in parametric form as \(\partial {\mathcal {E}}_\rho =\left\{ z \in {\mathbb {C}}:\frac{\hbox {Re}(z)^2}{a_\rho ^2}+\frac{\hbox {Im}(z)^2}{b_\rho ^2}=1 \right\} \), where its semi-axes are \(a_\rho = \frac{\rho +\rho ^{-1}}{2}\) and \(b_\rho = \frac{\rho -\rho ^{-1}}{2}\) with foci at \(\pm 1\). Figure 2 shows Bernstein ellipses in the complex plane for \(\rho =1.1,1.2,\ldots ,1.5\) as in [36].
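In Sect. 6, the parameter \(\rho \) is chosen from the decay of the Chebyshev coefficients; the derivation is omitted there, so the following heuristic is our own illustration, not the paper's procedure. It fits the geometric decay \(|c_k|\approx C\rho ^{-k}\) in log scale:

```python
import numpy as np

def estimate_rho(c, k_min=5, tol=1e-14):
    """Heuristic estimate of rho from |c_k| ~ C * rho^{-k}: least-squares fit
    of log|c_k| versus k, ignoring coefficients already at rounding level."""
    k = np.arange(len(c))
    mask = (k >= k_min) & (np.abs(c) > tol)
    slope = np.polyfit(k[mask], np.log(np.abs(c[mask])), 1)[0]
    return float(np.exp(-slope))
```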

Fig. 2: Bernstein ellipses \(\partial {\mathcal {E}}_\rho \) for \(\rho =1.1,1.2,\ldots ,1.5\)

Proposition 7

If f is analytic in \([-1,1]\) and analytically continuable to the open Bernstein ellipse \({\mathcal {E}}_\rho \), where it satisfies \(|f(t)|\le M\) for some M, then for each \(m\ge 0\) its Chebyshev projection satisfies

$$\begin{aligned} \Vert f-f_m \Vert _{\infty } \le \frac{2M\rho ^{-m}}{\rho -1}. \end{aligned}$$

5.2 Spectral method and spectral bound

We now return to our original problem of a linear time-periodic system (1) but instead of solving it directly, we first approximate it by the following system

$$\begin{aligned} \begin{array}{lcl} \dot{y}(t) &{}= &{}(P_{m} A)(t)y(t) \qquad \forall t \in [0,T],\\ y(0) &{}=&{} x_0, \end{array} \end{aligned}$$
(18)

where \((P_{m} A)\) denotes the component-wise Chebyshev projection of A, see (17). If \((P_{m} A)(t_1)\) commutes with \((P_{m} A)(t_2)\) for all times \(t_1\) and \(t_2\), then the solution to the approximated system (18) is given by \(y(t) = \exp \left( \int _0^t{(P_m A)(\tau ) \mathrm{d}\tau } \right) x_0\); it is entire since the entries of the integral are polynomials and the exponential of an entire function is entire. But in general the commutativity of \((P_m A)(t)\) is a rather strong assumption. Hence, we cite a more general result, which, e.g., is given in [6].

Proposition 8

Suppose \({\mathcal {A}}:{\mathbb {R}} \rightarrow {\mathbb {R}}^{n\times n}\) is analytic at \(\tau \in {\mathbb {R}}\), where \(\varrho \) is its radius of convergence, and u(t) is the unique solution to the ODE

$$\begin{aligned} \dot{u}(t) = {\mathcal {A}}(t)u(t), \end{aligned}$$
(19)

with \(u(0)=u_0\). Then u is also analytic at \(\tau \in {\mathbb {R}}\) with the same convergence radius \(\varrho \).

As a corollary, it follows that the solution y(t) is entire since the function \((P_m A)(t)\) is a polynomial which by definition is entire. If the approximation is exact, i.e., \(a_{ij}(t)\) is a polynomial of degree at most m for \(1\le i,j \le n\), then x(t) and y(t) coincide. In order to prove rigorous upper bounds on x(t), we use Propositions 6 and 7 to bound the difference between the original function A and its Chebyshev projection. These bounds depend on the smoothness of the system matrix A. Furthermore, define \(\gamma \) for a matrix function \(A:{\mathbb {R}} \rightarrow {\mathbb {R}}^{n\times n}\) as its maximal absolute entry, i.e.

$$\begin{aligned} \gamma := \max \limits _{1 \le i,j \le n} \, \max _{s\in [0,T]} |a_{ij}(s)|. \end{aligned}$$
(20)
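The approximated system (18) and the constant \(\gamma \) of (20) can be set up entrywise. The sketch below uses Chebyshev interpolation from NumPy instead of the projection \(P_m\) (by the remark at the end of Sect. 5, this only costs a factor 2 in the error bounds) and estimates \(\gamma \) by dense sampling; both are illustrative shortcuts rather than the paper's exact procedure.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.integrate import solve_ivp

def solve_approximated(A, T, m, x0):
    """Entrywise degree-m Chebyshev approximant of A on [0, T], then solve (18)."""
    n = len(x0)
    # interpolate each entry a_ij on [-1, 1] after mapping [0, T] affinely
    coef = [[C.chebinterpolate(
                 lambda s, i=i, j=j: np.array([A(T * (u + 1) / 2)[i, j] for u in s]), m)
             for j in range(n)] for i in range(n)]
    def PmA(t):
        u = 2.0 * t / T - 1.0
        return np.array([[C.chebval(u, coef[i][j]) for j in range(n)]
                         for i in range(n)])
    return solve_ivp(lambda t, y: PmA(t) @ y, (0.0, T), x0,
                     rtol=1e-10, atol=1e-12, dense_output=True)

def gamma_of(A, T, samples=10_000):
    """Crude estimate of gamma in (20) by dense sampling of [0, T]."""
    return max(np.max(np.abs(A(t))) for t in np.linspace(0.0, T, samples))
```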

Let \(\hbox {AC}\) denote the set of absolutely continuous functions and \(\hbox {AC}^{k}\) the set of k-times differentiable functions such that \(f^{(j)}\in \hbox {AC}\) for \(0\le j \le k\).

Theorem 3

If \(a_{ij} \in \hbox {AC}^{k-1}([0,T])\) and the k-th derivative \(a_{ij}^{(k)}\) is of bounded variation V for all \(i,j=1,\ldots ,n\), then for any \(m>k>0\):

$$\begin{aligned} \Vert x(t)\Vert _\infty \le \left\| y(t)\right\| _\infty + \frac{2nVe^{n\gamma t}}{\pi k(m-k)^{k}} \int _{0}^t \left\| y(s)\right\| _\infty \mathrm{d}s. \end{aligned}$$
(21)

Theorem 4

If \(a_{ij}\) is analytic in [0, T] and analytically continuable to the open Bernstein ellipse \({\mathcal {E}}_\rho \), where it satisfies \(|a_{ij}(t)| \le M\) for all \(i,j=1,\ldots ,n\) for some M, then for any \(m\ge 0\):

$$\begin{aligned} \Vert x(t)\Vert _\infty \le \left\| y(t)\right\| _\infty +\frac{2Mn\rho ^{-m} e^{n\gamma t}}{\rho -1} \int _{0}^t{\left\| y(s)\right\| _\infty \mathrm{d}s}. \end{aligned}$$
(22)

The proofs of Theorems 3 and 4 can be combined; for this, Gronwall's lemma is needed. Here, we use the integral version by R. Bellman [2], which, e.g., is given in [38].

Lemma 3

(Gronwall’s lemma) Let \(g: [a,b] \rightarrow {\mathbb {R}}\) and \(\beta : [a,b] \rightarrow {\mathbb {R}}\) be continuous, \(\alpha : [a,b] \rightarrow {\mathbb {R}}\) be integrable on \([a,b]\) and \(\beta (t)\ge 0\). Assume g(t) satisfies

$$\begin{aligned} g(t) \le \alpha (t) + \int _{a}^{t}{\beta (s)g(s)\mathrm{d}s}, \qquad t \in [a,b]. \end{aligned}$$

Then

$$\begin{aligned} g(t) \le \alpha (t) + \int _a^t\alpha (s)\beta (s)\exp \biggl (\int _s^t\beta (r)\,{\mathrm {d}}r\biggr ){\mathrm {d}}s,\qquad \forall t \in [a,b]. \end{aligned}$$

Furthermore, if \(\alpha \) is non-decreasing and \(\beta >0\) is constant, then

$$\begin{aligned} g(t) \le \alpha (t) e^{\beta (t-a)}, \qquad \forall t \in [a,b]. \end{aligned}$$

Now we return to the proof of Theorems 3 and 4.

Proof

x(t) and y(t) fulfill the integral formulation of the ODE

$$\begin{aligned} x(t)-y(t)= & {} \int _{0}^t A(s)x(s)-(P_m A)(s)y(s)\mathrm{d}s \\= & {} \int _{0}^t A(s)x(s)-A(s)y(s)+ A(s)y(s)-(P_m A)(s)y(s)\mathrm{d}s \\= & {} \int _{0}^t A(s)\left[ x(s)-y(s)\right] +\left[ A(s)-(P_m A)(s)\right] y(s)\mathrm{d}s \end{aligned}$$

Taking the maximum norm \(\Vert \cdot \Vert _\infty \) (12), which is a compatible matrix norm, on both sides and using the triangle inequality yields

$$\begin{aligned} \Vert x(t)-y(t)\Vert _\infty\le & {} \int _{0}^t \big ( \Vert A(s)\Vert _\infty \Vert x(s)-y(s)\Vert _\infty \big .\\&+\,\big .\Vert A(s)-(P_m A)(s)\Vert _\infty \Vert y(s)\Vert _\infty \big ) \mathrm{d}s \end{aligned}$$

The case \(\gamma =0\), i.e., \(A \equiv 0\) and x constant, is trivial. Otherwise, define \(\beta \) in Gronwall's lemma as \(\beta :=n\gamma >0\); hence

$$\begin{aligned} \Vert A(t)\Vert _\infty = \max \limits _{1 \le i \le n} \sum _{j=1}^n \underbrace{| a_{ij}(t)|}_{\le \gamma } \le n \gamma =\beta . \end{aligned}$$
1.

    If the assumptions of Theorem 3 are fulfilled, then

    $$\begin{aligned} \Vert A(s)-(P_m A)(s)\Vert _\infty= & {} \max \limits _{1 \le i \le n} \sum _{j=1}^n \underbrace{| a_{ij}(s)-(P_m a_{ij})(s) |}_{\le \frac{2V}{\pi k(m- k)^k}}\le \frac{2nV}{\pi k(m- k)^k}. \end{aligned}$$

    Therefore,

    $$\begin{aligned} \Vert x(t)-y(t)\Vert _\infty\le & {} \beta \int _{0}^t \Vert x(s)-y(s)\Vert _\infty \mathrm{d}s + \frac{2nV}{\pi k(m- k)^k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s \end{aligned}$$

    and applying Gronwall’s lemma with

$$\begin{aligned} g(t)= & {} \Vert x(t)-y(t)\Vert _\infty , \\ \alpha (t)= & {} \frac{2nV}{\pi k(m- k)^ k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s \end{aligned}$$

    and \(\beta ={\text {const}}>0\) yields

    $$\begin{aligned} \Vert x(t)-y(t)\Vert _\infty \le \frac{2nV e^{t n\gamma } }{\pi k(m- k)^ k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s. \end{aligned}$$
    (23)

    With the reverse triangle inequality the theorem follows.

2.

    If the assumptions of Theorem 4 are fulfilled, then

    $$\begin{aligned} \Vert A(s)-(P_m A)(s)\Vert _\infty = \max \limits _{1 \le i \le n} \sum _{j=1}^n \underbrace{| a_{ij}(s)-(P_m a_{ij})(s) |}_{\le \frac{2M\rho ^{-m}}{\rho -1}}\le \frac{2nM\rho ^{-m}}{\rho -1}. \end{aligned}$$

    The remaining proof is analogous to the previous case.\(\square \)

The ODE system (18) still has to be solved, but we know that its solution y is entire due to Proposition 8. Hence by Proposition 7, \(\Vert y(t)-(P_m y)(t)\Vert _\infty \le \frac{2M\rho ^{-m}}{\rho -1}\) for a Bernstein ellipse \({\mathcal {E}}_\rho \) in which \(|y_i(t)|\le M\) holds for some M and \(i=1,\ldots , n\). The Chebyshev projections of A and y do not necessarily have the same degree; hence, in the following we distinguish them by their subscripts: the index A refers to the matrix function A and the index y to the solution of (18). For a higher-order Chebyshev projection one expects a sharper upper bound. This convergence result is established by the following inequality, which follows from Eq. (23) in the proof of Theorem 3 together with Proposition 7. For a matrix function A satisfying the assumptions of Theorem 3, we obtain

$$\begin{aligned} \Vert x(t)-(P_{m_y}y)(t)\Vert _\infty\le & {} \Vert x(t)-y(t)\Vert _\infty +\Vert y(t)-(P_{m_y}y)(t)\Vert _\infty \nonumber \\\le & {} \frac{2nV e^{t n\gamma } }{\pi k(m_A- k)^ k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s+\frac{2M_y\rho _y^{-m_y}}{\rho _y-1}. \end{aligned}$$
(24)

And for an analytic matrix function A, we obtain

$$\begin{aligned} \Vert x(t)-(P_{m_y}y)(t)\Vert _\infty\le & {} \frac{2M_An\rho _A^{-m_A} e^{n\gamma t}}{\rho _A-1} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s+\frac{2M_y\rho _y^{-m_y}}{\rho _y-1}. \end{aligned}$$
(25)

Since \(\int _0^t\Vert y(s)\Vert _\infty \mathrm{d}s\) is bounded, the right-hand sides of (24) and (25) tend to zero as \(m_A,m_y\rightarrow \infty \). Hence, the approximated solution \(P_{m_y} y\) converges to the original solution x as the approximation levels \(m_A\) and \(m_y\) increase, i.e., \(P_{m_y} y\rightarrow x\) as \(m_A,m_y\rightarrow \infty \). In the first case, the rate of convergence is of order k, while for an analytic matrix function A one obtains geometric convergence.

With the reverse triangle inequality, we obtain the rigorous bounds with the assumptions on the matrix function A of Theorem 3

$$\begin{aligned} \Vert x(t)\Vert _\infty \le \left\| (P_{m_y}y)(t)\right\| _\infty + \frac{2nVe^{n\gamma t}}{\pi k(m_A-k)^{k}} \int _{0}^t \left\| (P_{m_y}y)(s)\right\| _\infty \mathrm{d}s + \epsilon (t), \end{aligned}$$
(26)

where \(\epsilon (t) = \frac{2M_y\rho _y^{-m_y}}{\rho _y-1}\left( 1+\frac{2nVe^{n\gamma t}}{\pi k(m_A-k)^{k}}t\right) \). And for the case of an analytic matrix function A (as in Theorem 4)

$$\begin{aligned} \Vert x(t)\Vert _\infty \le \left\| (P_{m_y}y)(t)\right\| _\infty +\frac{2M_An\rho _A^{-m_A} e^{n\gamma t}}{\rho _A-1} \int _{0}^t{\left\| (P_{m_y}y)(s)\right\| _\infty \mathrm{d}s} + \delta (t), \end{aligned}$$
(27)

where \(\delta (t) = \frac{2M_y\rho _y^{-m_y}}{\rho _y-1}\left( 1+\frac{2M_An\rho _A^{-m_A} e^{n\gamma t}}{\rho _A-1}t\right) \). The rigorous upper bounds (26) and (27) tend to the norm of the solution \(\Vert x(t)\Vert _\infty \) as \(m_A,m_y\rightarrow \infty \), since \(P_{m_y}y\rightarrow x\) as \(m_A,m_y\rightarrow \infty \) by (24) and (25).
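Given the numerical solution y of (18) and the constants, the right-hand side of (22) (and analogously (21), (26) and (27)) is a one-line quadrature. A sketch, assuming a dense_output solution object as returned by solve_ivp and a trapezoidal-rule grid:

```python
import numpy as np

def spectral_bound_analytic(sol_y, t, n, gamma, M, rho, m, grid=1001):
    """Right-hand side of (22): ||y(t)||_inf plus
    2 M n rho^{-m} e^{n gamma t} / (rho - 1) * int_0^t ||y(s)||_inf ds."""
    s = np.linspace(0.0, t, grid)
    y_norm = np.max(np.abs(sol_y.sol(s)), axis=0)   # ||y(s)||_inf along the grid
    integral = float(np.sum(0.5 * (y_norm[1:] + y_norm[:-1]) * np.diff(s)))
    factor = 2.0 * M * n * rho ** (-m) * np.exp(n * gamma * t) / (rho - 1.0)
    return float(y_norm[-1] + factor * integral)
```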

If the matrix function A is analytic, one does not need to replace the original system by (18), since even for the original system the solution is analytic by Proposition 8. But for the sake of completeness we also derived bounds for this case, and the bounds are very tight for moderate \(m_A\), as shown in Sect. 6.

Table 1 Convergence for trigonometric spline and spectral bound

Similar results can be obtained for interpolation instead of Chebyshev projection. In this context, the main question concerns the choice of interpolation points. If Chebyshev points are chosen, then the Chebyshev interpolant satisfies Propositions 6 and 7 with an additional factor of 2, see e.g., [36]. Hence, one can obtain results such as Theorems 3 and 4 with the same additional factor.

6 Overview and numerical results

First, we discuss the convergence of trigonometric splines and of the spectral method depending on the smoothness of A, as indicated by Theorem 2 and Propositions 6 and 7. In Table 1, the convergence rates for the trigonometric spline bound defined in Theorem 2 and the spectral bounds defined in Eqs. (26) and (27) are given for various function classes; they are visualized in Figs. 10 and 11. The computational complexity of the trigonometric spline bound is dominated by computing the spline solution. Trigonometric splines with compact support, i.e., trigonometric B-splines, are chosen due to the local influence of each spline. For general splines, a linear system of dimension \(n(r+1)\times n(r+1)\) has to be solved, while for B-splines, \(r+1\) systems of dimension \(n\times n\) have to be solved. Hence, the computational complexity for trigonometric B-splines is \({\mathcal O}(n^3(r+1))\). For the spectral bound, each element of the system matrix A has to be approximated, which can be done by the fast Fourier transform (FFT) in \({\mathcal O}((m+1)\log (m+1))\). The convergence of the trigonometric spline bound is local, i.e., a trigonometric spline \(S_i\), as visualized in Fig. 1, converges to the solution on its support \({\mathrm {supp}}(S_i)=\left\{ t \in [0,T]: S_i(t)\ne 0\right\} = [t_{i-1},t_{i+2}]\). The spectral bound converges globally, i.e., on the whole interval [0, T], to the solution.

The rigorous bounds are illustrated for three examples, which can all be described by a time-periodic system of the form (1). An overview of the settings is given in Table 2. In the following, the parameters r and \(m_A\) of the trigonometric spline bound and the spectral bound, respectively, are chosen such that, firstly, a visible difference between the solution and its respective upper bounds can be seen and, secondly, an effect of the parameters can be noticed. If the order of the Chebyshev projection \(m_A\) is increased slightly in Figs. 3, 5 and 7, the spectral bound can no longer be distinguished from the original solution. This observation does not hold for the trigonometric spline bound, since its convergence is slower, see Table 1 and Fig. 10 compared to Fig. 11b. But for a larger number of nodes r, the trigonometric spline bound tends to the solution by Proposition 5, compare Figs. 3, 5 and 7.

Computing global extrema is not an easy task due to the possibly large number of local minima and maxima of the objective function [14, 37]. The constants \(L,\Vert L_3 x\Vert _\infty \) and \(\gamma \) are determined by the fminsearch routine in MATLAB. Since in general only a local minimum is found by fminsearch, we combined it with a Global Search strategy of the Global Optimization Toolbox in MATLAB. The computed values for \(L,\Vert L_3 x\Vert _\infty \) and \(\gamma \) are given in Table 3. They are used in the figures mentioned above and also appear in the convergence rates of the methods in Table 1. Note that the parameters \(\rho _A\) and \(M_A\) with respect to the spectral bound are not unique; in particular, any Bernstein ellipse can be chosen if the function is entire. Here, we chose \(\rho _A\) with respect to the decay of the Chebyshev coefficients \(|c_k|\) given by (16); for the sake of simplicity the derivation is omitted, and for the appropriate examples \(\rho _A\) is given in Table 3. \(M_A\) is determined by the strategy mentioned above, i.e., by a combination of fminsearch and Global Search.
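As a stand-in for the fminsearch plus Global Search combination (sketched here in Python rather than MATLAB), a multi-start local search over [0, T] for the constant \(\gamma \) of (20) could look as follows; the grid density and the refinement window are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gamma_multistart(A, T, n, starts=200):
    """gamma from (20) via a coarse grid plus bounded local refinement per entry."""
    grid = np.linspace(0.0, T, starts)
    gamma = 0.0
    for i in range(n):
        for j in range(n):
            vals = np.array([abs(A(t)[i, j]) for t in grid])
            t0 = grid[np.argmax(vals)]
            lo, hi = max(0.0, t0 - T / starts), min(T, t0 + T / starts)
            res = minimize_scalar(lambda t: -abs(A(t)[i, j]),
                                  bounds=(lo, hi), method="bounded")
            gamma = max(gamma, float(vals.max()), float(-res.fun))
    return gamma
```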

Table 2 Setting for trigonometric spline and spectral bound
Fig. 3: Solution for \(A(t)=|\sin (2\pi t)|^3\) for \(t \in [0,1]\)

Table 3 Constants used for trigonometric spline bound and spectral bound

The first example is the one-dimensional IVP \(\dot{x}(t)=|\sin (2\pi t)|^3x(t)\) with initial condition \(x(0)=1\). The right-hand side function \(A(t)=|\sin (2\pi t)|^3\) is thrice differentiable with absolutely continuous derivatives, i.e., \(A\in \hbox {AC}^{3}([0,T])\). We use this example because here we are able to compare our results to the analytical solution, which is

$$\begin{aligned} x(t) = {\left\{ \begin{array}{ll} \exp \left( \frac{\cos ^3(2\pi t)}{6\pi }-\frac{\cos (2\pi t)}{2\pi }+\frac{1}{3\pi } \right) &{}\quad \text{ if } t\in [0,0.5), \\ \exp \left( -\frac{\cos ^3(2\pi t)}{6\pi }+\frac{\cos (2\pi t)}{2\pi }+\frac{1}{\pi } \right) &{}\quad \text{ if } t\in [0.5,1]. \end{array}\right. } \end{aligned}$$
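As a sanity check, this analytic solution (with the integration constants chosen so that \(x(0)=1\)) can be compared against a high-accuracy ODE solve; a minimal sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

def x_exact(t):
    """Analytic solution of x' = |sin(2 pi t)|^3 x, x(0) = 1, on [0, 1]."""
    t = np.asarray(t, dtype=float)
    c = np.cos(2.0 * np.pi * t)
    branch1 = np.exp(c ** 3 / (6 * np.pi) - c / (2 * np.pi) + 1 / (3 * np.pi))
    branch2 = np.exp(-c ** 3 / (6 * np.pi) + c / (2 * np.pi) + 1 / np.pi)
    return np.where(t < 0.5, branch1, branch2)

sol = solve_ivp(lambda t, x: np.abs(np.sin(2 * np.pi * t)) ** 3 * x,
                (0.0, 1.0), [1.0], rtol=1e-12, atol=1e-14, dense_output=True)
ts = np.linspace(0.0, 1.0, 101)
max_err = np.max(np.abs(sol.sol(ts)[0] - x_exact(ts)))  # ~ rounding level
```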

The results of the trigonometric spline bound and the spectral bound are shown in Fig. 3. For better approximation levels, the trigonometric spline bound and the spectral bound are closer to the norm of the original solution \(\Vert x(t)\Vert _\infty \), as indicated by the convergence results. The convergence rates are quadratic and cubic, respectively, as shown in Table 1.

Figure 4 shows the solution of the first example in the Euclidean norm and the weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\). For the one-dimensional example, the Euclidean norm and the maximum norm coincide with the absolute value, i.e., \(|\cdot |=\Vert \cdot \Vert _2=\Vert \cdot \Vert _\infty \). Furthermore, the weighted R-norm is a scaling, but since the single eigenvector is normalized, \(|\cdot |=\Vert \cdot \Vert _2=\Vert \cdot \Vert _\infty =\Vert \cdot \Vert _R\) holds. The weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillations and since the spectral abscissa is positive, \(\nu [L]=0.424413181578411 >0\), a monotonic increase can be observed, cf. Corollary 1.

Fig. 4: Solution for \(A(t)=|\sin (2\pi t)|^3\) for \(t\in [0,5]\)

As the second example, we chose a Jeffcott rotor on an anisotropic shaft supported by anisotropic bearings [1]. It can be modeled as a linear time-periodic system (1), where A(t) is entire, with system dimension \(n=4\). The same parameter values are chosen as in [1]. This is an asymptotically stable system, since the maximal Lyapunov exponent is \(\nu [L]=-0.002000131812440<0\). The results are illustrated in Fig. 5. The trigonometric spline bound for \(r=40,000\) is highly oscillatory, such that some components of its graph in Fig. 5 cannot be distinguished anymore; nevertheless, the upper bound is valid. If one can further assume smoothness of the solution, interpolating the valleys of the oscillations would give a smoother upper bound.

Fig. 5: Jeffcott rotor on an anisotropic shaft for \(t\in [0,2\pi ]\)

Figure 6 shows the solution of the Jeffcott rotor over time in the interval \([0,10\pi ]\) in various norms: the Euclidean norm, the maximum norm, the weighted time-invariant R-norm and the weighted time-dependent \(\tilde{R}(t)\)-norm. The weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillations and, since the matrix L is diagonalizable and the spectral abscissa is negative, \(\nu [L]<0\), a monotonic decrease can be observed, cf. Corollary 1.

Fig. 6: Jeffcott rotor on an anisotropic shaft for \(t\in [0,10\pi ]\)

The third example is an axially parametrically excited cantilever beam [7]. The planar beam model is composed of m finite elements; we chose the same parameter values as in [7]. The assembly of the mass, damping and stiffness matrices from \(m=4\) finite elements is well described in [7]. We used this assembly, which results in a periodic system matrix of dimension \(n=16\). The parametric excitation frequency \(\nu \) is chosen as the parametric combination resonance of first order, \(\nu =|\varOmega _1 - \varOmega _2|=138.44\). Furthermore, we introduce a coordinate transformation W. Hence, the system (1) is given not only by the original system matrix A(t) but also by the coordinate transformation W, i.e., the system reads

$$\begin{aligned} \dot{x}(t)= & {} W^{-1}A(t)Wx(t),\\ A(t)= & {} A(t+T), \qquad T=\frac{2\pi }{\nu },\\ x(0)= & {} x_0. \end{aligned}$$

The coordinate transformation W is a diagonal matrix and it is computed by the balance routine in MATLAB for A(t) at \(t=0\) in order to decrease the constant \(\gamma \) in (20). Of course, any \(t\in [0,T]\) could be chosen to determine a coordinate transformation, but our initial choice was sufficient to reduce \(\gamma \) by two orders of magnitude to \(\gamma =32\). The system is asymptotically stable since the maximal Lyapunov exponent is \(\nu [L]=-2.546655954908259\times 10^{-6}<0\).

Fig. 7: Parametrically excited cantilever beam for \(t\in [0,\frac{2\pi }{\nu }]\)

Fig. 8: Parametrically excited cantilever beam for \(t\in [0,\frac{\pi \times 10^4}{\nu }]\)

Fig. 9: Parametrically excited cantilever beam for \(t\in [0,\frac{2\pi \times 10^3}{\nu }]\)

Figure 7 shows the solution of the parametrically excited cantilever beam in the interval \([0,\frac{2\pi }{\nu }]\) in the maximum norm, together with its trigonometric spline upper bound and spectral upper bound for \(r=20,25\) and \(m_A=41,42\), respectively. From this figure, the asymptotic behavior cannot be concluded; hence we refer to Figs. 8 and 9. While Fig. 8 shows an oscillatory behavior of the solution of the cantilever beam over time in the interval \([0,\frac{10^4\pi }{\nu }]\) in the Euclidean norm and the maximum norm, Fig. 9 shows the solution in the weighted time-invariant R-norm and the weighted time-dependent \(\tilde{R}(t)\)-norm. Firstly, the weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillation of Fig. 8, and by Corollary 1 it is proven that the solution decreases monotonically, since the matrix L is diagonalizable and its spectral abscissa is negative, \(\nu [L]<0\). Hence, the solution is asymptotically stable, i.e., in any norm the solution decays to zero as \(t\rightarrow \infty \). Even with a larger time horizon this effect is not visible in Fig. 8, but due to the vibration suppression it can easily be seen in Fig. 9. Secondly, the matrix \(\tilde{R}(t):=Z^{-*}(t)RZ^{-1}(t)\) for this particular example is almost constant for all times. Surprisingly, the matrices \(\tilde{R}(t)\) and R almost coincide and hence, so do the curves \(\Vert x(t)\Vert _R\) and \(\Vert x(t)\Vert _{\tilde{R}(t)}\) in Fig. 9.

Fig. 10: Convergence rate for trigonometric splines

Fig. 11: Convergence rates for spectral bounds. a Spectral convergence rate for \(A(t)=|\sin (2\pi t)|\). b Spectral convergence rates for Jeffcott rotor and cantilever beam

7 Conclusions

Linear time-periodic systems arise in many fields of application, e.g., in parametrically excited systems and anisotropic rotor-bearing systems. In general, they are obtained by linearizing a nonlinear system about a periodic trajectory. Complete knowledge of the system's components is necessary to understand its transient behavior, which may not be feasible for very complex and large-scale systems. Hence, understanding system characteristics such as stability and robustness may be sufficient. The solution structure of a linear time-periodic system is known (Floquet's Theorem, Proposition 2). Nevertheless, in general, the solution has to be approximated since it cannot be given in closed form. Important physical properties such as stability and robustness can be lost due to the (numerical) approximations. In order to guarantee such properties for the original solution and not only for the approximation, one can derive analytic results on the solution, or the approximation error has to be incorporated in the analysis. This is the key idea of this paper: bounds that solely depend on the solution structure, or bounds that incorporate the approximation error. Firstly, we were able to generalize results from the linear time-invariant [17, 18] to the time-periodic setting and derive a time-varying norm that captures important properties such as decoupling, filtering and monotonicity. Secondly, we used two different methodologies in which the approximation error is incorporated in the upper bound. In the first one, an approximated solution is obtained via time discretization and a quadratic trigonometric spline approximation. The upper bound depends on the discretization grid of the quadratic trigonometric spline solution and converges quadratically to the original solution. The derived upper bound is an extension of work on the solution of ODEs by trigonometric splines [23–25]. In the second case we used a general framework: the linear time-periodic system is approximated by Chebyshev projections [36]. Here, we generalized results from [30, 31] w.r.t. convergence and convergence rates, and most importantly we could incorporate the two approximation errors of the Chebyshev projections into the rigorous upper bound. While the first approximation error is due to the polynomial approximation of the linear time-periodic system, the second error is due to solving the approximated system. The polynomial approximation of the linear time-periodic system yields properties of the solution such that it can be represented by an infinite series; truncation of this series yields the second error. A series representation of the solution is not necessarily possible for the original system.

In summary, the bounds converge to the original solution of the linear time-periodic system as the number of splines or the degree of the Chebyshev projections is increased. For a smooth time-periodic system, the spectral bound is in general superior to the trigonometric spline bound due to its faster convergence. In all cases the upper bounds converge to the norm of the solution if and only if the approximation converges to the solution. The computational complexity and convergence rate for the trigonometric spline bound and the spectral bound are stated. The applicability of all bounds and the stability analysis of linear time-periodic systems are demonstrated by means of various examples, which include a Jeffcott rotor and a parametrically excited cantilever beam.