# Trigonometric spline and spectral bounds for the solution of linear time-periodic systems

## Abstract

Linear time-periodic systems arise whenever a nonlinear system is linearized about a periodic trajectory. Examples include anisotropic rotor-bearing systems and parametrically excited systems. The structure of the solution to linear time-periodic systems is known due to Floquet’s Theorem. We use this information to derive a new norm that yields two-sided bounds on the solution and in which vibrations of the solution are suppressed. The obtained results generalize known results for linear time-invariant systems. Since Floquet’s Theorem is non-constructive, the applicability of the aforementioned results suffers in general from an unknown Floquet normal form. Hence, we discuss trigonometric splines and spectral methods, both of which are equipped with rigorous bounds on the solution. The methodology differs systematically between the two methods. While in the first method the solution is approximated by trigonometric splines and the upper bound depends on the approximation quality, in the second method the linear time-periodic system itself is approximated and its solution is represented as an infinite series. Depending on the smoothness of the time-periodic system, we formulate two upper bounds which incorporate the approximation error of the linear time-periodic system and the truncation error of the series representation. Rigorous bounds on the solution are necessary whenever reliable results are needed; hence they can support the analysis, and, e.g., stability or robustness of the solution may be proven or falsified. The theoretical results are illustrated, and the trigonometric spline bounds are compared to the spectral bounds, by means of three examples that include an anisotropic rotor-bearing system and a parametrically excited cantilever beam.

## Keywords

Time-periodic system · Initial value problem · Weighted norm · Trigonometric approximation · Chebyshev projection

## 1 Introduction

To analyze the vibration behavior of a system completely, one has to consider all its components individually. For large-scale systems such a detailed analysis is often not feasible; hence all system components are combined into a single quantity, e.g., the Euclidean norm or any other norm. This simplification is a rough measure of the vibration behavior of the system and therefore does not show its exact behavior. In this paper, we derive bounds on the norm of the solution of linear time-periodic systems. We investigate various norms, and with the respective bounds on the solution, the vibration analysis of the system and its transient analysis can be supported and, e.g., stability and robustness can be analyzed.

Linear time-invariant systems arise in many fields of application, e.g., via linearization of vibrational systems [33], and have been an active area of research. Their solution is defined via the matrix exponential, and it can be evaluated numerically by methods for ordinary differential equations, e.g., Runge–Kutta methods [12], or by computing the matrix exponential directly [13, 21]. Two-sided bounds for the solution of linear time-invariant systems have been investigated in a series of papers [17, 18]. A time-varying system in general does not possess a closed-form solution (unless the system matrix commutes for any two times). Hence, the theory derived in [17, 18] cannot easily be extended to a general linear time-varying system with an infinite time horizon. In this paper, we investigate linear time-periodic systems and generalize the theory of bounds on the solution to them while using the solution structure given by Floquet’s theory [8]. In general, the Floquet normal form is non-constructive, hence an approximation by numerical methods is needed, e.g., as in [30, 31]. As long as the approximation is not exact, it involves an error, and the bounds on the solution of the approximated system then may not be valid w.r.t. the solution of the original linear time-periodic system. In [29], the stability of a linear time-periodic system is analyzed by an approximation approach with quadratic polynomials. We generalize this idea in three different ways. Firstly, we use trigonometric splines [27, 28], which can be seen as a natural choice for time-periodic systems, since they mimic their time-periodicity. Here, we show bounds on the solution for quadratic trigonometric splines. In general, bounds can be derived for higher orders as well, as long as the method converges, see e.g., [25]; however, a spline approximation of order 4 or larger is divergent [20].
Secondly, we do not limit ourselves to quadratic polynomials but use a general framework such that the polynomial approximation can be performed to any desired degree by Chebyshev projections. In [30, 31], numerical methods based on Chebyshev projection have already been considered to solve linear time-periodic systems. We generalize the integration [30] and differentiation [31] schemes within a general framework. Here, we do not approximate the solution but the time-varying system matrix by Chebyshev polynomials [5]. We use results from approximation theory [36] in order to obtain bounds on the approximated system. The solution of the approximated system is then entire and has an infinite series representation. Hence, it can be truncated, and a bound on the truncation error is derived. Within this approximation framework we show that the truncated solution of the approximated system converges to the original solution of the linear time-periodic system, which is a very important extension of the work in [30, 31]. Thirdly, and most importantly, the trigonometric splines and the Chebyshev approximation framework yield rigorous bounds on the solution of a linear time-periodic system, i.e., we do not only approximate the solution by the aforementioned methods but obtain bounds on the solution as well. These bounds essentially behave like the approximated solutions, i.e., they converge to the original solution at the same rate. Due to their rigor, transient analysis of the original linear time-periodic system can be supported by stability and robustness analysis of the aforementioned bounds. The ideas and bounds for trigonometric splines and Chebyshev projections can also be applied to general time-varying systems over a finite time interval.

The paper is structured as follows. In Sect. 2, we collect preliminaries on existence and uniqueness of the solution and on Floquet’s Theorem. In Sect. 3, rigorous bounds are obtained due to the structure of the solution. In Sect. 3.1 we summarize results for linear time-invariant systems [18, 19]; two-sided bounds on the solution, obtained with the differential calculus of norms, e.g., in [15, 16, 17], are shown. In Sect. 3.2 we generalize the results from time-invariant to time-periodic systems. Here, the matrix logarithm of the monodromy matrix w.r.t. the length of the period takes the role of the time-invariant coefficient matrix. A newly defined time-dependent norm yields two-sided bounds and properties such as decoupling, vibration suppression and monotonicity of the solution as well. This is a generalization of the time-invariant case in [19]. We use and explain two methods to solve the linear time-periodic system. The first one is described in Sect. 4, where we approximate the solution of the system by trigonometric splines following ideas in [20, 24, 25] and then establish bounds on the quality of the approximation. The second method is the so-called spectral method [11, 26], which is explained in the setting of polynomial approximation of linear ordinary differential equations [10] in Sect. 5. We derive an upper bound based on the approximation quality and show its convergence to the solution of the linear time-periodic system. We conclude our theory on rigorous bounds for time-periodic systems with some remarks about convergence and computational complexity and show its effectiveness in Sect. 6 on various examples, which include an anisotropic rotor-bearing system and a parametrically excited cantilever beam.

## 2 Preliminaries

We consider a linear time-periodic system (1) with a system matrix of period *T* and a given initial condition.

Throughout this paper, we denote with \({\mathcal {C}}(X,Y)\) the space of continuous functions and with \({\mathcal {C}}^k(X,Y)\) the space of *k*-times continuously differentiable functions that map the domain \(X \subseteq {\mathbb {R}}\) to its range \(Y\subseteq {\mathbb {R}}^{n\times n}\).

### 2.1 Existence and uniqueness of a solution

First of all, we pose the question whether a solution to (1) exists and, if it does, whether it is unique. We therefore cite a global existence and uniqueness result from [6] in the context of linear systems. Here, the periodicity of the system matrix can be omitted.

### **Proposition 1**

Let \(A\in {\mathcal {C}}({\mathbb {R}},{\mathbb {R}}^{n\times n})\). Then there exists a unique solution *x*(*t*) of (1).

### 2.2 Floquet’s Theorem

The most fundamental result in the setting of linear time-periodic systems is Floquet’s Theorem [8]. Originally, it was given for a scalar ordinary differential equation of order \(m>1\); here we follow the presentation for a linear system of ordinary differential equations as given, e.g., in [22].

### **Proposition 2**

There exists a constant matrix *L* such that

This representation is called the *Floquet normal form*, since the structure of the solution to (1) is given by Floquet’s Theorem as

The eigenvalues of *L*, also known as Floquet exponents, determine the asymptotic behavior of the system. The real parts of the Floquet exponents are called Lyapunov exponents. The zero solution is asymptotically stable if all Lyapunov exponents are negative. It is stable if all Lyapunov exponents are non-positive and, whenever a Lyapunov exponent vanishes, the geometric and algebraic multiplicities of the corresponding eigenvalue coincide. Otherwise, the zero solution is unstable.

The proof of Floquet’s Theorem is non-constructive, hence one needs other methods and/or bounds to approximate the solution. Nevertheless, determining the fundamental solution (4) in the interval [0, *T*] is sufficient due to its semigroup property given in Eq. (2).
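Since Floquet’s Theorem is non-constructive, the monodromy matrix \(\varPhi (T)\) is in practice obtained numerically. The following sketch (not part of the paper; it assumes SciPy and a hypothetical scalar example \(\dot{x}=\cos (t)\,x\) with \(T=2\pi \)) integrates the columns of the fundamental matrix over one period and reads off Floquet multipliers and exponents:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(A, T, n):
    """Fundamental matrix Phi(T), integrated column by column."""
    cols = []
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, T), e,
                        rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.column_stack(cols)

# Hypothetical example: x' = cos(t) x with period T = 2*pi; here
# Phi(T) = exp(int_0^T cos) = 1, so the Floquet exponent is zero.
A = lambda t: np.array([[np.cos(t)]])
T = 2 * np.pi
M = monodromy(A, T, 1)
mult = np.linalg.eigvals(M)      # Floquet multipliers
exps = np.log(mult) / T          # Floquet exponents (principal branch)
```

By the semigroup property (2), this one-period computation already determines the solution for all times.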

## 3 Bounds for time-dependent norm

We generalize results obtained for constant linear systems in [18] and [19] to time-periodic systems. First, we recall those results in order to base our generalization on them. We consider the general case in which the constant coefficient matrix is non-diagonalizable; the results for a diagonalizable matrix are stated in [18] and [19]. Basically, the difference for a diagonalizable matrix is that the algebraic and geometric multiplicities of each eigenvalue coincide; hence each Jordan block has size one.

### 3.1 Time-invariant setting

*u*and

*A*, respectively. Let \(v_k^{(i)}\) for \(k=1,\ldots ,m_i\) be the chain of right principal vectors, i.e.

Let *r* be the number of Jordan blocks and \(m_i\) the algebraic multiplicity of the eigenvalue \(\lambda _i\). Then define the following matrices:

The matrix *L* replaces the time-invariant system matrix in [18]. We recall the following results, given in Propositions 3 and 4 and Lemma 1 from [18], for a time-invariant system given in Eq. (5) and a possibly non-diagonalizable system matrix *L*.

### **Proposition 3**

For \(k=1,\ldots ,m_i,\, i=1,\ldots ,r\), \(R_i^{(k,k)}\) and \(R_i\) are positive semi-definite and *R* is positive definite.

Hence, \(\Vert \cdot \Vert _R\) is a norm defined by \(\Vert v\Vert _R^2 = (Rv,v),\, v \in {\mathbb {C}}^n\) and \(\Vert \cdot \Vert _{R_i}\) is a semi-norm defined by \(\Vert v\Vert _{R_i}^2 = (R_iv,v),\, v\in {\mathbb {C}}^n\). In general, \(\Vert \cdot \Vert _{R_i}\) does not fulfill the definiteness property. Furthermore, the square of the semi-norm \(\Vert \cdot \Vert _{R_i}^2\) has a decoupling and filter effect shown by the next proposition [18].

### **Proposition 4**

Let *z*(*t*) be the solution to the IVP (5) and

The polynomials \(p_{x_0,k-1}^{(i)}(t)\) in Eq. (6) are due to the Jordan blocks, hence to the non-diagonalizability of the matrix *L*; i.e., if the matrix *L* is diagonalizable, then all polynomials in (6) are constant.

### **Lemma 1**

Lemma 1 shows the connection to the Euclidean norm of the function \(\psi \). By the equivalence of norms in finite-dimensional vector spaces, a two-sided bound \(c\Vert \psi (t)\Vert _p\le \Vert x(t)\Vert _R \le C\Vert \psi (t)\Vert _p\) with \(1\le p \le \infty \) can be derived. For \(p=2\), the constants *c*, *C* can be chosen as unity by Lemma 1.
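The decoupling behind these results can be checked numerically for a diagonalizable matrix. The sketch below is a hypothetical example, not from the paper; it assumes the common construction \(R=V^{-*}V^{-1}\) from the eigenvector matrix \(V\), which reproduces the decoupling of \(\Vert z(t)\Vert _R^2\) into modal contributions:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical diagonalizable coefficient matrix with eigenvalues -1, -3.
L = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
lam, V = np.linalg.eig(L)
Vinv = np.linalg.inv(V)
R = Vinv.conj().T @ Vinv          # assumed weight: ||v||_R = ||V^{-1} v||_2

x0 = np.array([1.0, 1.0])
c = Vinv @ x0                     # modal coefficients of the initial value
t = 0.7
z = expm(L * t) @ x0              # solution of z' = L z, z(0) = x0

norm_R_sq = z.conj() @ R @ z      # ||z(t)||_R^2
decoupled = np.sum(np.abs(c) ** 2 * np.exp(2 * t * lam.real))
```

Here `norm_R_sq` and `decoupled` agree: in the \(R\)-norm each eigenvalue contributes a pure exponential \(e^{2t\,\hbox{Re}\lambda _i}\), with the imaginary parts filtered out.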

### 3.2 Time-periodic setting

In the following we denote with \(B^{-*}\) the inverse of the conjugate transpose of *B*, i.e. \(B^{-*}=(B^*)^{-1}=(B^{-1})^*\). First, we show that the matrix \(\tilde{R}(t)\) is Hermitian, positive definite and bounded for any \(t\in {\mathbb {R}}\) under the right assumptions on *R*. For the definition of a more general time-dependent norm, see [32].

### **Lemma 2**

Let *R* be Hermitian and positive definite and \(\tilde{R}(t)=Z^{-*}(t)RZ^{-1}(t)\), where *Z*(*t*) is defined by Floquet’s normal form (4). Then

- 1. \(\tilde{R}(t)\) is positive definite for all \(t \in {\mathbb {R}}\),
- 2. \(\tilde{R}(t)\) is Hermitian for all \(t\in {\mathbb {R}}\),
- 3. \(\tilde{R}(t)\) is *T*-periodic, i.e., \(\tilde{R}(t)=\tilde{R}(T+t)\) for all \(t\in {\mathbb {R}}\), and
- 4. \(\tilde{R}(t)\) is bounded, i.e., there exist \(c,C>0: c\le \Vert \tilde{R}(t)\Vert \le C\) for all \(t\in {\mathbb {R}}\).

### *Proof*

- 1. Choose *u* and *t* arbitrarily but fixed and let \(\tilde{u}=Z^{-1}(t)u\). Then
$$\begin{aligned} u^*\tilde{R}(t)u = u^*Z^{-*}(t)RZ^{-1}(t)u=\tilde{u}^*R\tilde{u} \ge 0 \end{aligned}$$
for all \(\tilde{u} \in {\mathbb {C}}^{n}\), since *R* is positive definite. Moreover,
$$\begin{aligned} \tilde{u}^*R\tilde{u}=0 \Leftrightarrow \tilde{u}=0 \Leftrightarrow \tilde{u}=Z^{-1}(t)u=0 \Leftrightarrow u=0, \end{aligned}$$
since *Z*(*t*) has full rank and is invertible for all *t*.
- 2. \(\tilde{R}(t)\) is Hermitian, since *R* is Hermitian.
- 3. \(\tilde{R}(t)\) is *T*-periodic, since *Z*(*t*) is *T*-periodic.
- 4. \(Z^{-1}(t)=e^{Lt}\varPhi ^{-1}(t)\) and \(Z^{-*}(t)=\varPhi ^{-*}(t)e^{L^*t}\) are continuous and periodic with period *T*. Note that, since \(\varPhi (t)\) is a fundamental matrix, \(\varPhi ^{-1}(t) = \varPhi (-t)\) holds [22]. Hence \(\tilde{R}(t)\) and \(p: t \mapsto \Vert \tilde{R}(t) \Vert \) are continuous and periodic as well. Due to the extreme value theorem [9], *p* attains its minimum *c* and maximum *C* at some \(t_c \in \left[ 0,T \right] \) and \(t_C \in \left[ 0,T \right] \), respectively. Since *p* is periodic, it can be bounded globally: \(c \le \Vert \tilde{R}(t)\Vert \le C\). Since \(\tilde{R}(t)\) has full rank for all \(t\in {\mathbb {R}}\), \(\tilde{R}(t_c)\) has full rank and hence \(\tilde{R}(t_c) \ne 0\) and therefore \(c>0\), i.e.,
$$\begin{aligned} \exists c,C>0: c \le \Vert \tilde{R}(t)\Vert \le C \quad \forall t \in {\mathbb {R}}. \end{aligned}$$
\(\square \)

### **Theorem 1**

Let *L* be a complex matrix such that it fulfills (3) and let *z*(*t*) be the solution to the IVP (5). Then

### *Proof*

The relation \(\Vert x(t) \Vert _{\tilde{R}(t)}^2 = \Vert z(t) \Vert _{R}^2\) is given by (10) and \(\Vert z(t) \Vert _{R}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t \hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}\) is given by Proposition 4. \(\square \)

By Proposition 4, a *decoupling* and *filter effect* of the semi-norms \(\Vert \cdot \Vert _{R_i^{(k,k)}}^2\) for \(k=1,\ldots ,m_i\) and \(i=1,\ldots ,r\) is shown, which carries over to the norms \(\Vert \cdot \Vert _R^2\) by Proposition 4 and \(\Vert \cdot \Vert _{\tilde{R}(t)}^2\) by Theorem 1. *Decoupling* and *filtering* are meant in the sense that we obtain a system of decoupled differential equations in which only the real parts of the eigenvalues are passed and the imaginary parts are suppressed. The semi-norms suppress vibration in this sense of decoupling and filtering, which is given by Corollary 1.

### **Corollary 1**

- If *L* is diagonalizable, then
$$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}^2 = \sum _{i=1}^{n} \left\| x_0\right\| _{R_i}^2 e^{2t\hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}. \end{aligned}$$
- If *L* is non-diagonalizable, then
$$\begin{aligned} \Vert x(t) \Vert _{\tilde{R}(t)}^2 = \sum _{i=1}^{r}\sum _{k=1}^{m_i} \left| p_{x_0,k-1}^{(i)}(t)\right| ^2 e^{2t\hbox {Re}{\lambda _i}} \quad \text{ for } t \in {\mathbb {R}}. \end{aligned}$$

If the spectral abscissa \(\nu [L]=\max _{i=1,\ldots ,r} \hbox {Re}\lambda _i \) is negative, i.e., \(\nu [L]<0\), and \( d = \max _{i=1,\ldots ,r}\max _{k=1,\ldots ,m_i} {\text {degree}}(p_{x_0,k-1}^{(i)}(t)),\) then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) behaves essentially like \(t^d e^{-t}\), i.e., there exists \(t_1>0\) such that \(\Vert x(t) \Vert _{\tilde{R}(t)}\searrow 0\) (monotonic decrease) for \(t\ge t_1\) as \(t \rightarrow \infty \). If the matrix *L* is diagonalizable and the spectral abscissa is nonzero, then one can conclude monotonic behavior in \(\Vert \cdot \Vert _{\tilde{R}(t)}\), since no Jordan block occurs.

- 1. If the spectral abscissa \(\nu [L]=\max _{i=1,\ldots ,n} \hbox {Re}\lambda _i <0\) for a diagonalizable matrix *L*, then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) tends monotonically to zero, i.e., \(\Vert x(t) \Vert _{\tilde{R}(t)} \searrow 0\) as \(t \rightarrow \infty \).
- 2. If all eigenvalues have positive real part, i.e., \(\hbox {Re}\lambda _i >0\) for \(i=1,\ldots ,r\), then \(\Vert x(t) \Vert _{\tilde{R}(t)}\) tends monotonically to infinity, i.e., \(\Vert x(t) \Vert _{\tilde{R}(t)} \nearrow \infty \) as \(t \rightarrow \infty \). In general, if a mechanical system vibrates with increasing amplitude, it will eventually collapse.

## 4 Trigonometric spline bound

In [20], the authors introduced a spline-approximation method to solve ODEs. This idea was further developed by many other researchers, see e.g., [23, 24] and [25], who used trigonometric B-splines of second and third order to solve nonlinear ODEs. We use a modified approach in order to apply it to a linear system of ODEs and further equip the computation with rigorous bounds [4]. The unknown quantities are the coefficients of the trigonometric splines. While in the nonlinear approach one has to solve a sequence of nonlinear systems, this simplifies here to a sequence of structured linear systems. Hence, a decrease of computational complexity and an effective speed-up is achieved. For further details on trigonometric splines we refer the interested reader to [27] and [28].

We consider functions mapping [0, *T*] to \({\mathbb {R}}^n\). For a function \(x\in {\mathcal {L}}^\infty ([0,T],{\mathbb {R}}^n)\), its essential supremum serves as an appropriate norm:

The solution *x*(*t*) to (1) is approximated by splines. Due to the periodicity of our initial problem (1), trigonometric splines are chosen, which mimic the behavior of the periodic system matrix *A*(*t*). In order to perform a spline interpolation, we need a node sequence, and for the sake of simplicity we choose \(r+1\) equidistant nodes \(\varOmega _r=\left\{ t_0,\ldots ,t_r\right\} \) in the interval [0, *T*] with \(t_0=0\) and \(t_r=T\), i.e., \(t_i=ih\) for \(i=0,1,\ldots ,r\) with \(h=\frac{T}{r}\). The restriction of a quadratic trigonometric spline to any subinterval \([t_i, t_{i+1}]\) is a linear combination of \(\left\{ 1,\cos (t),\sin (t)\right\} \). Trigonometric B-splines \(S_i(t)\) are defined by

A trigonometric B-spline \(S_i(t)\) is shown in Fig. 1. As can be seen there, for any inner subinterval \([t_i,t_{i+1}]\) with \(0<i<r\), the spline \(S_i(t)\) is fully described. For the intervals \([t_0,t_1]\) and \([t_{r-1},t_r]\), artificial intervals \([t_{-1},t_0]\) and \([t_r,t_{r+1}]\) have to be included in the definition of \(S_i(t)\) such that the restriction to the respective subinterval is still a linear combination of the functions \(1,\cos (t)\) and \(\sin (t)\). If we denote by \(S_2(\varOmega _r)\) the space of quadratic trigonometric splines on [0, *T*] w.r.t. the nodes \(\varOmega _r\), then \(S_2(\varOmega _r) = \mathop {\mathrm {span}} \left\{ S_i\right\} _{i=-1}^r\). Hence every quadratic trigonometric spline can be expressed in the form \(\sum _{i=-1}^r{\alpha _i S_i(t)}\). The summation index *i* runs from \(-1\) to *r*; it does not correspond to the number of nodes but to the intervals \([t_i,t_{i+1}]\) for \(i=-1,\ldots ,r\), which include the aforementioned artificial intervals. In our case, the coefficients \(\alpha _i\) are unknown and have to be determined.
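The local structure of such a B-spline can be sketched directly. The following snippet uses one common unnormalized piecewise definition built from \(\sin (\cdot /2)\) factors; normalization constants differ between references [27, 28], so this is an illustrative assumption, not necessarily the paper's exact formula. The piecewise function has support \([t_{i-1},t_{i+2}]\) and is \({\mathcal {C}}^1\) across the knots:

```python
import numpy as np

def trig_bspline(t, ti, h):
    """Quadratic trigonometric B-spline centered at node ti with step h.

    Unnormalized illustrative variant; support is [ti - h, ti + 2h],
    each piece is a linear combination of {1, cos t, sin t}."""
    s = lambda u: np.sin(u / 2.0)
    if ti - h <= t <= ti:
        return s(t - (ti - h)) ** 2
    if ti <= t <= ti + h:
        return (s(t - (ti - h)) * s(ti + h - t)
                + s(ti + 2 * h - t) * s(t - ti))
    if ti + h <= t <= ti + 2 * h:
        return s(ti + 2 * h - t) ** 2
    return 0.0
```

A quick check confirms that the spline vanishes at the ends of its support, is positive inside it, and joins continuously at the inner knots.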

Demanding that the spline *s* fulfills the ODE (1), i.e., \(\dot{s}(t_i)=A(t_i)s(t_i)\) at the nodes \(t_i\) for \(i=0,\ldots ,r\), one obtains a sequence of \(r+1\) linear systems

*i*-th node \(t_i\)

*n*-dimensional identity matrix and the initial condition \(s(t_0)=x_0\) yields \(\alpha ^{(-1)} = \cos {\left( \frac{h}{2}\right) }x_0 - \sin {\left( \frac{h}{2}\right) }A(t_0)x_0\).

Nikolis investigated this procedure for nonlinear systems [23], where one does not solve a sequence of linear systems but a sequence of nonlinear systems by an iterative method such as Newton’s method. In fact, trigonometric splines are L-splines [28]; here, *L* corresponds to a certain linear differential operator, which in our case is \(L_3 x:=x'''+x'\), where *x* is the solution of (1). The convergence result for nonlinear systems carries over to the linear case and is stated in Proposition 5.

### **Proposition 5**

(Nikolis [23]) For \(A\in {\mathcal {C}}^2([0,T],{\mathbb {R}}^{n \times n})\), the quadratic trigonometric spline converges quadratically to the solution, more precisely \(\Vert x-s \Vert _{\infty } = {\mathcal O}(\Vert L_3 x \Vert _\infty r^{-2})\).

The following rigorous upper bound is based on Proposition 5, see [4].

### **Theorem 2**

where *h* is sufficiently small, i.e., \(L|\tan {\left( \frac{h}{2}\right) }|<1\) with *L* the Lipschitz constant of the ODE (1), and \(L_3 x=x'''+x'\).

Since the proof of Theorem 2 is lengthy, it is given in the Appendix. The spline and the upper bound converge to the solution and to the norm of the solution by Proposition 5 and Theorem 2, respectively, as \(h\rightarrow 0\).

## 5 Spectral bound by Chebyshev projections

The key idea is to replace the system (1) by an approximation. We use the spectral method [11, 34] in the setting of polynomial approximation of linear ordinary differential equations [3, 10]. The solution of the approximated system is entire, and hence the truncation error of the approximated solution can be given. Here, we approximate the system matrix by Chebyshev polynomials [5] and use results from approximation theory [36] in order to derive rigorous bounds on the original solution *x*(*t*). As preliminaries, we need some results from approximation theory; we focus on Chebyshev polynomials, which were introduced in [5], and Chebyshev projections. We follow the presentation of Chebyshev projections in [36]. Any approximation could be used to replace the original system, but our focus is on Chebyshev polynomials, since they minimize the maximal error, which is a property we seek for the previously introduced bounds. In Sect. 5.2 we explain the general idea of the spectral method and how we use the results from approximation theory to derive bounds. The bound depends heavily on how well the original system is approximated.

### 5.1 Chebyshev polynomials and projections

Every Lipschitz continuous function *f* has a unique representation as a Chebyshev series [36],

The *m*-truncated Chebyshev series is defined as

where \({\mathcal {P}}_m\) denotes the space of polynomials of degree at most *m*. Clearly, the Chebyshev polynomials \(T_k\), \(k=0,1,\ldots ,m\), form a basis of \({\mathcal {P}}_m\). Let \({\mathcal {C}}\) be the space of continuous functions. Then \(P_m:{\mathcal {C}}\rightarrow {\mathcal {P}}_m\) defined by (17) is a linear operator; it is called the Chebyshev projection, since \(P_m p = p\) for any \(p \in {\mathcal {P}}_m\) and \(P_m T_k = 0\) for \(k>m\). We recall the following two propositions given in [35, 36], which are essential for the derivation of our spectral bounds.
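As a concrete illustration of the coefficients \(c_k\), the following sketch computes approximate Chebyshev coefficients from function values at the Chebyshev–Gauss–Lobatto points (i.e., the interpolation variant discussed at the end of this section, used here as a stand-in for the exact projection integrals; this is an assumption for illustration only):

```python
import numpy as np

def cheb_coeffs(f, N):
    """Approximate Chebyshev coefficients c_0..c_N of f on [-1, 1]
    via trapezoidal quadrature at the Chebyshev-Gauss-Lobatto points."""
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)                 # CGL points
    fx = f(x)
    w = np.ones(N + 1)
    w[0] = w[-1] = 0.5                        # halved end weights
    c = np.empty(N + 1)
    for k in range(N + 1):
        # T_k(x_j) = cos(pi * j * k / N)
        c[k] = (2.0 / N) * np.sum(w * fx * np.cos(np.pi * j * k / N))
    c[0] *= 0.5
    c[N] *= 0.5
    return c

# x^2 = 0.5*T_0 + 0.5*T_2, so the computed coefficients recover 0.5, 0, 0.5.
c = cheb_coeffs(lambda x: x ** 2, 4)
```

For smooth functions the coefficients decay rapidly, which is exactly what Propositions 6 and 7 below quantify.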

### **Proposition 6**

If *f* and its derivatives through \(f^{(\nu -1)}\) are absolutely continuous on \([-1,1]\) and if the \(\nu \)-th derivative \(f^{(\nu )}\) is of bounded variation *V* for some \(\nu \ge 1\), then for any \(m>\nu \), the Chebyshev projection satisfies

### **Proposition 7**

If *f* is analytic in \([-1,1]\) and analytically continuable to the open Bernstein ellipse \({\mathcal {E}}_\rho \), where it satisfies \(|f(t)|\le M\) for some *M*, then for each \(m\ge 0\) its Chebyshev projection satisfies

### 5.2 Spectral method and spectral bound

In (18), the system matrix is replaced by the Chebyshev projection of *A*, see (17). If \((P_{m} A)(t_1)\) commutes with \((P_{m} A)(t_2)\) for all times \(t_1\) and \(t_2\), then the solution to the approximated system (18) is given by \(y(t) = \exp \left( \int _0^t{(P_m A)(\tau ) \mathrm{d}\tau } \right) x_0\). In this case, *y*(*t*) is entire, since polynomials and their exponentials are entire. But in general the commutativity of \((P_m A)(t)\) is a rather strong assumption. Hence, we cite a more general result, which, e.g., is given in [6].

### **Proposition 8**

*u*(*t*) is the unique solution to the ODE

then *u* is also analytic at \(\tau \in {\mathbb {R}}\) with the same convergence radius \(\varrho \).

The solution *y*(*t*) of (18) is entire, since the function \((P_m A)(t)\) is a polynomial, which by definition is entire. If the approximation is exact, i.e., \(a_{ij}(t)\) is a polynomial of degree at most *m* for \(1\le i,j \le n\), then *x*(*t*) and *y*(*t*) coincide. In order to prove rigorous upper bounds on *x*(*t*), we use Propositions 6 and 7 to bound the difference between the original function *A* and its Chebyshev projection. These bounds depend on the smoothness of the system matrix *A*. Furthermore, define \(\gamma \) for a matrix function \(A:{\mathbb {R}} \rightarrow {\mathbb {R}}^{n\times n}\) as its maximal absolute entry, i.e.

We denote with \(\hbox {AC}\) the space of absolutely continuous functions and with \(\hbox {AC}^k\) the space of *k*-times differentiable functions such that \(f^{(j)}\in \hbox {AC}\) for \(0\le j \le k\).

### **Theorem 3**

If the *k*-th derivative \(a_{ij}^{(k)}\) is of bounded variation *V* for all \(i,j=1,\ldots ,n\), then for any \(m>k>0\):

### **Theorem 4**

If each entry \(a_{ij}\) is analytic in [0, *T*] and analytically continuable to the open Bernstein ellipse \({\mathcal {E}}_\rho \), where it satisfies \(|a_{ij}(t)| \le M\) for all \(i,j=1,\ldots ,n\) for some *M*, then for any \(m\ge 0\):

The proofs of Theorems 3 and 4 can be combined, but Gronwall’s lemma is needed for this. Here, we use the integral version due to R. Bellman [2], which is given, e.g., in [38].

### **Lemma 3**

Let \(g\), \(\alpha \) and \(\beta \) be continuous on [*a*, *b*] and \(\beta (t)\ge 0\). Assume *g*(*t*) satisfies

Now we return to the proof of Theorems 3 and 4.

### *Proof*

The solutions *x*(*t*) and *y*(*t*) fulfill the integral formulation of the ODE

- 1. If the assumptions of Theorem 3 are fulfilled, then
$$\begin{aligned} \Vert A(s)-(P_m A)(s)\Vert _\infty = \max \limits _{1 \le i \le n} \sum _{j=1}^n \underbrace{| a_{ij}(s)-(P_m a_{ij})(s) |}_{\le \frac{2V}{\pi k(m- k)^k}}\le \frac{2nV}{\pi k(m- k)^k}. \end{aligned}$$
Therefore,
$$\begin{aligned} \Vert x(t)-y(t)\Vert _\infty \le \beta \int _{0}^t \Vert x(s)-y(s)\Vert _\infty \mathrm{d}s + \frac{2nV}{\pi k(m- k)^k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s, \end{aligned}$$
and applying Gronwall’s lemma with
$$\begin{aligned} g(t)&= \Vert x(t)-y(t)\Vert _\infty , \\ \alpha (t)&= \frac{2nV}{\pi k(m- k)^ k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s \end{aligned}$$
and \(\beta ={\text {const}}>0\) yields
$$\begin{aligned} \Vert x(t)-y(t)\Vert _\infty \le \frac{2nV e^{t n\gamma } }{\pi k(m- k)^ k} \int _{0}^t \Vert y(s)\Vert _\infty \mathrm{d}s. \end{aligned}$$
(23)
With the reverse triangle inequality the theorem follows.
- 2. If the assumptions of Theorem 4 are fulfilled, then
$$\begin{aligned} \Vert A(s)-(P_m A)(s)\Vert _\infty = \max \limits _{1 \le i \le n} \sum _{j=1}^n \underbrace{| a_{ij}(s)-(P_m a_{ij})(s) |}_{\le \frac{2M\rho ^{-m}}{\rho -1}}\le \frac{2nM\rho ^{-m}}{\rho -1}. \end{aligned}$$
The remaining proof is analogous to the previous case. \(\square \)

The solution *y* is entire due to Proposition 8. Hence by Proposition 7, \(\Vert y(t)-(P_m y)(t)\Vert _\infty \le \frac{2M\rho ^{-m}}{\rho -1}\) in the Bernstein ellipse \({\mathcal {E}}_\rho \), where it satisfies \(|y_i(t)|\le M\) for some *M* and \(i=1,\ldots , n\). The Chebyshev projections of *A* and *y* do not necessarily have the same degree; hence in the following we distinguish them by their subscripts: the index *A* refers to the matrix function *A* and the index *y* to the solution of (18). For a higher-order approximation by Chebyshev projections one hopes to obtain a sharper upper bound. This convergence result is established by the following inequality, which is due to Eq. (23) in the proof of Theorems 3 and 4. For a matrix function *A* with the assumptions of Theorem 3, we obtain

For an analytic matrix function *A*, we obtain

Both bounds tend to the solution *x* for better approximation levels \(m_A\) and \(m_y\), i.e., \(P_{m_y} y\rightarrow x\) as \(m_A,m_y\rightarrow \infty \). In the first case, the rate of convergence is of order *k*, while for an analytic matrix function *A* one obtains geometric convergence.

If the matrix function *A* is analytic (as in Theorem 4), one does not need to replace the original system by (18), since even for the original system the solution is analytic by Proposition 8. But for the sake of completeness we derived bounds in this case as well, and these bounds are very tight for moderate \(m_A\), as shown in Sect. 6.
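The first example of Sect. 6, \(\dot{x}(t)=|\sin (2\pi t)|^3x(t)\), lends itself to a small numerical check of the spectral idea. The sketch below assumes NumPy/SciPy and uses Chebyshev interpolation of the scalar entry \(a(t)\) as a stand-in for the projection \(P_{m_A}A\) (cf. the interpolation remark at the end of this section); it compares the solutions of the original and the approximated system at \(t=1\):

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.integrate import solve_ivp

a = lambda t: np.abs(np.sin(2 * np.pi * t)) ** 3   # scalar "system matrix"
m = 40                                             # approximation degree m_A
s = np.cos(np.pi * np.arange(m + 1) / m)           # Chebyshev points in [-1, 1]
coef = C.chebfit(s, a(0.5 * (s + 1.0)), m)         # interpolant of a on [0, 1]
pa = lambda t: C.chebval(2.0 * t - 1.0, coef)      # polynomial stand-in for P_m a

x = solve_ivp(lambda t, v: a(t) * v, (0, 1), [1.0], rtol=1e-10, atol=1e-12)
y = solve_ivp(lambda t, v: pa(t) * v, (0, 1), [1.0], rtol=1e-10, atol=1e-12)

# Exactly, x(1) = exp(int_0^1 |sin(2 pi t)|^3 dt) = exp(4/(3*pi)); the exponent
# 4/(3*pi) ~ 0.4244 matches the spectral abscissa reported in Sect. 6.
err = abs(x.y[0, -1] - y.y[0, -1])
```

Since \(a\) is only \({\mathcal {C}}^2\), the difference `err` decays algebraically in \(m\), consistent with the rates in Table 1.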

Table 1: Convergence for trigonometric spline and spectral bound

| Smoothness of \(A\) | Trigonometric spline bound | Spectral bound |
|---|---|---|
| \({\mathcal {C}}^{2}\) | \({\mathcal O}(\Vert L_3 x\Vert _\infty r^{-2})\) | \({\mathcal O}(Vm_A^{-1})\) |
| \(\hbox {AC}^{k-1}\), \(1\le k<3\) | – | \({\mathcal O}(Vm_A^{-k})\) |
| \(\hbox {AC}^{k-1}\), \(k\ge 3\) | \({\mathcal O}(\Vert L_3 x\Vert _\infty r^{-2})\) | \({\mathcal O}(Vm_A^{-k})\) |
| Analytic | \({\mathcal O}(\Vert L_3 x\Vert _\infty r^{-2})\) | \({\mathcal O}(M_A\rho _A^{-m_A})\) |

Similar results can be obtained for interpolation instead of Chebyshev projection. In this context, the main question concerns the interpolation points. If Chebyshev points are chosen, then the Chebyshev interpolant satisfies Propositions 6 and 7 with an additional factor 2, see e.g., [36]. Hence, one can obtain results such as Theorems 3 and 4 with the same additional factor.

## 6 Overview and numerical results

The upper bounds depend on the smoothness of the system matrix *A*, as indicated by Theorem 2 and Propositions 6 and 7. In Table 1, the convergence rates for the trigonometric spline bound defined in Theorem 2 and the spectral bounds defined in Eqs. (26) and (27) are given for various function classes; they are visualized in Figs. 10 and 11. The computational complexity for the trigonometric spline bound is dominated by computing the spline solution. Trigonometric splines with compact support, i.e., trigonometric B-splines, are chosen due to the local influence of each spline. For general splines, a linear system of dimension \(n(r+1)\times n(r+1)\) has to be solved, while for B-splines, \(r+1\) systems of dimension \(n\times n\) have to be solved. Hence, the computational complexity for trigonometric B-splines is \({\mathcal O}(n^3(r+1))\). For the spectral bound, each element of the system matrix *A* has to be approximated, which can be done by the *fast Fourier transform (FFT)* in \({\mathcal O}((m+1)\log (m+1))\). The convergence of the trigonometric spline bound is *local*, i.e., a trigonometric spline \(S_i\), as visualized in Fig. 1, converges on its support \({\mathrm {supp}}(S_i)=\left\{ t \in [0,T]: S_i(t)\ne 0\right\} = [t_{i-1},t_{i+2}]\) to the solution. The spectral bound converges *globally*, i.e., on the whole interval [0, *T*], to the solution. The rigorous bounds are illustrated for three examples, which can all be described by a time-periodic system of the form (1). An overview of the settings is given in Table 2. In the following, the parameters *r* and \(m_A\) of the trigonometric spline bound and the spectral bound, respectively, are chosen such that, firstly, a visible difference between the solution and its respective upper bounds can be seen and, secondly, an effect of the parameters can be noticed. If the order of the Chebyshev projection \(m_A\) is increased slightly in Figs. 3, 5 and 7, the spectral bound cannot be distinguished from the original solution. This observation does not hold for the trigonometric spline bound, since its convergence is slower, see Table 1 and Fig. 10 compared to Fig. 11b. But for a larger number of nodes *r*, the trigonometric spline bound tends to the solution by Proposition 5, compare Figs. 3, 5 and 7. The computation of global extrema is not an easy task due to the possibly large number of local minima and maxima of the objective function [14, 37]. The constants \(L,\Vert L_3 x\Vert _\infty \) and \(\gamma \) are determined by the *fminsearch* routine in MATLAB.

But in general only a local minimum is found by *fminsearch*; therefore, we combined it with a *Global Search* strategy of the *Global Optimization Toolbox* in MATLAB.^{1} The computed values of \(L\), \(\Vert L_3 x\Vert _\infty \) and \(\gamma \) are given in Table 3. They are used in the figures mentioned above and also appear in the convergence rates of the methods in Table 1. Note that the parameters \(\rho _A\) and \(M_A\) of the spectral bound are not unique; in particular, any Bernstein ellipse can be chosen since the function is entire. Here, we chose \(\rho _A\) according to the decay of the Chebyshev coefficients \(|c_k|\) given by (16); for the sake of simplicity the derivation is omitted, and for the appropriate examples \(\rho _A\) is given in Table 3. \(M_A\) is determined by the strategy mentioned above, i.e., by a combination of *fminsearch* and *Global Search*.
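Since each entry of *A*(*t*) is expanded in Chebyshev polynomials, its coefficients can be obtained from samples at Chebyshev points with a single real FFT, which is where the \({\mathcal O}((m+1)\log (m+1))\) cost quoted above comes from. The following sketch is a Python stand-in for that step (the sample function `np.exp` and the degree `m` are illustrative choices, not taken from the paper):

```python
import numpy as np

def cheb_coeffs(f, m):
    """Chebyshev interpolation coefficients a_0..a_m of f on [-1, 1],
    computed from samples at the Chebyshev (Lobatto) points with one
    real FFT, i.e., in O(m log m) operations."""
    j = np.arange(m + 1)
    x = np.cos(np.pi * j / m)              # Chebyshev points cos(pi*j/m)
    v = f(x)
    w = np.concatenate([v, v[m-1:0:-1]])   # even extension, length 2m
    W = np.real(np.fft.fft(w)) / m         # DFT of an even sequence is real
    a = W[:m + 1].copy()
    a[0] /= 2.0                            # first and last coefficients
    a[m] /= 2.0                            # carry weight 1/(2m) instead of 1/m
    return a

# for an entire function the coefficients decay geometrically,
# which is exactly what the Bernstein-ellipse parameters quantify
c = cheb_coeffs(np.exp, 20)
```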

**Table 2** Setting for trigonometric spline and spectral bound

**Table 3** Constants used for trigonometric spline bound and spectral bound

| Example | \(L\) | \(\Vert L_3x\Vert _\infty \) | \(\gamma \) | \(M_A\) (26) | \(\rho _A\) (26) | Constant in (27) |
|---|---|---|---|---|---|---|
| \(\dot{x}(t)=\vert \sin (2\pi t)\vert ^3x(t)\) | 1 | 122.2986 | 1 | – | – | \(4\pi ^3\) |
| Jeffcott rotor | 1.1474 | 9.9563 | 1.1111 | 1.1221 | 2.5688 | – |
| Cantilever beam | 41.7727 | \(7.3\times 10^4\) | 32 | \(7.8 \times 10^{30}\) | 6.0628 | – |
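The constants in Table 3 are global extrema of oscillatory functions, determined above with *fminsearch* combined with a *Global Search* strategy. The same multistart idea can be sketched in Python with scipy's Nelder–Mead (the simplex method underlying *fminsearch*); the objective below and the deterministic start grid are our illustrative choices, not the paper's actual constants:

```python
import numpy as np
from scipy.optimize import minimize

# illustrative objective: maximize |sin(2*pi*t)|^3 * exp(-t) on [0, 3];
# the peaks have unequal heights, so a single local search can get stuck
def f(t):
    return -np.abs(np.sin(2 * np.pi * t[0])) ** 3 * np.exp(-t[0])

best = None
for t0 in np.linspace(0.05, 2.95, 25):      # deterministic multistart grid
    res = minimize(f, x0=[t0], method="Nelder-Mead", bounds=[(0.0, 3.0)])
    if best is None or res.fun < best.fun:  # keep the best local optimum
        best = res

t_star, f_star = best.x[0], -best.fun       # global maximizer and maximum
```

Any single start near, say, \(t_0\approx 2.2\) would return only a local maximum; the multistart loop recovers the global one near \(t\approx 0.24\).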

The *R*-norm is a scaling, but since the single eigenvector is normalized, \(|\cdot |=\Vert \cdot \Vert _2=\Vert \cdot \Vert _\infty =\Vert \cdot \Vert _R\) holds. The weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillations and, since the spectral abscissa is positive, \(\nu [L]=0.424413181578411 >0\), a monotonic increase can be observed, cf. Corollary 1.
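For this scalar example the Floquet exponent is simply the period average of the coefficient, \(\nu [L]=\int _0^1 |\sin (2\pi t)|^3\,\mathrm {d}t = 4/(3\pi )\), which agrees with the value quoted above to the accuracy of the computation. A quick numerical check with standard quadrature (Python):

```python
import numpy as np
from scipy.integrate import quad

# Floquet exponent of  xdot = |sin(2*pi*t)|^3 x : the mean of the
# coefficient over one period (T = 1)
nu, err = quad(lambda t: np.abs(np.sin(2 * np.pi * t)) ** 3, 0.0, 1.0,
               points=[0.5], limit=100)

exact = 4.0 / (3.0 * np.pi)   # closed form of the integral
```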

The system matrix *A*(*t*) is entire with system dimension \(n=4\). The same parameter values are chosen as in [1]. This is an asymptotically stable system since the maximal Lyapunov exponent is \(\nu [L]=-0.002000131812440<0\). The results are illustrated in Fig. 5. The trigonometric spline bound for \(r=40,000\) is highly oscillatory, such that some components of its graph in Fig. 5 can no longer be distinguished. Nevertheless, the upper bound is valid. If one can further assume smoothness of the solution, interpolating the valleys of the oscillations would give a smoother upper bound.

*R*-norm and the weighted time-dependent \(\tilde{R}(t)\)-norm. The weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillations and, since the matrix *L* is diagonalizable and the spectral abscissa is negative, \(\nu [L]<0\), a monotonic decrease can be observed, cf. Corollary 1.
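The vibration-suppression property behind this observation can be reproduced in a few lines: for a diagonalizable matrix with negative spectral abscissa, the weighted norm built from the eigenvector matrix, \(R=V^{-*}V^{-1}\), decays strictly monotonically while the Euclidean norm oscillates. A small stand-in system (our choice of numbers, not one of the paper's examples):

```python
import numpy as np
from scipy.linalg import expm, eig, inv

# stable but oscillatory system: eigenvalues -0.1 +/- 1.9975i
L = np.array([[0.0, 1.0], [-4.0, -0.2]])
lam, V = eig(L)
R = inv(V).conj().T @ inv(V)            # weight R = V^{-*} V^{-1}

x0 = np.array([1.0, 0.0])
ts = np.linspace(0.0, 10.0, 400)
xs = [expm(L * t) @ x0 for t in ts]
norm2 = np.array([np.linalg.norm(x) for x in xs])                 # oscillates
normR = np.array([np.sqrt((x.conj() @ R @ x).real) for x in xs])  # monotone
```

In the weighted norm, \(\Vert x(t)\Vert _R=\Vert V^{-1}x(t)\Vert _2\), the modal amplitudes decouple, so every mode contributes a pure exponential \(e^{{\mathrm {Re}}\,\lambda _i t}\) and no oscillation survives.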

*m* finite elements. We chose the same parameter values as in [7]. The assembly of the mass, damping and stiffness matrices by \(m=4\) finite elements is well described in [7]. We used the aforementioned method, which results in a periodic system matrix of dimension \(n=16\). The parametric excitation frequency \(\nu \) is chosen as the parametric combination resonance of first order, \(\nu =|\varOmega _1 - \varOmega _2|=138.44\). Furthermore, we introduce a coordinate transformation *W*. Hence, the system (1) is not only given by the original system matrix *A*(*t*), but also by the coordinate transformation *W*, i.e., the system is given by the transformed system matrix \(W^{-1}A(t)W\). *W* is a diagonal matrix and is computed by the *balance* routine in MATLAB for *A*(*t*) at \(t=0\) in order to decrease the constant \(\gamma \) in (20). Of course, any \(t\in [0,T]\) could be chosen to determine a coordinate transformation, but our initial choice was sufficient to reduce \(\gamma \) by two orders of magnitude to \(\gamma =32\). The system is asymptotically stable since the maximal Lyapunov exponent is \(\nu [L]=-2.546655954908259\times 10^{-6}<0\).
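The effect of the *balance* routine can be illustrated with scipy's `matrix_balance`, a counterpart of MATLAB's *balance*. The badly scaled \(2\times 2\) matrix below is a made-up stand-in for \(A(0)\) of the beam model: the diagonal similarity transform equalizes row and column norms and thereby reduces the eigenvector conditioning, which is the quantity driving \(\gamma \):

```python
import numpy as np
from scipy.linalg import matrix_balance

A0 = np.array([[1.0, 1.0e4],
               [1.0e-4, 1.0]])     # badly scaled stand-in for A(0)

# diagonal similarity transform with powers of 2 (no rounding error)
B, W = matrix_balance(A0)

# conditioning of the eigenvector matrix before and after balancing;
# this ratio is what enters the constant gamma in the bound
gamma_like_before = np.linalg.cond(np.linalg.eig(A0)[1])
gamma_like_after = np.linalg.cond(np.linalg.eig(B)[1])
```

Balancing leaves the eigenvalues, and hence the stability statement, untouched, while the eigenvector conditioning drops by several orders of magnitude.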

*R*-norm and the weighted time-dependent \(\tilde{R}(t)\)-norm. Firstly, the weighted time-dependent norm \(\Vert \cdot \Vert _{\tilde{R}(t)}\) suppresses the oscillation of Fig. 8, and by Corollary 1 it is proven that the solution decreases monotonically since the matrix *L* is diagonalizable and its spectral abscissa is negative, \(\nu [L]<0\). Hence, the solution is asymptotically stable, i.e., in any norm the solution decays to zero as \(t\rightarrow \infty \). Even with a larger time horizon this effect is not visible in Fig. 8, but due to the vibration suppression it may easily be seen in Fig. 9. Secondly, the matrix \(\tilde{R}(t):=Z(t)^{-*}RZ^{-1}(t)\) for this particular example is almost constant for all times. Surprisingly, the matrices \(\tilde{R}(t)\) and *R* almost coincide and hence, so do the curves \(\Vert x(t)\Vert _R\) and \(\Vert x(t)\Vert _{\tilde{R}(t)}\) in Fig. 9 (Figs. 10, 11).

## 7 Conclusions

Linear time-periodic systems arise in many fields of application, e.g., in parametrically excited systems and anisotropic rotor-bearing systems. In general, they are obtained by linearizing a nonlinear system about a periodic trajectory. Complete knowledge of the system's components is necessary to understand its transient behavior, which may not be feasible for very complex and large-scale systems. Hence, understanding system characteristics such as stability and robustness may be sufficient. The solution structure for a linear time-periodic system is known (Floquet's theorem, Theorem 2). Nevertheless, in general, the solution has to be approximated since it cannot be given in closed form. Important physical properties such as stability and robustness can be lost due to the (numerical) approximations. In order to guarantee such properties for the original solution and not only for the approximation, one can derive analytic results on the solution, or the approximation error has to be incorporated in the analysis. This is the key idea of this paper: bounds that solely depend on the solution structure, or bounds that incorporate the approximation error. Firstly, we were able to generalize results from the linear time-invariant setting [17, 18] to the time-periodic setting and derive a time-varying norm that captures important properties such as *decoupling*, *filtering* and *monotonicity*. Secondly, we used two different methodologies in which the approximation error is incorporated in the upper bound. In the first one, an approximate solution is obtained by time discretization and a quadratic trigonometric spline approximation. The upper bound depends on the discretization grid of the quadratic trigonometric spline solution and converges quadratically to the original solution. The derived upper bound is an extension of work on the solution of ODEs by trigonometric splines [23, 24, 25].
In the second case we used a general framework: the linear time-periodic system is approximated by Chebyshev projections [36]. Here, we generalized results from [30, 31] with respect to convergence and convergence rates and, most importantly, we incorporated the two approximation errors of the Chebyshev projections into the rigorous upper bound. While the first approximation error is due to the polynomial approximation of the linear time-periodic system, the second error is due to solving the approximated system. The polynomial approximation of the linear time-periodic system yields properties such that its solution can be represented by an infinite series; truncation of this series yields the second error. A series representation of the solution is not necessarily possible for the original system.

In summary, the bounds converge to the original solution of the linear time-periodic system as the number of splines or the degree of the Chebyshev projections is increased. For a smooth time-periodic system, the spectral bound is in general superior to the trigonometric spline bound due to its faster convergence. In all cases the upper bounds converge to the norm of the solution if and only if the approximation converges to the solution. The computational complexity and convergence rate for the trigonometric spline bound and the spectral bound are stated. The applicability of all bounds, as well as stability analysis of linear time-periodic systems, is demonstrated by means of three examples, which include a Jeffcott rotor and a parametrically excited Cantilever beam.

## Footnotes

- 1.
MATLAB, The MathWorks, R2014a, 8.3.0.532.

## Notes

### Acknowledgments

Open access funding provided by Max Planck Society (Max Planck Institute for Dynamics of Complex Technical Systems).

## References

- 1. Allen, M.S.: Frequency-domain identification of linear time-periodic systems using LTI techniques. J. Comput. Nonlinear Dyn. **4**, 041,004.1–041,004.6 (2009)
- 2. Bellman, R.: The stability of solutions of linear differential equations. Duke Math. J. **10**(4), 643–647 (1943). doi: 10.1215/S0012-7094-43-01059-2
- 3. Benner, P., Denißen, J.: Spectral bounds on the solution of linear time-periodic systems. Proc. Appl. Math. Mech. **14**(1), 863–864 (2014). doi: 10.1002/pamm.201410412
- 4. Benner, P., Denißen, J., Kohaupt, L.: Bounds on the solution of linear time-periodic systems. Proc. Appl. Math. Mech. **13**(1), 447–448 (2013). doi: 10.1002/pamm.201310217
- 5. Chebyshev, P.L.: Théorie des mécanismes connus sous le nom de parallélogrammes. Mémoires des Savants étrangers présentés à l'Académie de Saint-Pétersbourg **7**, 539–568 (1854)
- 6. Coddington, A., Carlson, R.: Linear Ordinary Differential Equations. SIAM, Philadelphia (1997)
- 7. Dohnal, F., Ecker, H., Springer, H.: Enhanced damping of a Cantilever beam by axial parametric excitation. Arch. Appl. Mech. **78**(12), 935–947 (2008). doi: 10.1007/s00419-008-0202-0
- 8. Floquet, G.: Sur les équations différentielles linéaires à coefficients périodiques. Annales Scientifiques de l'École Normale Supérieure **12**(2), 47–88 (1883). doi: 10.1016/j.ansens.2007.09.002
- 9. Forster, O.: Analysis 1. Vieweg+Teubner Verlag, Berlin (2011). doi: 10.1007/978-3-8348-8139-7
- 10. Funaro, D.: Polynomial Approximation of Differential Equations. Lecture Notes in Physics. Springer, Berlin (1992)
- 11. Gottlieb, D., Orszag, S.A.: Numerical Analysis of Spectral Methods: Theory and Applications. CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia (1977)
- 12. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems. Springer Series in Computational Mathematics. Springer, Berlin (2010)
- 13. Higham, N.J.: The scaling and squaring method for the matrix exponential revisited. SIAM Rev. **51**(4), 747–764 (2009). doi: 10.1137/090768539
- 14. Horst, R., Tuy, H.: Global Optimization. Springer, Berlin (1996). doi: 10.1007/978-3-662-03199-5
- 15. Kohaupt, L.: Differential calculus for some p-norms of the fundamental matrix with applications. J. Comput. Appl. Math. **135**(1), 1–22 (2001). doi: 10.1016/S0377-0427(00)00559-8
- 16. Kohaupt, L.: Differential calculus for p-norms of complex-valued vector functions with applications. J. Comput. Appl. Math. **145**(2), 425–457 (2002). doi: 10.1016/S0377-0427(01)00594-5
- 17. Kohaupt, L.: Computation of optimal two-sided bounds for the asymptotic behavior of free linear dynamical systems with application of the differential calculus of norms. J. Comput. Math. Optim. **2**, 127–173 (2006)
- 18. Kohaupt, L.: Solution of the matrix eigenvalue problem \({V}{A}^*+{A}{V}=\mu {V}\) with applications to the study of free linear dynamical systems. J. Comput. Appl. Math. **213**(1), 142–165 (2008). doi: 10.1016/j.cam.2007.01.001
- 19. Kohaupt, L.: On the vibration-suppression property and monotonicity behavior of a special weighted norm for dynamical systems. Appl. Math. Comput. **222**, 307–330 (2013). doi: 10.1016/j.amc.2013.06.091
- 20. Loscalzo, F.R., Talbot, T.D.: Spline function approximations for solutions of ordinary differential equations. Bull. Am. Math. Soc. **73**, 438–442 (1967). doi: 10.1090/S0002-9904-1967-11778-6
- 21. Moler, C., Van Loan, C.: Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM Rev. **45**(1), 3–49 (2003)
- 22. Müller, P.C., Schiehlen, W.: Lineare Schwingungen. Akademische Verlagsgesellschaft, Wiesbaden (1976)
- 23. Nikolis, A.: Trigonometrische Splines und ihre Anwendung zur numerischen Behandlung von Integralgleichungen. Ph.D. thesis, Ludwig-Maximilians-Universität München (1993)
- 24. Nikolis, A.: Numerical solutions of ordinary differential equations with quadratic trigonometric splines. Appl. Math. E-Notes **4**, 142–149 (2004)
- 25. Nikolis, A., Seimenis, I.: Solving dynamical systems with cubic trigonometric splines. Appl. Math. E-Notes **5**, 116–123 (2005)
- 26. Orszag, S.A.: Numerical methods for the simulation of turbulence. Phys. Fluids **12**(Supp. II), 250–257 (1969)
- 27. Schoenberg, I.: On trigonometric spline interpolation. Indiana Univ. Math. J. **13**, 795–825 (1964)
- 28. Schumaker, L.L.: Spline Functions: Basic Theory. Wiley, Hoboken (1981)
- 29. Sinha, S.C., Chou, C.C., Denman, H.H.: Stability analysis of systems with periodic coefficients: an approximate approach. J. Sound Vib. **64**, 515–527 (1979). doi: 10.1016/0022-460X(79)90801-0
- 30. Sinha, S.C., Wu, D.H.: An efficient computational scheme for the analysis of periodic systems. J. Sound Vib. **151**, 91–117 (1991). doi: 10.1016/0022-460X(91)90654-3
- 31. Sinha, S., Butcher, E.: Solution and stability of a set of p-th order linear differential equations with periodic coefficients via Chebyshev polynomials. Math. Probl. Eng. **2**, 165–190 (1996). doi: 10.1155/S1024123X96000294
- 32. Söderlind, G., Mattheij, R.M.M.: Stability and asymptotic estimates in nonautonomous linear differential systems. SIAM J. Math. Anal. **16**(1), 69–92 (1985). doi: 10.1137/0516005
- 33. Tisseur, F., Meerbergen, K.: The quadratic eigenvalue problem. SIAM Rev. **43**(2), 235–286 (2001). doi: 10.1137/S0036144500381988
- 34. Trefethen, L.N.: Spectral Methods in MATLAB. SIAM, Philadelphia (2000)
- 35. Trefethen, L.N.: Is Gauss quadrature better than Clenshaw–Curtis? SIAM Rev. **50**(1), 67–87 (2008). doi: 10.1137/060659831
- 36. Trefethen, L.N.: Approximation Theory and Approximation Practice. SIAM, Philadelphia (2013)
- 37. Ugray, Z., Lasdon, L., Plummer, J., Glover, F., Kelly, J., Martí, R.: Scatter search and local NLP solvers: a multistart framework for global optimization. INFORMS J. Comput. **19**(3), 328–340 (2007). doi: 10.1287/ijoc.1060.0175
- 38. Walter, W.: Differential- und Integral-Ungleichungen. Springer Tracts in Natural Philosophy, vol. 2. Springer, Berlin (1970)

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.