1 Introduction

Dynamical systems whose evolution depends on both present and past states are ubiquitous in applications, mainly due to their enhanced ability to describe real phenomena. The growing interest is witnessed, beyond the extensive journal literature, by the considerable number of monographs treating all the important aspects, from the mathematical, dynamical and stability theories to the numerical approximation and, indeed, the applications [1, 2, 13, 14, 16–18, 20, 23–25, 28, 31, 32].

In many practical engineering and biological fields a key question is the (local or linearized) stability of equilibria and periodic orbits, often as a function of varying or uncertain system parameters (e.g., in control or population dynamics). Analytical results are in general unattainable, since the presence of the delay makes the state space infinite dimensional, and efficient and reliable numerical approximations are therefore required.

In the last decade several numerical techniques have been developed for this purpose, mostly based on reducing to finite dimension (i.e., to matrices) the operators whose spectra determine stability, such as solution operators, monodromy operators and infinitesimal generators. Stability is then inferred from the eigenvalues of such matrices, by looking at the position of the rightmost one w.r.t. the imaginary axis or of the dominant one w.r.t. the unit circle. Among these methods are, e.g., [4, 7, 10, 11, 15, 19, 21, 22, 30]. Other methodologies are also available, which aim directly at computing the stability boundaries in given regions of the parameter space, e.g., [26, 27].

This work focuses on the computation of stability charts for both autonomous and (mainly) periodic linear delayed dynamical systems. A stability chart is the decomposition of a region of a two-parameter plane into stable and unstable portions separated by the stability boundaries. We compute these boundaries as level contours of a function approximating the eigenvalue that determines the stability for each choice of the two parameters. The level contours are obtained via the adaptive triangulation technique developed in [8]. The function approximating the stability-determining eigenvalue is based on the numerical method recently proposed in [9], which discretizes the evolution family associated with the delayed system by pseudospectral schemes. On the one hand, it is suitable for general linear nonautonomous problems, with any number of discrete and distributed delays, even rationally independent w.r.t. the period when periodic orbits are analyzed. On the other hand, the convergence of the approximations is extremely fast, a highly desirable feature when a point-by-point investigation has to be performed, as is the case for stability charts.

The paper describes in Sects. 2 and 3 the approximation technique, summarizing its essential ingredients from [9]. Explicit approximating matrices are given for ease of implementation. The main objective is then to furnish in Sects. 4 and 5 a benchmark set of tests on stability chart computation, with particular reference to the delayed Mathieu equation, providing experimental data on accuracy and CPU time, mainly for the sake of comparison with other existing or future techniques.

2 Evolution operators and stability

For given \(r>0\) let \(X:=C([-r,0],\mathbb {C}^d)\) be the state space. We consider dynamical systems arising from linear nonautonomous differential equations with delay

$$\begin{aligned} x'(t)=f(t,x_t),\;t\in \mathbb {I}, \end{aligned}$$
(1)

where \(\mathbb {I}\subseteq \mathbb {R}\) is unbounded on the right, the state \(x_{t}\in X\) at time \(t\) is defined as \(x_{t}(\theta ):=x(t+\theta )\) for \(\theta \in [-r,0]\), and \(f:\mathbb {I}\times X\rightarrow \mathbb {C}^d\) is a functional given through the Stieltjes integral

$$\begin{aligned} f(t,\varphi )=\int \limits _0^{r}d_\theta [\eta (t,\theta )]\varphi (-\theta ), \end{aligned}$$
(2)

where \(\eta :\mathbb {I}\times [0,r]\rightarrow \mathbb {C}^{d \times d}\) is of normalized bounded variation as a function of \(\theta \) and continuous in the topology of the total variation as a function of \(t\), thus \(f\) is continuous.

For any \(s\in \mathbb {I}\) and \(\varphi \in X\), the initial value problem

$$\begin{aligned} \left\{ \begin{array}{l} x'(t)=f(t,x_t),\quad t\ge s,\\ x_s=\varphi \end{array} \right. \end{aligned}$$
(3)

has a unique solution, which is defined on \([s-r,+\infty )\), continuously differentiable on \([s,+\infty )\) and such that, for any \(t\ge s\), \(x_{t}\) depends continuously on \(\varphi \) [14, 18]. This allows us to introduce, for any \(s\in \mathbb {I}\) and \(h\ge 0\), the linear bounded operator \(T(s+h,s):X\rightarrow X\) given by

$$\begin{aligned} T(s+h,s)\varphi =x_{s+h}, \end{aligned}$$

i.e. it associates with the initial state \(x_{s}=\varphi \) at time \(s\) the state \(x_{s+h}\) at time \(s+h\). The operator \(T(s+h,s)\) is called an evolution operator and \(\{T(s+h,s):s\in \mathbb {I}\ \text {and}\ h\ge 0\}\) the evolution family [12].

The local asymptotic stability properties of possible equilibria, periodic orbits and other invariants of a nonlinear autonomous delayed system are determined by the spectral properties of the evolution family of the system linearized around the orbit of interest. In particular:

  1. for equilibria, (1) is autonomous and \(T(s+h,s)=T(h,0)\) for all \(s\in \mathbb {R}\) and \(h\ge 0\): the evolution family reduces to the standard \(C_{0}\)-semigroup of solution operators \(\{T(h,0):h\ge 0\}\) and, for any \(h>0\), the equilibrium is asymptotically stable iff all the eigenvalues of \(T(h,0)\) lie inside the unit circle;

  2. for periodic orbits, \(\eta \) in (2) is periodic in time, i.e. \(\eta (t+\omega ,\theta )=\eta (t,\theta )\) for \(t\in \mathbb {R}\), \(\theta \in [0,r]\) and some (minimal) period \(\omega >0\); then, for \(k\) a nonnegative integer and \(a\in [0,\omega )\),

    $$\begin{aligned} T(s+k\omega +a,s)=T(s+a,s)T(s+\omega ,s)^k, \end{aligned}$$

    the eigenvalues of \(T(s+\omega ,s)\) are independent of \(s\in \mathbb {R}\) [14, 18] and thus we can refer to \(T(\omega ,0)\), known as the monodromy operator: the periodic orbit is asymptotically stable iff all the eigenvalues of \(T(\omega ,0)\) are inside the unit circle;

  3. for other orbits, (1) is neither autonomous nor periodic and the asymptotic properties are characterized by the Lyapunov exponents, i.e. the eigenvalues of the limit operator \(\lim \limits _{n\rightarrow \infty }[T(s+nr,s)^{H}T(s+nr,s)]^{1/2n}\) [5, 6].

In order to investigate stability, in the following section we construct a finite dimensional approximation of \(T:=T(s+h,s)\) and use its nonzero eigenvalues to approximate a finite number of the original (and dominant) ones. Then, in Sects. 4 and 5, we will always use \(s=0\), while \(h=r\) for equilibria and \(h=\omega \) for periodic orbits, according to points 1 and 2 above. First we express \(T\) in a more convenient form.

Let \(X^{+}:=C([0,h],\mathbb {C}^d),\,X^{\pm }:=C([-r,h],\mathbb {C}^d)\) and \(V:X\times X^{+}\rightarrow X^{\pm }\) be the map

$$\begin{aligned} V(\varphi ,z)(\theta ):=\left\{ \begin{array}{ll} \varphi (0)+\int \nolimits _{0}^{\theta }z(t)dt&{}\quad \text {if}\ \theta \in [0,h],\\ \varphi (\theta )&{}\quad \text {if}\ \theta \in [-r,0]. \end{array}\right. \end{aligned}$$

Let also \(V_{1}:X\rightarrow X^{\pm }\) and \(V_{2}:X^{+}\rightarrow X^{\pm }\) be given by \(V_{1}\varphi :=V(\varphi ,0)\) and \(V_{2}z:=V(0,z)\). Note that

$$\begin{aligned} V(\varphi ,z)=V_{1}\varphi +V_{2}z. \end{aligned}$$
(4)

For \(v\in X^{\pm }\) and \(t\in [0,h]\), let \(v_{t}\in X\) denote the function \(v_{t}(\theta ):=v(t+\theta )\) for \(\theta \in [-r,0]\). Since \(f\) is continuous, the function \(t\mapsto f(s+t,v_{t})\), \(t\in [0,h]\), belongs to \(X^{+}\) for any \(v\in X^{\pm }\). Therefore, we can introduce the linear operator \(F_{s}:X^{\pm }\rightarrow X^{+}\) defined by

$$\begin{aligned} \left( F_{s}v\right) (t):=f(s+t,v_{t}),\quad t\in [0,h]. \end{aligned}$$

Now the evolution operator \(T\) can be expressed through \(V\) and \(F_{s}\) above as

$$\begin{aligned} T\varphi =V(\varphi ,z^{*})_{h}, \end{aligned}$$
(5)

where \(z^{*}\in X^{+}\) satisfies the fixed point equation

$$\begin{aligned} z^{*}=F_{s}V(\varphi ,z^{*}). \end{aligned}$$
(6)

It is clear that (6) has a fixed point iff (3) has a solution on \([s,s+h]\), and \(z^{*}\) is the derivative of that solution. Above, \(V(\cdot ,\cdot )_{h}\) refers to the notation introduced after (4).

3 Numerical approximation

Since \(T\) can be expressed through (5) and (6), it can be approximated by discretizing the spaces \(X\) and \(X^{+}\) as follows.

3.1 Discretization of \(X\)

We treat separately the two cases \(h\ge r\) and \(h<r\).

The case \(h\ge r\). For a given positive integer \(M\), consider the mesh \(\varOmega _{M}:=\{\theta _{M,0},\ldots ,\theta _{M,M}\}\) in \([-r,0]\) with \(0=\theta _{M,0}>\cdots >\theta _{M,M}\ge -r\), and set \(X_{M}:=\mathbb {C}^{d(M+1)}\) as the discrete counterpart of \(X\). An element \(\varPhi \in X_M\) is written as \(\varPhi =(\varPhi _{0},\ldots ,\varPhi _{M})^T\) where \(\varPhi _{m}\in \mathbb {C}^d,\,m=0,\ldots ,M\). Let the restriction operator \(R_{M}:X\rightarrow X_{M}\) be given by

$$\begin{aligned} R_{M}\varphi =(\varphi (\theta _{M,0}),\ldots ,\varphi (\theta _{M,M}))^{T}, \end{aligned}$$

i.e. a function \(\varphi \in X\) is discretized by the vector \(R_{M}\varphi \) of its values at the nodes of \(\varOmega _M\). Let the prolongation operator \(P_M:X_M\rightarrow X\) be given by

$$\begin{aligned} (P_{M}\varPhi )(\theta )=\sum \limits _{m=0}^{M}\ell _{M,m}(\theta )\varPhi _{m},\quad \theta \in [-r,0], \end{aligned}$$

where \(\ell _{M,m},\,m=0,\ldots ,M\), are the Lagrange basis polynomials [3] relevant to the nodes of \(\varOmega _{M}\), i.e. a vector \(\varPhi \in X_M\) becomes a function by taking its Lagrange interpolation polynomial \(P_M\varPhi \) at the nodes of \(\varOmega _M\).
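
For implementation purposes, the prolongation \(P_{M}\) is conveniently evaluated through the barycentric interpolation formula (see Remark 1 below). The following is a minimal Matlab sketch, assuming \(\varOmega _{M}\) is made of Chebyshev extremal points in \([-r,0]\) (the choice adopted in Sect. 3.5) and that \(\varPhi \) is stored as an \((M+1)\times d\) array with rows \(\varPhi _{0},\ldots ,\varPhi _{M}\); it is an illustration of the interpolation step, not the code of [9].

```matlab
% Sketch of the prolongation P_M: evaluate the interpolation polynomial of the
% block vector Phi (stored as an (M+1) x d array) at a point theta in [-r,0],
% assuming Chebyshev extremal nodes and the barycentric formula.
function v = prolongation(theta, Phi, r)
  M = size(Phi, 1) - 1;
  nodes = -r/2*(1 - cos((0:M)'*pi/M));            % 0 = theta_{M,0} > ... > theta_{M,M} = -r
  w = (-1).^((0:M)'); w([1 end]) = w([1 end])/2;  % barycentric weights for these nodes
  delta = theta - nodes;
  k = find(delta == 0, 1);
  if ~isempty(k), v = Phi(k, :); return; end      % theta coincides with a node
  c = w./delta;
  v = (c.'*Phi)/sum(c);                           % barycentric interpolation formula
end
```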

The case \(h<r\). We operate piecewise on \([-h,0]\), \([-2h,-h]\), etc. Let \(Q\) be the minimum positive integer \(q\) s.t. \(qh\ge r\), hence \(Q>1\). Set \(\theta _{q}:=-qh,\,q=0,\ldots ,Q-1\), and \(\theta _{Q}:=-r\). For a given positive integer \(M\), consider the mesh \(\varOmega _{M}:=\bigcup _{q=1}^{Q}\{\theta _{M,q,0},\ldots ,\theta _{M,q,M}\}\) in \([-r,0]\) with \(\theta _{q-1}=:\theta _{M,q,0}>\cdots >\theta _{M,q,M}:=\theta _{q}\) for \(q=1,\ldots ,Q-1\), and \(\theta _{Q-1}=:\theta _{M,Q,0}>\cdots >\theta _{M,Q,M}\ge \theta _{Q}\), and set \(X_{M}:=\mathbb {C}^{d(QM+1)}\) as the discrete counterpart of \(X\). An element \(\varPhi \in X_M\) is written as \(\varPhi =(\varPhi _{1,0},\ldots ,\varPhi _{1,M-1},\ldots ,\varPhi _{Q,0},\ldots ,\varPhi _{Q,M-1},\varPhi _{Q,M})^T\) where \(\varPhi _{q,m}\in \mathbb {C}^d\) for \(q=1,\ldots ,Q\) and \(m=0,\ldots ,M-1\), and \(\varPhi _{Q,M}\in \mathbb {C}^d\) (we also set \(\varPhi _{q,M}:=\varPhi _{q+1,0},\,q=1,\ldots ,Q-1\)). Let the restriction operator \(R_{M}:X\rightarrow X_{M}\) be given by

$$\begin{aligned} R_{M}\varphi =\varPhi , \end{aligned}$$

where \(\varPhi _{q,m}=\varphi (\theta _{M,q,m})\) for \(q=1,\ldots ,Q\) and \(m=0,\ldots ,M\). Let the prolongation operator \(P_M:X_M\rightarrow X\) be given by

$$\begin{aligned} (P_{M}\varPhi )(\theta )&= \sum \limits _{m=0}^{M}\ell _{M,q,m}(\theta )\varPhi _{q,m},\quad \theta \in [\theta _{q},\theta _{q-1}],\\&\quad q=1,\ldots ,Q, \end{aligned}$$

where \(\ell _{M,q,m}\), \(q=1,\ldots ,Q\) and \(m=0,\ldots ,M\), are the Lagrange basis polynomials relevant to the nodes \(\theta _{M,q,0},\ldots ,\theta _{M,q,M}\).

3.2 Discretization of \(X^{+}\)

For a given positive integer \(N\), let \(\varOmega _{N}^{+}:=\{t_{N,1},\ldots ,t_{N,N}\}\) be a mesh in \([0,h]\) with \(0\le t_{N,1}<\cdots <t_{N,N}\le h\) and set \(X_{N}^{+}:=\mathbb {C}^{dN}\) as the discrete counterpart of \(X^{+}\). An element \(Z\in X_{N}^{+}\) is written as \(Z=(Z_{1},\ldots ,Z_{N})^T\) where \(Z_{n}\in \mathbb {C}^d,\,n=1,\ldots ,N\). Let the restriction operator \(R_{N}^{+}:X^{+}\rightarrow X_{N}^{+}\) and the prolongation operator \(P_{N}^{+}:X_{N}^{+}\rightarrow X^{+}\) be given respectively by

$$\begin{aligned} R_{N}^{+}z=(z(t_{N,1}),\ldots ,z(t_{N,N}))^{T} \end{aligned}$$

and

$$\begin{aligned} (P_{N}^{+}Z)(t)=\sum \limits _{n=1}^{N}\ell _{N,n}^{+}(t)Z_{n},\quad t\in [0,h], \end{aligned}$$

where \(\ell _{N,n}^{+},\,n=1,\ldots ,N\), are the Lagrange basis polynomials relevant to the nodes of \(\varOmega _{N}^{+}\).

3.3 Discretization of T

For given positive integers \(M\) and \(N\), a finite dimensional approximation \(T_{M,N}:X_M\rightarrow X_M\) of the evolution operator \(T\) defined through (5) and (6) is given by

$$\begin{aligned} T_{M,N}\varPhi =R_MV(P_M\varPhi ,P_{N}^{+}Z^{*})_{h}, \end{aligned}$$
(7)

where \(Z^{*}\in X^{+}_{N}\) satisfies the fixed point equation

$$\begin{aligned} Z^{*}=R_{N}^{+}F_{s}V(P_M\varPhi ,P_{N}^{+}Z^{*}). \end{aligned}$$
(8)

Clearly, (7) and (8) are the discrete counterparts of (5) and (6), respectively. In particular, the function \(\varphi \in X\) in (5) and (6) is replaced in (7) and (8) by its interpolation polynomial at the nodes of \(\varOmega _M\), and Eq. (6) is discretized by collocation at the nodes of \(\varOmega _{N}^{+}\).

3.4 The matrix form of \(T_{M,N}\)

By using (4), (7) can be rewritten as

$$\begin{aligned} T_{M,N}\varPhi =T_{M}^{(1)}\varPhi +T_{M,N}^{(2)}Z^{*}, \end{aligned}$$

where \(T_{M}^{(1)}:X_M\rightarrow X_M\) and \(T_{M,N}^{(2)}:X_N^{+}\rightarrow X_M\) are given respectively by

$$\begin{aligned} T_{M}^{(1)}\varPhi =R_M\left( V_1P_M\varPhi \right) _h \end{aligned}$$

and

$$\begin{aligned} T_{M,N}^{(2)}Z=R_M\left( V_2P_N^{+}Z\right) _h. \end{aligned}$$

Note that \(T_{M}^{(1)}\) and \(T_{M,N}^{(2)}\) are independent of \(f\). Again by (4), (8) can be rewritten as

$$\begin{aligned} \left( I_{X_{N}^{+}}-U_{N}^{(2)}\right) Z^{*}=U_{M,N}^{(1)}\varPhi , \end{aligned}$$

where \(U_{M,N}^{(1)}:X_M\rightarrow X_{N}^{+}\) and \(U_{N}^{(2)}:X_N^{+}\rightarrow X_{N}^{+}\) are given respectively by

$$\begin{aligned} U_{M,N}^{(1)}=R_{N}^{+}F_{s}V_1P_M \end{aligned}$$

and

$$\begin{aligned} U_{N}^{(2)}=R_{N}^{+}F_{s}V_2P_N^{+}. \end{aligned}$$

Then, for \(N\) sufficiently large, the finite dimensional operator \(T_{M,N}:X_{M}\rightarrow X_{M}\) can be expressed by

$$\begin{aligned} T_{M,N}=T_{M}^{(1)}+T_{M,N}^{(2)}\left( I_{X_{N}^{+}}-U_{N}^{(2)}\right) ^{-1}U_{M,N}^{(1)}. \end{aligned}$$

If \(\mathbf T _{M}^{(1)}\), \(\mathbf T _{M,N}^{(2)}\), \(\mathbf U _{M,N}^{(1)}\) and \(\mathbf U _{N}^{(2)}\) are the canonical matrices relevant to \(T_{M}^{(1)}\), \(T_{M,N}^{(2)}\), \(U_{M,N}^{(1)}\), \(U_{N}^{(2)}\), respectively, then

$$\begin{aligned} \mathbf T _{M,N}=\mathbf T _{M}^{(1)}+\mathbf T _{M,N}^{(2)}\left( \mathbf I _{dN}-\mathbf U _{N}^{(2)}\right) ^{-1}\mathbf U _{M,N}^{(1)} \end{aligned}$$

is the matrix whose eigenvalues are used as approximations of (part of) the spectrum of \(T\).
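
In practice, once the four matrices are available, the assembly of \(\mathbf T _{M,N}\) and the extraction of its dominant eigenvalue take a few lines of Matlab. The following sketch assumes hypothetical routines build_T1, build_T2, build_U1 and build_U2 implementing the explicit formulas given below; it is not the code of [9].

```matlab
% Sketch of the final assembly; build_* are hypothetical routines returning the
% matrices T_M^(1), T_{M,N}^(2), U_{M,N}^(1) and U_N^(2) defined below.
T1 = build_T1(M, d, h, r);                    % independent of f
T2 = build_T2(M, N, d, h, r);                 % independent of f
U1 = build_U1(M, N, d, s, A, B, tau, r, h);   % depends on the coefficients of (9)
U2 = build_U2(N, d, s, A, B, tau, r, h);      % depends on the coefficients of (9)

TMN = T1 + T2*((eye(d*N) - U2)\U1);           % discretized evolution operator
mu  = eig(TMN);
[~, imax]   = max(abs(mu));
mu_dominant = mu(imax);                       % approximation of the dominant eigenvalue
```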

Now we give the explicit form of the above matrices for the applications of interest, i.e. for equations with both discrete and distributed delays of the form

$$\begin{aligned} x^{\prime }(t)=\sum \limits _{i=0}^{k}A_{i}(t)x(t-\tau _i)+\int \limits _{0}^{r}B(t,\theta )x(t-\theta )d\theta ,\quad t\in \mathbb {I}, \end{aligned}$$
(9)

where \(0=:\tau _0<\tau _1<\cdots <\tau _k\le r\), \(A_i:\mathbb {I}\rightarrow \mathbb {C}^{d\times d}\) is continuous for any \(i=0,1,\ldots ,k\), and \(B:\mathbb {I}\times [0,r]\rightarrow \mathbb {C}^{d\times d}\) is s.t. \(B(\cdot ,\theta )\) is continuous for almost all \(\theta \in [0,r]\) and, for any compact \(I\subseteq \mathbb {I}\), there exists \(m\in L^{1}([0,r],\mathbb {R})\) s.t. \(\Vert B(t,\theta )\Vert \le m(\theta )\) for all \(t\in I\) and for almost all \(\theta \in [0,r]\).

The matrix \(\mathbf T _{M}^{(1)}\). It turns out that \(\mathbf T _{M}^{(1)}=\tilde{\mathbf{T }}_{M}^{(1)}\otimes I_{d}\) where \(\otimes \) denotes the Kronecker product and

$$\begin{aligned} \tilde{\mathbf{T }}_{M}^{(1)}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 1&{}0&{}\cdots &{}0\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ 1&{}0&{}\cdots &{}0\\ \end{array}\right) \end{aligned}$$

in \(\mathbb {C}^{(M+1)\times (M+1)}\) if \(h\ge r\), while

$$\begin{aligned} \tilde{\mathbf{T }}_{M}^{(1)}= \left( \begin{array}{c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c} 1&{}&{}&{}&{}&{}&{}&{}&{}&{}&{}\\ \vdots &{}&{}&{}&{}&{}&{}&{}&{}&{}&{}\\ 1&{}&{}&{}&{}&{}&{}&{}&{}&{}&{}\\ t^{(2)}_{0,0}&{}\cdots &{}t^{(2)}_{0,M}&{}&{}&{}&{}&{}&{}&{}&{}\\ \vdots &{}&{}\vdots &{}&{}&{}&{}&{}&{}&{}&{}\\ t^{(2)}_{M-1,0}&{}\cdots &{}t^{(2)}_{M-1,M}&{}&{}&{}&{}&{}&{}&{}&{}\\ &{}&{}t^{(3)}_{0,0}&{}\cdots &{}&{}&{}&{}&{}&{}&{}\\ &{}&{}\vdots &{}&{}&{}&{}&{}&{}&{}&{}\\ &{}&{}t^{(3)}_{M-1,0}&{}\cdots &{}&{}&{}&{}&{}&{}&{}\\ &{}&{}&{}&{}\ddots &{}&{}&{}&{}&{}&{}\\ &{}&{}&{}&{}&{}t^{(Q)}_{0,0}&{}\cdots &{}t^{(Q)}_{0,M}&{}0&{}\cdots &{}0\\ &{}&{}&{}&{}&{}\vdots &{}\ddots &{}\vdots &{}\vdots &{}\ddots &{}\vdots \\ &{}&{}&{}&{}&{}t^{(Q)}_{M-1,0}&{}\cdots &{}t^{(Q)}_{M-1,M}&{}0&{}\cdots &{}0\\ &{}&{}&{}&{}&{}t^{(Q)}_{M,0}&{}\cdots &{}t^{(Q)}_{M,M}&{}0&{}\cdots &{}0\\ \end{array}\right) \end{aligned}$$

in \(\mathbb {C}^{(QM+1)\times (QM+1)}\) if \(h<r\), where blank entries are zero,

$$\begin{aligned} t^{(q)}_{m,j}=\ell _{M,q-1,j}(h+\theta _{M,q,m}) \end{aligned}$$

for \(q=2,\ldots ,Q\), \(m=0,\ldots ,M-1\) and \(j=0,\ldots ,M\), and

$$\begin{aligned} t^{(Q)}_{M,j}=\ell _{M,Q-1,j}(h+\theta _{M,Q,M}) \end{aligned}$$

for \(j=0,\ldots ,M\).
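
For instance, in the case \(h\ge r\) the matrix \(\mathbf T _{M}^{(1)}\) can be built in two Matlab lines (a minimal sketch):

```matlab
% Sketch: T_M^(1) for h >= r, i.e. a first block column of identities and zeros elsewhere.
T1tilde = [ones(M+1, 1), zeros(M+1, M)];
T1 = kron(T1tilde, eye(d));
```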

The matrix \(\mathbf T _{M,N}^{(2)}\). It turns out that \(\mathbf T _{M,N}^{(2)}=\tilde{\mathbf{T }}_{M,N}^{(2)}\otimes I_{d}\) where

$$\begin{aligned} \tilde{\mathbf{T }}_{M,N}^{(2)}=\left( \begin{array}{c@{\quad }c@{\quad }c} \int \nolimits _{0}^{h+\theta _{M,0}}\ell _{N,1}^{+}(t)dt&{}\cdots &{}\int \nolimits _{0}^{h+\theta _{M,0}}\ell _{N,N}^{+}(t)dt\\ \vdots &{}\ddots &{}\vdots \\ \int \nolimits _{0}^{h+\theta _{M,M}}\ell _{N,1}^{+}(t)dt\ &{}\cdots &{}\int \nolimits _{0}^{h+\theta _{M,M}}\ell _{N,N}^{+}(t)dt\\ \end{array}\right) \end{aligned}$$

in \(\mathbb {C}^{(M+1)\times N}\) if \(h\ge r\), while

$$\begin{aligned} \tilde{\mathbf{T }}_{M,N}^{(2)}=\left( \begin{array}{c@{\quad }c@{\quad }c} \int \nolimits _{0}^{h+\theta _{M,1,0}}\ell _{N,1}^{+}(t)dt&{}\cdots &{}\int \nolimits _{0}^{h+\theta _{M,1,0}}\ell _{N,N}^{+}(t)dt\\ \vdots &{}\ddots &{}\vdots \\ \int \nolimits _{0}^{h+\theta _{M,1,M-1}}\ell _{N,1}^{+}(t)dt\ &{}\cdots &{}\int \nolimits _{0}^{h+\theta _{M,1,M-1}}\ell _{N,N}^{+}(t)dt\\ 0&{}\cdots &{}0\\ \vdots &{}\ddots &{}\vdots \\ 0&{}\cdots &{}0\\ \end{array}\right) \end{aligned}$$

in \(\mathbb {C}^{(QM+1)\times N}\) if \(h<r\).
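
A hedged sketch of the construction of \(\tilde{\mathbf{T }}_{M,N}^{(2)}\) in the case \(h\ge r\) follows, assuming a hypothetical scalar evaluator lagplus(n,t) of \(\ell _{N,n}^{+}\) and the vector theta of the nodes \(\theta _{M,0},\ldots ,\theta _{M,M}\); Matlab's general-purpose integral is used here only for brevity, the quadrature of Remark 1 below being preferable.

```matlab
% Sketch: entries of T2tilde for h >= r, i.e. integrals of the Lagrange basis
% l^+_{N,n} over [0, h + theta_{M,m}]; lagplus(n,t) is a hypothetical evaluator.
T2tilde = zeros(M+1, N);
for m = 0:M
  for n = 1:N
    T2tilde(m+1, n) = integral(@(t) lagplus(n, t), 0, h + theta(m+1), 'ArrayValued', true);
  end
end
T2 = kron(T2tilde, eye(d));
```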

The matrix \(\mathbf U _{M,N}^{(1)}\). If \(h\ge r\), it holds

$$\begin{aligned} \mathbf {U}_{M,N}^{(1)}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} C_{1,0}&{}C_{1,1}&{}\cdots &{}C_{1,M}\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ C_{\hat{N},0}&{}C_{\hat{N},1}&{}\cdots &{}C_{\hat{N},M}\\ C_{\hat{N}+1,0}&{}0&{}\cdots &{}0\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ C_{N,0}&{}0&{}\cdots &{}0 \end{array} \right) \end{aligned}$$

in \(\mathbb {C}^{dN\times d(M+1)}\) with \(d\times d\) blocks

$$\begin{aligned} C_{n,0}&= \sum \limits _{j=0}^{i(t_{N,n})}A_j(s+t_{N,n})+\int \limits _0^{t_{N,n}}B(s+t_{N,n},\theta )d\theta \\&\quad +\sum \limits _{j=i(t_{N,n})+1}^k \ell _{M,0}(t_{N,n}-\tau _j)A_j(s+t_{N,n})\\&\quad +\int \limits _{t_{N,n}}^r\ell _{M,0}(t_{N,n}-\theta )B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n=1,\ldots ,\hat{N}\),

$$\begin{aligned} C_{n,m}&= \sum \limits _{j=i(t_{N,n})+1}^k\ell _{M,m}(t_{N,n}-\tau _j)A_j(s+t_{N,n})\\&\quad +\int \limits _{t_{N,n}}^r\ell _{M,m}(t_{N,n}-\theta )B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n=1,\ldots ,\hat{N}\) and \(m=1,\ldots ,M\), and

$$\begin{aligned} C_{n,0}=\sum \limits _{j=0}^{k}A_j(s+t_{N,n})+\int \limits _0^r B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n=\hat{N}+1,\ldots ,N\) with

$$\begin{aligned} \hat{N}:=\max \{n\in \{1,\ldots ,N\}:t_{N,n}<r\} \end{aligned}$$

and

$$\begin{aligned} i(\theta ):=\left\{ \begin{array}{l@{\quad }l} i&{} \text {if}\ \tau _i\le \theta <\tau _{i+1},\;i=0,\ldots ,k,\\ k&{} \text {if}\ \theta \ge \tau _{k}. \end{array}\right. \end{aligned}$$
(10)
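
The index function (10) admits a one-line Matlab implementation (a sketch, with tau the row vector \((\tau _{0},\ldots ,\tau _{k})\) and \(\tau _{0}=0\)):

```matlab
% Sketch of (10): largest 0-based index i such that tau_i <= theta,
% with tau = [tau_0, ..., tau_k] and tau_0 = 0.
i_of = @(theta, tau) find(tau <= theta, 1, 'last') - 1;
```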

If \(h<r\) it holds

$$\begin{aligned} \mathbf {U}_{M,N}^{(1)}=\left( \begin{array}{c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c} C^{(1)}_{1,0}&{}\cdots &{}C^{(1)}_{1,M-1}&{}\cdots &{}C^{(Q)}_{1,0}&{} C^{(Q)}_{1,1}&{} \cdots &{}C^{(Q)}_{1,M}\\ \vdots &{}\ddots &{}\vdots &{}\ddots &{}\vdots &{}\vdots &{}\ddots &{}\vdots \\ C^{(1)}_{\hat{N},0}&{}\cdots &{}C^{(1)}_{\hat{N},M-1}&{}\cdots &{}C^{(Q)}_{\hat{N},0}&{} C^{(Q)}_{\hat{N},1}&{}\cdots &{}C^{(Q)}_{\hat{N},M}\\ C^{(1)}_{\hat{N}+1,0}&{}\cdots &{}C^{(1)}_{\hat{N}+1,M-1}&{}\cdots &{}C^{(Q)}_{\hat{N}+1,0}&{}0&{}\cdots &{}0\\ \vdots &{}\ddots &{}\vdots &{}\ddots &{}\vdots &{}\vdots &{}\ddots &{}\vdots \\ C^{(1)}_{N,0}&{}\cdots &{}C^{(1)}_{N,M-1}&{}\cdots &{}C^{(Q)}_{N,0}&{}0&{}\cdots &{}0\\ \end{array}\right) \end{aligned}$$

in \(\mathbb {C}^{dN\times d(QM+1)}\) with \(d\times d\) blocks

$$\begin{aligned} C^{(1)}_{n,0}&= \sum \limits _{j=0}^{i(t_{N,n})}A_j(s+t_{N,n})+\int \limits _0^{t_{N,n}} B(s+t_{N,n},\theta )d\theta \\&\quad +\sum \limits _{j=i(t_{N,n})+1}^{i(t_{N,n,1})}\ell _{M,1,0}(t_{N,n}-\tau _j)A_j(s+t_{N,n})\\&\quad +\int \limits _{t_{N,n}}^{t_{N,n,1}}\ell _{M,1,0}(t_{N,n}-\theta )B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n=1,\ldots ,N\),

$$\begin{aligned} C^{(q)}_{n,0}&= \sum \limits _{j=i(t_{N,n,q-2})+1}^{i(t_{N,n,q-1})}\ell _{M,q-1,M}(t_{N,n}-\tau _j)A_j(s+t_{N,n})\\&\quad +\int \limits _{t_{N,n,q-2}}^{t_{N,n,q-1}}\ell _{M,q-1,M}(t_{N,n}-\theta )B(s+t_{N,n},\theta )d\theta \\&\quad +\sum \limits _{j=i(t_{N,n,q-1})+1}^{i(t_{N,n,q})}\ell _{M,q,0}(t_{N,n}-\tau _j)A_j(s+t_{N,n})\\&\quad +\int \limits _{t_{N,n,q-1}}^{t_{N,n,q}}\ell _{M,q,0}(t_{N,n}-\theta )B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n=1,\ldots ,N\) and \(q=2,\ldots ,q_{n}\),

$$\begin{aligned} C^{(q)}_{n,m}&= \sum \limits _{j=i(t_{N,n,q-1})+1}^{i(t_{N,n,q})}\ell _{M,q,m}(t_{N,n}-\tau _j)A_j(s+t_{N,n})\\&\quad +\int \limits _{t_{N,n,q-1}}^{t_{N,n,q}}\ell _{M,q,m}(t_{N,n}-\theta )B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n=1,\ldots ,N\), \(q=1,\ldots ,q_{n}\) and \(m=1,\ldots ,M-1\),

$$\begin{aligned} C^{(Q)}_{n,M}&= \sum \limits _{j=i(t_{N,n,Q-1})+1}^{k}\ell _{M,Q,M}(t_{N,n}-\tau _j)A_j(s+t_{N,n})\\&\quad +\int \limits _{t_{N,n,Q-1}}^{r}\ell _{M,Q,M}(t_{N,n}-\theta )B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n=1,\ldots ,\hat{N}\),

$$\begin{aligned} C^{(Q)}_{n,0}&= \sum \limits _{j=i(t_{N,n,Q-2})+1}^{k}\ell _{M,Q-1,M}(t_{N,n}-\tau _j)A_j(s+t_{N,n})\\&\quad +\int \limits _{t_{N,n,Q-2}}^{r}\ell _{M,Q-1,M}(t_{N,n}-\theta )B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n=\hat{N}+1,\ldots ,N\), with \(i(\theta )\) as in (10),

$$\begin{aligned} \hat{N}:=\max \{n\in \{1,\ldots ,N\}:t_{N,n}<r-(Q-1)h\}, \end{aligned}$$
$$\begin{aligned} q_{n}:=\left\{ \begin{array}{l@{\quad }l} Q&{} \text {if}\ n=1,\ldots ,\hat{N}\\ Q-1&{} \text {if}\ n=\hat{N}+1,\ldots ,N, \end{array}\right. \end{aligned}$$

and \(t_{N,n,q}:=t_{N,n}+qh\) for \(q=0,\ldots ,q_{n}-1\), and \(t_{N,n,q_{n}}:=r\).

The matrix \(\mathbf U _N^{(2)}\). It holds

$$\begin{aligned} \mathbf {U}_{N}^{(2)}=\left( \begin{array}{c@{\quad }c@{\quad }c} D_{1,1}&{}\cdots &{}D_{1,N} \\ \vdots &{}\ddots &{}\vdots \\ D_{N,1}&{}\cdots &{}D_{N,N} \end{array}\right) \end{aligned}$$

in \(\mathbb {C}^{dN\times dN}\) with \(d\times d\) blocks

$$\begin{aligned} D_{n,j}&= \sum \limits _{l=0}^{i(\min \{r,t_{N,n}\})}\left( \int \limits _0^{t_{N,n}-\tau _l}\ell _{N,j}^{+}(t)dt\right) A_l(s+t_{N,n})\\&\quad +\int \limits _0^{\min \{r,t_{N,n}\}}\left( \int \limits _{0}^{t_{N,n}-\theta }\ell _{N,j}^{+}(t)dt\right) B(s+t_{N,n},\theta )d\theta \end{aligned}$$

for \(n,j=1,\ldots ,N\) with \(i(\theta )\) as in (10).

Remark 1

For computational efficiency the Lagrange basis polynomials in the above entries are evaluated by the barycentric formula [3]. Moreover, the integrals in general need to be computed numerically. It is advisable to adopt Gaussian-type quadrature formulae in order to preserve the high-order convergence guaranteed by the overall method. In the case of discontinuities in the function \(B(t,\cdot )\), adaptive or piecewise strategies should be employed. In this work we use quadrature on Chebyshev extremal points, i.e. the Clenshaw–Curtis formula [29].
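
For reference, the following sketch returns Clenshaw–Curtis nodes and weights on a generic interval \([a,b]\), based on the standard construction on Chebyshev extremal points (our adaptation, which we assume matches the formula intended in [29]). The integrals in the matrix entries above are then approximated by the weighted sum of the integrand values at the returned nodes.

```matlab
% Sketch: Clenshaw-Curtis quadrature with n+1 Chebyshev extremal nodes on [a,b].
function [x, w] = clencurt_ab(n, a, b)
  theta = pi*(0:n)'/n;
  x = cos(theta);                                 % nodes on [-1,1], descending
  w = zeros(1, n+1); ii = 2:n; v = ones(n-1, 1);
  if mod(n, 2) == 0
    w(1) = 1/(n^2 - 1); w(n+1) = w(1);
    for k = 1:n/2-1, v = v - 2*cos(2*k*theta(ii))/(4*k^2 - 1); end
    v = v - cos(n*theta(ii))/(n^2 - 1);
  else
    w(1) = 1/n^2; w(n+1) = w(1);
    for k = 1:(n-1)/2, v = v - 2*cos(2*k*theta(ii))/(4*k^2 - 1); end
  end
  w(ii) = 2*v/n;
  x = a + (b - a)*(1 - x)/2;                      % map to [a,b], ascending
  w = w*(b - a)/2;                                % rescale the weights
end
```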

3.5 Convergence

As proved in [9], spectrally accurate approximations of \(T\) and of its spectrum are obtained by letting \(M,N\rightarrow \infty \) with \(M\ge N\) if the mesh \(\varOmega _{N}^{+}\) is made of Chebyshev zeros:

$$\begin{aligned} t_{N,n}=\frac{h}{2}\left( 1-\cos \frac{(2n-1)\pi }{2N}\right) ,\quad n=1,\ldots ,N. \end{aligned}$$

In particular, the error is \(O(N^{-N})\) as shown, e.g., in Fig. 1 for the delayed damped Mathieu equation

$$\begin{aligned} x''(t)+b_{0}x'(t)+c_{0}(t)x(t)=c_{1}x(t-\tau ) \end{aligned}$$
(11)

where

$$\begin{aligned} c_{0}(t)=c_{0\delta }+c_{0\varepsilon }\cos {(2\pi t/\varOmega )} \end{aligned}$$
(12)

both for \(\varOmega =\tau \) and for \(\varOmega \) and \(\tau \) rationally independent. In this paper we use Chebyshev zeros for \(\varOmega _{N}^{+}\) and Chebyshev extremal points for \(\varOmega _{M}\).
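
For completeness, the two meshes used throughout the tests can be generated as follows (a sketch, with \(h\), \(r\), \(M\) and \(N\) given):

```matlab
% Sketch: Chebyshev zeros in [0,h] for Omega_N^+ and Chebyshev extremal points
% in [-r,0] for Omega_M, as used in the numerical tests.
t     = h/2*(1 - cos((2*(1:N) - 1)*pi/(2*N)));  % t_{N,1} < ... < t_{N,N}
theta = -r/2*(1 - cos((0:M)*pi/M));             % 0 = theta_{M,0} > ... > theta_{M,M} = -r
```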

Fig. 1 Error in the dominant eigenvalue for Eq. (11) with \(b_{0}=0.2,\,c_{0\delta }=1,\,c_{0\varepsilon }=2,\,c_{1}=-1.5,\,\tau =1\) and varying \(\varOmega \)

4 Stability charts

In applications it is often essential to determine stability as a function of varying or uncertain system parameters. When two parameters are concerned, one can collect the stability information in a rectangular region of the parameter plane by determining the stable/unstable portions separated by the relevant stability boundaries. The latter are the locus \(|\mu |=1\) for the dominant eigenvalue \(\mu \) of either \(T(r,0)\) for equilibria (i.e. (9) with autonomous coefficients) or \(T(\omega ,0)\) for periodic orbits of period \(\omega \) (i.e. (9) with \(\omega \)-periodic coefficients). These so-called stability charts are thus usually obtained by computing the contours at level \(1\) of the surface \(|\mu (p_{1},p_{2})|\) as a function of the two parameters \(p_{1}\) and \(p_{2}\). In general this function gives a numerical approximation of the exact eigenvalue that determines the stability, in our case the dominant one. Therefore the overall computational cost of a stability chart is determined by two aspects: first, the accuracy desired for the stability boundaries, i.e. basically the number of times the dominant eigenvalue approximation function is evaluated, and, second, the cost of a call to this function for a single choice of the two parameters. As for the first aspect we apply the adaptive triangulation technique described in [8], which is on average more efficient than, e.g., Matlab's contour. As for the second aspect we apply the pseudospectral technique described in Sect. 3. The spectral convergence shown in Fig. 1 clearly reduces the total computational cost, since accurate estimates of the dominant eigenvalue are obtained with rather low matrix dimensions (e.g., \(N=10\) already gives more than \(5\) correct digits in general).
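
To fix ideas, the following simplified sketch replaces the adaptive triangulation of [8] with a uniform grid and Matlab's contour; dominant_mu is a hypothetical routine returning the dominant eigenvalue of \(\mathbf T _{M,N}\) for a given parameter pair, and the grid bounds and size are arbitrary.

```matlab
% Sketch: brute-force stability chart on a uniform grid (the adaptive
% triangulation of [8] is used in the actual computations); dominant_mu is a
% hypothetical routine returning the dominant eigenvalue for given (p1, p2).
p1 = linspace(p1min, p1max, 100);
p2 = linspace(p2min, p2max, 100);
absmu = zeros(numel(p2), numel(p1));
for i = 1:numel(p1)
  for j = 1:numel(p2)
    absmu(j, i) = abs(dominant_mu(p1(i), p2(j), M, N));
  end
end
contour(p1, p2, absmu, [1 1]);   % stability boundary: |mu| = 1
xlabel('p_1'); ylabel('p_2');
```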

5 Numerical tests

In this section we report on several tests on stability charts of linear autonomous and (mainly) periodic delayed systems. For the sake of comparison, each example is accompanied by the relevant references where similar tests can be found, a plot of the resulting stability chart and a table containing computational information such as the resolution of the stability boundaries (in \(\%\) of the length of the sides of the rectangular domain), the number of evaluations of the dominant eigenvalue approximation function and the overall CPU time. All the tests are performed on a Mac OS X 10.5.8 machine with a 2.53 GHz Intel Core 2 Duo and 4 GB RAM, and all the algorithms are implemented in Matlab (R2009b).

Remark 2

In all the following stability charts, the stable regions are those inside the closed contours.

5.1 Test \(1\)

We consider the second order equation with a single discrete delay

$$\begin{aligned} x''(t)+c_{0}x(t)=c_{1}x(t-2\pi ), \end{aligned}$$

see [19, Eq. (9)]. We approximate \(T(r,0)\) for \(r=2\pi \). The parameters for the stability chart are \(p_{1}=c_{0}\in [-1,5]\) and \(p_{2}=c_{1}\in [-1,1]\), see Table 1 and Fig. 2 (compare with [19, Figs. 2, 3]). The test is carried out for varying \(N\) in the dominant eigenvalue approximation function. Note that \(N=10\) gives a chart indistinguishable from the exact one [19, Fig. 2].
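
As an illustration of how this test fits the general form (9), a hedged sketch of our rewriting as a first-order system with state \((x,x')^{T}\) follows; \(c_{0}\) and \(c_{1}\) are the chart parameters.

```matlab
% Sketch: Test 1 recast as a first-order system of the form (9), with state
% (x, x')^T, a single discrete delay tau_1 = 2*pi and no distributed term.
A0  = @(t) [0, 1; -c0, 0];        % A_0(t), here constant
A1  = @(t) [0, 0;  c1, 0];        % A_1(t), here constant
tau = [0, 2*pi];                  % tau_0 = 0, tau_1 = r = 2*pi
B   = @(t, theta) zeros(2);       % no distributed delay
```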

Table 1 Computational data for Test \(1\), see text
Fig. 2 Stability chart for Test \(1\), see text

5.2 Test \(2\)

We consider the delayed damped Mathieu equation with distributed delay

$$\begin{aligned} x''(t)+b_{0}x'(t)+c_{0}(t)x(t)=c_{1}\int \limits _{-1}^{0}w(\theta )x(t+\theta )d\theta \end{aligned}$$

where \(b_{0}=0\) and \(c_{0}\) is given in (12) with \(\varOmega =1/2\), see [19, Eq. (44)] and [22, Eq. (28)]. We approximate \(T(\omega ,0)\) where the period is \(\omega =1/2\). In a first test, called Test \(2.1\), the parameters for the stability chart are \(p_{1}=c_{0\delta }/(4\pi ^{2})\in [-2,10]\) and \(p_{2}=c_{1}/(4\pi ^{2})\in [-2,10]\), while \(w(\theta )=1\), see Table 2 and Fig. 3 (compare with [19, Fig. 7]). In a second test, called Test \(2.2\), the parameters for the stability chart are \(p_{1}=c_{0\delta }/\pi ^{2}\in [-5,20]\) and \(p_{2}=c_{1}/\pi ^{2}\in [-50,20]\), while \(w(\theta )=-(\pi /2)\sin ({\pi \theta })\), see Table 3 and Fig. 4 (compare with [19, Fig. 8] and [22, Fig. 6]). In a third test, called Test \(2.3\), the parameters for the stability chart are \(p_{1}=c_{0\delta }/\pi ^{2}\in [-5,35]\) and \(p_{2}=c_{1}/\pi ^{2}\in [-50,300]\), while \(w(\theta )=(\pi /2)\sin ({\pi \theta })+(13\pi /77)\sin {(2\pi \theta )}\), see Table 4 and Fig. 5 (compare with [19, Fig. 9] and [22, Fig. 7]). All the tests use \(N=10\) and varying \(c_{0\varepsilon }=0,20,40,60\).
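
Analogously, a hedged sketch of the coefficients of these tests in the form (9) (our rewriting, with \(w\), \(b_{0}\), \(c_{1}\), \(c_{0\delta }\), \(c_{0\varepsilon }\) given according to the chosen sub-test; note the change of variable \(\theta \mapsto -\theta \) in the distributed term, so that \(r=1\)):

```matlab
% Sketch: Test 2 in the form (9) with state (x, x')^T; the distributed term
% c1*int_{-1}^{0} w(theta) x(t+theta) dtheta equals
% int_{0}^{1} c1*w(-theta) x(t-theta) dtheta, which gives B below.
Omega = 1/2;                                      % period of the coefficient (12)
c0 = @(t) c0delta + c0eps*cos(2*pi*t/Omega);      % c_0(t) as in (12)
A0 = @(t) [0, 1; -c0(t), -b0];                    % A_0(t), with b0 = 0 here
B  = @(t, theta) [0, 0; c1*w(-theta), 0];         % distributed-delay kernel, r = 1
```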

Table 2 Computational data for Test \(2.1\), see text
Fig. 3 Stability charts for Test \(2.1\), see text

Table 3 Computational data for Test \(2.2\), see text
Fig. 4 Stability charts for Test \(2.2\), see text

Table 4 Computational data for Test \(2.3\), see text
Fig. 5 Stability charts for Test \(2.3\), see text

5.3 Test \(3\)

We consider the delayed Mathieu equation with two discrete delays

$$\begin{aligned} x''(t)+[6+c_{0\varepsilon }\cos {(2\pi t)}]x(t)=x(t-\tau _{1})+x(t-\tau _{2}), \end{aligned}$$

see [19, Eq. (53)]. We approximate \(T(\omega ,0)\) where the period is \(\omega =1\). The parameters for the stability chart are \(p_{1}=\tau _{1}\in [0,10]\) and \(p_{2}=\tau _{2}\in [0,10]\), see Table 5 and Fig. 6 (compare with [19, Fig. 10]). The test uses \(N=10\) and varying \(c_{0\varepsilon }=0,6\).

Table 5 Computational data for Test \(3\), see text
Fig. 6 Stability charts for Test \(3\), see text

5.4 Test \(4\)

We consider the delayed Mathieu equation with two discrete delays

$$\begin{aligned} x''(t)+[a+b\cos {(t)}]x(t)=cx(t-2\pi )+dx(t-4\pi ), \end{aligned}$$

where \(b=d=0.1\), see [10, Eq. (40)]. We approximate \(T(\omega ,0)\) where the period is \(\omega =2\pi \). The parameters for the stability chart are \(p_{1}=a\in [-1,5]\) and \(p_{2}=c\in [-1,1]\), see Table 6 and Fig. 7 (compare with [10, Fig. 11]). The test uses \(N=10\).

Table 6 Computational data for Test \(4\), see text
Fig. 7 Stability chart for Test \(4\), see text