1 Introduction

To describe the behavior of complex physical systems accurately, mathematical models of high or even infinite order are often required. However, direct simulation of such models is very difficult and sometimes prohibitive due to unmanageable storage requirements, high computational cost and long computation time. Therefore, model order reduction (MOR), which replaces the original complex, high-order system by a reduced-order model (ROM), plays an important role in many areas of engineering, e.g., transmission lines in circuit packaging [31, 38], PCB (printed circuit board) design [11, 48, 58] and networked control systems [18, 28]. The use of ROMs yields not only considerable savings in storage and computation time, but also fast simulation and verification, leading to shortened design cycles [2, 3, 5, 12, 15, 26, 33, 34, 36, 52, 59].

Many MOR methods have been presented in the past few decades [4, 14, 16, 17, 20–22, 25, 30, 35, 40, 44, 49, 51]. Most of them fall into two categories. The first comprises singular value decomposition (SVD)-based methods, which construct or optimize the ROM according to a suitably chosen criterion, such as the \(H_{\infty }\) norm, the energy-to-peak gain or the Hankel norm [40, 44, 49, 51]. The second category comprises moment matching-based methods. For linear time invariant (LTI) systems, the moment matching method [4, 5, 16, 20, 35] expands the transfer function as a Taylor series, and then creates a ROM whose first few Taylor coefficients (also called moments) match those of the original model. The projection matrix used to derive the ROM is usually obtained from Krylov subspace iterative schemes. Over the past years, moment matching methods have been widely used owing to the availability of efficient iterative schemes for constructing the projection matrix, in contrast to the SVD approach, which usually involves solving expensive matrix equations or convex optimization problems.

In many physical, industrial and circuit systems, time delays occur due to the finite speed of information processing, data transmission among various parts of the systems and essential simplifications of the corresponding process models [9, 27, 29, 37, 39, 49, 55–57]. Delays are often detrimental to performance and can even render a system unstable, so their presence substantially complicates analytical and theoretical aspects of system design. In the past few decades, researchers have paid great attention to the analysis and synthesis of time delay systems (TDSs). Most studies involve systems with delays in the system states only, often called retarded systems (RSs) [42, 46]. Another important and more general class of TDSs, called neutral systems (NSs), has dynamics governed by delays not only in the system states, but also in the derivatives of the system states. For example, in the context of circuit modeling and simulation, NSs can be formulated for the partial element equivalent circuits (PEECs) widely used in electromagnetic (EM) simulation [24]. NSs have attracted considerable research effort in recent years [23, 32, 39, 41, 45, 47, 53].

The MOR of TDSs is mainly based on SVD-type methods, which construct the ROM such that the error norm between the original system and the ROM is less than some given tolerance. The \(H_{\infty }\) MOR problem for RSs is studied in [54] by solving linear matrix inequalities (LMIs) with a rank constraint. The problem of [54] is extended in [50] to linear parameter-varying systems with both discrete and distributed delays by solving parameterized LMIs. The MOR of a NS with multiple constant delays (MCDs), however, has received little attention despite its importance in theory and practice [43, 45]. The energy-to-peak MOR and \(H_{\infty }\) MOR for a NS are studied in [43] and [45], respectively, in terms of LMIs with inverse constraints. LMIs can be solved by the interior-point method (IPM) together with Newton's method, which minimizes a strictly convex function after all matrix variables are stacked into a single high-dimensional vector variable [8]. In practice, the IPM fails to solve large scale LMIs: storing the Hessian matrix of the objective function used in Newton's method is memory-demanding, and computing the Hessian is also very expensive. Although the cone complementarity linearization (CCL) algorithm [13] provides a way to transform LMIs with inverse constraints into a minimization problem subject to the original LMIs plus additional LMIs arising from the inverse constraints, the expanded LMIs incur a much higher computational cost. Hence, though the methods in [43, 45] are theoretically correct, they are of little practical use in reducing high-order NSs due to the prohibitive computational cost.

As the SVD-based methods [43, 45] suffer from high computational cost, it is desirable to approximate NSs by the moment matching method owing to its much faster computation. The major difficulty in applying moment matching to a NS, however, is the generation of moments from the NS transfer function, i.e., the coefficient matrices of its Taylor series expansion. The delayed state and the delayed state derivative introduce nonlinear exponential terms into the transfer function, making direct Taylor series expansion infeasible. In this paper, we propose two methods to approximate these exponential terms and to generate the corresponding moments.

The major contribution of this paper is the reduction of the NS with MCDs by first approximating the nonlinear exponential terms via either Padé approximation or Taylor series expansion. The former results in an expanded-size, but exponential-free, state space which can then be reduced by the standard moment matching method. The latter replaces the exponential terms by truncated Taylor series, so that the inverse in the transfer function computation is again exponential-free and contains only powers-of-s terms whose coefficient matrices are of the same size as those of the original NS; standard moment matching techniques for reduced-order modeling can then be readily applied. The low computational cost of both methods makes them applicable to the reduction of high-order NSs.

The outline of this paper is as follows. In the remainder of Sect. 1, the MOR problem for the descriptor system (DS) by the moment matching method is reviewed and the challenge of the MOR problem for the NS with MCDs is described. In Sect. 2, Padé approximation of the exponential terms yields a delay-free ROM modeled as a DS. In Sect. 3, a ROM with the same structure as the original NS is obtained by replacing the exponential terms with their Taylor series expansions. Numerical examples demonstrating the effectiveness of the proposed MOR results and comparisons with other methods are given in Sect. 4. Finally, Sect. 5 draws the conclusion.

1.1 Neutral Systems

Consider a NS with MCDs, denoted by Σ,

$$ E\dot{x} ( t ) =Ax ( t ) +\sum_{i=1}^{p}A_{h_{i}}x ( t-h_{i} ) +\sum_{j=1}^{q}A_{d_{j}}\dot{x} ( t-d_{j} ) +Bu ( t ) , $$
(1)
$$ y ( t ) =Cx ( t ) , $$
(2)
$$ x ( t ) =\phi ( t ) ,\quad t\in [ -\alpha ,0 ] , $$
(3)

where \(x ( t ) \in \mathbb{R}^{n}\) is the state vector, \(u ( t ) \in \mathbb{R}^{m}\) is the input and \(y ( t ) \in \mathbb{R}^{l}\) is the output. E, A, \(A_{h_{i}}\), \(A_{d_{j}}\), B and C, i=1,…,p, j=1,…,q, are properly dimensioned real constant matrices. Here \(h_{i}\) and \(d_{j}\), i=1,…,p, j=1,…,q, are the constant delays and \(\alpha =\max \{ h_{i},d_{j},i=1,\ldots ,p,j=1,\ldots ,q \} \). All derivations in this paper extend straightforwardly to the time varying delay case by taking \(h_{i}\) and \(d_{j}\) as the upper bounds of the time varying delays. The order of the NS Σ is defined as the number of states, i.e., n. Under the assumption x(0)=ϕ(0)=0, the transfer function from input u(t) to state x(t) is given by

$$ G_{X} ( s ) = \Biggl( sE-A-\sum_{i=1}^{p}A_{h_{i}}e^{-\mathrm{sh}_{i}}- \sum_{j=1}^{q}A_{d_{j}}se^{-\mathrm{sd}_{j}}\Biggr)^{-1}B, $$
(4)

by taking the Laplace transform of both sides of (1). The NS Σ is also characterized by its transfer function from input u(t) to output y(t)

$$ G ( s ) =C \Biggl( sE-A-\sum_{i=1}^{p}A_{h_{i}}e^{-\mathrm{sh}_{i}}- \sum_{j=1}^{q}A_{d_{j}}se^{-\mathrm{sd}_{j}}\Biggr)^{-1}B. $$
(5)
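For reference, the transfer function (5) can be evaluated directly at any complex frequency; a minimal numpy sketch (the function name and argument layout are our own, not the authors') is:

```python
import numpy as np

def eval_G(s, E, A, Ah, h, Ad, d, B, C):
    """Evaluate the NS transfer function (5) at frequency s.

    Ah, h: lists of delayed-state matrices A_{h_i} and delays h_i;
    Ad, d: lists of delayed-derivative matrices A_{d_j} and delays d_j.
    """
    M = s * E - A
    for Ahi, hi in zip(Ah, h):
        M = M - Ahi * np.exp(-s * hi)       # delayed-state terms
    for Adj, dj in zip(Ad, d):
        M = M - Adj * s * np.exp(-s * dj)   # delayed-derivative terms
    return C @ np.linalg.solve(M, B)
```

Sweeping s = jω over a frequency grid gives the frequency responses used in the comparisons of Sect. 4.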

1.2 MOR of Systems Without Delay

When the NS Σ does not have time delays h i and d j , i.e., \(A_{h_{i}}=0\) and \(A_{d_{j}}=0, i=1,\ldots ,p, j=1,\ldots ,q\), it reduces to a DS Σ ds,

$$ E\dot{x} ( t ) =Ax ( t ) +Bu ( t ) ,\quad\quad y ( t ) =Cx ( t ) , $$
(6)

with transfer function

$$ G_{\mathrm{ds}} ( s ) =C ( sE-A )^{-1}B. $$

The above DS becomes an LTI system when E=I. The Taylor series expansion of \(G_{\mathrm{ds}} ( s ) \) around s=0 is

$$ G_{\mathrm{ds}} ( s ) =-CA^{-1}B-C\bigl(A^{-1}E\bigr)A^{-1}Bs-C \bigl(A^{-1}E\bigr)^{2}A^{-1}Bs^{2}-\cdots , $$
(7)

by assuming that A is invertible. The coefficient matrices of the Taylor series expansion in (7) are called block moments, or simply moments [35], of the DS Σ ds.

The MOR of the DS Σ ds by moment matching is to create a ROM whose first few moments match those of the original model [4, 14, 16, 17, 20, 21, 35, 44, 49, 52]. The projection matrix \(V\in \mathbb{R}^{n\times \hat{n}}\) used to generate the ROM is obtained from

$$ \operatorname{colspan} ( V ) \supseteq K ( 0,\varSigma_{\mathrm{ds}}, \hat{n}) ,\quad V^{T}V=I, $$

where \(K ( 0,\varSigma_{\mathrm{ds}},\hat{n} ) \) is defined as the \(\hat{n}\)th Krylov subspace

$$ K ( 0,\varSigma_{\mathrm{ds}},\hat{n} ) =\operatorname{colspan} \bigl\{ A^{-1}B, \bigl( A^{-1}E \bigr) A^{-1}B,\ldots , \bigl( A^{-1}E \bigr)^{\hat{n}-1}A^{-1}B \bigr\} , $$
(8)

and the system matrices of the resulting ROM are

$$ \hat{E}=V^{T}EV,\quad\quad \hat{A}=V^{T}AV,\quad\quad \hat{B}=V^{T}B,\quad\quad \hat{C}=CV $$
(9)

[7, 35]. Therefore, the key step in MOR by moment matching is the generation of the moments, i.e., the coefficient matrices of the Taylor series expansion of the transfer function G ds(s).
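The whole pipeline (7)–(9) is short to sketch in code: compute the Krylov vectors \(( A^{-1}E )^{k}A^{-1}B\), orthonormalize them into V, and project the system matrices. The helper names below are our own; this is a sketch of the standard procedure, not the authors' implementation.

```python
import numpy as np

def ds_moments(E, A, B, C, k):
    """First k block moments of G_ds(s) = C (sE - A)^{-1} B around s = 0, cf. (7)."""
    moms, v = [], np.linalg.solve(A, B)            # v = A^{-1} B
    for _ in range(k):
        moms.append(-C @ v)                        # -C (A^{-1}E)^k A^{-1} B
        v = np.linalg.solve(A, E @ v)
    return moms

def ds_rom(E, A, B, C, nhat):
    """Moment-matching ROM (9); colspan(V) covers the nhat-th Krylov subspace."""
    vecs, v = [], np.linalg.solve(A, B)
    for _ in range(nhat):
        vecs.append(v)
        v = np.linalg.solve(A, E @ v)
    V, _ = np.linalg.qr(np.hstack(vecs))           # orthonormal basis, V^T V = I
    return V.T @ E @ V, V.T @ A @ V, V.T @ B, C @ V
```

For any system with A invertible, the first n̂ moments of the ROM returned by `ds_rom` coincide with those of the original system up to round-off.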

1.3 MOR of NSs

For the MOR problem of the NS Σ, we also want to find a projection matrix V for constructing a ROM that matches the first few moments of the transfer function G(s) in (5). However, when time delays are taken into account, G(s) becomes much more complicated than G ds(s) due to the exponential terms \(e^{-\mathrm{sh}_{i}}\) and \(e^{-\mathrm{sd}_{j}}\), i=1,…,p, j=1,…,q, arising from the delayed states and the derivatives of the delayed states, respectively. As direct Taylor series expansion of G(s) is impossible due to these nonlinear terms, approximating \(e^{-\mathrm{sh}_{i}}\) and \(e^{-\mathrm{sd}_{j}}\) gives an exponential-free approximation of the Taylor series expansion of G(s)

$$ G ( s ) \approx G_{0}+G_{1}s+G_{2}s^{2}+ \cdots +G_{n}s^{n}+\cdots , $$
(10)

where the G i , i=0,1,…, are constant matrices called the approximated moments of the NS Σ. Two kinds of approximation of the exponential terms are used in this paper. One is the Padé approximation, the most frequently used method for approximating them by finite rational functions. The other is to expand the exponential terms in their Taylor series. The former gives rise to a ROM modeled as a DS in Sect. 2 and the latter results in a ROM modeled as a NS in Sect. 3.

2 ROM by Padé Approximation

The following lemma shows that the exponential term \(e^{-\mathrm{sh}_{i}}\) can be approximated by the transfer function of an LTI system in terms of Padé approximation. The most important advantage is that the G i in (10) are then expressed as moments of an expanded DS, so the projection matrix proposed in [2, 7] can be used directly to construct the ROM.

Lemma 1

\(e^{-\mathrm{sh}_{i}}\) is approximated by the \(\beta_{h_{i}}\)th order transfer function of an LTI system

$$ e^{-\mathrm{sh}_{i}}\approx \bar{C}_{h_{i}} ( sI-\bar{A}_{h_{i}} )^{-1}\bar{B}_{h_{i}}+\bar{D}_{h_{i}}, $$
(11)

where

(12)
(13)
(14)
(15)
(16)

and \(\alpha_{h_{i}}\) and \(\beta_{h_{i}}\) with \(\alpha_{h_{i}}\leq \beta_{h_{i}}\) are positive integers.

Proof

From [19, p. 557], \(e^{-\mathrm{sh}_{i}}\) can be approximated by the \(\beta_{h_{i}}\)th order Padé approximation,

$$ e^{-\mathrm{sh}_{i}}\approx \frac{\sum_{k=0}^{\alpha _{h_{i}}}b_{k}s^{k}}{\sum_{k=0}^{\beta _{h_{i}}}a_{k}s^{k}}, $$

where a k and b k are defined in (15) and (16). First, we assume that \(\alpha_{h_{i}}=\beta_{h_{i}}\). It is easy to get

(17)

It follows by [1, Theorem 3.5.1] that the controllable canonical realization of the second term in (17) is equivalent to

$$ \bar{C}_{h_{i}} ( sI-\bar{A}_{h_{i}} )^{-1} \bar{B}_{h_{i}}, $$

which gives (11), with \(\bar{C}_{h_{i}}\), \(\bar{A}_{h_{i}}\) and \(\bar{B}_{h_{i}}\) given in (12)–(14). In the case of \(\alpha_{h_{i}}=\beta_{h_{i}}-1\), i.e., \(b_{\beta _{h_{i}}}=0\), (17) becomes a \(\beta_{h_{i}}\)th order transfer function

$$ e^{-\mathrm{sh}_{i}}\approx \frac{\sum_{k=0}^{\beta _{h_{i}}-1}b_{k}s^{k}}{\sum_{k=0}^{\beta _{h_{i}}}a_{k}s^{k}}=\frac{\sum_{k=0}^{\beta _{h_{i}}-1} ( b_{k}/a_{\beta _{h_{i}}} ) s^{k}}{s^{\beta _{h_{i}}}+\sum_{k=0}^{\beta _{h_{i}}-1} ( a_{k}/a_{\beta _{h_{i}}} ) s^{k}}, $$
(18)

which results in (11) for the case \(\alpha_{h_{i}}=\beta_{h_{i}}-1\) by the controllable canonical realization again. The case \(\alpha_{h_{i}}<\beta_{h_{i}}-1\) follows similarly by setting \(b_{\alpha _{h_{i}}+1}=\cdots =b_{\beta _{h_{i}}-1}=0\). The conclusion holds. □
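The two steps of Lemma 1 can be sketched numerically: solve for the [α/β] Padé coefficients of \(e^{-hs}\) from its Taylor series, then realize the rational function in controllable canonical form. For simplicity the sketch below takes α = β − 1, so the approximant is strictly proper and the direct term \(\bar{D}\) vanishes; the function names are our own.

```python
import numpy as np
from math import factorial

def pade_coeffs(h, alpha, beta):
    """[alpha/beta] Pade approximant of e^{-hs}: returns (b, a) with
    e^{-hs} ~ (sum_k b_k s^k) / (sum_k a_k s^k), normalized so a_0 = 1."""
    c = np.array([(-h) ** k / factorial(k) for k in range(alpha + beta + 1)])
    # a_1..a_beta from the cancellation conditions on s^{alpha+1}..s^{alpha+beta}
    M = np.array([[c[alpha + r - j] if alpha + r - j >= 0 else 0.0
                   for j in range(1, beta + 1)] for r in range(1, beta + 1)])
    a = np.concatenate(([1.0], np.linalg.solve(M, -c[alpha + 1:alpha + beta + 1])))
    b = np.array([sum(a[j] * c[k - j] for j in range(min(k, beta) + 1))
                  for k in range(alpha + 1)])
    return b, a

def canonical_realization(b, a):
    """Controllable canonical (A, B, C) of a strictly proper b(s)/a(s)."""
    n = len(a) - 1
    A = np.zeros((n, n)); A[:-1, 1:] = np.eye(n - 1)
    A[-1, :] = -a[:-1] / a[-1]           # monic denominator in the last row
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n)); C[0, :len(b)] = b / a[-1]
    return A, B, C
```

With h = 1, α = 3 and β = 4, the realization reproduces \(e^{-s}\) to better than single-precision accuracy for \(|s|\lesssim 1\), consistent with the accuracy of the underlying Padé approximant.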

The following proposition, which follows from Lemma 1, gives the resulting approximation of G(s).

Proposition 1

G(s) is approximated by

(19)

where

(20)
(21)
(22)
(23)
(24)
(25)
(26)
(27)

and \(\bar{C}_{h_{i}}\), \(\bar{A}_{h_{i}}\), \(\bar{B}_{h_{i}}\), \(\bar{D}_{h_{i}}\), \(r_{h_{i}}\), i=1,…,p, and \(\bar{C}_{d_{j}}\), \(\bar{A}_{d_{j}}\), \(\bar{B}_{d_{j}}\), \(\bar{D}_{d_{j}}\) and \(r_{d_{j}}\), j=1,…,q, are obtained from Lemma 1.

Proof

From Lemma 1, \(e^{-\mathrm{sh}_{i}}\) is approximated by the \(\beta_{h_{i}}\)th order transfer function

$$ \tilde{C}_{h_{i}} ( sI-\tilde{A}_{h_{i}} )^{-1} \tilde{B}_{h_{i}}+\tilde{D}_{h_{i}} $$

from which it follows that

$$ \mathcal{L} \bigl( x ( t-h_{i} ) \bigr) =e^{-\mathrm{sh}_{i}}X ( s ) \approx \bigl( \tilde{C}_{h_{i}} ( sI-\tilde{A}_{h_{i}} )^{-1}\tilde{B}_{h_{i}}+\tilde{D}_{h_{i}} \bigr) X ( s ) , $$

where X(s) is the Laplace transform of x(t) and \(\tilde{C}_{h_{i}}\), \(\tilde{A}_{h_{i}}\), \(\tilde{B}_{h_{i}}\) and \(\tilde{D}_{h_{i}}\), i=1,…,p, are defined in (23) and (24). So \(x ( t-h_{i} ) \), i=1,…,p, can be treated as the output of a \(\beta_{h_{i}}\)th order LTI system \(\varSigma_{h_{i}}\),

Similarly, it is easy to get

$$ \mathcal{L} \bigl( \dot{x}(t-d_{j}) \bigr) =se^{-\mathrm{sd}_{j}}X ( s ) \approx \bigl( \tilde{C}_{d_{j}} ( sI-\tilde{A}_{d_{j}} )^{-1}\tilde{B}_{d_{j}}+\tilde{D}_{d_{j}} \bigr) X ( s ) , $$

which can be treated as the output of a \(\beta_{d_{j}}\)th order LTI system \(\varSigma_{d_{j}}\),

where \(\tilde{C}_{d_{j}}\), \(\tilde{A}_{d_{j}}\), \(\tilde{B}_{d_{j}}\) and \(\tilde{D}_{d_{j}}\), j=1,…,q, are given in (25) and (26). Together with the NS Σ, we have

(28)

By defining \(x_{s} ( t ) =[ x^{T} ( t ) \ x_{h_{1}}^{T} ( t ) \ \cdots \ x_{h_{p}}^{T} ( t ) \ x_{d_{1}}^{T} ( t ) \ \cdots \ x_{d_{q}}^{T} ( t )]^{T}\), the system in (28) can be rewritten as an expanded DS Σ s ,

(29)

with \(\mathcal{E}\), \(\mathcal{A}\), \(\mathcal{B}\ \)and \(\mathcal{C}\) being defined in (20)–(22). Then we can say that the NS Σ can be approximated by a DS Σ s in (29), i.e., \(G ( s ) \approx \mathcal{C} ( s\mathcal{E}-\mathcal{A} )^{-1}\mathcal{B}\), which further gives (19). □

By matching approximated moments in (19), we get the ROM immediately.

Theorem 1

The \(\hat{n}\) th reduced-order DS to approximate the NS Σ is given by

where V is obtained by

$$ \operatorname{colspan} ( V ) \supseteq \operatorname{Kr} \bigl( \bigl( \mathcal{A}^{-1}\mathcal{E}\bigr),\mathcal{A}^{-1} \mathcal{B},\hat{n} \bigr) , \quad V^{T}V=I. $$
(30)

Proof

The projection matrix V is defined in (30) using the same method as in [2]. From (30) and following the same steps as in [35], we have

Thus, the DS \(\hat{\varSigma}_{s}\) matches the first \(\hat{n}\) moments of the DS Σ s , which gives an approximation to the NS Σ. □

Remark 1

By Padé approximation of \(e^{-\mathrm{sh}_{i}}\) and \(e^{-\mathrm{sd}_{j}}\), the MOR problem of the NS Σ is transformed into the MOR problem of the delay-free DS Σ s . The main advantage is that standard state space techniques can be applied directly to the ROM of the DS Σ s for analysis and simulation. Furthermore, practical application of a reduced-order DS is more convenient than a ROM with delay, as most traditional simulation software handles only delay-free systems. However, the order of the DS Σ s , \(\tilde{n}=n ( 1+\sum_{i=1}^{p}\beta_{h_{i}}+\sum_{j=1}^{q}\beta_{d_{j}} ) \) in (27), is determined by the order of the Padé approximations of \(e^{-\mathrm{sh}_{i}}\) and \(e^{-\mathrm{sd}_{j}}\), and is higher than n, especially when p and q are large. So the result in Theorem 1 is more suitable for NSs with small p and q, or for cases where the ROM must be a delay-free system.

Remark 2

When \(A_{d_{j}}=0\), j=1,…,q, the NS Σ becomes a RS Σ rs ,

(31)

and the result in Theorem 1 is also applicable to the RS Σ rs by deleting columns, rows and elements related to \(A_{d_{j}}\) from (20)–(22). Theorem 1 is also true when \(A_{h_{i}}=A_{d_{j}}=0\), which is the MOR result by moment matching for the DS in [35].

3 ROM by Taylor Series Expansion

3.1 Moment Matching Around \(s_{0}=0\)

The inverse formula shown below is needed for later development.

Lemma 2

(See [12], p. 679)

The inverse of \(X_{0}+sX_{1}+s^{2}X_{2}+\cdots +s^{r-1}X_{r-1}\) is given by

$$ \bigl( X_{0}+sX_{1}+s^{2}X_{2}+\cdots +s^{r-1}X_{r-1} \bigr)^{-1}= \Biggl( \sum_{k=0}^{\infty }M_{k}s^{k} \Biggr) X_{0}^{-1}, $$

where

$$ M_{k}=-\sum_{j=0}^{k-1}X_{0}^{-1}X_{k-j}M_{j},\quad M_{0}=I. $$
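The recursion in Lemma 2 is easy to check numerically: with the M k generated as above, \(( \sum_{k}M_{k}s^{k} ) X_{0}^{-1}\) inverts the matrix polynomial order by order, i.e., the coefficient of \(s^{k}\) in the product is I for k=0 and 0 for k≥1. A small numpy sketch (names are ours):

```python
import numpy as np

def inverse_coeffs(X, kmax):
    """M_0..M_kmax from Lemma 2 for the matrix polynomial X[0] + s X[1] + ... ."""
    n = X[0].shape[0]
    X0inv = np.linalg.inv(X[0])
    M = [np.eye(n)]
    for k in range(1, kmax + 1):
        # M_k = -sum_{j<k} X0^{-1} X_{k-j} M_j, with X_i = 0 beyond the given list
        M.append(-sum((X0inv @ X[k - j] @ M[j] for j in range(k) if k - j < len(X)),
                      np.zeros((n, n))))
    return M
```

The check below multiplies the polynomial by the candidate inverse and verifies that all low-order coefficients vanish except the identity at order zero.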

In order to approximate G X (s) around s 0=0,

$$ \varGamma_{0}=-A-\sum_{i=1}^{p}A_{h_{i}}, $$
(32)

is assumed to be invertible. When the NS Σ reduces to the DS Σ ds in (6), the nonsingularity assumption on Γ 0=−A reduces to the standard assumption for computing the moments of the DS in (7). In the following proposition, we approximate G X (s) around s 0=0 by combining the idea of approximating the exponential terms \(e^{-\mathrm{sh}_{i}}\) and \(e^{-\mathrm{sd}_{j}}\), i=1,…,p, j=1,…,q, by their Taylor series expansions, as in [42, p. 834], with the inverse formula in Lemma 2. The G i in (10) are then expressed in terms of matrices of the same dimension as the original NS Σ, and a lower-order NS is obtained by matching these G i .

Proposition 2

G X (s) is approximated around s 0=0 by

$$ G_{X} ( s ) \approx G_{\mathrm{app}} ( s ) = \Biggl( \sum _{k=0}^{\infty }L_{k}s^{k} \Biggr) \varGamma_{0}^{-1}B, $$
(33)

where

$$ L_{k}=-\sum_{j=0}^{k-1}\varGamma_{0}^{-1}\varGamma_{k-j}L_{j},\quad L_{0}=I, $$
(34)
$$ \varGamma_{1}=E+\sum_{i=1}^{p}h_{i}A_{h_{i}}-\sum_{j=1}^{q}A_{d_{j}}, $$
(35)
$$ \varGamma_{k}=-\sum_{i=1}^{p}\frac{ ( -h_{i} )^{k}}{k!}A_{h_{i}}-\sum_{j=1}^{q}\frac{ ( -d_{j} )^{k-1}}{ ( k-1 ) !}A_{d_{j}},\quad k\geq 2. $$
(36)

Proof

We first expand exponential terms \(e^{-\mathrm{sh}_{i}}\) and \(e^{-\mathrm{sd}_{j}}\), i=1,…,p, j=1,…,q, by their Taylor series expansions around s 0=0 [42, p. 834],

$$ e^{-\mathrm{sh}_{i}}=\sum_{k=0}^{\infty } \frac{ ( -h_{i} )^{k}}{k!}s^{k}\quad\mbox{and}\quad e^{-\mathrm{sd}_{j}}=\sum _{k=0}^{\infty }\frac{ ( -d_{j} )^{k}}{k!}s^{k}, $$

which render

$$ sE-A-\sum_{i=1}^{p}A_{h_{i}}e^{-\mathrm{sh}_{i}}- \sum_{j=1}^{q}A_{d_{j}}se^{-\mathrm{sd}_{j}}=\varGamma_{0}+s\varGamma_{1}+s^{2} \varGamma_{2}+\cdots +s^{n}\varGamma_{n}+\cdots , $$
(37)

where Γ k , k=0,1,…, are defined in (32), (35) and (36). If we truncate the right hand side of (37) to the first \(r\geq \hat{n}\) terms, we get

$$ G_{X} ( s ) \approx G_{\mathrm{app}} ( s ) = \bigl( \varGamma_{0}+s\varGamma_{1}+s^{2} \varGamma_{2}+\cdots +s^{r-1}\varGamma_{r-1} \bigr)^{-1}B. $$
(38)

Then (33) is obtained directly by applying the inverse formula in Lemma 2 to G app(s). □
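Proposition 2 can be sketched directly: build the Γ k from the series coefficients derived in the proof (Γ 0 from (32); Γ 1 collects E, the \(h_{i}A_{h_{i}}\) and \(-A_{d_{j}}\) terms; higher Γ k collect the factorial-weighted delay powers), run the L k recursion, and compare the truncated series with the exact G X (s) near s=0. Function names are ours.

```python
import numpy as np
from math import factorial

def gamma_k(E, A, Ah, h, Ad, d, k):
    """Coefficient Gamma_k of s^k in the expansion (37) of the NS symbol."""
    n = A.shape[0]
    if k == 0:
        return -A - sum(Ah, np.zeros((n, n)))
    G = np.zeros((n, n))
    for Ahi, hi in zip(Ah, h):
        G -= ((-hi) ** k / factorial(k)) * Ahi            # from e^{-s h_i}
    for Adj, dj in zip(Ad, d):
        G -= ((-dj) ** (k - 1) / factorial(k - 1)) * Adj  # from s e^{-s d_j}
    return G + E if k == 1 else G

def ns_state_moments(E, A, Ah, h, Ad, d, B, kmax):
    """Coefficients L_k Gamma_0^{-1} B of the series (33) for G_X(s)."""
    G0inv = np.linalg.inv(gamma_k(E, A, Ah, h, Ad, d, 0))
    L = [np.eye(A.shape[0])]
    for k in range(1, kmax + 1):
        L.append(-sum(G0inv @ gamma_k(E, A, Ah, h, Ad, d, k - j) @ L[j]
                      for j in range(k)))
    return [Lk @ G0inv @ B for Lk in L]
```

Summing \(\sum_{k}L_{k}\varGamma_{0}^{-1}Bs^{k}\) over a few terms reproduces the exact \(G_{X} ( s ) \) to high accuracy for small |s|.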

From Proposition 2, the Krylov subspace for the NS Σ is approximated by the coefficient matrices of G app(s) in (38), as stated in the following proposition.

Proposition 3

The Krylov subspace around 0 for the NS Σ is approximated by

$$ K ( 0,\varSigma ,\hat{n} ) =\operatorname{colspan} \bigl\{ L_{0} \varGamma_{0}^{-1}B, L_{1}\varGamma_{0}^{-1}B, \ldots , L_{\hat{n}-1}\varGamma_{0}^{-1}B \bigr\} . $$

Remark 3

In the case of \(A_{h_{i}}=A_{d_{j}}=0\), i=1,…,p, j=1,…,q, \(K ( 0,\varSigma ,\hat{n} ) \) becomes \(K ( 0,\varSigma_{\mathrm{ds}},\hat{n}) \) in (8). So, Proposition 3 not only provides an approximation to the Krylov subspace for the NS, but also extends the Krylov subspace from the DS [7] to the NS.

Similar to the idea of generating the projection matrix V (\(\operatorname{colspan} ( V )\! \supseteq K ( 0,\varSigma_{\mathrm{ds}},\hat{n})\), V T V=I) for the MOR of the DS [35] or the single parameter linear system [12], the projection matrix V is obtained by

$$ \operatorname{colspan} ( V ) \supseteq K ( 0,\varSigma ,\hat{n} ) ,\quad V^{T}V=I, $$
(39)

to get the reduced-order system \(\hat{\varSigma}\) in (40).

Theorem 2

The \(\hat{n}\) th reduced-order NS is given by

$$ \hat{E}\dot{\hat{x}} ( t ) =\hat{A}\hat{x} ( t ) +\sum_{i=1}^{p}\hat{A}_{h_{i}}\hat{x} ( t-h_{i} ) +\sum_{j=1}^{q}\hat{A}_{d_{j}}\dot{\hat{x}} ( t-d_{j} ) +\hat{B}u ( t ) ,\quad\quad \hat{y} ( t ) =\hat{C}\hat{x} ( t ) , $$
(40)

where \(\hat{E}=V^{T}EV\), \(\hat{A}=V^{T}AV\), \(\hat{A}_{h_{i}}=V^{T}A_{h_{i}}V\), \(\hat{A}_{d_{j}}=V^{T}A_{d_{j}}V\), \(\hat{B}=V^{T}B\) and \(\hat{C}=CV\).

Proof

From Proposition 2, X(s) can be approximated by

where U(s) is the Laplace transform of u(t). Inspired by [12, 35], we seek a projection matrix V in (39) matching the first \(\hat{n}\) terms of the approximated G X (s), which are also the first \(\hat{n}\) coefficients of θ(s). By assuming \(\theta ( s ) =V\hat{\theta}( s ) \), and considering

$$ \bigl( \varGamma_{0}+s\varGamma_{1}+s^{2} \varGamma_{2}+\cdots +s^{r-1}\varGamma_{r-1} \bigr) \theta ( s ) =BU ( s ) $$

we obtain

where Y(s) is the Laplace transform of y(t). From the expression of Γ k , \(k=0,1,\ldots ,\hat{n}-1\), in (32), (35) and (36), it is easy to show that

Consequently, it follows that

which is equivalent to the reduced-order NS \(\hat{\varSigma}\), where \(\hat{x}( t ) \) is the inverse Laplace transform of \(\hat{\theta} ( s ) \). □
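Putting Proposition 3 and Theorem 2 together gives a compact end-to-end sketch: orthonormalize the approximated Krylov vectors \(L_{k}\varGamma_{0}^{-1}B\) into V, then project every system matrix. We assume here, by analogy with (9), that the reduced matrices are \(\hat{E}=V^{T}EV\), \(\hat{A}=V^{T}AV\), \(\hat{A}_{h_{i}}=V^{T}A_{h_{i}}V\), \(\hat{A}_{d_{j}}=V^{T}A_{d_{j}}V\), \(\hat{B}=V^{T}B\) and \(\hat{C}=CV\); the helper names are ours.

```python
import numpy as np
from math import factorial

def ns_krylov(E, A, Ah, h, Ad, d, B, nhat):
    """Approximated Krylov vectors L_k Gamma_0^{-1} B, k = 0..nhat-1 (Prop. 3)."""
    n = A.shape[0]
    def gamma(k):  # Gamma_k from the expansion (37)
        if k == 0:
            return -A - sum(Ah, np.zeros((n, n)))
        G = np.zeros((n, n))
        for Ahi, hi in zip(Ah, h):
            G -= ((-hi) ** k / factorial(k)) * Ahi
        for Adj, dj in zip(Ad, d):
            G -= ((-dj) ** (k - 1) / factorial(k - 1)) * Adj
        return G + E if k == 1 else G
    G0inv = np.linalg.inv(gamma(0))
    L = [np.eye(n)]
    for k in range(1, nhat):
        L.append(-sum(G0inv @ gamma(k - j) @ L[j] for j in range(k)))
    return [Lk @ G0inv @ B for Lk in L]

def ns_rom(E, A, Ah, h, Ad, d, B, C, nhat):
    """One-sided moment-matching ROM of the NS around s0 = 0 (Theorem 2 sketch)."""
    V, _ = np.linalg.qr(np.hstack(ns_krylov(E, A, Ah, h, Ad, d, B, nhat)))
    proj = lambda M: V.T @ M @ V
    return (proj(E), proj(A), [proj(M) for M in Ah], h,
            [proj(M) for M in Ad], d, V.T @ B, C @ V)
```

The delays are untouched, so the ROM is again a NS of order n̂, and its first n̂ approximated moments agree with those of the original system.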

Corollary 1

The result in Theorem 2 remains true for the RS Σ rs in (31) by setting \(A_{d_{j}}=0\), j=1,…,q, in (35) and (36). In the case of p=q=1, the projection matrix V in (39) is given by

Remark 4

The MOR result in [42] considers the reduction of a special kind of RS with C=B T. The moments of this special RS are approximated by the moments of a large scale DS with system matrices given by

where r is defined in (38). This has two obvious shortcomings. One is that the dimension of the ROM may be higher than that of the original RS, since a low-order ROM may cause a large error. The other is that the high-order system matrices \(\bar{\mathcal{C}}\), \(\bar{\mathcal{G}}\) and \(\bar{\mathcal{L}}\) make this method fail on higher-order RSs, as their storage can be memory-demanding. Proposition 2 avoids this by using the inverse formula in Lemma 2 to produce moments of the same dimension as the original NS. The comparison with the result in [42, p. 834] is shown in Examples 2 and 3 in Sect. 4.

Remark 5

The MOR problems with the ROM in NS form are also investigated in [43] and [45], which guarantee that the \(H_{\infty }\) norm or the energy-to-peak gain of the error system is less than a given scalar in terms of LMIs with inverse constraints. These are solved by the CCL algorithm [13], which transforms them into a minimization problem subject to the original LMIs plus additional LMIs arising from the inverse constraints; the result is then solved by the IPM. However, the IPM requires that all matrix variables be stacked into a very large vector variable [8]. This may cause an out-of-memory problem for large matrix variables, so the IPM cannot handle large scale LMIs. Moreover, the computational cost of solving LMIs with inverse constraints is very high because a minimization problem must be solved. Although the LMI-based method provides a good approximation of the original NS by ensuring global accuracy, as a trade-off, its high computational cost makes it inapplicable to reducing high-order NSs with MCDs. The comparison with the methods in [43] and [45] is given in Example 4 in Sect. 4.

3.2 Extension to the Point \(s_{0}\neq 0\) and Multi-point Moment Matching

The result in Theorem 2 is extended to a nonzero point s 0 in the following theorem. Assume that

$$ \varUpsilon_{0} ( s_{0} ) =s_{0}E-A-\sum _{i=1}^{p}e^{-s_{0}h_{i}}A_{h_{i}}-\sum_{j=1}^{q}s_{0}e^{-s_{0}d_{j}}A_{d_{j}}, $$
(41)

is nonsingular in order to approximate G X (s) around s 0.

Theorem 3

With the projection matrix V obtained from the approximated Krylov subspace around s 0 ,

$$ \operatorname{colspan} ( V ) \supseteq K ( s_{0},\varSigma , \hat{n}) ,\quad V^{T}V=I, $$
(42)

the \(\hat{n}\) th reduced-order NS is given by

where

(43)
(44)
(45)
(46)

Proof

By expanding \(e^{-\mathrm{sh}_{i}}\) and \(e^{-\mathrm{sd}_{j}}\), i=1,…,p, j=1,…,q, by their Taylor series expansions around s 0,

$$ e^{-\mathrm{sh}_{i}}=\sum_{k=0}^{\infty }e^{-s_{0}h_{i}} \frac{ ( -h_{i} )^{k}}{k!} ( s-s_{0} )^{k}\quad\mbox{and}\quad e^{-\mathrm{sd}_{j}}=\sum_{k=0}^{\infty }e^{-s_{0}d_{j}} \frac{ ( -d_{j} )^{k}}{k!} ( s-s_{0} )^{k}, $$

we have

where ϒ k , k=0,1,…, are defined in (41), (45) and (46). Then the proof can be finished similarly to the proof in Proposition 2 and Theorem 2. □
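The shifted coefficients ϒ k can be cross-checked numerically: summing \(\sum_{k}\varUpsilon_{k} ( s-s_{0} )^{k}\) must reproduce the exact symbol \(sE-A-\sum_{i}A_{h_{i}}e^{-\mathrm{sh}_{i}}-\sum_{j}A_{d_{j}}se^{-\mathrm{sd}_{j}}\) near s 0. The sketch below reads the ϒ k off the shifted expansions above (ϒ 0 is (41); for k≥1 the \(A_{d_{j}}\) series contributes both an \(s_{0}\)-weighted term and a first-order term from the factor s); names are ours.

```python
import numpy as np
from math import factorial

def upsilon_k(E, A, Ah, h, Ad, d, s0, k):
    """Coefficient of (s - s0)^k in the shifted expansion of the NS symbol."""
    n = A.shape[0]
    U = np.zeros((n, n), dtype=complex)
    if k == 0:
        U += s0 * E - A
    if k == 1:
        U += E
    for Ahi, hi in zip(Ah, h):
        U -= np.exp(-s0 * hi) * ((-hi) ** k / factorial(k)) * Ahi
    for Adj, dj in zip(Ad, d):
        c = s0 * (-dj) ** k / factorial(k)                # s0 * e^{-s d_j} part
        if k >= 1:
            c += (-dj) ** (k - 1) / factorial(k - 1)      # (s - s0) * e^{-s d_j} part
        U -= np.exp(-s0 * dj) * c * Adj
    return U
```

At \(s_{0}=0\) this reduces to the Γ k of Sect. 3.1.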

Theorem 3 can be further extended to the multi-point case.

Corollary 2

The projection matrix V in (42), obtained from the approximated Krylov subspace around multiple points \(s_{1},s_{2},\ldots ,s_{g}\), is given by

$$ \operatorname{colspan} ( V ) \supseteq \bigcup_{i=1}^{g}K ( s_{i},\varSigma ,\hat{n} ) ,\quad V^{T}V=I, $$

where \(K ( s_{i},\varSigma ,\hat{n} ) \), i=1,…,g, are defined in (43).

4 Numerical Examples

All the computation described in this section was performed on an Intel Core 2 Quad processor (2.66 GHz CPU, 2.87 GB memory). The first example shows that, for the same ROM order, the Taylor series expansion method is more accurate than the Padé approximation method, although the latter provides a delay-free ROM that is easier to simulate and analyze with standard state space techniques.

Example 1

An artificial NS Σ of order 24, with a single delay in the state, a single delay in the derivative of the state, two inputs and two outputs, is to be reduced. The 8th order Padé approximation is chosen to approximate the exponential terms in the original NS, which gives a transformed DS of order 408 by Proposition 1. A fourth-order ROM modeled as a DS is derived from Theorem 1. If the exponential terms are instead approximated by their Taylor series truncated to the first 4 terms, a fourth-order NS is obtained from Theorem 2. The time domain responses, errors and relative errors are compared in Fig. 1 for the input \(u ( t ) =e^{-1.5t}\sin ( 2t ) [ 1\ 1 ]^{T}\). The frequency domain comparison in terms of the maximal singular value (MSV) of the transfer functions is shown in Fig. 2. These figures show that the ROM from the Taylor series expansion method captures the original NS better than the ROM from the Padé approximation method. A higher-order ROM by the Padé approximation method may match the original system better (see Remark 1 for details).

Fig. 1
figure 1

Time domain responses of Example 1. (a) Time domain responses. (b) Time domain errors. (c) Time domain relative errors

Fig. 2
figure 2

Frequency domain responses of Example 1. (a) Frequency domain responses. (b) Frequency domain errors. (c) Frequency domain relative errors

The next example shows that multi-point moment matching outperforms single-point moment matching, especially when the frequency domain response of the original system has multiple local minima and maxima.

Example 2

A RS is constructed by borrowing \(E=I_{1006}\), A, B and C from the FOM example in [10] and setting \(A_{h_{1}}=0.1I_{1006}\) and h 1=1. The method in [42] fails due to out of memory (see Remark 4 for details). Using Theorems 2 and 3 and Corollary 2, the time domain comparison in Fig. 3 for the input \(u ( t ) =e^{t}\sin ( 8t ) \) shows that the 16th-order RS obtained by multi-point moment matching gives a better approximation than the 16th-order RS obtained by matching moments around zero only. The frequency domain responses, errors and relative errors in Fig. 4 lead to the same conclusion.

Fig. 3
figure 3

Time domain responses of Example 2. (a) Time domain responses. (b) Time domain errors. (c) Time domain relative errors

Fig. 4
figure 4

Frequency domain responses of Example 2. (a) Frequency domain responses. (b) Frequency domain errors. (c) Frequency domain relative errors

Example 3

In this example, a network composed of a transmission line with five lands and 50 conductors, representing the delay element, and a circuit with RLC components, is considered for reduction. The network is modeled as a RS of order 602 with four delays in the states. The method in [42] also fails on this example due to out of memory. With the proposed method in Theorem 2, a reduced-order RS of order 30 is obtained by approximating the exponential terms by the first 30 terms of their Taylor series expansions. The time domain responses, error and relative error are plotted in Fig. 5 for \(u ( t ) =\sin ( 10^{12}t ) \). Moreover, Figs. 6 and 7 show the frequency domain responses, errors and relative errors at low and high frequencies, respectively. It is clear that the reduced-order RS matches the original system very well.

Fig. 5
figure 5

Time domain responses of Example 3. (a) Time domain responses. (b) Time domain errors. (c) Time domain relative errors

Fig. 6
figure 6

Frequency domain responses at low frequency of Example 3. (a) Frequency domain responses at low frequency. (b) Frequency domain errors at low frequency. (c) Frequency domain relative errors at low frequency

Fig. 7
figure 7

Frequency domain responses at high frequency of Example 3. (a) Frequency domain responses at high frequency. (b) Frequency domain errors at high frequency. (c) Frequency domain relative errors at high frequency

The comparison with the existing LMI-based methods in [43, 45] for reducing NSs and the method in [42] for reducing RSs is given in the following example.

Example 4

Four examples are used for comparison: the three examples above and a small PEEC model borrowed from [6, 24]. The system matrices A, \(A_{h_{1}}\) and \(A_{d_{1}}\) and the delays h 1 and d 1 are given in [6]. B and C are chosen to be

$$ B=\left [ \begin{array}{c@{\quad}c@{\quad}c} 1 & 1 & 0\end{array} \right ] ,\quad\quad C=\left [ \begin{array}{c@{\quad}c@{\quad}c} 0 & 1 & 1 \\ 1 & 1 & 0\end{array} \right ] . $$

The time domain and frequency domain comparisons are given in Figs. 8 and 9 for the time domain input u(t)=cos(5t). Clearly the ROM from the proposed method matches the original system better than the ROM from the LMI-based method. Moreover, as shown in Table 1, the proposed method in Theorem 2, which constructs a projection matrix to match the approximated moments, uses less time than the LMI-based method. We also conclude from Table 1 that the LMI-based method cannot reduce NSs of order higher than 24 due to the out-of-memory problem (see Remark 5 for details). The method in [42] has the same problem in reducing the RS of order 1006. The conclusion is that the proposed method is of more practical use, especially in reducing large scale NSs.

Fig. 8
figure 8

Time domain responses of Example 4. (a) Time domain responses. (b) Time domain errors. (c) Time domain relative errors

Fig. 9
figure 9

Frequency domain responses of Example 4. (a) Frequency domain responses. (b) Frequency domain errors. (c) Frequency domain relative errors

Table 1 Comparison with other methods

5 Conclusions

In this paper, the moment matching method is used to obtain two different kinds of ROM approximating a NS with MCDs, depending on how the exponential terms in the transfer function of the original NS are approximated. The Padé approximation of the exponential terms yields a delay-free system modeled as a high-order DS, at the obvious price of higher storage and computational complexity. However, a delay-free ROM facilitates analysis and the application of standard state space techniques, as most traditional simulation software handles only delay-free systems. The other ROM has the same structure as the original NS and is obtained by replacing the exponential terms with their Taylor series expansions. Its most important advantage is that the approximated moments have the same dimension as the original NS, which makes this method capable of reducing higher-order NSs. The proposed results apply directly to RSs by deleting the matrices related to the derivative of the delayed state. Numerical examples have demonstrated that the Taylor series expansion-based MOR method is much more suitable for reducing high-order NSs than existing MOR methods. Further research will focus on ROMs with stability and passivity preservation by adding additional constraints.