Encyclopedia of Systems and Control

Living Edition
| Editors: John Baillieul, Tariq Samad

Model Order Reduction: Techniques and Tools

  • Peter Benner
  • Heike Faßbender
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_142-1

Abstract

Model order reduction (MOR) is here understood as a computational technique to reduce the order of a dynamical system described by a set of ordinary or differential-algebraic equations (ODEs or DAEs) in order to facilitate or enable its simulation, the design of a controller, or the optimization and design of the physical system modeled. MOR focuses on representing the map from the inputs of the system to its outputs, while the internal dynamics are treated as a black box, so that the large-scale set of describing ODEs/DAEs can be replaced by a much smaller set of ODEs/DAEs without sacrificing the accuracy of the input-to-output behavior.

Problem Description

This survey is concerned with linear time-invariant (LTI) systems in state-space form
$$E\dot{x}(t) = Ax(t) + Bu(t),\quad y(t) = Cx(t) + Du(t),$$
(1)
where \(E,A \in {\mathbb{R}}^{n\times n}\) are the system matrices, \(B \in {\mathbb{R}}^{n\times m}\) is the input matrix, \(C \in {\mathbb{R}}^{p\times n}\) is the output matrix, and \(D \in {\mathbb{R}}^{p\times m}\) is the feedthrough (or input–output) matrix. The size n of the matrix A is often referred to as the order of the LTI system. It mainly determines the amount of time needed to simulate the LTI system.
Such LTI systems often arise from finite element modeling using commercial software such as ANSYS or NASTRAN, which results in a second-order differential equation of the form
$$M\ddot{x}(t) + D\dot{x}(t) + Kx(t) = Fu(t),\quad y(t) = C_{p}x(t) + C_{v}\dot{x}(t),$$
where the mass matrix M, the stiffness matrix K, and the damping matrix D are square matrices in \({\mathbb{R}}^{s\times s}\), \(F \in {\mathbb{R}}^{s\times m}\), \(C_{p},C_{v} \in {\mathbb{R}}^{q\times s}\), \(x(t) \in {\mathbb{R}}^{s}\), \(u(t) \in {\mathbb{R}}^{m}\), \(y(t) \in {\mathbb{R}}^{q}\). Such second-order differential equations are typically transformed to a mathematically equivalent first-order differential equation
$$\displaystyle\begin{array}{rcl} \underbrace{\left [\begin{array}{@{}c@{\quad }c@{}} I\quad & 0 \\ 0\quad &M \end{array} \right ]}_{E}\underbrace{\left [\begin{array}{*{10}c} \dot{x}(t)\\ \ddot{x}(t) \end{array} \right ]}_{\dot{z}(t)}& =& \underbrace{\left [\begin{array}{@{}c@{\quad }c@{}} 0 \quad & I \\ - K\quad & - D \end{array} \right ]}_{A}\underbrace{\left [\begin{array}{*{10}c} x(t)\\ \dot{x}(t) \end{array} \right ]}_{z(t)} +\underbrace{\left [\begin{array}{*{10}c} 0\\ F \end{array} \right ]}_{B}u(t) \\ y(t)& =& \underbrace{\left [\begin{array}{@{}c@{\quad }c@{}} C_{p}\quad &C_{v} \end{array} \right ]} _{C}\underbrace{\left [\begin{array}{*{10}c} x(t)\\ \dot{x}(t) \end{array} \right ]}_{z(t)}, \\ \end{array}$$
where \(E,A \in {\mathbb{R}}^{2s\times 2s}\), \(B \in {\mathbb{R}}^{2s\times m}\), \(C \in {\mathbb{R}}^{q\times 2s}\), \(z(t) \in {\mathbb{R}}^{2s}\), \(u(t) \in {\mathbb{R}}^{m}\), \(y(t) \in {\mathbb{R}}^{q}\). Various other linearizations have been proposed in the literature.
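For concreteness, the linearization above can be assembled in a few lines. The following sketch (an illustration in Python/NumPy, not part of the original entry; all matrices are random stand-ins) builds (E, A, B, C) from (M, D, K, F, C_p, C_v):

```python
import numpy as np

def linearize_second_order(M, D, K, F, Cp, Cv):
    """First-order form E z' = A z + B u, y = C z with z = [x; x']."""
    s = M.shape[0]
    E = np.block([[np.eye(s), np.zeros((s, s))],
                  [np.zeros((s, s)), M]])
    A = np.block([[np.zeros((s, s)), np.eye(s)],
                  [-K, -D]])
    B = np.vstack([np.zeros((s, F.shape[1])), F])
    C = np.hstack([Cp, Cv])
    return E, A, B, C

# Random stand-in data for a small example (s = 4 states, m = 2 inputs).
rng = np.random.default_rng(0)
s, m, q = 4, 2, 1
M = np.eye(s)
Kmat = rng.standard_normal((s, s)); Kmat = Kmat @ Kmat.T + s * np.eye(s)  # SPD stiffness
Dmat = 0.1 * np.eye(s)                                                    # light damping
F = rng.standard_normal((s, m))
Cp = rng.standard_normal((q, s)); Cv = np.zeros((q, s))
E, A, B, C = linearize_second_order(M, Dmat, Kmat, F, Cp, Cv)
print(E.shape, B.shape, C.shape)  # (8, 8) (8, 2) (1, 8)
```

Note that the first-order system has twice the order (2s) of the second-order one, which is one motivation for structure-preserving second-order MOR methods mentioned in the summary.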
The matrix E may be singular. In that case the first equation in (1) defines a system of differential-algebraic equations (DAEs); otherwise it is a system of ordinary differential equations (ODEs). For example, for \(E = \left [\begin{matrix}\scriptstyle J&\scriptstyle 0 \\ \scriptstyle 0&\scriptstyle 0\end{matrix}\right ]\) with a j × j nonsingular matrix J, only the first j equations in the left-hand side expression in (1) form ordinary differential equations, while the last n − j equations form linear algebraic equations. If further \(A = \left [\begin{matrix}\scriptstyle A_{11}&\scriptstyle A_{12} \\ \scriptstyle 0 &\scriptstyle A_{22}\end{matrix}\right ]\) and \(B = \left [\begin{matrix}\scriptstyle B_{1} \\ \scriptstyle B_{2}\end{matrix}\right ]\) with a j × j matrix A 11, a j × m matrix B 1, and a nonsingular matrix A 22, this is easily seen: partitioning the state vector \(x(t) = \left [\begin{matrix}\scriptstyle x_{1}(t) \\ \scriptstyle x_{2}(t)\end{matrix}\right ]\) with x 1(t) of length j, the DAE \(E\dot{x}(t) = Ax(t) + Bu(t)\) splits into the algebraic equation \(0 = A_{22}x_{2}(t) + B_{2}u(t)\) and the ODE
$$J\dot{x}_{1}(t) = A_{11}x_{1}(t) + \left (B_{1} - A_{12}A_{22}^{-1}B_{ 2}\right )u(t).$$
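This split can be checked numerically. In the following hedged sketch (all block matrices are random stand-ins, shifted so that J and A 22 are nonsingular), the derivative of x 1 computed from the original top block rows agrees with the reduced ODE above:

```python
import numpy as np

rng = np.random.default_rng(1)
j, k, m = 3, 2, 2                      # j differential states, k algebraic states
J = rng.standard_normal((j, j)) + 3.0 * np.eye(j)     # nonsingular
A11 = rng.standard_normal((j, j)); A12 = rng.standard_normal((j, k))
A22 = rng.standard_normal((k, k)) + 3.0 * np.eye(k)   # nonsingular
B1 = rng.standard_normal((j, m)); B2 = rng.standard_normal((k, m))
u = rng.standard_normal(m); x1 = rng.standard_normal(j)

# Algebraic constraint 0 = A22 x2 + B2 u determines x2.
x2 = -np.linalg.solve(A22, B2 @ u)
# Derivative of x1 from the original top block rows J x1' = A11 x1 + A12 x2 + B1 u:
dx1_full = np.linalg.solve(J, A11 @ x1 + A12 @ x2 + B1 @ u)
# Derivative from the reduced ODE J x1' = A11 x1 + (B1 - A12 A22^{-1} B2) u:
dx1_red = np.linalg.solve(J, A11 @ x1 + (B1 - A12 @ np.linalg.solve(A22, B2)) @ u)
print(np.allclose(dx1_full, dx1_red))  # True
```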

To simplify the description, only continuous-time systems are considered here. The discrete-time case can be treated mostly analogously; see, e.g., Antoulas (2005).

An alternative way to represent LTI systems is provided by the transfer function matrix (TFM), a matrix-valued function whose entries are rational functions. Assuming x(0) = 0 and taking Laplace transforms in (1) yields sEX(s) = AX(s) + BU(s), Y (s) = CX(s) + DU(s), where X(s), Y (s), and U(s) are the Laplace transforms of the time signals x(t), y(t), and u(t), respectively. The map from inputs U to outputs Y is then described by Y (s) = G(s)U(s) with the TFM
$$G(s) = C{(sE - A)}^{-1}B + D,\qquad s \in \mathbb{C}.$$
(2)
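Pointwise evaluation of the TFM only requires one linear solve per frequency. The following sketch (illustrative NumPy code with random test data, not from the entry) evaluates G(s) without forming the inverse explicitly:

```python
import numpy as np

def tfm(A, B, C, D, E, s):
    """Evaluate G(s) = C (sE - A)^{-1} B + D via one linear solve."""
    return C @ np.linalg.solve(s * E - A, B) + D

rng = np.random.default_rng(2)
n, m, p = 6, 2, 2
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)   # shifted toward stability
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = np.zeros((p, m))
E = np.eye(n)
G = tfm(A, B, C, D, E, 1j * 2.0)   # frequency response at omega = 2
print(G.shape)  # (2, 2)
```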
The aim of model order reduction is to find an LTI system
$$\widetilde{E}\dot{\tilde{x}}(t) =\widetilde{ A}\tilde{x}(t) +\widetilde{ B}u(t),\quad \tilde{y}(t) =\widetilde{ C}\tilde{x}(t) +\widetilde{ D}u(t)$$
(3)
of reduced order \(r \ll n\) such that the corresponding TFM
$$\widetilde{G}(s) =\widetilde{ C}{(s\widetilde{E} -\widetilde{ A})}^{-1}\widetilde{B} +\widetilde{ D}$$
(4)
approximates the original TFM (2). That is, using the same input u(t) in (1) and (3), we require that the output \(\tilde y(t)\) of the reduced-order model (ROM) (3) approximates the output y(t) of (1) well enough for the application considered (e.g., controller design). In general, one requires \(\|y(t) -\tilde{ y}(t)\| \leq \epsilon \) for all feasible inputs u(t), for (almost) all t in the time domain of interest, and for a suitable norm \(\|\cdot \|.\) In control theory one often employs the \(\mathcal{L}_{2}\)- or \(\mathcal{L}_{\infty }\)-norms on \(\mathbb{R}\) or \([0,\infty )\), respectively, to measure time signals or their Laplace transforms. In the situations considered here, the \(\mathcal{L}_{2}\)-norms employed in frequency and time domain coincide due to the Paley-Wiener theorem (or Parseval's equation or the Plancherel theorem, respectively); see Antoulas (2005) and Zhou et al. (1996) for details. As \(Y (s) -\widetilde{ Y }(s) = (G(s) -\widetilde{ G}(s))U(s)\), one can therefore consider the approximation error of the TFM, \(\|G(\cdot ) -\widetilde{ G}(\cdot )\|\), measured in an induced norm instead of the error in the output, \(\|y(\cdot ) -\tilde{ y}(\cdot )\|\).
Depending on the choice of the norm, different MOR goals can be formulated. Typical choices are (see, e.g., Antoulas (2005) for a more thorough discussion)
  • \(\|G(\cdot ) -\widetilde{ G}(\cdot )\|_{\mathcal{H}_{\infty }}\), where
    $$\|F(\cdot )\|_{\mathcal{H}_{\infty }} =\mathrm{ sup}_{s\in \mathbb{C}_{+}}\sigma _{\max }(F(s)).$$
    Here, σ max is the largest singular value of the matrix F(s). This minimizes the maximal magnitude of the frequency response of the error system and by the Paley-Wiener theorem bounds the \(\mathcal{L}_{2}\)-norm of the output error.
  • \(\|G(\cdot ) -\widetilde{ G}(\cdot )\|_{\mathcal{H}_{2}}\), where (with \(\imath = \sqrt{-1}\))
    $$\|F(\cdot )\|_{\mathcal{H}_{2}}^{2} = \frac{1} {2\pi }\displaystyle\int _{-\infty }^{+\infty }\mathrm{tr}\left (F{(\imath \omega )}^{{\ast}}F(\imath \omega )\right )d\omega .$$
    This ensures a small error \(\|y(\cdot ) -\tilde{ y}(\cdot )\|_{\mathcal{L}_{\infty }(0,\infty )} =\mathrm{ sup}_{t>0}\|y(t) -\tilde{ y}(t)\|_{\infty }\) (with \(\|\cdot \|_{\infty }\) denoting the maximum norm of a vector) uniformly over all inputs u(t) having bounded \(\mathcal{L}_{2}\)-energy, that is, \(\int _{0}^{\infty }u{(t)}^{T}u(t)dt \leq 1\); see Gugercin et al. (2008).
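As a rough illustration of how such error measures are evaluated in practice (this sketch is not from the entry), the \(\mathcal{H}_{\infty }\)-error between two realizations can be estimated by sampling the largest singular value of the error system on a frequency grid; sampling only yields a lower bound on the supremum, and dedicated algorithms exist for computing the norm reliably:

```python
import numpy as np

def hinf_sample(sys1, sys2, omegas):
    """Largest sampled sigma_max(G1(iw) - G2(iw)); a lower bound on the Hinf error."""
    (A1, B1, C1, D1, E1) = sys1
    (A2, B2, C2, D2, E2) = sys2
    err = 0.0
    for w in omegas:
        G1 = C1 @ np.linalg.solve(1j * w * E1 - A1, B1) + D1
        G2 = C2 @ np.linalg.solve(1j * w * E2 - A2, B2) + D2
        err = max(err, np.linalg.svd(G1 - G2, compute_uv=False)[0])
    return err

# Random stable test system; comparing it against itself gives zero error.
rng = np.random.default_rng(7)
n = 5
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)
B = rng.standard_normal((n, 1)); C = rng.standard_normal((1, n))
sys_full = (A, B, C, np.zeros((1, 1)), np.eye(n))
omegas = np.logspace(-2, 2, 200)
err_self = hinf_sample(sys_full, sys_full, omegas)
print(err_self)  # 0.0
```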

Besides a small approximation error, one may impose additional constraints for the ROM. One might require certain properties (such as stability and passivity) of the original systems to be preserved. Rather than considering the full nonnegative real line in time domain or the full imaginary axis in frequency domain, one can also consider bounded intervals in both domains. For these variants, see, e.g., Antoulas (2005) and Obinata and Anderson (2001).

Methods

There are a number of different methods to construct ROMs; see, e.g., Antoulas (2005), Benner et al. (2005), Obinata and Anderson (2001), and Schilders et al. (2008). Here we concentrate on projection-based methods, which restrict the full state x(t) to an r-dimensional subspace by choosing \(\tilde{x}(t) = {W}^{{\ast}}x(t),\) where W is an n × r matrix. Here the conjugate transpose of a complex-valued matrix Z is denoted by \({Z}^{{\ast}}\), while the transpose of a matrix Y is denoted by \({Y }^{T}\). Choosing \(V \in {\mathbb{C}}^{n\times r}\) such that \({W}^{{\ast}}V = I \in {\mathbb{R}}^{r\times r}\) yields an n × n projection matrix \(\Pi = V {W}^{{\ast}}\) which projects onto the r-dimensional subspace spanned by the columns of V along the kernel of \({W}^{{\ast}}\). Applying this projection to (1), one obtains the reduced-order LTI system (3) with
$$\widetilde{E} = {W}^{{\ast}}EV,\ \ \tilde{A} = {W}^{{\ast}}AV,\ \ \widetilde{B} = {W}^{{\ast}}B,\ \ \widetilde{C} = CV \ $$
(5)
and an unchanged \(\widetilde{D} = D.\) If V = W, Π is an orthogonal projector and is called a Galerkin projection. If \(V \neq W\), Π is an oblique projector, sometimes called a Petrov-Galerkin projection.

In the following, we will briefly discuss the main classes of methods to construct suitable matrices V and W: truncation-based methods and interpolation-based methods. Other methods, in particular combinations of the two classes discussed here, can be found in the literature. In case the original LTI system is real, it is often desirable to construct a real reduced-order model. All of the methods discussed in the following either do construct a real reduced-order system or there is a variant of the method which does. In order to keep this exposition at a reasonable length, the reader is referred to the cited literature.

Truncation Based Methods

The general idea of truncation is most easily explained by modal truncation: For simplicity, assume that E = I and that A is diagonalizable, \({T}^{-1}AT = D_{A} =\mathrm{ diag}(\lambda _{1},\ldots ,\lambda _{n})\). Further we assume that the eigenvalues \(\lambda _{\ell} \in \mathbb{C}\) of A can be ordered such that
$$\mathrm{Re}(\lambda _{n}) \leq \mathrm{ Re}(\lambda _{n-1}) \leq \ldots \leq \mathrm{ Re}(\lambda _{1}) < 0,$$
(6)
(i.e., all eigenvalues lie in the open left half of the complex plane). This implies that the system is stable. Let V be the n × r matrix consisting of the first r columns of T and let \({W}^{{\ast}}\) consist of the first r rows of \({T}^{-1}\), so that \({W}^{{\ast}}V = I\). Applying the transformation T to the LTI system (1) yields
$$\displaystyle\begin{array}{rcl}{ T}^{-1}\dot{x}(t)& =& ({T}^{-1}AT){T}^{-1}x(t) + ({T}^{-1}B)u(t)\end{array}$$
(7)
$$y(t) = (CT)\,{T}^{-1}x(t) + Du(t)$$
(8)
with
$${T}^{-1}AT = \left [\begin{array}{@{}c@{\quad }c@{}} {W}^{{\ast}}AV \quad & \\ \quad &A_{2} \end{array} \right ],\ \ {T}^{-1}B = \left [\begin{array}{c} {W}^{{\ast}}B \\ B_{2} \end{array} \right ],$$
and \(CT = \left [CV \ C_{2}\right ],\) where \({W}^{{\ast}}AV =\mathrm{ diag}(\lambda _{1},\ldots ,\lambda _{r})\) and \(A_{2} =\mathrm{ diag}(\lambda _{r+1},\ldots ,\lambda _{n}).\) Preserving the r dominant poles (the eigenvalues with largest real parts) and truncating the rest (i.e., discarding A 2, B 2, and C 2 from (7)) yields the ROM as in (5). It can be shown that the error bound
$$\|G(\cdot ) -\widetilde{ G}(\cdot )\|_{\mathcal{H}_{\infty }} \leq \| C_{2}\|\ \|B_{2}\| \frac{1} {\vert \mathrm{Re}(\lambda _{r+1})\vert }$$
holds (Benner 2006). As eigenvalues contain only limited information about the system, this is not necessarily a meaningful reduced-order system. In particular, the dependence of the input–output relation on B and C is completely ignored. This can be enhanced by more refined dominance measures taking B and C into account; see, e.g., Varga (1995) and Benner et al. (2011).
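A minimal sketch of modal truncation follows (illustrative NumPy code, not from the entry; it assumes E = I, diagonalizable A, and random stable test data, and works in complex arithmetic for simplicity, while a real variant via conjugate eigenvalue pairs exists):

```python
import numpy as np

def modal_truncation(A, B, C, r):
    """Keep the r eigenvalues of A with largest real part (E = I assumed)."""
    lam, T = np.linalg.eig(A)
    idx = np.argsort(lam.real)[::-1][:r]   # indices of the r dominant eigenvalues
    V = T[:, idx]                          # corresponding columns of T
    Wst = np.linalg.inv(T)[idx, :]         # corresponding rows of T^{-1}; plays W^*
    return np.diag(lam[idx]), Wst @ B, C @ V, V, Wst

rng = np.random.default_rng(3)
n, r = 8, 3
A = rng.standard_normal((n, n))
A = A - (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # make A stable
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Ar, Br, Cr, V, Wst = modal_truncation(A, B, C, r)
print(Ar.shape, np.allclose(Wst @ V, np.eye(r)))  # (3, 3) True
```

By construction \(W^{*}V = I_r\), so this is an oblique projection onto the dominant eigenspace.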

More suitable reduced-order systems can be obtained by balanced truncation. To introduce this concept, we no longer need to assume A to be diagonalizable, but we require the stability of A in the sense of (6). For simplicity, we assume E = I. For treatment of the DAE case (\(E \neq I\)), see Benner et al. (2005, Chap. 3). Loosely speaking, a balanced representation of an LTI system is obtained by a change of coordinates such that the states which are hard to reach are at the same time those which are difficult to observe. This change of coordinates amounts to an equivalence transformation of the realization (A, B, C, D) of (1) called state-space transformation as in (7), where T now is the matrix representing the change of coordinates. The new system matrices (T − 1 AT, T − 1 B, CT, D) form a balanced realization of (1). Truncating in this balanced realization the states that are hard to reach and difficult to observe results in a ROM.

Consider the Lyapunov equations
$$AP + P{A}^{T} + B{B}^{T} = 0,\quad {A}^{T}Q + QA + {C}^{T}C = 0.$$
(9)
The solution matrices P and Q are called controllability and observability Gramians, respectively. If both Gramians are positive definite, the LTI system is minimal. This will be assumed from here on in this section.

In balanced coordinates the Gramians P and Q of a stable minimal LTI system satisfy \(P = Q =\mathrm{ diag}(\sigma _{1},\ldots ,\sigma _{n})\) with the Hankel singular values \(\sigma _{1} \geq \sigma _{2} \geq \ldots \geq \sigma _{n} > 0.\) The Hankel singular values are the positive square roots of the eigenvalues of the product of the Gramians PQ, \(\sigma _{k} = \sqrt{\lambda _{k } (PQ)}.\) They are system invariants, i.e., they are independent of the chosen realization of (1) as they are preserved under state-space transformations.

Given the LTI system (1) in a non-balanced coordinate form and the Gramians P and Q satisfying (9), the transformation matrix T which yields an LTI system in balanced coordinates can be computed via the so-called square root algorithm as follows:
  • Compute the Cholesky factors S and R of the Gramians such that \(P = {S}^{T}S\), \(Q = {R}^{T}R\).

  • Compute the singular value decomposition of \(S{R}^{T} = \Phi \Sigma {\Gamma }^{T}\), where Φ and Γ are orthogonal matrices and Σ is a diagonal matrix with the Hankel singular values on its diagonal. \(T = {S}^{T}\Phi {\Sigma }^{-\frac{1} {2} }\) yields the balancing transformation (note that \({T}^{-1} = {\Sigma }^{\frac{1} {2} }{\Phi }^{T}{S}^{-T} = {\Sigma }^{-\frac{1} {2} }{\Gamma }^{T}R\)).

  • Partition \(\Phi ,\Sigma ,\Gamma \) into blocks of corresponding sizes,
    $$\Sigma = \left [\begin{array}{@{}c@{\quad }c@{}} \Sigma _{1}\quad & \\ \quad &\Sigma _{2}\end{array} \right ],\ \ \Phi = \left [\begin{array}{c} \Phi _{1} \\ \Phi _{2}\end{array} \right ],\ \ {\Gamma }^{T} = \left [\begin{array}{c} \Gamma _{1}^{T} \\ \Gamma _{2}^{T}\end{array} \right ],$$
    with \(\Sigma _{1} =\mathrm{ diag}(\sigma _{1},\ldots ,\sigma _{r})\) and apply T to (1) to obtain (7) with
    $${ T}^{-1}AT = \left [\begin{array}{@{}c@{\quad }c@{}} {W}^{T}AV \quad &A_{12} \\ A_{21} \quad &A_{22}\end{array} \right ], {T}^{-1}B = \left [\begin{array}{c} {W}^{T}B \\ B_{2}\end{array} \right ],$$
    (10)
    and \(CT = \left [CV \ C_{2}\right ]\) for \(W = {R}^{T}\Gamma _{1}\Sigma _{1}^{-\frac{1} {2} }\) and \(V = {S}^{T}\Phi _{1}\Sigma _{1}^{-\frac{1} {2} }.\) Preserving the r dominant Hankel singular values and truncating the rest yields the reduced-order model as in (5).
As W T V = I, balanced truncation is an oblique projection method. The reduced-order model is stable with the Hankel singular values \(\sigma _{1},\ldots ,\sigma _{r}\). It can be shown that if \(\sigma _{r} >\sigma _{r+1}\), the error bound
$$\|G(\cdot ) -\widetilde{ G}(\cdot )\|_{\mathcal{H}_{\infty }} \leq 2\displaystyle\sum _{k=r+1}^{n}\sigma _{ k}$$
(11)
holds. Given an error tolerance, this allows one to choose the appropriate order r of the reduced system in the course of the computations.
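The square root algorithm above can be sketched compactly with SciPy's dense Lyapunov solver. This is an illustration for E = I with random stable test data (not a robust implementation; large-scale variants use low-rank factors as discussed below), returning the Hankel singular values so that the bound (11) can be evaluated:

```python
import numpy as np
from scipy import linalg

def balanced_truncation(A, B, C, r):
    """Square root balanced truncation for E = I; returns ROM and Hankel singular values."""
    P = linalg.solve_continuous_lyapunov(A, -B @ B.T)    # A P + P A^T + B B^T = 0
    Q = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Q + Q A + C^T C = 0
    S = linalg.cholesky(P)                               # upper triangular, P = S^T S
    R = linalg.cholesky(Q)                               # upper triangular, Q = R^T R
    Phi, sv, GammaT = linalg.svd(S @ R.T)                # S R^T = Phi Sigma Gamma^T
    sr = np.sqrt(sv[:r])
    V = S.T @ Phi[:, :r] / sr                            # V = S^T Phi_1 Sigma_1^{-1/2}
    W = R.T @ GammaT[:r, :].T / sr                       # W = R^T Gamma_1 Sigma_1^{-1/2}
    return W.T @ A @ V, W.T @ B, C @ V, sv

rng = np.random.default_rng(4)
n, r = 10, 4
G0 = rng.standard_normal((n, n))
A = -(G0 @ G0.T) - np.eye(n)        # symmetric, stable test matrix
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r)
bound = 2.0 * hsv[r:].sum()         # a priori Hinf error bound (11)
print(Ar.shape)  # (4, 4)
```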

As the explicit computation of the balancing transformation T is numerically hazardous, one usually uses the equivalent balancing-free square root algorithm (Varga 1991), in which orthogonal bases for the column spaces of V and W are computed. The ROM obtained this way is no longer balanced, but it preserves all other properties (error bound, stability). Furthermore, it is shown in Benner et al. (2000) how to implement the balancing-free square root algorithm using low-rank approximations to S and R without ever having to form the square solution matrices P and Q of the Lyapunov equations (9). This yields an efficient balanced truncation algorithm for LTI systems with large dense matrices. For systems with large-scale sparse A, efficient algorithms based on sparse solvers for (9) exist; see Benner (2006).

By replacing the solution matrices P and Q of (9) by other pairs of positive (semi-)definite matrices characterizing alternative controllability and observability related system information, one obtains a family of model reduction methods including stochastic/bounded-real/positive-real balanced truncation. These can be used if further properties like minimum phase, passivity, etc. are to be preserved in the reduced-order model; for further details, see Antoulas (2005) and Obinata and Anderson (2001).

Balanced truncation yields a good approximation at high frequencies, as \(\widetilde{G}(\imath \omega ) \rightarrow G(\imath \omega )\) for \(\omega \rightarrow \infty \) (since \(\widetilde{D} = D\)), while the maximum error is often attained at ω = 0. For a perfect match at zero and a good approximation for low frequencies, one may employ the singular perturbation approximation (SPA, also called balanced residualization). In view of (7) and (10), balanced truncation can be seen as partitioning \({T}^{-1}x\) according to (10) into \({[x_{1}^{T},x_{2}^{T}]}^{T}\) and setting x 2 ≡ 0 (i.e., \(\dot{x}_{2} = 0\) as well). For SPA, one only sets \(\dot{x}_{2} = 0\), such that
$$\displaystyle\begin{array}{rcl} \dot{x}_{1}& =& {W}^{T}AV x_{1} + A_{12}x_{2} + {W}^{T}Bu, \\ 0& =& A_{21}x_{1} + A_{22}x_{2} + B_{2}u.\end{array}$$
Solving the second equation for x 2 and inserting it into the first equation yields
$$\dot{x}_{1} = \left ({W}^{T}AV - A_{12}A_{22}^{-1}A_{21}\right )x_{1} + \left ({W}^{T}B - A_{12}A_{22}^{-1}B_{2}\right )u.$$
For the output equation, it follows
$$\tilde{y} = \left (CV - C_{2}A_{22}^{-1}A_{ 21}\right )x_{1} + \left (D - C_{2}A_{22}^{-1}B_{ 2}\right )u.$$
This reduced-order model makes use of the information in the matrices A 12, A 21, A 22, B 2, and C 2 discarded by balanced truncation. It fulfills \(\widetilde{G}(0) = G(0)\) and the error bound (11); moreover, it preserves stability.
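The residualization formulas can be verified numerically. The following sketch (hypothetical block data; only nonsingularity of A and A 22 is used, since the exact match at s = 0 is an algebraic Schur complement identity) confirms that the SPA model reproduces the full model at zero frequency:

```python
import numpy as np

def spa(A11, A12, A21, A22, B1, B2, C1, C2, D):
    """Singular perturbation approximation of a partitioned realization."""
    X = np.linalg.solve(A22, A21)   # A22^{-1} A21
    Y = np.linalg.solve(A22, B2)    # A22^{-1} B2
    return A11 - A12 @ X, B1 - A12 @ Y, C1 - C2 @ X, D - C2 @ Y

rng = np.random.default_rng(5)
r, k, m, p = 3, 4, 2, 2
n = r + k
A = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # generically nonsingular blocks
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = np.zeros((p, m))
A11, A12, A21, A22 = A[:r, :r], A[:r, r:], A[r:, :r], A[r:, r:]
Ar, Br, Cr, Dr = spa(A11, A12, A21, A22, B[:r], B[r:], C[:, :r], C[:, r:], D)
G0_full = -C @ np.linalg.solve(A, B) + D       # G(0) of the full model
G0_spa = -Cr @ np.linalg.solve(Ar, Br) + Dr    # G~(0) of the SPA model
print(np.allclose(G0_full, G0_spa))  # True
```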

Besides SPA, another related truncation method that is not based on projection is optimal Hankel norm approximation (HNA). The description of HNA is technically quite involved; for details, see Zhou et al. (1996) and Glover (1984). The ROM so obtained usually has stability and accuracy properties similar to those of balanced truncation.

Interpolation-Based Methods

Another family of methods for MOR is based on (rational) interpolation. The unifying feature of the methods in this family is that the original TFM (2) is approximated by a rational matrix function of lower degree satisfying some interpolation conditions, i.e., the original and the reduced-order TFM coincide, e.g., \(G(s_{0}) =\widetilde{ G}(s_{0})\), at some predefined value \(s_{0}\) for which \(A - s_{0}E\) is nonsingular. Computationally this is usually realized by certain Krylov subspace methods.

The classical approach is known under the name of moment-matching or Padé(-type) approximation. In these methods, the transfer functions of the original and the reduced order systems are expanded into power series, and the reduced-order system is then determined so that the first coefficients in the series expansions match. In this context, the coefficients of the power series are called moments, which explains the term moment matching.

Classically the expansion of the TFM (2) in a power series about an expansion point s 0
$$G(s) =\displaystyle\sum _{ j=0}^{\infty }M_{ j}(s_{0}){(s - s_{0})}^{j}$$
(12)
is used. The moments \(M_{j}(s_{0}),j = 0,1,2,\ldots\), are given by
$$M_{j}(s_{0}) = -C\,{[{(A - s_{0}E)}^{-1}E]}^{j}{(A - s_{ 0}E)}^{-1}B.$$
Consider the (block) Krylov subspace \(\mathcal{K}_{k}(F,H) =\mathrm{ span}\{H,FH,{F}^{2}H,\ldots ,{F}^{k-1}H\}\) for \(F = {(A - s_{0}E)}^{-1}E\) and \(H = -{(A - s_{0}E)}^{-1}B\) with an appropriately chosen expansion point s 0 which may be real or complex. From the definitions of A, B, and E, it follows that \(F \in {\mathbb{K}}^{n\times n}\) and \(H \in {\mathbb{K}}^{n\times m}\), where \(\mathbb{K} = \mathbb{R}\) or \(\mathbb{K} = \mathbb{C}\) depending on whether \(s_{0}\) is chosen in \(\mathbb{R}\) or in \(\mathbb{C}\). As there are k blocks \({F}^{j}H \in {\mathbb{K}}^{n\times m}\), \(j = 0,\ldots ,k - 1\), the set \(\{H,FH,{F}^{2}H,\ldots ,{F}^{k-1}H\}\) contains \(r = m \cdot k\) column vectors. If all r column vectors are linearly independent, the dimension of the Krylov subspace \(\mathcal{K}_{k}(F,H)\) is \(m \cdot k.\) Assume that a unitary basis for this block Krylov subspace is generated such that the column space of the resulting unitary matrix \(V \in {\mathbb{C}}^{n\times r}\) spans \(\mathcal{K}_{k}(F,H)\). Applying the Galerkin projection \(\Pi = V {V }^{{\ast}}\) to (1) yields a reduced system whose TFM satisfies the following (Hermite) interpolation conditions at s 0:
$$\widetilde{{G}}^{(j)}(s_{ 0}) = {G}^{(j)}(s_{ 0}),\;\;j = 0,1,\ldots ,k - 1.$$
That is, G and \(\tilde G \) and their first k − 1 derivatives coincide at \(s_{0}\). Considering the power series expansion (12) of the original and the reduced-order TFM, this is equivalent to saying that at least the first k moments \(\widetilde{M}_{j}(s_{0})\) of the transfer function \(\tilde G (s)\) of the reduced system (3) are equal to the first k moments \(M_{j}(s_{0})\) of the TFM G(s) of the original system (1) at the expansion point \(s_{0}\):
$$M_{j}(s_{0}) =\widetilde{ M}_{j}(s_{0}),\;\;j = 0,1,\ldots ,k - 1.$$
If further the r columns of the unitary matrix W span the block Krylov subspace \(\mathcal{K}_{k}(F,H)\) for \(F = {(A - s_{0}E)}^{-T}{E}^{T}\) and \(H = -{(A - s_{0}E)}^{-T}{C}^{T}\), applying the Petrov-Galerkin projection \(\Pi = V {({W}^{{\ast}}V )}^{-1}{W}^{{\ast}}\) to (1) yields a reduced system whose TFM matches at least the first 2k moments of the TFM of the original system.

Theoretically, the matrix V (and W) could be computed by explicitly forming the columns which span the corresponding Krylov subspace \(\mathcal{K}_{k}(F,H)\) and using the Gram-Schmidt algorithm to generate unitary basis vectors for \(\mathcal{K}_{k}(F,H).\) Forming the moments explicitly (i.e., the Krylov subspace blocks \({F}^{j}H\)), however, is numerically precarious and has to be avoided under all circumstances. Instead, the unitary basis of a (block) Krylov subspace is computed by a (block) Arnoldi or (block) Lanczos method; see, e.g., Antoulas (2005), Golub and Van Loan (2013), and Freund (2003).
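A sketch of one-sided moment matching via a simple block Arnoldi loop follows (illustrative NumPy code, not from the entry, without deflation or breakdown handling; a real expansion point keeps all arithmetic real), together with a numerical check that the first k moments match:

```python
import numpy as np

def krylov_basis(A, E, B, s0, k):
    """Orthonormal basis of K_k(F, H) with F = (A - s0 E)^{-1} E,
    H = -(A - s0 E)^{-1} B; plain block Arnoldi, no deflation handling."""
    solve = lambda X: np.linalg.solve(A - s0 * E, X)
    m = B.shape[1]
    V, _ = np.linalg.qr(-solve(B))
    for _ in range(k - 1):
        Wnew = solve(E @ V[:, -m:])        # apply F to the newest block
        for _ in range(2):                 # repeated Gram-Schmidt for stability
            Wnew = Wnew - V @ (V.conj().T @ Wnew)
        Q, _ = np.linalg.qr(Wnew)
        V = np.hstack([V, Q])
    return V

def moments(A, B, C, E, s0, num):
    """Moments M_j(s0) = -C [(A - s0 E)^{-1} E]^j (A - s0 E)^{-1} B."""
    X = np.linalg.solve(A - s0 * E, B)
    out = []
    for _ in range(num):
        out.append(-C @ X)
        X = np.linalg.solve(A - s0 * E, E @ X)
    return out

rng = np.random.default_rng(6)
n, m, p, k = 12, 2, 2, 3
A = rng.standard_normal((n, n)) - 4.0 * np.eye(n)
E = np.eye(n)
B = rng.standard_normal((n, m)); C = rng.standard_normal((p, n))
s0 = 1.0
V = krylov_basis(A, E, B, s0, k)
Er, Ar, Br, Cr = V.T @ E @ V, V.T @ A @ V, V.T @ B, C @ V   # Galerkin projection
match = all(np.allclose(Mf, Mr)
            for Mf, Mr in zip(moments(A, B, C, E, s0, k),
                              moments(Ar, Br, Cr, Er, s0, k)))
print(match)  # True
```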

In the case when an oblique projection is to be used, it is not necessary to compute two unitary bases as above. An alternative is to use the nonsymmetric Lanczos process (Golub and Van Loan 2013). It computes bi-unitary (i.e., \({W}^{{\ast}}V = I_{r}\)) bases for the abovementioned Krylov subspaces, and the reduced-order model is obtained as a by-product of the Lanczos process. An overview of the computational techniques for moment matching and Padé approximation summarizing the work of a decade is given in Freund (2003) and the references therein.

In general, the discussed MOR approaches are instances of rational interpolation. When the expansion point is chosen as \(s_{0} = \infty \), the moments are called Markov parameters and the approximation problem is known as partial realization. If \(s_{0} = 0\), the approximation problem is known as Padé approximation.

As the use of one single expansion point s 0 leads to good approximation only close to s 0, it might be desirable to use more than one expansion point. This leads to multipoint moment-matching methods, also called rational Krylov methods; see, e.g., Ruhe and Skoogh (1998), Antoulas (2005), and Freund (2003).

In contrast to balanced truncation, these (rational) interpolation methods do not necessarily preserve stability. Remedies have been suggested; see, e.g., Freund (2003).

The use of complex-valued expansion points will lead to a complex-valued reduced-order system (3). In some applications (in particular, in case the original system is real valued), this is undesired. In that case one can always use complex-conjugate pairs of expansion points as then the entire computations can be done in real arithmetic.

The methods just described provide good approximation quality locally around the expansion points. They do not aim at a global approximation as measured by the \(\mathcal{H}_{2}\)- or \(\mathcal{H}_{\infty }\)-norms. In Gugercin et al. (2008), an iterative procedure is presented which determines locally optimal expansion points w.r.t. the \(\mathcal{H}_{2}\)-norm approximation under the assumption that the order r of the reduced model is prescribed and only 0th- and 1st-order derivatives are matched. Also, for multi-input multi-output systems (i.e., m and p in (1) are both larger than one), no full moment matching is achieved, but only tangential interpolation: \(G(s_{j})b_{j} =\widetilde{ G}(s_{j})b_{j},\) \(c_{j}^{{\ast}}G(s_{j}) = c_{j}^{{\ast}}\widetilde{G}(s_{j}),\) \(c_{j}^{{\ast}}G'(s_{j})b_{j} = c_{j}^{{\ast}}\widetilde{G}'(s_{j})b_{j},\) for certain vectors b j , c j determined together with the optimal s j by the iterative procedure.
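To illustrate the fixed-point character of this iteration, the following sketch specializes the iteration of Gugercin et al. (2008) (IRKA) to the symmetric SISO case (\(A = {A}^{T}\) stable, c = b), where V = W and all shifts stay real; the initial shifts and test data are arbitrary choices, and a general implementation would handle complex conjugate shift pairs and MIMO tangential directions:

```python
import numpy as np

def irka_sym(A, b, r, tol=1e-8, maxit=100):
    """IRKA specialized to symmetric SISO systems (A = A^T stable, c = b):
    Galerkin projection, shifts updated as mirrored reduced poles."""
    n = A.shape[0]
    shifts = np.linspace(1.0, 10.0, r)          # crude initial shifts (an assumption)
    for _ in range(maxit):
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
        V, _ = np.linalg.qr(V)
        # New shifts: mirror images of the reduced poles, sigma_i = -lambda_i(V^T A V).
        new = np.sort(-np.linalg.eigvalsh(V.T @ A @ V))
        done = np.max(np.abs(new - shifts)) < tol * np.max(new)
        shifts = new
        if done:
            break
    return V.T @ A @ V, V.T @ b, shifts

rng = np.random.default_rng(8)
n, r = 20, 3
G0 = rng.standard_normal((n, n))
A = -(G0 @ G0.T) / n - np.eye(n)                # symmetric, stable test matrix
b = rng.standard_normal(n)
Ar, br, shifts = irka_sym(A, b, r)
print(Ar.shape)  # (3, 3)
```

At a fixed point, the ROM Hermite-interpolates G at the mirrored reduced poles, which is the first-order \(\mathcal{H}_{2}\)-optimality condition.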

Tools

Almost all commercial software packages for structural dynamics include modal analysis/truncation as a means to compute a ROM. Modal truncation and balanced truncation are available in the MATLAB® Control System Toolbox and the MATLAB® Robust Control Toolbox.

Numerically reliable, well-tested, and efficient implementations of many variants of balancing-based MOR methods as well as Hankel norm approximation and singular perturbation approximation can be found in the Subroutine Library In Control Theory (SLICOT, http://www.slicot.org) (Varga 2001). Easy-to-use MATLAB interfaces to the Fortran 77 subroutines from SLICOT are available in the SLICOT Model and Controller Reduction Toolbox (http://slicot.org/matlab-toolboxes/basic-control); see Benner et al. (2010). An implementation of moment matching via the (block) Arnoldi method is available in MOR for ANSYS® (http://modelreduction.com/Software.html).

Several benchmark collections, mainly containing LTI systems from various applications, provide systems in a computer-readable format that can easily be used to test new algorithms and software:

The MOR WiKi http://morwiki.mpi-magdeburg.mpg.de/morwiki/ is a platform for MOR research and provides discussions of a number of methods, links to further software packages (e.g., MOREMBS and MORPACK), as well as additional benchmark examples.

Summary and Future Directions

MOR of LTI systems can now be considered an established computational technique. Some open issues remain and are currently under investigation. These include methods yielding good approximation in finite frequency or time intervals. Though numerous approaches for these tasks exist, methods with sharp local error bounds are still desirable. A related problem is the reduction of closed-loop systems and controller reduction. Also, the generalization of the methods discussed in this entry to descriptor systems (i.e., systems with DAE dynamics), second-order systems, or unstable LTI systems has only been partially achieved. An important problem class currently receiving much attention consists of (uncertain) parametric systems. Here it is important to preserve the parameters as symbolic quantities in the ROM; most of the current approaches are based in one way or another on interpolation. MOR for nonlinear systems has also been a research topic for decades. Still, the development of satisfactory methods in the context of control design, with computable error bounds and preservation of relevant system properties, remains a challenging task.

Bibliography

  1. Antoulas A (2005) Approximation of large-scale dynamical systems. SIAM, Philadelphia
  2. Benner P (2006) Numerical linear algebra for model reduction in control and simulation. GAMM Mitt 29(2):275–296
  3. Benner P, Quintana-Ortí E, Quintana-Ortí G (2000) Balanced truncation model reduction of large-scale dense systems on parallel computers. Math Comput Model Dyn Syst 6:383–405
  4. Benner P, Mehrmann V, Sorensen D (2005) Dimension reduction of large-scale systems. Lecture notes in computational science and engineering, vol 45. Springer, Berlin/Heidelberg
  5. Benner P, Kressner D, Sima V, Varga A (2010) Die SLICOT-Toolboxen für Matlab (The SLICOT toolboxes for Matlab) [German]. at-Automatisierungstechnik 58(1):15–25. English version available as SLICOT working note 2009-1, http://slicot.org/working-notes/
  6. Benner P, Hochstenbach M, Kürschner P (2011) Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers. In: Proceedings of the international conference on communications, computing and control applications (CCCA), Hammamet, 3–5 March 2011. IEEE, 6pp
  7. Freund R (2003) Model reduction methods based on Krylov subspaces. Acta Numer 12:267–319
  8. Glover K (1984) All optimal Hankel-norm approximations of linear multivariable systems and their \({L}^{\infty }\)-error bounds. Internat J Control 39:1115–1193
  9. Golub G, Van Loan C (2013) Matrix computations, 4th edn. Johns Hopkins University Press, Baltimore
  10. Gugercin S, Antoulas AC, Beattie C (2008) \(\mathcal{H}_{2}\) model reduction for large-scale dynamical systems. SIAM J Matrix Anal Appl 30(2):609–638
  11. Obinata G, Anderson B (2001) Model reduction for control system design. Communications and control engineering series. Springer, London
  12. Ruhe A, Skoogh D (1998) Rational Krylov algorithms for eigenvalue computation and model reduction. In: Applied parallel computing. Large scale scientific and industrial problems. Lecture notes in computer science, vol 1541. Springer, Berlin/Heidelberg, pp 491–502
  13. Schilders W, van der Vorst H, Rommes J (2008) Model order reduction: theory, research aspects and applications. Springer, Berlin/Heidelberg
  14. Varga A (1991) Balancing-free square-root algorithm for computing singular perturbation approximations. In: Proceedings of the 30th IEEE CDC, Brighton, pp 1062–1065
  15. Varga A (1995) Enhanced modal approach for model reduction. Math Model Syst 1(2):91–105
  16. Varga A (2001) Model reduction software in the SLICOT library. In: Datta B (ed) Applied and computational control, signals, and circuits. The Kluwer international series in engineering and computer science, vol 629. Kluwer Academic, Boston, pp 239–282
  17. Zhou K, Doyle J, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle River

Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  • Peter Benner (1)
  • Heike Faßbender (2)
  1. Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany
  2. Institut Computational Mathematics, Technische Universität Braunschweig, Braunschweig, Germany