1 Introduction

The Loewner framework, originally proposed in [30] for solving the generalized realization problem coupled with tangential interpolation, has been successfully employed for data-driven model order reduction from frequency domain data [26]. Measurements of the frequency response are available in several communities: electrical engineering (impedance, admittance or scattering parameters [26]), mechanical and civil engineering (structural and vibro-acoustic frequency response functions [35] or frequency response measurements of thermal systems [10]), to name a few. The first step in the Loewner framework consists of setting up the data matrices and building the Loewner and shifted Loewner matrices entry-wise, based on the chosen partition into right and left data, followed by computing the singular value decomposition (SVD) of a linear combination of these matrices and forming the model by projection, using the dominant singular triplets. The main advantages of the Loewner framework over existing approaches are, on the one hand, its system identification capabilities, in the sense that the order of the system can be deduced from the singular value drop, and, on the other hand, its potential to deal efficiently with systems with a large number of inputs and outputs thanks to the incorporation of tangential interpolation. The main drawbacks, however, are the large storage requirements paired with the significant CPU cost inherent to the full SVD computation for data sets with a large number of measurements (values on the order of \(10^5\) are common in industrial applications). To bypass these inconveniences, greedy-type approaches were proposed in [26], reducing the memory requirements from \({\mathcal {O}}(N^2)\), for storing the dense Loewner and shifted Loewner matrices, to \({\mathcal {O}}(N+n^2)\), and the computational cost from \({\mathcal {O}}(N^3)\), for computing the SVD, to \({\mathcal {O}}(Nn^3)\) and \({\mathcal {O}}(Nn^4)\), where N is the size of the data set and n is the order of the model.

Taking advantage of numerical linear algebra tools to reduce the storage and computational requirements of the Loewner framework is another avenue worth exploring, due to the rich structure embedded in the (albeit dense) Loewner and shifted Loewner matrices. The factored ADI-Galerkin method for computing these matrices as solutions to certain Sylvester equations with a factored right-hand side was investigated in [18]. Such a scheme computes low-rank approximations to the dense Loewner matrix to speed up the SVD computation. However, in [18] no results about the accuracy of the computed reduced models are reported. Moreover, the memory constraints coming from the allocation of \({\mathbb {L}}\) and \({\mathbb {S}}\) are still present. Alternatively, one can focus solely on accelerating the SVD step by employing Krylov methods (see, e.g., [3, 19, 25, 41] to name a few), by using the randomized SVD [31] to compute the dominant singular triplets instead of the full SVD, or by using other types of inexact SVD-type decompositions (adaptive cross approximation [4], particularly suited for hierarchical matrices, or a CUR decomposition [11] as in [22, 38]).

The novel approach proposed in this paper tackles the memory requirements and, at the same time, reduces the CPU cost of the Loewner framework, while maintaining the accuracy of the standard approach for large numbers of measurements. As the Loewner and shifted Loewner matrices satisfy Sylvester equations with diagonal coefficient matrices, they are, in fact, Cauchy-like matrices, obtained as the Hadamard product between a Cauchy matrix \({\mathcal {C}}\) and low-rank right-hand sides. Extensive research has been devoted to fully exploiting the rich structure of Cauchy matrices. Several algorithms for computing the matrix-vector product \({\mathcal {C}}\mathbf{x}\) can be found in the literature and many avoid assembling the full matrix \({\mathcal {C}}\) (see, e.g., [7, 15, 17, 33]). Hierarchically semiseparable (HSS) matrices have been deemed efficient for approximating Cauchy matrices with a low off-diagonal rank [33, 34]. HSS and other rank-structured matrices are widely used in developing fast algorithms for algebraic operations (matrix-vector multiplications, matrix factorizations, matrix inversion, etc., see, e.g., [8, 33, 34, 44, 47] and references therein), used as building blocks for the solution of certain problems like linear systems of equations [48], eigenvalue problems [45], linear and quadratic matrix equations [23, 27], and many more. For our application, the approximation of the Cauchy matrix in HSS format considerably decreases the computational cost of the matrix-vector products, involving a linear combination of the Loewner and shifted Loewner matrices, needed for the partial SVD computation, while avoiding forming these matrices explicitly. All results involving HSS matrices presented in this paper have been obtained by means of the hm-toolbox [28].

The employment of an HSS representation of \({\mathcal {C}}\) may introduce some inexactness in our scheme and this has to be taken into account in the iterative SVD computation. The use of inexact matrix-vector products within iterative procedures has been the subject of numerous research papers: Krylov techniques for solving linear systems and matrix equations [6, 24, 32, 40, 43], eigenvalue problems [13, 39], or an inexact variant of the Lanczos bidiagonalization for the computation of leading singular triplets of a generic matrix function [14]. In our case, we do not need an accurate approximation of the singular triplets; rather, we need the spaces spanned by the computed left and right singular vectors to be meaningful, so that the obtained reduced model inherits the desired approximation properties (see, e.g., [2, 21]).

The remainder of the paper is structured as follows. Section 2 provides a review of the Loewner framework, whereas Sect. 3 presents results showcasing the special structure of the Loewner and shifted Loewner matrices as Cauchy-like matrices and their approximation as hierarchically semiseparable matrices allowing for efficient, inexact matrix-vector products in the partial SVD computation. Section 4 presents the results of our numerical experiments and Sect. 5 concludes the paper.

2 Review of the Loewner Framework

The Loewner framework has been proposed to address the rational interpolation/approximation problem. In the control community, this is referred to as system identification from frequency domain measurements and is stated below.

Problem Statement (Rational approximation in the complex plane) Given points \(s_j\) in the complex plane (which can represent angular frequencies \(\mathrm {i}\omega _j\) if \(s_j\) are on the imaginary axis) and the corresponding transfer function measurement \(\mathbf{H}_j \in {\mathbb {C}}^{p \times q}\) for a system with q inputs and p outputs:

$$\begin{aligned} ({s_j};\mathbf{H}_j),~~j=1,\ldots ,N, \end{aligned}$$
(1)

with p and q assumed to be much smaller than N, the problem amounts to finding the rational transfer function \(\mathbf{H}(\mathrm {s})\) which approximates the data:

$$\begin{aligned} \mathbf{H}_j \approx \mathbf{H}(\mathrm {s}={s_j}),~~\forall j=1,\ldots ,N. \end{aligned}$$
(2)

Thus, the transfer function evaluated for the Laplace variable \(\mathrm {s}={s_j}\) should be close (in some norm) to the corresponding measurement \(\mathbf{H}_j\). Several equivalent representations are possible for the rational transfer function, namely pole-residue, pole-zero, state-space or descriptor-form.

Most systems of interest are real, with their transfer function satisfying the complex conjugate condition \(\mathbf{H}(\bar{\mathrm {s}})=\overline{\mathbf{H}(\mathrm {s})}\). Hence, we add complex conjugate measurements \((\overline{s}_j;\overline{\mathbf{H}}_j)\) to the set (1).

We proceed by presenting the Loewner framework as a solution scheme addressing the rational approximation problem. The first step in the Loewner framework [26, 30] is partitioning the data in two disjoint sets. This partition influences the conditioning of the problem [21, Ch. 2.1] and finding the optimal partition for each data set is beyond the scope of this paper. The most natural partitions are summarized in the following (assuming an even number of measurements N and sampling points sorted in ascending order with respect to their absolute value):

  • Half&Half: the first half of the data in one set and the other half in the second set:

    $$\begin{aligned} \left\{ s_{1},{\overline{s}}_{1},\ldots ,s_{\frac{N}{2}},\overline{s}_{\frac{N}{2}}\right\} \cup \left\{ s_{\frac{N}{2}+1},\overline{s}_{\frac{N}{2}+1},\ldots ,s_{N},{\overline{s}}_{N}\right\} \end{aligned}$$
    (3)

    and, correspondingly,

    $$\begin{aligned} \left\{ \mathbf{H}_{1},\overline{\mathbf{H}}_{1},\ldots ,\mathbf{H}_{\frac{N}{2}},\overline{\mathbf{H}}_{\frac{N}{2}}\right\} \cup \left\{ \mathbf{H}_{\frac{N}{2}+1},\overline{\mathbf{H}}_{\frac{N}{2}+1},\ldots ,\mathbf{H}_{N},\overline{\mathbf{H}}_{N}\right\} , \end{aligned}$$
    (4)
  • Odd&Even: data with odd indices in the first set and data with even indices in the second set:

    $$\begin{aligned} \left\{ s_{1},{\overline{s}}_{1},\ldots ,s_{N-1},\overline{s}_{N-1}\right\} \cup \left\{ s_{2},\overline{s}_{2},\ldots ,s_{N},{\overline{s}}_{N}\right\} \end{aligned}$$
    (5)

    and, correspondingly,

    $$\begin{aligned} \left\{ \mathbf{H}_{1},\overline{\mathbf{H}}_{1},\ldots ,\mathbf{H}_{N-1},\overline{\mathbf{H}}_{N-1}\right\} \cup \left\{ \mathbf{H}_{2},\overline{\mathbf{H}}_{2},\ldots ,\mathbf{H}_{N},\overline{\mathbf{H}}_{N}\right\} . \end{aligned}$$
    (6)

The first set in (3) and (5) comprises the right points, denoted by \(\lambda _k\), \(k=1,\ldots ,N\), while the second set comprises the left points \(\mu _h\), \(h=1,\ldots ,N\). This splitting into right and left points is related to the concept of tangential interpolation, which is explained in the following paragraph.

The following step in the Loewner framework is choosing tangential directions as vectors which transform the matrix data \(\mathbf{H}_j\) into vector data: right tangential directions are column vectors \(\mathbf{r}_k \in {\mathbb {C}}^{q}\) such that \(\mathbf{H}_k \mathbf{r}_k=\mathbf{w}_k\), whereas left tangential directions are row vectors \(\ell _h \in {\mathbb {C}}^{1 \times p}\) such that \(\ell _h \mathbf{H}_h =\mathbf{v}_h\). The column vectors \(\mathbf{w}_k \in {\mathbb {C}}^{p}\) are referred to as right vector data, while the row vectors \(\mathbf{v}_h \in {\mathbb {C}}^{1 \times q}\) are referred to as left vector data. For simplicity, tangential directions can be chosen as alternating columns/rows of the identity matrix [26], so that the vector data are columns and rows of the original matrix data \(\mathbf{H}_j\) in (1).
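For concreteness, the data setup can be sketched in Matlab as follows (all variable names are ours; s is the vector of N sample points and H the corresponding p x q x N measurement array; we split the points via the Odd&Even partition and cycle through identity columns/rows as tangential directions, omitting the conjugate closure for brevity):

    [p, q, ~] = size(H);                         % H(:,:,j) is the measurement H_j
    lambda = s(1:2:end);  Hr = H(:,:,1:2:end);   % right points and data
    mu     = s(2:2:end);  Hl = H(:,:,2:2:end);   % left points and data
    Nr = numel(lambda);   Nl = numel(mu);
    R = zeros(q, Nr);  W = zeros(p, Nr);         % right directions / vector data
    Lm = zeros(Nl, p); V = zeros(Nl, q);         % left directions / vector data
    for k = 1:Nr
        R(:,k) = circshift(eye(q, 1), k-1);      % cycled identity column
        W(:,k) = Hr(:,:,k) * R(:,k);             % w_k = H_k r_k
    end
    for h = 1:Nl
        Lm(h,:) = circshift(eye(1, p), [0, h-1]);% cycled identity row
        V(h,:)  = Lm(h,:) * Hl(:,:,h);           % v_h = l_h H_h
    end

Here Lm collects the left directions (called \(\mathbf{L}\) in (9)); we rename it in code to avoid a clash with the Loewner matrix.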

Remark 1

For scalar data obtained from single-input single-output (SISO) systems (\(p = q = 1\)), tangential directions \(\mathbf{r}_k\), \(\ell _h\) are simply equal to 1.

Remark 2

If the loss of information due to utilizing a single tangential direction per measurement, instead of the whole matrix \(\mathbf{H}_j\), does not allow one to obtain an accurate approximation, one can employ the original matrix \(\mathbf{H}_j\). This is equivalent to considering several tangential directions for the same point. To obtain block right matrix data for \(\mathbf{H}_j \in {\mathbb {C}}^{p \times q}\), the corresponding frequency should be repeated q times as a right point and all columns of the identity matrix of size \(q \times q\) should be considered as right directions. Similarly, to obtain block left matrix data for \(\mathbf{H}_j \in {\mathbb {C}}^{p \times q}\), the corresponding frequency should be repeated p times as a left point and all rows of the identity matrix of size \(p \times p\) should be considered as left directions.

With this notation in place, the Loewner matrix is defined entry-wise as

$$\begin{aligned} {\mathbb {L}}_{hk} = \frac{\mathbf{v}_h \mathbf{r}_k-\ell _h \mathbf{w}_k}{\mu _h-\lambda _k},~~ h,k=1,\ldots ,N, \end{aligned}$$
(7)

and the shifted Loewner matrix is defined as

$$\begin{aligned} {\mathbb {S}}_{{hk}} = \frac{\mu _h\mathbf{v}_h \mathbf{r}_k-\lambda _k\ell _h \mathbf{w}_k}{\mu _h-\lambda _k},~~ h,k=1,\ldots ,N. \end{aligned}$$
(8)

Note that the numerators are scalar quantities as they are obtained by taking inner products.

The quantities defined previously are collected into the following matrices

$$\begin{aligned} \varvec{\Lambda }&=\hbox {diag}\left( \lambda _1,\ldots ,\lambda _{N}\right) \in {\mathbb {C}}^{N \times N},&\mathbf{R}&=\left[ \mathbf{r}_1,\ldots ,\mathbf{r}_{N}\right] \in {\mathbb {C}}^{q \times N},\nonumber \\ \varvec{M}&=\hbox {diag}\left( \mu _1,\ldots ,\mu _{N}\right) \in {\mathbb {C}}^{N \times N},&\mathbf{W}&=\left[ \mathbf{w}_1,\ldots ,\mathbf{w}_{N}\right] \in {\mathbb {C}}^{p \times N},\nonumber \\ \mathbf{L}&=\left[ \begin{array}{c} \ell _1\\ \vdots \\ \ell _{N} \end{array}\right] \in {\mathbb {C}}^{N \times p},&\mathbf{V}&=\left[ \begin{array}{c} \mathbf{v}_1\\ \vdots \\ \mathbf{v}_{N} \end{array}\right] \in {\mathbb {C}}^{N \times q}, \end{aligned}$$
(9)
$$\begin{aligned} {\mathbb {L}}&= \left[ \begin{array}{ccc} \frac{\mathbf{v}_1 \mathbf{r}_1-\ell _1 \mathbf{w}_1}{\mu _1-\lambda _1} &{} \ldots &{}\frac{\mathbf{v}_1 \mathbf{r}_N-\ell _1 \mathbf{w}_N}{\mu _1-\lambda _N}\\ \vdots &{}\ddots &{}\vdots \\ \frac{\mathbf{v}_N \mathbf{r}_1-\ell _N \mathbf{w}_1}{\mu _N-\lambda _1} &{} \ldots &{}\frac{\mathbf{v}_N \mathbf{r}_N-\ell _N \mathbf{w}_N}{\mu _N-\lambda _N} \end{array}\right] \in {\mathbb {C}}^{N \times N},\nonumber \\ {\mathbb {S}}&= \left[ \begin{array}{ccc} \frac{\mu _1\mathbf{v}_1 \mathbf{r}_1-\lambda _1\ell _1 \mathbf{w}_1}{\mu _1-\lambda _1} &{} \ldots &{}\frac{\mu _1\mathbf{v}_1 \mathbf{r}_N-\lambda _N\ell _1 \mathbf{w}_N}{\mu _1-\lambda _N}\\ \vdots &{}\ddots &{}\vdots \\ \frac{\mu _N\mathbf{v}_N \mathbf{r}_1-\lambda _1\ell _N \mathbf{w}_1}{\mu _N-\lambda _1} &{} \ldots &{}\frac{\mu _N\mathbf{v}_N \mathbf{r}_N-\lambda _N\ell _N \mathbf{w}_N}{\mu _N-\lambda _N} \end{array}\right] \in {\mathbb {C}}^{N \times N}. \end{aligned}$$
(10)
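As a baseline, the entry-wise construction (7)-(8) used by the standard approach reads as follows in Matlab (a direct transcription with the naming conventions of the sketch above, assuming equally many left and right points):

    % Entry-wise assembly of L and S; O(N^2(p+q)) FLOPs, O(N^2) storage
    N  = numel(mu);
    Lw = zeros(N);  Sw = zeros(N);
    for h = 1:N
        for k = 1:N
            d = mu(h) - lambda(k);
            Lw(h,k) = (V(h,:)*R(:,k) - Lm(h,:)*W(:,k)) / d;
            Sw(h,k) = (mu(h)*(V(h,:)*R(:,k)) - lambda(k)*(Lm(h,:)*W(:,k))) / d;
        end
    end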

By construction, the Loewner and shifted Loewner matrices satisfy the following Sylvester equations:

$$\begin{aligned} \varvec{M} {\mathbb {L}}-{\mathbb {L}}\varvec{\Lambda }&= \mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W},&\varvec{M} {\mathbb {S}}-{\mathbb {S}}\varvec{\Lambda }&= \varvec{M}\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\varvec{\Lambda }, \end{aligned}$$
(11)

as well as the following relations:

$$\begin{aligned} {\mathbb {S}}-{\mathbb {L}}\varvec{\Lambda }&= \mathbf{V}\mathbf{R},&{\mathbb {S}}- \varvec{M} {\mathbb {L}}&= \mathbf{L}\mathbf{W}, \end{aligned}$$
(12)

which will prove useful in our proposed matrix-free matrix-vector product approach.
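Continuing the sketch above, (12) provides a cheap consistency check of the assembled matrices (the tolerance below is ours, chosen only for illustration):

    % Sanity check of the relations (12)
    Lam = diag(lambda);  M = diag(mu);
    assert(norm(Sw - Lw*Lam - V*R,  'fro') <= 1e-10*norm(Sw, 'fro'));
    assert(norm(Sw - M*Lw  - Lm*W, 'fro') <= 1e-10*norm(Sw, 'fro'));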

After introducing the necessary notation, we are ready to state the solution provided by the Loewner framework to the rational approximation problem. A (non-minimal) model for the transfer function in descriptor form \(\mathbf{H}(\mathrm {s}) = \mathbf{C}\left( \mathrm {s}\mathbf{E}-\mathbf{A}\right) ^{-1}\mathbf{B}+\mathbf{D}\) is given by

$$\begin{aligned} \mathbf{H}(\mathrm {s}) = \mathbf{W}\left( {\mathbb {S}}-\mathrm {s} {\mathbb {L}}\right) ^{-1} \mathbf{V}. \end{aligned}$$
(13)

Since we have recast the original problem as a tangential interpolation problem, this transfer function exactly satisfies the right and left interpolation conditions [30]: \(\mathbf{H}(\lambda _k) \mathbf{r}_k=\mathbf{w}_k\) and \(\ell _h \mathbf{H}(\mu _h)=\mathbf{v}_h\), for \(h,k=1,\ldots ,N\). To obtain a minimal model, we perform a singular value decomposition

$$\begin{aligned}{}[\mathbf{Y},\varvec{\Sigma },\mathbf{X}] = \hbox {svd}({\mathbb {S}}- x {\mathbb {L}}),~~{x \in \{s_j\}}, \end{aligned}$$
(14)

where \(\varvec{\Sigma }\) is diagonal and \(\mathbf{Y}\), \(\mathbf{X}\) contain the left and right singular vectors, respectively. Choosing the order n of the truncated SVD (n is application-dependent), we define (in Matlab notation) \(\mathbf{X}_n = \mathbf{X}(:,1:n)\) and \(\mathbf{Y}_n = \mathbf{Y}(:,1:n)^*\). Finally, the model of size n in descriptor form is

$$\begin{aligned} \mathbf{E}&= -\mathbf{Y}_n {\mathbb {L}}\mathbf{X}_n=-{\mathbb {L}}_n,~ \mathbf{A}= -\mathbf{Y}_n {\mathbb {S}}\mathbf{X}_n=-{\mathbb {S}}_{n},~ \mathbf{B}= \mathbf{Y}_n \mathbf{V}=\mathbf{V}_n,~ \mathbf{C}= \mathbf{W}\mathbf{X}_n=\mathbf{W}_n,\nonumber \\ \mathbf{D}&= \mathbf{0}. \end{aligned}$$
(15)
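A compact Matlab sketch of (14)-(15) (names ours; we call the model output matrix Cm to avoid a clash with the Cauchy matrix introduced later):

    % Truncated SVD of the Loewner pencil and projection, cf. (14)-(15)
    x = lambda(1);                        % any sample point s_j
    [Y, Sig, X] = svd(Sw - x*Lw);         % full SVD (standard approach)
    n  = 20;                              % truncation order, application-dependent
    Xn = X(:, 1:n);  Yn = Y(:, 1:n)';
    E = -Yn*Lw*Xn;  A = -Yn*Sw*Xn;        % reduced descriptor realization
    B =  Yn*V;      Cm = W*Xn;  D = zeros(p, q);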

In the following section, we exploit the Cauchy-like structure of the Loewner and shifted Loewner matrices to design efficient approaches, both in terms of memory storage and CPU time, to compute the SVD in (14) by making use of hierarchical matrices.

3 Exploiting the Structure of \({\mathbb {L}}\) and \({\mathbb {S}}\)

For data sets with a sizable number N of measurements \({\mathbf {H}}_j\), the construction of the large, dense Loewner and shifted Loewner matrices is demanding, both in terms of computational effort and storage requirements. The computation of each entry of \({\mathbb {L}}\) and \({\mathbb {S}}\) using (7) and (8) yields a total cost of \({\mathcal {O}}(N^2\left( p+q\right) )\) floating point operations (FLOPs) for assembling the entire \({\mathbb {L}}\) and \({\mathbb {S}}\) matrices. The number of nonzero entries in \({\mathbb {L}}\) and \({\mathbb {S}}\) is \({\mathcal {O}}(N^2)\), much larger than the memory requirements for storing the data in \(\varvec{\Lambda }\), \(\varvec{M}\), \(\mathbf{R}\), \(\mathbf{W}\), \(\mathbf{L}\), and \(\mathbf{V}\)Footnote 1. Besides these excessive storage requirements, there are also considerations to be made regarding the CPU time required for the SVD computation of the matrix \({\mathbb {S}}-x{\mathbb {L}}\), \(x\in \{s_j\}\), in (14). Especially for large dimensional problems, for which we expect a fast decay of the singular values, it is preferable to compute only the first n singular triplets, thus avoiding wasting resources on computing the full SVD. To this end, many iterative methods have been developed for computing partial SVDs; see, e.g., [3, 19, 25, 41] to name a few. The bottleneck in these approaches is the matrix-vector product with the coefficient matrix, namely \({\mathbb {S}}-x{\mathbb {L}}\) in our case. This operation costs \({\mathcal {O}}(N^2)\) FLOPs due to the dense pattern of \({\mathbb {S}}-x{\mathbb {L}}\).

This section tackles the reduction of the cost of performing a matrix-vector product with \({\mathbb {S}}-x{\mathbb {L}}\) while avoiding the explicit allocation of \({\mathbb {L}}\) and \({\mathbb {S}}\). The proposed strategy is supported by a thorough analysis of the computational cost, showing that, for very large data sets for which carrying out the full SVD is intractable, our strategy leads to remarkable reductions in both the computational effort and the storage demand for building minimal realizations in the Loewner framework.

3.1 Hadamard Product and Cauchy Matrices

We present novel results which exploit the particular structure of the Loewner and shifted Loewner matrices. These developments involve the Sylvester equations (11) with diagonal coefficient matrices \(\varvec{\Lambda }\) and \(\varvec{M}\).

Theorem 1

The Loewner and shifted Loewner matrices \({\mathbb {L}}\) and \({\mathbb {S}}\) satisfying the Sylvester equations in (11) are such that

$$\begin{aligned} {\mathbb {L}}=\sum _{j=1}^q \hbox {diag}({\widetilde{\mathbf{v}}}_j){\mathcal {C}}\hbox {diag}({\widetilde{\mathbf{r}}}_j^*)- \sum _{j=1}^p \hbox {diag}({\widetilde{\ell }}_j){\mathcal {C}}\hbox {diag}({\widetilde{\mathbf{w}}}_j^*), \end{aligned}$$
(16)

and

$$\begin{aligned} {\mathbb {S}}=\sum _{j=1}^q \hbox {diag}(\varvec{M}\widetilde{\mathbf{v}}_j){\mathcal {C}}\hbox {diag}({\widetilde{\mathbf{r}}}_j^*)-\sum _{j=1}^p \hbox {diag}({\widetilde{\ell }}_j){\mathcal {C}}\hbox {diag}( \varvec{\Lambda }^*{\widetilde{\mathbf{w}}}_j^*), \end{aligned}$$
(17)

where \({\mathcal {C}}\) denotes the following Cauchy matrix

$$\begin{aligned} {\mathcal {C}}= \left[ \begin{array}{ccc} \frac{1}{\mu _1-\lambda _1}&{} \ldots &{}\frac{1}{\mu _1-\lambda _N}\\ \vdots &{}\ddots &{}\vdots \\ \frac{1}{\mu _N-\lambda _1}&{} \ldots &{}\frac{1}{\mu _N-\lambda _N} \end{array}\right] , \end{aligned}$$

while the vectors \({\widetilde{\mathbf{v}}}_j\in {\mathbb {C}}^{N}\) and \(\widetilde{\ell }_j\in {\mathbb {C}}^{N}\) denote the j-th columns of \(\mathbf{V}\) and \(\mathbf{L}\), respectively, so that

$$\begin{aligned} \varvec{V}=[{\widetilde{\mathbf{v}}}_1,\ldots , {\widetilde{\mathbf{v}}}_q],\quad \varvec{L}=[{\widetilde{\ell }}_1,\ldots , {\widetilde{\ell }}_p]. \end{aligned}$$

Similarly, the vectors \({\widetilde{\mathbf{r}}}_j\in {\mathbb {C}}^{1 \times N}\) and \({\widetilde{\mathbf{w}}}_j\in {\mathbb {C}}^{1 \times N}\) are the j-th rows of \(\mathbf{R}\) and \(\mathbf{W}\), respectively, namely

$$\begin{aligned} \varvec{R}=\left[ \begin{array}{c} {\widetilde{\mathbf{r}}}_1 \\ \vdots \\ {\widetilde{\mathbf{r}}}_q\\ \end{array}\right] , \quad \varvec{W}=\left[ \begin{array}{c} {\widetilde{\mathbf{w}}}_1 \\ \vdots \\ {\widetilde{\mathbf{w}}}_p\\ \end{array}\right] . \end{aligned}$$

Proof

The Loewner and shifted Loewner matrices \({\mathbb {L}}\) and \({\mathbb {S}}\) are Cauchy-like matrices as they are obtained by taking the Hadamard product \(\circ \) between the Cauchy matrix \({\mathcal {C}}\) and the right-hand sides of the Sylvester equations in (11). In particular,

$$\begin{aligned} {\mathbb {L}}&= {\mathcal {C}}\circ \left( \mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\right) ,&{\mathbb {S}}&= {\mathcal {C}}\circ \left( \varvec{M}\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\varvec{\Lambda }\right) . \end{aligned}$$
(18)

An important property of the Hadamard product reads as follows. For any vectors \(\mathbf{x},\mathbf{y}\in {\mathbb {C}}^{N}\), it holds

$$\begin{aligned} {\mathcal {C}}\circ (\mathbf{x}\mathbf{y}^*)=\text {diag}(\mathbf{x}){\mathcal {C}}\text {diag}(\mathbf{y}^*). \end{aligned}$$

This, along with the low-rank structure of \(\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\) and \(\varvec{M}\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\varvec{\Lambda }\), yields the results in (16) and (17). \(\square \)
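In Matlab, the Hadamard characterization (18) yields a one-shot vectorized construction of both matrices (a sketch with the same naming conventions as before; it still requires \({\mathcal {O}}(N^2)\) storage, but it exposes the Cauchy factor that Sect. 3.2 approximates in HSS form):

    % Vectorized construction via (18)
    Cau = 1 ./ (mu - lambda.');                    % Cauchy matrix, mu_h - lambda_k
    Lw  = Cau .* (V*R - Lm*W);                     % Loewner matrix
    Sw  = Cau .* (mu.*(V*R) - (Lm*W).*lambda.');   % shifted Loewner matrix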

Corollary 1

Given a vector \(\mathbf{y}\in {\mathbb {C}}^{N}\) and \(x\in \{s_j\}\), we have

$$\begin{aligned} ({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}=\sum _{j=1}^q {\widetilde{\mathbf{v}}}_j\circ ({\mathcal {C}}(\widetilde{\mathbf{r}}_j^*\circ {\widetilde{\mathbf{y}}}))-\sum _{j=1}^p\widetilde{\ell }_j\circ ({\mathcal {C}}({\widetilde{\mathbf{w}}}_j^*\circ {\widetilde{\mathbf{y}}}))+\mathbf{V}\mathbf{R}\mathbf{y}, \end{aligned}$$

where \({\widetilde{\mathbf{y}}}=(\varvec{\Lambda }-x\mathbf{I})\mathbf{y}\), with \(\mathbf{I}\) the identity matrix.

Proof

Thanks to (12), we can write

$$\begin{aligned} ({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}=&({\mathbb {L}}\varvec{\Lambda }+ \mathbf{V}\mathbf{R}-x{\mathbb {L}})\mathbf{y}={\mathbb {L}}(\varvec{\Lambda }-x\mathbf{I})\mathbf{y}+ \mathbf{V}\mathbf{R}\mathbf{y}. \end{aligned}$$

The result follows by substituting the expression of \({\mathbb {L}}\) given in Theorem 1 in the equation above. \(\square \)

Corollary 1 shows that the majority of the computational cost of performing the matrix-vector multiplication \(({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}\) amounts to computing \(p+q\) matrix-vector products with the Cauchy matrix \({\mathcal {C}}\).
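A Matlab transcription of Corollary 1, written to match the function-handle interface expected by svds (used later in Sect. 4), might look as follows. This is a sketch: the function name and argument list are ours, and Ct may be either the full Cauchy matrix or, anticipating Sects. 3.2-3.3, an approximation of it supporting fast products:

    function z = loewner_afun(y, tflag, x, Ct, lambda, V, R, Lm, W)
    % Matrix-free products with A = S - x*L via Corollary 1 (names ours);
    % lambda: right points, V, R, Lm, W: data matrices as in (9).
    q = size(V, 2);  p = size(Lm, 2);
    if strcmp(tflag, 'notransp')                   % z = A*y
        yt = (lambda - x) .* y;                    % ytilde = (Lambda - x*I)*y
        z  = V*(R*y);                              % low-rank term V*R*y
        for j = 1:q, z = z + V(:,j)  .* (Ct*(R(j,:)' .* yt)); end
        for j = 1:p, z = z - Lm(:,j) .* (Ct*(W(j,:)' .* yt)); end
    else                                           % z = A'*y, conjugate transpose
        w = zeros(size(y));
        for j = 1:q, w = w + R(j,:).' .* (Ct'*(conj(V(:,j))  .* y)); end
        for j = 1:p, w = w - W(j,:).' .* (Ct'*(conj(Lm(:,j)) .* y)); end
        z = conj(lambda - x) .* w + R'*(V'*y);     % A' = (Lambda-xI)'*L' + R'*V'
    end
    end

With the full Cauchy matrix each call still costs \({\mathcal {O}}((p+q)N^2)\) FLOPs; the payoff comes when Ct supports fast matrix-vector products, as in the HSS representation discussed next.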

Extensive research has been devoted to fully exploiting the rich structure of Cauchy matrices. Several algorithms for computing the matrix-vector product \({\mathcal {C}}\mathbf{y}\) can be found in the literature and many avoid assembling the full matrix \({\mathcal {C}}\) (see, e.g., [7, 15, 17, 33]). In the next section we recall the strategy presented by Pan in [33, 34] to represent \({\mathcal {C}}\) in terms of a hierarchically semiseparable (HSS) matrix. Even though the novel scheme proposed in this paper does not depend on the strategy employed for performing the matrix-vector product \({\mathcal {C}}\mathbf{y}\)—as long as it is efficient—we believe that the HSS framework may be advantageous as, in principle, many matrix-vector products with \({\mathcal {C}}\) are needed for computing a (partial) SVD of the matrix \({\mathbb {S}}-x{\mathbb {L}}\).

We conclude this section with the following remarks.

Remark 3

The number n of singular triplets that need to be computed to achieve the minimal realization \((\mathbf{E},\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D})\) in (15) is difficult to estimate a prioriFootnote 2. However, the expression of \({\mathbb {L}}\) and \({\mathbb {S}}\) in terms of the Hadamard product can be useful to this end. Indeed, another important property of the Hadamard product is that, for any matrices \(\mathbf{A}\) and \(\mathbf{B}\), \(\text {rank}(\mathbf{A}\circ \mathbf{B})\le \text {rank}(\mathbf{A})\text {rank}(\mathbf{B})\). Therefore,

$$\begin{aligned} \text {rank}({\mathbb {L}})&\le \text {rank}({\mathcal {C}})\cdot \text {rank}(\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W})\le (p+q)\cdot \text {rank}({\mathcal {C}}), \end{aligned}$$

and similarly for \({\mathbb {S}}\). Thus, we have

$$\begin{aligned} \text {rank}({\mathbb {S}}-x{\mathbb {L}})&\le 2(p+q)\cdot \text {rank}({\mathcal {C}}), \quad \forall \,{x\in \{s_j\}}. \end{aligned}$$
(19)

In general, the Cauchy matrix \({\mathcal {C}}\) is full rank, so this inequality is trivially satisfied. However, depending on the partitioning of the points into \(\lambda _k\) and \(\mu _h\) (as in (3) and (5)), it can be numerically low-rank (see, e.g., [33, Theorem 5], [5, 9]). If \(\pi _{\mathcal {C}}\) denotes the numerical rank of \({\mathcal {C}}\), then \(2(p+q)\pi _{\mathcal {C}}\) is a rough estimate for the numerical rank of \({\mathbb {S}}-x{\mathbb {L}}\). Oftentimes, the underlying dynamical system is of much lower complexity, thus allowing for the computation of a minimal realization of reduced order n. One can also use insight into the system itself or count the number of peaks in the frequency response to estimate n (for systems with poles having dominant imaginary parts).

Remark 4

The expression of \({\mathbb {L}}\) and \({\mathbb {S}}\) in terms of the Hadamard product provides us with an upper bound on the spectral norms of the Loewner and shifted Loewner matrices. Indeed, the spectral norm is submultiplicative with respect to the Hadamard product [20, Theorem 5.5.1], hence

$$\begin{aligned} \Vert {\mathbb {L}}\Vert&=\Vert {\mathcal {C}}\circ (\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W})\Vert \le \Vert {\mathcal {C}}\Vert \cdot \Vert \mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\Vert \\&\le \Vert {\mathcal {C}}\Vert _F\cdot \Vert \mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\Vert =\Vert \mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\Vert \sqrt{\sum _{i=1}^N\sum _{j=1}^N\left| \frac{1}{\mu _i-\lambda _j}\right| ^2}, \end{aligned}$$

where \(\Vert {\mathcal {C}}\Vert _F\) denotes the Frobenius norm of \({\mathcal {C}}\). Note that \(\Vert \mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\Vert \) can be computed cheaply, e.g., by a power method exploiting the low rank of \(\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\).

Similarly,

$$\begin{aligned} \Vert {\mathbb {S}}\Vert \le \Vert \mathbf{M}\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\varvec{\Lambda }\Vert \sqrt{\sum _{i=1}^N\sum _{j=1}^N\left| \frac{1}{\mu _i-\lambda _j}\right| ^2}. \end{aligned}$$
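For instance, \(\Vert \mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\Vert \) can be obtained from thin QR factors of the rank-\((p+q)\) factorization of \(\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\), a cheap alternative to the power method (a sketch; names ours):

    % ||V*R - Lm*W||_2 without forming the N x N product,
    % using V*R - Lm*W = [V, -Lm] * [R; W]
    [~, R1] = qr([V, -Lm], 0);          % economy QR of the N x (p+q) factor
    [~, R2] = qr([R; W]', 0);
    nrm2 = norm(R1 * R2');              % = ||V*R - Lm*W||_2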

Remark 5

Low-rank approximations to \({\mathbb {L}}\) and \({\mathbb {S}}\) may be computed by adaptive cross approximation [4], particularly suited for hierarchical matrices, the CUR decomposition [11] as in [22, 38], or related schemes. These approaches select a certain number of columns and rows of the original matrices in a greedy fashion based on various heuristics, and a core matrix is utilized to compute a low-rank approximation. If a threshold on the desired accuracy of the computed approximation is provided as an input, these algorithms often construct matrices whose rank is much larger than that of the target matrices \({\mathbb {L}}\) and \({\mathbb {S}}\). On the other hand, by fixing the rank k of the approximation with \(k\approx \mathtt {rank}({\mathbb {L}}),\mathtt {rank}({\mathbb {S}})\) (assuming such estimates are known), the achieved accuracy may be very low, affecting the reliability of the computed reduced models.

3.2 Hierarchically Semiseparable (HSS) Representation of a Cauchy Matrix

The literature on HSS matrices is rather vast and technical (see, e.g., [8, 33, 34, 44, 47] and references therein). Here we recall only the main properties of this class of matrices and their role in the efficient representation of Cauchy matrices. Such a technique is also closely related to the Fast Multipole Method (FMM). We refer the interested reader to, e.g., [8, 9] for more details on the interconnection between HSS matrices and FMM.

Definition 1

[34, Definition 27] Let \(\mathbf{A}\) be an \(N\times N\) matrix, with \(\alpha \) and \(\beta \) being the maximum ranks of all its subdiagonal blocks (the blocks of all sizes lying strictly below the block diagonal) and of all its superdiagonal blocks (the blocks of all sizes lying strictly above the block diagonal), respectively. Then, \(\mathbf{A}\) is \((\alpha ,\beta )\)-HSS if its diagonal blocks consist of \({\mathcal {O}}((\alpha +\beta )N)\) entries.

The \((\alpha ,\beta )\)-HSS representation of a matrix \(\mathbf{A}\) is very advantageous whenever \(\alpha \) and \(\beta \) are small. For instance, it allows us to express \(\mathbf{A}\) in terms of \({\mathcal {O}}((\alpha +\beta )N)\) parameters, avoiding storing its \(N^2\) entries. Moreover, a whole, efficient HSS arithmetic has been developed in the last decades (see, e.g., [8, 47]). For instance, the computational cost of the matrix-vector product \(\mathbf{A}\mathbf{y}\) amounts to \({\mathcal {O}}((\alpha +\beta )N)\) FLOPs. If \(\mathbf{A}\) is nonsingular, its inverse is also an \((\alpha ,\beta )\)-HSS matrix that can be computed in \({\mathcal {O}}((\alpha +\beta )^3N)\) FLOPs (see, e.g., [34, Section 6]).

To fully exploit the HSS framework for our purposes, we wish to represent the Cauchy matrix \({\mathcal {C}}\) in terms of an HSS matrix with a low off-diagonal rank. In light of Corollary 1, this would considerably decrease the computational cost of the matrix-vector products involving \({\mathbb {S}}-x{\mathbb {L}}\) while avoiding forming the dense matrices \({\mathbb {S}}\) and \({\mathbb {L}}\).

The construction of an HSS approximation \(\widetilde{{\mathcal {C}}}\) to \({\mathcal {C}}\) is rather involved and the magnitude of the \((\alpha ,\beta )\)-rank of the computed \(\widetilde{{\mathcal {C}}}\) strictly depends on the partitioning of the frequencies along with the accuracy that has been selected for the actual computation of \(\widetilde{{\mathcal {C}}}\)Footnote 3. Given two parameters \(c\in {\mathbb {C}}\) and \(1\le r\le N\), the cardinal relation underlying this construction is the following

$$\begin{aligned} \frac{1}{\mu _i-\lambda _j}=&\frac{1}{\mu _i-c}\cdot \frac{1}{1-\frac{\lambda _j-c}{\mu _i-c}}\\ =&\sum _{\ell =0}^{r-1}\frac{1}{(\mu _i-c)^{\ell +1}}(\lambda _j-c)^\ell +\underbrace{\frac{1}{\mu _i-c}\left( \frac{\lambda _j-c}{\mu _i-c}\right) ^r\frac{1}{1-\frac{\lambda _j-c}{\mu _i-c}}}_{=:{\mathcal {E}}}, \end{aligned}$$

and the approximation obtained by neglecting the error term \({\mathcal {E}}\) can be represented in low-rank format thanks to the separability (in \(\mu _i\) and \(\lambda _j\)) of the first term in the last step of the relation above. See, e.g., [34, Section 8] for further details on the computation of an HSS representation of a Cauchy matrix. In this paper we employ the readily available hm-toolbox [28].
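The following toy computation illustrates the decay of the neglected term \({\mathcal {E}}\) for a well-separated pair of points (all values are ours, chosen only for illustration):

    % Truncated separable (rank-r) expansion of 1/(mu - lam) around a center c
    mu = 10;  lam = 1.5;  c = 1;  r = 8;
    ell = 0:r-1;
    approx = sum((lam - c).^ell ./ (mu - c).^(ell + 1));
    err = abs(1/(mu - lam) - approx);   % equals |(lam-c)/(mu-c)|^r / |mu - lam|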

Example 1

We investigate the impact of the most commonly used frequency partitions (Half&Half and Odd&Even) on the HSS-rank of the computed \(\widetilde{{\mathcal {C}}}\) for a mechanical structure. We emphasize that the most effective partition is problem-dependent and is still an open problem, beyond the scope of this paper. However, in [12, 21, Ch. 2.1] the authors suggest the use of partitions with interleaving frequencies like Odd&Even in order to avoid the introduction of an “artificial” ill-conditioning.

We consider the Flexible Aircraft data set [36] from the MORwiki [42]. This data set contains 421 frequency values \(\omega _j\) expressed in rad/s and the corresponding measurements of the transfer function \(\mathbf{H}_j\). We disregard the last data point and consider the remaining frequencies, ranging from \(f_1 = 0.1\) Hz to \(f_{420}=42\) Hz. As this is a mechanical structure, the frequencies considered lie in the low range, as opposed to electrical systems, for which frequencies typically span the GHz range.

To avoid complex arithmetic, it is advantageous to perform a change of basis when dealing with sampling points on the imaginary axis, \(s_j = \mathrm {i}\omega _j\). By defining

$$\begin{aligned} \varvec{\Pi }= \frac{1}{\sqrt{2}}\left[ \begin{array}{rr} 1 &{} -\mathrm {i}\\ 1 &{} \mathrm {i}\end{array}\right] \hbox { and } \mathbf{P}= \hbox {blkdiag}\left( \varvec{\Pi },\ldots ,\varvec{\Pi }\right) \in {\mathbb {C}}^{N \times N}, \end{aligned}$$
(20)

we obtain matrices with real entries:

$$\begin{aligned}&\varvec{\Lambda }_r:=\mathbf{P}^*\varvec{\Lambda }\mathbf{P},~ \varvec{M}_r :=\mathbf{P}^*\varvec{M} \mathbf{P},~ \mathbf{L}_r:=\mathbf{P}^*\mathbf{L},~ \mathbf{V}_r:=\mathbf{P}^*\mathbf{V},~\\&\quad \mathbf{R}_r:=\mathbf{R}\mathbf{P},~\mathbf{W}_r:=\mathbf{W}\mathbf{P},~{\mathbb {L}}_r:=\mathbf{P}^*{\mathbb {L}}\mathbf{P},~ {\mathbb {S}}_r :=\mathbf{P}^*{\mathbb {S}}\mathbf{P}, \end{aligned}$$

where \(\mathbf{P}^*\) stands for the complex conjugate transpose of the matrix \(\mathbf{P}\) and \(\mathbf{P}^{-1} = \mathbf{P}^*\). These quantities satisfy expressions analogous to (11) and (12). Unfortunately, \(\varvec{\Lambda }_r\) and \(\varvec{M}_r\) are no longer diagonal, which prevents us from directly taking advantage of the Sylvester equations (11) for a fast computation of \({\mathbb {L}}_r\) and \({\mathbb {S}}_r\). However, \(\varvec{\Lambda }_r^2\) and \(\varvec{M}_r^2\) are diagonal and given by

$$\begin{aligned} \varvec{\Lambda }_r^2&= \hbox {blkdiag}\left( \left[ \begin{array}{rr} -\omega _k^2 &{} 0\\ 0 &{} -\omega _k^2 \end{array}\right] \right) ,\; k=1,\ldots ,N,\nonumber \\ \varvec{M}_r^2&= \hbox {blkdiag}\left( \left[ \begin{array}{rr} -\omega _h^2 &{} 0\\ 0 &{} -\omega _h^2 \end{array}\right] \right) ,\; h=1,\ldots ,N. \end{aligned}$$
(21)

By multiplying the first equation in (11) by \(\varvec{M}_r\) on the left and, afterwards, multiplying it by \(\varvec{\Lambda }_r\) on the right and adding the results together, a new Sylvester equation with diagonal coefficient matrices is obtained:

$$\begin{aligned} \varvec{M}_r^2 {\mathbb {L}}_r-{\mathbb {L}}_r \varvec{\Lambda }_r^2&= \varvec{M}_r\left( \mathbf{V}_r\mathbf{R}_r-\mathbf{L}_r \mathbf{W}_r\right) +\left( \mathbf{V}_r\mathbf{R}_r-\mathbf{L}_r \mathbf{W}_r\right) \varvec{\Lambda }_r. \end{aligned}$$
(22)

By performing the same operations on the second equation in (11), a similar Sylvester equation is obtained for the shifted Loewner matrix:

$$\begin{aligned} \varvec{M}_r^2 {\mathbb {S}}_r-{\mathbb {S}}_r \varvec{\Lambda }_r^2&= \varvec{M}_r\left( \varvec{M}_r\mathbf{V}_r\mathbf{R}_r-\mathbf{L}_r \mathbf{W}_r \varvec{\Lambda }_r\right) + \left( \varvec{M}_r\mathbf{V}_r\mathbf{R}_r-\mathbf{L}_r \mathbf{W}_r \varvec{\Lambda }_r\right) \varvec{\Lambda }_r. \end{aligned}$$
(23)

In the following, we refer to this as the Odd&Even (real) partition.Footnote 4
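A Matlab sketch of the transformation (20) (names ours; it assumes N even and the sample points ordered in conjugate pairs):

    % Real data matrices via (20); lambda, mu ordered as (i*w, -i*w) pairs
    Pi = [1, -1i; 1, 1i] / sqrt(2);
    P  = kron(eye(N/2), Pi);                  % blkdiag(Pi, ..., Pi)
    Lr = P'*Lm;   Vr = P'*V;                  % left data
    Rr = R*P;     Wr = W*P;                   % right data
    Lam_r = real(P'*diag(lambda)*P);          % real, 2 x 2 block structure
    M_r   = real(P'*diag(mu)*P);
    Lam_r2 = Lam_r^2;  M_r2 = M_r^2;          % diagonal, as in (21)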

We recall the three different partitions of the sampling points \(\{s_j = \mathrm {i}\omega _j\}_{j=1}^{j=420}\):

  • Half&Half: \(\varvec{\Lambda }=\text {diag}([\mathrm {i}\omega _1,-\mathrm {i}\omega _1,\ldots ,\mathrm {i}\omega _{210},-\mathrm {i}\omega _{210}])\)

    \(\mathbf{M}=\text {diag}([\mathrm {i}\omega _{211},-\mathrm {i}\omega _{211},\ldots ,\mathrm {i}\omega _{420},-\mathrm {i}\omega _{420}])\).

  • Odd&Even: \(\varvec{\Lambda }=\text {diag}([\mathrm {i}\omega _1,-\mathrm {i}\omega _1,\ldots ,\mathrm {i}\omega _{419},-\mathrm {i}\omega _{419}])\)

    \(\mathbf{M}=\text {diag}([\mathrm {i}\omega _{2},-\mathrm {i}\omega _{2},\ldots ,\mathrm {i}\omega _{420},-\mathrm {i}\omega _{420}])\).

  • Odd&Even (Real): \(\varvec{\Lambda }_r=\text {diag}([-\omega _1^2,-\omega _1^2,\ldots ,-\omega _{419}^2,-\omega _{419}^2])\)

    \(\mathbf{M}_r=\text {diag}([-\omega _{2}^2,-\omega _{2}^2,\ldots ,-\omega _{420}^2,-\omega _{420}^2])\).

Table 1 Example 1. HSS-rank and relative error of the HSS representation \(\widetilde{{\mathcal {C}}}\) of \({\mathcal {C}}\) for different frequency partitions along with the rank of \({\mathcal {C}}\)

For each partition, we compute the corresponding Cauchy matrix in HSS format \(\widetilde{{\mathcal {C}}}\) without assembling the full \({\mathcal {C}}\) beforehand, by means of the function hss of the hm-toolbox:

$$\begin{aligned} {\widetilde{{\mathcal {C}}}} =\mathtt {hss('cauchy',dM,-dL,N,N)} \end{aligned}$$

where \(\mathtt {dM}\) and \(\mathtt {dL}\) are N dimensional vectors containing the frequencies \(\mu _h\) and \(\lambda _k\), respectively. We then calculate its rank by \(\mathtt {hssrank}(\widetilde{{\mathcal {C}}})\)Footnote 5. In Table 1 we report the HSS-rank of the matrix \(\widetilde{{\mathcal {C}}}\) for the partitions mentioned above. Thanks to the small dimension of the data set, we are able to compute the full Cauchy matrix \({\mathcal {C}}\) and document its (standard) rank along with the relative error \(\Vert \widetilde{{\mathcal {C}}}-{\mathcal {C}}\Vert /\Vert {\mathcal {C}}\Vert \). As expected, having two disjoint sets of frequencies, as in the Half&Half partition, leads to a Cauchy matrix \({\mathcal {C}}\) whose (standard) rank is low. This does not happen in the other two scenarios we examine, so taking advantage of the HSS format is necessary to achieve memory-saving representations of \({\mathcal {C}}\). The results in Table 1 show that a good accuracy in terms of the relative error can be achieved for all three frequency partitions. Nevertheless, the HSS-rank of \(\widetilde{{\mathcal {C}}}\) is significantly lower for the Odd&Even (Real) partition, most likely due to the squaring of the frequencies, which leads to a fast decay in the magnitude of the off-diagonal entries of \({\mathcal {C}}\). Hence, for a fixed threshold, the off-diagonal blocks of the Cauchy matrix associated with the Odd&Even (Real) partition can be approximated by matrices having a smaller rank than those associated with the other two scenarios we examined.
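For completeness, the quantities reported in Table 1 can be reproduced along the following lines (a sketch; hssoption is the hm-toolbox's global setter for the truncation threshold, and the variable names are ours):

    % HSS approximation of C and the quantities reported in Table 1
    hssoption('threshold', 1e-14);            % off-diagonal truncation tolerance
    Ct = hss('cauchy', dM, -dL, N, N);        % C_{hk} = 1/(mu_h - lambda_k)
    k  = hssrank(Ct);                         % HSS-rank
    Cf = 1 ./ (dM - dL.');                    % full Cauchy matrix (small N only)
    relerr = norm(full(Ct) - Cf) / norm(Cf);  % relative error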

In Fig. 1 we display the absolute value—on a logarithmic scale—of the entries of the Cauchy matrix \({\mathcal {C}}\) stemming from the different partitions. The same scale has been used in all three panels, reinforcing the observation that the Odd&Even (Real) partition exhibits the fastest decay in the magnitude of the off-diagonal entries of \({\mathcal {C}}\).

Fig. 1

Example 1. Absolute value—on a logarithmic scale—of the entries of the Cauchy matrix \({\mathcal {C}}\) stemming from the three different frequency partitions we have examined

3.3 Efficient, Inexact Matrix-Vector Products

Whenever the matrix \({\mathcal {C}}\) admits an accurate approximation in terms of a low-rank HSS matrix \({\widetilde{{\mathcal {C}}}}\), the computational cost of performing the matrix-vector product \(({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}\) can be significantly reduced.

Proposition 1

Let \({\widetilde{{\mathcal {C}}}}\) be an \((\alpha ,\beta )\)-HSS matrix that approximates the Cauchy matrix \({\mathcal {C}}\) accurately. If \({\mathbb {L}}\) and \({\mathbb {S}}\) satisfy the Sylvester equations in (11), then

$$\begin{aligned} ({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}=\sum _{j=1}^q \widetilde{\mathbf{v}}_j\circ (\widetilde{{\mathcal {C}}}({\widetilde{\mathbf{r}}}_j^*\circ \widetilde{\mathbf{y}}))-\sum _{j=1}^p{\widetilde{\ell }}_j\circ (\widetilde{{\mathcal {C}}}(\widetilde{\mathbf{w}}_j^*\circ {\widetilde{\mathbf{y}}}))+\mathbf{V}\mathbf{R}\mathbf{y}+ \mathcal {\varvec{E}}{\widetilde{\mathbf{y}}}, \end{aligned}$$
(24)

where \(\Vert \mathcal {\varvec{E}}\Vert \le (p+q)\Vert {\mathcal {C}}-\widetilde{{\mathcal {C}}}\Vert \max ^2\{\Vert \mathbf{V}\Vert _{\infty },\Vert \mathbf{R}\Vert _{\infty },\Vert \mathbf{L}\Vert _{\infty },\Vert \mathbf{W}\Vert _{\infty }\}\). Moreover, the computational cost of performing

$$\begin{aligned} \sum _{j=1}^q {\widetilde{\mathbf{v}}}_j\circ (\widetilde{{\mathcal {C}}}(\widetilde{\mathbf{r}}_j^*\circ {\widetilde{\mathbf{y}}}))-\sum _{j=1}^p\widetilde{\ell }_j\circ (\widetilde{{\mathcal {C}}}({\widetilde{\mathbf{w}}}_j^*\circ {\widetilde{\mathbf{y}}}))+\mathbf{V}\mathbf{R}\mathbf{y}, \end{aligned}$$
(25)

amounts to \({\mathcal {O}}((p+q)(\alpha +\beta +1)N)\) FLOPs.

Proof

From the result in Corollary 1, we can write

$$\begin{aligned} ({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}=&\sum _{j=1}^q \widetilde{\mathbf{v}}_j\circ (\widetilde{{\mathcal {C}}}({\widetilde{\mathbf{r}}}_j^*\circ \widetilde{\mathbf{y}}))-\sum _{j=1}^p{\widetilde{\ell }}_j\circ (\widetilde{{\mathcal {C}}}(\widetilde{\mathbf{w}}_j^*\circ {\widetilde{\mathbf{y}}}))+\mathbf{V}\mathbf{R}\mathbf{y}\\&+\sum _{j=1}^q \widetilde{\mathbf{v}}_j\circ (({\mathcal {C}}-\widetilde{{\mathcal {C}}})({\widetilde{\mathbf{r}}}_j^*\circ \widetilde{\mathbf{y}}))-\sum _{j=1}^p\widetilde{\ell }_j\circ (({\mathcal {C}}-\widetilde{{\mathcal {C}}})(\widetilde{\mathbf{w}}_j^*\circ {\widetilde{\mathbf{y}}}))\\ =&\sum _{j=1}^q {\widetilde{\mathbf{v}}}_j\circ (\widetilde{{\mathcal {C}}}(\widetilde{\mathbf{r}}_j^*\circ {\widetilde{\mathbf{y}}}))-\sum _{j=1}^p\widetilde{\ell }_j\circ (\widetilde{{\mathcal {C}}}({\widetilde{\mathbf{w}}}_j^*\circ \widetilde{\mathbf{y}}))+\mathbf{V}\mathbf{R}\mathbf{y}+\mathcal {\varvec{E}}{\widetilde{\mathbf{y}}}, \end{aligned}$$

where \(\mathcal {\varvec{E}}:=\displaystyle \sum _{j=1}^q \text {diag}({\widetilde{\mathbf{v}}}_j)({\mathcal {C}}-\widetilde{{\mathcal {C}}})\text {diag}(\widetilde{\mathbf{r}}_j^*)-\sum _{j=1}^p\text {diag}(\widetilde{\ell }_j)({\mathcal {C}}-\widetilde{{\mathcal {C}}})\text {diag}({\widetilde{\mathbf{w}}}_j^*)\). Therefore,

$$\begin{aligned} \Vert \mathcal {\varvec{E}}\Vert \le&\Vert {\mathcal {C}}-\widetilde{{\mathcal {C}}}\Vert \left( \sum _{j=1}^q\Vert \text {diag}(\widetilde{\mathbf{v}}_j)\Vert \Vert \text {diag}(\widetilde{\mathbf{r}}_j^*)\Vert +\sum _{j=1}^p\Vert \text {diag}(\widetilde{\ell }_j)\Vert \Vert \text {diag}({\widetilde{\mathbf{w}}}_j^*)\Vert \right) \\ =&\Vert {\mathcal {C}}-\widetilde{{\mathcal {C}}}\Vert \left( \sum _{j=1}^q\Vert \widetilde{\mathbf{v}}_j\Vert _{\infty }\Vert \widetilde{\mathbf{r}}_j\Vert _{\infty }+\sum _{j=1}^p\Vert \widetilde{\ell }_j\Vert _{\infty }\Vert {\widetilde{\mathbf{w}}}_j\Vert _{\infty }\right) \\ \le&(p+q)\Vert {\mathcal {C}}-\widetilde{{\mathcal {C}}}\Vert \max {^2}\{\Vert \mathbf{V}\Vert _{\infty },\Vert \mathbf{R}\Vert _{\infty },\Vert \mathbf{L}\Vert _{\infty },\Vert \mathbf{W}\Vert _{\infty }\}. \end{aligned}$$

This proves the first part of Proposition 1. To conclude, by making use of the property that the matrix-vector product with an \((\alpha ,\beta )\)-HSS matrix costs \({\mathcal {O}}((\alpha +\beta )N)\) FLOPs and that \(\mathbf{V}\mathbf{R}\) has rank q, a direct computation shows that the number of operations needed to perform (25) amounts to \({\mathcal {O}}((p+q)(\alpha +\beta +1)N)\) FLOPs, which proves the second claim in Proposition 1. \(\square \)

As before, analogous results can be obtained for \({\mathbb {L}}_r\) and \({\mathbb {S}}_r\) satisfying (22) and (23), respectively.

Proposition 1 shows that, whenever \(\Vert {\mathcal {C}}-\widetilde{{\mathcal {C}}}\Vert \) is small, the matrix-vector product \(({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}\) can be well-approximated by the expression in (25), dramatically reducing the computational complexity from \({\mathcal {O}}(N^2)\) FLOPs to \({\mathcal {O}}((p+q)(\alpha +\beta +1)N)\) FLOPs. However, when this approximation is used within the chosen iterative procedure for computing a partial SVD of \({\mathbb {S}}-x{\mathbb {L}}\), the inexactness introduced by neglecting the term \(\mathcal {\varvec{E}}{\widetilde{\mathbf{y}}}\) should be taken into account.

The use of inexact matrix-vector products within certain iterative procedures has been the subject of numerous research papers: Krylov techniques for solving linear systems and matrix equations [6, 24, 32, 40, 43], eigenvalue problems [13, 39], and an inexact variant of the Lanczos bidiagonalization for computing some leading singular triplets of a generic matrix function \(f(\mathbf{A})\) [14]. With the goal of decreasing the computational cost of the overall procedure, these studies show that the accuracy of the matrix-vector product can be relaxed (becoming more and more inaccurate) as the iterations proceed. In our framework, the inexactness introduced by approximating \(({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}\) with (25) is fixed throughout the entire iterative procedure and mainly depends on \(\Vert {\mathcal {C}}-\widetilde{{\mathcal {C}}}\Vert \), which is often small, as shown in Example 1. Therefore, the approximation

$$\begin{aligned} ({\mathbb {S}}-x{\mathbb {L}})\mathbf{y}\approx \sum _{i=1}^q \widetilde{\mathbf{v}}_i\circ (\widetilde{{\mathcal {C}}}({\widetilde{\mathbf{r}}}_i^*\circ \widetilde{\mathbf{y}}))-\sum _{i=1}^p {\widetilde{\ell }}_i\circ (\widetilde{{\mathcal {C}}}(\widetilde{\mathbf{w}}_i^*\circ {\widetilde{\mathbf{y}}}))+\mathbf{V}\mathbf{R}\mathbf{y}, \end{aligned}$$

does not greatly affect the accuracy of the computed singular triplets (see Sect. 4). Moreover, in our case, we do not need an accurate approximation of the singular triplets of \({\mathbb {S}}-x{\mathbb {L}}\). The main goal is to have meaningful spaces spanned by the computed left and right singular vectors so that the obtained reduced model \((\mathbf{E},\mathbf{A},\mathbf{B},\mathbf{C})\) inherits the desired approximation properties. In addition, as shown in [21, Corollary 1.4] and [2, Proposition 8.25], in the case of noise-free measurements of a low-order rational function, even general projectors, not necessarily obtained from the SVD, can be employed for identifying the underlying function.
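Putting the pieces together, a minimal end-to-end sketch reads as follows (names ours; loewner_afun is the matrix-free product from Sect. 3.1, and the column-wise operations rely on Matlab's implicit expansion, R2016b or later):

    % Inexact partial SVD of S - x*L and projection, never forming L and S
    afun = @(y, tflag) loewner_afun(y, tflag, x, Ct, lambda, V, R, Lm, W);
    [Y, ~, X] = svds(afun, [N, N], n);        % n dominant singular triplets
    Xn = X;  Yn = Y';
    LXn = zeros(N, n);  LLXn = zeros(N, n);   % L*Xn and L*(Lambda*Xn)
    for j = 1:q
        LXn  = LXn  + V(:,j) .* (Ct*(R(j,:)' .* Xn));
        LLXn = LLXn + V(:,j) .* (Ct*(R(j,:)' .* (lambda .* Xn)));
    end
    for j = 1:p
        LXn  = LXn  - Lm(:,j) .* (Ct*(W(j,:)' .* Xn));
        LLXn = LLXn - Lm(:,j) .* (Ct*(W(j,:)' .* (lambda .* Xn)));
    end
    SXn = LLXn + V*(R*Xn);                    % S*Xn = L*Lambda*Xn + V*R*Xn, by (12)
    E = -Yn*LXn;  A = -Yn*SXn;  B = Yn*V;  Cm = W*Xn;   % model (15)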

Remark 6

In Remark 3 we suggested the use of the value \(2(p+q)\pi _{\mathcal {C}}\), where \(\pi _{\mathcal {C}}\) is the numerical rank of \({\mathcal {C}}\), to decide on the number n of singular triplets of \({\mathbb {S}}-x{\mathbb {L}}\) needed for the reduced model. For interlaced partitions, as is the case for Odd&Even and Odd&Even (real) (see Table 1), the numerical (standard) rank of the Cauchy matrix is large, in general. Hence, the value \(n=2(p+q)\max \{\alpha ,\beta \}\) may instead be employed for the computation of a meaningful reduced model whenever \({\mathcal {C}}\) can be well-approximated by an \((\alpha ,\beta )\)-HSS matrix \({\widetilde{{\mathcal {C}}}}\)Footnote 6. Moreover, the HSS-rank of \({\widetilde{{\mathcal {C}}}}\) is obtained as a byproduct of the construction of \({\widetilde{{\mathcal {C}}}}\).

Remark 7

If \({\mathcal {C}}\) admits an accurate approximation in terms of an \((\alpha ,\beta )\)-HSS matrix \({\widetilde{{\mathcal {C}}}}\), the expression in Theorem 1 shows that \({\mathbb {L}}\) can also be well-approximated by an HSS matrix \({\widetilde{{\mathbb {L}}}}\) whose HSS-rank is at most \(((p+q)\alpha ,(p+q)\beta )\). Even though the computational cost of \({\widetilde{{\mathbb {L}}}} \mathbf{y}\) would still be \({\mathcal {O}}((p+q)(\alpha +\beta )N)\) FLOPs, using the HSS approximation \({\widetilde{{\mathbb {L}}}}\) of \({\mathbb {L}}\) may be very advantageous whenever linear systems with \({\mathbb {L}}\) need to be solved (see, e.g., the procedure presented in [12] for the pseudospectra computation of \({\mathbb {S}}-x{\mathbb {L}}\)). Indeed, as mentioned in Sect. 3.2, the computation of the inverse \({\widetilde{{\mathbb {L}}}}^{-1}\) of \({\widetilde{{\mathbb {L}}}}\) costs \({\mathcal {O}}((p+q)^3(\alpha +\beta )^3N)\) FLOPs. Once \({\widetilde{{\mathbb {L}}}}^{-1}\) is computed, only \({\mathcal {O}}((p+q)(\alpha +\beta )N)\) FLOPs are needed to perform \({\widetilde{{\mathbb {L}}}}^{-1}\mathbf{y}\).

Remark 8

We would like to mention that we did not observe any numerical issues related to the matrix-vector product (25) and its stability during our extensive numerical testing. Moreover, one may want to perform the matrix-vector product with \({\widetilde{{\mathcal {C}}}}\) in a parallel environment to achieve better computational performance. The strumpack packageFootnote 7 may be employed to this end. See also, e.g., [37]. However, such a parallel approach has not been used in the numerical experiments presented in Sect. 4.

4 Numerical Results

In this section we present numerical experiments illustrating the potential of the proposed approach.

In Example 2, we compare our approach to standard procedures employed in the Loewner framework. Recall that the main steps in the standard approach involve forming the full Loewner and shifted Loewner matrices \({\mathbb {L}}\) and \({\mathbb {S}}\) and computing the SVD of \({\mathbb {S}}-x{\mathbb {L}}\). This SVD can either be computed in full, followed by keeping only the n dominant singular vectors, or only these n singular vectors can be obtained by means of an iterative procedure, where the matrix-vector product with \({\mathbb {S}}-x{\mathbb {L}}\) is neededFootnote 8. In the following, we report the overall running time, considering the construction step (Construction), i.e., the computation of \({\mathbb {L}}\) and \({\mathbb {S}}\) in the standard approach and of \({\widetilde{{\mathcal {C}}}}\) in our approach, as well as the reduction step (Reduction), involving the SVD computation followed by the projection yielding the reduced matrices in (15). In terms of memory requirements, for our approach, this involves the allocation of \({\widetilde{{\mathcal {C}}}}\) in the HSS format, while for the standard approach, we report the storage required for \({\mathbb {L}}\) and \({\mathbb {S}}\).

In Table 2 we recall the computational cost of the construction and reduction steps of both the standard approach, based on either a full or a partial SVD, and the novel one presented in this paper along with their memory requirements.

Table 2 Computational cost of the construction (Construction) and reduction (Reduction) steps of the different approaches we test along with their storage demand (Storage). The computational cost of the construction of \({\widetilde{{\mathcal {C}}}}\) can be found, e.g., in [28, Table 1]

Lastly, the accuracy of the reduced models is reported in terms of the normalized \({\mathcal {H}}_2\)-error:

$$\begin{aligned} {\mathcal {H}}_2\text {-error}=\sqrt{\frac{\sum _{j=1}^N\Vert \mathbf{H}_j-\mathbf{H}(\mathrm {i}\omega _j)\Vert _F^2}{\sum _{j=1}^N\Vert \mathbf{H}_j\Vert _F^2}}, \end{aligned}$$

where \(\Vert \cdot \Vert _F\) denotes the Frobenius norm. Similar results in terms of accuracy are attained for the \({\mathcal {H}}_\infty \)-error; however, we do not document them here for the sake of brevity.
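In code, this error is evaluated directly from the measurements and the reduced realization (a sketch; H is the p x q x N measurement array, omega the sampled frequencies, and E, A, B, Cm the matrices from (15), all names ours):

    % Normalized H2-type error over the N sample points
    num = 0;  den = 0;
    for j = 1:N
        Hj  = H(:,:,j);                        % measurement H_j
        Hrj = Cm * ((1i*omega(j)*E - A) \ B);  % reduced transfer function at i*w_j
        num = num + norm(Hj - Hrj, 'fro')^2;
        den = den + norm(Hj, 'fro')^2;
    end
    h2err = sqrt(num / den);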

In Example 3, we compare our novel strategy to the one presented in [18], which makes use of the low-rank ADI-Galerkin method for computing the Loewner matrix as the solution of (11). Such a scheme computes low-rank approximations to the dense Loewner matrix to speed up the SVD computation; however, the memory constraints originating from the allocation of \({\mathbb {L}}\) and \({\mathbb {S}}\) are still present.

Results were obtained by running Matlab R2020b [29] on a MacBook Pro with an Intel Core i9 processor running at 2.3 GHz and 16 GB of RAM. All computations involving HSS matrices employed the hm-toolbox [28] with the default settings and the threshold for off-diagonal truncation set to \(10^{-14}\).

Example 2

We consider a synthetic problem for which we can control the order of the original system (n), the number of inputs and outputs (\(p=q\)), as well as the number of measurements (N). The system dynamics is generated randomly, with poles in complex conjugate pairs. In particular:

  • the real part of the poles is random with mean \(-10^4\) and standard deviation \(2\cdot 10^3\); the imaginary part is also random, with mean \(10^4\) and standard deviation \(10^6\).

  • residues associated with each pole are rank-1 matrices, obtained as outer products of two random vectors, both having real parts with mean 0 and standard deviation 10, and imaginary parts with mean 0 and standard deviation \(10^2\).

Measurement points \(\{s_j = \mathrm {i}\omega _j\}_{j=1}^{j=N}\) are logarithmically distributed between \(10^4\) and \(10^7\) rad/s. Last, but not least, random noise with a signal-to-noise ratio \(\mathrm {SNR}=100\) was added to the transfer function evaluations \(\mathbf{H}(\mathrm {i}\omega _j)\) to obtain the measurement matrices \(\mathbf{H}_j\). We adopt the Odd&Even (real) partition of the frequencies as it achieves satisfactory approximation results while eliminating complex arithmetic. Tangential directions are chosen as unit vectors (rows and columns of the identity matrix of size p).

We compare the proposed approach to the traditional Loewner framework, in which the Loewner and shifted Loewner matrices \({\mathbb {L}}\) and \({\mathbb {S}}\) are formed and the full SVD of \({\mathbb {S}}-x{\mathbb {L}}\) is computed, as well as to the alternative approach in which, after building \({\mathbb {L}}\) and \({\mathbb {S}}\), a partial SVD of \({\mathbb {S}}-x{\mathbb {L}}\) is computed using the Matlab svds function, for various instances of the data set described above and different values of N, p, and n. The command svds was employed with the left starting vector \({\widetilde{\mathbf{v}}}_1\) (same notation as in Theorem 1) instead of the default random starting vector.

Figure 2 presents the memory requirements for storing the Loewner and shifted Loewner matrices \({\mathbb {L}}\) and \({\mathbb {S}}\) (in red), as opposed to storing the HSS approximation \({\widetilde{{\mathcal {C}}}}\) in our approach (in blue), along with the storage needed to allocate the data in \(\varvec{\Lambda }_r\), \(\varvec{M}_r\), \(\mathbf{R}_r\), \(\mathbf{L}_r\), \(\mathbf{V}_r\), \(\mathbf{W}_r\), for increasing numbers of inputs and outputs p (in black). We point out that for values of N larger than \(40\,000\), we were not able to allocate the full matrices \({\mathbb {L}}\) and \({\mathbb {S}}\) on the employed laptop (this value, however, depends on the available RAM of the machine). For instances when these matrices can be allocated, Fig. 2 shows that the memory requirements for the proposed approach are always much lower than for the standard scheme. Moreover, in contrast to what happens to the memory required for the data matrices, the storage demanded by the allocation of \({\widetilde{{\mathcal {C}}}}\) in HSS format is independent of p.

Fig. 2

Example 2. Memory requirements in Megabytes to store \({\mathbb {L}}\), \({\mathbb {S}}\), \({\widetilde{{\mathcal {C}}}}\), and the data matrices (\(\Lambda _r\), \(M_r\), \(\mathbf{V}_r\), \(\mathbf{W}_r\), \(\mathbf{L}_r\), \(\mathbf{R}_r\)) for different values of N and p

Table 3 Example 2. Computational time (in seconds) and \({\mathcal {H}}_2\)-error achieved by each approach for different values of N (number of samples), p (number of inputs and outputs), and n (order of the underlying system and of the model) on the employed laptop

We report the results of the comparison between the different approaches in terms of run time in Table 3, for the number of measurements N varying from \(1\,000\) to \(100\,000\), the number of inputs and outputs p taking values 1, 5, and 10, and the number of poles being 50 or 100. The “–” is used to indicate the instances for which we were not able to compute the reduced model (15): for \(N>50\,000\), we cannot allocate the full matrices \({\mathbb {L}}\) and \({\mathbb {S}}\), and for \(N=30\,000,40\,000\) we could not compute the full SVD of \({\mathbb {S}}-x{\mathbb {L}}\). Such constraints are not relevant to our proposed strategy. The following observations are pertinent:

1. The CPU time of the full SVD approach does not depend on p and n, only on N, as expected from Table 2: indeed, the cost of building \({\mathbb {L}}\) and \({\mathbb {S}}\) is quadratic in N, whereas the full SVD demands \({\mathcal {O}}(N^3)\) FLOPs. The full SVD approach is rarely the fastest method (this can happen only for very modest values of N in the considered range).

2. The CPU time of the full assembly of \({\mathbb {S}}-x{\mathbb {L}}\) followed by the svds Matlab command does not depend on p, only on N and n, as expected from Table 2: the construction of \({\mathbb {L}}\) and \({\mathbb {S}}\) costs \({\mathcal {O}}(N^2)\) FLOPs, whereas the computational effort of the partial SVD depends on n, leading to a more demanding procedure for large n. This is usually the fastest approach for (very) modest values of N in the considered range and \(p>1\).

3. The HSS rank of the Cauchy matrix approximation \({\widetilde{{\mathcal {C}}}}\) depends only on the frequency samples, and hence on N: in our scenario the sampling interval is the same, but the distribution of points inside the interval differs for each N. For the same samples, the computed HSS rank of \({\widetilde{{\mathcal {C}}}}\) may still vary slightly due to the randomness induced by the adaptive cross approximation procedure used in constructing \({\widetilde{{\mathcal {C}}}}\) (for instance, for \(N = 50\,000\), \(p = 5\), and \(n = 50\) or \(n = 100\), the rank is 28, while for the remaining values of n and p it is 27). Moreover, the HSS rank increases with N.

4. Our proposed approach is as accurate as the first two approaches, highlighting the fact that the HSS approximation \({\widetilde{{\mathcal {C}}}}\) does not lead to significant losses in the approximation properties of the reduced model (15). Clearly, our approach cannot be more accurate than the traditional Loewner framework, especially when the full SVD is performed.

5. Last, but not least, the CPU time of the proposed solution grows linearly with p and n, and as \(N \log N\) with N (Table 2), making it the fastest method for large values of N. Moreover, no memory constraints arise for N up to \(100\,000\).

Fig. 3 Example 2. Left: Frequency response obtained with the standard and proposed approaches (in black and blue, respectively) versus the measurements (in red) for \(p=1\), \(n=50\), and \(N=10\,000\). Right: Error plots for the models obtained with the standard and proposed approaches (in black and blue, respectively) for the same values of p, n, and N (Color figure online)

In Fig. 4 (left) we plot the computational time of the three approaches for \(p=1\), \(n=50\), and different values of N. Even though these are the same results as those reported in Table 3, Fig. 4 (left) clearly shows the \({\mathcal {O}}(N^3)\) trend of the full SVD scheme versus the \({\mathcal {O}}(N^2)\) trend of the svds scheme and the \({\mathcal {O}}(N)\) behaviour of the proposed approach. In Fig. 4 (right) we depict, on a logarithmic scale, the running time of the proposed procedure for \(n=50\) and different values of N and p, clearly exhibiting a linear dependency on p and an \(N \log N\) dependency on N.

Fig. 4 Example 2. Left: Computational time achieved by the different approaches for \(p=1\), \(n=50\), and different values of N. Right: Computational time achieved by our novel procedure for \(n=50\) and different values of N and p

Example 3

In this example we compare the novel strategy presented in this paper to the fast Loewner SVD scheme illustrated in [18]. We consider the same data set as in Example 2, this time with \(SNR=120\) and a random \(\mathbf{D}\in {\mathbb {R}}^{p \times p}\), \(\mathbf{D}\ne \mathbf{0}\). Since the models resulting from the Loewner framework have \(\mathbf{D}=\mathbf{0}\), a realization of size \(n+p\) is needed to approximate a system with \(\mathbf{D}\ne \mathbf{0}\) [26, 30].

In [18], a Galerkin-ADI method is applied to the Sylvester equation (11) satisfied by the Loewner matrix. At the k-th iteration, a low-rank approximation \(P_k L_k Q_k^*\) to \({\mathbb {L}}\) is computed, with \(P_k,Q_k\in {\mathbb {C}}^{N\times {\bar{k}}}\) and \(L_k\in {\mathbb {C}}^{{\bar{k}}\times {\bar{k}}}\). If \(U_kS_kV_k^*=L_k\) denotes the SVD of \(L_k\), then the matrices \(P_kU_k\) and \(V_k^*Q_k^*\) can be used in place of \(\mathbf{X}_n\) and \(\mathbf{Y}_n\) in (15) to compute the reduced model. The method is stopped whenever the norm of the residual matrix \(\varvec{M} P_k L_k Q_k^*-P_k L_k Q_k^* \varvec{\Lambda }-\mathbf{V}\mathbf{R}+\mathbf{L}\mathbf{W}\), namely the left-hand side of the Sylvester equation with \({\mathbb {L}}\) replaced by its low-rank approximation \(P_k L_k Q_k^*\), drops below a certain threshold \(\varepsilon \). In the results that follow we employ \(\varepsilon =10^{-4}\), as done in [18]. At each iteration step, the SVD of \(L_k\) is truncated so as to keep only the \(n+p\) dominant singular values.
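
In Matlab, the final truncation and projection step just described amounts to the following sketch, where Pk, Qk, and Lk denote the computed Galerkin-ADI factors (the variable names are illustrative):

    [Uk, Sk, Vk] = svd(Lk);       % SVD of the small kbar x kbar core factor
    r  = n + p;                   % retain the n+p dominant singular triplets
    Xn = Pk*Uk(:, 1:r);           % used in place of X_n in (15)
    Yn = Vk(:, 1:r)'*Qk';         % used in place of Y_n in (15)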

We consider the Half&Half partition of the frequencies as this is the most favourable scenario for the scheme from [18]: this partition often leads to rather fast convergence of the Galerkin-ADI method in terms of number of iterations, so that a fairly small approximation space is constructed. If different partitions were used, the Galerkin-ADI method would have to be equipped with a rather involved divide-and-conquer scheme; see [18]. On the other hand, as illustrated in Example 1, the Half&Half partition leads to higher values of the HSS-rank of \({\widetilde{{\mathcal {C}}}}\) than the Odd&Even partition, with a consequent increase in the computational effort of our scheme. In addition, following [18], our tests employ complex arithmetic and do not solve the corresponding Sylvester equation (22) with real-coefficient matrices.

In Table 4 we report the results for \(p=10\), \(n=50\), and different values of N. Notice that even though the Galerkin-ADI approach efficiently computes the approximation spaces, the construction of the reduced model (15) still requires the allocation of both \({\mathbb {L}}\) and \({\mathbb {S}}\). Therefore, the Galerkin-ADI scheme is subject to the same severe memory constraints, and for \(N>30\,000\) we are not able to allocate the complex-valued matrices \({\mathbb {L}}\) and \({\mathbb {S}}\) on the machine used for running the tests.

Table 4 Example 3. Number of iterations, computational time (in seconds) solely of the Galerkin-ADI iteration scheme together with the total time (including building the data, the full Loewner and shifted Loewner matrices, and the projection step), as well as the \({\mathcal {H}}_2\)-error achieved by the Galerkin-ADI approach. For comparison, we list the HSS-rank, the total time (in seconds), and the \({\mathcal {H}}_2\)-error of the novel scheme presented in this paper, for different values of N (number of samples), \(p=10\), and \(n=50\)

Even though the Galerkin-ADI approach is faster for \(N<20\,000\), the computed approximation spaces are quite poor: the resulting reduced models are consistently seven orders of magnitude less accurate than those constructed by our approach. The paper [18] validates the Galerkin-ADI scheme on a system with randomly generated poles for various orders n and numbers of samples N, but does not report the accuracy of the resulting models. Moreover, in terms of CPU time, our results are comparable to those in [18] when considering the computational time solely of the Galerkin-ADI iteration, disregarding the steps of building the full matrices and projecting them to obtain the reduced model.

The remarkable difference in the accuracy attained by the two approaches makes any further computational comparison rather pointless. We point out, however, that the computational time of the Galerkin-ADI approach grows quadratically with N, due to the need to assemble and store the full Loewner and shifted Loewner matrices, while the \(N \log N\) dependency of the computational cost of our novel approach is once again evidenced by the timings reported in Table 4.

Several ideas could be implemented to improve the accuracy of the models obtained with the Galerkin-ADI approach. To keep the comparison with our novel approach as fair as possible, each of these ideas is tested separately below, exploring all the possibilities to enhance the Galerkin-ADI approach from [18].

First, the tolerance \(\varepsilon \) for solving the Sylvester equation via Galerkin-ADI can be set to a value comparable to the noise level for an SNR of 120, namely \(\varepsilon =10^{-12}\). Results are detailed in Table 5 only for the case \(N = 5\,000\), \(p=10\), and \(n=50\), as the trend is already evident from this one example. While the accuracy of the model slightly improves with respect to the results obtained for \(\varepsilon =10^{-4}\), the number of iterations also increases considerably, leading to matrices \(L_k\) of much larger dimensions, for which the SVD \(L_k = U_kS_kV_k^*\) becomes costly. Hence, the CPU cost of the scheme explodes and the approach is no longer viable. In any case, even for a tolerance close to the noise level, the accuracy of the model is several orders of magnitude worse than with our proposed technique (\(10^{-3}\) versus \(10^{-9}\)).

Table 5 Example 3. Number of iterations, computational time (in seconds) solely of the Galerkin-ADI iteration scheme together with the total time (including building the data, the full Loewner and shifted Loewner matrices and the projection step) as well as the \({\mathcal {H}}_2\)-error achieved by the Galerkin-ADI approach for \(N = 5\,000\), \(p=10\), and \(n=50\)

Second, it is advisable to compute the projection subspaces from a linear combination of \({\mathbb {S}}\) and \({\mathbb {L}}\), namely \({\mathbb {S}}-x{\mathbb {L}}\), rather than from \({\mathbb {L}}\) alone, as the Loewner matrix \({\mathbb {L}}\) encodes only the strictly rational part, while the addition of \({\mathbb {S}}\) provides all the information on the system, including its polynomial part (the \(\mathbf{D}\)-term). We apply the low-rank Galerkin-ADI method to the Sylvester equation fulfilled by \({\mathbb {S}}-x{\mathbb {L}}\), thus computing a matrix \(P_kZ_kQ_k^*\) such that \(P_kZ_kQ_k^*\approx {\mathbb {S}}-x{\mathbb {L}}\). Results are detailed in Table 6 for the case \(\varepsilon =10^{-4}\), \(N = 5\,000\), \(p=10\), and \(n=50\). For all instances considered, the results are comparable in terms of CPU time to those obtained when considering solely the Sylvester equation satisfied by \({\mathbb {L}}\) in the Galerkin-ADI iteration (listed in the first line of Table 6 for reference), while in terms of accuracy they are slightly worse. For this example, the sole benefit of using the linear combination \({\mathbb {S}}-x{\mathbb {L}}\) might be its system identification properties since, in principle, a sharp drop in the singular values of \(Z_k\) reveals the degree of the underlying system.
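
For completeness, we recall how the Sylvester equation for \({\mathbb {S}}-x{\mathbb {L}}\) follows from the ones for \({\mathbb {L}}\) and \({\mathbb {S}}\). With the sign convention implied by the residual above, \(\varvec{M}{\mathbb {L}}-{\mathbb {L}}\varvec{\Lambda }=\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\), and with the standard companion equation \(\varvec{M}{\mathbb {S}}-{\mathbb {S}}\varvec{\Lambda }=\varvec{M}\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}\varvec{\Lambda }\) (see, e.g., [30]), a direct computation gives
\[
\varvec{M}({\mathbb {S}}-x{\mathbb {L}})-({\mathbb {S}}-x{\mathbb {L}})\varvec{\Lambda }=(\varvec{M}-x\mathbf{I})\mathbf{V}\mathbf{R}-\mathbf{L}\mathbf{W}(\varvec{\Lambda }-x\mathbf{I}),
\]
so that the right-hand side keeps rank at most 2p and the Galerkin-ADI iteration applies verbatim with these modified low-rank factors.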

Table 6 Example 3. Number of iterations, computational time (in seconds) solely of the Galerkin-ADI iteration scheme together with the total time (including building the data, the full Loewner and shifted Loewner matrices and the projection step) as well as the \({\mathcal {H}}_2\)-error achieved by the Galerkin-ADI approach on the Sylvester equations satisfied by \({\mathbb {L}}\), \({\mathbb {S}}\) and \({\mathbb {S}}-x{\mathbb {L}}\), for \(\varepsilon =10^{-4}\), \(N = 5\,000\), \(p=10\), and \(n=50\)

The third avenue worth exploring is employing real arithmetic and the corresponding Sylvester equations (22) and (23). Table 7 shows the results obtained using real arithmetic, both for the Galerkin-ADI scheme and for our proposed method. For reference, the first line in Table 7 lists the results previously obtained in complex arithmetic. For the method in [18], the cost of the scheme mostly increases, due to the more complicated Sylvester equations (22) and (23); the CPU cost of building the data matrices and the full Loewner and shifted Loewner matrices also increases, yielding a total cost far above that obtained in complex arithmetic. In some instances, the accuracy improves slightly. On the other hand, real arithmetic causes the HSS-rank of the Cauchy matrix approximation to be much smaller, with a remarkable impact on the CPU time and almost no effect on the model accuracy when our novel approach is used.

Table 7 Example 3. Number of iterations, computational time (in seconds) solely of the Galerkin-ADI iteration scheme together with the total time (including building the data, the full Loewner and shifted Loewner matrices, and the projection step), as well as the \({\mathcal {H}}_2\)-error achieved by the Galerkin-ADI approach. For comparison, we list the HSS-rank, the total time (in seconds), and the \({\mathcal {H}}_2\)-error of the novel scheme presented in this paper for different values of N (number of samples), \(p=10\), and \(n=50\) when employing real arithmetic

We conclude this example by mentioning that a hybrid approach may be fruitful. In particular, our novel approach can be employed to avoid storing the large and dense Loewner and shifted Loewner matrices, while the Galerkin-ADI scheme, instead of svds, can be used to compute the first dominant singular vectors of \({\mathbb {S}}-x{\mathbb {L}}\), thus retaining the ability to identify the order of the underlying system. The accuracy, however, will not be comparable to that of our proposed approach. We implemented this idea and list in Table 8 the CPU times of the various steps, together with the resulting accuracy, for Galerkin-ADI applied to the Sylvester equation (22) for \({\mathbb {L}}\) in real arithmetic, with \(\varepsilon =10^{-4}\), \(N = 5\,000\), \(p=10\), and \(n=50\). Plots of the responses of our proposed approach, of the Galerkin-ADI scheme as proposed in [18], and of the hybrid approach are shown in Fig. 5. Even though the general shape of the response is well captured, some resonances are not modeled accurately, as expected from the much higher model errors reported earlier. This can be better appreciated in the error plots of Fig. 6.

Table 8 Example 3. Computational time (in seconds) of the three individual steps in the hybrid approach (setting up the data matrices, the Galerkin-ADI iteration scheme, and the projection to obtain the reduced model), together with the total time as well as the \({\mathcal {H}}_2\)-error for \(N = 5\,000\), \(p=10\), and \(n=50\) in real arithmetic

Fig. 5 Example 3. Frequency response of the model (in black) and the measurements (in red) for \(N = 5\,000\), \(p=10\), and \(n=50\) using our proposed approach, Galerkin-ADI as in [18], and the hybrid approach, employing real arithmetic

Fig. 6 Example 3. Error plots for \(N = 5\,000\), \(p=10\), and \(n=50\) using our proposed approach, Galerkin-ADI as in [18], and the hybrid approach, employing real arithmetic

5 Conclusion

By exploiting the Cauchy-like structure of the Loewner and shifted Loewner matrices, a novel strategy for reducing the computational costs and the memory requirements of the Loewner framework has been proposed. In particular, the use of the HSS format leads to tremendous savings in the storage demand and the computational effort of the overall scheme. Indeed, except for the construction of \({\widetilde{{\mathcal {C}}}}\), whose cost is polylogarithmic in N, both the memory requirements and the computational cost of iteratively performing the SVD now depend linearly on the cardinality of the considered data set.

The success of our procedure strongly relies on the capability of representing the Cauchy matrix \({\mathcal {C}}\) as an HSS-matrix \({\widetilde{{\mathcal {C}}}}\) with low \(\left( \alpha ,\beta \right) \)-rank of the off-diagonal blocks. Even though we restricted ourselves to showing how different, but common, partitions of the frequencies affect the HSS-rank of \({\widetilde{{\mathcal {C}}}}\), a thorough analysis of this connection may be beneficial. Such an interesting, but delicate, study would need to take into account several diverse aspects, like the compressibility of the matrix \({\mathcal {C}}\) in the HSS format, the conditioning of \({\mathbb {L}}\) and \({\mathbb {S}}\), and the approximation properties of the underlying partition of the frequencies.

We have always computed \({\widetilde{{\mathcal {C}}}}\) at high accuracy; results very similar to those reported in the previous sections are obtained also with a low-rank truncation threshold of \(10^{-12}\). However, we believe that employing less accurate, and thus lower-rank, HSS representations of \({\mathcal {C}}\), and studying their effect on the accuracy of the overall scheme, may be another interesting research direction worth pursuing, depending on the application at hand.
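
As an indication of where this threshold enters in practice, the compression accuracy in the hm-toolbox is controlled globally, e.g. as in the following sketch; the dense constructor shown below is only illustrative, as our experiments rely on the toolbox's entry-wise, adaptive-cross-approximation-based construction.

    hssoption('threshold', 1e-12);  % low-rank truncation threshold for HSS compression
    Ct = hss(C);                    % compress a (Cauchy) matrix into HSS format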

The strategy presented in this paper can be applied to more sophisticated problems as long as the Loewner and shifted Loewner matrices maintain a Cauchy-like structure. In particular, our approach can be employed with minor modifications in model order reduction of parametrized [21], linear switched [16], and bilinear systems [1].