Structured barycentric forms for interpolation-based data-driven reduced modeling of second-order systems

An essential tool in data-driven modeling of dynamical systems from frequency response measurements is the barycentric form of the underlying rational transfer function. In this work, we propose structured barycentric forms for modeling dynamical systems with second-order time derivatives using their frequency domain input-output data. By imposing a set of interpolation conditions, the systems' transfer functions are rewritten in different barycentric forms using different parametrizations. Loewner-like algorithms are developed for the explicit computation of second-order systems from data based on these barycentric forms. Numerical experiments show the performance of the new structured data-driven modeling methods compared to other interpolation-based data-driven modeling techniques from the literature.


Introduction
Data-driven reduced-order modeling, i.e., the construction of models describing the underlying dynamics of unknown systems from measurements, has become an increasingly prominent discipline. It is an essential tool in situations when explicit models in the form of state-space formulations are not available, yet abundant input/output data are, motivating the need for data-driven modeling. Depending on the underlying physics, dynamical systems can inherit differential structures leading to specific physical interpretations. In this work, we concentrate on systems that are described by differential equations with second-order time derivatives of the form

M ẍ(t) + D ẋ(t) + K x(t) = b u(t),   y(t) = cᵀ x(t),    (1)

with M, D, K ∈ R^{n×n} and b, c ∈ R^n. Systems like (1) typically appear in the modeling of mechanical, electrical, and related structures [1,11,21,23]. In the frequency domain (also known as the Laplace domain), the input-to-output behavior of (1) is equivalently given by the corresponding transfer function

H(s) = cᵀ(s²M + sD + K)⁻¹b,    (2)

which is a degree-2n rational function in s, where n is the state-space dimension of (1).
In recent years, several methods have been developed for learning reduced-order state-space representations of dynamical systems from given data. However, most of these approaches consider the classical, unstructured case of first-order systems of the form

E ẋ(t) = A x(t) + b u(t),   y(t) = cᵀ x(t),    (3)

where E, A ∈ R^{n×n} and b, c ∈ R^n, with the transfer function

H(s) = cᵀ(sE − A)⁻¹b.    (4)

Examples of such methods are the subspace identification framework [18][19][20], dynamic mode decomposition [35,40], operator inference [27,29,33], the Loewner framework [22,28], rational least-squares methods such as vector fitting [14,17] or RKFIT [8], and transfer-function-based H2-optimal model reduction [7]. See also [12] for a more general introduction to this topic. The importance of preserving internal system structures in the computation of reduced-order approximations of dynamical systems for the case of second-order systems (1) has been observed in [34,42]: it allows, in particular, the reinterpretation of system quantities and the preservation of structure-inherent properties, and provides cheap-to-evaluate models with high accuracy. However, only a few data-driven approaches have been recently extended to (1), such as the Loewner framework [30,36] and operator inference [15,37].
In this work, we concentrate on the case in which input-output measurements are available in the frequency domain, i.e., evaluations of the system's transfer function (2). For this type of data, the goal in data-driven reduced-order modeling is the construction of low-order rational functions H(s) that approximate the given data well in an appropriate measure. These rational functions can be interpreted as transfer functions corresponding to dynamical systems. Typically, it is not possible to extract additional differential structures from general rational functions. For example, even though one can always convert the structured transfer function in (2) to an unstructured rational function in (4), the reverse direction is not guaranteed. Most methods for learning transfer functions from frequency domain data have been developed for the unstructured case (3). In particular, such methods include the barycentric Loewner framework [2], the vector fitting algorithm [14,17], and the AAA algorithm [24]. The backbone of these methods is the barycentric form of rational functions, which allows for computationally efficient constructions of rational interpolants and least-squares fits [9]. Enforcing structures in the barycentric form allows the design of structured data-driven modeling algorithms. In [44], this idea led to the extension of the vector fitting algorithm towards mechanical systems with modal damping structure.
In this paper, we develop new structured barycentric forms associated with the transfer functions of second-order systems (2). By enforcing interpolation conditions, we show that the system matrices in (1) satisfy certain equality constraints. Using different parametrizations of the matrices in (1), we derive corresponding structured barycentric forms that allow an easy construction of the system matrices in (1) and enforce interpolation by construction. We use the free parameters in the barycentric forms that remain unbound in the derivation to design several Loewner-like algorithms, allowing the direct construction of second-order systems with interpolating transfer functions from given frequency domain data. We also present several strategies for choosing the free parameters in the structured barycentric forms to enforce additional properties in the constructed system matrices, such as positive definiteness. Numerical examples are used to verify the developed theory and the algorithms based on these barycentric forms.
The rest of the paper is organized as follows: In Section 2, we include mathematical preliminaries needed for the theoretical derivations in this paper and briefly review the theory of the barycentric form for unstructured systems (3). Then, we develop the structured barycentric forms in Section 3, followed by computational algorithms in Section 4 for the explicit construction of second-order systems (1) from frequency data. Section 5 illustrates the effectiveness of the presented methods on several numerical examples, including the vibrational responses of an underwater drone and of bone tissue. The paper is concluded in Section 6.

Mathematical preliminaries and first-order systems analysis
For our derivation of the (structured) barycentric forms, the Sherman-Morrison-Woodbury formula for matrix inversion plays an essential role. Given an invertible matrix X ∈ C^{r×r} and two vectors u, v ∈ C^r such that X + uvᵀ is also invertible, the Sherman-Morrison-Woodbury formula yields

(X + uvᵀ)⁻¹ = X⁻¹ − (X⁻¹ u vᵀ X⁻¹)/(1 + vᵀ X⁻¹ u);    (5)

see, for example, [16]. In this work, we focus on transfer functions as in (2) and (4), where the inverse in the middle is pre- and post-multiplied by two vectors. Thus, we consider the following adaptation of (5).
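The rank-one update identity (5) and its vector variant are straightforward to check numerically. The following is a minimal Python/NumPy sketch with randomly generated data (all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 5
X = rng.standard_normal((r, r)) + r * np.eye(r)   # shifted to be safely invertible
u = rng.standard_normal((r, 1))
v = rng.standard_normal((r, 1))

Xinv = np.linalg.inv(X)
# (X + u v^T)^{-1} = X^{-1} - (X^{-1} u v^T X^{-1}) / (1 + v^T X^{-1} u)
denom = 1.0 + (v.T @ Xinv @ u).item()
lhs = np.linalg.inv(X + u @ v.T)
rhs = Xinv - (Xinv @ u @ v.T @ Xinv) / denom
assert np.allclose(lhs, rhs)

# the vector variant used below: z^T (X + u v^T)^{-1} u
z = rng.standard_normal((r, 1))
left = (z.T @ lhs @ u).item()
right = (z.T @ Xinv @ u).item() / denom
assert abs(left - right) < 1e-10
```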
Proposition 1. Let X ∈ C^{r×r} be an invertible matrix and let u, v ∈ C^r be column vectors such that X + uvᵀ is also invertible. Then, for any z ∈ C^r it holds

zᵀ(X + uvᵀ)⁻¹u = (zᵀ X⁻¹ u)/(1 + vᵀ X⁻¹ u).    (6)

Let H(s) denote the transfer function of an unknown dynamical system. We assume that we have access to evaluations of the transfer function at distinct frequency points λ1, …, λr ∈ C such that H(λk) = hk, for k = 1, …, r. We denote the complete data set of frequency points and transfer function values by

{(λi, hi) | 1 ≤ i ≤ r}.    (7)

Next, we consider the parametrization of first-order (unstructured) dynamical systems of the form (3) with the transfer function H(s) = cᵀ(sE − A)⁻¹b that interpolates the given data (7). In the following, the first-order dynamical system (3) of order r is denoted by ΣFO : (E, A, b, c). A slightly different proof of the next result can be found in [3] for the case of multi-input/multi-output dynamical systems. For thoroughness, we include a proof here, as it will be the starting point for the structured variants considered later on.
Lemma 1. Given the data (7), define

Λ = diag(λ1, …, λr) ∈ C^{r×r}  and  c = [h1 … hr]ᵀ ∈ C^r,

and let ΣFO : (E, A, b, c) be such that the constraint

ΛE − A = b 1ᵀr    (8)

holds, where b = [w1 … wr]ᵀ ∈ C^r contains free parameters with wk ≠ 0, for k = 1, …, r, and the matrix E is invertible. Then the transfer function

H(s) = cᵀ(sE − A)⁻¹b    (9)

of ΣFO interpolates the data in (7), i.e., it holds

H(λi) = hi, for i = 1, …, r.    (10)

Proof. Without loss of generality, we show the proof tailored specifically to the case E = Ir. This scenario is by no means restrictive, since the matrix E is considered to be invertible and can thus be incorporated into the matrices A and b accordingly. Let ei denote the i-th unit vector of length r. By multiplying the constraint in (8) with ei from the right, one obtains (Λ − A)ei = b 1ᵀr ei and, therefore,

(λi Ir − A) ei = b.    (11)

Note that since the entries of b are nonzero, λi is not an eigenvalue of A. We prove this claim by contradiction. Let the entries of b be nonzero and assume that λi is an eigenvalue of A with the corresponding left eigenvector v. Thus, it holds that vᵀA = λi vᵀ and, therefore, vᵀ(Λ − λi Ir) = vᵀ(Λ − A) = (vᵀb) 1ᵀr. Since the i-th entry of the row vector vᵀ(Λ − λi Ir) is zero, we consequently have vᵀb = 0, and thus vᵀ(Λ − λi Ir) = 0ᵀ. Writing v = [α1 … αr]ᵀ, the k-th entry of this row vector reads αk(λk − λi) = 0. Since the λk's are assumed to be distinct, this implies α1 = … = αi−1 = αi+1 = … = αr = 0. Moreover, using vᵀb = 0, one obtains αi wi = 0. But recall that wi ≠ 0; thus αi = 0 and, in summary, v = 0. However, v is an eigenvector, which leads to the contradiction. Therefore, λi is not an eigenvalue of A. Since λi Ir − A is invertible, (11) yields ei = (λi Ir − A)⁻¹b. Then, by multiplying this final relation with cᵀ from the left, it holds cᵀ(λi Ir − A)⁻¹b = cᵀei = hi, which proves the interpolation conditions (10).
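The construction of Lemma 1 can be verified numerically for the case E = Ir, where the constraint reads Λ − A = b 1ᵀr. The sketch below builds A from randomly chosen illustrative data and evaluates the transfer function at the interpolation points:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 4
lam = rng.standard_normal(r) + 1j * rng.standard_normal(r)  # distinct points lambda_i
h = rng.standard_normal(r) + 1j * rng.standard_normal(r)    # measurements h_i
w = 1.0 + rng.random(r)                                     # nonzero weights w_i

Lam = np.diag(lam)
b = w.astype(complex)
c = h
A = Lam - np.outer(b, np.ones(r))     # enforces Lambda - A = b 1^T  (E = I)
H = lambda s: c @ np.linalg.solve(s * np.eye(r) - A, b)

# transfer function matches the data at every interpolation point
assert all(abs(H(lam[i]) - h[i]) < 1e-8 for i in range(r))
```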
In this work, we do not consider the case of systems ΣFO with differential-algebraic equations (descriptor systems), for which the matrix E is allowed to be singular. Such endeavors are kept for future research. Hence, in what follows, we consider the matrix E to be invertible. For simplicity of exposition, we choose without loss of generality the matrix E to be the r × r identity matrix Ir, since any system ΣFO with E invertible can be equivalently written as (Ir, E⁻¹A, E⁻¹b, c). In this representation, one can observe that in the construction of ΣFO in (8), r parameters in b remain free to be chosen. They can, for example, be used to match r further interpolation conditions in addition to (10).
Using E = Ir, Equation (8) now reads Λ − A = b 1ᵀr. Thus, substituting A = Λ − b 1ᵀr into (9), the transfer function of ΣFO can be rewritten as

H(s) = cᵀ(s Ir − Λ + b 1ᵀr)⁻¹b.    (12)

Define Φ(s) = s Ir − Λ to be the diagonal matrix function depending on the frequency parameter s ∈ C. Then, the transfer function in (12) can be formulated as

H(s) = cᵀ(Φ(s) + b 1ᵀr)⁻¹b.    (13)

The form of the transfer function H(s) in terms of Φ(s) as given by (13) will play a crucial role in later sections to extend the interpolation theory to the structured case.
The following result, which recovers the classical barycentric form of rational interpolants, follows from applying Lemma 1 to (13).
Corollary 1. Given the interpolation data (7), the transfer function (13) of the first-order model that yields the interpolation conditions (10) can be equivalently expressed as

H(s) = (cᵀ Φ(s)⁻¹ b)/(1 + 1ᵀr Φ(s)⁻¹ b),    (14)

where Φ(s) = s Ir − Λ, and Λ, c, and b are defined as in Lemma 1. This formula can be further represented in the barycentric rational interpolation form

H(s) = ( Σ_{k=1}^{r} (hk wk)/(s − λk) ) / ( 1 + Σ_{k=1}^{r} wk/(s − λk) ).    (15)

Proof. Applying the identity (6) in Proposition 1 to (13), with X = Φ(s), u = b, v = 1r, and z = c, yields (14). The fact that Φ(s) is diagonal and the definitions of b and c directly lead to

cᵀ Φ(s)⁻¹ b = Σ_{k=1}^{r} (hk wk)/(s − λk)  and  1ᵀr Φ(s)⁻¹ b = Σ_{k=1}^{r} wk/(s − λk),

which together result in the barycentric form (15).
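The equivalence between the barycentric form (15) and the underlying state-space realization can be illustrated with a short numerical check, assuming the realization A = Λ − b 1ᵀr from Lemma 1 (all data randomly chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
r = 4
lam = rng.standard_normal(r)      # interpolation points lambda_k
h = rng.standard_normal(r)        # data values h_k
w = 1.0 + rng.random(r)           # nonzero barycentric weights w_k

def H_bary(s):
    # (sum h_k w_k / (s - lam_k)) / (1 + sum w_k / (s - lam_k))
    t = w / (s - lam)
    return np.sum(h * t) / (1.0 + np.sum(t))

# equivalent state-space evaluation with A = Lambda - b 1^T  (E = I)
A = np.diag(lam) - np.outer(w, np.ones(r))
H_ss = lambda s: h @ np.linalg.solve(s * np.eye(r) - A, w)

s0 = 1.7 + 0.5j   # evaluation point away from the poles
assert abs(H_bary(s0) - H_ss(s0)) < 1e-8
```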
As stated earlier, the expression (15) is known as the barycentric form of the rational interpolant and is a well-studied object [9], as it forms the foundation for many rational approximation techniques [2,24]. The derivation of the barycentric form in Corollary 1 follows a perspective from systems and control theory that aligns well with the second-order dynamics we study next.
The additional value of one in the denominator of (14) appears as a result of the Sherman-Morrison-Woodbury formula. Typically, in the classical theory of barycentric interpolation, such a term does not appear in the denominator of the barycentric formula; see in particular [9,24]. The rational function represented in (14) is strictly proper since the degree of the denominator is greater than that of the numerator, which aligns with the setting of the corresponding LTI systems. Although this may seem restrictive, the proposed approach can also accommodate proper rational functions corresponding to LTI systems with a nonzero feed-through term in the state-output equation.

Structured barycentric forms
As in the previous section, we assume that transfer function measurements of the form (7) are given and aim to construct models that fit the given data. However, in contrast to Section 2, we aim, from now on, to construct structured models of the form (1), denoted by ΣSO : (M, D, K, b, c), with the model matrices M, D, K ∈ R^{r×r} and b, c ∈ R^r. Before we present the main results of this work, we introduce the following two sets of assumptions that will be needed later on.

Assumption 1. For the model matrices ΣSO : (M, D, K, b, c) and the given data in (7), we assume that
(A1.1) the mass matrix M is invertible, and
(A1.2) the interpolation points λ1, …, λr are pairwise distinct.

The reasons for imposing Assumptions (A1.1) and (A1.2) are similar to those in the case of first-order systems from the previous section. More specifically, Assumption (A1.1) enforces the system (1) to be described by ordinary differential equations rather than differential-algebraic ones, which would require a singular M. The modeling of such descriptor systems is left for future research. As in the first-order case, Assumption (A1.2) is necessary to avoid inconsistencies in the interpolation conditions. The case in which repeated interpolation points and derivative data are used for Hermite interpolation will be considered in a separate work.
Assumption 2. For the model matrices ΣSO : (M, D, K, b, c) and the given data in (7), we assume for i, k = 1, …, r and i ≠ k that either
(A2.1) −(λk + λi) is not an eigenvalue of the matrix M⁻¹D, or
(A2.2) λk λi is not an eigenvalue of the matrix M⁻¹K.

In contrast to Assumptions (A1.1) and (A1.2), which need to hold both at the same time, only one of Assumptions (A2.1) and (A2.2) will be imposed at once, since each of these two assumptions corresponds to one of the structured barycentric forms derived below, as will become clearer later on. Although Assumptions (A2.1) and (A2.2) may seem restrictive at first glance, we will show that they occur naturally for practical choices of the parameters in the new structured barycentric forms. A more detailed discussion of this topic is provided in Section 4.3.

Interpolatory second-order transfer functions
The following result extends Lemma 1 to second-order systems, establishing sufficient conditions for the interpolation of the given transfer function data (7).
Lemma 2. Given the interpolation data (7), define

Λ = diag(λ1, …, λr) ∈ C^{r×r}  and  c = [h1 … hr]ᵀ ∈ C^r,

and let ΣSO : (M, D, K, b, c) be such that the constraint

MΛ² + DΛ + K = b 1ᵀr    (16)

holds, where b = [w1 … wr]ᵀ ∈ C^r contains free parameters with wk ≠ 0 for k = 1, …, r. If Assumptions (A1.1) and (A1.2) as well as either Assumption (A2.1) or (A2.2) hold, then the transfer function of ΣSO interpolates the data in (7), i.e., it holds

H(λi) = hi, for i = 1, …, r.    (17)

Proof. Without loss of generality, we show the proof tailored specifically to the case M = Ir. This scenario is by no means restrictive, since the M matrix is considered to be invertible (see Assumption (A1.1)) and can thus be incorporated into D, K, and b accordingly. For the consideration of eigenvalues of second-order systems, we introduce the augmented matrices

E = [ Ir  0 ; 0  Ir ]  and  A = [ 0  Ir ; −K  −D ].    (18)

Since the entries of b are nonzero and with Assumption (A2.1) that −(λk + λi) is not an eigenvalue of the matrix D, the interpolation point λi is not a solution to the linearized eigenvalue problem of the matrix pencil (A, E) in (18), i.e., it is not an eigenvalue of the quadratic pencil involving M, D, and K. We prove this claim by contradiction. Let the entries of b be nonzero and assume that λi is an eigenvalue of A with the corresponding left eigenvector v = [v1ᵀ v2ᵀ]ᵀ. Employing the block matrix structure from (18) yields the quadratic eigenvalue relation

v2ᵀ(λi² Ir + λi D + K) = 0.    (19)

By multiplying the constraint in (16) with ei from the right, we obtain, for 1 ≤ i ≤ r,

(λi² Ir + λi D + K) ei = b.    (20)

Then, multiplication of this last equation (20) with v2ᵀ from the left yields

v2ᵀ(λi² Ir + λi D + K) ei = v2ᵀ b.    (21)

It follows directly from (19) and (21) that v2ᵀb = 0. Let the eigenvector component be given as v2 = [α1 … αr]ᵀ. Without loss of generality, assume that D is a diagonal matrix with D = diag(δ1, …, δr). Now we use (16) to describe the stiffness matrix K in terms of the rest such that K = b 1ᵀr − DΛ − Λ² and substitute this relation into (19) to obtain, in the k-th entry,

αk (λi − λk)(λi + λk + δk) = 0.

Since the λk's are distinct (Assumption (A1.2)) and λi + λk + δk ≠ 0 by Assumption (A2.1), it follows that αk = 0 for all k ≠ i. Moreover, using v2ᵀb = 0, one obtains αi wi = 0. Since wi ≠ 0, this would imply αi = 0, yielding v2 = 0. However, v2 is an eigenvector, thus leading to the contradiction. Therefore, λi is not a solution to the quadratic eigenvalue problem.
As a result, the matrix λi² Ir + λi D + K is nonsingular, and by multiplying (20) with (λi² Ir + λi D + K)⁻¹ from the left, we get ei = (λi² Ir + λi D + K)⁻¹ b. Multiplying this relation with cᵀ from the left yields cᵀ(λi² Ir + λi D + K)⁻¹ b = cᵀ ei = hi. Hence, we have shown that H(λi) = hi for any 1 ≤ i ≤ r. Note that we only used Assumption (A2.1) out of Assumption 2 in this proof, which allowed the description of the stiffness matrix K by the other terms in (16). An analogous proof relies on Assumption (A2.2), which allows the reformulation of (16) for the damping matrix D. Due to the similarity to the presented proof, we omit this part.
Similar to the case of first-order systems in Section 2, we can eliminate one of the unknown matrices in the constraint (16) by using Assumption (A1.1). Thereby, we choose the mass matrix to be the r-dimensional identity matrix, M = Ir. However, in contrast to the case of first-order systems, this leaves us with three remaining unknown matrices in (16) instead of two. Following diagonalization assumptions, which we will point out later in detail, this leaves us with 2r free parameters to choose for the explicit realization of interpolating second-order systems.
Remark 1. Aside from this work, a data-driven method for the derivation of structured models with interpolating transfer functions has been developed in [36]. Therein, the authors use constraints similar to (16) to parametrize the model matrices as the solution of large-scale linear systems of equations to enforce the interpolation conditions. The unknowns in these linear systems correspond to the entries of the vectorized state-space quantities. There is no discussion of or connection to structured barycentric forms in [36] (which represents the main novelty of the current work), as [36] is directly related to and based on the non-barycentric Loewner framework for interpolation [22] and the projection-based interpolatory model reduction of structured systems [6]. However, it will be interesting to revisit [36] in future work, since it might provide directions for extending the barycentric form to structures different from those we consider here.
In the following derivation of structured barycentric forms, we will make use of the equality constraint in (16) that enforces r interpolation conditions. As mentioned above, the remaining 2r free parameters are given in the input vector b and either in the stiffness matrix K or the damping matrix D. Therefore, different barycentric forms result from the reformulation of (16) in terms of either the stiffness matrix (in Section 3.2) or the damping matrix (in Section 3.3).

Parametrization with constrained stiffness matrix

General setup
In this section, we incorporate the remaining free parameters of the system ΣSO into the damping matrix D and the input vector b. Therefore, from (16) it follows that the stiffness matrix satisfies

K = b 1ᵀr − DΛ − MΛ².    (22)

Substituting (22) into the transfer function H(s) corresponding to the second-order model ΣSO : (M, D, K, b, c) yields

H(s) = cᵀ(Φ(s) + b 1ᵀr)⁻¹ b,    (23)

where the matrix-valued function Φ(s) is given by

Φ(s) = (M(s Ir + Λ) + D)(s Ir − Λ).

The following lemma states the structured barycentric form of (23) in terms of the input and output vectors and the matrix-valued function Φ(s).

Lemma 3. Given the interpolation data (7), the transfer function (23) of the second-order model that yields the interpolation conditions (17) can be equivalently expressed as

H(s) = (cᵀ Φ(s)⁻¹ b)/(1 + 1ᵀr Φ(s)⁻¹ b),

where Φ(s) = (M(s Ir + Λ) + D)(s Ir − Λ), and Λ, c, and b are as defined in Lemma 1.
Proof. Using Proposition 1 for the formulation of H(s) in (23), with the choice of vectors and matrices X = Φ(s), u = b, v = 1r, and z = c, yields the result of the lemma.
As mentioned in Section 3.1, we choose the mass matrix M to be the identity due to Assumption (A1.1). Under the assumption that D has no higher-order Jordan blocks, it can be diagonalized while preserving M = Ir, such that D = diag(δ1, …, δr) and M = Ir = diag(1, …, 1).
In this case, the matrix-valued function Φ(s) is a diagonal matrix for all s ∈ C, which allows us to write

Φ(s) = diag((s + λ1 + δ1)(s − λ1), …, (s + λr + δr)(s − λr)).    (24)

Using this diagonal form of the matrix-valued function in Lemma 3 yields the following result: a structured barycentric formula for second-order transfer functions.
Theorem 1. Given interpolation points and measurements {(λi, hi) | 1 ≤ i ≤ r}, let Assumptions (A1.1) and (A1.2) as well as Assumption (A2.1) hold. The barycentric form of the transfer function H(s) corresponding to the second-order system ΣSO : (M, D, K, b, c) is given by

H(s) = ( Σ_{i=1}^{r} (hi wi)/((s − λi)(s − σi)) ) / ( 1 + Σ_{i=1}^{r} wi/((s − λi)(s − σi)) ),    (25)

with the weights 0 ≠ wi ∈ C and support points σi = −(δi + λi), where δi ∈ C are damping parameters, for 1 ≤ i ≤ r. The barycentric form (25) satisfies the interpolation conditions (17). The matrices of the corresponding second-order system are given by

M = Ir,  D = diag(δ1, …, δr),  K = b 1ᵀr − DΛ − Λ²,  b = [w1 … wr]ᵀ,  c = [h1 … hr]ᵀ.    (26)

Proof. By making use of the diagonal structure of Φ(s) in (24) and the other components in the formulation of the transfer function in Lemma 3, the transfer function is rewritten in barycentric form by multiplying out the matrix-vector products as

H(s) = ( Σ_{i=1}^{r} (hi wi)/((s − λi)(s + λi + δi)) ) / ( 1 + Σ_{i=1}^{r} wi/((s − λi)(s + λi + δi)) ).

Setting the support points σi = −(δi + λi), the result in (25) follows directly. The realization (26) is then given by rearranging the different parameters into the corresponding matrices and vectors.
Note that, given the notation Σ = diag(σ1, …, σr), the realization in (26) can equivalently be written as

M = Ir,  D = −(Λ + Σ),  K = ΣΛ + b 1ᵀr,  b = [w1 … wr]ᵀ,  c = [h1 … hr]ᵀ.

The free parameters that explicitly appear above are 2r in total and are given by the entries of the vector b and of the diagonal matrix Σ, i.e., the free parameters in the structured barycentric form (25) are {w1, …, wr} ∪ {σ1, …, σr}.
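The statement of Theorem 1 admits a short numerical check. The sketch below assumes the realization M = Ir, D = −(Λ + Σ), K = ΣΛ + b 1ᵀr, which is one consistent way of satisfying the constraint (16) with M = Ir, and verifies both the agreement with the barycentric evaluation and the interpolation at the λi (all data randomly chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
r = 3
lam = rng.standard_normal(r) + 1j * rng.standard_normal(r)  # interpolation points
h = rng.standard_normal(r) + 1j * rng.standard_normal(r)    # data values h_i
w = 1.0 + rng.random(r)                                     # nonzero weights w_i
sig = -1.0 - rng.random(r)                                  # chosen support points sigma_i

Lam, Sig = np.diag(lam), np.diag(sig).astype(complex)
b, c = w.astype(complex), h
M = np.eye(r, dtype=complex)
D = -(Lam + Sig)
K = Sig @ Lam + np.outer(b, np.ones(r))
H = lambda s: c @ np.linalg.solve(s**2 * M + s * D + K, b)

def H_bary(s):  # structured barycentric form with factors (s - lam_i)(s - sig_i)
    t = w / ((s - lam) * (s - sig))
    return np.sum(h * t) / (1.0 + np.sum(t))

s0 = 0.4 + 0.8j
assert abs(H(s0) - H_bary(s0)) < 1e-8
assert all(abs(H(lam[i]) - h[i]) < 1e-8 for i in range(r))
```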

Systems with zero damping matrix
An important subclass of second-order systems (1) is given by a zero damping matrix, i.e., D = 0. These occur, for example, in the case of "conservative" dynamics where no dissipation/damping is considered. Hamiltonian systems belong to this category [1,23].
Retaining this additional structure allows, for example, modeling the preservation of energy in the system. Another problem class that can be modeled by a zero damping matrix is the case of hysteretic damping, i.e., constant damping over the complete frequency range [5,13]. This is used, for example, to model the general influence of physical structures on the damping behavior of systems. Thereby, the damping matrix is considered to be frequency dependent with D(s) = (1/s) iηK. Inserting this damping definition into the second-order transfer function (2) yields

H(s) = cᵀ(s²M + (1 + iη)K)⁻¹ b,

which can be seen as a system with a complex stiffness matrix and zero damping matrix.
The following corollary refines the result of Theorem 1 to the case of systems with zero damping matrix, D = 0.
Corollary 2. Given interpolation points and measurements {(λi, hi) | 1 ≤ i ≤ r}, let Assumptions (A1.1) and (A1.2) as well as Assumption (A2.1) hold. The barycentric form of the transfer function H(s) corresponding to the second-order system ΣSO : (M, 0, K, b, c) is given by

H(s) = ( Σ_{i=1}^{r} (hi wi)/(s² − λi²) ) / ( 1 + Σ_{i=1}^{r} wi/(s² − λi²) ),    (27)

with the weights 0 ≠ wi ∈ C, for 1 ≤ i ≤ r. The barycentric form (27) satisfies the interpolation conditions (17), and it can be written as a second-order dynamical system with zero damping, i.e.,

H(s) = cᵀ(s²M + K)⁻¹ b,

where

M = Ir,  K = b 1ᵀr − Λ²,  b = [w1 … wr]ᵀ,  c = [h1 … hr]ᵀ.

As in Theorem 1, Assumption (A2.1) needs to hold for (27) to satisfy the interpolation conditions (17). However, in the special case of D = 0, Assumption (A2.1) simplifies to λi ≠ −λk, for all i ≠ k. In particular, if the interpolation points are chosen on the imaginary axis, no complex conjugate pairs are allowed in the set of interpolation points.
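For the zero-damping case, the realization M = Ir, D = 0, K = b 1ᵀr − Λ² satisfies the constraint (16), and the barycentric form (27) can be checked against it directly (a minimal sketch with randomly chosen illustrative data):

```python
import numpy as np

rng = np.random.default_rng(4)
r = 3
lam = rng.standard_normal(r) + 1j * rng.standard_normal(r)  # interpolation points
h = rng.standard_normal(r) + 1j * rng.standard_normal(r)    # data values
w = 1.0 + rng.random(r)                                     # nonzero weights

K = np.outer(w, np.ones(r)) - np.diag(lam) ** 2   # K = b 1^T - Lambda^2  (M = I, D = 0)
H = lambda s: h @ np.linalg.solve(s**2 * np.eye(r) + K, w.astype(complex))

def H_bary(s):  # zero-damping barycentric form with factors s^2 - lam_i^2
    t = w / (s**2 - lam**2)
    return np.sum(h * t) / (1.0 + np.sum(t))

s0 = 1.3 + 0.5j
assert abs(H(s0) - H_bary(s0)) < 1e-8
assert all(abs(H(lam[i]) - h[i]) < 1e-8 for i in range(r))
```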

Parametrization with constrained damping matrix
In this section, we incorporate the remaining free parameters of the system ΣSO into the stiffness matrix K and the input vector b. Therefore, it follows from (16) that the damping matrix satisfies

DΛ = b 1ᵀr − K − MΛ².    (28)

Under the assumption that Λ is invertible, i.e., zero is not an interpolation point, one can equivalently write (28) as

D = b 1ᵀr Λ⁻¹ − KΛ⁻¹ − MΛ.    (29)

By substituting (29) into the formula of the transfer function H(s) corresponding to the second-order model ΣSO : (M, D, K, b, c), it holds

H(s) = cᵀ(Ψ(s) + s b 1ᵀr Λ⁻¹)⁻¹ b,    (30)

where the matrix-valued function Ψ(s) is given by

Ψ(s) = (sM − KΛ⁻¹)(s Ir − Λ).

Similar to Lemma 3, the following lemma states the structured barycentric form of (30) in terms of the input and output vectors and the matrix-valued function Ψ(s).

Lemma 4. Given the interpolation data (7), the transfer function (30) of the second-order model that yields the interpolation conditions (17) can be equivalently expressed as

H(s) = (cᵀ Ψ(s)⁻¹ b)/(1 + s 1ᵀr Λ⁻¹ Ψ(s)⁻¹ b).

Proof. As previously done in the proof of Lemma 3, we apply Proposition 1 to the formulation of H(s) in (30), with the choice of vectors and matrices X = Ψ(s), u = b, v = s Λ⁻¹ 1r, and z = c, which yields the result of the lemma.
As in Section 3.2, we assume the mass matrix M to be the identity due to Assumption (A1.1). However, this time, we additionally assume that K has no higher-order Jordan blocks, such that we can diagonalize the stiffness matrix while preserving the identity mass matrix, K = diag(κ1, …, κr) and M = Ir = diag(1, …, 1). Therefore, the matrix-valued function Ψ(s) is a diagonal matrix for all s ∈ C, which can be written as

Ψ(s) = diag((s − κ1/λ1)(s − λ1), …, (s − κr/λr)(s − λr)).    (31)

Using this diagonal form of the matrix-valued function in Lemma 4 yields the barycentric form for the constrained damping case.
Theorem 2. Given interpolation points and measurements {(λi, hi) | 1 ≤ i ≤ r}, let Assumptions (A1.1) and (A1.2) as well as Assumption (A2.2) hold. The barycentric form of the transfer function H(s) corresponding to the second-order system ΣSO : (M, D, K, b, c) is given by

H(s) = ( Σ_{i=1}^{r} (hi wi)/((s − θi)(s − λi)) ) / ( 1 + Σ_{i=1}^{r} (s wi)/(λi (s − θi)(s − λi)) ),    (32)

with the weights 0 ≠ wi ∈ C and support points θi = κi λi⁻¹, where κi ∈ C are stiffness parameters, for 1 ≤ i ≤ r. The barycentric form (32) satisfies the interpolation conditions (17). Moreover, the matrices of the corresponding second-order system are given as

M = Ir,  K = diag(κ1, …, κr),  D = b 1ᵀr Λ⁻¹ − KΛ⁻¹ − Λ,  b = [w1 … wr]ᵀ,  c = [h1 … hr]ᵀ.    (33)

Proof. By using the diagonal structure of Ψ(s) in (31) and the structure of the other components in the formulation of the transfer function in Lemma 4, the transfer function is rewritten in barycentric form by multiplying out the matrix-vector products as

H(s) = ( Σ_{i=1}^{r} (hi wi)/((s − κi/λi)(s − λi)) ) / ( 1 + Σ_{i=1}^{r} (s wi)/(λi (s − κi/λi)(s − λi)) ).
Setting the support points θ i = κ i λ −1 i , the result in (32) follows directly.The realization (33) is then given by rearranging the different parameters into the corresponding matrices and vectors.
As for the realization of matrices in Theorem 1, the matrices in (33) can be reformulated by introducing the notation Θ = diag(θ1, …, θr) such that

M = Ir,  K = ΘΛ,  D = b 1ᵀr Λ⁻¹ − Θ − Λ.

Algorithm 1: Linearized K-constrained second-order barycentric Loewner framework.
Input: Left and right interpolation data {(λi, hi) | 1 ≤ i ≤ r} and {(μi, gi) | 1 ≤ i ≤ r}, support points σ1, …, σr ∈ C.
Output: Second-order system matrices M, D, K, b, c.
1 Construct the r-dimensional divided-differences matrix L_K with the entries (L_K)_{j,i} = (hi − gj)/((μj − λi)(μj − σi)), for i, j = 1, …, r.
2 Solve the linear system of equations L_K w = g for the unknown weights w = [w1 … wr]ᵀ and the given data g = [g1 … gr]ᵀ.
3 Construct the second-order system matrices via (26) with δi = −(σi + λi), for i = 1, …, r.
The free parameters that explicitly appear above are 2r in total and are given by the entries of the vector b and of the diagonal matrix Θ, i.e., the free parameters in the structured barycentric form (32) are {w1, …, wr} ∪ {θ1, …, θr}.
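Theorem 2 can likewise be verified numerically. The sketch below assumes the Θ-based realization M = Ir, K = ΘΛ, D = b 1ᵀr Λ⁻¹ − Θ − Λ stated above and checks agreement with the barycentric evaluation and interpolation at the λi (random illustrative data; the interpolation points must be nonzero so that Λ is invertible):

```python
import numpy as np

rng = np.random.default_rng(5)
r = 3
lam = rng.standard_normal(r) + 1j * rng.standard_normal(r)  # nonzero interpolation points
h = rng.standard_normal(r) + 1j * rng.standard_normal(r)    # data values
w = 1.0 + rng.random(r)                                     # nonzero weights
th = rng.standard_normal(r) + 1.0j                          # support points theta_i

Lam, Th = np.diag(lam), np.diag(th)
b, c = w.astype(complex), h
M = np.eye(r, dtype=complex)
K = Th @ Lam                                    # kappa_i = theta_i * lam_i
D = np.outer(b, 1.0 / lam) - Th - Lam           # b 1^T Lam^{-1} - Theta - Lam
H = lambda s: c @ np.linalg.solve(s**2 * M + s * D + K, b)

def H_bary(s):  # barycentric form (constrained damping parametrization)
    t = 1.0 / ((s - th) * (s - lam))
    return np.sum(h * w * t) / (1.0 + np.sum(s * w / lam * t))

s0 = 0.6 - 0.3j
assert abs(H(s0) - H_bary(s0)) < 1e-8
assert all(abs(H(lam[i]) - h[i]) < 1e-8 for i in range(r))
```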

Computational methods
In this section, we discuss computational aspects for the construction of interpolating second-order models from data based on the different barycentric forms introduced in the previous section.

Linearized structured barycentric Loewner frameworks
The different barycentric forms presented in this paper are designed to interpolate, by construction, given transfer function data {(λi, hi) | 1 ≤ i ≤ r}. However, in all three structured forms, free parameters remain. In the first-order case (Corollary 1), these can be used, for example, to interpolate additional transfer function data

H(μi) = gi, for i = 1, …, r,    (35)

where it is assumed that the interpolation points of the two sets are distinct, i.e., {λ1, …, λr} ∩ {μ1, …, μr} = ∅.
The resulting method can be seen as a barycentric transfer function version of the unstructured (first-order) Loewner framework [22].
Here, we aim to derive similar algorithms for the interpolation of additional transfer function data {(μi, gi) | 1 ≤ i ≤ r} via the new structured barycentric forms (25), (27), and (32) by making use of the remaining free parameters. We can observe that (25) and (32) have 2r free parameters left, which potentially allow the construction of models that match the same number of interpolation conditions. However, the resulting systems of equations that need to be solved are nonlinear in the unknowns and, thus, need thorough investigation in terms of solvability. For simplicity of exposition, we consider in this work only linearized versions of the equations by choosing "suitable" σi's in (25) and θi's in (32) a priori, leading to small linear systems to solve to satisfy r additional interpolation conditions. A discussion of heuristic choices for these support points is given in the upcoming Section 4.3.

Algorithm 2: Linearized D-constrained second-order barycentric Loewner framework.
Input: Left and right interpolation data {(λi, hi) | 1 ≤ i ≤ r} and {(μi, gi) | 1 ≤ i ≤ r}, support points θ1, …, θr ∈ C.
Output: Second-order system matrices M, D, K, b, c.
1 Construct the r-dimensional divided-differences matrix L_D.
2 Solve the linear system of equations L_D w = g for the unknown weights w = [w1 … wr]ᵀ and the given data g = [g1 … gr]ᵀ.
3 Construct the second-order system matrices via (33).
The resulting methods based on the barycentric forms (25) and (32) are given in Algorithms 1 and 2. Both algorithms follow a similar structure. In Step 1 of Algorithms 1 and 2, the given interpolation data and support points are used to set up divided-differences matrices. The realizations of these matrices are determined by the underlying barycentric forms (25) and (32).
For example, consider the barycentric form in (25). The goal is to find the weights w = [w1 … wr]ᵀ in this barycentric form (25) such that the additional interpolation conditions (35) for {(μi, gi) | 1 ≤ i ≤ r} are satisfied. By inserting the form (25) into the interpolation conditions (35) and by multiplying both sides with the denominator of the barycentric form, we obtain the new relation

gj ( 1 + Σ_{i=1}^{r} wi/((μj − λi)(μj − σi)) ) = Σ_{i=1}^{r} (hi wi)/((μj − λi)(μj − σi)),

for j = 1, …, r. Bringing the terms with the unknowns w1, …, wr to the left-hand side yields the relation

Σ_{i=1}^{r} ((hi − gj) wi)/((μj − λi)(μj − σi)) = gj,    (34)

for j = 1, …, r. Rearranging all equations of the form (34) such that the unknowns can be written as the vector w = [w1 … wr]ᵀ results in the linear system of equations L_K w = g, with the data vector g = [g1 … gr]ᵀ and the matrix of divided differences L_K with the entries

(L_K)_{j,i} = (hi − gj)/((μj − λi)(μj − σi)), for i, j = 1, …, r.

This system of linear equations is then solved in Step 2 of Algorithm 1. A similar derivation using (32) leads to the divided-differences matrix in Step 1 of Algorithm 2 and the solution of an analogous linear system of equations in Step 2 of Algorithm 2. Under the assumption that the number of given data points r is less than the minimal system dimension, and for suitable choices of support points {σi} and {θi} such that Assumptions (A1.2), (A2.1), and (A2.2) are satisfied for all interpolation points λ1, …, λr, μ1, …, μr, these linear systems of equations have unique solutions.

Algorithm 3: Second-order barycentric Loewner framework with zero damping.
Input: Interpolation data {(λi, hi) | 1 ≤ i ≤ r} and {(μi, gi) | 1 ≤ i ≤ r}.
Output: Second-order system matrices M, K, b, c.
1 Construct the r-dimensional divided-differences matrix L_KD0.
2 Solve the linear system of equations L_KD0 w = g for the unknown weights w = [w1 … wr]ᵀ and the given data g = [g1 … gr]ᵀ.
3 Construct the second-order system matrices via Corollary 2.
In Step 3 of Algorithms 1 and 2, the second-order systems are constructed following the theory of Theorems 1 and 2. In both cases, the transfer functions of the constructed systems satisfy the 2r imposed interpolation conditions

H(λi) = hi  and  H(μi) = gi, for i = 1, …, r.

Algorithm 3 shows the barycentric Loewner framework for the case of zero damping matrices based on Corollary 2. While the main computational steps follow the same ideas as in Algorithms 1 and 2, the difference to these methods is the lack of the sets of support points {σi}_{i=1,…,r} and {θi}_{i=1,…,r}, which results from enforcing the D = 0 damping model. The corresponding barycentric form (27) has r remaining free parameters that exactly allow the construction of an interpolating second-order system satisfying (35).
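The overall procedure of Algorithm 1 can be sketched end to end. The snippet below generates synthetic data from an illustrative, randomly chosen "true" second-order system, builds a divided-differences matrix with entries (L_K)_{j,i} = (hi − gj)/((μj − λi)(μj − σi)) — one consistent choice following the derivation around (34) — solves for the weights, and assembles an interpolating reduced model from the K-constrained realization:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 6, 3

# illustrative "true" second-order system, used only to generate synthetic data
M0 = np.eye(n)
D0 = np.diag(0.2 + rng.random(n))
K0 = np.diag(1.0 + rng.random(n))
b0, c0 = rng.standard_normal(n), rng.standard_normal(n)
H_true = lambda s: c0 @ np.linalg.solve(s**2 * M0 + s * D0 + K0, b0)

lam = 1j * np.array([1.0, 2.0, 3.0])    # right interpolation points
mu = 1j * np.array([1.5, 2.5, 3.5])     # left interpolation points
h = np.array([H_true(s) for s in lam])
g = np.array([H_true(s) for s in mu])
sig = -np.array([0.5, 1.0, 1.5])        # a priori chosen support points

# Step 1: divided-differences matrix (L_K)_{j,i} = (h_i - g_j)/((mu_j - lam_i)(mu_j - sig_i))
L_K = (h[None, :] - g[:, None]) / ((mu[:, None] - lam[None, :]) * (mu[:, None] - sig[None, :]))
# Step 2: solve for the barycentric weights
w = np.linalg.solve(L_K, g)
# Step 3: assemble the realization (M = I, D = -(Lam + Sig), K = Sig Lam + b 1^T)
Lam, Sig = np.diag(lam), np.diag(sig).astype(complex)
Mr, Dr = np.eye(r, dtype=complex), -(Lam + Sig)
Kr = Sig @ Lam + np.outer(w, np.ones(r))
H_red = lambda s: h @ np.linalg.solve(s**2 * Mr + s * Dr + Kr, w)

# the reduced model matches all 2r samples
assert all(abs(H_red(s) - H_true(s)) < 1e-6 for s in list(lam) + list(mu))
```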

Construction of systems with real-valued matrices
A property desired in many applications for learned systems is a realization by means of real-valued matrices. This often allows the reinterpretation of the learned quantities and the use of classical tools established for real systems resulting, for example, from finite element discretizations. The key feature of second-order systems (1) with real matrices that needs to be exploited for the construction is that data at complex conjugate frequency points are also complex conjugate. As in the classical Loewner framework, we assume the given data {(λ_i, h_i) | 1 ≤ i ≤ r} and {(μ_i, g_i) | 1 ≤ i ≤ r} to be closed under conjugation in the respective sets. Additionally, for Algorithms 1 and 2, we need to assume that the support points {σ_i}_{i=1,…,r} and {θ_i}_{i=1,…,r} are also closed under conjugation and that if λ_i, λ_{i+1} are complex conjugate, then so are σ_i, σ_{i+1} or θ_i, θ_{i+1}, respectively. Let the interpolation data and parameters be ordered such that complex conjugate pairs are sorted together. Given the matrices M, D, K, b, c computed by any of the Algorithms 1 to 3, a real-valued realization of the described system is given by P^T M P, P^T D P, P^T K P, P^T b, c P, where the transformation matrix P is block diagonal and the block matrices are chosen according to the given interpolation data: a 2-by-2 transformation block for complex conjugate interpolation points and 1 for real interpolation points.
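A minimal sketch of the underlying mechanism (Python; the 2-by-2 block J below is the standard unitary transform used in the Loewner literature and is an assumption here, since the paper's exact block is not reproduced above): applied to a conjugate pair of values, it produces a real 2-by-2 block.

```python
import math

# Assumed 2x2 block for a complex conjugate pair of interpolation points
# (standard choice in the Loewner literature): J = (1/sqrt(2)) [[1, -i], [1, i]].
J = [[1 / math.sqrt(2), -1j / math.sqrt(2)],
     [1 / math.sqrt(2), 1j / math.sqrt(2)]]

def conj_transpose(A):
    """Conjugate transpose of a (nested-list) matrix."""
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def matmul(A, B):
    """Plain matrix-matrix product for nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def realify_pair(a):
    """Transform diag(a, conj(a)) into J^H diag(a, conj(a)) J, which is real."""
    A = [[a, 0j], [0j, a.conjugate()]]
    return matmul(matmul(conj_transpose(J), A), J)
```

For a = α + iβ the result is [[α, β], [−β, α]], so each conjugate pair of interpolation points contributes a real 2-by-2 block, while real interpolation points are left untouched by the scalar block 1.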
Remark 2. A special situation occurs in the case of Algorithm 3 for the construction of models with zero damping matrix. As discussed at the end of Section 3.2.2, the collection of data on the imaginary axis iR in complex conjugate pairs violates Assumption (A2.1). This is consistent with the observation that the corresponding transfer function yields identical values for complex conjugate points on the imaginary axis, i.e., for any M, K ∈ C^{n×n}, D = 0 and b, c ∈ C^n, it holds that H(s) = H(s̄) for all s = iω ∈ iR, since s² = s̄² = −ω². As a result, the collection of data from complex conjugate points on the imaginary axis yields no additional information due to the specific system structure and leads to singular linear systems in Step 2 of Algorithm 3. A side effect is that systems with zero damping and real matrices produce only real data on the imaginary axis, such that no additional enforcement of real-valued matrices is necessary.
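The symmetry can be checked on a scalar toy instance (a hedged sketch with n = 1, so the transfer function reduces to H(s) = c b / (s² m + k); the helper name is illustrative):

```python
def H_scalar(s, m, k, b, c):
    """SISO second-order transfer function with zero damping and n = 1:
    H(s) = c * b / (s^2 * m + k). Since (i*w)^2 = (-i*w)^2 = -w^2, the
    transfer function takes identical values at conjugate imaginary points."""
    return c * b / (s * s * m + k)
```

The same argument applies verbatim to matrix-valued M and K, because s enters the resolvent (s²M + K)^{-1} only through s².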

Heuristics for choosing the support points
To discuss suitable choices for the support points σ_1, …, σ_r and θ_1, …, θ_r, we consider their influence on the transfer functions (25) and (32). First, consider the case of the barycentric form (25) resulting from the constrained stiffness matrix. Assume for the moment that σ_1, …, σ_r are all distinct. Then, the transfer function (25) assumes the same values at the support points as at the interpolation points. Since the typical case will be H(λ_i) ≠ H(σ_i), this introduces approximation errors at the chosen support points. In the case that some of the support points are chosen to be identical, i.e., σ_i = σ_{i_1} = … = σ_{i_ℓ} for some indices i_1, …, i_ℓ, a corresponding relation holds. Similar to the case of distinct support points, approximation errors at σ_i are introduced. However, choosing many support points to be identical allows one to cluster the introduced errors away from frequency ranges of interest. In a similar way, one can observe for the barycentric form (32) resulting from the constrained damping matrix that an analogous relation holds in the case of distinct support points θ_1, …, θ_r. As before, typically H(θ_i) ≠ h_i λ_i / θ_i will hold, indicating approximation errors introduced at the chosen support points; the case of identical support points θ_i = θ_{i_1} = … = θ_{i_ℓ} for some indices i_1, …, i_ℓ behaves analogously. To summarize, Equations (36) and (37) show that poorly chosen support points introduce undesired approximation errors. The points σ_1, …, σ_r and θ_1, …, θ_r do not need to be distinct (in contrast to the interpolation points; cf. Assumption (A1.2)), and the undesired approximation errors are enforced at the support points themselves. The second observation motivates choosing σ_1, …, σ_r and θ_1, …, θ_r outside the considered frequency region of interest, i.e., σ_k, θ_k ∉ conv_R{λ_1, …, λ_r, μ_1, …, μ_r}, for k = 1, …, r, where conv_R{a_1, …, a_ℓ} denotes the convex hull of elements in C over R. This can be achieved, for example, by taking large shifts or multiples of the given interpolation points as support points.
Next, we revisit the construction of the final second-order system matrices in Algorithms 1 and 2 and the influence of the support points on the properties of these matrices.
In both algorithms, the damping matrices involve the negated interpolation points λ_1, …, λ_r minus the chosen support points σ_1, …, σ_r or θ_1, …, θ_r. Eigenvalues with positive real parts in the damping matrix can be interpreted as dissipation of energy from the system. To achieve this, a reasonable choice for σ_1, …, σ_r and θ_1, …, θ_r is to have negative real parts, which pushes the eigenvalues of the resulting damping matrix toward the open right half-plane. In particular, it is possible to construct real, symmetric positive definite damping matrices via Algorithm 1 in the case that the interpolation data has been obtained on the imaginary axis, by choosing the support points σ_1, …, σ_r such that Re(σ_i) < 0 and Im(σ_i) = −Im(λ_i), for all i = 1, …, r.
On the other hand, we observe that the stiffness matrices in Algorithms 1 and 2 involve the multiplication of the interpolation points λ_1, …, λ_r with the support points σ_1, …, σ_r and θ_1, …, θ_r. Aiming for stiffness matrices with eigenvalues that have positive real parts, which together with such eigenvalues in the damping matrix drives the system towards asymptotic stability, the support points σ_1, …, σ_r and θ_1, …, θ_r should be chosen with imaginary parts in the half-plane opposite to the imaginary parts of λ_1, …, λ_r; e.g., if the interpolation points are chosen on the positive imaginary axis, then σ_1, …, σ_r and θ_1, …, θ_r should have negative imaginary parts. In the case of Algorithm 2 with the interpolation points λ_1, …, λ_r on the imaginary axis, the support points θ_1, …, θ_r can be chosen such that K can be realized as a real-valued, symmetric positive definite matrix by Im(θ_i) = −(Im(λ_i) + c_i), for all i = 1, …, r, where the c_i ∈ R are real constants with the same sign as the imaginary part of λ_i such that c_i Im(λ_i) > 0 holds.
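The two heuristics above can be condensed into a short sketch (Python; the shift magnitude is a free parameter and the helper names are hypothetical, not part of the paper's algorithms):

```python
def supports_k_constrained(lams, shift=10.0):
    """Support points sigma_i with Re(sigma_i) < 0 and Im(sigma_i) = -Im(lam_i),
    the heuristic for real, symmetric positive definite damping matrices."""
    return [-abs(shift) - 1j * l.imag for l in lams]

def supports_d_constrained(lams, c):
    """Support points theta_i with Im(theta_i) = -(Im(lam_i) + c_i), where the
    real constants c_i satisfy c_i * Im(lam_i) > 0. The real parts are left at
    zero here; a negative real part can be added as discussed above."""
    assert all(ci * l.imag > 0 for ci, l in zip(c, lams))
    return [-1j * (l.imag + ci) for ci, l in zip(c, lams)]
```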

Numerical experiments
In this section, we verify the proposed algorithms and barycentric forms numerically on different examples. The experiments reported here have been executed on a machine equipped with an AMD Ryzen 5 5500U processor running at 2.10 GHz and with 16 GB of main memory. The computer runs Windows 10 Home version 21H2 (build 19044.2251) and, for all reported experiments, we use MATLAB 9.9.0.1592791 (R2020b). Source codes, data, and numerical results are available at [43].

Computational setup
As the setup of the subsequent comparative study, we consider the following data-driven, interpolation-based methods:

soBaryLoewK — the linearized K-constrained second-order barycentric Loewner framework from Algorithm 1,
soBaryLoewD — the linearized D-constrained second-order barycentric Loewner framework from Algorithm 2,
soBaryLoewKD0 — the second-order barycentric Loewner framework with zero damping matrix from Algorithm 3,
soLoewRayleigh — the second-order (matrix) Loewner framework for Rayleigh-damped systems from [30],
BaryLoew — the classical first-order barycentric Loewner framework based on the results in Corollary 1; see also [2].
All models are constructed such that the transfer functions satisfy the same interpolation conditions.
As interpolation points, we have chosen the local minima and maxima of the given data samples on the positive part of the imaginary axis using the MATLAB functions islocalmin and islocalmax, supplemented by the limits of the considered frequency interval of interest [iω_min, iω_max] and, if necessary, some additional intermediate points. The interpolation points are then split into the left and right sets by alternating the ascending order of the positive imaginary parts, i.e., Im(λ_1) < Im(μ_1) < Im(λ_2) < Im(μ_2) < … < Im(λ_r) < Im(μ_r).
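A possible stand-in for this selection step (Python; the MATLAB functions islocalmin and islocalmax are replaced here by a simple neighbor comparison, and all names are illustrative):

```python
def local_extrema(mag):
    """Indices of strict local minima and maxima of a sampled magnitude curve."""
    return [k for k in range(1, len(mag) - 1)
            if (mag[k] - mag[k - 1]) * (mag[k + 1] - mag[k]) < 0]

def select_points(omegas, mag):
    """Interpolation points i*omega at local extrema plus the interval limits."""
    idx = [0] + local_extrema(mag) + [len(omegas) - 1]
    return [1j * omegas[k] for k in sorted(set(idx))]

def split_alternating(points):
    """Split into left/right sets so that
    Im(lam_1) < Im(mu_1) < Im(lam_2) < Im(mu_2) < ..."""
    pts = sorted(points, key=lambda p: p.imag)
    return pts[0::2], pts[1::2]
```

The alternating split interleaves the two point sets along the frequency axis, which matches the ordering stated above.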
For soBaryLoewK and soBaryLoewD, the support points are chosen according to the discussion in Section 4.3, more specifically as in (38). The individual choices are outlined in the descriptions of the examples below. The Rayleigh damping parameters for soLoewRayleigh are either known from the original model or inferred in an optimal way; that is, we do not consider the optimization of these additional parameters here.
For the comparison of the different methods, we consider the transfer function magnitude, given as the absolute value |H(iω_k)| at the discrete frequency points iω_k given in the data sets, and the corresponding pointwise relative errors. An overview of the dimensions of the original full-order systems, the dimensions of the learned (reduced-order) models, and whether complex conjugate data has been used for the construction of real-valued system matrices can be found in Table 1. The rows of the table are split into the examples with and without damping matrix.

Figure 1: Butterfly gyroscope example: All methods recover reduced-order models with similar accuracy using the same amount of given interpolation data. The model learned by soLoewRayleigh performs slightly better for frequencies between 500 and 5 000 rad/s.
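The formula for the pointwise relative error is not reproduced above; a standard choice (an assumption here) is the pointwise deviation of the learned transfer function from the data, normalized by the data magnitude:

```python
def pointwise_relative_errors(H_data, H_model):
    """err_k = |H(i w_k) - Hr(i w_k)| / |H(i w_k)| over the sampled points,
    where H_data holds the measured samples and H_model the learned model's
    transfer function values at the same frequencies."""
    return [abs(h - hr) / abs(h) for h, hr in zip(H_data, H_model)]
```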

Examples with non-zero damping matrix
We first consider the case of models with energy dissipation, which requires the presence of a damping matrix D. Three examples are considered to test the proposed methods. The butterfly gyroscope models the behavior of a micro-mechanical vibrating gyroscope structure for use in inertia-based navigation systems [10, 26]. The artificial fishtail models the deformation of a silicon structure in the shape of a fishtail used in the construction of underwater vehicles with fish-like locomotion [38, 39]. Lastly, we have sampled data from a high-fidelity simulation of a flexible aircraft model used in civil aeronautics to optimize lightweight structures [31, 32]. The dimensions of the sampled models and the dimensions of the constructed second-order models are shown in Table 1. Note that we consider here SISO versions of these examples, which are originally single-input/multi-output.
The results computed by the different methods are shown in Figures 1 to 3. The support points are set as described in Section 4.3 for the butterfly gyroscope and flexible aircraft examples and, in the case of the fishtail example, in terms of ω_max ∈ R, the upper limit of the considered frequency interval. The figures show the transfer function magnitudes of the constructed models together with the used interpolation data, and the pointwise relative errors computed with respect to all given data samples. In the case of the butterfly and fishtail examples, these are 1 000 samples in the frequency range of interest, and 421 samples for the flexible aircraft example. For all three examples, the considered methods perform similarly well in terms of the pointwise relative errors shown in Figures 1 to 3. However, we note that the learned models have different spectral properties. In the case of the butterfly gyroscope, the proposed methods soBaryLoewK and soBaryLoewD produce asymptotically stable reduced-order models due to the choice of support points, in contrast to the classical BaryLoew, which has one unstable pole. The Rayleigh-damped approach soLoewRayleigh gives one unstable and one infinite eigenvalue, where the infinite one likely results from the finite precision arithmetic in the eigenvalue computations and the highly ill-conditioned learned system matrices. For the fishtail example, all computed reduced-order models are stable, and for the aircraft example, no reduced-order model is stable. In particular, soBaryLoewD, soLoewRayleigh and BaryLoew have three pairs of unstable complex conjugate eigenvalues, while soBaryLoewK has only two pairs.

Examples with zero damping matrix
Now, we consider two examples with zero damping matrix in order to test soBaryLoewKD0. First, we have the bone model as a conservative system, which is used to simulate the porous bone micro-architecture in studies of bone tissue under different loads [25, 41]. As a second example, we consider the model of a vibrating plate that is equipped with tuned vibration absorbers, which lead to hysteretic structural damping [4, 5]. The results of the different methods can be seen in Figures 4 and 5. In both examples, the second-order methods soBaryLoewKD0 and soLoewRayleigh perform better in terms of the pointwise relative errors than the classical BaryLoew. This comes from the additional preservation of the damping model in these methods. In particular, we can observe that in the absence of any type of energy dissipation, the two methods that preserve the zero damping structure outperform the classical Loewner framework by several orders of magnitude. Additionally, we note that the curves of soBaryLoewKD0 and soLoewRayleigh are in fact identical in both examples. This is a numerical verification that the barycentric form in Corollary 2 describes exactly the same system that is recovered by soLoewRayleigh with zero Rayleigh damping parameters, i.e., both methods construct different realizations of exactly the same interpolatory second-order system.

Conclusions
We have developed new structured barycentric forms for the transfer functions of second-order systems for data-driven, interpolatory reduced-order modeling. Based on these barycentric forms, we have proposed three Loewner-like algorithms for the construction of second-order systems from data. Numerical experiments compared these new methods to the classical, unstructured Loewner approach as well as to another Loewner-like method for the construction of second-order systems from frequency domain data. In all examples, the new structured approaches were able to provide similar, and in some cases significantly better, approximation accuracy than the established methods from the literature, some of which do not preserve the structure. Since the proposed algorithms rely on some fixed parameter choices to simplify computations, we expect that including these additional support points as free parameters would significantly increase the approximation capabilities of methods based on the presented structured barycentric forms. However, we leave these considerations for future work. Additionally, we have observed that these support points can be used to alter the properties of the constructed system matrices, allowing, for example, the construction of asymptotically stable second-order systems.
At the heart of this work are the new structured barycentric forms, which enable a broad range of new algorithms for learning structured models from frequency domain data. For clarity of presentation, we restricted the analysis in this work to a purely interpolatory framework. However, using the free parameters in the barycentric forms for least-squares fitting will allow the derivation of methods like vector fitting [17] and AAA [24] for second-order systems. In particular, the presence of more parameters than in the unstructured, first-order system case gives rise to a much greater variety of resulting algorithms. We will consider these ideas in future work.

Figure 2: Artificial fishtail example: All methods provide a similar approximation accuracy. The model inferred by soBaryLoewD performs insignificantly worse for frequencies between 1 and 100 rad/s, while the model from soLoewRayleigh keeps the accuracy level also for higher frequencies.

Figure 3: Flexible aircraft example: All methods construct reduced-order models that recover the given data set with sufficient accuracy. For higher frequencies, more interpolation data is used due to the presence of many local maxima and minima.

Figure 5: Hysteretic plate example: For up to 50 rad/s, the second-order methods soBaryLoewKD0 and soLoewRayleigh produce relative errors that are at least four orders of magnitude smaller than those of the classical BaryLoew. The curves of soBaryLoewKD0 and soLoewRayleigh are identical up to numerical round-off.

Table 1: Dimensions and numbers of data samples for the numerical examples. The number of matched interpolation conditions for all constructed models is 2r, due to the partition into left and right data. The column "c.c.d." denotes the use of complex conjugate interpolation points and data.