Semi-active damping optimization of vibrational systems using the reduced basis method

In this article, we consider vibrational systems with semi-active damping that are described by a second-order model. In order to minimize the influence of external inputs on the system response, we optimize the damping values. As minimization criterion, we use the energy response, that is, the H 2 -norm of the corresponding transfer function of the system. Computing the energy response requires solving Lyapunov equations for different damping parameters. Hence, the minimization process leads to high computational costs if the system is of large dimension. We present two techniques that reduce the optimization problem by applying the reduced basis method to the corresponding parametric Lyapunov equations. In the first method, we determine a reduced solution space on which the Lyapunov equations, and hence the resulting energy response values, are computed approximately in a reasonable time. The second method embeds the reduced basis method in the minimization process. To evaluate the quality of the approximations, we introduce error estimators for the error in the controllability Gramians and in the energy response. Finally, we illustrate the advantages of our methods by applying them to two different examples.


Introduction
When constructing large civil engineering infrastructure such as buildings or bridges, external vibrational forces like wind perturbations or earthquakes need to be taken into account. Due to continuous improvements in engineering construction, which allow for lighter and finer structures, such infrastructures have become more susceptible to large deflections and fatigue when external forces with dominant frequencies close to the natural frequencies of the structure are applied.
To prevent this effect, dampers are designed in order to remove critical energies from the physical system. We consider the vibrational system M ẍ(t) + D(g) ẋ(t) + K x(t) = B u(t), y(t) = C x(t), where M ∈ R n×n is the mass matrix, D(g) ∈ R n×n is the damping matrix, and K ∈ R n×n is the stiffness matrix. We assume that M and K are symmetric and positive definite, and that D(g) is symmetric and positive semidefinite for all parameters g ∈ D, where D ⊆ R ℓ is the parameter set. Additionally, the matrix B ∈ R n×m is the input matrix and C ∈ R p×n is the output matrix. The vectors u(t) ∈ R m , x(t) ∈ R n , and y(t) ∈ R p describe the input, the state, and the output of the system, respectively. The damping of the system consists of two parts: an internal damping of small magnitude and external dampers designed to limit the influence of the input on the output of the system. Hence, the parameter-dependent damping matrix D(g) is composed as D(g) = D int + D ext (g), where D int describes the internal damping and D ext (g) describes the external damping.
The internal damping can be modeled in different ways. In this work we use a multiple α ≪ 1 of the critical damping, which is a widely used convention and was applied, for example, in [5,7,27,43,47]. However, our theory holds for every modal damping, i.e., every damping matrix that is simultaneously diagonalizable with M and K. Other methods to model the internal damping are presented in [21,25].
External dampers typically depend on two variables: the positions and the damping values. In this paper, we focus on the optimization of the damping values to attenuate the effects of the external forces. Hence, the external damping matrix D ext (g) depends on the damping gains g = (g 1 , . . ., g ℓ ), which represent the friction coefficients, and on the matrix F ∈ R n×ℓ that describes the positions of the dampers, such that D ext (g) := F G(g) F T , G(g) := diag(g 1 , . . ., g ℓ ).
We assume that the number of dampers ℓ is significantly smaller than the dimension n. Additionally, we assume that the damping gains are fixed over time and lie in given intervals [g − i , g + i ] ⊂ R + for all i = 1, . . ., ℓ. We collect these bounds in the parameter set D, which contains all restricting intervals. However, it is also possible that no a priori knowledge of the parameter set is given; in that case we define D = R ℓ + . Our goal is to design the damping values based on the optimization of an objective function. The problem of finding optimal external dampers has been widely investigated in the literature, see [14,20,22,26,37,47]. The objective of this work is to solve the problem of semi-active damping, i.e., for a given vibrational system of the form (1), to determine the damping D(g) that ensures optimal attenuation of the output y. Since we want to minimize the maximum response gain over the entire range of time, we minimize the L ∞ -norm of the output y. For that, we can make use of the bound ‖y‖ L∞ ≤ ‖G(•, g)‖ H2 ‖u‖ L2 , where G(s, g) := C(s 2 M + s D(g) + K) −1 B is the transfer function associated with the vibrational system in (1) that describes the input-to-output behavior in the frequency domain. We see that the norm of y is bounded by the H 2 -norm of the transfer function G(•, g), which is also called the energy response and is defined as ‖G(•, g)‖ H2 = ( (1/(2π)) ∫ from −∞ to ∞ of trace(G(iω, g)^H G(iω, g)) dω )^(1/2).
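For concreteness, the structure of the external damping matrix can be sketched in a few lines; the sizes, damper positions, and gains below are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative sizes: ell = 2 dampers on an n = 5 degree-of-freedom system.
n, ell = 5, 2
F = np.zeros((n, ell))
F[1, 0] = 1.0   # first damper acts on coordinate 1 (hypothetical position)
F[3, 1] = 1.0   # second damper acts on coordinate 3 (hypothetical position)

def D_ext(g):
    """External damping D_ext(g) = F G(g) F^T with G(g) = diag(g_1, ..., g_ell)."""
    return F @ np.diag(g) @ F.T

g = np.array([0.5, 2.0])            # damping gains g_1, g_2
D = D_ext(g)
# D is symmetric, positive semidefinite, and of rank at most ell
assert np.allclose(D, D.T) and np.linalg.matrix_rank(D) <= ell
```

Since ℓ ≪ n, the factorized form F G(g) F T is much cheaper to update for new gains g than rebuilding a full n × n matrix.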
In this article, we optimize the damping values in such a way that the H 2 -norm of the transfer function, and hence the energy response, is minimized. This criterion was also used in [3,15,39].
The latter is a linearization of (1), and we emphasize that there are many other useful linearizations that could be used here. We use (3) for simplicity. As described in [48], the H 2 -norm of the transfer function can be computed via the matrix P(g), called the controllability Gramian, which can be defined equivalently in the time domain or in the frequency domain. The upper left block P 11 (g) of the Gramian P(g) is called the position controllability Gramian and encodes the reachability space of the state x(t) of the corresponding second-order system (1). The controllability Gramian P(g) is computed by solving the Lyapunov equation (6). In order to find the damping gains g ∈ D that minimize the energy response J(g), we have to solve a Lyapunov equation (6) in every step of the optimization method. Since solving the Lyapunov equations is very demanding if the matrices are of large dimension, the minimization process would lead to high computational cost and hence be inefficient or infeasible in a large-scale setup. In the literature, several approaches have been developed to accelerate the minimization process. The authors in [3] utilize the dominant pole algorithm in order to build a reduced minimization problem that can be solved quickly. In [39], structure-preserving parametric model reduction based on the iterative rational Krylov algorithm (sym2IRKA) is used to derive an efficient optimization algorithm. In [2], a sampling-free approach is presented that reduces the system (1) for all admissible parameters. Alternatively, in [40] the H ∞ -norm of the transfer function G is minimized.
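The Gramian-based evaluation of the energy response can be sketched for a toy system. The matrices, the particular first-order realization, and the use of `scipy.linalg.solve_continuous_lyapunov` are illustrative stand-ins for the paper's equations (3), (4), and (6), not the paper's own setup.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy second-order system (all matrices illustrative).
n = 4
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal stiffness
e1 = np.eye(n)[1]
D = 0.02 * K + 0.5 * np.outer(e1, e1)   # small internal damping plus one damper
B2 = np.eye(n)[:, :1]                    # force on the first mass
C1 = np.eye(n)[:1, :]                    # observe the first position

# One common first-order realization of the second-order model.
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ D]])
B = np.vstack([np.zeros((n, 1)), Minv @ B2])
C = np.hstack([C1, np.zeros((1, n))])

# Controllability Gramian: A P + P A^T + B B^T = 0; energy response
# J = sqrt(trace(C P C^T)).
P = solve_continuous_lyapunov(A, -B @ B.T)
J = np.sqrt((C @ P @ C.T).item())
```

Each evaluation of J(g) requires one such Lyapunov solve of dimension 2n, which is exactly the cost the reduced basis method is designed to avoid.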
In [6,41,43], the authors present different reduction techniques to optimize the related problem of minimizing the total average energy of the system (1) with no input. It is worth mentioning that the problem of optimizing damping positions is investigated in [5,11,16,23,38]. This is a challenging problem, especially for large-scale systems, and it is not the focus of this work.
In this paper, we apply the reduced basis method (RBM), and modifications of it, in order to reduce the Lyapunov equation (6) such that the surrogate equation can be solved in a reasonable time. The RBM is a well-established method to reduce parameter-dependent partial differential equations [18,30,44,45,46]. Later, it was used for Riccati equations [33], and finally the RBM was applied to Lyapunov equations by Son and Stykel in [36]. In [29], the authors use the RBM in order to reduce parametric differential-algebraic systems. The method is decomposed into an offline and an online phase. In the offline phase of the RBM, a reduced space is determined that approximates the solution space of the original Lyapunov equation for all parameters g ∈ D.
To evaluate the quality of the approximated reduced space, we suggest two error estimators that evaluate the error in the controllability Gramians and in the corresponding energy responses for different damping values. The approximated solution space is then used in the online phase to solve the corresponding reduced Lyapunov equations, leading to an approximation of the energy response determined by Equation (4) for all requested parameters g ∈ D. To avoid adding information to the basis that is not needed during the optimization, we introduce an adaptive method that enriches the basis within the optimization process.
The rest of this paper is structured as follows: In Section 2, we describe the reduced basis method and introduce error estimators that are suitable for our method. The adaptive enrichment method is then introduced in Section 3. Afterwards, we describe some implementation details in Section 4. In order to illustrate the effectiveness of our methods, in Section 5 we apply them to two examples, and finally Section 6 concludes the manuscript.

Reduced basis method
As described above, we want to find the damping values that minimize the energy response J(g).
The optimization procedure requires the evaluation of this energy for several parameters g ∈ D. To accelerate the optimization, we want to speed up the computation of the Gramian P(g) or P 11 (g), which is the numerically most costly step. The reduced basis method (RBM) has been found to be an effective tool for this problem, see [36]. We apply the RBM in order to reduce the optimization problem in Section 2.1. Afterwards, in Section 2.2, we derive the error estimators that are needed in the RBM to evaluate the quality of the approximations.

Reduction using the reduced basis method
In order to accelerate the computation of the position controllability Gramians P 11 (g), we aim to find a space V D that spans the controllability space of the second-order system (1), and hence the columns of the Gramians P 11 (g), i.e., span {P 11 (g)} ⊂ V D = span {V D } for all admissible parameters g ∈ D, where V D is a basis that spans the space V D . Then, for all g ∈ D, there exists a matrix X 11 (g) representing P 11 (g) in this basis. Since such a space is in general not available, the idea of the RBM is to find a reduced space V = span {V 1 }, V 1 ∈ R n×r , so that the position controllability Gramian P 11 (g) can be approximated as in (7) for suitable P 11 (g) ∈ R r×r and for all damping parameters g ∈ D. The matrices P 11 (g) are determined by solving Lyapunov equations of dimension 2r, which can be solved quickly if r is small enough.
The RBM is decomposed into an offline and an online phase. The offline phase consists of the computation of the basis V 1 ∈ R n×r , which is rather time consuming but needs to be performed only once. In the online phase, on the other hand, Lyapunov equations are solved on the reduced space V, which is fast and can be performed multiple times. This phase involves performing the optimization, which requires evaluating the approximated energy response for multiple damping parameters. In order to describe the two phases in more detail, we need a criterion to evaluate the quality of the reduced space V. Thus, we assume that we have an error estimator ∆(g) that provides a criterion to determine how well the solution space for a parameter g is approximated by the current basis V 1 . The error estimators used are described later in Section 2.2.
Offline phase: In this phase, we aim to find an approximation V of the space V D and the corresponding basis V 1 , which will be used later to define an approximation of the position controllability Gramians P 11 (g) for the required parameters g ∈ D. Since we cannot evaluate and investigate the infinite number of parameters given in D, we define a test-parameter set D Test ⊂ D that is finite and well distributed in D. Finding a good sampling strategy for the test-parameter set is a challenging task, which was addressed in [8,12,19,32] for partial differential equations. In the following, we derive a space V that includes approximations of the solution spaces corresponding to all test-parameters g ∈ D Test . Then, if D Test is well-chosen, the space V approximates the solution space for all parameters in D.
Because of the low-rank structure of the right-hand side in (6), we can assume that the solution P(g) of the Lyapunov equation in (6) is well approximated by a low-rank factor Z(g) such that P(g) ≈ Z(g)Z(g) T for g ∈ D. In order to build a first basis V 1 , we choose an arbitrary test-parameter g 0 ∈ D Test and solve the Lyapunov equation in (6) to obtain the low-rank factor Z(g 0 ), which includes Z 1 (g 0 ). Using the low-rank factor Z 1 (g 0 ), we define the basis V 1 = orth(Z 1 (g 0 )). The operator orth(•) describes the orthonormalization of the columns of a given matrix. After forming our first basis, we evaluate the quality of the approximation of the position controllability Gramian P 11 (g) for all remaining parameters g ∈ D Test . To this aim, we compute the error estimate ∆(g) for all these parameters and define the largest error estimate as ∆ max := max g∈D Test ∆(g) = ∆(g 1 ), where g 1 is the parameter that leads to the largest estimate. If ∆ max is larger than a given tolerance tol f , we know that the current basis does not approximate the solution space well enough for at least one parameter g 1 . Hence, we need to enlarge the basis V 1 . Obvious candidates to enrich the basis are the columns of the Gramian P 11 (g 1 ) for the parameter g 1 that results in this largest error estimate. We compute the low-rank factor Z 1 (g 1 ) by solving the Lyapunov equation in (6) and set V 1 = orth([V 1 , Z 1 (g 1 )]) to build the new basis. We continue with this procedure until the error estimate ∆(g) is smaller than the tolerance tol f for every damping value g ∈ D Test .
This method is described in Algorithm 1. In Steps 3 and 10, we define the set of already used parameters M so that we do not evaluate their error estimates in Steps 6 and 12.
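The greedy offline loop can be sketched as follows. For illustration only, the exact projection error of the Gramian stands in for the estimator ∆(g) (in the paper, the estimator avoids computing the full Gramians), and the single-gain toy system is a hypothetical stand-in.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Greedy offline phase (sketch); all matrices are an illustrative toy.
n = 4
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.eye(n)[:, 1:2]                 # one damper position (hypothetical)
B2 = np.eye(n)[:, :1]

def gramian_P11(g):
    D = 0.02 * K + g * (f @ f.T)
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
    B = np.vstack([np.zeros((n, 1)), B2])
    return solve_continuous_lyapunov(A, -B @ B.T)[:n, :n]

D_test = [0.1, 1.0, 5.0, 20.0]        # finite test-parameter set
tol_f = 1e-6                          # tolerance on the relative error
used = [D_test[0]]
# first basis, truncated to rank one purely to make enrichment visible
V1 = np.linalg.qr(gramian_P11(D_test[0]))[0][:, :1]
while True:
    worst_g, worst_err = None, 0.0
    for g in D_test:                  # sweep the remaining test parameters
        if g in used:
            continue
        P11 = gramian_P11(g)
        err = (np.linalg.norm(P11 - V1 @ (V1.T @ P11 @ V1) @ V1.T)
               / np.linalg.norm(P11))
        if err > worst_err:
            worst_g, worst_err = g, err
    if worst_err <= tol_f or worst_g is None:
        break                         # basis good enough for all test parameters
    used.append(worst_g)              # greedy enrichment at the worst parameter
    V1 = np.linalg.qr(np.hstack([V1, gramian_P11(worst_g)]))[0]
```

The loop terminates either when the worst error falls below the tolerance or when every test parameter has been used.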

Online phase: After the basis V 1 is computed, it can be used to determine an approximation of the position controllability Gramian P 11 (g) as described in (7). For that, we define the basis V corresponding to the first-order system in (3) and define the matrices reduced by V. We compute the reduced Gramian P 11 (g) that is used in (7) by solving the reduced Lyapunov equation (9). Note that the projected Lyapunov equation in (9) is of dimension 2r and can therefore be solved cheaply compared to the original Lyapunov equation in (6).
In practice, we make use of the low-rank structure of the Gramians described in (8) to obtain the corresponding low-rank approximations.

Optimization: Finally, we include the online phase of the RBM in our optimization problem. We assume that the bases V 1 and V were computed beforehand in the offline phase. Then, in every step of the optimization process (at a parameter g), we solve the reduced Lyapunov equation from (9) so that the reduced Gramians P(g) and P 11 (g) are available. Afterwards, we define the approximated energy response (10) that is minimized. The computation of the reduced energy response in (10) requires solving a Lyapunov equation from (9) of dimension 2r in every step of the optimization process. This is a significant acceleration compared to the optimization of the original energy response in (4), where the Lyapunov equation from (6) of dimension 2n needs to be solved in every iteration step.
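The online step, projecting the Lyapunov equation onto the basis and evaluating the reduced energy response, can be sketched as follows. The system and the random basis V 1 are purely illustrative; note that the block structure of V preserves the second-order structure, so the projected matrix stays stable.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Online phase (sketch): Galerkin projection of the Lyapunov equation onto
# a given basis V1 and evaluation of the reduced energy response.
n, r = 6, 3
rng = np.random.default_rng(0)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.05 * K
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
B = np.vstack([np.zeros((n, 1)), np.eye(n)[:, :1]])
C1 = np.eye(n)[:1, :]

V1 = np.linalg.qr(rng.standard_normal((n, r)))[0]   # stands in for the RBM basis
V = np.block([[V1, np.zeros((n, r))],
              [np.zeros((n, r)), V1]])              # 2n x 2r first-order basis

A_r = V.T @ A @ V                                   # reduced 2r x 2r matrices
B_r = V.T @ B
P_r = solve_continuous_lyapunov(A_r, -B_r @ B_r.T)  # reduced Gramian, dimension 2r
P11_r = P_r[:r, :r]                                 # reduced position Gramian
J_red = np.sqrt(max((C1 @ V1 @ P11_r @ V1.T @ C1.T).item(), 0.0))
```

Because A_r inherits the form [[0, I], [-K_r, -D_r]] with K_r and D_r symmetric positive definite, the reduced Lyapunov equation is always solvable in this sketch.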

Error estimator
For the reduced basis method presented above, as well as for the adaptive method presented in the following section, error estimators are needed to evaluate the quality of the resulting approximations.
In this section, we will derive error estimators of two different quantities.
The first one is the approximation error E 11 (g) of the position controllability Gramians, comparing P 11 (g) with its reduced-basis approximation. The position controllability Gramian error E 11 (g) is the upper left block of the corresponding error E(g). In the literature, there are various upper bounds for the norm of E(g), i.e., for the error in the solution of the Lyapunov equation in (6), see, e.g., [29,36]. These bounds use the corresponding residual R(g) := BB T + A(g) P(g) + P(g) A(g) T , evaluated at the approximate Gramian, divided by the coercivity constant of the Lyapunov equation in (6), which is the minimal eigenvalue of the corresponding linear system matrix. For vibrational systems with small internal damping, the smallest eigenvalues are close to the imaginary axis; hence, the coercivity constant is small, so that the error estimators lead to large, very conservative values and are not feasible for our application. Additionally, we only want to evaluate the approximation of the controllability space of x(t) encoded in P 11 (g), while evaluating the approximation of the complete Gramian P(g) involves information that is not used.
The second quantity that we consider is the approximation error of the function values of the energy response, i.e., the error E J (g).
We notice that the error E(g) or E 11 (g) is required both for the evaluation of the error E 11 (g) and for the computation of the error E J (g). Since it is costly to compute the error E(g) for every requested parameter g, we require an approximation of E(g), which we aim to find in the following. Firstly, notice that the error E(g) is the solution of the Lyapunov equation (12), which is also called the error equation. Hence, a second RBM that follows the same scheme as presented in Section 2.1 can be applied to determine a basis V 1,err and to approximate the error. To avoid confusion, we denote the second reduced basis method that determines the basis V 1,err for the error estimation by EE-RBM. The basis V 1,err can be constructed using the solutions of (12) for some parameters. However, the right-hand side R(g) of the error equation in (12) does not consist of low-rank factors, and hence, solving this Lyapunov equation is generally numerically costly, since solution algorithms cannot exploit a low-rank factor structure. Hence, in order to avoid having to solve the error equation in (12), we investigate the error E 11 (g) and the structure of the corresponding error spaces in more detail to simplify its computation. For this purpose, we write the position controllability Gramian in terms of a basis V D spanning the (complete) solution space V D of the second-order system in (1) for all parameters g ∈ D.
The error is then given in terms of V 1 and V D , and hence we obtain that the error E 11 (g) lies, for all parameters, in the space spanned by the basis V E , which was investigated in [9] for linear systems. Since V 1 is already known from the first RBM, the remaining task is to determine V D . However, the basis V D is obviously not available; otherwise we would have a basis that spans the solution space of the Lyapunov equation in (6) for all parameters without any error. This is why we apply the second reduced basis method (EE-RBM) and derive an approximation of V E that is called V 1,err . However, because of the structure of the basis V E , we can solve a second Lyapunov equation from (6) instead of the error equation in (12), which is of a more advantageous structure because of the low-rank factors on the right-hand side. Hence, in every step of the EE-RBM, we solve the Lyapunov equation from (6) at a certain parameter g r to obtain the corresponding solution Z 1 (g r ) and enrich the basis of the error equation in (12) as V 1,err = orth([V 1,err , V 1 , Z 1 (g r )]), and therefore build a basis that approximates V E . Adding V 1 and Z 1 (g r ) is equivalent to solving the error equation from (12) at g r and adding the resulting solution to the basis V 1,err .
In order to include the computation of the basis V 1,err in the first RBM, we run both methods, the RBM and the EE-RBM, in parallel. The first parameters g 0 and g r 0 are chosen arbitrarily in D Test with g r 0 ≠ g 0 . We refer to [13] for a mathematical analysis of why g r 0 should be chosen different from g 0 . We compute the basis V 1 = orth(Z 1 (g 0 )) as described in the previous subsection and, in addition, solve the Lyapunov equation from (6) at g r 0 to obtain Z 1 (g r 0 ), such that our first error space basis is given as V 1,err = orth([V 1 , Z 1 (g r 0 )]). As described above, the next parameter g 1 is the one that leads to the largest error estimate ∆(g), and we use the corresponding solution Z 1 (g 1 ) to enrich the basis V 1 . Next, the parameter g r 1 is chosen as the one that results in the largest residual of the error equation in the Frobenius norm, i.e., the parameter g r 1 that leads to the largest value ‖R r (g)‖ F . We use the parameter g r 1 to generate the next error equation basis V 1,err . We continue with this process until the maximum error estimate ∆(g) is smaller than a certain tolerance for all parameters and the overall RBM is finished.
After we have determined an error equation basis V 1,err , this basis is used to derive the corresponding error estimators. For that, we define the basis V err , which spans an approximation of the solution space of the error equation in (12). Using this basis, we define the reduced matrices A err (g) := V T err A(g) V err and R err (g) := V T err R(g) V err that lead to the reduced error equation, which is solved by the reduced error matrix E(g), similarly to the step in (9) for the RBM. The approximation of the position controllability Gramian error E 11 (g) is then built as in (15). Using the error approximations E(g) and E 11 (g) from (15), we define the two error estimators ∆ 1 (g) and ∆ 2 (g) in (16). We use the expression ∆(g) in the following and leave it to the user to choose the more appropriate one. The first RBM combined with the EE-RBM results in Algorithm 2. We observe that the first steps of the RBM with the EE-RBM lead to rough error estimates, since the basis V 1,err includes only a few solutions. However, the larger and therefore better the basis V 1 is, the larger and more detailed the basis V 1,err becomes.
Remark 1. Numerical experiments suggest that adding the solution vectors of Z 1 (0), corresponding to the undamped system, to our basis V 1 leads to more robust results. Since this basis is independent of the damping values, we compute it beforehand and initialize the basis as V 1 = orth(Z 1 (0)).
Remark 2. In practice, the computation of ‖R r (g)‖ F can be performed efficiently as follows. We make use of the trace formulation of the Frobenius norm, the low-rank representations E(g) = V err E(g) V T err and P(g) = V P(g) V T , and the trace properties to obtain a representation of the residual norm that can be computed quickly.
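The idea behind this trace-based evaluation can be illustrated on a generic low-rank Lyapunov residual: writing R = U S U^T with a thin factor U allows ‖R‖ F to be computed from a small Gram matrix. The matrices below are random stand-ins, not the specific quantities of (13).

```python
import numpy as np

# Frobenius norm of R = A Z Z^T + Z Z^T A^T + B B^T without forming the
# n x n residual: R = U S U^T with thin U = [A Z, Z, B], so that
# ||R||_F^2 = tr(R R) = tr(S G S G) with the small Gram matrix G = U^T U.
rng = np.random.default_rng(1)
n, k, m = 50, 3, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
Z = rng.standard_normal((n, k))        # low-rank factor of an approximate Gramian
B = rng.standard_normal((n, m))

U = np.hstack([A @ Z, Z, B])
S = np.zeros((2 * k + m, 2 * k + m))
S[:k, k:2 * k] = np.eye(k)             # couples the A Z and Z blocks
S[k:2 * k, :k] = np.eye(k)
S[2 * k:, 2 * k:] = np.eye(m)          # the B B^T part
G = U.T @ U                            # small (2k+m) x (2k+m) Gram matrix
res_fast = np.sqrt(np.trace(S @ G @ S @ G))

# Reference: form the n x n residual explicitly.
R = A @ Z @ Z.T + Z @ Z.T @ A.T + B @ B.T
```

The cost is dominated by forming G, i.e., O(n (2k+m)^2) instead of the O(n^2) storage and work for the explicit residual.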

Adaptive basis building
In the previous section, we proposed an RBM leading to a space V that approximates the controllability space of the state x(t) of system (1) for all parameters in D. This space is then used to derive a reduced optimization problem.

Algorithm 2 Error estimation within the RBM
1: Choose any g 0 , g r 0 ∈ D Test , g 0 ≠ g r 0 .
2: Solve the Lyapunov equation (6) at g 0 to obtain Z 1 (g 0 ).

However, if the parameter set is equal to D = R ℓ + , i.e., there is no prior knowledge about the parameter set, the method is not applicable. Additionally, the optimization process might only use parameters from a subset of D, so that the basis V 1 of the space V from Section 2 contains controllability space information corresponding to parameters g that are not used throughout the optimization process; the basis V 1 might therefore be of unnecessarily large dimension. This motivates an adaptive scheme, i.e., a procedure that enriches the basis V 1 within the optimization process, based on the quality of the approximation at the considered parameters. To find out whether the current basis V 1 is adequate for the current parameter g, we again need an error estimator ∆(g).
In what follows, we first describe this adaptive RBM in Section 3.1, assuming an error estimator is provided. Then, in Section 3.2, an error estimator is proposed for this adaptive procedure.

Reduction using the adaptive reduced basis method
The idea of the adaptive RBM is to enrich the basis V 1 within the optimization process. Consequently, there is no offline phase for this methodology, since the reduced bases are constructed within the optimization steps. Firstly, we select a parameter g 0 ∈ D as the initial value for the optimization process and solve the Lyapunov equation in (6) for this parameter to obtain the low-rank factor Z 1 (g 0 ). We set the first basis to be V 1 = orth(Z 1 (g 0 )), which is used to define the reduced optimization problem in (10), which in turn depends on the solution of the reduced Lyapunov equation in (9). In contrast to the previous method, we add an additional stopping criterion within the optimization process that interrupts the procedure whenever the solution space corresponding to the current parameter g is not well approximated by the basis V 1 . To achieve this stopping, we modify the goal function as described by Algorithm 3. In every iteration of the minimization, we query the error estimate ∆(g) of the current parameter g, as described in Step 2.
If the error estimate is smaller than a given tolerance, we proceed with the function evaluation in Steps 5 and 6 to obtain the resulting function value J(g) and continue with the minimization. On the other hand, if the error estimate is larger than the tolerance, this means that the current basis V 1 does not approximate the solution space of the Lyapunov equation (6) for the current parameter g sufficiently well. Hence, we return that the minimization did not converge. In this case, we need to enrich the basis V 1 . Therefore, we solve the Lyapunov equation from (6) at this parameter g to obtain Z 1 (g) and define the updated basis V 1 = orth([V 1 , Z 1 (g)]). Consequently, we obtain a new optimization problem (10) that is defined by the new basis V 1 and the new projected Lyapunov equation from (9). Since the optimized function depends on the current basis V 1 , which changes during the optimization procedure, convergence problems may occur. In this case, we restart the optimization procedure after we enrich the basis and use the current parameter g as the initial value. We continue with this procedure until the optimum is reached.
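The interrupt-and-restart mechanism can be sketched as follows. For illustration only, the exact relative projection error of the Gramian stands in for ∆(g), a single-gain toy system replaces the paper's setting, and an exception signals the non-convergence that Algorithm 3 reports via the variable conv.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize_scalar

# Adaptive enrichment (sketch): restart the optimization whenever the
# current basis fails the accuracy check.
n = 4
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.eye(n)[:, 1:2]
B2 = np.eye(n)[:, :1]
C1 = np.ones((1, n))

def gramian_P11(g):
    D = 0.02 * K + g * (f @ f.T)
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
    B = np.vstack([np.zeros((n, 1)), B2])
    return solve_continuous_lyapunov(A, -B @ B.T)[:n, :n]

class NotConverged(Exception):
    """Signals that the basis must be enriched at parameter g."""
    def __init__(self, g):
        self.g = g

tol_f = 1e-8
# deliberately poor rank-1 starting basis, so that enrichment is triggered
V1 = np.linalg.qr(gramian_P11(1.0))[0][:, :1]

def J_reduced(g):
    P11 = gramian_P11(g)            # in practice: reduced solve plus estimator
    err = (np.linalg.norm(P11 - V1 @ (V1.T @ P11 @ V1) @ V1.T)
           / np.linalg.norm(P11))
    if err > tol_f:
        raise NotConverged(g)       # interrupt: basis inadequate at this g
    return (C1 @ V1 @ (V1.T @ P11 @ V1) @ V1.T @ C1.T).item()

res = None
for _ in range(10):                 # cap the number of restarts
    try:
        res = minimize_scalar(J_reduced, bounds=(0.1, 50.0), method='bounded')
        break
    except NotConverged as e:       # enrich the basis and restart
        V1 = np.linalg.qr(np.hstack([V1, gramian_P11(e.g)]))[0]
```

Each restart enlarges V 1 exactly at the parameter where the reduced model failed, so the basis only grows along the path the optimizer actually explores.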

Error Estimator
Finally, we take a closer look at the error estimator ∆(g) for the adaptive RBM. We follow the same idea as in Section 2.2 and run a second reduced basis method to generate a basis V 1,err that spans an approximation of the error space. The equation in (16) then defines the error estimators ∆ 1 (g) and ∆ 2 (g) corresponding to the bases V 1 and V 1,err . In this adaptive procedure, the basis V 1,err is enlarged whenever the basis V 1 is expanded. In this way, the error approximation, and thus the error estimator, becomes more accurate the closer we get to the optimizing parameter.
The detailed procedure is described in Algorithm 4. When we determine the first basis V 1 = orth(Z 1 (g 0 )), we solve a second Lyapunov equation at an arbitrary parameter g r 0 ∈ D with g r 0 ≠ g 0 to obtain the solution Z 1 (g r 0 ). To limit the possibilities of choosing g r 0 , we again define a finite subset D Test ⊂ D and pick the parameter g r 0 from this finite set D Test . However, we can also select the parameter g r 0 and the following parameters g r randomly if we want to avoid confinement to a parameter set D. We use the solution Z 1 (g r 0 ) and the basis V 1 to obtain the first error equation basis V 1,err = orth([V 1 , Z 1 (g r 0 )]). With the bases V 1 and V 1,err , we define the error estimate ∆(g) as in (16). The basis computation is described in Steps 1 to 5 of Algorithm 4. After computing the first bases V 1 and V 1,err , we start the optimization process of the reduced problem defined by Algorithm 3. This optimization yields either the minimizer g * or, if conv = false holds, the information that the optimization process did not converge and we need to enrich the bases. If the bases need to be expanded, in Steps 8 to 12 we enlarge the bases V 1 and V 1,err , where we choose in Step 12 the parameter g r ∈ D Test that results in the largest residual norm ‖R r (g)‖ F , with R r (g) defined as in (13). Afterwards, in Step 13 we compute the approximated energy response value and proceed with the minimization process.

Algorithm 3 Reduced energy response
Output: Energy response J(g), variable conv that shows whether the algorithm converged.

6: Set J(g) = tr(C V 1 P 11 (g) V T 1 C T ).
7: end if

Remark 3. As in the previous section, we solve the Lyapunov equation (6) at g = 0 ∈ R ℓ (undamped system) to obtain Z 1 (0). The vectors of Z 1 (0) are then added to the basis V 1 , which turns out to lead to a more robust basis.

Implementation details
In this section, we specify some implementation details. First, in Section 4.1, we describe a transformation that leads to a numerically advantageous system. Then, in Section 4.2, we make use of this structure to accelerate the sign function method that is used to solve the Lyapunov equations.

Modal representation
In order to simplify the computations and the numerical effort, we briefly describe a useful transformation that is used in the following. As shown in [42], there exists a transformation Φ, called the modal matrix, such that Φ T M Φ = I and Φ T K Φ = Ω 2 = diag(ω 1 2 , . . ., ω n 2 ).

Algorithm 4 Adaptive reduced basis method
Input: A : D → R n×n asymptotically stable, B ∈ R n×m , initial parameters g, g r ∈ D Test with g ≠ g r , tolerance tol f . Output: Minimizer g opt , energy response J(g opt ).
6: Apply an optimization method to the function given by Algorithm 3 to obtain the minimizer g opt , J(g opt ), and conv.
8: Solve the Lyapunov equation from (6) at g opt to obtain Z 1 (g opt ).
9: Set V 1 := orth([V 1 , Z 1 (g opt )]).
10: Determine g r := argmax g∈DTest ‖R r (g)‖ F .
11: Solve the Lyapunov equation from (6) at g r to obtain Z 1 (g r ).
12: Set V 1,err := orth([V 1,err , V 1 , Z 1 (g r )]).

13:
Apply an optimization method to optimize the function given by Algorithm 3 to obtain g opt , J(g opt ) and conv.

14: end while
The values ω 1 , . . ., ω n are the eigenvalues of the undamped system and are called eigenfrequencies, see [43]. The transformation matrix Φ is given via the spectral decomposition M −1/2 K M −1/2 = U Ω 2 U T as Φ = M −1/2 U, where Ω is as above. It holds that Φ T D int Φ = 2αΩ; that is, Φ diagonalizes the internal damping D int . Hence, this damping is called modal damping. The transformed mass matrix is the identity matrix, the transformed stiffness and internal damping matrices are diagonal, and the external damping matrix is written using its low-rank factors as (Φ T F ) G(g) (Φ T F ) T . Hence, the second-order system (1) is equivalent to the first-order system (3) with the correspondingly transformed matrices.
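The modal transformation can be checked numerically on a toy example. The matrices M and K below are illustrative, and the critical-damping form of D int is one common choice, assumed here as a stand-in for (2).

```python
import numpy as np
from scipy.linalg import eigh

# Modal transformation (sketch): Phi = M^{-1/2} U simultaneously
# diagonalizes M and K.
n = 5
M = np.diag(np.arange(1.0, n + 1))                       # diagonal mass matrix
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD stiffness matrix

Mih = np.diag(1.0 / np.sqrt(np.diag(M)))                 # M^{-1/2} (M diagonal here)
w2, U = eigh(Mih @ K @ Mih)                              # eigenvalues omega_i^2
Omega = np.diag(np.sqrt(w2))
Phi = Mih @ U                                            # modal matrix

# Internal damping as a multiple of the critical damping (assumed form):
# D_int = 2 alpha M^{1/2} (M^{-1/2} K M^{-1/2})^{1/2} M^{1/2}.
alpha = 0.01
Mh = np.diag(np.sqrt(np.diag(M)))
D_int = 2.0 * alpha * Mh @ U @ Omega @ U.T @ Mh
```

A direct computation confirms Phi.T @ M @ Phi = I, Phi.T @ K @ Phi = Omega^2, and Phi.T @ D_int @ Phi = 2 alpha Omega, matching the diagonal structure used in the text.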

Solving Lyapunov equations using sign function method
As described in the previous sections, we need to solve Lyapunov equations from (6) in order to compute the desired energy response (4) for specific parameters g ∈ D. Hence, in this section, we briefly review some numerical methods to solve a parameter-independent Lyapunov equation of the form (18). There are multiple methods for solving this kind of equation. If the matrix dimensions are sufficiently small, dense direct solvers such as the Hammarling method [17] or the Bartels-Stewart algorithm [1] are available. There are also dense iterative solvers such as the sign function method introduced in [4]. However, these methods are infeasible if the matrix dimensions are large. In this case, the alternating-direction implicit (ADI) method [24,28] and Krylov subspace methods [35] are the state of the art. Moreover, for certain matrix structures, the sign function method can be applied in a way that exploits these structures to solve the Lyapunov equations efficiently. In this work, the latter method is the one of choice, since numerical experiments indicate that it leads to the fastest results in our setting.
Within the sign function method, which was derived in [10], the structure presented in (17) is used to solve the Lyapunov equations more efficiently. The sign function method determines a low-rank factor Z such that P ≈ ZZ T is an approximate solution of the Lyapunov equation in (18). We exploit the fact that the Lyapunov equation in (18) is equivalent to an equation in terms of sign(W), where sign(•) denotes the matrix sign function. We observe that the sign function of W provides the solution P ≈ ZZ T of the Lyapunov equation from (18) in its lower left block. The sign function can be computed by applying Newton's method (19), where c k is an acceleration factor that can be chosen, for example, as c k = (‖A k ‖ F / ‖A k −1 ‖ F ) 1/2 . The convergence of this method is shown in [31]. Newton's method as described in (19) is applied to compute sign(W). In order to improve the efficiency when computing the inverse A k −1 , the decomposition presented in (17) as well as the Sherman-Morrison-Woodbury formula are applied as described in [10]. The initial values are set to A 0 = A and B 0 = B, and the iteration follows (19). We stop this method if ‖A k + I‖ 2 ≤ tol, since A k converges to −I, or if a maximum number of iterations iter max is exceeded.
One disadvantage of this method is the fast growth of the dimension of the low-rank factor B_{k+1}. Therefore, even with internal truncation techniques, the method becomes rather slow if it does not converge after a few steps. Hence, in this work, we choose the maximum number of iterations iter_max to be quite small, so that the method is interrupted before the low-rank factors become too large. Consequently, the low-rank factor approximation is not guaranteed to be accurate; however, it still spans a space that approximates the solution space well enough, as we will show in the numerical examples.
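For illustration, the following is a minimal dense sketch of the sign function approach to a Lyapunov equation A P + P A^T + Q = 0 (Python/NumPy; the function name `sign_lyap` and the determinant-based scaling are our choices, and the actual implementation from [10] works with low-rank factors B_k and the structure (17), which this dense version omits):

```python
import numpy as np

def sign_lyap(A, Q, tol=1e-10, iter_max=50):
    """Solve A P + P A^T + Q = 0 for stable A via the matrix sign function.

    Newton's method on W = [[A^T, 0], [Q, -A]] preserves the block
    structure, so only the two blocks A_k and Q_k are iterated; the
    lower-left block of sign(W) converges to 2 P.
    """
    n = A.shape[0]
    Ak, Qk = A.T.copy(), Q.copy()
    for _ in range(iter_max):
        Ainv = np.linalg.inv(Ak)
        # determinant-based scaling c_k accelerates convergence
        c = abs(np.linalg.det(Ak)) ** (1.0 / n)
        Ak, Qk = 0.5 * (Ak / c + c * Ainv), 0.5 * (Qk / c + c * Ainv.T @ Qk @ Ainv)
        if np.linalg.norm(Ak + np.eye(n), 2) <= tol:
            break
    return 0.5 * Qk
```

The stopping criterion ‖A_k + I‖_2 ≤ tol mirrors the one used above; iter_max plays the role of the interruption threshold discussed in the text.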

Examples and numerical results
In this section, we illustrate the accelerations attained by applying the methods presented in this paper. To this end, we consider two examples that are presented in [39]. The computations have been carried out on a computer with two Intel Xeon Silver 4110 CPUs running at 2.1 GHz and 192 GB of main memory. The experiments use MATLAB® 2021a.

Example 1
The first example was introduced in [39] and arises in mechanical constructions with n consecutive masses. Each mass m_j is connected to its direct neighbor masses m_{j−1} and m_{j+1} by springs with stiffness values k_j and k_{j+1}. Additionally, each mass is connected by springs with stiffness values k_{j−1} and k_{j+2} to the masses next to the neighboring masses, m_{j−2} and m_{j+2}. The outermost masses are connected to fixed objects via springs with constants 2k_1 and 2k_n. This construction results in the mass and stiffness matrices below. We consider an example of dimension n = 1900 with stiffness constants k_j = 500, j = 1, …, n, and mass values chosen as given below. The internal damping D_int is built as described in Equation (2) with scaling factor α = 0.005. We consider external disturbance forces that act on the consecutive masses m_471 to m_480. Hence, in the input matrix B, the entries at positions 471 to 480 are set to B(471:480, 1:10) = diag(10, 20, 30, 40, 50, 50, 40, 30, 20, 10).
All other entries of B are equal to zero. Consequently, we have an (n × 10)-dimensional input matrix B in which the largest disturbance magnitude is applied to the mass in the center, while the disturbance magnitude decreases towards the outer masses. To observe the system behavior, we consider the displacements of the states x_100(t), x_200(t), …, x_1800(t). Hence, the output matrix C is (18 × n)-dimensional and has zero entries everywhere except at the positions (1, 100), (2, 200), …, (18, 1800), where the entries are equal to one. Now, we consider the external dampers that we want to optimize. We place four dampers at the positions j, j + 1, k, k + 1, where j and k can take the values {(j, k) | j ∈ {50, 150, 250, 350}, k ∈ {850, 950, …, 1850}}, so that we obtain 44 possible damping configurations. For each damping configuration, we optimize the damping values individually. The damping gains g consist of two values g_1 and g_2, where the dampers at the j-th and (j+1)-st positions have the damping value g_1 and the dampers at the k-th and (k+1)-st positions the damping value g_2. We assume that the damping values g_1 and g_2 lie in the interval [500, 4000].
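The construction of B and C just described can be sketched in a few lines (Python/NumPy for illustration; the paper's experiments are in MATLAB, and the 1-based indices above become 0-based here):

```python
import numpy as np

n = 1900
# input matrix: 10 disturbance channels acting on masses 471..480,
# largest magnitude in the middle of the block
B = np.zeros((n, 10))
B[470:480, :] = np.diag([10, 20, 30, 40, 50, 50, 40, 30, 20, 10])

# output matrix: observe the displacements x_100, x_200, ..., x_1800
C = np.zeros((18, n))
for i in range(1, 19):
    C[i - 1, 100 * i - 1] = 1.0
```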
To optimize the damping gains for the different damping configurations, we use the MATLAB function fminfun, where we stop the minimization process if the difference between two successive function values or damping gains is smaller than a tolerance of 10^{−4}. We start the optimization process at g_0 = [1000 1000]^T for all damping configurations. To solve the Lyapunov equations from (6), we use the sign function method presented in Section 4.2 with tol = 10^{−6} and a maximum iteration number of iter_max = 10 because of the fast dimension growth within the method. The tolerance for the function value error, which indicates whether a basis is sufficiently detailed, is set to tol_f = 10^{−3}. As the test-parameter set D_Test for the RBM, we use 36 uniformly distributed parameters in [500, 4000] × [500, 4000].
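To make the role of the reduced basis concrete, the following sketch shows the underlying idea of approximating a controllability Gramian by Galerkin projection onto a basis V (Python/SciPy for illustration; the function name and the plain residual-norm indicator are our simplifications of the estimators from the previous sections):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def reduced_gramian(A, B, V):
    """Approximate the solution of A P + P A^T + B B^T = 0 by projecting
    onto range(V), where V has orthonormal columns: solve the small
    reduced Lyapunov equation, lift P ~ V P_r V^T, and return the
    Frobenius norm of the residual as a simple error indicator."""
    Ar = V.T @ A @ V
    Br = V.T @ B
    # scipy solves A X + X A^H = Q, so pass the negated right-hand side
    Pr = solve_continuous_lyapunov(Ar, -Br @ Br.T)
    P = V @ Pr @ V.T
    res = np.linalg.norm(A @ P + P @ A.T + B @ B.T, 'fro')
    return P, res
```

During the optimization, only the small reduced equation must be solved per parameter g; the basis is enriched whenever the indicator signals an insufficient approximation.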
After the first step of the RBM for the 11th damper configuration, we display the quality of the error estimator in Figure 1. We observe that the relative error in the position controllability Gramian ‖E_11(g)‖_F / ‖P_11(g)‖_F and the corresponding estimate are very close, so the error is well approximated. On the other hand, the energy response error is underestimated, since we are considering an error estimator and not an error bound. However, for our purpose the quality is good enough, since the energy response error and its estimate are of a similar magnitude. Since the initial value g_0 is known, we choose this parameter as the first one evaluated within the RBM. The first parameter g_0^r, used to obtain a first error equation basis, is chosen as g_0^r = [100 100]^T within the RBM and the adaptive RBM. Among the state-of-the-art methods, the symmetric IRKA approach for second-order systems (sym2IRKA) leads to the highest acceleration rates in the optimization of external dampers and their viscosities. Hence, in the following, we compare our approach to the sym2IRKA method from [39]. Since its code is not available, we implemented the method ourselves and compare the results to the best implementation and configuration we were able to generate. The sym2IRKA method is a projection method as well, which enriches the corresponding basis by vectors generated using an IRKA approach. It also starts the optimization process at g_0, and for every enriched basis a new optimization process is started.
In every iteration, 60 vectors are added to the basis, and the method stops if the relative error between two consecutive optima is smaller than the tolerance tol_IRKA = 10^{−3}. The relative errors between the optimal damping gain and the approximations obtained using the methods presented in the previous sections are shown in Figure 2. We observe that all errors are smaller than 10^{−2} and hence that the damping values are sufficiently accurate for our purposes. Now, we evaluate the optimization times, which for the RBM include the offline and the online phase. Offline, we determine a low-rank factor of the solution of the Lyapunov equation (6) for the undamped system. This low-rank factor is included in the bases of the RBM, the adaptive RBM, and the sym2IRKA approach. Since this low-rank factor is computed beforehand, its computation time of 5.2 seconds is not taken into account in any of these methods. We compare the times in Figure 3 and the acceleration rates for the different methods in Figure 4. The MATLAB solver lyapchol is used to solve the Lyapunov equations from (6). We observe that the reduced optimization procedures lead to significantly faster results, with acceleration rates of, on average, 339 for the reduced basis method and 208 for the adaptive procedure. These results are comparable with those from [39]. The acceleration rate of our sym2IRKA implementation is 78 on average, which is less improvement than with our methods; however, we note that in [39] the acceleration rate for this example was 346. We also observe that the adaptive reduced basis method is slower than the one where the offline and online phases are decoupled. This is because the first basis, which is equal for both methods, is already sufficiently good, while the adaptive method incurs additional computational costs for the evaluations of the error estimators. We want to mention that the adaptive method can still be advantageous, since it does not need a parameter set D in advance. Only the error estimator requires the test-parameter set D_Test ⊂ D, which can be replaced by choosing arbitrary parameters in a certain range around the damping values attained during the optimization process.
In Figure 5, the function values for all 44 damping configurations are evaluated. We see that the optimal damping configuration is the 34th one, which has the damping positions j = 350, k = 850.

Example 2
The second example contains a mass oscillator with 2d + 1 = n masses and n + 2 springs. We have two rows of d consecutive masses, m_1, …, m_d and m_{d+1}, …, m_{2d}, that are connected by springs. The springs of the first row all have the stiffness value k_1 and the springs of the second row the stiffness value k_2, where the first mass of each row is connected by such a spring to a fixed object. The last masses m_d and m_{2d} are connected to a mass m_{2d+1} = m_n by springs with stiffness constants k_1 and k_2, while the mass m_n is connected to a fixed object via a spring with constant k_1 + k_2 + k_3. This construction results in the stiffness matrix below. In this example, we consider four damping values that are optimized. We place two dampers in the first row, between the masses m_j and m_{j+5} and between the masses m_{j+20} and m_{j+25}. For the second row we follow the same pattern and add two dampers between the masses m_k and m_{k+5} and between m_{k+20} and m_{k+25}. Consequently, we optimize four damping values g_1, g_2, g_3, g_4 that are collected in g ∈ R^4. The corresponding damping position matrix is then of the form given below. This setting leads to 28 different damping configurations. We assume that the damping values g_1, g_2, g_3, g_4 lie in the interval [350, 7000]. For the optimization process, we set a tolerance of 5 · 10^{−4} and start at g_0 = [1000 1000 1000 1000]^T for all damping configurations. The tolerance for the function value error, which indicates whether a basis is sufficiently detailed, is set to tol_f = 10^{−2}. As the test-parameter set D_Test for the RBM, we use 21 uniformly distributed parameters in [350, 7000]^4. The first parameter g_0^r, used to obtain a first error equation basis, is chosen as g_0^r = [100 100 100 100]^T within the RBM and the adaptive RBM.
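As an illustration of how such damper placements translate into the matrix D_ext(g), the following sketch assembles the common rank-one model in which a damper of gain g acting between masses i and j contributes g (e_i − e_j)(e_i − e_j)^T (Python/NumPy; the function name is ours, and the exact position matrix used in the paper may differ):

```python
import numpy as np

def external_damping(n, pairs, gains):
    """Assemble D_ext(g) = sum_l g_l (e_i - e_j)(e_i - e_j)^T for dampers
    acting between the mass pairs (i, j); the result is symmetric
    positive semidefinite for nonnegative gains."""
    D = np.zeros((n, n))
    for (i, j), g in zip(pairs, gains):
        e = np.zeros(n)
        e[i], e[j] = 1.0, -1.0
        D += g * np.outer(e, e)
    return D
```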
We evaluate the quality of the error estimate after the first step of the RBM for the fifth damper configuration in Figure 6. We observe that the relative error in the position controllability Gramian and the corresponding estimate are very close, so the error is well approximated. In this example, the error in the energy response is also well approximated. We compare again the results generated by the methods presented above with the symmetric IRKA approach for second-order systems (sym2IRKA) [39]. In every iteration, 120 vectors are added to the basis, and the method stops if the relative error between two consecutive optima is smaller than the tolerance tol_IRKA = 10^{−3}. The relative errors between the optimal damping gain and the approximations obtained using the methods presented in the previous sections are shown in Figure 7. Additionally, we evaluate the optimization times. Outside of the applied methods, we determine a low-rank factor of the solution of the Lyapunov equation (6) for the undamped system. This low-rank factor is included in all the bases and is not taken into account in the time measurements; computing it takes 54 seconds. We compare the optimization times and the acceleration rates for the different methods in Figure 8 and Figure 9, respectively. The acceleration rates are, on average, 89 for the reduced basis method and 51 for the adaptive procedure. These results are comparable with those from [39]. The acceleration rate of our sym2IRKA implementation is 4 on average, which is less improvement than with our methods. However, in [39], acceleration rates of up to 208 were achieved for this example, which we could not reproduce.
In Figure 10, the function values for all 28 damping configurations are evaluated. We see that the optimal damping configuration is the 25th one, which has the damping positions j = 850, k = 1450.

Conclusion
In this article, we have dealt with the damping optimization problem for large-scale vibrational systems. We have presented two different approaches based on the reduced basis method (RBM) that can be used to find optimal damping values in a reasonable time. In the first method, the RBM for Lyapunov equations has been integrated into the calculation of the energy response, and, subsequently, a reduced objective functional has been optimized. In the second method, we have integrated the RBM into the optimization process so that the basis is only enriched when necessary. In addition, we have derived new error estimators that were used to evaluate the quality of the reduced energy response and the reduced position controllability Gramian. Finally, we have applied our methods to two mechanical systems and have observed that our approaches improve on, or are at least comparable to, existing methods.

Figure 1 :
Figure 1: Error estimator for Example 1 for the first damping configuration and the first step of the RBM.

Figure 5 :
Figure 5: Function values obtained in Example 1

Figure 6 :
Figure 6: Error estimator for Example 2 for the first damping configuration and the fifth step of the RBM.

Figure 10 :
Figure 10: Function values obtained in Example 2