1 Introduction

Sensitivity analysis of eigenvalues and their associated eigenvectors is a significant research topic in engineering and the natural sciences. As modern technologies and increasingly complex projects continue to evolve, engineers increasingly need to understand how alterations in design parameters influence the dynamic characteristics of the systems under study. Sensitivity analysis is a vital tool employed in various domains, including structural design [1], model updating [2], damage detection, optimization procedures [3], identification [4], and the treatment of uncertain design parameters [5].

An overview of sensitivity analysis methods is available in the review paper [6]. One of the tools used in sensitivity analysis, also employed in this article, is the computation of derivatives of quantities that describe the structural response. An early work, in which the authors computed first-order derivatives for undamped systems with symmetric matrices using the modal expansion technique, is [7]. The paper by Nelson [8] presents a method for obtaining derivatives of eigenvectors for distinct eigenvalues, in which the derivative is expressed as the sum of a particular and a homogeneous part. Subsequent works based on this approach are categorized as Nelson-type methods. In [9], an enhancement of this method was introduced to apply it to systems with repeated eigenvalues. In typical engineering systems, especially in cases involving system symmetry, repeated eigenvalues are relatively common. However, they present certain difficulties in calculating derivatives of eigenvectors: eigenvectors associated with repeated eigenvalues are not unique, and computing derivatives necessitates identifying the appropriate adjacent eigenvectors to ensure stable control of eigenvector changes. Papers [10, 11] improved and modified the method proposed in [9] by incorporating second-order sensitivity analysis of eigenvalues. Further research extended the discussed method to non-self-adjoint systems [12]. In [13], the authors introduced a new normalization condition for complex eigenvectors with both distinct and repeated eigenvalues. The paper [14] extends the method of determining the sensitivity of eigenvectors through the calculation of particular and homogeneous solutions to cases where the first derivatives of the eigenvalues are also repeated. This is achieved by utilizing information from the triple differentiation of the eigenvalue problem.
Such a specific case, where both eigenvalues and their first derivatives are repeated, was also explored in [15,16,17]. Typically, the sensitivity of eigenvalues is calculated by solving an additional eigenvalue problem, and its solution provides the transformation matrix for obtaining new eigenvectors. In the case of repeated derivatives of eigenvalues, the transformation matrix is not unique, necessitating the use of additional algorithms to determine the eigenvector sensitivity.

The work in [18] presents a generalization of the method for linear, non-viscously damped systems with repeated eigenvalues. The authors introduced a normalization condition for eigenvectors in such systems because, in this context, the eigenvectors do not satisfy the classical orthogonality condition. The work also considers non-self-adjoint systems.

In addition to Nelson-type methods, various other approaches have been developed for calculating derivatives of eigenvectors for repeated eigenvalues. In [19], a singular value decomposition approach was applied to determine the required basis of eigenvector space for computing derivatives of eigenvectors for repeated eigenvalues. In [20], the authors introduced a method suitable for both distinct and repeated eigenvalues, although it primarily focuses on the calculation of eigenvalue sensitivities. This method is based on the characteristic polynomial of the considered eigenvalue problem and, unlike the Nelson method, does not require an eigenvector. In [21], a numerical method using an iterative procedure was proposed, considering both distinct and repeated eigenvalues. Lee [22] introduced the application of the adjoint method for calculating sensitivities of repeated eigenvalues and their corresponding eigenvectors. The advantage of this method is that it does not necessitate determining second-order eigenvalue sensitivities or linear combinations of eigenvectors. On the other hand, in [23], a method was proposed to calculate eigenvector sensitivities for repeated eigenvalues without the need for using adjacent eigenvectors.

In the works [24, 25], the calculation of sensitivities of eigenvalues and their associated eigenvectors was proposed through the solution of a system of linear algebraic equations. These works considered distinct [24] and repeated [25] eigenvalues, respectively. In [26], a method was introduced to determine particular solutions for eigenvector derivatives, leading to a significant reduction in the condition number of the coefficient matrix. Simultaneously, an error was identified in the derivation of the normalization condition derivatives in [25].

In the work [27], sensitivities of defective repeated eigenvalues and their corresponding eigenvectors were considered within the context of a square eigenvalue problem. Sensitivity analysis for the square eigenvalue problem is also discussed in [28] and in [29], considering non-self-adjoint systems.

While in most cases, researchers primarily focus on first-order sensitivities, there are situations where these prove insufficient for accurately predicting changes in the structural response. In such cases, it becomes necessary to determine second-order sensitivities. Formulas for calculating second-order sensitivities of repeated eigenvalues and their corresponding eigenvectors for systems with damping can be found in [30].

Li et al. proposed an algorithm in [31] for non-self-adjoint systems, enabling the independent computation of left and right eigenvector derivatives. This method was applied to both distinct and repeated eigenvalues.

The work in [32] introduced an extension of the method for computing particular and homogeneous solutions of eigenvector derivatives for repeated eigenvalues in systems where square eigenvalue problems arise. Meanwhile, in [33], an enhancement of this method was proposed, significantly reducing the condition number of the coefficient matrix required for calculating particular solutions. In [34], this solution was extended to non-viscously damped systems.

The work in [35] presents a generalization of this method to address the general nonlinear eigenproblem. It considers both distinct and repeated eigenvalues. However, in [36], the authors introduced a new normalization condition and derived formulas for calculating second-order sensitivities, but only for distinct eigenvalues. For repeated values, sensitivities were derived for first-order eigenvalues and eigenvectors. The systems considered involved damping leading to a square eigenvalue problem.

The work in [37] provides a solution for calculating complex eigenvector derivatives for both distinct and repeated eigenvalues, with a specific focus on asymmetric systems. The solution suggests the imposition of consistent normalization conditions on the left and right eigenvectors and the computation of sensitivities using the chain rule.

The sensitivities of eigenvalues and eigenvectors are essential in various applications, including optimization problems. In [38], the authors analyzed the eigenmode optimization problem, considering cases involving repeated eigenvalues. The sensitivity of eigenvectors was applied within a gradient-based optimization algorithm. One important aspect of analyzing systems with repeated eigenvalues is the use of optimization methods that effectively solve problems whose objective functions or constraints are non-differentiable. Conventional optimization methods may encounter difficulties due to the lack of continuity. In the paper [39], numerical methods for non-differentiable problems are presented. The authors analyzed a clamped–clamped column and introduced a method for structural optimization in this context, overcoming the challenges associated with the non-differentiability of repeated eigenvalues. In the work [40], mass optimization of a truss was conducted under frequency constraints, using a particle swarm optimization algorithm. The selected method is effective and eliminates the need to determine gradients of the objective function. In [41], a genetic algorithm based on Kirsch's approximations was utilized for structural optimization with frequency constraints. The study [42] focuses on topology optimization, where the authors proposed implementing the bundle method to solve topology optimization problems for continuous structures with distinct and repeated eigenvalues.

This work focuses on systems containing viscoelastic elements. The equation describing the eigenvalue problem for such systems is as follows:

$${\mathbf{M}}\ddot{\mathbf{q}}\left( t \right) + {\mathbf{C}}\dot{\mathbf{q}}\left( t \right) + \int\limits_{0}^{t} {\widetilde{{\mathbf{G}}}\left( {t - \tau } \right)\dot{\mathbf{q}}\left( \tau \right)d\tau } + {\mathbf{Kq}}\left( t \right) = {\mathbf{0}}$$
(1)

where \({\mathbf{q}}\left( t \right)\) is the displacement vector, \({\mathbf{M}}\) is the mass matrix, \({\mathbf{K}}\) is the stiffness matrix, \({\mathbf{C}}\) is the damping matrix associated with structural damping, and \({\tilde{\mathbf{G}}}\left( {t - \tau } \right)\) is the matrix of the damping kernel function. In structural dynamics, damping systems can consist of multiple mechanical components characterized by various levels of energy dissipation, so describing their behavior may require combining several damping models. The form of \({\tilde{\mathbf{G}}}\left( {t - \tau } \right)\) depends on the damping model chosen for the viscoelastic material. Among others, the literature describes the exponential damping model [43, 44], the Biot model [45, 46], the Golla–Hughes–McTavish (GHM) model [47, 48], and the anelastic displacement fields (ADF) model [49, 50]. Classical and fractional rheological models are described in [51,52,53]. A review of various non-viscous damping functions expressed in the frequency domain can be found in [54, 55]. In this study, it is assumed that a single damping model describes the damping, which, as shown in [52], is sufficient to capture the behavior of viscoelastic dampers. However, the proposed approach can also be applied to systems where more than one damping model is considered.
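As an illustration of how such a kernel enters the computation, the following minimal sketch evaluates the exponential damping model cited above in both the time and Laplace domains. The relaxation parameter `mu` and the damping coefficient matrix `Cv` are hypothetical values chosen only for the example:

```python
import numpy as np

def G_time(t, mu, Cv):
    """Exponential damping kernel G~(t) = mu * exp(-mu * t) * Cv."""
    return mu * np.exp(-mu * t) * Cv

def G_laplace(s, mu, Cv):
    """Laplace transform of the exponential kernel: G(s) = mu / (s + mu) * Cv."""
    return mu / (s + mu) * Cv

# Hypothetical 2-DOF damping coefficient matrix and relaxation parameter.
Cv = np.array([[2.0, -1.0], [-1.0, 2.0]])
mu = 5.0

# As s -> 0 the kernel contributes its full static damping, G(0) = Cv;
# as s -> infinity its contribution vanishes.
print(G_laplace(0.0, mu, Cv))
```

The limits printed above reflect the non-viscous character of the model: the damping force depends on the deformation history rather than on the instantaneous velocity alone.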

Works [56, 57] demonstrated methods for calculating second-order sensitivities in systems with viscoelastic elements; however, they did not explore cases where the solution to the eigenvalue problem contains repeated eigenvalues. The present work is dedicated to precisely these cases, offering a detailed analysis of sensitivities for repeated eigenvalues and their corresponding eigenvectors in systems with viscoelastic damping elements. Sensitivity analysis is an extremely useful tool in the engineering design process, as it enables understanding how changes in design parameters affect the dynamic characteristics of the studied systems, which is essential in structural design, model updating, damage detection, and optimization procedures. The article analyzes both first- and second-order sensitivities. The latter, in particular, is applicable in many scenarios, such as the analysis of systems with large uncertainty ranges [58], where first-order analysis may not always yield sufficiently accurate results. The application of higher-order eigenvalue sensitivity is also demonstrated in [59] for the harmonic analysis of viscoelastically damped systems.

The generalization of the formulation presented here for systems with viscoelastic elements has not been published previously. The sensitivity of the eigenvalues is obtained by solving an additional eigenvalue problem, while the sensitivity of the eigenvectors is calculated based on Nelson's formulation. Compared to previous works, the presented method addresses particular cases occurring with repeated eigenvalues. The analysis encompasses both classical rheological models for describing viscoelastic properties and models involving fractional derivatives, which is a novel aspect of this work. Another innovative aspect is the analysis of a system with damping elements in which the sensitivities of the eigenvalues are also repeated. Additionally, for the discussed systems, this work introduces, for the first time, formulas for calculating second-order sensitivities of repeated eigenvalues and their corresponding eigenvectors, and provides examples in which second-order sensitivity proves essential for obtaining accurate results. Moreover, given the ill-conditioned coefficient matrix of the considered problem, an additional scaling factor has been introduced that significantly improves the results, as validated by the provided examples.

The article is organized as follows: Sect. 2 presents sensitivity analysis for systems with viscoelastic elements, focusing on scenarios where the sensitivities of repeated eigenvalues are distinct; Sect. 3 addresses cases where the sensitivities of the eigenvalues are also repeated; Sect. 4 describes the application of the presented method to structures with viscoelastic elements; Sect. 5 offers examples and a comparison of the problem’s condition numbers; finally, Sect. 6 presents the conclusions. The Appendices include the stability proof of the presented method, its applicability to cases with distinct eigenvalues, and a diagram illustrating the use of the provided formulas to determine sensitivities.

2 Sensitivity analysis of repeated eigenvalues and corresponding eigenvectors – distinct derivatives of eigenvalues

2.1 Sensitivity of the first order

By applying the Laplace transform with zero initial conditions, Eq. (1) can be written in the following form:

$$\left( {s^{2} {\mathbf{M}} + s{\mathbf{C}} + {\mathbf{G}}\left( s \right) + {\mathbf{K}}} \right){\overline{\mathbf{q}}}\left( s \right) = {\mathbf{0}} \quad {\text{or}} \quad {\mathbf{D}}\left( s \right){\overline{\mathbf{q}}}\left( s \right) = {\mathbf{0}}$$
(2)

where s represents the Laplace variable, \({\mathbf{G}}\left( s \right)\) is a matrix whose structure depends on the adopted model for the viscoelastic element (detailed matrix forms will be presented in Sect. 5), \({\overline{\mathbf{q}}}\left( s \right)\) is the Laplace transform of \({\mathbf{q}}\left( t \right)\), and \({\mathbf{D}}\left( s \right) = s^{2} {\mathbf{M}} + s{\mathbf{C}} + {\mathbf{G}}\left( s \right) + {\mathbf{K}}\) is a dynamic stiffness matrix. If the considered system is under-critically damped and has n degrees of freedom, the solution to the eigenvalue problem (2) consists of 2n complex conjugate eigenvalues and their corresponding eigenvectors. This solution also includes cases with repeated eigenvalues. For any eigenvalue \(\lambda_{i}\), the eigenvector \({\overline{\mathbf{q}}}_{i}\) can be defined using Eq. (2):

$$\left( {\lambda_{i}^{2} {\mathbf{M}} + \lambda_{i} {\mathbf{C}} + {\mathbf{G}}\left( {\lambda_{i} } \right) + {\mathbf{K}}} \right){\overline{\mathbf{q}}}_{i} = \bf{0}$$
(3)

and the normalization condition can be expressed as:

$${\overline{\mathbf{q}}}_{i}^{T} \frac{{\partial {\mathbf{D}}\left( {\lambda_{i} } \right)}}{\partial s}{\overline{\mathbf{q}}}_{i} = 1$$
(4)

where \(\frac{{\partial {\mathbf{D}}\left( {\lambda_{i} } \right)}}{\partial s} = 2\lambda_{i} {\mathbf{M}} + {\mathbf{C}} + \frac{{\partial {\mathbf{G}}\left( {\lambda_{i} } \right)}}{\partial s}\).
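The normalization (4) fixes the otherwise arbitrary scaling of a complex eigenvector. A minimal numerical sketch, assuming for simplicity a purely viscous system (\({\mathbf{G}} \equiv {\mathbf{0}}\), so \({\mathbf{D}}(s) = s^{2}{\mathbf{M}} + s{\mathbf{C}} + {\mathbf{K}}\)) with hypothetical 2-DOF matrices, solves the quadratic eigenproblem via the standard state-space linearization and rescales one eigenvector to satisfy the condition:

```python
import numpy as np

# Hypothetical underdamped 2-DOF system; with G = 0, D(s) = s^2 M + s C + K
# and dD/ds = 2 s M + C.
M = np.diag([2.0, 1.0])
C = np.array([[0.4, -0.1], [-0.1, 0.3]])
K = np.array([[8.0, -3.0], [-3.0, 5.0]])
n = M.shape[0]

# Linearize the quadratic eigenproblem: A z = lambda B z with z = [lambda q; q].
A = np.block([[-C, -K], [np.eye(n), np.zeros((n, n))]])
B = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
lams, Z = np.linalg.eig(np.linalg.solve(B, A))

lam = lams[0]
q = Z[n:, 0]                      # lower block of z holds the eigenvector q
dDds = 2 * lam * M + C            # dD/ds evaluated at the eigenvalue
q = q / np.sqrt(q @ dDds @ q)     # rescale so that q^T (dD/ds) q = 1
print(q @ dDds @ q)               # ~ 1 + 0j
```

Note that the bilinear form uses the transpose, not the conjugate transpose, in accordance with Eq. (4) for complex eigenvectors.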

In the cases where the eigenproblem has repeated eigenvalues \(\lambda_{1} = \lambda_{2} = \ldots = \lambda_{m} = \lambda\), the relationship (2) can be rewritten in the following form:

$${\mathbf{MQ}}{\varvec{\Lambda }}^{2} + {\mathbf{CQ}}{\varvec{\Lambda }} + {\mathbf{KQ}} + {\mathbf{G}}\left( \lambda \right){\mathbf{Q}} = {\mathbf{0}} \quad {\text{or}} \quad {\mathbf{D}}\left( \lambda \right){\mathbf{Q}} = {\mathbf{0}},$$
(5)

where \({\mathbf{D}}\left( \lambda \right) = \lambda^{2} {\mathbf{M}} + \lambda {\mathbf{C}} + {\mathbf{G}}\left( \lambda \right) + {\mathbf{K}}\), \({{\varvec{\Lambda}}} = {\text{diag}}\left[ {\lambda_{1} , \lambda_{2} , \ldots ,\lambda_{m} } \right] = \lambda {\mathbf{I}}\) is the diagonal matrix containing the repeated eigenvalues, and \({\mathbf{Q}} = \left[ {{\overline{\mathbf{q}}}_{1} ,{\overline{\mathbf{q}}}_{2} , \ldots ,{\overline{\mathbf{q}}}_{m} } \right]\) is the matrix of their associated eigenvectors. The normalization condition for the matrix \({\mathbf{Q}}\) follows from (4) as:

$${\mathbf{Q}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{\mathbf{Q}} = {\mathbf{I}}$$
(6)

where \({\mathbf{I}}\) is the identity matrix of dimension \(m \times m\), and \(\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s} = 2\lambda {\mathbf{M}} + {\mathbf{C}} + \frac{{\partial {\mathbf{G}}\left( \lambda \right)}}{\partial s}\).

A characteristic feature of the solution to Eq. (5) in cases of repeated eigenvalues is that their corresponding eigenvectors are not unique. Therefore, when it becomes necessary to calculate their derivatives, it is essential to find the appropriate adjacent eigenvectors that allow for controlling changes in the eigenvectors. These adjacent eigenvectors can be defined as:

$${{\varvec{\Phi}}} = {\mathbf{Q}}{\varvec{\upbeta}}_{1}$$
(7)

where \({\varvec{\upbeta}}_{1}\) is the transformation matrix that satisfies the condition:

$${\varvec{\upbeta}}_{1}^{T} {\varvec{\upbeta}}_{1} = {\mathbf{I}}$$
(8)

The new vector \({{\varvec{\Phi}}}\) is also a solution to the eigenvalue problem (5) and must satisfy the normalization condition (6). Equations (5) and (6) can now be expressed in the following form:

$${\mathbf{M\Phi }}{\varvec{\Lambda }}^{2} + {\mathbf{C\Phi }}{\varvec{\Lambda }} + {\mathbf{K\Phi }} + {\mathbf{G}}\left( \lambda \right){{\varvec{\Phi}}} = {\mathbf{0}} \quad {\text{or}} \quad {\mathbf{D}}\left( \lambda \right){{\varvec{\Phi}}} = {\mathbf{0}}$$
(9)
$${{\varvec{\Phi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Phi}}} = {\mathbf{I}}$$
(10)

The next step involves finding the transformation matrix \({\varvec{\beta}}_{1}\). To do so, we differentiate Eq. (9) with respect to the change in the design parameter \(p_{k}\):

$$\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{{\varvec{\Phi}}} + \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Phi}}}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} + {\mathbf{D}}\left( \lambda \right)\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }} = {\mathbf{0}}$$
(11)

and then, we left-multiply it by \({\mathbf{Q}}^{T}\). This leads to the subeigenvalue problem:

$${\overline{\mathbf{D}}}{\varvec{\upbeta}}_{1} = {\varvec{\upbeta}}_{1} \frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}$$
(12)

where \({\overline{\mathbf{D}}} = - {\mathbf{Q}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{\mathbf{Q}}\).
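The following sketch illustrates the structure of the sub-eigenproblem (12) on synthetic data: `Q` and `dD_dp` are made-up real matrices standing in for the eigenvector basis and the derivative of the dynamic stiffness matrix with respect to \(p_{k}\) (in the actual problem both are complex and come from Eqs. (5) and (6)). For a symmetric \({\overline{\mathbf{D}}}\), the eigenvectors returned already satisfy the orthogonality condition (8):

```python
import numpy as np

# Synthetic sketch of the sub-eigenproblem (12): Dbar beta1 = beta1 dLambda/dp_k,
# with Dbar = -Q^T (dD/dp_k) Q.  Q and dD_dp are hypothetical stand-ins.
rng = np.random.default_rng(0)
n, m = 6, 2
Q = rng.standard_normal((n, m))             # eigenvector basis (hypothetical)
A = rng.standard_normal((n, n))
dD_dp = (A + A.T) / 2                       # symmetric derivative of D w.r.t. p_k

Dbar = -Q.T @ dD_dp @ Q                     # m x m, symmetric
dlam, beta1 = np.linalg.eigh(Dbar)          # eigenvalue sensitivities + transformation

# beta1 satisfies the orthogonality condition (8) and diagonalizes Dbar.
print(np.allclose(beta1.T @ beta1, np.eye(m)))            # True
print(np.allclose(Dbar @ beta1, beta1 @ np.diag(dlam)))   # True
```

The eigenvalues `dlam` are the sensitivities \(\partial \lambda_{i}/\partial p_{k}\), and the columns of `beta1` define the adjacent eigenvectors \({{\varvec{\Phi}}} = {\mathbf{Q}}{\varvec{\upbeta}}_{1}\) of Eq. (7).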

The solutions to the additional eigenvalue problem (12) are the sensitivities of eigenvalues and the transformation matrix. To determine the sensitivities of eigenvectors, Eq. (11) is reformulated in the following form:

$${\mathbf{D}}\left( \lambda \right)\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }} = - \left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{{\varvec{\Phi}}} + \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Phi}}}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}} \right)$$
(13)

Due to the singularity of the matrix \({\mathbf{D}}\left( \lambda \right)\) in Eq. (13), the sensitivities of the eigenvectors \(\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }}\) cannot be determined directly. According to [8], the sensitivity of eigenvectors in cases of repeated eigenvalues can be expressed as:

$$\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }} = {\mathbf{V}}_{1} + {\varvec{\Phi}} {\mathbf{C}}_{1}$$
(14)

where \({\mathbf{V}}_{1}\) is a particular solution of Eq. (13), and \({\mathbf{C}}_{1}\) is a coefficient matrix for calculating the homogeneous solution. Each column of the matrix \({\mathbf{V}}_{1}\) is a linear combination of the eigenvectors other than those in \({{\varvec{\Phi}}}\). Hence, based on Eq. (10), the particular solution must satisfy the following condition [34]:

$${{\varvec{\Phi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{\mathbf{V}}_{1} = \bf{0}$$
(15)

Substituting Eq. (14) into (13) results in:

$${\mathbf{D}}\left( \lambda \right){\mathbf{V}}_{1} = - \left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{{\varvec{\Phi}}} + \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Phi}}}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}} \right)$$
(16)

These two Eqs. (15) and (16) together form a system of equations, which can be expressed in matrix form as follows:

$$\left[ {\begin{array}{*{20}c} {{\mathbf{D}}\left( \lambda \right)} & {\kappa \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Phi}}}} \\ {\kappa {{\varvec{\Phi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}} & \bf{0} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathbf{V}}_{1} } \\ {\frac{1}{\kappa }\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathbf{S}}_{1} } \\ \bf{0} \\ \end{array} } \right]$$
(17)

where \({\mathbf{S}}_{1} = - \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{{\varvec{\Phi}}}\). The solution to this system of Eqs. (17) consists of the particular solution \({\mathbf{V}}_{1}\) and the sensitivities of eigenvalues \(\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}\). The proof that the matrix on the left-hand side of Eq. (17) is non-singular is presented in Appendix A. The factor κ is introduced because the elements of the coefficient matrix have different orders of magnitude. It is determined according to [14] as the ratio of the largest element in the \({\mathbf{D}}\left( \lambda \right)\) matrix to the largest element in \(\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}\). This significantly reduces the condition number of the matrix, as will be presented in Sect. 5.5.
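The effect of κ can be demonstrated on synthetic data. In the sketch below, `D` is a singular diagonal matrix with entries of order \(10^{6}\) and null vector `Phi`, while `dDds` has entries of order one; the magnitudes are chosen only to mimic the imbalance between the blocks of Eq. (17):

```python
import numpy as np

# Illustration of the scaling factor kappa in Eq. (17), on synthetic data.
# D is singular (null vector Phi = e1), its entries are of order 1e6, and
# dD/ds is of order 1 -- hypothetical magnitudes mimicking the imbalance.
n, m = 4, 1
D = np.diag([0.0, 1.0e6, 2.0e6, 3.0e6])
dDds = np.eye(n)                          # stand-in for dD(lambda)/ds
Phi = np.zeros((n, m)); Phi[0, 0] = 1.0   # normalized: Phi^T dDds Phi = I

def bordered(kappa):
    """Assemble the bordered coefficient matrix of Eq. (17)."""
    return np.block([[D, kappa * dDds @ Phi],
                     [kappa * Phi.T @ dDds, np.zeros((m, m))]])

# kappa = largest |entry| of D over largest |entry| of dD/ds, as in [14].
kappa = np.abs(D).max() / np.abs(dDds).max()
print(np.linalg.cond(bordered(1.0)))      # huge: the blocks mix magnitudes
print(np.linalg.cond(bordered(kappa)))    # orders of magnitude smaller
```

After solving the scaled system, the bottom block of the solution must be multiplied back by \(1/\kappa\) to recover \(\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}\), as indicated in Eq. (17).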

The computation of the coefficients of the matrix \({\mathbf{C}}_{1}\) is carried out in two steps: first the off-diagonal coefficients are calculated, and then the diagonal ones. To calculate the off-diagonal coefficients of the matrix \({\mathbf{C}}_{1}\), it is necessary to differentiate Eq. (9) twice with respect to \(p_{k}\) and then left-multiply by \({{\varvec{\Phi}}}^{T}\). As a result, we obtain:

$${\mathbf{C}}_{1} \frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} - \frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}{\mathbf{C}}_{1} + \frac{1}{2}\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }} = {\mathbf{R}}_{1}$$
(18)

where \({\mathbf{R}}_{1} = - \frac{1}{2}\left[ {{{\varvec{\Phi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }}{{\varvec{\Phi}}} + 2{{\varvec{\Phi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{\mathbf{V}}_{1} + 2{{\varvec{\Phi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }}{{\varvec{\Phi}}}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} + {{\varvec{\Phi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}{{\varvec{\Phi}}}\left( {\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}} \right)^{2} } \right]\).

Let the elements of the matrix \({\mathbf{C}}_{1}\) be denoted as \(c_{1ij}\), and the elements of the matrix \({\mathbf{R}}_{1}\) as \(r_{1ij}\). Then, the off-diagonal elements of the matrix \({\mathbf{C}}_{1}\) can be calculated using the formula:

$$c_{1ij} = \frac{{r_{1ij} }}{{\frac{{\partial \lambda_{j} }}{{\partial p_{k} }} - \frac{{\partial \lambda_{i} }}{{\partial p_{k} }}}}\; {\text{for}}\; i,j = 1,2, \ldots ,m,\; i \ne j$$
(19)

Since the matrix \(\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }}\) is diagonal, it does not contribute to the off-diagonal elements and is therefore omitted in Eq. (19). Note that Eq. (19) is applicable only when the sensitivities of the eigenvalues are distinct. The case where they are repeated is more complex and will be discussed in Sect. 3.

To determine the elements on the diagonal, we substitute the differentiated Eq. (10) into (14). This leads to:

$${\mathbf{C}}_{1}^{T} + {\mathbf{C}}_{1} = {\mathbf{R}}_{2}$$
(20)

where \({\mathbf{R}}_{2} = - \left[ {{{\varvec{\Phi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}{{\varvec{\Phi}}}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} + {{\varvec{\Phi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }}{{\varvec{\Phi}}}} \right]\).

From Eq. (20), the diagonal elements can be calculated as:

$$c_{1ii} = \frac{1}{2}r_{2ii}\, {\text{for}} \,i = 1,2, \ldots ,m$$
(21)
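Assembling \({\mathbf{C}}_{1}\) from Eqs. (19) and (21) can be sketched as follows. The matrices `R1`, `R2` and the sensitivity vector `dlam` are made-up \(m \times m\) data standing in for the right-hand sides of Eqs. (18) and (20); the off-diagonal entries are then verified against the commutator structure of Eq. (18):

```python
import numpy as np

# Sketch of assembling C1 from Eqs. (19) and (21); R1, R2 and dlam are
# hypothetical stand-ins for the quantities of Eqs. (18) and (20).
m = 3
dlam = np.array([1.0, 2.5, -0.7])          # distinct dLambda/dp_k values
rng = np.random.default_rng(1)
R1 = rng.standard_normal((m, m))
R2 = rng.standard_normal((m, m))

C1 = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if i != j:
            C1[i, j] = R1[i, j] / (dlam[j] - dlam[i])   # Eq. (19)
np.fill_diagonal(C1, 0.5 * np.diag(R2))                 # Eq. (21)

# Off-diagonal check against Eq. (18): (C1 L - L C1)_ij = r1_ij for i != j.
L = np.diag(dlam)
comm = C1 @ L - L @ C1
print(np.allclose(comm - np.diag(np.diag(comm)),
                  R1 - np.diag(np.diag(R1))))           # True
```

The division in Eq. (19) makes explicit why the eigenvalue sensitivities must be distinct: a repeated pair would produce a zero denominator, which is the case treated in Sect. 3.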

2.2 Sensitivity of the second order

To determine the second-order sensitivities of repeated eigenvalues and their associated eigenvectors, a similar analysis to that in Sect. 2.1 will be conducted. It is necessary to differentiate Eq. (11) once again, leading to:

$${\mathbf{D}}\left( \lambda \right)\frac{{\partial^{2} {{\varvec{\Phi}}}}}{{\partial p_{k}^{2} }} = - \left[ {\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }}{{\varvec{\Phi}}} + 2\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }}{{\varvec{\Phi}}}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} + 2\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }} + \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}{{\varvec{\Phi}}}\left( {\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}} \right)^{2} } \right. \left. { + 2\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} + \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Phi}}}\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }}} \right]$$
(22)

The solution to Eq. (22) can be presented similarly to the case of Eq. (14):

$$\frac{{\partial^{2} {{\varvec{\Phi}}}}}{{\partial p_{k}^{2} }} = {\mathbf{V}}_{2} + {\mathbf{\Phi C}}_{2}$$
(23)

where \({\mathbf{V}}_{2}\) is the particular solution to Eq. (22), and \({\mathbf{C}}_{2}\) is the coefficient matrix for the homogeneous solution of the equation. Substituting (23) into (22), we obtain the following formula:

$${\mathbf{D}}\left( \lambda \right){\mathbf{V}}_{2} + \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Phi}}}\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }} = {\mathbf{S}}_{2}$$
(24)

where \({\mathbf{S}}_{2} = - \left[ {\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }}{{\varvec{\Phi}}} + 2\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }}{{\varvec{\Phi}}}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} + 2\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }} + \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}{{\varvec{\Phi}}}\left( {\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}} \right)^{2} + 2\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}} \right]\).

Similar to the first-order analysis, the particular solution must satisfy the condition:

$${{\varvec{\Phi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{\mathbf{V}}_{2} = \bf{0}$$
(25)

Equations (24) and (25) can be expressed in matrix form as:

$$\left[ {\begin{array}{*{20}c} {{\mathbf{D}}\left( \lambda \right)} & {\kappa \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Phi}}}} \\ {\kappa {{\varvec{\Phi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}} & \bf{0} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathbf{V}}_{2} } \\ {\frac{1}{\kappa }\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }}} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathbf{S}}_{2} } \\ \bf{0} \\ \end{array} } \right]$$
(26)

The solution to the system of Eqs. (26) consists of the particular solution \({\mathbf{V}}_{2}\) and the second-order sensitivities of repeated eigenvalues \(\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }}\). It is worth noting that the structure of Eq. (26) is the same as in the first-order analysis and the system of Eqs. (17). The coefficient matrix is exactly the same, which means that to find the solution to the system of Eqs. (26), it is necessary to determine only the right-hand side vector.
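Computationally, the shared coefficient matrix means both systems can be handled with a single factorization, e.g. by stacking the right-hand sides. A minimal sketch with hypothetical stand-ins `A`, `S1`, `S2` for the assembled blocks of Eqs. (17) and (26):

```python
import numpy as np

# The bordered coefficient matrix of Eqs. (17) and (26) is identical, so both
# right-hand sides can be processed in one solve.  A, S1, S2 are hypothetical
# stand-ins (here A is just a random nonsingular matrix of bordered size n + m).
rng = np.random.default_rng(2)
N = 7                                    # n + m rows of the bordered system
A = rng.standard_normal((N, N))
S1 = rng.standard_normal((N, 1))         # RHS of the first-order system (17)
S2 = rng.standard_normal((N, 1))         # RHS of the second-order system (26)

X = np.linalg.solve(A, np.hstack([S1, S2]))   # one factorization, two solves
V1_block, V2_block = X[:, :1], X[:, 1:]

print(np.allclose(A @ V1_block, S1), np.allclose(A @ V2_block, S2))   # True True
```

In practice, an LU factorization of the bordered matrix computed for Eq. (17) can simply be reused when solving Eq. (26).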

The coefficient matrix \({\mathbf{C}}_{2}\) will be determined in a similar manner to the matrix \({\mathbf{C}}_{1}\). First, we differentiate Eq. (22), and then left-multiply by \({{\varvec{\Phi}}}^{T}\). Using Eq. (25), we obtain:

$${\mathbf{C}}_{2}\frac{\partial{\varvec{\Lambda}}}{\partial {p}_{k}}-\frac{\partial{\varvec{\Lambda}}}{\partial {p}_{k}}{\mathbf{C}}_{2}+\frac{1}{3}\frac{{\partial }^{3}{\varvec{\Lambda}}}{\partial {p}_{k}^{3}}={\mathbf{R}}_{3}$$
(27)

where

$$\begin{aligned} {\mathbf{R}}_{3} &= - \frac{1}{3}\left[ {{\mathbf{\Phi }}^{T} \frac{{\partial ^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{3} }}{\mathbf{\Phi }} + 3{\mathbf{\Phi }}^{T} \frac{{\partial ^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k}^{2} }}{\mathbf{\Phi }}\frac{{\partial {\mathbf{\Lambda }}}}{{\partial p_{k} }} + 3{\mathbf{\Phi }}^{T} \frac{{\partial ^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }}\frac{{\partial {\mathbf{\Phi }}}}{{\partial p_{k} }}} \right. \hfill \\ &\quad + 3{\mathbf{\Phi }}^{T} \frac{{\partial ^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} \partial p_{k} }}{\mathbf{\Phi }}\left( {\frac{{\partial {\mathbf{\Lambda }}}}{{\partial p_{k} }}} \right)^{2} + 3{\mathbf{\Phi }}^{T} \frac{{\partial ^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }}{\mathbf{\Phi }}\frac{{\partial ^{2} {\mathbf{\Lambda }}}}{{\partial p_{k}^{2} }} + 6{\mathbf{\Phi }}^{T} \frac{{\partial ^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }}\frac{{\partial {\mathbf{\Phi }}}}{{\partial p_{k} }}\frac{{\partial {\mathbf{\Lambda }}}}{{\partial p_{k} }} \hfill \\ &\quad + {\mathbf{\Phi }}^{T} \frac{{\partial ^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{3} }}{\mathbf{\Phi }}\left( {\frac{{\partial {\mathbf{\Lambda }}}}{{\partial p_{k} }}} \right)^{3} + 3{\mathbf{\Phi }}^{T} \frac{{\partial ^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}\frac{{\partial {\mathbf{\Phi }}}}{{\partial p_{k} }}\left( {\frac{{\partial {\mathbf{\Lambda }}}}{{\partial p_{k} }}} \right)^{2} + 3{\mathbf{\Phi }}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial s}}\frac{{\partial {\mathbf{\Phi }}}}{{\partial p_{k} }}\frac{{\partial ^{2} {\mathbf{\Lambda }}}}{{\partial p_{k}^{2} }} \hfill \\ & \left. 
{\quad + 3{\mathbf{\Phi }}^{T} \frac{{\partial ^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}{\mathbf{\Phi }}\frac{{\partial {\mathbf{\Lambda }}}}{{\partial p_{k} }}\frac{{\partial ^{2} {\mathbf{\Lambda }}}}{{\partial p_{k}^{2} }} + 3{\mathbf{\Phi }}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{\mathbf{V}}_{2} } \right] \end{aligned}$$

The off-diagonal elements can be determined using the formula:

$$c_{2ij} = \frac{{r_{3ij} }}{{\frac{{\partial \lambda_{j} }}{{\partial p_{k} }} - \frac{{\partial \lambda_{i} }}{{\partial p_{k} }}}} \;{\text{for}}\; i,j = 1,2, \ldots ,m,\; i \ne j$$
(28)

The obtained form of the solution is very similar to Eq. (19). The elements on the diagonal are also determined in a similar way to the first-order sensitivity. In the first step, we differentiate Eq. (10) twice, and then, after substituting the solution (23), we get:

$${\mathbf{C}}_{2}^{T} + {\mathbf{C}}_{2} = {\mathbf{R}}_{4}$$
(29)

where

$$\begin{aligned} {\mathbf{R}}_{4} &= - \left[ 2\frac{{\partial {{\varvec{\Phi}}}^{T} }}{{\partial p_{k} }}\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }} + 2{{\varvec{\Phi}}}^{T} \frac{{\partial^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} \partial p_{k} }}{{\varvec{\Phi}}}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} + {{\varvec{\Phi}}}^{T} \frac{{\partial^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{3} }}{{\varvec{\Phi}}}\left( {\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }}} \right)^{2} + {{\varvec{\Phi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}{{\varvec{\Phi}}}\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }}\right.\\ &\quad \left.+\, 4{{\varvec{\Phi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }}\frac{{\partial {{\varvec{\Lambda}}}}}{{\partial p_{k} }} + {{\varvec{\Phi}}}^{T} \frac{{\partial^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k}^{2} }}{{\varvec{\Phi}}} + 4{{\varvec{\Phi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }}\frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }} \right]\end{aligned}$$

The elements on the main diagonal can be calculated using the equation:

$$c_{2ii} = \frac{1}{2}r_{4ii} \;{\text{for}}\; i = 1,2, \ldots ,m$$
(30)

To calculate the second-order sensitivities of eigenvectors, it is necessary to first compute the first-order sensitivities of eigenvalues and their associated eigenvectors. As before, the condition that the sensitivities of repeated eigenvalues are distinct must be met. The next section will consider the case where eigenvalue sensitivities are the same.
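For distinct eigenvalue derivatives, the procedure above reduces to filling the matrix \({\mathbf{C}}_{2}\) entry by entry. A minimal Python sketch is given below; the matrices R3 and R4 and the vector of first-order eigenvalue derivatives dlam are illustrative placeholders, not values from the paper's examples.

```python
import numpy as np

def assemble_C2(R3, R4, dlam):
    """Assemble C2 for distinct first-order eigenvalue derivatives:
    off-diagonal entries from Eq. (28), diagonal entries from Eq. (30)."""
    m = len(dlam)
    C2 = np.zeros((m, m), dtype=complex)
    for i in range(m):
        for j in range(m):
            if i != j:
                C2[i, j] = R3[i, j] / (dlam[j] - dlam[i])  # Eq. (28)
        C2[i, i] = 0.5 * R4[i, i]                          # Eq. (30)
    return C2

# Illustrative placeholder data (m = 2), not taken from the paper
R3 = np.array([[0.0, 1.0], [2.0, 0.0]], dtype=complex)
R4 = np.array([[4.0, 0.0], [0.0, 6.0]], dtype=complex)
dlam = np.array([1.0 + 1.0j, 3.0 - 1.0j])  # derivatives must be distinct
C2 = assemble_C2(R3, R4, dlam)
```

Note that the loop assumes the denominators in Eq. (28) never vanish, which is exactly the distinct-derivative condition stated above.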

3 Sensitivity analysis of repeated eigenvalues and corresponding eigenvectors – repeated derivatives of eigenvalues

In the case of repeated eigenvalue sensitivities, the vectors that compose the transformation matrix \({\varvec{\beta}}_{1}\), which is a solution to Eq. (12), are not independent. As a result, using Eq. (7) will not yield independent eigenvectors \({{\varvec{\Phi}}}\). Therefore, it is necessary to determine a new adjacent vector [14]:

$${{\varvec{\Psi}}} = {{\varvec{\Phi}}}{\varvec{\upbeta}}_{2}$$
(31)

The matrix \({\varvec{\upbeta}}_{2}\) must satisfy the condition \({\varvec{\upbeta}}_{2}^{T} {\varvec{\upbeta}}_{2} = {\mathbf{I}}\). The new eigenvector expressed as (31) must satisfy the eigenvalue problem (5), which takes the form:

$${\mathbf{D}}\left( \lambda \right){{\varvec{\Psi}}} = \bf{0}$$
(32)

and the normalization condition (6), expressed as:

$${{\varvec{\Psi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Psi}}} = {\mathbf{I}}$$
(33)

By differentiating the eigenvalue problem (32) with respect to the design parameter, the following equation is arrived at:

$${\mathbf{D}}\left( \lambda \right)\frac{{\partial {{\varvec{\Psi}}}}}{{\partial p_{k} }} = - \left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{{\varvec{\Psi}}} + \frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Psi}}}} \right)$$
(34)

Based on this, the sensitivity of the eigenvectors can be expressed in a similar form as before:

$$\frac{{\partial {{\varvec{\Psi}}}}}{{\partial p_{k} }} = {\mathbf{V}}_{3} + {\mathbf{\Psi C}}_{3}$$
(35)

The particular solution of Eq. (35), \({\mathbf{V}}_{3}\), must satisfy the equation:

$${{\varvec{\Psi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{\mathbf{V}}_{3} = \bf{0}$$
(36)

Utilizing Eq. (31), sensitivity of the eigenvectors can be expressed as:

$$\frac{{\partial {{\varvec{\Psi}}}}}{{\partial p_{k} }} = \frac{{\partial {{\varvec{\Phi}}}}}{{\partial p_{k} }}{\varvec{\beta}}_{2}$$
(37)

Substituting Eq. (37) into (35) and considering Eq. (14), we obtain the relation:

$${\mathbf{V}}_{1} {\varvec{\upbeta}}_{2} = {\mathbf{V}}_{3}$$
(38)

To derive the transformation matrix \({\varvec{\upbeta}}_{2}\), differentiate (34) once more, left-multiply it by \({{\varvec{\Phi}}}^{T}\), and, after substituting (14), (31), and (37) and performing transformations, the following subeigenproblem is obtained:

$$\overline{\overline{{\mathbf{D}}}} {{\varvec{\upbeta}}}_{2} = \frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }}{{\varvec{\upbeta}}}_{2}$$
(39)

where

$$\overline{\overline{{\mathbf{D}}}} = - \left[ {{{\varvec{\Phi}}}^{T} \left( {\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }} + 2\frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }} + \left( {\frac{\partial \lambda }{{\partial p_{k} }}} \right)^{2} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}} \right){{\varvec{\Phi}}} + 2{{\varvec{\Phi}}}^{T} \left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }} + \frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}} \right){\mathbf{V}}_{1} } \right]$$
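Equation (39) is an ordinary m × m eigenvalue problem, so any dense eigensolver can be applied to it. The sketch below uses NumPy with a hypothetical symmetric 2 × 2 matrix standing in for \(\overline{\overline{{\mathbf{D}}}}\); the rescaling step enforces the condition \({\varvec{\upbeta}}_{2}^{T} {\varvec{\upbeta}}_{2} = {\mathbf{I}}\) and is valid here because the test matrix is symmetric with distinct roots.

```python
import numpy as np

def solve_subeigenproblem(Dbb):
    """Solve the m x m subeigenproblem of Eq. (39): the eigenvalues are
    the second-order eigenvalue sensitivities, the eigenvectors form the
    transformation matrix beta2, rescaled so that beta2^T beta2 = I."""
    d2lam, beta2 = np.linalg.eig(Dbb)
    # column-wise rescaling (transpose, not conjugate transpose)
    beta2 = beta2 / np.sqrt(np.sum(beta2 * beta2, axis=0))
    return d2lam, beta2

# Hypothetical symmetric 2 x 2 matrix standing in for the D double-bar matrix
Dbb = np.array([[2.0, 1.0], [1.0, 2.0]])
d2lam, beta2 = solve_subeigenproblem(Dbb)
```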

The solution to problem (39) consists of second-order eigenvalue sensitivities \(\frac{{\partial^{2} {{\varvec{\Lambda}}} }}{{\partial p_{k}^{2} }}\) and the transformation matrix \({{\varvec{\upbeta}}}_{2}\). After obtaining the \({{\varvec{\upbeta}}}_{2}\) matrix, it is possible to calculate the vectors \({{\varvec{\Psi}}}\). However, to find their derivatives, we need to compute \({\mathbf{C}}_{3}\) and \({\mathbf{V}}_{3}\). First, we differentiate (32) twice:

$$\begin{aligned}{\mathbf{D}}\left( \lambda \right)\frac{{\partial^{2} {{\varvec{\Psi}}}}}{{\partial p_{k}^{2} }} &= - \left[ \left( \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }} + 2\frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }} + \left( {\frac{\partial \lambda }{{\partial p_{k} }}} \right)^{2} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }} + \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }} \right){{\varvec{\Psi}}} \right.\\ &\quad+ \left. {2\left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }} + \frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}} \right)\frac{{\partial {{\varvec{\Psi}}}}}{{\partial p_{k} }}} \right] \end{aligned}$$
(40)

The solution to Eq. (40) can be presented in a similar form as before:

$$\frac{{\partial^{2} {{\varvec{\Psi}}}}}{{\partial p_{k}^{2} }} = {\mathbf{V}}_{4} + {\mathbf{\Psi C}}_{4}$$
(41)

After substituting (35) and (41) into Eq. (40), we obtain:

$${\mathbf{D}}\left( \lambda \right){\mathbf{V}}_{4} + \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Psi}}}\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }} = {\mathbf{S}}_{3}$$
(42)

where

$${\mathbf{S}}_{3} = \left[ { - \left( {\frac{{\partial ^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }} + 2\frac{{\partial \lambda }}{{\partial p_{k} }}\frac{{\partial ^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }} + \left( {\frac{{\partial \lambda }}{{\partial p_{k} }}} \right)^{2} \frac{{\partial ^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}} \right){\mathbf{\Psi }} - 2\left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }} + \frac{{\partial \lambda }}{{\partial p_{k} }}\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial s}}} \right){\mathbf{V}}_{3} - 2\left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }} + \frac{{\partial \lambda }}{{\partial p_{k} }}\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial s}}} \right){\mathbf{\Psi C}}_{3} } \right]$$

Equations (36) and (42) form a system of equations:

$$\left[ {\begin{array}{*{20}c} {{\mathbf{D}}\left( \lambda \right)} & {\kappa \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Psi}}}} \\ {\kappa {{\varvec{\Psi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}} & \bf{0} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathbf{V}}_{4} } \\ {\frac{1}{\kappa }\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }}} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathbf{S}}_{3} } \\ \bf{0} \\ \end{array} } \right]$$
(43)

If we express the system of Eq. (43) in the form:

$${\mathbf{AV}} = {\mathbf{P}}$$
(44)

then

$${\mathbf{A}} = \left[ {\begin{array}{*{20}c} {{\mathbf{D}}\left( \lambda \right)} & {\kappa \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}{{\varvec{\Psi}}}} \\ {\kappa {{\varvec{\Psi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}} & \bf{0} \\ \end{array} } \right], {\mathbf{V}} = \left[ {\begin{array}{*{20}c} {{\mathbf{V}}_{4} } \\ {\frac{1}{\kappa }\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }}} \\ \end{array} } \right] {\text{and}} {\mathbf{P}} = \left[ {\begin{array}{*{20}c} {{\overline{\mathbf{S}}}_{3} } \\ \bf{0} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\overline{\overline{{\mathbf{S}}}}_{3} } \\ \bf{0} \\ \end{array} } \right]{\mathbf{C}}_{3}$$
(45)

where

$$\begin{aligned}&{\overline{\mathbf{S}}}_{3} = - \left[ {\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }} + 2\frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }} + \left( {\frac{\partial \lambda }{{\partial p_{k} }}} \right)^{2} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}} \right]{{\varvec{\Psi}}} - 2\left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }} + \frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}} \right){\mathbf{V}}_{3} \hfill \\ &\overline{\overline{{\mathbf{S}}}}_{3} = - 2\left( {\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }} + \frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{\partial s}} \right){{\varvec{\Psi}}}\end{aligned}$$
(46)

To determine the vector \({\mathbf{V}}\), it is necessary to compute the inverse of matrix \({\mathbf{A}}\), which can be expressed as:

$${\mathbf{A}}^{ - 1} = \left[ {\begin{array}{*{20}c} {{\overline{\mathbf{A}}}_{11} } & {{\overline{\mathbf{A}}}_{12} } \\ {{\overline{\mathbf{A}}}_{21} } & {{\overline{\mathbf{A}}}_{22} } \\ \end{array} } \right]$$
(47)

The dimensions of the individual component matrices are as follows: \({\overline{\mathbf{A}}}_{11} \left[ {n \times n} \right]\), \({\overline{\mathbf{A}}}_{12} \left[ {n \times m} \right]\), \({\overline{\mathbf{A}}}_{21} \left[ {m \times n} \right]\), and \({\overline{\mathbf{A}}}_{22} \left[ {m \times m} \right]\). Utilizing Eqs. (44), (45), and (47), we can write the expression to determine the particular solution \({\mathbf{V}}_{4}\) as follows:

$${\mathbf{V}}_{4} = {\overline{\mathbf{A}}}_{11} {\overline{\mathbf{S}}}_{3} + {\overline{\mathbf{A}}}_{11} \overline{\overline{{\mathbf{S}}}}_{3} {\mathbf{C}}_{3}$$
(48)
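Equation (48) only requires the top-left n × n block of \({\mathbf{A}}^{ - 1}\). A minimal sketch, assuming all inputs are given as dense NumPy arrays; the tiny matrices in the usage example are hypothetical, chosen only so that D is singular and D @ Psi = 0, as the theory requires.

```python
import numpy as np

def particular_solution_V4(D, dD_ds, Psi, S3bar, S3bbar, C3, kappa=1.0):
    """Form the bordered matrix A of Eq. (43), invert it, and evaluate
    Eq. (48): V4 = A11 @ S3bar + A11 @ S3bbar @ C3, where A11 is the
    top-left n x n block of the inverse of A."""
    n, m = D.shape[0], Psi.shape[1]
    A = np.block([
        [D,                     kappa * dD_ds @ Psi],
        [kappa * Psi.T @ dD_ds, np.zeros((m, m))],
    ])
    A11 = np.linalg.inv(A)[:n, :n]
    return A11 @ S3bar + A11 @ S3bbar @ C3

# Tiny hypothetical data: D is singular and D @ Psi = 0, as required
D = np.array([[1.0, 0.0], [0.0, 0.0]])
Psi = np.array([[0.0], [1.0]])
V4 = particular_solution_V4(D, np.eye(2), Psi,
                            S3bar=np.array([[2.0], [3.0]]),
                            S3bbar=np.array([[1.0], [1.0]]),
                            C3=np.array([[0.5]]))
```

For large structures one would factor A once and solve for the columns of V4 rather than form the full inverse; the explicit inverse is used here only to mirror Eqs. (47) and (48).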

To find the solution \({\mathbf{V}}_{4}\), it is necessary to determine the elements of matrix \({\mathbf{C}}_{3}\). In cases where the derivatives of eigenvalues are distinct, the elements of matrices \({\mathbf{C}}_{1}\) and \({\mathbf{C}}_{2}\) on the diagonal and off the diagonal can be determined independently. However, when sensitivities of eigenvalues are repeated, this is not possible. In such cases, one should first calculate the elements of matrix \({\mathbf{C}}_{3}\) on the diagonal. Afterward, using the known values of \(c_{3ii}\), the off-diagonal elements can be determined.

The first step is to calculate the elements on the diagonal. To do this, differentiate the condition (33), and after substituting the relationship (35), the following formula is obtained:

$${\mathbf{C}}_{3}^{T} + {\mathbf{C}}_{3} = {\mathbf{R}}_{5}$$
(49)

where

$${\mathbf{R}}_{5} = - \left[ {{{\varvec{\Psi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }}{{\varvec{\Psi}}} + \frac{\partial \lambda }{{\partial p_{k} }}{{\varvec{\Psi}}}^{T} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}{{\varvec{\Psi}}}} \right]$$

The elements on the diagonal can be determined using the formula, where \(c_{3ii}\) and \(r_{5ii}\) represent the elements on the diagonal of matrices \({\mathbf{C}}_{3}\) and \({\mathbf{R}}_{5}\), respectively:

$$c_{3ii} = \frac{1}{2}r_{5ii} \;{\text{for}}\; i = 1,2, \ldots ,m$$
(50)

To determine the off-diagonal elements, we need to differentiate the eigenvalue problem (32) three times and multiply it from the left by \({{\varvec{\Psi}}}^{T}\). Then, using (35) and (41), we obtain the following formula:

$${\mathbf{W}}_{2} + {\mathbf{W}}_{1} {\mathbf{C}}_{3} - {\mathbf{C}}_{3} \frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }} = \frac{1}{3}\frac{{\partial^{3} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{3} }}$$
(51)

where

$$\begin{aligned} {\mathbf{W}}_{1} &= - \left[ {{{\varvec{\Psi}}}^{T} \left( {2\frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }} + \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }} + \left( {\frac{\partial \lambda }{{\partial p_{k} }}} \right)^{2} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}} \right){{\varvec{\Psi}}} + {{\varvec{\Psi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{\overline{\mathbf{A}}}_{11} \overline{\overline{{\mathbf{S}}}}_{3} } \right] \hfill \\ {\mathbf{W}}_{2} &= - \left[ {{{\varvec{\Psi}}}^{T} \left( {\frac{1}{3}\frac{{\partial^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{3} }} + \frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k}^{2} }} + \left( {\frac{\partial \lambda }{{\partial p_{k} }}} \right)^{2} \frac{{\partial^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} \partial p_{k} }} + \frac{1}{3}\left( {\frac{\partial \lambda }{{\partial p_{k} }}} \right)^{3} \frac{{\partial^{3} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{3} }}} \right){{\varvec{\Psi}}}} \right. 
\hfill \\ &\quad + {{\varvec{\Psi}}}^{T} \left( {\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }} + \frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}} \right){{\varvec{\Psi}}}\frac{{\partial^{2} {{\varvec{\Lambda}}}}}{{\partial p_{k}^{2} }} \hfill \\ &\quad+ \left.{{\varvec{\Psi}}}^{T} \left( {2\frac{\partial \lambda }{{\partial p_{k} }}\frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s\partial p_{k} }} + \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k}^{2} }} + \left( {\frac{\partial \lambda }{{\partial p_{k} }}} \right)^{2} \frac{{\partial^{2} {\mathbf{D}}\left( \lambda \right)}}{{\partial s^{2} }}} \right){\mathbf{V}}_{3} + {{\varvec{\Psi}}}^{T} \frac{{\partial {\mathbf{D}}\left( \lambda \right)}}{{\partial p_{k} }}{\overline{\mathbf{A}}}_{11} {\overline{\mathbf{S}}}_{3} \right]\end{aligned}$$

Since the second- and third-order eigenvalue sensitivity matrices have a diagonal structure, the off-diagonal elements of the \({\mathbf{C}}_{3}\) matrix can be calculated by solving a system of linear equations that uses the diagonal elements \(c_{3ii}\) already computed from Eq. (50). When m is equal to 2, the coefficients \(c_{3ij}\) can be calculated using the formula:

$$c_{3ij} = - \frac{{w_{2ij} + w_{1ij} c_{3ii} }}{{w_{1ii} - \frac{{\partial^{2} \lambda_{j} }}{{\partial p_{k}^{2} }}}}$$
(52)

Second-order sensitivities of eigenvalues can be obtained from Eq. (39), and to obtain second-order sensitivities of eigenvectors, the procedure outlined in this section must be repeated.
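The two-step assembly of \({\mathbf{C}}_{3}\) for m = 2, diagonal entries from Eq. (50) followed by off-diagonal entries from Eq. (52), can be sketched as follows; the matrices R5, W1, W2 and the second-order eigenvalue derivatives are illustrative placeholders, not values from the paper.

```python
import numpy as np

def assemble_C3(R5, W1, W2, d2lam):
    """Assemble C3 for m = 2: diagonal entries first, from Eq. (50),
    then off-diagonal entries from Eq. (52) using the already known
    c3ii and the (distinct) second derivatives d2lam of the eigenvalues."""
    C3 = np.zeros((2, 2), dtype=complex)
    for i in range(2):
        C3[i, i] = 0.5 * R5[i, i]                              # Eq. (50)
    for i, j in [(0, 1), (1, 0)]:
        C3[i, j] = -(W2[i, j] + W1[i, j] * C3[i, i]) \
                   / (W1[i, i] - d2lam[j])                     # Eq. (52)
    return C3

# Illustrative placeholder matrices, not values from the paper
R5 = np.diag([2.0, 4.0]).astype(complex)
W1 = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=complex)
W2 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
C3 = assemble_C3(R5, W1, W2, d2lam=np.array([5.0, 7.0]))
```

The ordering matters: the diagonal loop must run first, because Eq. (52) consumes the \(c_{3ii}\) it produces.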

4 Application to calculating sensitivities for structures with viscoelastic elements

Equation (2) describes an eigenvalue problem that arises in systems with viscoelastic elements. The matrix \({\mathbf{G}}\left( s \right)\) depends on the viscoelastic element model, as well as on the type of structure under consideration and the placement of viscoelastic elements. It can be expressed in a general form:

$${\mathbf{G}}\left( s \right) = \mathop \sum \limits_{e = 1}^{r} {\mathbf{G}}_{e} \left( s \right) = \mathop \sum \limits_{e = 1}^{r} {\mathbf{K}}_{v,e} g_{e} \left( s \right)$$
(53)

where r represents the number of viscoelastic elements, \({\mathbf{K}}_{v,e}\) is a matrix that takes into account the viscoelastic element’s position, and \(g_{e} \left( s \right)\) is a function describing the material properties of viscoelastic element e. Detailed descriptions of these structures can be found in references like [60] for frames with viscoelastic dampers, [61] for beams with viscoelastic layers, [62] for plates with viscoelastic layers, and [63] for plates supported by viscoelastic supports. The eigenvalue problem for the considered systems with viscoelastic elements will be solved using a continuation method [60]. The solutions to the eigenvalue problem consist of pairs of complex conjugate eigenvalues:

$$\lambda_{i} = \mu_{i} \pm {\text{i}}\eta_{i}$$
(54)

where \(\mu_{i}\) and \(\eta_{i}\) are the real and imaginary parts of the eigenvalues, and \({\text{i}} = \sqrt { - 1}\). Based on this, further dynamic characteristics can be determined, such as natural frequencies \(\left( {\omega_{i} } \right)\) and non-dimensional damping ratios \(\left( {\gamma_{i} } \right)\):

$$\omega_{i} = \sqrt {\mu_{i}^{2} + \eta_{i}^{2} } , \gamma_{i} = - \frac{{\mu_{i} }}{{\omega_{i} }}$$
(55)

Sensitivities of natural frequencies and non-dimensional damping ratios can be determined using the formulas [56]:

$$\frac{{\partial \omega_{i} }}{{\partial p_{k} }} = \frac{1}{{\omega_{i} }}\left( {\mu_{i} \frac{{\partial \mu_{i} }}{{\partial p_{k} }} + \eta_{i} \frac{{\partial \eta_{i} }}{{\partial p_{k} }}} \right), \frac{{\partial \gamma_{i} }}{{\partial p_{k} }} = - \frac{1}{{\omega_{i} }}\frac{{\partial \mu_{i} }}{{\partial p_{k} }} - \frac{{\gamma_{i} }}{{\omega_{i} }}\frac{{\partial \omega_{i} }}{{\partial p_{k} }}$$
(56)

for the first-order sensitivity and

$$\frac{{\partial^{2} \omega_{i} }}{{\partial p_{k}^{2} }} = \frac{1}{{\omega_{i} }}\left( {\left( {\frac{{\partial \mu_{i} }}{{\partial p_{k} }}} \right)^{2} + \left( {\frac{{\partial \eta_{i} }}{{\partial p_{k} }}} \right)^{2} + \mu_{i} \frac{{\partial^{2} \mu_{i} }}{{\partial p_{k}^{2} }} + } \right.\left. {\eta_{i} \frac{{\partial^{2} \eta_{i} }}{{\partial p_{k}^{2} }} - \left( {\frac{{\partial \omega_{i} }}{{\partial p_{k} }}} \right)^{2} } \right)$$
(57)
$$\frac{{\partial^{2} \gamma_{i} }}{{\partial p_{k}^{2} }} = - \frac{1}{{\omega_{i} }}\left( {\gamma_{i} \frac{{\partial^{2} \omega_{i} }}{{\partial p_{k}^{2} }} + \frac{{\partial^{2} \mu_{i} }}{{\partial p_{k}^{2} }} + 2\frac{{\partial \gamma_{i} }}{{\partial p_{k} }}\frac{{\partial \omega_{i} }}{{\partial p_{k} }}} \right)$$
(58)

for the second-order sensitivity, if we express first- and second-order sensitivities of eigenvalues as:

$$\frac{{\partial \lambda_{i} }}{{\partial p_{k} }} = \frac{{\partial \mu_{i} }}{{\partial p_{k} }} + {\text{i}}\frac{{\partial \eta_{i} }}{{\partial p_{k} }}, \frac{{\partial^{2} \lambda_{i} }}{{\partial p_{k}^{2} }} = \frac{{\partial^{2} \mu_{i} }}{{\partial p_{k}^{2} }} + {\text{i}}\frac{{\partial^{2} \eta_{i} }}{{\partial p_{k}^{2} }}$$
(59)
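For a single mode, the first-order formulas (55), (56) and (59) translate directly into code. A minimal sketch with hypothetical input values chosen for round numbers:

```python
import numpy as np

def omega_gamma_sensitivities(lam, dlam):
    """Natural frequency and damping ratio (Eq. (55)) and their
    first-order sensitivities (Eq. (56)) from a complex eigenvalue
    lam = mu + i*eta and its derivative dlam (Eq. (59))."""
    mu, eta = lam.real, lam.imag
    dmu, deta = dlam.real, dlam.imag
    omega = np.sqrt(mu**2 + eta**2)                 # Eq. (55)
    gamma = -mu / omega
    domega = (mu * dmu + eta * deta) / omega        # Eq. (56)
    dgamma = -dmu / omega - gamma * domega / omega
    return omega, gamma, domega, dgamma

# Hypothetical eigenvalue and derivative (not from the examples)
omega, gamma, domega, dgamma = omega_gamma_sensitivities(-3.0 + 4.0j,
                                                         1.0 + 2.0j)
```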

The computed sensitivities will be used to determine the response values of the structure, denoted generally as \(f\left( p \right)\), after changing the design parameter. The new values will be determined by expanding the function in a Taylor series around the parameter \(p_{k}\) according to the formulas:

$$f\left( {p_{k} + {\Delta }p_{k} } \right) = f\left( {p_{k} } \right) + \frac{{\partial f\left( {p_{k} } \right)}}{{\partial p_{k} }}\Delta p_{k}$$
(60)

for the first-order sensitivity and

$$f\left( {p_{k} + {\Delta }p_{k} } \right) = f\left( {p_{k} } \right) + \frac{{\partial f\left( {p_{k} } \right)}}{{\partial p_{k} }}\Delta p_{k} + \frac{1}{2}\frac{{\partial^{2} f\left( {p_{k} } \right)}}{{\partial p_{k}^{2} }}\Delta p_{k}^{2}$$
(61)

for the second-order sensitivity.
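The Taylor-series predictions (60) and (61) can be wrapped in a single helper; the usage example checks it against f(p) = p², for which the second-order expansion is exact.

```python
def predict_response(f0, df, d2f=None, dp=0.0):
    """Predict a response quantity after a parameter change dp:
    Eq. (60) with the first derivative only, Eq. (61) when the
    second derivative d2f is also supplied."""
    f = f0 + df * dp
    if d2f is not None:
        f += 0.5 * d2f * dp**2   # second-order correction, Eq. (61)
    return f

# Check against f(p) = p**2 at p = 2, where Eq. (61) is exact
first = predict_response(4.0, 4.0, dp=0.1)         # Eq. (60)
second = predict_response(4.0, 4.0, 2.0, dp=0.1)   # Eq. (61)
```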

5 Examples

5.1 Example 1 – four-degree-of-freedom system

To validate the derived formulas and confirm the correctness of the written programs, the analysis begins with the examination of a four-degree-of-freedom system presented in Fig. 1, which has been previously studied in, among others, [35]. This is a mass-spring system with five damping elements described by the Biot model, for which the components of the \({\mathbf{G}}\left( s \right)\) matrix are as follows:

$${\mathbf{K}}_{v} = \left[ {\begin{array}{*{20}c} 1 &\quad { - 1} &\quad 0 &\quad 0 \\ { - 1} & \quad 1 & \quad 0 &\quad 0 \\ 0 &\quad 0 &\quad 2 &\quad 0 \\ 0 & \quad 0 & \quad 0 & \quad 2 \\ \end{array} } \right], g\left( s \right) = \frac{sc}{{s + \mu }}$$
(62)
Fig. 1
figure 1

Four-degree-of-freedom system with Biot damping elements

The matrices \({\mathbf{M}}\) and \({\mathbf{K}}\) have the following forms:

$${\mathbf{M}} = \left[ {\begin{array}{*{20}c} m &\quad 0 &\quad 0 &\quad 0 \\ 0 &\quad m &\quad 0 &\quad 0 \\ 0 &\quad 0 &\quad m &\quad 0 \\ 0 &\quad 0 &\quad 0 &\quad m \\ \end{array} } \right],\;{\mathbf{K}} = \left[ {\begin{array}{*{20}c} {2k} &\quad { - k} &\quad 0 &\quad 0 \\ { - k} &\quad {2k} &\quad 0 &\quad 0 \\ 0 &\quad 0 &\quad {2k} &\quad 0 \\ 0 &\quad 0 &\quad 0 &\quad {k + k_{1} } \\ \end{array} } \right].$$

The computations were carried out with the following data: \(k = 1000\; {\text{N}}/{\text{m}}\), \(k_{1} = 1000\; {\text{N}}/{\text{m}}\), \(m = 1 \;{\text{kg}}\), \(c = 0.3\), and \(\mu = 10\). Solving the eigenvalue problem (2) yielded two repeated eigenvalues: \(\lambda_{1} = \lambda_{2} = - 1.4249 \cdot 10^{ - 3} + 44.785{\text{i}}\). A comparison of the first-order sensitivities of eigenvalues and their associated eigenvectors with the results from [35] is presented in Table 1. The results in [35] were verified using two finite difference methods, the forward difference method and the central difference method, making them suitable for comparative analysis.
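For reference, the system matrices of this example can be assembled as below. The form D(s) = s²M + K + Kᵥ g(s) is assumed here for the dynamic stiffness matrix of the Biot-damped system; this is an illustrative reading of Eq. (62), since Eq. (2) itself is defined earlier in the paper.

```python
import numpy as np

# Data of Example 1
k, k1, m, c, mu = 1000.0, 1000.0, 1.0, 0.3, 10.0

M = m * np.eye(4)
K = np.array([[2 * k,   -k,    0.0,    0.0],
              [  -k,   2 * k,  0.0,    0.0],
              [ 0.0,    0.0,  2 * k,   0.0],
              [ 0.0,    0.0,   0.0,  k + k1]])
Kv = np.array([[ 1.0, -1.0, 0.0, 0.0],
               [-1.0,  1.0, 0.0, 0.0],
               [ 0.0,  0.0, 2.0, 0.0],
               [ 0.0,  0.0, 0.0, 2.0]])

def D(s):
    """Dynamic stiffness matrix, assuming the form s^2 M + K + Kv g(s)
    with the Biot kernel g(s) of Eq. (62); g(0) = 0, so D(0) = K."""
    g = s * c / (s + mu)
    return s**2 * M + K + g * Kv
```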

Table 1 Comparison of sensitivity of eigenvalues and eigenvectors

The obtained results indicate a good agreement and demonstrate the correctness of the derived formulas and the written codes.

5.2 Example 2 – 3D-frame with dampers

As a second example, consider a spatial frame with four dampers described using the fractional Zener model (Fig. 2). The elements of the matrix \({\mathbf{G}}\left( s \right)\) for one damper can now be expressed as follows:

$$g_{e} \left( s \right) = \frac{{c_{1,e} s^{{\alpha_{e} }} }}{{k_{1,e} + c_{1,e} s^{{\alpha_{e} }} }}, {\mathbf{K}}_{v,e} = {\mathbf{L}}_{e}$$
(63)

where \({\mathbf{L}}_{e}\) is the damper location matrix. Additionally, the matrix \({\mathbf{K}}_{0}\) is formed as follows:

$${\mathbf{K}}_{0} = \mathop \sum \limits_{e = 1}^{r} k_{0,e} {\mathbf{L}}_{e}$$
(64)

which is added to the structural stiffness matrix.
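The fractional Zener kernel (63) and the supplementary stiffness matrix (64) are straightforward to implement. The sketch below uses the principal branch for the fractional power, the usual convention for such models; the 2 × 2 location matrices in the usage example are hypothetical.

```python
import numpy as np

def g_zener(s, k1, c1, alpha):
    """Fractional Zener kernel of Eq. (63); s**alpha uses the
    principal branch of the complex power."""
    return c1 * s**alpha / (k1 + c1 * s**alpha)

def assemble_K0(k0_list, L_list):
    """Supplementary stiffness matrix of Eq. (64)."""
    return sum(k0 * L for k0, L in zip(k0_list, L_list))

# Damper data of Example 2; the location matrices are hypothetical
g_low = g_zener(0.0, 6000.0, 700.0, 0.6)    # kernel vanishes at s = 0
g_high = g_zener(1e12, 6000.0, 700.0, 0.6)  # kernel tends to 1 as s grows
K0 = assemble_K0([2.0, 3.0], [np.eye(2), np.eye(2)])
```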

Fig. 2
figure 2

Fractional Zener model of damper

The structural data for the system are as follows: the mass of each floor plate is \(m=27{,}000\; \text{kg}\), with plate dimensions of 6 × 6 m. The column spacing along the axes is 5 × 5 m, the column cross section is 0.4 × 0.4 m, and the column height is 4 m. The material is assumed to be concrete, with a Young's modulus of \(E=31\; \text{GPa}\) and a Poisson's ratio of \(\nu=0.2\). In Fig. 3, the positions of the columns and dampers are shown. The dampers are located on all walls of the second floor. The calculations are based on a model with rigid slabs, where each floor has three degrees of freedom acting in the horizontal plane of the floor. Consequently, the considered frame has six dynamic degrees of freedom. For this analysis, all dampers are considered identical, with the following parameters: spring stiffnesses \(k_{0} = 9000\; {\text{kN}}/{\text{m}}\) and \(k_{1} = 6000\; {\text{kN}}/{\text{m}}\), damping coefficient \(c_{1} = 700\; {\text{kNs}}^{\alpha } /{\text{m}}\), and fractional derivative order \(\alpha = 0.6\). Dynamic characteristics of the considered system are provided in Tables 2 and 3.

Fig. 3
figure 3

a 3D-Frame with dampers, b dampers placement

Table 2 Dynamic characteristics for 3D-frame with dampers
Table 3 Eigenvectors corresponding to repeated eigenvalues

Two cases of changing the design parameters of damper 1 (D1 in Fig. 3b) were considered: in the first case, the damping coefficient \({c}_{\text{1,1}}\) changes, and in the second, the coefficient \({\alpha }_{1}\) changes. Sensitivities of eigenvalues and eigenvectors are presented in Tables 4, 5 and 6.

Table 4 Sensitivity of the eigenvalues with respect to change parameters \({c}_{\text{1,1}}\) and \({\alpha }_{1}\)
Table 5 Sensitivity of eigenvectors corresponding to repeated eigenvalues with respect to parameter \({c}_{\text{1,1}}\)
Table 6 Sensitivity of eigenvectors corresponding to repeated eigenvalues with respect to parameter \({\alpha }_{1}\)

It is worth noting that, for eigenvector sensitivities with respect to a change in the parameter α, the second-order sensitivities are of similar or, in some cases, even larger magnitude than the first-order sensitivities. This demonstrates that neglecting second-order sensitivities when predicting the response of a structure to parameter variations can lead to erroneous results. The correctness of the derived sensitivities was verified based on Eqs. (60) and (61). Exact solutions obtained after the parameter changes were compared with approximate results calculated using Eq. (60) for first-order sensitivities and Eq. (61) for second-order sensitivities. The results are presented in Tables 7 and 9 for natural frequencies and in Tables 8 and 10 for non-dimensional damping ratios. First-order sensitivities were calculated from Eq. (56) and second-order sensitivities from Eqs. (57) and (58).

Table 7 Comparison of natural frequencies ω1

The results presented in Tables 7, 8, 9 and 10 show that in most cases the calculated sensitivities provide a very accurate approximation for predicting the structural response after a parameter change. Moreover, it is evident that second-order sensitivity yields results closer to the exact solution. It is worth analyzing the results for the non-dimensional damping ratio \({\gamma }_{4}\) with respect to the change in the parameter α1. The first- and second-order sensitivities are \(\frac{\partial {\gamma }_{4}}{\partial \alpha }=0.002975\) and \(\frac{{\partial }^{2}{\gamma }_{4}}{\partial {\alpha }^{2}}=-0.121009\), respectively. The first-order sensitivity is positive, indicating that an increase in the parameter's value leads to an increase in the non-dimensional damping ratio. However, this holds true only for a 5% change. For larger changes, i.e., 15% and 30% of the parameter value, the non-dimensional damping ratio decreases. Therefore, only the inclusion of second-order sensitivity allows for the accurate determination of these values, which highlights the significance of considering second-order sensitivity in such analyses.

Table 8 Comparison of non-dimensional damping ratio γ1
Table 9 Comparison of natural frequencies ω4
Table 10 Comparison of non-dimensional damping ratio γ4

Tables 11, 12, 13 and 14 show a comparison of eigenvectors after a 5% and 30% change in the parameter \({c}_{\text{1,1}}\).

Table 11 Comparison of eigenvector q1 for a 5% change in parameter c1,1
Table 12 Comparison of eigenvector q1 for a 30% change in parameter c1,1
Table 13 Comparison of eigenvector q4 for a 5% change in parameter c1,1
Table 14 Comparison of eigenvector q4 for a 30% change in parameter c1,1

The results presented demonstrate that the provided method yields accurate results when calculating predicted values of eigenvectors corresponding to repeated eigenvalues. In many cases, second-order sensitivity offers a significantly better approximation than first-order sensitivity. This superiority is particularly evident for large parameter variations, for example, 30%.

5.3 Example 3 – 3D-truss with dampers

A spatial truss structure with dampers is under analysis (Fig. 4). The dampers are described by a fractional Kelvin model (Fig. 5) where the elements of the matrix \({\mathbf{G}}\left( s \right)\) take the form:

$$g_{e} \left( s \right) = c_{1,e} s^{{\alpha_{e} }} , {\mathbf{K}}_{v,e} = {\mathbf{L}}_{e}$$
(65)
Fig. 4
figure 4

a Schematic of a 3D-Truss with dampers, b location of the dampers

Fig. 5
figure 5

Fractional Kelvin model

Similarly to the Zener model, an additional matrix \({\mathbf{K}}_{0}\), described by Eq. (64), needs to be created. It is assumed that the cross section of the rods is a circular pipe with dimensions 10 cm × 20 mm. The rod length is 2 m, and the Young's modulus of steel is taken as \(E = 210\; {\text{GPa}}\). The linear mass of the rod is \(m = 13.6\; {\text{kg}}/{\text{m}}\). All dampers have the same parameters: \(k_{0} = 4000\; {\text{kN}}/{\text{m}}\), \(c_{0} = 500\;{\text{kNs}}^{\alpha } /{\text{m}}\), \(\alpha = 0.6\). The considered system has 19 dynamic degrees of freedom, and the solution consists of 9 distinct eigenvalues and 5 repeated eigenvalues. The accuracy of the solution will be assessed for the repeated eigenvalue \( - 2.5659 + 550.2747{\text{i}}\). Sensitivities due to changes in the parameters \(c_{0}\), \(k_{0}\), and \(\alpha\) are presented in Table 15. Using Eqs. (60) and (61), predicted values were calculated, considering first- and second-order sensitivities for parameter variations up to 30%. The obtained values are compared to the exact solution in Figs. 6, 7, 8 and 9. The legend for the graphs is provided in Fig. 6. Figure 9 presents a comparison with simultaneous changes in the two parameters \({\alpha }_{1}\) and \({c}_{\text{0,1}}\). In this case, Eqs. (60) and (61) take the following forms:

$$f\left( {p_{k} + {\Delta }p_{k} } \right) = f\left( {p_{k} } \right) + \mathop \sum \limits_{k = 1}^{2} \frac{{\partial f\left( {p_{k} } \right)}}{{\partial p_{k} }}\Delta p_{k}$$
(66)

for the first-order sensitivity and

$$f\left( {p_{k} + {\Delta }p_{k} } \right) = f\left( {p_{k} } \right) + \mathop \sum \limits_{k = 1}^{2} \frac{{\partial f\left( {p_{k} } \right)}}{{\partial p_{k} }}\Delta p_{k} + \frac{1}{2}\mathop \sum \limits_{k = 1}^{2} \frac{{\partial^{2} f\left( {p_{k} } \right)}}{{\partial p_{k}^{2} }}\Delta p_{k}^{2}$$
(67)

for the second-order sensitivity, assuming that only the elements on the main diagonal of the Hessian matrix are considered [57].
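Equations (66) and (67) can be sketched directly. The minimal illustration below uses a hypothetical quadratic response function \(f\) with no cross terms, for which the diagonal-Hessian expansion (67) is exact; all numerical values are illustrative, not taken from the example.

```python
# First- and second-order Taylor predictions, Eqs. (66)-(67),
# for two parameters with a diagonal Hessian.
def predict_first(f0, grad, dp):
    # Eq. (66): f(p + dp) ~ f(p) + sum_k df/dp_k * dp_k
    return f0 + sum(g * d for g, d in zip(grad, dp))

def predict_second(f0, grad, hess_diag, dp):
    # Eq. (67): adds 0.5 * sum_k d2f/dp_k^2 * dp_k^2
    return predict_first(f0, grad, dp) + 0.5 * sum(h * d**2 for h, d in zip(hess_diag, dp))

# Hypothetical response with no cross terms: f = 3 + 2 p1 + p2 + 0.5 p1^2 + p2^2
f = lambda p1, p2: 3 + 2*p1 + p2 + 0.5*p1**2 + p2**2
p = (1.0, 2.0)
f0 = f(*p)                        # 11.5
grad = (2 + p[0], 1 + 2*p[1])     # analytic first derivatives: (3.0, 5.0)
hess = (1.0, 2.0)                 # analytic diagonal second derivatives
dp = (0.3, -0.2)                  # +30% of p1 and -10% of p2

print(predict_first(f0, grad, dp))
print(predict_second(f0, grad, hess, dp))   # exact for this f
print(f(p[0] + dp[0], p[1] + dp[1]))
```

Because this test function has no mixed second derivatives, the second-order prediction reproduces the exact value, while the first-order prediction shows the truncation error that grows with the parameter variation.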

Table 15 The sensitivity of eigenvalue (one of the first pair of repeated eigenvalues)
Fig. 6 Comparison of the eigenvalues with respect to the change of \({c}_{\text{0,1}}\) (real and imaginary parts)

Fig. 7 Comparison of the eigenvalues with respect to the change of \({k}_{\text{0,1}}\) (real and imaginary parts)

Fig. 8 Comparison of the eigenvalues with respect to the change of \({\alpha }_{1}\) (real and imaginary parts)

Fig. 9 Comparison of the eigenvalues with respect to the simultaneous change of \({\alpha }_{1}\) and \({c}_{\text{0,1}}\) (real and imaginary parts)

All the diagrams show that including second-order sensitivities provides better convergence to the exact solution. However, for certain parameters (\({c}_{\text{0,1}}\) and \({k}_{\text{0,1}}\)), first-order sensitivity already yields sufficiently close values, even for large parameter variations, making it unnecessary to calculate second-order sensitivities. On the other hand, for the parameter \(\alpha\), first-order sensitivity gives solutions close to the exact one only for small variations, up to about 5%. For larger parameter variations, it is advisable to use second-order sensitivity to accurately determine the changes in the structural response. The derived formulas also allow for assessing the change in the structural response under simultaneous changes of multiple parameters. In Fig. 9, it can be observed that even when one of the changing parameters is \(\alpha\), good convergence to the exact solution is achieved for changes of up to about 15%.

5.4 Example 4 – system with equal eigenvalue sensitivities

To illustrate the formulas derived in Sect. 3, a three-degree-of-freedom system was considered. The system is based on examples analyzed in [14] and [16] but modified by adding damping elements (Fig. 10).

Fig. 10 Three-degree-of-freedom system

The following values were used for the calculations: \(m_{1} = 1\; {\text{kg}}\), \(m_{2} = 4\; {\text{kg}}\), \(m_{3} = 1\; {\text{kg}}\), \(k_{1} = 8\; {\text{N}}/{\text{m}}\), \(k_{2} = 2\; {\text{N}}/{\text{m}}\), \(k_{3} = 2\; {\text{N}}/{\text{m}}\), \(k_{4} = 1\; {\text{N}}/{\text{m}}\), \(c_{1} = 4\; {\text{Ns}}/{\text{m}}\), \(c_{2} = 1\; {\text{Ns}}/{\text{m}}\), \(c_{3} = 1\; {\text{Ns}}/{\text{m}}\), \(c_{4} = 0.5\; {\text{Ns}}/{\text{m}}\). The matrices \({\mathbf{M}}\), \({\mathbf{K}}\), and \({\mathbf{G}}\left( s \right)\) are as follows:

$${\mathbf{M}} = \left[ {\begin{array}{*{20}c} {m_{1} } & 0 & 0 \\ 0 & {m_{2} } & 0 \\ 0 & 0 & {m_{3} } \\ \end{array} } \right], {\mathbf{K}} = \left[ {\begin{array}{*{20}c} {k_{2} + k_{4} } & { - k_{2} } & { - k_{4} } \\ { - k_{2} } & {k_{1} + k_{2} + k_{3} } & { - k_{3} } \\ { - k_{4} } & { - k_{3} } & {k_{3} + k_{4} } \\ \end{array} } \right]\;{\text{and}}\;{\mathbf{G}}\left( s \right) = s\left[ {\begin{array}{*{20}c} {c_{2} + c_{4} } & { - c_{2} } & { - c_{4} } \\ { - c_{2} } & {c_{1} + c_{2} + c_{3} } & { - c_{3} } \\ { - c_{4} } & { - c_{3} } & {c_{3} + c_{4} } \\ \end{array} } \right]$$
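Since \({\mathbf{G}}(s) = s{\mathbf{C}}\) here, the nonlinear eigenproblem reduces to the quadratic one \(\left( s^{2}{\mathbf{M}} + s{\mathbf{C}} + {\mathbf{K}} \right){{\varvec{\upvarphi}}} = {\mathbf{0}}\), whose spectrum can be cross-checked with a standard state-space linearization. A minimal sketch (a numerical cross-check only, not the method of the paper):

```python
import numpy as np

# System data of Example 4
m1, m2, m3 = 1.0, 4.0, 1.0
k1, k2, k3, k4 = 8.0, 2.0, 2.0, 1.0
c1, c2, c3, c4 = 4.0, 1.0, 1.0, 0.5

M = np.diag([m1, m2, m3])
K = np.array([[k2 + k4, -k2,           -k4],
              [-k2,      k1 + k2 + k3, -k3],
              [-k4,     -k3,            k3 + k4]])
C = np.array([[c2 + c4, -c2,           -c4],
              [-c2,      c1 + c2 + c3, -c3],
              [-c4,     -c3,            c3 + c4]])

# State-space linearization of (s^2 M + s C + K) phi = 0:
# the eigenvalues of A = [[0, I], [-M^-1 K, -M^-1 C]]
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
s = np.linalg.eigvals(A)
print(np.sort_complex(s))   # contains -1 +/- 1.7321i (doubled) and -0.25 +/- 0.9682i
```

The computed spectrum confirms the repeated pair \(s_{1} = s_{2} = -1.0 + 1.7321{\text{i}}\) and the distinct eigenvalue \(s_{3} = -0.25 + 0.9682{\text{i}}\) reported below, together with their complex conjugates.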

To calculate sensitivities, it was assumed that the chosen design parameter is a combination of the parameters \(k_{1}\) and \(k_{4}\), whose stiffnesses change in the ratio 12:1. Therefore, the derivatives of the matrices \({\mathbf{M}}\), \({\mathbf{K}}\), and \({\mathbf{G}}\left( s \right)\) are as follows:

$$\frac{{\partial {\mathbf{K}}}}{\partial p} = \left[ {\begin{array}{*{20}c} 1 & 0 & { - 1} \\ 0 & {12} & 0 \\ { - 1} & 0 & 1 \\ \end{array} } \right], \frac{{\partial^{2} {\mathbf{K}}}}{{\partial p^{2} }} = \frac{{\partial^{3} {\mathbf{K}}}}{{\partial p^{3} }} = 0, \frac{{\partial {\mathbf{M}}}}{\partial p} = \frac{{\partial^{2} {\mathbf{M}}}}{{\partial p^{2} }} = \frac{{\partial^{3} {\mathbf{M}}}}{{\partial p^{3} }} = 0, \frac{{\partial {\mathbf{G}}\left( s \right)}}{\partial p} = \frac{{\partial^{2} {\mathbf{G}}\left( s \right)}}{{\partial p^{2} }} = \frac{{\partial^{3} {\mathbf{G}}\left( s \right)}}{{\partial p^{3} }} = 0$$


The eigenvalues of the problem are \(s_{1} = s_{2} = - 1.0 + 1.7321{\text{i}}\) and \(s_{3} = - 0.25 + 0.9682{\text{i}}\), with the first two being repeated. Subsequently, the sensitivities of the eigenvalues \(s_{1}\) and \(s_{2}\) and their associated eigenvectors were calculated. After solving the additional eigenproblem (12), the repeated eigenvalue sensitivities were found to be \(\frac{{\partial s_{1} }}{\partial p} = \frac{{\partial s_{2} }}{\partial p} = 0.5774{\text{i}}\). Second-order sensitivities were computed after solving another additional eigenproblem (39): \(\frac{{\partial^{2} s_{1} }}{{\partial p^{2} }} = 0.3333 + 0.1925{\text{i}}\) and \(\frac{{\partial^{2} s_{2} }}{{\partial p^{2} }} = 0.0 + 0.0{\text{i}}\). Using the transformation matrix \({{\varvec{\upbeta}}}_{2}\), the basis eigenvectors were determined:

$${{\varvec{\Psi}}} = \left[ {\begin{array}{*{20}c} { - 0.1551 + 0.1551{\text{i}}} & {0.2686 - 0.2686{\text{i}}} \\ {0.1551 - 0.1551{\text{i}}} & {0.0 + 0.0{\text{i}}} \\ { - 0.1551 + 0.1551{\text{i}}} & { - 0.2686 + 0.2686{\text{i}}} \\ \end{array} } \right]$$

The sensitivities of the elements of the eigenvector, calculated based on Eq. (35), are as follows:

$$\frac{{\partial {{\varvec{\Psi}}}}}{\partial p} = \left[ {\begin{array}{*{20}c} { - 0.0120 - 0.1671{\text{i}}} & { - 0.0448 + 0.0448{\text{i}}} \\ { - 0.0448 - 0.0448{\text{i}}} & {0.0 + 0.0{\text{i}}} \\ { - 0.0120 - 0.1671{\text{i}}} & {0.0448 - 0.0448{\text{i}}} \\ \end{array} } \right]$$

To verify the calculations, it was assumed that the selected design parameter varies by 5%. The exact solution for the modified stiffness values \({k}_{1}\) and \({k}_{4}\) was compared to the solution obtained using the Taylor series expansion (60). The results are presented in Table 16.

Table 16 Comparison of eigenvalues and eigenvectors for the case of repeated first derivatives of eigenvalues

The comparison of the results presented in Table 16 demonstrates that the obtained sensitivities are accurate and allow for a good approximation of the dynamic characteristics of systems in which both the eigenvalues and the eigenvalue sensitivities are repeated. At the same time, it is worth noting that for the eigenvalues, first-order sensitivity alone gives a misleading result, as it suggests that the eigenvalues remain repeated after the design parameter changes. Only second-order sensitivity allows for a correct assessment. The sensitivity of the eigenvectors, on the other hand, provides a very good approximation of the new eigenvectors.
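The repeated first derivative \(\frac{\partial s_{1}}{\partial p} = \frac{\partial s_{2}}{\partial p} = 0.5774{\text{i}}\) can also be cross-checked by a finite-difference experiment on the quadratic eigenproblem: perturbing \(k_{1}\) and \(k_{4}\) in the 12:1 ratio splits the repeated pair, yet both branches depart with the same slope. A minimal sketch (a numerical check via state-space linearization, not the additional-eigenproblem method of the paper):

```python
import numpy as np

def eigs(dp):
    """Eigenvalues of (s^2 M + s C + K) phi = 0 with k1, k4 perturbed
    in the 12:1 ratio by dp (state-space linearization)."""
    k1, k2, k3, k4 = 8.0 + 12*dp, 2.0, 2.0, 1.0 + dp
    M = np.diag([1.0, 4.0, 1.0])
    K = np.array([[k2 + k4, -k2,           -k4],
                  [-k2,      k1 + k2 + k3, -k3],
                  [-k4,     -k3,            k3 + k4]])
    C = np.array([[ 1.5, -1.0, -0.5],
                  [-1.0,  6.0, -1.0],
                  [-0.5, -1.0,  1.5]])
    A = np.block([[np.zeros((3, 3)), np.eye(3)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    return np.linalg.eigvals(A)

s0 = -1.0 + np.sqrt(3)*1j      # the repeated eigenvalue
h = 1e-6                       # finite-difference step
# the repeated pair splits for dp > 0; take the two branches nearest s0
pair = sorted(eigs(h), key=lambda x: abs(x - s0))[:2]
fd = [(x - s0)/h for x in pair]
print(fd)                      # both close to 0.5774i
```

Both difference quotients approach \(0.5774{\text{i}}\), consistent with the equal first-order sensitivities of the repeated pair; the branches differ only at second order.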

5.5 Discussion on condition numbers

To improve the conditioning of the coefficient matrices, a special factor κ has been introduced. It is calculated as the ratio of the largest element of the matrix \(\mathbf{D}\left(\lambda \right)\) to the largest element of \(\frac{\partial \mathbf{D}\left(\lambda \right)}{\partial s}\) (see Eq. (17)). Table 17 provides the condition numbers of the coefficient matrices for all examples.

Table 17 Comparison of condition numbers of coefficient matrices in Eq. (17)

Comparing the results provided in Table 17, it is evident that the introduction of the additional factor κ significantly improves the condition number in most cases. For Example 4, the difference is small, but even without the κ factor, the condition number of the coefficient matrix is not high.
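The computation of κ can be illustrated on the distinct eigenvalue of Example 4. The sketch below assumes a Nelson-type bordered coefficient matrix (the exact structure of Eq. (17) is not reproduced here) and computes κ as defined above:

```python
import numpy as np

# Example 4 data (assembled matrices)
M = np.diag([1.0, 4.0, 1.0])
K = np.array([[ 3.0, -2.0, -1.0],
              [-2.0, 12.0, -2.0],
              [-1.0, -2.0,  3.0]])
C = np.array([[ 1.5, -1.0, -0.5],
              [-1.0,  6.0, -1.0],
              [-0.5, -1.0,  1.5]])

s3 = -0.25 + np.sqrt(0.9375)*1j   # the distinct eigenvalue
D = s3**2 * M + s3 * C + K        # D(s) = s^2 M + s C + K, singular at s3
dD = 2*s3 * M + C                 # dD/ds
phi = np.array([1.0, 0.5, 1.0])   # eigenvector associated with s3

# kappa: ratio of the largest element of D to the largest element of dD/ds
kappa = np.max(np.abs(D)) / np.max(np.abs(dD))

def bordered(scale):
    # Nelson-type bordered coefficient matrix (structure assumed here)
    B = np.zeros((4, 4), dtype=complex)
    B[:3, :3] = D
    B[:3, 3] = scale * phi
    B[3, :3] = scale * phi
    return B

print(kappa)
print(np.linalg.cond(bordered(1.0)), np.linalg.cond(bordered(kappa)))
```

For this system κ is close to one, so the scaling changes the conditioning only slightly, in line with the observation for Example 4 in Table 17; in the truss and frame examples, where the stiffness and derivative matrices differ by orders of magnitude, the effect of κ is far more pronounced.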

6 Conclusions

The paper introduces a method for calculating the sensitivities of repeated eigenvalues and their associated eigenvectors for systems with viscoelastic elements. The mechanical behavior of these elements is described using rheological models, both classical and based on fractional derivatives. The sensitivities of eigenvalues are determined by solving an additional eigenvalue problem, while the sensitivities of eigenvectors are computed by dividing derivatives into particular and homogeneous solutions. The paper derives formulas for both first- and second-order sensitivities, marking the first time such calculations have been presented for the discussed systems. It is also demonstrated that the proposed method can be applied to a special case where the sensitivities of eigenvalues are themselves repeated.

An additional factor in the coefficient matrix is introduced to improve its condition number. The provided examples demonstrate the substantial improvement in the condition number achieved by this factor.

It is proven that the derived formulas can be applied to specific examples involving systems with viscoelastic elements. Furthermore, it has also been demonstrated in the presented examples that the application of second-order sensitivity analysis is necessary in some cases to accurately predict the behavior of the analyzed structure.