How to Compute Invariant Manifolds and their Reduced Dynamics in High-Dimensional Finite-Element Models

Invariant manifolds are important constructs for the quantitative and qualitative understanding of nonlinear phenomena in dynamical systems. In nonlinear damped mechanical systems, for instance, spectral submanifolds have emerged as useful tools for the computation of forced response curves, backbone curves, and detached resonance curves (isolas) via exact reduced-order models. For conservative nonlinear mechanical systems, Lyapunov subcenter manifolds and their reduced dynamics provide a way to identify nonlinear amplitude-frequency relationships in the form of conservative backbone curves. Despite these powerful predictions offered by invariant manifolds, their use has largely been limited to low-dimensional academic examples, because several challenges render their computation unfeasible for realistic engineering structures described by finite-element models. In this work, we address these computational challenges and develop methods for computing invariant manifolds and their reduced dynamics in very high-dimensional nonlinear systems arising from the spatial discretization of the governing partial differential equations. We illustrate our computational algorithms on finite-element models of mechanical structures that range from a simple beam containing tens of degrees of freedom to an aircraft wing containing more than a hundred thousand degrees of freedom.


Introduction
Invariant manifolds are low-dimensional surfaces in the phase space of a dynamical system that constitute organizing centers of nonlinear dynamics. These surfaces are composed of full system trajectories that stay on them for all times, partitioning the phase space locally into regions of different behavior. For instance, invariant manifolds attached to fixed points can be viewed as the nonlinear analogues of the (flat) modal subspaces of the linearized system. Classic examples are the stable, unstable and center manifolds tangent to the stable, unstable and center subspaces of fixed points (see, e.g., Guckenheimer & Holmes [1]). The most important class of invariant manifolds consists of those that attract other trajectories, so that their own low-dimensional internal dynamics acts as a mathematically exact reduced-order model for the full, high-dimensional system. The focus of this article is to compute such invariant manifolds and their reduced dynamics accurately and efficiently in very high-dimensional nonlinear systems.
The theory of invariant manifolds has matured over more than a century of research and has been applied to numerous fields for the qualitative understanding of nonlinear behavior of systems (see Fenichel [2,3,4], Hirsch et al. [5], Wiggins [6], Eldering [7], Nipp & Stoffer [8]). The computation of invariant manifolds, on the other hand, is a relatively new and rapidly evolving discipline due to advances in scientific computing. In this paper, we address some challenges that have been hindering the computation of invariant manifolds in high-dimensional mechanical systems arising from spatially discretized partial differential equations (PDEs).
The methods for computing invariant manifolds can be divided into two categories: local and global. Local methods approximate an invariant manifold in the neighborhood of simpler invariant sets, such as fixed points, periodic orbits or invariant tori. Such local approximations are constructed via Taylor series around the fixed point or Fourier-Taylor series around the periodic orbit/invariant torus (see Simo [25]). Global methods, on the other hand, seek invariant manifolds globally in the phase space. Global techniques generally employ numerical continuation to grow invariant manifolds from a local approximation, which may be obtained from the linearized dynamics (see Krauskopf et al. [10] for a survey).

Global techniques
A key aspect of most global techniques is to discretize the manifold into a mesh, e.g., via a collocation or a spectral approach (see Dankowicz & Schilder [19], Krauskopf et al. [11]), and to solve invariance equations for the unknowns at the mesh points. For growing an M-dimensional manifold via q collocation/spectral points (along each of the M dimensions) in an N-dimensional dynamical system, one needs to solve a system of O(N q^M) nonlinear algebraic equations at each continuation step. This is achieved via an iterative solver such as Newton's method. As N invariably becomes large in the case of discretized PDEs governing mechanics applications, numerical continuation of invariant manifolds via collocation and spectral approaches becomes computationally intractable. Indeed, while global approaches are often discussed for general systems, the most common applications of these approaches tend to involve low-dimensional problems, such as the computation of the Lorenz manifold (Krauskopf & Osinga [9]).
Global approaches also include the continuation of trajectory segments along the invariant manifold, expressed as a family of trajectories. This is achieved by formulating a two-point boundary value problem (BVP) satisfied by a trajectory on the manifold and numerically following a branch of solutions (see Krauskopf et al. [10], Guckenheimer et al. [49]). While collocation and spectral methods are valid means to achieve this end as well, the (multiple) shooting method (see Keller [12], Stoer & Bulirsch [13]) has a distinguishing appeal from a computational perspective for high-dimensional problems. In the (multiple) shooting method, an initial guess for one point on the solution trajectory is iteratively corrected until the two-point BVP is solved up to a required precision. In each iteration, one performs numerical time integration of the full nonlinear system between the two points of the BVP, which is a computationally expensive undertaking for large systems. However, time integration involves the solution of O(N) nonlinear algebraic equations at each time step, in contrast to collocation and spectral methods, which require O(N q) nonlinear algebraic equations to be solved simultaneously for q collocation/spectral points. Coupled with advances in domain decomposition methods for time integration (see Carraro et al. [15]), the multiple shooting method provides a feasible alternative to collocation and spectral methods. Still, covering a multi-dimensional invariant manifold using trajectory segments in a high-dimensional system is an elusive task even for multiple shooting methods (see, e.g., Jain et al. [60]).
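As an illustration of the shooting idea, the following Python sketch finds a periodic orbit of a single-degree-of-freedom forced Duffing oscillator by single shooting. The oscillator, its parameter values and the solver tolerances are illustrative choices, not taken from the examples of this paper; note that each residual evaluation requires one time integration of the full system, which is the dominant cost in high dimensions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Illustrative forced Duffing oscillator: x'' + c x' + k x + g x^3 = A cos(Omega t)
c, k, g, A, Omega = 0.05, 1.0, 0.5, 0.1, 1.2
T = 2 * np.pi / Omega  # forcing period

def rhs(t, z):
    x, v = z
    return [v, -c * v - k * x - g * x**3 + A * np.cos(Omega * t)]

def flow(z0):
    """Time-T flow map of the full nonlinear system (one 'shot')."""
    sol = solve_ivp(rhs, (0.0, T), z0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Shooting residual: a point on a periodic orbit returns to itself after time T.
residual = lambda z0: flow(z0) - z0

z_star = fsolve(residual, np.zeros(2), xtol=1e-12)
print(np.linalg.norm(residual(z_star)))  # small residual: periodic orbit found
```

The finite-difference Jacobian of the residual used internally by the solver requires additional time integrations, which is precisely the per-iteration cost discussed above.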
A number of numerical continuation packages have enabled the computation of global invariant manifolds via collocation, spectral or multiple shooting methods. AUTO [16], a FORTRAN-based package, constitutes the earliest organized effort towards continuation and bifurcation analysis of parameter-dependent ODEs. AUTO employs orthogonal collocation to approximate solutions and is able to continue solution families in two or three parameters. The MATLAB-based package Matcont [17] addresses some of the limitations of AUTO, albeit at a loss of computational performance; additionally, Matcont can also perform normal form analysis. coco [19] is an extensively documented, object-oriented MATLAB package which enables continuation via multidimensional atlas algorithms (Dankowicz et al. [14]) and implements collocation as well as spectral methods. Another recent MATLAB package, NLvib [18], implements the pseudo-arclength technique for the continuation of single-parameter families of periodic orbits in nonlinear mechanical systems via the shooting method or the spectral method (commonly referred to as harmonic balance in mechanics). Similar continuation packages are also available in the context of delay differential equations (see DDE-BIFTOOL [23], written in MATLAB, and Knut [24], written in C++). The main focus of all these and other similarly useful packages is on implementing automated continuation procedures, including demonstrations on low-dimensional examples, and not on the computational complexity of the operations as N increases. As discussed above, the collocation/spectral/shooting techniques invariably employed in such packages limit their ability to compute invariant manifolds in high-dimensional mechanics problems, where the system dimensionality N ranges from several thousands to millions.

Local techniques
In contrast to global techniques, local techniques for invariant manifold computations produce approximations valid in a neighborhood of a fixed point, periodic orbit or invariant torus. As a result, local techniques are generally unable to compute homoclinic and heteroclinic connections. Nonetheless, in engineering applications, local approximations of invariant manifolds often suffice for assessing the influence of nonlinearities on a well-understood linearized response.
Center manifold computations and the associated local bifurcation analysis via normal forms (see Guckenheimer & Holmes [1]) are classic applications of local approximations, where the manifold is expressed locally as a graph over the center subspace. For an M-dimensional manifold, this local graph is sought via an M-variate Taylor series in which the coefficient of each monomial term is unknown. These unknown coefficients are determined by solving the invariance equations recursively at each polynomial order of approximation, i.e., the solution at a lower order can be computed without knowledge of the higher-order terms. The computational procedure simply involves the solution of a system of O(N) linear equations for each monomial in the Taylor expansion (see Simo [25] for flows, Fuming & Küpper [26] for maps). Thus, in the computational context of high-dimensional problems, local techniques that employ Taylor series approximations are far more feasible than global techniques involving the continuation of collocation-, spectral- or shooting-based solutions.
More recently, the parametrization method has emerged as a rigorous framework for the local analysis and computation of invariant manifolds of discrete and continuous time dynamical systems. This method was first developed in papers by Cabré, Fontich & de la Llave [27,28,29] for invariant manifolds tangent to eigenspaces of fixed points of nonlinear mappings on Banach spaces, and then extended to whiskered tori by Haro & de la Llave [30,32,31]. We refer to the monograph by Haro et al. [33] for an overview of the results. An important feature of the parametrization method is that it does not require the manifold parametrization to be the graph of a function and hence allows for folds in the manifold. Furthermore, the method returns the dynamics on the invariant manifold along with its embedding. The formal computation can again be carried out via Taylor series expansions when the invariant manifold is attached to a fixed point and via Fourier-Taylor expansions when it is attached to an invariant torus perturbing from a fixed point (see, e.g., Mireles James [34], Castelli et al. [35], Ponsioen et al. [37,42]).
The main focus of the parametrization method has been on computer-assisted proofs of the existence and uniqueness of invariant manifolds, for which the dynamical system is conveniently diagonalized at the linear level. Furthermore, as discussed by Haro et al. [33], this diagonalization allows a choice between different styles of parametrization for the reduced dynamics on the manifold, such as a normal form style, a graph style or a mixed style. Recent applications of the parametrization method include the computation of spectral submanifolds or SSMs (Haller & Ponsioen [52]) and Lyapunov subcenter manifolds or LSMs (Kelley [47]). For these manifolds, the normal form parametrization style can be used to directly extract forced response curves (FRCs) and backbone curves in nonlinear mechanical systems, as we will discuss in this paper (see Ponsioen et al. [37,42], Breunung & Haller [38], Veraszto et al. [39]).

Our contributions
While helpful for proofs and expositions, the routinely performed diagonalization and the associated linear change of coordinates render the parametrization method unfeasible for high-dimensional mechanics problems for two reasons. First, diagonalization involves the computation of all N eigenvalues of an N-dimensional dynamical system. Second, the nonlinear coefficients in physical coordinates exhibit an inherent sparsity in mechanics applications that is annihilated by the linear change of coordinates associated to diagonalization. Both factors lead to unmanageable computation times and memory requirements when N becomes large, as we discuss in Section 3 of this manuscript.
To address these issues, we develop here a new computational methodology for local approximations of invariant manifolds via the parametrization method. The key aspects making this methodology scalable to high-dimensional mechanics problems are the use of physical coordinates and of only the minimal number of eigenvectors. In the autonomous setting, we compute invariant manifolds attached to fixed points, developing expressions for the Taylor series coefficients that determine the parametrization of the invariant manifold as well as its reduced dynamics in different styles of parametrization (see Section 4). We develop similar expressions in the non-autonomous, periodic or quasiperiodic setting, where we compute invariant manifolds, or whiskers, attached to an invariant torus that perturbs from a hyperbolic fixed point under the addition of small-amplitude non-autonomous terms. In this case, we compute the coefficients of the Fourier-Taylor series that parametrize the invariant manifold as well as its reduced dynamics in different parametrization styles (see Section 5). Finally, we apply this methodology to high-dimensional examples arising from the finite-element discretization of structural mechanics problems, whose forced response curves we recover from a normal form style parametrization of SSMs (see Section 6).
Related computational ideas have already been used in other contexts. For instance, Beyn & Kleß [61] performed similar Taylor series-based computations of invariant manifolds attached to fixed points in physical coordinates using master modes. Their work predates the parametrization method and does not involve the choice of reduced dynamics or normal forms. Recently, Carini et al. [58] focused on computing center manifolds of fixed points and their normal forms using only master modes in physical coordinates. While this is an application of the parametrization method in the normal form style, they attribute their results to earlier related work by Coullet & Spiegel [59] and use these center manifolds for analyzing the stability of flows around bifurcating parameter values.
More recently, Vizzaccaro et al. [43] and Opreni et al. [44] have computed normal forms for second-order, proportionally-damped mechanical systems with up to cubic nonlinearities and derived explicit expressions up to cubic-order accuracy (see also Touzé et al. [45] for a review). This is a direct application of the parametrization method via a normal-form style parametrization to formally compute assumed invariant manifolds whose existence/uniqueness is a priori unclear (cf. Haller & Ponsioen [52]). These results in [43,44] provide low-order approximations to SSMs [52], whose computation up to arbitrarily high orders of accuracy has already been automated in prior work [37,42] for mechanical systems with diagonalized linear part. A major computational advance in the approach of Vizzaccaro et al. [43] is the nonintrusive use of finite element software to compute normal form coefficients up to cubic order. All these prior results, however, are fundamentally developed for unforced (autonomous) systems.
The computational procedure we develop is generally applicable to first-order systems with smooth nonlinearities and periodic or quasiperiodic forcing, and it enables the automated computation of various types of invariant manifolds, such as stable, unstable and center manifolds, LSMs, and SSMs, up to arbitrarily high orders of accuracy in physical coordinates. Finally, a numerical implementation of these computational techniques is available in the form of an open-source MATLAB package, SSMTool 2.0 [69], which is integrated with a generic finite-element solver (Jain et al. [70]) for mechanics problems. We describe some key symbols and the notation used in the remainder of this paper in Table 1 before proceeding to the technical setup in the next section.

General setup
Table 1. Notation used in the remainder of this paper.

Symbol | Meaning
n | Dimensionality of the full second-order mechanical system (1)
N | Dimensionality of the full first-order system (2) (N = 2n for mechanical systems)
E | Master (spectral) subspace of a fixed point of system (2)
W(E) | Invariant manifold tangent to E at its fixed point
M | dim(E) = dim(W(E)): dimensionality of the invariant manifold constructed around E
W : C^M → R^N | Parametrization of the invariant manifold W(E)
R : C^M → C^M | Parametrization of the reduced dynamics on W(E)
p ∈ C^M | Parametrization coordinates describing the reduced dynamics
(·)^⊤ | Transpose of a matrix or vector
(·)^⋆ | Complex conjugate transpose of a matrix or vector; adjoint of an operator
K | Number of rationally incommensurate forcing frequencies in system (1) or (2)
Ω ∈ R^K_+ | Quasiperiodic forcing frequencies
γ | Small-amplitude invariant torus of system (2)
W(E, γ) | Invariant manifold (whisker) of the torus γ perturbed from the spectral subspace E

We are mainly interested here in dynamical systems arising from mechanics problems. Such problems are governed by PDEs that are spatially discretized, typically via the finite element method. The discretization results in a system of second-order ordinary differential equations for the generalized displacement x(t) ∈ R^n, which can be written as

M ẍ + C ẋ + K x + f(x, ẋ) = ε f^ext(x, ẋ, Ωt).    (1)

Here M, C, K ∈ R^{n×n} are the mass, damping and stiffness matrices; f(x, ẋ) ∈ R^n is the purely nonlinear internal force; f^ext(x, ẋ, Ωt) ∈ R^n denotes the (possibly linear) external forcing with frequency vector Ω ∈ R^K for some K ≥ 0. The function f^ext is autonomous for K = 0, periodic in t for K = 1, and quasiperiodic for K > 1 with K rationally incommensurate frequencies. The second-order system (1) may be expressed in first-order form as

B ż = A z + F(z) + ε F^ext(z, Ωt),  z = (x, ẋ),    (2)

where A, B ∈ R^{2n×2n}, F : R^{2n} → R^{2n} and F^ext : R^{2n} × T^K → R^{2n} denote the first-order quantities derived from system (1).
Such a first-order conversion is not unique: two equivalent choices are given by (see Tisseur & Meerbergen [36])

(L1): A = [0 N; −K −C],  B = [N 0; 0 M],
(L2): A = [−K 0; 0 N],  B = [C M; N 0],

where N ∈ R^{n×n} may be chosen as any non-singular matrix. If the matrices M, C, K are symmetric, then the choice N = −K for (L1) and N = M for (L2) results in symmetric first-order matrices A, B. The computational methodology we will discuss applies to any first-order system of the form (2) for z ∈ R^N; in particular, N = 2n for second-order mechanical systems of the form (1). We first focus on the autonomous (ε = 0) limit of system (2), given by

B ż = A z + F(z),    (6)

whose linearization at the fixed point z = 0 is

B ż = A z.    (7)

The linear system (7) has invariant manifolds defined by eigenspaces of the generalized eigenvalue problem

A v_j = λ_j B v_j,  j = 1, …, N,    (8)

where for each distinct eigenvalue λ_j, there exists an eigenspace E_j ⊂ R^N spanned by the real and imaginary parts of the corresponding generalized eigenvector v_j ∈ C^N. These eigenspaces are invariant for the linearized system (7) and, by linearity, the subspace spanned by any combination of eigenspaces is also invariant for system (7). A general invariant subspace of this type is known as a spectral subspace [52] and is obtained by the direct summation of eigenspaces as

E_{j_1,…,j_q} := E_{j_1} ⊕ ⋯ ⊕ E_{j_q},    (9)

where ⊕ denotes the direct sum of vector spaces and E_{j_1,…,j_q} is the spectral subspace obtained from the eigenspaces E_{j_1}, …, E_{j_q} for some q ∈ N. Classic examples of spectral subspaces are the stable, unstable and center subspaces, denoted E^s, E^u and E^c and obtained from the eigenspaces associated to eigenvalues with negative, positive and zero real parts, respectively. By the center manifold theorem, these classic invariant subspaces of the linear system (7) persist as invariant manifolds under the addition of nonlinear terms in system (6). Specifically, there exist stable, unstable and center invariant manifolds W^s, W^u and W^c tangent to E^s, E^u and E^c at the origin 0 ∈ R^N, respectively.
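For concreteness, the following Python sketch assembles the pair (A, B) from given M, C, K using the (L2)-type linearization with N = M, a symmetry-preserving choice; the small mass-spring chain used as input is a stand-in for a finite-element model, and all parameter values are illustrative.

```python
import numpy as np

def first_order_form(Mm, Cm, Km):
    """Assemble a first-order pair (A, B) with B z' = A z, z = (x, x'),
    using an (L2)-type linearization with N = M, which yields symmetric
    A, B whenever M, C, K are symmetric."""
    n = Mm.shape[0]
    Z = np.zeros((n, n))
    A = np.block([[-Km, Z], [Z, Mm]])
    B = np.block([[Cm, Mm], [Mm, Z]])
    return A, B

# Small mass-spring chain as a stand-in for a finite-element model
n = 3
Km = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # stiffness
Mm = np.eye(n)                                          # mass
Cm = 0.02 * Km + 0.01 * Mm                              # Rayleigh damping
A, B = first_order_form(Mm, Cm, Km)
assert np.allclose(A, A.T) and np.allclose(B, B.T)      # symmetry preserved

# Consistency check: B z' = A z reproduces M x'' + C x' + K x = 0
x, v = np.random.rand(n), np.random.rand(n)
a = np.linalg.solve(Mm, -Cm @ v - Km @ x)               # accelerations
z, zdot = np.concatenate([x, v]), np.concatenate([v, a])
assert np.allclose(B @ zdot, A @ z)
```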
All these manifolds are invariant, and W^s and W^u are also unique (see, e.g., Guckenheimer & Holmes [1]). In analogy with the stable manifold W^s, which is the nonlinear continuation of the stable subspace E^s, a spectral submanifold (SSM) [52] is an invariant submanifold of the stable manifold W^s that serves as the smoothest nonlinear continuation of a given spectral subspace within E^s. The existence and uniqueness results for such spectral submanifolds under appropriate conditions are derived by Haller & Ponsioen [52] using the parametrization method of Cabré et al. [27,28,29]. The parametrization method also serves as a tool to compute these manifolds.
We are interested in locally approximating the invariant manifolds of the fixed point 0 ∈ R N of system (6) using the parametrization method. Let W(E) be an invariant manifold of system (6) which is tangent to a master spectral subspace E ⊂ R N at the origin such that dim(E) = dim (W(E)) = M < N.
Let V_E = [v_1, …, v_M] ∈ C^{N×M} be a matrix whose columns contain the (right) eigenvectors corresponding to the master modal subspace E. Furthermore, we define a dual matrix U_E = [u_1, …, u_M] ∈ C^{N×M} which contains the corresponding left eigenvectors spanning the adjoint subspace, i.e.,

u_j^⋆ A = λ_j u_j^⋆ B,  j = 1, …, M,    (10)

where we choose these eigenvectors to satisfy the normalization condition

u_j^⋆ B v_k = δ_{jk},  j, k = 1, …, M.    (11)

Using the eigenvalue problems (8)-(10), we obtain the following relations for the matrices V_E and U_E:

A V_E = B V_E Λ_E,  U_E^⋆ A = Λ_E U_E^⋆ B,    (12)

where Λ_E := diag(λ_1, …, λ_M). We note that if the matrices M, C, K are symmetric, then the matrices A, B will be symmetric as well. In that case, the left and right eigenvectors u_j, v_j coincide up to complex conjugation and we may conveniently choose U_E = V̄_E, with the overbar denoting complex conjugation.
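The computation of the master eigenvectors and their normalization can be sketched as follows; the small random pencil (A, B) and the selection of the M eigenvalues with smallest real part are illustrative choices, not prescriptions from this paper.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N, M = 6, 2                      # full dimension and master-subspace dimension
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N)) + N * np.eye(N)  # keep B well-conditioned

# Right eigenvectors: A v = lam B v ; left eigenvectors: u* A = lam u* B
lam, U, V = eig(A, B, left=True, right=True)
idx = np.argsort(lam.real)[:M]   # pick M "master" modes (illustrative choice)
lamE, VE, UE = lam[idx], V[:, idx], U[:, idx]

# Enforce the normalization u_j* B v_k = delta_jk on the master modes
for j in range(M):
    UE[:, j] /= (UE[:, j].conj() @ B @ VE[:, j]).conj()
assert np.allclose(UE.conj().T @ B @ VE, np.eye(M), atol=1e-8)

# Relations (12): A VE = B VE Lambda_E and UE* A = Lambda_E UE* B
LamE = np.diag(lamE)
assert np.allclose(A @ VE, B @ VE @ LamE, atol=1e-8)
assert np.allclose(UE.conj().T @ A, LamE @ UE.conj().T @ B, atol=1e-8)
```

Off-diagonal entries of U_E^⋆ B V_E vanish automatically by biorthogonality of left and right eigenvectors belonging to distinct eigenvalues, so only the diagonal scaling must be enforced.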
The common approach to local invariant manifold computation involves diagonalizing system (6) as

q̇ = Λ q + T(q),  T(q) := U^⋆ F(V q),    (14)

where Λ := diag(λ_1, …, λ_N), the matrices V, U ∈ C^{N×N} contain the full sets of right and left eigenvectors, and q ∈ C^N are modal coordinates with z = V q. When B = I_N, then, using the normalization condition (11), we obtain U^⋆ = V^{−1}, which results in the familiar diagonalized form (14) with Λ = V^{−1} A V. While the form (14) is very helpful for proving the existence and uniqueness properties of invariant manifolds, it presents a computationally intractable form for the actual computation of invariant manifolds in high-dimensional finite element problems, as we will see next.
Pitfalls of the diagonalized form (14)

In this work, we use the Kronecker notation for expressing smooth nonlinear functions as multivariate Taylor series in their arguments. The Kronecker product (also known as the outer/dyadic product) is commonly denoted by the symbol ⊗. For a column vector z ∈ R^N, the Kronecker product operation z ⊗ z returns the outer-product matrix z z^⊤ ∈ R^{N×N}. In index notation, we write

(z ⊗ z)_{ij} = z_i z_j,  i, j = 1, …, N.

The Kronecker notation is defined more generally for products of higher-order tensors, where a first-order tensor can be viewed as a vector, a second-order tensor as a matrix, and an order-k tensor as a k-dimensional array. Specifically, the Kronecker product of two tensors of orders p and q yields a tensor of order p + q. We refer to Van Loan [53] for a concise review of the Kronecker product and its properties. Now, the system nonlinearity F (see eq. (6)) can be expanded in terms of the physical coordinates z ∈ R^N as

F(z) = Σ_{k≥2} F_k z^{⊗k},    (17)

where z^{⊗k} denotes the term z ⊗ ⋯ ⊗ z (k times), containing the N^k monomials of degree k in the variables z. The array F_k ∈ R^{N×N^k} contains the coefficients of the nonlinearity F associated to each of these monomials. Similarly, the nonlinearity T (see eq. (14)) in modal coordinates q ∈ C^N can be expanded as

T(q) = Σ_{k≥2} T_k q^{⊗k},  T_k := U^⋆ F_k V^{⊗k} ∈ C^{N×N^k}.    (18)
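A minimal numerical illustration of this notation, with hypothetical coefficient entries for F_2:

```python
import numpy as np

z = np.array([1.0, 2.0, 3.0])          # N = 3
N = z.size

# Kronecker square: z ⊗ z stacks all degree-2 monomials z_i z_j (N^2 of them)
z2 = np.kron(z, z)
assert z2.shape == (N**2,)
assert np.allclose(z2.reshape(N, N), np.outer(z, z))   # (z ⊗ z)_{ij} = z_i z_j

# A quadratic nonlinearity F_2 z^{⊗2}: coefficients stored as an N x N^2 matrix
F2 = np.zeros((N, N**2))
F2[0, 0] = 1.0        # hypothetical entry: contributes z_1^2 to component 1
F2[2, 1] = -0.5       # hypothetical entry: contributes -0.5 z_1 z_2 to component 3
Fz = F2 @ z2
assert np.allclose(Fz, [z[0]**2, 0.0, -0.5 * z[0] * z[1]])
```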

Eigenvalue and eigenvector computation
For local approximations of invariant manifolds around a fixed point of (6), it is commonly assumed that the complete generalized spectrum of the matrix B^{−1}A (or the set of generalized eigenvalues of the pair (A, B)) is known and that a basis in which the linear system (7) takes its Jordan canonical form is readily available (see, e.g., Simo [25], Homburg et al. [56], Tian & Yu [57], Haro et al. [33], Ponsioen et al. [37,42]). For small to moderately sized systems, obtaining a complete set of (generalized) eigenvalues and eigenvectors can indeed be accomplished using numerical eigensolvers, but this quickly becomes an intractable task as the system size increases. While techniques in numerical linear algebra can determine a small subset of eigenvalues and eigenvectors of very high-dimensional systems using a variety of iterative methods (see, e.g., Golub & Van Loan [51]), obtaining a complete set of eigenvectors of such systems remains unfeasible despite the availability of modern computing tools. To emphasize this, we illustrate in Figure 2 the time and memory required for the eigenvalue computation on the finite element mesh of a square plate (see Figure 1). The purpose of this comparison is to report trends in computational complexity rather than precise numbers. To this end, we have used MATLAB across all comparisons, which may not be the fastest computing platform in general but is known to assimilate the state-of-the-art algorithms for numerical linear algebra (Golub & Van Der Vorst [22]).

Figure 2a shows that as the number of degrees of freedom, n, increases from a few tens to approximately a hundred thousand, the computational time required for computing a full set of eigenvalues of the system grows polynomially, up to almost a year. For computing a subset of eigenvalues in discretized PDEs, sparse iterative eigensolvers are used, such as the routines (e.g., Stewart [20], Lehoucq et al. [21]) implemented by MATLAB's eigs command (cf. the direct eigensolvers implemented by the eig command). These sparse solvers are inefficient for nearly full or mildly sparse matrices. There are, therefore, two competing factors here: the sparsity of the matrices and their size. The small matrices at the coarsest refinement levels have very low sparsity, and the sparse eigensolver eigs of MATLAB is inefficient for computing eigenvalues there. Indeed, we see that computing the full set of eigenvalues of a small matrix (using the eig command) ends up being less expensive than computing a subset of its eigenvalues. As sparsity is governed by the number of degrees of freedom shared by neighboring elements relative to the total number of degrees of freedom, it increases with mesh refinement. Thus, sparse eigensolvers initially become more efficient as we refine the mesh; beyond a certain refinement level, however, the computation time is governed solely by the size of the matrix.
Furthermore, all these eigenvectors must be held in the computer's active memory (RAM) in typical invariant manifold computations, which contributes towards very high memory requirements, as shown in Figure 2b. At the same time, these figures also show that a small subset of eigenvectors can be quickly computed and easily stored even for very high dimensional systems.
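The following Python sketch mirrors this observation with SciPy's sparse eigensolver in place of MATLAB's eigs: for a sparse stiffness/mass pencil, a handful of eigenvalues can be obtained via shift-invert without ever touching the full spectrum. The tridiagonal pencil is a stand-in for a finite-element model with a known exact spectrum.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

# 1D finite-element-like stiffness/mass pencil (tridiagonal, very sparse)
n = 2000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sparse.diags([off, main, off], [-1, 0, 1], format="csc")
Mmat = sparse.identity(n, format="csc")

# Compute only the 5 smallest eigenvalues via shift-invert: feasible even
# when the full spectrum (n eigenpairs) would be prohibitively expensive.
vals, vecs = eigsh(K, k=5, M=Mmat, sigma=0.0)

# Analytical check: eigenvalues of this pencil are 4 sin^2(j*pi/(2(n+1)))
j = np.arange(1, 6)
exact = 4.0 * np.sin(j * np.pi / (2 * (n + 1)))**2
assert np.allclose(np.sort(vals), exact, rtol=1e-6)
```

Only the k requested eigenvectors are ever stored, which is precisely the memory advantage noted above.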

Unfeasible memory requirements due to coordinate-change
Aside from the cost of eigenvalue computation, invariant manifold computations typically involve local approximations via Taylor series. These are obtained by transforming the system into modal coordinates (see eq. (14)), expressing the manifold locally as a graph over the master subspace, substituting the polynomial ansatz into eq. (14) and solving the invariance equations recursively at each order. While such a modal transformation decouples the governing equations at the linear level, it generally annihilates the inherent sparsity of the nonlinear terms, as shown in Figure 3a. That sparsity arises because only neighboring elements of the numerical mesh share coupled degrees of freedom. Due to the loss of this sparsity upon transformation to the diagonal form (14), the number of polynomial coefficients required to describe the nonlinearities increases by orders of magnitude, resulting in unfeasible memory requirements.

[Figure 3: (a) Transforming system (6) into (14) via the linear transformation z = Vq destroys the inherent sparsity of the governing equations in physical coordinates z, which leads to unfeasible memory (RAM) requirements for storing nonlinearity coefficients. Here, the multi-dimensional arrays F_k and T_k represent the polynomial coefficients at degree k of the nonlinearities F (in physical coordinates) and T (in modal coordinates); see eqs. (6), (14), (17) and (18). (b) Comparison of memory requirements for storing the nonlinearities F and T at degrees k = 2, 3, 4, 5 in the n-degree-of-freedom square plate example (see Figure 1) with n = 36, 120, 432, 1,632, 6,336, 24,960, 99,072 and phase space dimension N = 2n.]
Indeed, in Figure 3b, we compare the memory estimates for storing these coefficients in physical vs. modal coordinates as a function of the system's phase space dimension N. We see that even for the moderately sized meshes of the square plate example (see Figure 1) considered here, the storage requirements for the transformed coefficients reach astronomically high values on the order of several terabytes or even petabytes. At the same time, the RAM requirements for handling the same coefficients in physical coordinates remain well below a gigabyte, which is easily manageable for modern computers.
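The densification can be reproduced in a few lines. The sparse quadratic coefficient array F_2 below is randomly generated as a stand-in for a finite-element nonlinearity, and the mode matrix V is a generic dense matrix; dense storage of a degree-k coefficient array requires N^{k+1} floating-point entries, which explains the terabyte-scale figures above.

```python
import numpy as np
from scipy import sparse

# Sparse quadratic coefficients F_2 (N x N^2), as produced by a finite-element
# assembly: only neighbouring degrees of freedom couple, so nnz is O(N).
N = 40
rng = np.random.default_rng(1)
rows = rng.integers(0, N, size=5 * N)
cols = rng.integers(0, N**2, size=5 * N)
F2 = sparse.coo_matrix((rng.standard_normal(5 * N), (rows, cols)),
                       shape=(N, N**2)).tocsr()

# Modal transformation T_2 = U* F_2 (V ⊗ V) with a generic (dense) mode matrix V
V = rng.standard_normal((N, N))
U = np.linalg.inv(V).T          # so that U^T V = I (real stand-in for eq. (11))
T2 = U.T @ (F2 @ np.kron(V, V)) # dense N x N^2 array: sparsity is destroyed

density_F2 = F2.nnz / (N * N**2)
density_T2 = np.mean(np.abs(T2) > 1e-12)
print(f"density of F_2: {density_F2:.4f}, density of T_2: {density_T2:.4f}")
assert density_T2 > 100 * density_F2   # transformed coefficients are near-full
```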

Computing invariant manifolds of fixed points in physical coordinates
Unlike commonly employed computational approaches [25,56,57,33,37,41], we now describe the computation of general invariant manifolds in physical coordinates using only the eigenvectors and eigenvalues associated to the master subspace E. This is motivated by the computational advantages we expect based on Figures 2 and 3. We seek to compute an invariant manifold W(E) tangent to a spectral subspace E at the origin of system (6). Let W : C^M → R^N be a mapping that parametrizes the M-dimensional manifold W(E) and let p ∈ C^M be its parametrization coordinates. Then, W(p) provides the coordinates of the manifold in the phase space of system (6), as shown in Figure 4. For any trajectory z(t) on the invariant manifold W(E), we have a reduced dynamics trajectory p(t) in the parametrization space such that

z(t) = W(p(t)).    (19)

Let R : C^M → C^M be a parametrization of the reduced dynamics. Then, any reduced dynamics trajectory p(t) satisfies

ṗ = R(p).    (20)

[Figure 4: Invariant manifold W(E) of the fixed point of system (6) constructed around a spectral subspace E with dim(E) = dim(W(E)) = M. This manifold is tangent to E at the origin. Furthermore, we have the freedom to choose the parametrization R : C^M → C^M of the reduced dynamics on the manifold such that the function W also maps the reduced system trajectories p(t) onto the full system trajectories on the invariant manifold, i.e., z(t) = W(p(t)).]
Differentiating eq. (19) with respect to t and using eqs. (6) and (20), we obtain the invariance equation

B DW(p) R(p) = A W(p) + F(W(p)).    (21)

To solve this invariance equation, we need to determine the parametrizations W and R. We choose to parametrize the manifold and its reduced dynamics in the form of the multivariate polynomial expansions

W(p) = Σ_{i≥1} W_i p^{⊗i},    (22)
R(p) = Σ_{i≥1} R_i p^{⊗i},    (23)

where W_i ∈ C^{N×M^i}, R_i ∈ C^{M×M^i} are matrix representations of the multi-dimensional arrays containing the unknown polynomial coefficients at degree i of the parametrizations W and R. Furthermore, we have the expansion (17) for the nonlinearity F in physical coordinates, where the F_i ∈ R^{N×N^i} are sparse arrays, which are straightforward to store despite their large size (see Section 3.2). Using the expansions (17), (22) and (23), we collect the coefficients of the multivariate polynomials in the invariance equation (21) at each degree i ≥ 1, similarly to Ponsioen et al. [37]. The resulting order-i equations feature the matrix

R_{i,i} := Σ_{k=1}^{i} I_M^{⊗(k−1)} ⊗ R_1 ⊗ I_M^{⊗(i−k)} ∈ C^{M^i×M^i},    (26)

which encodes the action of the linear reduced dynamics R_1 on the degree-i monomials p^{⊗i}. At leading order, i.e., for i = 1, this procedure simply yields

B W_1 R_1 = A W_1.    (28)

Comparing equation (28) with the eigenvalue problem (12), we choose a solution for W_1, R_1 in terms of the master modes and their eigenvalues as

W_1 = V_E,  R_1 = Λ_E.    (29)

Remark 1. The solution choice (29) for W_1, R_1 is not unique. Indeed, we may choose W_1 ∈ C^{N×M} to be any matrix whose columns span the master subspace E, generally resulting in a non-diagonal R_1. Since our system is defined over the reals, a real choice of W_1, R_1 allows us to choose real parametrization coordinates p ∈ R^M, which reduces the computational memory requirements by half relative to the complex setting.

At any order i ≥ 2, we collect the terms containing the unknown coefficients W_i on the left-hand side and the lower-degree terms on the right-hand side as

A W_i − B W_i R_{i,i} = B W_1 R_i + C_i,    (30)

where R_{i,i} is defined according to eq. (26) and C_i contains only terms that are known from the solutions at the lower orders. We solve (30) recursively for i ≥ 2 by vectorizing it as (see, e.g., Van Loan [53])

L_i vec(W_i) = vec(B W_1 R_i + C_i),    (31)

where

L_i := I_{M^i} ⊗ A − R_{i,i}^⊤ ⊗ B.    (32)
In eq. (31), the matrix L_i is often called the order-i cohomological operator induced by the linear flow (7) and the master subspace E on the linear space whose elements are homogeneous, M-variate polynomials of degree i (see Haro et al. [33], Murdock [50]). Hence, at any order of expansion i, the entries of L_i are completely determined using only the linear part of the full and reduced systems, i.e., via the matrices A, B and R_1 (which is equal to Λ_E due to the choice (29)).

Remark 2. For a diagonal choice of R_1 (see, e.g., choice (29)), the matrix L_i has a block-diagonal structure, i.e., system (31) can be split into M_i decoupled linear systems containing N equations each. Hence, the coefficients parametrizing the manifold and its reduced dynamics can be determined independently for each monomial in p^{⊗i}. This splitting of the large system (31) into smaller decoupled systems not only eases computations but also makes them appealing for parallel computing, which has the potential to speed up these computations by a factor of M_i at each order i.

Note that the system (30) is under-determined in terms of the unknowns W_i, R_i. As discussed by Haro et al. [33], this under-determinacy turns out to be an advantage, as it provides us the freedom to choose a particular style of parametrization depending on the context. When the matrix L_i is non-singular for every i ≥ 2, the cohomological equations (31) have a unique solution W_i for any choice of the reduced dynamics R_i. The trivial choice R_i = 0 then leads to linear reduced dynamics. However, as we will see next, L_i may be singular in the presence of resonances and yet system (31) may still be solvable under an appropriate choice of parametrization.
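To make the vectorized solve concrete, the following numpy sketch assembles the cohomological operator for a toy system with randomly generated stand-in matrices, assuming the sign convention B W_i R_{i,i} − A W_i = C_i and column-stacking vectorization. It also checks the column-by-column decoupling for a diagonal R_{i,i} described in Remark 2. All matrix values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M_i = 5, 3                        # full dimension N, number of degree-i monomials M_i

A = rng.standard_normal((N, N))      # stand-in linear part
B = np.eye(N)                        # stand-in mass-type matrix
R_ii = np.diag([1.5, 2.0, 2.5])      # diagonal R_{i,i} (cf. choice (29)), nonresonant here
C_i = rng.standard_normal((N, M_i))  # stand-in lower-order right-hand side

# Vectorize B W R_ii - A W = C via vec(A X B) = (B^T kron A) vec(X):
L_i = np.kron(R_ii.T, B) - np.kron(np.eye(M_i), A)
W_i = np.linalg.solve(L_i, C_i.flatten(order="F")).reshape((N, M_i), order="F")

# Decoupling for diagonal R_ii: (r_kk * B - A) w_k = c_k per monomial k,
# which is what makes the solve embarrassingly parallel across monomials.
W_cols = np.column_stack(
    [np.linalg.solve(R_ii[k, k] * B - A, C_i[:, k]) for k in range(M_i)]
)

assert np.allclose(B @ W_i @ R_ii - A @ W_i, C_i)   # invariance residual vanishes
assert np.allclose(W_i, W_cols)                     # decoupled solve agrees
```

The single Kronecker system and the M_i small systems give identical coefficients, while the latter avoid ever forming the N·M_i-dimensional operator.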

Choice of parametrization
Eigenstructure of L_i

In order to choose the reduced dynamics R appropriately, we explore the eigenstructure of L_i in relation to that of the matrix R_{i,i} and the generalized matrix pair (B, A). We first derive a general result which helps us compute the eigenstructure of R_{i,i}. For notational purposes, we introduce an ordered set Δ_{i,M} which contains all i-tuples ℓ = (ℓ_1, . . . , ℓ_i), indexed lexicographically, taking values in the range 1, . . . , M. As an example, consider the case of i = 3 and M = 2. Then, we may order the 3-tuples in Δ_{3,2} lexicographically as {(1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), (2,1,2), (2,2,1), (2,2,2)}, which contains 2^3 elements. Essentially, each ℓ ∈ Δ_{i,M} corresponds to a monomial of degree i in the reduced variables p ∈ C^M, i.e., p_{ℓ_1} p_{ℓ_2} · · · p_{ℓ_i}. At a high order i of the manifold expansion, the matrix R_{i,i} in eq. (30) may be high-dimensional, even though its components only involve the low-dimensional matrices I_M and R_1 (see eq. (35)). Proposition 1 in Appendix A allows us to compute all the eigenvalues and eigenvectors of R_{i,i} simply in terms of those of R_1. Indeed, let the eigenvalues of R_1 be given by λ_1, . . . , λ_M. Note that for a diagonal choice of R_1 (see choice (29)), the left and right eigenvectors are simply the unit vectors aligned with the coordinate axes in C^M, i.e., e_1, . . . , e_M. Then, from Proposition 1, the eigenvalues and eigenvectors of R_{i,i} are given as

λ_ℓ = λ_{ℓ_1} + · · · + λ_{ℓ_i}, e_ℓ = e_{ℓ_1} ⊗ · · · ⊗ e_{ℓ_i}, ℓ ∈ Δ_{i,M}. (39)

Furthermore, Proposition 2 in Appendix A characterizes the eigenstructure of L_i in relation to that of the matrices (B, A) and R_{i,i}. From Proposition 2, we deduce that L_i is singular whenever the resonance λ_ℓ = λ_j occurs for some ℓ ∈ Δ_{i,M}, j ∈ {1, . . . , N}. In this case, the solvability of eq. (31) depends on the nature of such resonances.
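The tuple bookkeeping and this eigenstructure can be checked numerically. The sketch below assumes that R_{i,i} acts on p^{⊗i} as the Kronecker sum Σ_k I ⊗ · · · ⊗ R_1 ⊗ · · · ⊗ I, consistent with the description above; the stand-in eigenvalues of R_1 are hypothetical.

```python
import numpy as np
from itertools import product
from functools import reduce

M, i = 2, 3
lam = np.array([-1.0, -3.0])         # stand-in master eigenvalues of R_1
R1 = np.diag(lam)

def kron_slot(k):
    """Kronecker factor I x .. x R1 x .. x I with R1 in slot k."""
    mats = [np.eye(M)] * i
    mats[k] = R1
    return reduce(np.kron, mats)

# Assumed Kronecker-sum form of R_{i,i} acting on p^{kron i}
R_ii = sum(kron_slot(k) for k in range(i))

# Lexicographically ordered tuples Delta_{i,M} and the predicted eigenvalues (39)
Delta = list(product(range(M), repeat=i))            # 2^3 = 8 tuples
predicted = sorted(sum(lam[j] for j in t) for t in Delta)
computed = sorted(np.linalg.eigvals(R_ii).real)

assert len(Delta) == M**i
assert np.allclose(predicted, computed)
```

Each eigenvalue of R_{i,i} is a sum of i master eigenvalues, one per tuple entry, exactly as eq. (39) predicts.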
Hence, these resonances are distinguished into inner and outer resonances:

Inner resonances: λ_ℓ = λ_j for some ℓ ∈ Δ_{i,M}, j ∈ {1, . . . , M}; (40)
Outer resonances: λ_ℓ = λ_j for some ℓ ∈ Δ_{i,M}, j ∈ {M + 1, . . . , N}. (41)

Both inner and outer resonances result in the cohomological operator L_i becoming singular. The main difference between them is that the cohomological equation (30) can be solved in the presence of inner resonances by adjusting the parametrization choice of R_i so that the right-hand side of (30) belongs to im(L_i), as we discuss shortly. In the presence of outer resonances, however, the right-hand side of equation (30) cannot be adjusted to lie in the range of the operator L_i and, hence, system (30) has no solution. Indeed, the manifold does not exist in the presence of certain outer resonances (see Cabré et al. [27], Haller & Ponsioen [52]). Haro et al. [33] refer to inner and outer resonances as internal and cross resonances; we use the terminology of Ponsioen et al. [37], as internal resonances carry a different meaning in the context of mechanics. Next, we discuss two common choices of reduced dynamics parametrization, namely the normal form and the graph style parametrizations, which are useful for solving the cohomological equation (31) in the presence of inner resonances.
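A direct way to detect these resonances numerically is to enumerate Δ_{i,M} and compare the sums λ_ℓ against the full spectrum. The sketch below classifies resonances as inner or outer for a hypothetical spectrum with a purely imaginary master pair.

```python
import numpy as np
from itertools import product

# Stand-in spectrum: first M entries are the master eigenvalues
lam_full = np.array([1j, -1j, -0.1 + 2j, -0.1 - 2j])
M, i = 2, 3                                          # master dimension, expansion order

inner, outer = [], []
for ell in product(range(M), repeat=i):              # tuples in Delta_{i,M}
    lam_ell = sum(lam_full[k] for k in ell)
    for j, lam_j in enumerate(lam_full):
        if abs(lam_ell - lam_j) < 1e-10:             # resonance lam_ell = lam_j
            (inner if j < M else outer).append((ell, j))

# A purely imaginary pair produces the classic cubic inner resonances
# lam_1 = 2*lam_1 + lam_2 (and permutations), but no outer resonances here.
assert (0, 0, 1) in [e for e, j in inner if j == 0]
assert len(outer) == 0
```

For light damping, replacing the strict comparison by a tolerance turns this into a near-resonance detector, which is how the resonant monomials are selected in practice.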

Normal form parametrization
Normal forms provide us tools for the qualitative and quantitative understanding of local bifurcations in dynamical systems. Normal form computations for any dynamical system involve successive near-identity coordinate transformations to simplify the transformed dynamics. The simplest form of dynamics that one can hope for is linear. In the presence of resonances, however, a transformation that linearizes the dynamics does not exist and the normal form procedure results in nonlinear dynamics that is only "as simple as possible". This is achieved by systematically removing the nonessential terms from the Taylor series up to any given order (see, e.g., Guckenheimer & Holmes [1]).
Using the parametrization method, we can simultaneously compute the normal form parametrization of the reduced dynamics R along with the parametrization W of the manifold. As discussed earlier, in the absence of any inner resonances, the trivial choice (see eq. (37)) leads to the simplest (i.e., linear) reduced dynamics, which is automatically obtained from the normal form procedure. When the inner resonance relations (40) hold, however, the dynamics cannot be linearized. In such cases, we can compute the essential nonlinear terms at degree i following the normal form style of parametrization. This involves first projecting the invariance equation (31) onto ker(L_i) to eliminate the unknowns W_i and then solving for the essential nontrivial terms in R_i by computing a partial inverse (see Murdock [50]). In prior approaches, this is achieved by transforming the governing equations into diagonal coordinates [1,33,37,42], which renders the matrix L_i diagonal and hence simplifies the detection of its kernel. Here, we develop explicit expressions for computing the normal form directly in physical coordinates, using only the knowledge of the left eigenvectors u_j associated to the master subspace E, as summarized below.
We focus on the case of a system with r_i ∈ N_0 inner resonances and no outer resonances at order i. Taking D = R_{i,i} and C = I_{M_i} in Proposition 2, we can directly determine ker(L_i) using only the eigenvalues λ_j and the corresponding eigenvectors of the master subspace E. Specifically, the generalized eigenvalues of the matrix pair (I_{M_i}, R_{i,i}) are given by µ_ℓ = 1/λ_ℓ with left eigenvectors e_ℓ according to eq. (39), and the left kernel N_i of L_i is spanned by the vectors e_ℓ ⊗ u_j associated to inner-resonant pairs (ℓ, j), see eq. (42). Now, let N_i ∈ C^{N M_i × r_i} be a basis for N_i, which can be obtained by simply stacking the column vectors (e_ℓ ⊗ u_j) from definition (42) that are associated to the modes with inner resonances (see eq. (40)). Then, the reduced dynamics coefficients in the normal form parametrization are chosen by projecting the invariance equation (31) onto N_i, as in eq. (43). The left-hand side of eq. (43) is identically zero since the columns of N_i belong to ker(L_i^*), i.e., eq. (44) holds. Hence, we are able to eliminate the unknowns W_i from eq. (43) to obtain eq. (45). To solve eq. (45), we may further simplify it using the normalization (11), which results in eq. (46), where E_i ∈ R^{M_{i+1} × r_i} is a matrix whose columns are of the form (e_ℓ ⊗ e_j), such that (ℓ, j) are pairs with inner resonances, i.e., λ_ℓ = λ_j, and e_j ∈ R^M is the unit vector aligned with the j-th coordinate axis. Here, the columns (e_ℓ ⊗ e_j) of E_i must be arranged analogously to the columns (e_ℓ ⊗ u_j) of N_i. Using the relation (46), and noting that E_i is a Boolean matrix with E_i^⊤ E_i = I, we obtain the canonical solution (47) to eq. (45) for the coefficients R_i. Note that for each inner-resonant pair (ℓ, j) in the definition (42) of N_i, the solution choice (47) produces a non-trivial coefficient in the j-th equation of the reduced dynamics (20) precisely for the monomial p_{ℓ_1} · · · p_{ℓ_i} corresponding to an inner resonance. As a result, eq.
(47) directly provides the normal form coefficients of the reduced dynamics on the manifold in physical coordinates, using only the knowledge of the left eigenvectors u_j spanning the adjoint master modal subspace.
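The kernel-projection logic can be illustrated on a toy resonant case. The sketch below takes the full system equal to the master part (so that u_j = e_j), builds a singular cohomological operator for a purely imaginary eigenvalue pair at cubic order, assigns the resonant coefficients of R_i in the spirit of choice (47), and verifies that the modified right-hand side then lies in im(L_i). The Kronecker-sum assembly of R_{i,i} and all numerical values are assumptions of this sketch.

```python
import numpy as np
from itertools import product
from functools import reduce

omega, i = 2.0, 3
lam = np.array([1j * omega, -1j * omega])       # purely imaginary master pair
N = M = 2                                       # toy case: full system = master part
A, B, W1 = np.diag(lam), np.eye(N), np.eye(N)   # so V_E = I and u_j = e_j
Mi = M**i

def kron_slot(k):                               # assumed Kronecker-sum form of R_{i,i}
    mats = [np.eye(M, dtype=complex)] * i
    mats[k] = np.diag(lam)
    return reduce(np.kron, mats)

R_ii = sum(kron_slot(k) for k in range(i))
L_i = np.kron(R_ii.T, B) - np.kron(np.eye(Mi), A)   # singular: inner resonances at order 3

rng = np.random.default_rng(1)
C_i = rng.standard_normal((N, Mi)) + 1j * rng.standard_normal((N, Mi))

# Normal-form style (cf. (47)): keep only the resonant monomial coefficients in R_i,
# which pushes the modified right-hand side C_i - B @ W1 @ R_i into im(L_i).
lam_tuples = [sum(lam[k] for k in t) for t in product(range(M), repeat=i)]
R_i = np.zeros((M, Mi), dtype=complex)
for col, lam_ell in enumerate(lam_tuples):
    for j in range(M):
        if abs(lam_ell - lam[j]) < 1e-10:
            R_i[j, col] = C_i[j, col]           # u_j^* C_i e_ell with u_j = e_j here

b = (C_i - B @ W1 @ R_i).flatten(order="F")
w, res, rank, sv = np.linalg.lstsq(L_i, b, rcond=None)

assert rank < N * Mi                            # L_i is genuinely singular
assert np.linalg.norm(L_i @ w - b) < 1e-10      # yet the modified system is solvable
```

Without the R_i adjustment, the same least-squares solve leaves a nonzero residual, reflecting the unsolvability of (30) for a trivial reduced dynamics in the resonant case.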

Graph style parametrization
As the name suggests, a graph style of parametrization for the reduced dynamics results from expressing the manifold as a graph over the master subspace E (see Haro et al. [33]), as done in the graph transform method. The graph style of parametrization may be appealing in the context of center manifold computation, where an infinite number of inner resonances may arise. For instance, in a system with a two-dimensional center subspace with eigenvalues λ_{1,2} = ±iω, the center manifold exhibits the inner resonances λ_1 = (ℓ+1)λ_1 + ℓλ_2 and λ_2 = ℓλ_1 + (ℓ+1)λ_2 for every ℓ ∈ N, since λ_1 + λ_2 = 0. In our setting, a graph style of parametrization is achieved by projecting the invariance equation (31) onto the subspace G_i spanned by all vectors e_ℓ ⊗ u_j with ℓ ∈ Δ_{i,M} and j ∈ {1, . . . , M}, as defined in eq. (50). Note that in the case of inner resonances, N_i ⊂ G_i (cf. definition (42)) and, hence, G_i includes all possible resonant subspaces at order i. Then, similarly to the normal form style, we define a basis G_i for G_i, obtained by stacking the column vectors (e_ℓ ⊗ u_j) in the definition (50). We obtain a graph style of parametrization by projecting (31) onto G_i and equating the right-hand side to zero, as in eq. (51). In contrast to the normal form style (47), where only the coefficients of resonant monomials are nontrivial, the graph style generally yields a larger set of monomials with nontrivial coefficients. Hence, the normal form style retains only the minimal number of nonlinear terms in the reduced dynamics that are essential for solving the invariance equation (21) at each order i, whereas the graph style generally leads to more complex expressions for the reduced dynamics. For the choice W_1 = V_E (see eq. (29)) and the normalization condition (11), eq. (51) can be simplified to obtain the reduced dynamics coefficients in the graph style, as given in eq. (52). Note that eq. (52) directly provides the reduced dynamics coefficients in the graph style without the evaluation of G_i.
Hence, an advantage of the graph style parametrization relative to the normal form style is that specific inner resonances need not be identified a priori. More generally, a combination of the graph and normal form styles of parametrization may also be used, depending on the problem. This is referred to as a mixed style of parametrization, as discussed by Haro et al. [33]. A mixed style may be particularly appealing in the context of parameter-dependent manifold computation, where the parameters are dummy dynamic variables and the associated modes always have trivial dynamics. Thus, it is desirable to choose a graph style for the parametric modes and a normal form style for the remaining master modes, which may feature inner resonances (see Haro et al. [33], Murdock [50]).

Computing the parametrization coefficients W_i
Once the reduced dynamics coefficients R i (specific to the choice of parametrization style) are determined (see eqs. (47), (52)), we can compute the manifold parametrization coefficients W i by solving eq. (30).
When the coefficient matrix L_i is (nearly) singular in the presence of (near-) resonances, numerical blow-up (ill-conditioning) may occur due to the small divisors that arise in solving system (30) with conventional solvers (see also Remark 3). As an alternative, we adopt the norm-minimizing solution (53) to eq. (30), which can be obtained using existing routines, such as lsqminnorm in MATLAB. Other commonly used techniques in the literature include the simultaneous solution of equations (31) and (45) or (50). This involves the inversion of a bordered matrix that extends L_i and ends up being nonsingular (see, e.g., Beyn & Kleß [61], Kuznetsov [54]).
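The small-divisor issue and the minimum-norm remedy can be seen already in a 2×2 toy problem; np.linalg.lstsq with a singular-value cutoff plays the role of MATLAB's lsqminnorm here. The numbers are illustrative.

```python
import numpy as np

eps = 1e-12                           # near-resonance: a small divisor
L = np.diag([1.0, eps])               # nearly singular stand-in for L_i
b = np.array([1.0, eps * 0.5])        # right-hand side consistent with L

x_direct = np.linalg.solve(L, b)      # divides by the small divisor eps
# lsqminnorm-style solve: singular values below the cutoff are truncated,
# suppressing the blow-up-prone component instead of dividing by it.
x_minnorm, *_ = np.linalg.lstsq(L, b, rcond=1e-8)

assert np.isclose(x_direct[1], 0.5)   # finite only because b was exactly consistent
assert np.isclose(x_minnorm[1], 0.0)  # truncated solution drops the resonant direction
```

When b carries even a tiny inconsistency in the nearly singular direction, the direct solve amplifies it by 1/eps, while the truncated solution stays bounded; this is precisely why the resonant content is moved into R_i before solving for W_i.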
To summarize, we have developed an automated procedure for computing invariant manifolds attached to fixed points of system (6) and for choosing different styles of parametrization for their reduced dynamics (eqs. (47), (52)) by solving the invariance equation (31) in physical coordinates, using only the eigenvectors associated to the master spectral subspace. The open-source MATLAB package [69] automates this computational procedure. Next, we illustrate applications of the procedure developed so far.

Applications
Parameter-dependent center manifolds and their reduced dynamics

We illustrate the automated procedure developed above by computing the center manifold of the Lorenz system and the normal form style of its reduced dynamics, without performing any diagonalization and using only the modes associated to the center subspace. In the following example, we compute the ρ-dependent center manifold and the normal form of the reduced dynamics to analyze the local bifurcation around ρ = 1 (see Section 3.2 in Guckenheimer & Holmes [1]). Consider the Lorenz system

ẋ = σ(y − x),
ẏ = ρx − y − xz,    (54)
ż = −βz + xy,

where (x, y, z) ∈ R^3 and σ, ρ, β > 0. The basic steps are as follows.
1. Setup: With a new variable µ := ρ − 1 in the Lorenz system (54), and with µ appended as a dummy state satisfying µ̇ = 0, we obtain an extended system of the form (6). For σ = β = 1, the eigenvalues of A are given by λ_1 = λ_2 = 0, λ_3 = −2, and λ_4 = −1. The nonlinearity F is quadratic and can be expressed according to the Kronecker expansion (17), where the term z^{⊗2} = (z ⊗ z) = (x², xy, xz, xµ, yx, y², yz, yµ, zx, zy, z², zµ, µx, µy, µz, µ²)^⊤ contains the monomials and F_2 ∈ R^{4×4²} is a sparse matrix representation of the coefficients of the quadratic nonlinearity. The non-zero entries of F_2 correspond to the monomials xµ, xz and xy in the definition of F (see eq. (55)).

2. Choose master subspace: We construct a center manifold over the center subspace E spanned by the eigenvectors corresponding to the two zero eigenvalues. We obtain the eigenvectors associated with E satisfying the normalization condition (11), as given in eq. (56).

3. Assemble invariance equations: At leading order, the parametrization coefficients of the center manifold and its reduced dynamics can simply be chosen according to eqs. (56) and (29). To obtain the parametrization coefficients at order 2, we need to solve the vectorized invariance equation (31) for i = 2, which yields system (57).

4. Resonance detection: We deduce the eigenvalues of R_{2,2} from the formula (39) as λ_ℓ = λ_{ℓ_1} + λ_{ℓ_2} = 0 for all ℓ ∈ Δ_{2,2}. Thus, as per definition (40), we observe inner resonances between the two (zero) eigenvalues of the center subspace, and no outer resonances, since λ_ℓ = 0 ≠ λ_k for k ∈ {3, 4}.

5. Solve invariance equations: Despite these inner resonances, system (57) is solvable for a nontrivial choice of the reduced dynamics R_2. Using eq. (47), we choose the normal form parametrization of the reduced dynamics, which results in the following non-zero entry in the coefficient array R_2:
6. Recursion: This procedure can be applied recursively to obtain higher-order terms of the center manifold dynamics; the reduced dynamics on the two-dimensional center manifold up to cubic terms is given in eq. (58). Here, the variable p_2 is the modal coordinate along the center direction associated to the parameter µ. The normal form parametrization automatically results in trivial dynamics along this direction. Indeed, the near-identity transformation associated to the normal form leaves the µ-mode coordinate unchanged, which prompts us to replace p_2 by µ in eq. (58). Hence, we obtain the parameter-dependent dynamics on the center manifold, which recovers the pitchfork bifurcation (see Section 3.4 in Guckenheimer & Holmes [1]) with respect to the parameter µ.
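As an independent check of the pitchfork prediction, the equilibria of the full Lorenz system (54) with σ = β = 1 can be verified directly: for µ = ρ − 1 > 0, the bifurcated branch sits at x = y = ±√(βµ), z = µ, while the origin persists.

```python
import numpy as np

def lorenz(v, sigma=1.0, rho=1.2, beta=1.0):
    """Right-hand side of the Lorenz system (54)."""
    x, y, z = v
    return np.array([sigma * (y - x), rho * x - y - x * z, -beta * z + x * y])

mu = 0.2                                       # rho - 1 > 0: past the pitchfork
amp = np.sqrt(1.0 * mu)                        # predicted branch amplitude sqrt(beta*mu)
for s in (+1.0, -1.0):
    v_star = np.array([s * amp, s * amp, mu])
    assert np.allclose(lorenz(v_star), 0.0)    # both bifurcated equilibria
assert np.allclose(lorenz(np.zeros(3)), 0.0)   # the origin persists
```

The pair of nontrivial equilibria scaling like √µ is exactly the supercritical pitchfork recovered by the reduced dynamics on the center manifold.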
For more involved applications to center manifold computation, we refer to the work of Carini et al. [58], who analyze the stability of bifurcating flows using a similar methodology for computing parameter-dependent center manifold and normal forms.

Lyapunov subcenter manifolds and conservative backbone curves
The Lyapunov subcenter manifolds (LSMs) form centerpieces of the periodic response in conservative, unforced mechanical systems (see Kerschen et al. [67], de la Llave & Kogelbauer [68]). We discuss how the above methodology can be applied in such systems to compute LSMs and directly extract conservative backbone curves, i.e., the functional relationship between the amplitude and the frequency of the periodic orbits on the LSM. We consider a conservative mechanical system of the form (59), where M, K ∈ R^{n×n} are positive-definite mass and stiffness matrices and f is a conservative nonlinearity. The quadratic eigenvalue problem (60) provides us the vibration modes ϕ_j ∈ R^n and the corresponding natural frequencies ω_j of system (59). In the first-order form (5) with C = 0, the eigenvalues and eigenvectors can be expressed using eq. (60), as given in eq. (61). Any distinct pair of eigenvalues ±iω_m, where m = 1, . . . , n, spans a two-dimensional linear modal subspace. An LSM is a unique, analytic, two-dimensional, nonlinear extension of such a linear modal subspace and is guaranteed to exist if the master eigenfrequency ω_m is not in resonance with any of the remaining eigenfrequencies of the system (Kelley [47]), i.e., under the non-resonance conditions (63). The LSM over the m-th mode can be computed by solving the invariance equation (21) in physical coordinates using only the master mode ϕ_m that spans the two-dimensional modal subspace E = span(v_{2m−1}, v_{2m}). The leading-order coefficients in the parametrizations of the LSM and its reduced dynamics are given by eq. (29). Note that for any ℓ ∈ N, the master subspace E satisfies the inner resonance relations λ_1 = (ℓ+1)λ_1 + ℓλ_2 and λ_2 = ℓλ_1 + (ℓ+1)λ_2, since λ_{1,2} = ±iω_m. These resonances result in a reduced dynamics of normal-form style (see eq. (45)), where the γ_ℓ are the nontrivial coefficients associated to the resonant monomials in the normal form (45). The following statement then directly provides the conservative backbone curve associated to the m-th mode.
Lemma 1. Under the non-resonance condition (63), the backbone curve, i.e., the functional relationship between the polar response amplitude ρ and the oscillation frequency ω of the periodic orbits on the LSM associated to the mode ϕ_m of the conservative mechanical system (59), is given as

ω(ρ) = ω_m + Σ_{ℓ∈N} Im(γ_ℓ) ρ^{2ℓ}.

Proof. See Appendix B.
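The meaning of a conservative backbone curve can be illustrated on a single-degree-of-freedom conservative Duffing oscillator, ẍ + ω_0²x + γx³ = 0, a hypothetical stand-in rather than one of the models treated here. Its first-order backbone ω(a) ≈ ω_0 + 3γa²/(8ω_0) is compared below against the oscillation frequency measured from direct RK4 simulation.

```python
import numpy as np

omega0, gamma = 2.0, 1.0                     # stand-in linear frequency and cubic stiffness

def rhs(state):
    x, v = state
    return np.array([v, -omega0**2 * x - gamma * x**3])   # conservative Duffing

def measured_frequency(a, dt=1e-3):
    """RK4-integrate from (a, 0) and read the half-period off the first
    upward zero crossing of the velocity."""
    s, t = np.array([a, 0.0]), 0.0
    while t < 100.0:                                      # safety horizon
        k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
        s_new = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if s[1] < 0.0 <= s_new[1]:                        # velocity crosses zero upward
            frac = -s[1] / (s_new[1] - s[1])              # linear interpolation
            return np.pi / (t + frac * dt)                # omega = pi / half-period
        s, t = s_new, t + dt
    raise RuntimeError("no crossing found")

for a in (0.05, 0.1, 0.2):
    backbone = omega0 + 3 * gamma * a**2 / (8 * omega0)   # first-order backbone
    assert abs(measured_frequency(a) - backbone) < 1e-3 * omega0
```

The measured frequency grows quadratically with amplitude, matching the hardening backbone; the LSM computation generalizes exactly this amplitude-frequency relationship to high-dimensional conservative systems.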

Invariant manifolds and their reduced dynamics under nonautonomous forcing
In the non-autonomous setting of system (2), i.e., for ε > 0, the fixed point is typically replaced by an invariant torus created by the quasiperiodic forcing term F_ext(z, Ωt), or by a periodic orbit when Ω is one-dimensional. Indeed, for small enough ε > 0, the existence of a small-amplitude invariant torus γ_ε in the extended phase space of system (2) is guaranteed if the origin is a hyperbolic fixed point in its ε = 0 limit (see Guckenheimer & Holmes [1]).

Figure 5: Applying the parametrization method to system (2) with ε > 0, we obtain a parametrization W : C^M × T^K → R^N of an (M + K)-dimensional perturbed manifold (whisker) attached to a small-amplitude whiskered torus γ_ε parameterized by the angular variables φ ∈ T^K with φ̇ = Ω. This whisker is perturbed from the master spectral subspace E of the linear system (7) under the addition of the nonlinear and O(ε) terms of system (2) (cf. Figure 4). Furthermore, we have the freedom to choose a parametrization R : C^M × T^K → C^M of the reduced dynamics on the manifold such that the function W also maps the reduced system trajectories p(t) onto the full system trajectories on the invariant manifold, i.e., z(t) = W(p(t), Ωt).
In contrast to the invariant manifold W(E) from the autonomous setting, the perturbed manifold, or whisker, W(E, γ_ε) is attached to γ_ε instead of the origin, and dim(W(E, γ_ε)) = dim(E) + dim(γ_ε) = M + K, as shown in Figure 5. From a computational viewpoint, the manifold W(E, γ_ε) and its reduced dynamics now need to be additionally parameterized by the angular variables φ ∈ T^K that correspond to the multi-frequency vector Ω ∈ R^K, i.e.,

W(p, φ) = W(p) + εX(p, φ), (70)
R(p, φ) = R(p) + εS(p, φ). (71)

Here, W : C^M × T^K → R^N and R : C^M × T^K → C^M are parametrizations of the invariant manifold W(E, γ_ε) and its reduced dynamics; W(p), R(p) recover the manifold W(E) and its reduced dynamics in the unforced limit ε = 0; and X(p, φ), S(p, φ) denote the O(ε) terms, which depend on the angular variables φ due to the presence of the forcing F_ext(z, Ωt). Invoking the invariance of W(E, γ_ε), we substitute the expansions (70)-(71) into the governing equations (2) and collect the O(ε) terms to obtain the invariance equation (72) (cf. Ponsioen et al. [42]). The terms X(p, φ), S(p, φ) can be further expanded into Taylor series in p with coefficients that depend on the angular variables φ. Collecting the O(1) terms in p from the invariance equation (72), we obtain eq. (75), a system of linear differential equations for the unknown, time-dependent coefficients X_0(φ).
Similarly to the autonomous setting, the choice of reduced dynamics S 0 (φ) again provides us the freedom to remove (near-) resonant terms via a normal-form style of parametrization.
In this work, we restrict our attention to the computation of the leading-order non-autonomous contributions, i.e., X_0(φ), S_0(φ). To this end, we perform a Fourier expansion of the different terms in eq. (75), as in eqs. (76)-(78), where F_{0,κ} ∈ C^N are the known Fourier coefficients of the forcing F_ext(z, Ωt), and x_{0,κ} ∈ C^N, s_{0,κ} ∈ C^M are the unknown Fourier coefficients of the leading-order, non-autonomous components of X, S. Upon substituting eqs. (76)-(78) into eq. (75) and comparing Fourier coefficients at order κ, we obtain the linear equations (79) in the variables x_{0,κ}, s_{0,κ}. The coefficient matrix L_{0,κ} in (79) becomes (nearly) singular when the forcing is (nearly) resonant with any of the eigenvalues of the matrix pair (A, B), i.e., when

i⟨κ, Ω⟩ ≈ λ_j. (80)

Similarly to the autonomous setting, such nearly resonant forcing leads to small divisors when solving system (79) (cf. Remark 3), and hence it is desirable to include such terms in the reduced dynamics as per the normal form style of parametrization. This results in (cf. eq. (47))

Normal-form style: s_{0,κ} = e_j u_j^* F_{0,κ}, ∀κ ∈ Z^K, j ∈ {1, . . . , M} : i⟨κ, Ω⟩ ≈ λ_j. (81)
Alternatively, using a graph style parametrization, we obtain the coefficients (82) (cf. eq. (52)). Note, however, that these choices are only available for the modes in the master subspace that are resonant with the external frequency Ω, i.e., for j = 1, . . . , M in the approximation (80). If the near-resonance relation (80) holds for eigenvalues outside the master subspace, i.e., for j = M + 1, . . . , N, then the domain of convergence of our Taylor approximations is reduced. Depending on the application, a workaround is to include any nearly resonant modes in the master subspace from the start. Finally, upon determining the reduced dynamics coefficients s_{0,κ} specific to the chosen parametrization style (see eqs. (81), (82)), we compute a norm-minimizing solution to (79) (cf. eq. (53)), as we did in the autonomous setting.
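In the special case with no nonlinear coupling and no near-resonance, the leading-order coefficients reduce to solving (i⟨κ, Ω⟩B − A)x_{0,κ} = F_{0,κ} harmonic by harmonic. The numpy sketch below does this for a hypothetical stable system with two forcing frequencies and verifies the residual of Bż = Az + F_ext on the resulting quasiperiodic response.

```python
import numpy as np

rng = np.random.default_rng(2)
N, Omega = 4, np.array([2.0, np.sqrt(2.0)])          # two-frequency quasiperiodic forcing
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))   # stand-in stable linear part
B = np.eye(N)

# A few forcing harmonics F_{0,kappa} e^{i<kappa,Omega>t}
harmonics = {(1, 0): rng.standard_normal(N), (0, 1): rng.standard_normal(N)}

# Leading-order coefficients: (i<kappa,Omega> B - A) x_{0,kappa} = F_{0,kappa}
x = {}
for kappa, Fk in harmonics.items():
    L0k = 1j * np.dot(kappa, Omega) * B - A          # nonresonant, hence invertible
    x[kappa] = np.linalg.solve(L0k, Fk)
    assert np.allclose(L0k @ x[kappa], Fk)

# Residual check of B z' = A z + F_ext at a sample time
t = 0.7
z  = sum(x[k] * np.exp(1j * np.dot(k, Omega) * t) for k in harmonics)
zd = sum(1j * np.dot(k, Omega) * x[k] * np.exp(1j * np.dot(k, Omega) * t) for k in harmonics)
F  = sum(harmonics[k] * np.exp(1j * np.dot(k, Omega) * t) for k in harmonics)
assert np.allclose(B @ zd, A @ z + F)
```

Each harmonic is solved independently, which is exactly the per-κ decoupling exploited in Remark 4 below.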

Remark 4. (Parallelization)
For each κ ∈ Z^K, the reduced dynamics coefficients s_{0,κ} and the manifold coefficients x_{0,κ} can be determined independently of each other. Hence, parallel computation of these coefficients results in a high speed-up due to the minimal cross-communication across processes (see also Remark 2).

Spectral submanifolds and forced response curves
In structural dynamics, predicting the steady-state response of mechanical systems under periodic forcing is often the end goal of the analysis. This response is commonly expressed in terms of the forced response curve (FRC), which depicts the response amplitude as a function of the external forcing frequency. FRCs are computationally expensive to obtain for large structural systems of engineering significance (see, e.g., Ponsioen et al. [42], Jain et al. [60]). The recent theory of spectral submanifolds (SSMs) [52], however, has enabled the fast extraction of such FRCs via exact reduced-order models. The analytic results of Breunung & Haller [38] and Ponsioen et al. [41] make it possible to obtain FRCs from the normal form of the reduced dynamics on two-dimensional SSMs without any numerical simulation. These approaches, however, develop SSM computations in diagonal coordinates assuming semi-simplicity of the matrix B⁻¹A, which has limited applicability for high-dimensional finite-element-based problems, as we have discussed in Section 3. Here, we revisit their results in our context. We consider the mechanical system (1) under periodic, position-independent forcing, as in eq. (84), where Ω ∈ R₊ is the external forcing frequency and the periodic forcing F_ext(Ωt) can be expressed in the Fourier expansion (85). We assume that system (84) represents a lightly damped structure. This implies that Ω satisfies the near-resonance relationship (86) with a two-dimensional spectral subspace associated with the eigenvalues λ, λ̄; consequently, the corresponding near-resonance relations (87) hold for any finite ℓ ∈ N (see Szalai et al. [40]). As per eqs. (47) and (81), the near-resonances (86)-(87) lead to the normal form (88) for the reduced dynamics (cf. Breunung & Haller [38]), where the coefficients γ_ℓ are determined automatically from the normal-form style parametrization (47) of the reduced dynamics on the two-dimensional SSM.
Theorem 3.8 of Breunung & Haller [38] provides explicit expressions for extracting FRCs from the reduced dynamics on two-dimensional SSMs near a resonance with the forcing frequency. Their expressions are derived under the assumption of proportional damping; mono-harmonic, cosinusoidal and synchronous forcing on the structure. The following statement generalizes their expressions to system (84) with periodic forcing (85) and provides us a tool to extract forced-response curves near resonance from two-dimensional SSMs in physical coordinates.

Lemma 2. Under the near-resonance relationships (86) and (87):
(i) Reduced-order model on SSMs: The reduced dynamics (88) in polar coordinates (ρ, θ) is given by eq. (89).
(ii) FRC: The fixed points of system (89) correspond to periodic orbits with frequency ηΩ and are given by the zero level set of the scalar function F in eq. (91).
(iii) Phase shift: The constant phase shift ψ between the external forcing f_ext(Ωt) and a ρ-amplitude periodic response, obtained as a zero of eq. (91), is given by eq. (92).
(iv) Stability: The stability of the periodic response is determined by the eigenvalues of the Jacobian (93).

Proof. See Appendix C.

Note that the zero level set of F, which provides the FRC, can also be written as the zero level set of the functions G_±. Despite the equivalence of the zero level sets of F and G_±, one or the other may be preferred in order to avoid numerical difficulties. The zero level set of F is a one-dimensional submanifold of the (ρ, Ω) space for a given forcing of small enough amplitude |f|. Parameter values for which the FRC contains more than one connected component are associated in the literature with the emergence of detached resonance curves, or isolas. The non-spurious zeros of the polynomial a(ρ) yield the non-trivial steady states of the full system (see Ponsioen et al. [41]). The analytical formulas given in Lemma 2 enable us to compute the FRCs along with isolas, if these exist.
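The zero-level-set construction in item (ii) can be sketched with hypothetical normal-form coefficients. Writing a(ρ) and b(ρ) for the real and imaginary parts of the polar reduced vector field, a common structure for eq. (89) assumed here, the FRC branches Ω_±(ρ) solve F(ρ, Ω) = 0 in closed form.

```python
import numpy as np

lam_m = -0.02 + 2.0j        # hypothetical master eigenvalue (light damping)
gam   = 0.003 + 0.15j       # hypothetical cubic normal-form coefficient
f     = 0.01                # hypothetical modal forcing amplitude

a = lambda r: lam_m.real * r + gam.real * r**3     # radial part of (89)
b = lambda r: lam_m.imag * r + gam.imag * r**3     # angular part of (89)

def F(r, Om):                                      # zero level set = FRC (cf. (91))
    return a(r)**2 + (b(r) - Om * r)**2 - f**2

# Sweep amplitudes and solve F = 0 for Omega explicitly (the G_pm form):
rho = np.linspace(1e-3, 0.6, 400)
ok = f**2 - a(rho)**2 >= 0                         # amplitudes actually reached
root = np.sqrt(f**2 - a(rho[ok])**2)
Om_plus  = (b(rho[ok]) + root) / rho[ok]
Om_minus = (b(rho[ok]) - root) / rho[ok]

assert np.allclose(F(rho[ok], Om_plus), 0.0, atol=1e-12)
assert np.allclose(F(rho[ok], Om_minus), 0.0, atol=1e-12)
# Hardening cubic (Im(gam) > 0): the backbone b(rho)/rho bends upward
assert b(rho[-1]) / rho[-1] > lam_m.imag
```

The two branches Ω_± meet where the square root vanishes, i.e., at the peak amplitude of the FRC; amplitudes excluded by the sweep are simply not reachable at the given forcing level, which is how isolas reveal themselves as additional connected components.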
In the case of (near-) outer resonances of λ with any of the remaining eigenvalues of the system, such a two-dimensional SSM does not exist (see Haller & Ponsioen [52]) and one should include the resonant eigenvalues in the master modal subspace E, resulting in higher-dimensional SSMs with inner resonances. The reduced dynamics on such high-dimensional SSMs can again be used to compute FRCs via numerical continuation, as discussed by Li et al. [73].
The automated computation procedure developed here is also applicable to high-dimensional problems with inner resonances, up to arbitrarily high orders of accuracy. A numerical implementation of the computational methodology developed in this work is available in the form of the open-source MATLAB package SSMTool 2.0 [69], which is integrated with a generic finite-element solver (Jain et al. [70]) and coco [19]. This allows us to treat high-dimensional mechanics problems, as we demonstrate over several numerical examples in the next section.

Numerical examples
In the following examples, we perform local SSM computations on mechanical systems following the methodology discussed in Sections 4 and 5, which involves the solution of the invariance equations (21) and (72). We use the reduced dynamics on two-dimensional SSMs attached to periodic orbits to obtain the FRCs of various nonlinear mechanical systems via Lemma 2.
The equations of motion governing the following examples are given in the general form (94). An SSM characterizes the deformation of the corresponding modal subspace that arises due to the addition of nonlinearities to the linearized counterpart of system (94). Specifically, the nonlinear terms in the Taylor expansions W, R (see eqs. (22) and (23)) end up being nontrivial precisely due to the presence of the nonlinearity f in system (94). For each of the following examples, we illustrate this deformation of the modal subspace by taking a snapshot (Poincaré section) of the non-autonomous SSM along with its reduced dynamics at an arbitrary time instant t = t_0. We then plot the SSM as a graph over the modal coordinates [ρ cos θ, ρ sin θ], where θ = ψ + ηΩt_0 (see Lemma 2).
To this end, we simply simulate the autonomous, two-dimensional ROM (89), which yields the reduced dynamics trajectories ρ(t) and θ(t) on the SSM in polar coordinates. We then map these trajectories onto the SSM using the parametrization W(p(t), Ωt_0), with p(t) given in eq. (95). We also compare these results with global computational techniques involving numerical continuation of the periodic response via collocation-, spectral- and shooting-based approximations. While the local manifold computations discussed here would benefit greatly from parallel computing (see Remarks 2 and 4), we refrain from parallel computations in this work for a fair comparison of computation times, since the other tools we employ do not use parallelization. We perform all computations via openly available MATLAB packages on MATLAB version 2019b.

Finite-element-type oscillator chain
As a first example, we consider the nonlinear oscillator chain used by Jain et al. [60], whose computational implementation can be made to resemble a finite-element assembly, with each of the nonlinear springs treated as an element. The equations of motion of the n-mass oscillator chain shown in Figure 6 are given by system (94) with M = mI_n, K = kL_n, where L_n is a Toeplitz matrix and f_3 ∈ R^{n×n³} is a sparse array containing the coefficients of the cubic nonlinearity. We fix the parameter values and forcing shape, and let the forcing frequency Ω range over 0.23-1 rad/s. For Ω ∈ [0.23, 1], the three pairs of eigenvalues (101)-(103) are nearly resonant with Ω as per the approximation (86) with η = 1. We subdivide the frequency range into three intervals around each of these near-resonant eigenvalue pairs. We then perform SSM computations up to quintic order to approximate the near-resonant FRC via Lemma 2 for each pair of near-resonant eigenvalues. Figure 7a illustrates the Poincaré section of the non-autonomous SSM computed around the second mode with eigenvalues (102) and the near-resonant forcing frequency Ω = 0.6158 rad/s (period T = 2π/Ω). Each curve of the reduced dynamics shown in Figure 7a represents iterates of the period-T Poincaré map. In particular, any hyperbolic fixed points correspond to T-periodic orbits of the full system with the same hyperbolicity, according to Lemma 2. Hence, we directly obtain unstable and stable periodic orbits on the FRC by investigating the stable (blue) and unstable (red) fixed points of the reduced dynamics on the SSM for different values of Ω, as shown in Figure 7b. Figure 7b further shows that the FRC obtained from these SSM computations agrees with the spectral (harmonic balance) and collocation-based approximations. We perform the harmonic balance approximations using an openly available MATLAB package, NLvib [18], which implements an alternating frequency-time (AFT) approach.
We choose 5 harmonics for the approximations in the frequency domain and 2^7 time steps for the approximations in the time domain. For the collocation-based continuation, we use the po-toolbox of coco [19] with default settings, adaptive refinement of the collocation mesh, and the one-dimensional atlas algorithm.
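As background, the AFT idea used by NLvib can be sketched as follows: the Fourier coefficients of the nonlinear force are obtained by transforming the harmonic coefficients of the displacement to the time domain, evaluating the nonlinearity sample-wise, and transforming back. The toy version below (scalar signal, one-sided harmonic convention, hypothetical helper `aft`) only illustrates the mechanism and is not NLvib's API.

```python
import numpy as np

H, N = 5, 2**7          # number of harmonics and time samples, as in the text

def aft(X, nonlinearity):
    """One AFT pass for a scalar signal x(t) = Re(sum_h X[h] e^{i h theta})."""
    spectrum = np.zeros(N, dtype=complex)
    spectrum[:H + 1] = X * N                  # scale for numpy's ifft convention
    x = np.real(np.fft.ifft(spectrum))        # time samples over one period
    F = np.fft.fft(nonlinearity(x)) / N       # spectrum of the nonlinear force
    coeffs = F[:H + 1].copy()
    coeffs[1:] *= 2                           # back to one-sided convention
    return coeffs

# A pure cosine through a cubic spring: cos^3 = 0.75*cos(t) + 0.25*cos(3t)
X = np.zeros(H + 1, dtype=complex)
X[1] = 1.0
F = aft(X, lambda x: x**3)
```

The returned coefficients reproduce the trigonometric identity for the cubic nonlinearity, which is the exact check commonly used to validate an AFT implementation.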
The total computation time consumed in model generation, coefficient assembly and computation of all eigenvalues of this system was less than 1 second on a Windows PC with an Intel Core i7-4790 CPU @ 3.60 GHz and 32 GB RAM. We compare the computation times for obtaining the FRC using different methods in Table 2.

Figure 7: (a) Reduced dynamics on the SSM in polar coordinates (see eq. (95)); the fixed points in blue and red directly provide the stable and unstable periodic orbits on the FRC. (b) The FRC obtained via local computations of the SSM at O(5) agrees with those obtained using global continuation methods involving the harmonic balance method (NLvib [18]) and collocation (coco [19]); the computation is performed for n = 10 (see Table 2 for computation times); the plot shows the displacement amplitude for the 5th (middle) degree of freedom.
In this example, the SSM-based analytic approximation to the FRC using Lemma 2 involves computing the O(5) autonomous SSM three times (once around each resonant pair). The leading-order non-autonomous SSM computation needs to be repeated for each Ω in the frequency span [0.23, 1]. We emphasize that while each of these SSM computations is parallelizable (see Remark 2), in contrast to continuation-based global methods, we have reported computation times for a sequential implementation in Table 2. As expected, we observe from Table 2 that local approximations to SSMs provide a much faster means of computing FRCs than global techniques involving collocation or spectral (harmonic balance) approximations.

Table 2: Computation times for obtaining the FRC in Figure 7. These computations were performed on MATLAB version 2019b, installed on a Windows PC with an Intel Core i7-4790 CPU @ 3.60 GHz and 32 GB RAM.

Von Kármán Beam
We now consider a finite-element model of a geometrically nonlinear, cantilevered von Kármán beam (Jain et al. [48]), illustrated in Figure 8a. The geometric and material properties of the beam are given in Table 3. The equations of motion are again given in the general form (94). This model is programmed in the finite-element solver [70], which directly provides the matrices M, C, K and the coefficients of the nonlinearity f in physical coordinates. We discretize the model using 10 elements, resulting in n = 30 degrees of freedom.

Figure 8: Schematic of a two-dimensional von Kármán beam model (Jain et al. [48]) with height h and length L, initially aligned with the x_1 axis; see Table 3 for geometric and material properties.

Table 3: Physical parameters of the von Kármán beam model (see Figure 8a).

The eigenvalue pair (104) associated with the first mode of vibration of the beam and the external forcing are chosen such that f_0 represents a spatially uniform forcing vector with a transverse forcing magnitude of 0.5 N/m across the length of the beam. We choose the forcing frequency Ω in the range 4.1-6.2 rad/s, for which the eigenvalue pair λ_{1,2} (104) is nearly resonant with Ω (see (86)). We then perform O(5) SSM computations to approximate the near-resonant FRC around the first natural frequency via Lemma 2. Figure 9 illustrates the Poincaré section of the non-autonomous SSM computed around the first mode with eigenvalues (104) and near-resonant forcing frequency Ω = 5.4 rad/s (period T = 2π/Ω). We observe in Figure 9a that the graph of the manifold is flat along the transverse degree of freedom, which gives the impression that there is no significant deformation of the modal subspace under the addition of nonlinearities in this system.
At the same time, however, Figure 9b depicts a significant curvature of the SSM along the axial degree of freedom, which is related to the bending-stretching coupling introduced by the geometric nonlinearities in any beam model. Hence, we note that the invariance computation automatically accounts for the important physical effects arising from nonlinearities in the form of the parametrizations W and R of the manifold and its reduced dynamics. These effects are otherwise typically captured by a heuristic projection of the governing equations onto carefully selected modes (see Jain et al. [48], Buza et al. [64] for a discussion).

Figure 9: Poincaré section of the non-autonomous SSM of the beam for near-resonant forcing frequency Ω = 5.4 rad/s. The projection of the SSM onto the axial degree of freedom (b) located at the tip of the beam shows significant curvature, in contrast to the projection onto the transverse degree of freedom (a), which appears relatively flat. The reduced dynamics in polar coordinates ρ, θ is obtained by simulating the ROM (89) (see eq. (95)); the fixed points in blue and red directly provide the stable and unstable periodic orbits on the FRC for different values of Ω (see Figure 10).
Finally, in Figure 10, we obtain the unstable and stable periodic orbits on the FRC from the stable (blue) and unstable (red) fixed points of the reduced dynamics on the SSM for different values of Ω. Figure 10 also shows that the FRC obtained via local SSM computation closely approximates the FRCs obtained using global continuation techniques: collocation approximations via coco [19] and harmonic balance approximations via NLvib [18]. These continuations were performed with the same settings as in the previous example.

Figure 10: The FRC obtained via local computations of the SSM at O(5) agrees with those obtained using global continuation methods involving the harmonic balance method (NLvib [18]) and collocation (coco [19]); the plot shows the displacement amplitude in the x_3 direction at the tip of the beam (see Table 4 for computation times).
Once again, the total computation time spent on model generation, coefficient assembly and computing the first 10 eigenvalues of this system was less than 1 second on a Windows PC with an Intel Core i7-4790 CPU @ 3.60 GHz and 32 GB RAM. Table 4 records the computation times for obtaining the FRC via each of these methods. For the collocation-based response computation via coco [19], we also employ the atlas-kd algorithm (see Dankowicz et al. [14]) in addition to the default atlas-1d algorithm used in the previous example. Atlas-kd allows the user to choose the subspace of continuation variables along which the continuation step size h is measured. Here, we choose this subset to be (z_out(0), Ω, T), where T = 2π/Ω is the time period of the periodic response and z_out is the response at the output degree of freedom shown in Figure 10. We allow the continuation step size to vary adaptively between h_min = 10^-5 and h_max = 50, with a maximum residual norm of 10 for the predictor step. We found these settings to be optimal for this atlas-kd run, since relaxing these tolerances further had no effect on the continuation speed. Once again, the computation times in Table 4 indicate orders-of-magnitude higher speed in reliably approximating the FRC via local SSM computations in comparison to global techniques involving collocation or spectral approximations.

Table 4: Computation times for obtaining the FRC of the beam shown in Figure 8b. These computations were performed on MATLAB version 2019b installed on a Windows PC with an Intel Core i7-4790 CPU @ 3.60 GHz and 32 GB RAM.

Shallow-arch structure
Next, we consider a finite-element model of a geometrically nonlinear shallow-arch structure, illustrated in Figure 11a (Jain & Tiso [62]).

Figure 11: (a) Schematic of a shallow-arch structure (Jain & Tiso [62]); see Table 5 for geometric and material properties. This plate is simply supported at the two opposite edges aligned along the y-axis. (b) The finite-element mesh (containing 400 elements, 1,320 degrees of freedom) deformed along the first bending mode, which has an undamped natural frequency of approximately 23.47 Hz.
The geometric and material properties of this curved plate are given in Table 5. The plate is simply supported at the two opposite edges aligned along the y-axis in Figure 11a. The model is discretized using flat, triangular shell elements and contains 400 elements, resulting in n = 1,320 degrees of freedom. The open-source finite-element code [70] directly provides the matrices M, C, K and the coefficients of the nonlinearity f in the equations of motion (94).

Table 5: Geometric and material parameters of the shallow-arch structure in Figure 11a.

The first mode of vibration of this structure is shown in Figure 11b. The external forcing is again harmonic, where f_0 represents a concentrated load in the z-direction with magnitude 100 N at the mesh node located at x = L/2, y = H/2 in Figure 11a. We choose the forcing frequency Ω in the range 133-162 rad/s, for which the eigenvalue pair λ_{1,2} is nearly resonant with Ω (see (86)).

Figure 12: FRC of the shallow-arch structure (see Table 6 for computation times); plots (a) and (b) show the displacements in the x- and z-directions at the mesh node located at x = L/2, y = H/2 in Figure 11.
We then compute the near-resonant FRC around the first natural frequency via O(3), O(5), and O(7) SSM computations using Lemma 2. Once again, Figure 12a shows the Poincaré section of the non-autonomous SSM for the near-resonant forcing frequency Ω = 146.49 rad/s, where we directly obtain the unstable (red) and stable (blue) periodic orbits on the FRC as hyperbolic fixed points of the reduced dynamics (89) on the SSM. The three FRCs at O(3), O(5), and O(7) converge to the softening response shown in Figure 12b. Indeed, we expect softening behavior in the FRC of shallow arches (see, e.g., Buza et al. [64,63]).
Due to excessive memory requirements, this FRC could not be computed using collocation approximations via coco [19] or using harmonic balance approximations via NLvib [18]. Instead, we compare this FRC to another global continuation technique based on the shooting method, which is still feasible (see Introduction).
For shooting, we use the classic Newmark time integration scheme (Newmark [71], see Géradin & Rixen [72] for a review), as common Runge-Kutta schemes (e.g., ode45 of MATLAB) struggle to converge in structural dynamics problems. We use an open-source toolbox [46], based on the atlas-1d algorithm of coco [19] (see Dankowicz et al. [14]), for continuation of the periodic solution obtained via shooting. We use a constant time step throughout the time integration, chosen by dividing the time span T = 2π/Ω into 100 equal intervals. We found this choice of time step to be nearly optimal for this problem, since larger time steps lead to non-quadratic convergence during the Newton-Raphson iterations and smaller time steps result in slower computations. The stability of the response is computed by integrating the variational equations around the converged periodic orbit. The total time consumed in model generation and coefficient assembly was 33 seconds on a Windows PC with an Intel Core i7-4790 CPU @ 3.60 GHz and 32 GB RAM. This includes the time spent in computing the first 10 eigenvalues of this system, which took less than 1 second. Figure 12b shows that this shooting-based global continuation agrees with the SSM-based approximation to the FRC. Obtaining this FRC via the shooting method, however, takes more than 2 days, in contrast to the SSM-based approximation using the proposed computational methodology, which takes less than a minute even at O(7), as shown in Table 6.
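For reference, the Newmark scheme used in these shooting runs can be sketched as follows for the linear case; the nonlinear variant wraps this update in Newton-Raphson iterations at each step (see Géradin & Rixen [72]). This is a generic textbook implementation, not the toolbox's code.

```python
import numpy as np

def newmark(M, C, K, f, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark integration of M x'' + C x' + K x = f(t) (linear case).
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration rule."""
    x, v = x0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ x)     # consistent initial acceleration
    S = M / (beta * dt**2) + gamma / (beta * dt) * C + K  # effective stiffness
    X = [x.copy()]
    for j in range(1, steps + 1):
        t = j * dt
        rhs = (f(t)
               + M @ (x / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a)
               + C @ (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                      + dt * (0.5 * gamma / beta - 1) * a))
        x_new = np.linalg.solve(S, rhs)
        a_new = (x_new - x) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        x, v, a = x_new, v_new, a_new
        X.append(x.copy())
    return np.array(X)
```

A quick check on an undamped single-degree-of-freedom oscillator with x(0) = 1 recovers x(t) ≈ cos(t) with the small period elongation characteristic of the average-acceleration rule.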

Aircraft Wing
As a final example, we consider the finite-element model of a geometrically nonlinear aircraft wing originally presented by Jain et al. [65] (see Figure 13). The wing is cantilevered at one of its ends and the structure is meshed using flat triangular shell elements featuring 6 degrees of freedom per node.

Table 7: Geometric and material parameters of the aircraft wing in Figure 13.

Figure 13: (a) A wing structure with a NACA 0012 airfoil stiffened with ribs (Jain et al. [65]); see Table 7 for geometric and material properties. (b) The finite-element mesh, illustrated after removing the skin panels. The wing is cantilevered at the z = 0 plane. The mesh contains 49,968 elements, which results in n = 133,920 degrees of freedom.
For assembling coefficients on a problem of this size, we used the Euler supercomputing cluster at ETH Zurich. The total time consumed in model generation and coefficient assembly was 1 hour, 21 minutes and 38 seconds without any parallelization. This includes the time taken for computing the first 10 eigenvalues of this system, which was approximately 5 seconds. The main bottleneck was the memory consumption during the assembly of the coefficients of the nonlinearity f, where the peak memory consumption was around 183 GB. Once assembled, however, these coefficients occupy only about 1.8 GB of RAM. This extraordinary memory consumption during assembly occurs due to a sub-optimal assembly procedure for sparse tensors [66]. To avoid such bottlenecks, parallel computing and distributed memory architectures need to be employed, which are currently not available in the packages we have used.
In this example, we choose Rayleigh damping (see, e.g., Géradin & Rixen [72]), which is commonly employed in structural dynamics applications to construct the damping matrix C = αM + βK as a linear combination of the mass and stiffness matrices. The constants α, β are chosen to ensure a damping ratio of 0.4% along the first two vibration modes. We compute the eigenvalue pair associated with the first mode of vibration and once again choose harmonic external forcing, where f_0 represents a vector of concentrated loads at the tip nodes 1 and 2 (see Figure 13). Figure 14a shows the Poincaré section of the non-autonomous SSM for the near-resonant forcing frequency Ω = 29.8 rad/s. The hyperbolic fixed points of the reduced dynamics (89) on the SSM directly provide the stable (blue) and unstable (red) periodic orbits on the FRC for different values of the forcing frequency Ω. On a macro level, this wing example resembles a cantilevered beam, and we expect a hardening-type response. Indeed, the three FRCs at O(3), O(5), and O(7) converge towards a hardening-type response, as shown in Figure 14b (see Table 8 for the computational resources consumed).

Table 8 depicts the computational resources consumed in obtaining these three FRCs. The peaks in memory consumption reported in Table 8 occur during the composition of the nonlinearity (see eq. (27)). These peaks are short-lived, however, as the average memory consumption during these computations is much lower. We remark that in the context of finite-element applications, these memory peaks can be reduced significantly by implementing the nonlinearity composition at the element level, in contrast to the current implementation at the full-system level. Once again, the use of parallel computing and distributed memory architectures would be greatly beneficial in this context.

Table 8: Computation time and memory requirements for obtaining the three FRCs depicted in Figure 14.
All computations were performed on MATLAB version 2019b installed on the ETH Zürich Euler supercomputing cluster on a single node with 100, 000 MB (≈ 100 GB) of RAM.
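The two Rayleigh constants follow from a 2×2 linear system, since under C = αM + βK the modal damping ratio satisfies ζ_i = (α/ω_i + βω_i)/2; prescribing ζ on two modes determines (α, β). The frequencies below are placeholders, not the wing's actual eigenfrequencies.

```python
import numpy as np

# Rayleigh damping C = alpha*M + beta*K: prescribe zeta on two modes and
# solve the resulting 2x2 system for (alpha, beta). Frequencies illustrative.
omega1, omega2 = 30.0, 95.0        # first two natural frequencies [rad/s]
zeta = 0.004                       # 0.4 % damping ratio on both modes
A = 0.5 * np.array([[1 / omega1, omega1],
                    [1 / omega2, omega2]])
alpha, beta = np.linalg.solve(A, [zeta, zeta])

# sanity check: recover the prescribed ratio on the first mode
z1 = 0.5 * (alpha / omega1 + beta * omega1)
```

Note that Rayleigh damping over-damps modes far outside the fitted frequency band, which is why the ratios are prescribed on the modes that dominate the response.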

Conclusions
In this work, we have reformulated the parametrization method for local approximations of invariant manifolds and their reduced dynamics in the context of high-dimensional nonlinear mechanics problems. In this class of problems, the classically used system diagonalization at the linear level is no longer feasible. Instead, we have developed expressions that enable the computation of invariant manifolds and their reduced dynamics in physical coordinates using only the master modes associated with the invariant manifold. Hence, these computations facilitate mathematically rigorous nonlinear model reduction in very high-dimensional problems. A numerical implementation of the proposed computational methodology is available in the open-source MATLAB package, SSMTool 2.0 [69], which enables the computation of invariant manifolds in finite element-based discretized problems via an integrated finite element solver [70] and bifurcation analysis of the reduced dynamics on these invariant manifolds via its coco [19] integration. We have connected this computational methodology to several applications of engineering significance, including the computation of parameter-dependent center manifolds; Lyapunov subcenter manifolds (LSM) and their associated conservative backbone curves; and Spectral Submanifolds (SSM) and their associated forced response curves (FRCs) in dissipative mechanical systems. We have also demonstrated fast and reliable computations of FRCs via a normal form style parametrization of SSMs in very large mechanical structures, which has been a computationally intractable task for other available approaches.
While our examples focused on the applications of two-dimensional SSMs, this automated computation procedure and its numerical implementation [69] can treat higher-dimensional invariant manifolds as well. Specifically, the reduced dynamics on higher-dimensional SSMs can be used for the direct computation of FRCs in internally-resonant mechanical systems featuring energy transfer among multiple modes, as will be demonstrated in forthcoming publications (Li et al. [73], Li & Haller [74]). Furthermore, in the non-autonomous setting, we have restricted our expressions to the leading-order contributions from the forcing. Similar expressions, however, can also be obtained for higher-order terms at the non-autonomous level, which is relevant for the nonlinear analysis of parametrically excited systems. These expressions and the related numerical implementation are currently under development.
Finally, as we have noted, these computations will further benefit from parallelization since the invariance equations can be solved independently for each monomial/Fourier multi-index (see Remarks 2 and 4, and Section 6.4). This development is currently underway and will be reported elsewhere.
where we have used basic properties of the Kronecker product (see, e.g., Van Loan [53]) along with eq. (113). Similarly, using eq. (112), we can verify that the left eigenvector corresponding to the eigenvalue µ is given by q := q_1 ⊗ · · · ⊗ q_i, i.e., q^⊤ Q_i = µ q^⊤.
We note that p_1, . . . , p_M are linearly independent since Q is semisimple. Then, using the fact that for any two linearly independent vectors a and b, the vectors a ⊗ b and b ⊗ a are also linearly independent, we conclude that all M_i such eigenvectors p of Q_i are linearly independent. Hence, Q_i is semisimple. It remains to show that the matrix E is singular with (e ⊗ u) ∈ ker(E^⊤) and (f ⊗ v) ∈ ker(E).
Proof. Since λ is a generalized eigenvalue of the matrix pair (A, B), with u and v the corresponding left and right eigenvectors, we have Av = λBv and u^⊤A = λu^⊤B (eqs. (115)-(116)). Similarly, for the matrix pair (C, D), with eigenvalue µ and e, f the corresponding left and right eigenvectors, we have Cf = µDf and e^⊤C = µe^⊤D (eqs. (117)-(118)).
We verify that λµ is a generalized eigenvalue of the matrix pair (C ⊗ A), (D ⊗ B) with right eigenvector (f ⊗ v), i.e., (C ⊗ A)(f ⊗ v) = Cf ⊗ Av = λµ (Df ⊗ Bv) = λµ (D ⊗ B)(f ⊗ v), where we have used eqs. (115), (117) along with basic properties of the Kronecker product of matrices (see, e.g., Van Loan [53]). Similarly, using eqs. (116), (118), we can show that (e ⊗ u)^⊤(C ⊗ A) = λµ (e ⊗ u)^⊤(D ⊗ B), i.e., (e ⊗ u) is the generalized left eigenvector for the generalized eigenvalue λµ, which proves the first part of the proposition. This also proves that the matrix E (see definition (114)) is singular and that the vectors (f ⊗ v) and (e ⊗ u) belong to its right and left kernels, respectively.
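The Kronecker identity underlying this proof, (C ⊗ A)(f ⊗ v) = Cf ⊗ Av, can be sanity-checked numerically. The snippet below verifies for random matrices that λµ is indeed a generalized eigenvalue of the pair (C ⊗ A), (D ⊗ B) with right eigenvector f ⊗ v; this is a numerical illustration, not part of the proof.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # nonsingular B
C = rng.standard_normal((2, 2))
D = np.eye(2) + 0.1 * rng.standard_normal((2, 2))   # nonsingular D

lam, V = eig(A, B)      # generalized eigenpairs: A v = lam * B v
mu, F = eig(C, D)       # generalized eigenpairs: C f = mu  * D f
v, f = V[:, 0], F[:, 0]

# (C x A)(f x v) = (C f) x (A v) = (mu D f) x (lam B v) = lam*mu*(D x B)(f x v)
lhs = np.kron(C, A) @ np.kron(f, v)
rhs = lam[0] * mu[0] * (np.kron(D, B) @ np.kron(f, v))
```

The residual between the two sides vanishes to machine precision, confirming the mixed-product property used in the derivation.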

B Proof of Lemma 1
In polar coordinates p = P(ρ, θ) := ρ (e^{iθ}, e^{-iθ})^⊤, the reduced dynamics (68) is given as ρ̇ = a(ρ), θ̇ = ω(ρ). Now, since the equation for ρ is a scalar ODE decoupled from the phase θ, the only possible steady states are fixed points with ρ̇ = 0, i.e., ρ = ρ(0) = constant. Therefore, θ̇ = ω(ρ) represents a constant angular frequency depending on the constant steady-state amplitude ρ. Since the LSM is an analytic manifold, the functions a and ω describing the reduced dynamics on the LSM are also analytic. Note that since the LSM is filled with periodic orbits in an open neighborhood of the origin, a(ρ) must be identically zero, because any non-trivial polynomial expression for a would result in isolated periodic orbits on the LSM. Thus, we deduce that our computational procedure must result in a(ρ) ≡ 0. Finally, as the LSM is foliated by periodic orbits in an open neighborhood of the origin, the equation θ̇ = ω(ρ) expresses the frequency of oscillation as a function of any given constant amplitude ρ for the periodic orbits on the LSM. Hence, the conservative backbone curve around the m-th mode is given by the relation (69).
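In practice, the backbone relation θ̇ = ω(ρ) is a polynomial in ρ assembled from the reduced-dynamics coefficients, and evaluating it yields the conservative amplitude-frequency curve directly. The truncated expansion and its coefficients below are illustrative of a hardening mode, not taken from any example in the paper.

```python
import numpy as np

omega0, gamma = 5.0, 0.8                       # linearized frequency, cubic coefficient
omega = lambda rho: omega0 + gamma * rho**2    # assumed truncated backbone expansion

rho = np.linspace(0.0, 0.5, 50)
backbone = omega(rho)   # oscillation frequency of the periodic orbit at amplitude rho
```

At ρ = 0 the backbone starts from the linearized natural frequency and, for γ > 0, bends monotonically to higher frequencies with growing amplitude.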
Comparing the real and imaginary parts in eq. (123), we obtain the polar reduced dynamics given in eq. (89), which concludes the proof of (i). The fixed points of system (89) are obtained by setting its right-hand side to zero, i.e., r(ρ, ψ, Ω) = 0.
Any such fixed point represents a periodic orbit of the reduced system (89) with constant polar radius ρ and constant phase difference ψ with respect to the cyclic variable ηφ, which has the angular frequency ηΩ. Hence, upon solving (124), we obtain a one-dimensional submanifold of zeros, whose projection onto the (ρ, Ω)-plane provides the FRC. Eliminating ψ from eq. (124), we obtain the fixed points as the set of points (ρ, Ω) that satisfy a single scalar equation, which proves statement (ii).
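To illustrate statement (ii), suppose the fixed-point system takes the hedged model form ρ̇ = a(ρ) + f cos ψ = 0, ψ̇ = (ω(ρ) − Ω) − (f/ρ) sin ψ = 0; eliminating ψ then gives a(ρ)² + ρ²(ω(ρ) − Ω)² = f², which can be solved for Ω in closed form at each amplitude ρ. The functions and coefficients below are illustrative.

```python
import numpy as np

f = 0.005
a = lambda r: -0.05 * r               # damping part of the radial dynamics
omega = lambda r: 1.0 + 0.1 * r**2    # amplitude-dependent frequency

# real solutions require a(rho)^2 <= f^2, i.e. rho <= f/0.05 here
rho = np.linspace(1e-4, 0.999 * f / 0.05, 200)
det = np.sqrt(f**2 - a(rho)**2) / rho
frc_left = omega(rho) - det           # left branch of the FRC
frc_right = omega(rho) + det          # right branch of the FRC
```

The two branches meet at the maximal amplitude where a(ρ)² = f², producing the characteristic closed resonance peak of the FRC in the (ρ, Ω)-plane.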