1 Introduction

Invariant manifolds are low-dimensional surfaces in the phase space of a dynamical system that constitute organizing centers of nonlinear dynamics. These surfaces are composed of full system trajectories that stay on them for all times, partitioning the phase space locally into regions of different behavior. For instance, invariant manifolds attached to fixed points can be viewed as the nonlinear analogues of (flat) modal subspaces of the linearized system. Classic examples are the stable, unstable and center manifolds tangent to the stable, unstable and center subspaces of fixed points (see, e.g., Guckenheimer and Holmes [27]). The most important class of invariant manifolds are those that attract other trajectories, because their own low-dimensional internal dynamics acts as a mathematically exact reduced-order model for the full, high-dimensional system. The focus of this article is to compute such invariant manifolds and their reduced dynamics accurately and efficiently in very high-dimensional nonlinear systems.

The theory of invariant manifolds has matured over more than a century of research and has been applied to numerous fields for the qualitative understanding of nonlinear behavior of systems (see Fenichel [20,21,22], Hirsch et al. [35], Wiggins [74], Eldering [18], Nipp and Stoffer [59]). The computation of invariant manifolds, on the other hand, is a relatively new and rapidly evolving discipline due to advances in scientific computing. In this paper, we address some challenges that have been hindering the computation of invariant manifolds in high-dimensional mechanical systems arising from spatially discretized partial differential equations (PDEs).

The methods for computing invariant manifolds can be divided into two categories: local and global. Local methods approximate an invariant manifold in the neighborhood of simpler invariant sets, such as fixed points, periodic orbits or invariant tori. Such local approximations are constructed via Taylor expansions around a fixed point or Taylor–Fourier expansions around a periodic orbit/invariant torus (see Simo [64]). Global methods, on the other hand, seek invariant manifolds globally in the phase space. Global techniques generally employ numerical continuation for growing invariant manifolds from a local approximation that may be obtained from the linearized dynamics (see Krauskopf et al. [48] for a survey).

1.1 Global techniques

A key aspect of most global techniques is to discretize the manifold into a mesh, e.g., via a collocation or a spectral approach (see Dankowicz and Schilder [13], Krauskopf et al. [49]), and solve invariance equations for the unknowns at the mesh points. For growing an \(M-\)dimensional manifold via q collocation/spectral points (along each of the M dimensions) in an \(N-\)dimensional dynamical system, one needs to solve a system of \(\mathcal {O}\left( Nq^M\right) \) nonlinear algebraic equations at each continuation step. This is achieved via an iterative solver such as Newton's method. As N invariably becomes large in the case of discretized PDEs governing mechanics applications, numerical continuation of invariant manifolds via collocation and spectral approaches becomes computationally intractable. Indeed, while global approaches are formulated for general systems, their most common applications involve low-dimensional problems, such as the computation of the Lorenz manifold (Krauskopf and Osinga [47]).

Global approaches also include the continuation of trajectory segments along the invariant manifold, expressed as a family of trajectories. This is achieved by formulating a two-point boundary value problem (BVP) satisfied by a trajectory on the manifold and numerically following a branch of solutions (see Krauskopf et al. [48], Guckenheimer et al. [28]). While collocation and spectral methods are valid means to achieve this end as well, the (multiple) shooting method (see Keller [43]; Stoer and Bulirsch [66]) has a distinguishing appeal from a computational perspective for high-dimensional problems. In the (multiple) shooting method, an initial guess for one point on the solution trajectory is iteratively corrected until the two-point BVP is solved up to a required precision. In each iteration, one performs numerical time integration of the full nonlinear system between the two points of the BVP, which is a computationally expensive undertaking for large systems. However, time integration involves the solution of \(\mathcal {O}\left( N\right) \) nonlinear algebraic equations at each time step, in contrast to collocation and spectral methods, which require \(\mathcal {O}\left( Nq\right) \) nonlinear algebraic equations to be solved simultaneously for q collocation/spectral points. Coupled with advances in domain decomposition methods for time integration (see Carraro et al. [10]), the multiple shooting method provides a feasible alternative to collocation and spectral methods. Still, covering a multi-dimensional invariant manifold using trajectory segments in a high-dimensional system remains elusive even for multiple shooting methods (see, e.g., Jain et al. [42]).

A number of numerical continuation packages have enabled the computation of global invariant manifolds via collocation, spectral or multiple shooting methods. AUTO [17], a FORTRAN-based package, constitutes the earliest organized effort toward continuation and bifurcation analysis of parameter-dependent ODEs. AUTO [17] employs orthogonal collocation to approximate solutions and is able to continue solution families in two or three parameters. The MATLAB-based package Matcont [16] addresses some of the limitations of AUTO, albeit at a loss of computational performance. Additionally, Matcont can also perform normal form analysis. \(\textsc {coco}\) [13] is an extensively documented and object-oriented MATLAB package which enables continuation via multi-dimensional atlas algorithms (Dankowicz et al. [14]) and implements collocation as well as spectral methods. Another recent MATLAB package, NLvib [46], implements the pseudo-arc-length technique for the continuation of a single-parameter family of periodic orbits in nonlinear mechanical systems via the shooting method or the spectral method (commonly referred to as harmonic balance in mechanics). Similar continuation packages are also available in the context of delay differential equations (see DDE-BIFTOOL [19], written in MATLAB, and Knut [67], written in C++). The main focus of these and other similarly useful packages is on implementing automated continuation procedures, demonstrated on low-dimensional examples, rather than on the computational complexity of the underlying operations as N increases. As discussed above, the collocation/spectral/shooting techniques that are invariably employed in such packages limit their ability to compute invariant manifolds in high-dimensional mechanics problems, where the system dimensionality N varies from several thousands to millions.

1.2 Local techniques

In contrast to global techniques, local techniques for invariant manifold computations produce approximations valid in a neighborhood of a fixed point, periodic orbit or invariant torus. As a result, local techniques are generally unable to compute homoclinic and heteroclinic connections. Nonetheless, in engineering applications, local approximations of invariant manifolds often suffice for assessing the influence of nonlinearities on a well-understood linearized response.

Center manifold computations and their associated local bifurcation analysis via normal forms (see Guckenheimer and Holmes [27]) are classic applications of local approximations where the manifold is expressed locally as a graph over the center subspace. For an \(M-\)dimensional manifold, this local graph is sought via an \(M-\)variate Taylor series where the coefficient of each monomial term is unknown. These unknown coefficients are determined by solving the invariance equations in a recursive manner at each polynomial order of approximation, i.e., the solution at a lower order can be computed without the knowledge of the higher-order terms. The computational procedure simply involves the solution of a system of \(\mathcal {O}(N)\) linear equations for each monomial in the Taylor expansion (see Simo [64] for flows, Fuming and Küpper [23] for maps). Thus, in the computational context of high-dimensional problems, local techniques that employ Taylor series approximations are far more feasible than global techniques that involve the continuation of collocation, spectral or shooting-based solutions.

More recently, the parametrization method has emerged as a rigorous framework for the local analysis and computation of invariant manifolds of discrete and continuous time dynamical systems. This method was first developed in papers by Cabré, Fontich and de la Llave [6,7,8] for invariant manifolds tangent to eigenspaces of fixed points of nonlinear mappings on Banach spaces and then extended to whiskered tori by Haro and de la Llave [32,33,34]. We refer to the monograph by Haro et al. [31] for an overview of the results. An important feature of the parametrization method is that it does not require the manifold parametrization to be the graph of a function and hence allows for folds in the manifold. Furthermore, the method returns the dynamics on the invariant manifold along with its embedding. The formal computation can again be carried out via Taylor series expansions when the invariant manifold is attached to a fixed point and via Fourier–Taylor expansions when it is attached to an invariant torus perturbing from a fixed point (see, e.g., Mireles James [56], Castelli et al. [11], Ponsioen et al. [61, 63]).

The main focus of the parametrization method has been on computer-assisted proofs of the existence and uniqueness of invariant manifolds, for which the dynamical system is conveniently diagonalized at the linear level. Furthermore, as discussed by Haro et al. [31], this diagonalization allows a choice between different styles of parametrization for the reduced dynamics on the manifold, such as a normal form style, a graph style or a mixed style. Recent applications of the parametrization method include the computation of spectral submanifolds or SSMs (Haller and Ponsioen [29]) and Lyapunov subcenter manifolds or LSMs (Kelley [44]). For these manifolds, the normal form parametrization style can be used to directly extract forced response curves (FRCs) and backbone curves in nonlinear mechanical systems, as we will discuss in this paper (see Ponsioen et al. [61, 63], Breunung and Haller [3], Veraszto et al. [72]).

1.3 Our contributions

While helpful for proofs and expositions, the routinely performed diagonalization and the associated linear coordinate change in invariant manifold computations render the parametrization method unfeasible for high-dimensional mechanics problems for two reasons. First, diagonalization involves the computation of all N eigenvalues of an \(N-\)dimensional dynamical system; second, the nonlinear coefficients in physical coordinates exhibit an inherent sparsity in mechanics applications that is annihilated by the linear coordinate change associated with diagonalization. Both factors lead to unmanageable computation times and memory requirements when N becomes large, as we discuss in Sect. 3 of this manuscript.

To address these issues, we develop here a new computational methodology for local approximations to invariant manifolds via the parametrization method. The key aspects making this methodology scalable to high-dimensional mechanics problems are the use of physical coordinates and of only the minimal number of eigenvectors. In the autonomous setting, we compute invariant manifolds attached to fixed points and develop expressions for the Taylor series coefficients that determine the parametrization of the invariant manifold as well as its reduced dynamics in different styles of parametrization (see Sect. 4). We develop similar expressions in the non-autonomous periodic or quasiperiodic setting, where we compute invariant manifolds, or whiskers, attached to an invariant torus that perturbs from a hyperbolic fixed point under the addition of small-amplitude non-autonomous terms. In this case, we compute the coefficients of the Fourier–Taylor series that parametrize the invariant manifold as well as its reduced dynamics in different parametrization styles (see Sect. 5). Finally, we apply this methodology to high-dimensional examples arising from finite element discretizations of structural mechanics problems, whose forced response curves we recover from a normal form style parametrization of SSMs (see Sect. 6).

Related computational ideas have already been used in other contexts. For instance, Beyn and Kleß [2] performed similar Taylor series-based computations of invariant manifolds attached to fixed points in physical coordinates using master modes. Their work predates the parametrization method and does not involve the choice of reduced dynamics or normal forms. Recently, Carini et al. [9] focused on computing center manifolds of fixed points and their normal forms using only master modes in physical coordinates. While this is an application of the parametrization method in the normal form style, they attribute their results to earlier related work by Coullet and Spiegel [12] and use these center manifolds for analyzing stability of flows around bifurcating parameter values.

More recently, Vizzaccaro et al. [73] and Opreni et al. [60] have computed normal forms for second-order, proportionally damped mechanical systems with up to cubic nonlinearities and derived explicit expressions up to cubic-order accuracy (see also Touzé et al. [71] for a review). This is a direct application of the parametrization method via a normal form style parametrization to formally compute assumed invariant manifolds whose existence/uniqueness is a priori unclear (cf. Haller and Ponsioen [29]). These results in [60, 73] provide low-order approximations to SSMs [29], whose computation up to arbitrarily high orders of accuracy has already been automated in prior work [61, 63] for mechanical systems with diagonalized linear part. A major computational advance in the approach of Vizzaccaro et al. [73] is the non-intrusive use of finite element software to compute normal form coefficients up to cubic order. All these prior results, however, are fundamentally developed for unforced (autonomous) systems.

The computational procedure we develop is generally applicable to first-order systems with smooth nonlinearities and periodic or quasiperiodic forcing, and it enables the automated computation of various types of invariant manifolds, such as stable, unstable and center manifolds, LSMs and SSMs, up to arbitrarily high orders of accuracy in physical coordinates. Finally, a numerical implementation of these computational techniques is available in the form of an open-source MATLAB package, SSMTool 2.0 [38], which is integrated with a generic finite element solver (Jain et al. [37]) for mechanics problems. We describe some key symbols and the notation used in the remainder of this paper in Table 1 before proceeding to the technical setup in the next section.

Table 1 Notation

2 General setup

We are mainly interested here in dynamical systems arising from mechanics problems. Such problems are governed by PDEs that are spatially discretized typically via the finite element method. The discretization results in a system of second-order ordinary differential equations for the generalized displacement \(\mathbf {x}(t)\in \mathbb {R}^{n}\), which can be written as

$$\begin{aligned} \mathbf {M}\ddot{\mathbf {x}}+\mathbf {C}\dot{\mathbf {x}}+\mathbf {K}\mathbf {x}+\mathbf {f}(\mathbf {x},\dot{\mathbf {x}})=\epsilon \mathbf {f}^{ext}(\mathbf {x},\dot{\mathbf {x}},\varvec{\Omega }t). \end{aligned}$$
(1)

Here, \(\mathbf {M},\mathbf {C},\mathbf {K}\in \mathbb {R}^{n\times n}\) are the mass, damping and stiffness matrices; \(\mathbf {f}(\mathbf {x},\dot{\mathbf {x}})\in \mathbb {R}^{n}\) is the purely nonlinear internal force; and \(\mathbf {f}^{ext}(\mathbf {x},\dot{\mathbf {x}},\varvec{\Omega }t)\in \mathbb {R}^{n}\) denotes the (possibly linear) external forcing with frequency vector \(\varvec{\Omega }\in \mathbb {R}^{K}\) for some \(K\ge 0\). The function \(\mathbf {f}^{ext}\) is autonomous for \(K=0\), periodic in t for \(K=1\) and quasiperiodic for \(K>1\) with K rationally incommensurate frequencies. The second-order system (1) may be expressed in a first-order form as

$$\begin{aligned} \mathbf {B}\dot{\mathbf {z}}&=\mathbf {Az}+\mathbf {F}(\mathbf {z})+\epsilon \mathbf {F}^{ext}(\mathbf {z},\varvec{\phi }), \end{aligned}$$
(2)
$$\begin{aligned} \dot{\varvec{\phi }}&=\varvec{\Omega }, \end{aligned}$$
(3)

where \(\mathbf {z}=\left[ \begin{array}{c} \mathbf {x}\\ \dot{\mathbf {x}} \end{array}\right] \), \(\mathbf {A},\mathbf {B}\in \mathbb {R}^{2n\times 2n},{\mathbf {F}:\mathbb {R}^{2n}\rightarrow \mathbb {R}^{2n}},\mathbf {F}^{ext}:\mathbb {R}^{2n}\times \mathbb {T}^{K}\rightarrow \mathbb {R}^{2n}\) denote the first-order quantities derived from system (1). Such a first-order conversion is not unique: Two equivalent choices are given by (see Tisseur and Meerbergen [70])

$$\begin{aligned}&(L1):\quad \mathbf {A}=\left[ \begin{array}{cc} \mathbf {0} &{} \mathbf {N}\\ -\mathbf {K} &{} -\mathbf {C} \end{array}\right] ,\quad \mathbf {B}=\left[ \begin{array}{cc} \mathbf {N} &{} \mathbf {0}\\ \mathbf {0} &{} \mathbf {M} \end{array}\right] ,\nonumber \\&\quad \mathbf {F}(\mathbf {z})=\left[ \begin{array}{c} \mathbf {0}\\ -\mathbf {f}(\mathbf {x},\dot{\mathbf {x}}) \end{array}\right] ,\quad \mathbf {F}^{ext}(\mathbf {z},\varvec{\phi })=\left[ \begin{array}{c} \mathbf {0}\\ \mathbf {f}^{ext}(\mathbf {x},\dot{\mathbf {x}},\varvec{\phi }) \end{array}\right] , \end{aligned}$$
(4)
$$\begin{aligned}&(L2):\quad \mathbf {A}=\left[ \begin{array}{cc} -\mathbf {K} &{} \mathbf {0}\\ \mathbf {0} &{} \mathbf {N} \end{array}\right] ,\quad \mathbf {B}=\left[ \begin{array}{cc} \mathbf {C} &{} \mathbf {M}\\ \mathbf {N} &{} \mathbf {0} \end{array}\right] ,\nonumber \\&\quad \mathbf {F}(\mathbf {z})=\left[ \begin{array}{c} -\mathbf {f}(\mathbf {x},\dot{\mathbf {x}})\\ \mathbf {0} \end{array}\right] ,\quad \mathbf {F}^{ext}(\mathbf {z},\varvec{\phi })=\left[ \begin{array}{c} \mathbf {f}^{ext}(\mathbf {x},\dot{\mathbf {x}},\varvec{\phi })\\ \mathbf {0} \end{array}\right] , \end{aligned}$$
(5)

where \(\mathbf {N}\in \mathbb {R}^{n\times n}\) may be chosen as any non-singular matrix. If the matrices \(\mathbf {M},\mathbf {C},\mathbf {K}\) are symmetric, then the choice of \(\mathbf {N=-K}\) for (L1) and \(\mathbf {N}=\mathbf {M}\) for (L2) results in the first-order matrices \(\mathbf {A},\mathbf {B}\) being symmetric. The computation methodology we will discuss is for any first-order system of the form (2) for \(\mathbf {z}\in \mathbb {R}^{N}\). In particular, we have \(N=2n\) for second-order mechanical systems of the form (1).
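For concreteness, a minimal MATLAB sketch of the assembly (L2) with the symmetry-preserving choice \(\mathbf {N}=\mathbf {M}\) might look as follows; the matrices and the nonlinear force used here are small placeholders standing in for the output of an actual finite element assembly.

```matlab
% Minimal sketch: first-order form (L2) with N = M, using small placeholder data.
n = 3;                                            % number of degrees of freedom (toy size)
M = eye(n);  C = 0.02*eye(n);  K = diag([1 2 3]); % placeholder mass/damping/stiffness
f = @(x,xd) [x(1)^3; 0; 0];                       % placeholder purely nonlinear internal force

A = [-K,        zeros(n);                         % A = [-K 0; 0 N]
      zeros(n), M       ];
B = [ C,        M;                                % B = [C M; N 0]
      M,        zeros(n)];
F = @(z) [-f(z(1:n), z(n+1:2*n)); zeros(n,1)];    % F(z) = [-f(x,xdot); 0]
```

With this choice, both \(\mathbf {A}\) and \(\mathbf {B}\) are symmetric whenever \(\mathbf {M},\mathbf {C},\mathbf {K}\) are, which we exploit below when relating left and right eigenvectors.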

We first focus on the autonomous (\(\epsilon =0\)) limit of the system (2), given by

$$\begin{aligned} \mathbf {B}\dot{\mathbf {z}}=\mathbf {Az}+\mathbf {F}(\mathbf {z}), \end{aligned}$$
(6)

whose linearization at the fixed point \(\mathbf {z}=0\) is

$$\begin{aligned} \mathbf {B}\dot{\mathbf {z}}=\mathbf {Az}. \end{aligned}$$
(7)

The linear system (7) has invariant manifolds defined by eigenspaces of the generalized eigenvalue problem

$$\begin{aligned} \left( \mathbf {A}-\lambda _{j}\mathbf {B}\right) \mathbf {v}_{j}=\mathbf {0},\quad j=1,\dots ,N, \end{aligned}$$
(8)

where for each distinct eigenvalue \(\lambda _{j}\), there exists an eigenspace \(E_{j}\subset \mathbb {R}^{N}\) spanned by the real and imaginary parts of the corresponding generalized eigenvector \(\mathbf {v}_{j}\in \mathbb {C}^{N}\). These eigenspaces are invariant for the linearized system (7) and, by linearity, a subspace spanned by any combination of eigenspaces is also invariant for the system (7). A general invariant subspace of this type is known as a spectral subspace [29] and is obtained by the direct summation of eigenspaces as

$$\begin{aligned} E_{j_{1},\dots ,j_{q}}=E_{j_{1}}\oplus \dots \oplus E_{j_{q}}, \end{aligned}$$

where \(\oplus \) denotes the direct sum of vector spaces and \(E_{j_{1},\dots ,j_{q}}\) is the spectral subspace obtained from the eigenspaces \(E_{j_{1}},\dots ,E_{j_{q}}\) for some \(q\in \mathbb {N}\). Classic examples of spectral subspaces are the stable, unstable and center subspaces, which are denoted by \(E^{s}\), \(E^{u}\) and \(E^{c}\) and are obtained from eigenspaces associated with eigenvalues with negative, positive and zero real parts, respectively. By the center manifold theorem, these classic invariant subspaces of the linear system (7) persist as invariant manifolds under the addition of nonlinear terms in system (6). Specifically, there exist stable, unstable and center invariant manifolds \(W^{s},W^{u}\) and \(W^{c}\) tangent to \(E^{s},E^{u}\) and \(E^{c}\) at the origin \(\mathbf {0}\in \mathbb {R}^{N}\), respectively. All these manifolds are invariant, and \(W^{s}\) and \(W^{u}\) are also unique (see, e.g., Guckenheimer and Holmes [27]).

In analogy with the stable manifold \(W^{s}\), which is the nonlinear continuation of the stable subspace \(E^{s}\), a spectral submanifold (SSM) [29] is an invariant submanifold of the stable manifold \(W^{s}\) that serves as the smoothest nonlinear continuation of a given stable spectral subspace of \(E^{s}\). The existence and uniqueness results for such spectral submanifolds under appropriate conditions are derived by Haller and Ponsioen [29] using the parametrization method of Cabré et al. [6,7,8]. The parametrization method also serves as a tool to compute these manifolds.

We are interested in locally approximating the invariant manifolds of the fixed point \(\mathbf {0}\in \mathbb {R}^{N}\) of system (6) using the parametrization method. Let \(\mathcal {W}(E)\) be an invariant manifold of system (6) which is tangent to a master spectral subspace \(E\subset \mathbb {R}^{N}\) at the origin such that

$$\begin{aligned} \mathrm {dim}(E)=\mathrm {dim}\left( \mathcal {W}(E)\right) =M<N. \end{aligned}$$
(9)

Let \(\mathbf {V}_{E}=[\mathbf {v}_{1},\dots ,\mathbf {v}_{M}]\in \mathbb {C}^{N\times M}\) be a matrix whose columns contain the (right) eigenvectors corresponding to the master modal subspace E. Furthermore, we define a dual matrix \(\mathbf {U}_{E}=[\mathbf {u}_{1},\dots ,\mathbf {u}_{M}]\in \mathbb {C}^{N\times M}\) which contains the corresponding left-eigenvectors that span the adjoint subspace \(E^{\star }\) as

$$\begin{aligned} \mathbf {u}_{j}^{\star }\left( \mathbf {A}-\lambda _{j}\mathbf {B}\right) =\mathbf {0},\quad j=1,\dots ,M, \end{aligned}$$
(10)

where we choose these eigenvectors to satisfy the normalization condition

$$\begin{aligned} \mathbf {u}_{i}^{\star }\mathbf {B}\mathbf {v}_{j}=\delta _{ij}\quad (\text {Kronecker delta}). \end{aligned}$$
(11)

Using the eigenvalue problems (8)-(10), we obtain the following relations for the matrices \(\mathbf {V}_{E}\) and \(\mathbf {U}_{E}\)

$$\begin{aligned}&\mathbf {A}\mathbf {V}_{E}=\mathbf {B}\mathbf {V}_{E}{\varvec{\Lambda }}_{E}, \end{aligned}$$
(12)
$$\begin{aligned}&\mathbf {U}_{E}^{\star }\mathbf {A}={\varvec{\Lambda }}_{E}\mathbf {U}_{E}^{\star }\mathbf {B}, \end{aligned}$$
(13)

where \({\varvec{\Lambda }}_{E}:=\mathrm {diag}(\lambda _{1},\dots ,\lambda _{M})\). We note that if the matrices \(\mathbf {M},\mathbf {C},\mathbf {K}\) are symmetric, then the matrices \(\mathbf {A},\mathbf {B}\) can be chosen symmetric as well (see Sect. 2). In that case, the left eigenvectors are simply the complex conjugates of the right eigenvectors, and we may conveniently choose \(\mathbf {U}_{E}=\bar{\mathbf {V}}_{E}\), with the overbar denoting complex conjugation.
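As an illustration of how the master quantities \(\mathbf {V}_{E}\), \(\mathbf {U}_{E}\) and \(\varvec{\Lambda }_{E}\) might be obtained in practice, the following MATLAB sketch applies the sparse iterative eigensolver \(\texttt {eigs}\) to the pencil \((\mathbf {A},\mathbf {B})\); it assumes symmetric sparse \(\mathbf {A},\mathbf {B}\) (e.g., from (L2) with \(\mathbf {N}=\mathbf {M}\)) and distinct eigenvalues, so that the off-diagonal terms in (11) vanish automatically.

```matlab
% Sketch: master eigenvalues/eigenvectors and normalization (11) for symmetric sparse A, B.
M_master  = 2;                                     % dim(E): number of master modes
[V_E, D]  = eigs(A, B, M_master, 'smallestabs');   % right eigenvectors and eigenvalues
Lambda_E  = diag(D);                               % column vector of master eigenvalues
U_E       = conj(V_E);                             % left eigenvectors when A, B are symmetric
for j = 1:M_master                                 % rescale so that u_j^* B v_j = 1; the
    U_E(:,j) = U_E(:,j) / (U_E(:,j)' * B * V_E(:,j));  % off-diagonal terms vanish for distinct
end                                                % eigenvalues of a symmetric pencil
```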

The common approach to local invariant manifold computation involves diagonalizing the system (6) as

$$\begin{aligned} \dot{\mathbf {q}}&=\varvec{\Lambda }\mathbf {q}+\mathbf {T}(\mathbf {q}), \end{aligned}$$
(14)

where

$$\begin{aligned}&{\varvec{\Lambda }}:=\mathrm {diag}(\lambda _{1},\dots ,\lambda _{N}),\nonumber \\&\mathbf {T}(\mathbf {q}):=\mathbf {U}^{\star }\mathbf {F}(\mathbf {V}\mathbf {q}),\nonumber \\&\mathbf {V}=[\mathbf {v}_{1},\dots ,\mathbf {v}_{N}],\nonumber \\&\mathbf {U}=[\mathbf {u}_{1},\dots ,\mathbf {u}_{N}], \end{aligned}$$
(15)

and \(\mathbf {q}\in \mathbb {C}^{N}\) are modal coordinates with \(\mathbf {z}=\mathbf {V}\mathbf {q}\). When \(\mathbf {B}=\mathbf {I}_{N}\), the normalization condition (11) yields \(\mathbf {U}^{\star }=\mathbf {V}^{-1}\), which results in the familiar diagonalized form (14) with \(\mathbf {T}(\mathbf {q})=\mathbf {V}^{-1}\mathbf {F}(\mathbf {V}\mathbf {q})\).

While the form (14) is very helpful for proving the existence and uniqueness of invariant manifolds, it is computationally intractable for the actual computation of these manifolds in high-dimensional finite element problems, as we will see next.

3 Pitfalls of the diagonalized form (14)

In this work, we use the Kronecker notation for expressing smooth nonlinear functions as multivariate Taylor series in their arguments. The Kronecker product, commonly denoted by the symbol \( \otimes \), generalizes the outer/dyadic product. For a column vector \( \mathbf {z} \in \mathbb {R}^N\), the Kronecker product \( \mathbf {z} \otimes \mathbf {z} \) collects all pairwise products \(z_i z_j\) and may be identified with the outer-product matrix \( \mathbf {z}\mathbf {z}^{\top } \in \mathbb {R}^{N\times N}\). In index notation, we write

$$\begin{aligned} (\mathbf {z}\otimes \mathbf {z})_{ij} = z_i z_j, \quad \forall i,j \in {1,\dots ,N}. \end{aligned}$$
(16)

The Kronecker notation extends to products of higher-order tensors, where a first-order tensor can be viewed as a vector, a second-order tensor as a matrix, and an order-k tensor as a \(k-\)dimensional array. Specifically, the Kronecker product of two tensors of orders p and q yields a tensor of order \( p+q \). We refer to Van Loan [55] for a concise review of the Kronecker product and its properties.
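In MATLAB, the Kronecker product is available through the built-in \(\texttt {kron}\) function; a two-line check of the identity (16) reads:

```matlab
% The identity (16): kron(z,z) stacks all products z_i*z_j, and reshaping recovers z*z.'
z  = [1; 2; 3];
Z2 = reshape(kron(z, z), 3, 3);   % equals the outer-product matrix z*z.'
disp(norm(Z2 - z*z.'))            % prints 0
```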

Now, the system nonlinearity \(\mathbf {F}\) (see Eq. (6)) can be expanded in terms of the physical coordinates \(\mathbf {z}\in \mathbb {R}^{N}\) as

$$\begin{aligned} \mathbf {F}(\mathbf {z})=\sum _{k\in \mathbb {N}}\mathbf {F}_{k}\mathbf {z}^{\otimes k}, \end{aligned}$$
(17)

where \(\mathbf {z}^{\otimes k}\) denotes the term \(\mathbf {z}\otimes \dots \otimes \mathbf {z}\) (k-times), containing \( N^k \) monomial terms at degree k in the variables \( \mathbf {z} \). The array \(\mathbf {F}_{k}\in \mathbb {R}^{N\times N^{k}}\) contains the coefficients of the nonlinearity \(\mathbf {F}\) associated with each of these monomials. Similarly, the nonlinearity \(\mathbf {T}\) (see Eq. (14)) in modal coordinates \(\mathbf {q}\in \mathbb {C}^{N}\) can be expanded as

$$\begin{aligned} \mathbf {T}(\mathbf {q})=\sum _{k\in \mathbb {N}}\mathbf {T}_{k}\mathbf {q}^{\otimes k}. \end{aligned}$$
(18)
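As a small illustration of the notation in (17), a quadratic–cubic nonlinearity can be evaluated directly from sparse coefficient matrices; the coefficient arrays F2 and F3 below are hypothetical placeholders rather than the output of an actual finite element model.

```matlab
% Sketch: evaluate F(z) = F2*z^(2) + F3*z^(3), i.e., Eq. (17) truncated at cubic order,
% where z^(k) denotes the k-fold Kronecker power of z.
N  = 6;
F2 = sprand(N, N^2, 0.01);     % placeholder sparse coefficients at degree k = 2
F3 = sprand(N, N^3, 0.001);    % placeholder sparse coefficients at degree k = 3
z  = rand(N, 1);
z2 = kron(z, z);               % z^(2), a vector of length N^2
z3 = kron(z2, z);              % z^(3), a vector of length N^3
Fz = F2*z2 + F3*z3;            % nonlinear force in physical coordinates
```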

3.1 Eigenvalue and eigenvector computation

For local approximations of invariant manifolds around a fixed point of (6), it is commonly assumed that the complete set of generalized eigenvalues of the pair \((\mathbf {A},\mathbf {B})\) (i.e., the spectrum of \(\mathbf {B}^{-1}\mathbf {A}\)) is known and that a basis in which the linear system (7) takes its Jordan canonical form is readily available (see, e.g., Simo [64]; Homburg et al. [36]; Tian and Yu [69]; Haro et al. [31]; Ponsioen et al. [61, 63]). For small to moderately sized systems, obtaining a complete set of (generalized) eigenvalues and eigenvectors can indeed be accomplished using numerical eigensolvers, but this quickly becomes intractable as the system size increases.

While techniques in numerical linear algebra can determine a small subset of eigenvalues and eigenvectors of very high-dimensional systems using a variety of iterative methods (see, e.g., Golub and Van Loan [25]), obtaining a complete set of eigenvectors of such systems remains unfeasible despite the availability of modern computing tools. To emphasize this, we illustrate in Fig. 2 the time and memory required for the eigenvalue computation of a finite element mesh of a square plate (see Fig. 1). The purpose of this comparison is to report trends in computational complexity rather than precise numbers. To this end, we have used MATLAB across all comparisons, which may not be the fastest computing platform in general but incorporates the state-of-the-art algorithms for numerical linear algebra (Golub and Van Der Vorst [26]).

Fig. 1 Mesh for the case study: We use a shell-based finite element mesh of a square plate with geometric nonlinearities, arising from von Kármán strains, for performing the numerical experiments (see Figs. 2 and 3). The material is linear elastic with Young's modulus 70 GPa, density 2700 \(\hbox {kg/}\mathrm {m}^{3}\) and Poisson's ratio 0.33. The plate has a thickness of 8 mm, and its side length is proportional to the square root of the total number of elements. This fixes the size of the elements in the mesh and avoids numerical errors that may otherwise arise in larger meshes

Fig. 2 Cost of computing eigenvalues and eigenvectors of the \(n-\)degree-of-freedom plate example (see Fig. 1) for \(n=\) 36, 120, 432, 1,632, 6,336, 24,960, 99,072. a Computation time and b memory required for obtaining all n eigenvalues and eigenvectors using the \(\texttt {eig}\) command of MATLAB, compared to those for obtaining a subset of 5 smallest-magnitude eigenvalues and their associated eigenvectors using the \(\texttt {eigs}\) command of MATLAB. These computations were performed on the ETH Zürich Euler cluster with \(10^{5}\) MB RAM. The computation of all eigenvalues in the case of \(n=\) 99,072 degrees of freedom was manually terminated, as the computation time estimated via extrapolation was around 341 days

Figure 2a shows that as the number of degrees of freedom, n, increases from a few tens to approximately a hundred thousand, the computational time required for computing the full set of eigenvalues of the system grows polynomially up to almost a year. For computing a subset of eigenvalues in discretized PDEs, sparse iterative eigensolvers are used, such as the routines (e.g., Stewart [65], Lehoucq et al. [51]) implemented by MATLAB's \(\texttt {eigs}\) command (cf. the direct eigensolvers implemented by the \(\texttt {eig}\) command). These sparse solvers are inefficient for matrices that are nearly full or only mildly sparse. There are, therefore, two competing factors here: the sparsity of the matrices and their size. The small matrices at the coarse end of the range have very low sparsity, so the sparse eigensolver \(\texttt {eigs}\) of MATLAB is inefficient for computing their eigenvalues. Indeed, we see that computing the full set of eigenvalues of a small matrix (using the \(\texttt {eig}\) command) ends up being less expensive than computing a subset of eigenvalues. As sparsity is governed by the number of degrees of freedom shared by neighboring elements relative to the total number of degrees of freedom, it increases with mesh refinement. Thus, sparse eigensolvers become more efficient as we refine the mesh initially; beyond an optimal level of refinement, however, the computation time is governed solely by the size of the matrix.
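The qualitative trend in Fig. 2 can be reproduced with a few lines of MATLAB on any sparse symmetric pencil; the example below uses a Poisson matrix from \(\texttt {gallery}\) as a stand-in stiffness matrix and an identity mass matrix, so the timings are only illustrative.

```matlab
% Sketch: cost of all eigenvalues (dense eig) vs. a small subset (sparse eigs).
K  = gallery('poisson', 40);                          % sparse SPD stand-in stiffness (1600 x 1600)
Mm = speye(size(K,1));                                % stand-in mass matrix
tic;  eig(full(K), full(Mm));        t_all    = toc;  % all eigenvalues, dense solver
tic;  eigs(K, Mm, 5, 'smallestabs'); t_subset = toc;  % 5 smallest-magnitude eigenvalues
fprintf('all eigenvalues: %.2f s, subset of 5: %.3f s\n', t_all, t_subset);
```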

Fig. 3 a Destruction of sparsity: Transforming the system (6) into (14) via the linear transformation \(\mathbf {z}=\mathbf {Vq}\) destroys the inherent sparsity of the governing equations in physical coordinates \(\mathbf {z}\), which leads to unfeasible memory (RAM) requirements for storing the nonlinearity coefficients. Here, the multi-dimensional arrays \(\mathbf {F}_{k}\) and \(\mathbf {T}_{k}\) contain the polynomial coefficients at degree k of the nonlinearities \(\mathbf {F}\) (in physical coordinates) and \(\mathbf {T}\) (in modal coordinates); see Eqs. (6), (14), (17) and (18). b Comparison of memory requirements for storing the nonlinearities \(\mathbf {F}\) and \(\mathbf {T}\) at degrees \(k=2,3,4,5\) in the \(n-\)degree-of-freedom square plate example (see Fig. 1) with \(n=\) 36, 120, 432, 1,632, 6,336, 24,960, 99,072 and phase space dimension \(N=2n\)

Furthermore, all these eigenvectors must be held in the computer’s active memory (RAM) in typical invariant manifold computations, which contributes toward very high memory requirements, as shown in Fig. 2b. At the same time, these figures also show that a small subset of eigenvectors can be quickly computed and easily stored even for very high-dimensional systems.

3.2 Unfeasible memory requirements due to coordinate change

Aside from the cost of eigenvalue computation, invariant manifold computations typically involve local approximations via Taylor series. These are obtained by transforming the system into modal coordinates (see Eq. (14)), expressing the manifold locally as a graph over the master subspace, substituting the polynomial ansatz into Eq. (14) and solving the invariance equations recursively at each order. While such a modal transformation results in decoupling of the governing equations at the linear level, it generally annihilates the inherent sparsity in the nonlinear terms, as shown in Fig. 3a. That sparsity generally arises because only neighboring elements of the numerical mesh share coupled degrees of freedom. Due to the loss of this sparsity upon transformation to the diagonal form (14), the number of polynomial coefficients required to describe the nonlinearities increases by orders of magnitude, resulting in unfeasible memory requirements.

Fig. 4 Using the parametrization method, we obtain the parametrization \(\mathbf {W}:\mathbb {C}^{M}\rightarrow \mathbb {R}^{N}\) of an \(M-\)dimensional invariant manifold for the system (6) constructed around a spectral subspace E with \(\dim (E)=\dim (\mathcal {W}(E))=M\). This manifold is tangent to E at the origin. Furthermore, we have the freedom to choose the parametrization \(\mathbf {R}:\mathbb {C}^{M}\rightarrow \mathbb {C}^{M}\) of the reduced dynamics on the manifold such that the function \(\mathbf {W}\) also maps the reduced system trajectories \(\mathbf {p}(t)\) onto the full system trajectories on the invariant manifold, i.e., \(\mathbf {z}(t)=\mathbf {W}\left( \mathbf {p}(t)\right) \)

Indeed, in Fig. 3b, we compare the memory estimates for storing these coefficients in physical vs. modal coordinates as a function of the system's phase space dimension N. We see that even for the moderately sized meshes of the square plate example (see Fig. 1) considered here, the storage requirements for the transformed coefficients reach astronomically high values on the order of several terabytes/petabytes. At the same time, however, note that the RAM requirements for handling the same coefficients in physical coordinates are much less than a gigabyte, which is easily manageable for modern computers.
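The gap in Fig. 3b can be anticipated from a simple count: the dense modal array \(\mathbf {T}_{k}\in \mathbb {C}^{N\times N^{k}}\) stores \(N^{k+1}\) complex entries, whereas the sparse physical array \(\mathbf {F}_{k}\) only stores its nonzeros, whose number grows roughly linearly with the mesh size. A back-of-the-envelope MATLAB estimate, with a hypothetical nonzero count for \(\mathbf {F}_{k}\), reads:

```matlab
% Rough storage estimate; the nonzero count of F_k is an assumed, illustrative value.
N        = 2*6336;                    % phase-space dimension of one plate mesh
k        = 3;                         % degree of the nonlinearity coefficients
nnz_Fk   = 500*N;                     % hypothetical nonzero count of F_k (scales with mesh size)
bytes_Fk = 16 * nnz_Fk;               % sparse storage: ~8 B value + ~8 B index per nonzero
bytes_Tk = 16 * N^(k+1);              % dense complex storage of T_k: 16 B per entry
fprintf('F_k: %.3g GB,  T_k: %.3g PB\n', bytes_Fk/1e9, bytes_Tk/1e15);
```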

4 Computing invariant manifolds of fixed points in physical coordinates

Unlike commonly employed computational approaches [31, 36, 61, 62, 64, 69], we now describe the computation of general invariant manifolds in physical coordinates using the eigenvectors and eigenvalues associated with the master subspace E only. This is motivated by the computational advantages we expect based on Figs. 2 and 3.

We seek to compute an invariant manifold \(\mathcal {W}\left( E\right) \) tangent to a spectral subspace E at the origin of system (6). Let \(\mathbf {W}:\mathbb {C}^{M}\rightarrow \mathbb {R}^{N}\) be a mapping that parametrizes the \(M-\)dimensional manifold \(\mathcal {W}\left( E\right) \) and let \(\mathbf {p}\in \mathbb {C}^{M}\) be its parametrization coordinates. Then, \(\mathbf {W}(\mathbf {p})\) provides us the coordinates of the manifold in the phase space of system (6), as shown in Fig. 4. For any trajectory \(\mathbf {z}(t)\) on the invariant manifold \(\mathcal {W}\left( E\right) \), we have a reduced dynamics trajectory \(\mathbf {p}(t)\) in the parametrization space such that

$$\begin{aligned} \mathbf {\mathbf {z}}(t)=\mathbf {W}(\mathbf {p}(t)). \end{aligned}$$
(19)

Let \(\mathbf {R}:\mathbb {C}^{M}\rightarrow \mathbb {C}^{M}\) be a parametrization for the reduced dynamics. Then, any reduced dynamics trajectory \(\mathbf {p}(t)\) satisfies

$$\begin{aligned} \dot{\mathbf {p}}=\mathbf {R}(\mathbf {p}). \end{aligned}$$
(20)

Differentiating Eq. (19) with respect to t and using Eqs. (2) and (20), we obtain the invariance equation of \(\mathcal {W}(E)\) as

$$\begin{aligned} \mathbf {B}\,(D\mathbf {W})\,\mathbf {R}=\mathbf {A}\mathbf {W}+\mathbf {F}\circ \mathbf {W}. \end{aligned}$$
(21)

To solve this invariance equation, we need to determine the parametrizations \(\mathbf {W}\) and \(\mathbf {R}\). We choose to parametrize the manifold and its reduced dynamics in the form of multivariate polynomial expansions as

$$\begin{aligned}&\mathbf {W}(\mathbf {p})=\sum _{i\in \mathbb {N}}\mathbf {W}_{i}\mathbf {p}^{\otimes i}, \end{aligned}$$
(22)
$$\begin{aligned}&\mathbf {R}(\mathbf {p})=\sum _{i\in \mathbb {N}}\mathbf {R}_{i}\mathbf {p}^{\otimes i}, \end{aligned}$$
(23)

where \(\mathbf {W}_{i}\in \mathbb {C}^{N\times M^{i}}\), \(\mathbf {R}_{i}\in \mathbb {C}^{M\times M^{i}}\) are matrix representations of multi-dimensional arrays containing the unknown polynomial coefficients at degree i of the parametrizations \(\mathbf {W}\) and \(\mathbf {R}\). Furthermore, we have the expansion (17) for the nonlinearity \(\mathbf {F}\) in physical coordinates, where \(\mathbf {F}_{i}\in \mathbb {R}^{N\times N^{i}}\) are sparse arrays, which are straightforward to store despite their large size (see Sect. 3.2).

Using the expansions (17), (22) and (23), we collect the coefficients of the multivariate polynomials in the invariance equation (21) at degree \(i\ge 1\), similarly to Ponsioen et al. [61], as

$$\begin{aligned} \left( \mathbf {B}D\mathbf {W}\mathbf {R}\right) _{i}=\mathbf {A}\mathbf {W}_{i}+\left( \mathbf {F}\circ \mathbf {W}\right) _{i}, \end{aligned}$$
(24)

where

$$\begin{aligned} \left( \mathbf {B}D\mathbf {W}\mathbf {R}\right) _{i}=\mathbf {B}\mathbf {W}_{1}\mathbf {R}_{i}+\mathbf {B}{\sum _{j=2}^{i}\mathbf {W}_{j}\varvec{\mathcal {R}}_{i,j}}, \end{aligned}$$
(25)

with

$$\begin{aligned} \varvec{\mathcal {R}}_{i,j}:=\sum _{k=1}^{j}\underbrace{\mathbf {I}_{M}\otimes \dots \otimes \mathbf {I}_{M}\otimes \overset{\overset{k-\mathrm {th\,position}}{\uparrow }}{\mathbf {R}_{i-j+1}}\otimes \mathbf {I}_{M}\otimes \dots \otimes \mathbf {I}_{M}}_{j-\mathrm {terms}}, \end{aligned}$$
(26)

and

$$\begin{aligned} \left( \mathbf {F}\circ \mathbf {W}\right) _{i}=\sum _{j=2}^{i}\mathbf {F}_{j}\left( \sum _{\mathbf {q}\in \mathbb {N}^{j},|\mathbf {q}|=i}\mathbf {W}_{q_{1}}\otimes \dots \otimes \mathbf {W}_{q_{j}}\right) . \end{aligned}$$
(27)

At leading order, i.e., for \(i=1\), equation (24) simply yields

$$\begin{aligned} \mathbf {A}\mathbf {W}_{1}=\mathbf {B}\mathbf {W}_{1}\mathbf {R}_{1}. \end{aligned}$$
(28)

Comparing equation (28) with the eigenvalue problem (12), we choose a solution for \(\mathbf {W}_{1},\mathbf {R}_{1}\) in terms of the master modes and their eigenvalues as

$$\begin{aligned} \mathbf {W}_{1}=\mathbf {V}_{E},\quad \mathbf {R}_{1}=\varvec{\Lambda }_{E}. \end{aligned}$$
(29)

Remark 1

The solution choice (29) for \(\mathbf {W}_{1},\mathbf {R}_{1}\) is not unique. Indeed, we may choose \(\mathbf {W}_{1}\in \mathbb {C}^{N\times M}\) to be any matrix whose columns span the master subspace E, generally resulting in a non-diagonal \(\mathbf {R}_{1}\). Since our system is defined in the space of reals (\(\mathbb {R}\)), a real choice of \(\mathbf {W}_{1},\mathbf {R}_{1}\) allows us to choose the parametrization coordinates \(\mathbf {p}\) in \(\mathbb {R}^{M}\) instead of \(\mathbb {C}^{M}\). This will result in \(\mathbf {W}_{i}\in \mathbb {R}^{N\times M^{i}}\), \(\mathbf {R}_{i}\in \mathbb {R}^{M\times M^{i}}\) for each i, which reduces the computational memory requirements by half relative to the complex setting.

At any order \(i\ge 2\) in Eq. (24), we collect the terms containing the coefficients \(\mathbf {W}_{i}\) on the left-hand side and the lower degree terms on the right-hand side as

$$\begin{aligned} \mathbf {B}\mathbf {W}_{i}\varvec{\mathcal {R}}_{i,i} -\mathbf {A}\mathbf {W}_{i}=\mathbf {C}_{i}-\mathbf {B}\mathbf {W}_{1}\mathbf {R}_{i} \end{aligned}$$
(30)

where \(\varvec{\mathcal {R}}_{i,i}\) is defined according to Eq. (26) and

$$\begin{aligned} \mathbf {C}_{i}:=\left( \mathbf {F}\circ \mathbf {W}\right) _{i}-\mathbf {B} {\sum _{j=2}^{i-1}\mathbf {W}_{j}\varvec{\mathcal {R}}_{i,j}}. \end{aligned}$$

We solve (30) recursively for \(i\ge 2\) by vectorizing it as (see, e.g., Van Loan [55])

$$\begin{aligned} \varvec{\mathcal {L}}_{i}\mathbf {w}_{i}=\mathbf {h}_{i}(\mathbf {R}_{i}), \end{aligned}$$
(31)

where

$$\begin{aligned} \mathbf {w}_{i}&:=\mathrm {vec}\left( \mathbf {W}_{i}\right) \in \mathbb {C}^{NM^{i}}, \end{aligned}$$
(32)
$$\begin{aligned} \varvec{\mathcal {L}}_{i}&:=\left( \varvec{\mathcal {R}}_{i,i}^{\top }\otimes \mathbf {B}\right) -\left( \mathbf {I}_{M^{i}}\otimes \mathbf {A}\right) \in \mathbb {C}^{NM^{i}\times NM^{i}}, \end{aligned}$$
(33)
$$\begin{aligned} \mathbf {h}_{i}(\mathbf {R}_{i})&:=\mathrm {vec}\left( \mathbf {C}_{i}\right) -\mathbf {D}_{i}\mathrm {vec}\left( \mathbf {R}_{i}\right) \end{aligned}$$
(34)
$$\begin{aligned} \varvec{\mathcal {R}}_{i,i}&={\sum _{j=1}^{i}\left( \mathbf {I}_{M}\right) ^{\otimes j-1}\otimes \mathbf {R}_{1}\otimes \left( \mathbf {I}_{M}\right) ^{\otimes i-j}}\in \mathbb {C}^{M^{i}\times M^{i}},\nonumber \\&\quad \text {(from definition}\,(26)) \end{aligned}$$
(35)
$$\begin{aligned} \mathbf {D}_{i}&:=\left( \mathbf {I}_{M^{i}}\otimes \mathbf {B}\mathbf {W}_{1}\right) \in \mathbb {C}^{NM^{i}\times M^{i+1}}. \end{aligned}$$
(36)

In Eq. (31), the matrix \(\varvec{\mathcal {L}}_{i}\) is often called the order\(-i\) cohomological operator induced by the linear flow (7) and the master subspace E on the linear space of homogeneous, \(M-\)variate polynomials of degree i (see Haro et al. [31], Murdock [57]). Hence, at any order of expansion i, the entries of \(\varvec{\mathcal {L}}_{i}\) are completely determined using only the linear part of the full and reduced systems, i.e., via the matrices \(\mathbf {A},\mathbf {B}\) and \(\mathbf {R}_{1}\) (which is equal to \(\varvec{\Lambda }_{E}\) due to the choice (29)).
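For small master dimensions M and low orders i, the operator (33) can be assembled literally via Kronecker products. The MATLAB sketch below does this for order \(i=2\) under the choice (29), assuming that the sparse matrices \(\mathbf {A},\mathbf {B}\), the master quantities V_E and Lambda_E, and the degree-2 coefficient array F2 from (17) are in memory, and that no resonances occur at this order, so that \(\mathbf {R}_{2}=\mathbf {0}\).

```matlab
% Sketch: assemble and solve the order-2 cohomological equation (31) for W_2,
% assuming the trivial choice R_2 = 0 (no resonances at order 2).
Mdim = numel(Lambda_E);   N = size(A,1);
R1   = diag(Lambda_E);                             % reduced linear dynamics, Eq. (29)
R22  = kron(R1, eye(Mdim)) + kron(eye(Mdim), R1);  % \mathcal{R}_{2,2} from Eq. (35)
C2   = F2 * kron(V_E, V_E);                        % C_2 = F_2*(W_1 kron W_1), with W_1 = V_E
L2   = kron(R22.', B) - kron(speye(Mdim^2), A);    % cohomological operator, Eq. (33)
w2   = L2 \ C2(:);                                 % solve Eq. (31) with h_2 = vec(C_2)
W2   = reshape(w2, N, Mdim^2);                     % manifold coefficients at order 2
```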

Remark 2

For a diagonal choice of \(\mathbf {R}_{1}\) (see, e.g., choice (29)), the matrix \(\varvec{\mathcal {L}}_{i}\) has a block-diagonal structure, i.e., system (31) can be split into \(M^{i}\) decoupled linear systems containing N equations each. Hence, the coefficients parametrizing the manifold and its reduced dynamics can be determined independently for each monomial in \(\mathbf {p}^{\otimes i}\). This splitting of the large system (31) into smaller decoupled systems not only eases computations but also makes them amenable to parallel computing, which has the potential to speed up these computations by a factor of \(M^{i}\) at each order i.
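Under the diagonal choice (29), the block assembly of the previous sketch can be bypassed entirely: each monomial leads to one \(N\times N\) linear solve. A sketch of this decoupled loop for order \(i=2\) (same assumed inputs as above, again with no resonant monomials) reads:

```matlab
% Sketch: monomial-by-monomial solution of (31) at order i = 2, exploiting the
% block-diagonal structure of L_2 for diagonal R_1 = diag(Lambda_E).
[l2, l1] = ndgrid(1:Mdim, 1:Mdim);              % l2 varies fastest, matching the kron ordering
lam_ell  = Lambda_E(l1(:)) + Lambda_E(l2(:));   % eigenvalue sums of Eq. (39) for i = 2
W2 = zeros(N, Mdim^2);
for m = 1:Mdim^2                                % replace "for" by "parfor" to parallelize
    W2(:,m) = (lam_ell(m)*B - A) \ C2(:,m);     % one sparse N x N solve per monomial
end
```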

Note that the system (30) is under-determined in terms of the unknowns \(\mathbf {W}_{i},\mathbf {R}_{i}\). As discussed by Haro et al. [31], this underdeterminacy turns out to be an advantage, as it provides us the freedom to choose a particular style of parametrization depending on the context. When the matrix \(\varvec{\mathcal {L}}_{i}\) is non-singular for every \(i\ge 2\), the cohomological equations (31) have a unique solution \(\mathbf {W}_{i}\) for any choice of the reduced dynamics \(\mathbf {R}_{i}\). The trivial choice

$$\begin{aligned} \mathbf {R}_{i}=\mathbf {0}\quad \forall i\ge 2, \end{aligned}$$
(37)

leads to linear reduced dynamics. However, as we will see next, \(\varvec{\mathcal {L}}_{i}\) may be singular in the presence of resonances and yet system (31) may be solvable under an appropriate choice of parametrization.

4.1 Choice of parametrization

4.1.1 Eigenstructure of \(\varvec{\mathcal {L}}_{i}\)

In order to choose the reduced dynamics \(\mathbf {R}\) appropriately, we explore the eigenstructure of \(\varvec{\mathcal {L}}_{i}\) in relation to that of the matrix \(\varvec{\mathcal {R}}_{i,i}\) and the generalized matrix pair \(\left( \mathbf {B},\mathbf {A}\right) \). We first derive a general result which helps us compute the eigenstructure of \(\varvec{\mathcal {R}}_{i,i}\). For notational purposes, we introduce an ordered set \(\Delta _{i,M}\) which contains all i-tuples \(\varvec{\ell }_{j}\) (indexed lexicographically) with entries in \(\{1,\dots ,M\}\), defined as

$$\begin{aligned} \Delta _{i,M}:=\{\varvec{\ell }_{1},\dots ,\varvec{\ell }_{M^{i}} \in \{1,\dots ,M\}^{i}\subset \mathbb {N}^{i}\}. \end{aligned}$$
(38)

As an example, consider the case of \(i=3\) and \(M=2\). Then, we may order the \(3-\)tuples in \(\Delta _{3,2}\) lexicographically as \(\{(1,1,1),(1,1,2),(1,2,1),(1,2,2),(2,1,1),(2,1,2),(2,2,1),(2,2,2)\}\), which contains \(2^{3}\) elements. Essentially, each \( \varvec{\ell } \in \Delta _{i,M} \) corresponds to a monomial of degree i in the reduced variables \( \mathbf {p}\in \mathbb {C}^M \), namely \( p_{\ell _1}p_{\ell _2}\dots p_{\ell _i} \).

At a high order i for the manifold expansion, the matrix \(\varvec{\mathcal {R}}_{i,i}\) in Eq. (30) may be high-dimensional, even though its components only involve the low-dimensional matrices \(\mathbf {I}_{M}\) and \(\mathbf {R}_{1}\) (see Eq. (35)). Proposition 1 in Appendix A allows us to compute all the eigenvalues and eigenvectors of \(\varvec{\mathcal {R}}_{i,i}\) simply in terms of those of \(\mathbf {R}_{1}\). Indeed, let the eigenvalues of \(\mathbf {R}_{1}^{\top }\) be given by \(\lambda _{1},\dots ,\lambda _{M}\). Note that for a diagonal choice of  \(\mathbf {R}_{1}\) (see choice (29)), the left and right eigenvectors are simply given by the unit vectors aligned with the coordinate axes in \(\mathbb {C}^{M}\), i.e., \(\mathbf {e}_{1},\dots ,\mathbf {e}_{M}\). Then, from Proposition 1, the eigenvalues and eigenvectors of \(\varvec{\mathcal {R}}_{i,i}^{\top }\) are given as

$$\begin{aligned} \lambda _{{\varvec{\ell }}}:=\lambda _{\ell _{1}}+\dots + \lambda _{\ell _{i}},\quad \mathbf {e}_{\varvec{\ell }} =\mathbf {e}_{\ell _{1}}\otimes \dots \otimes \mathbf {e}_{\ell _{i}}, \quad \varvec{\ell }\in \Delta _{i,M}. \end{aligned}$$
(39)

Furthermore, Proposition 2 in Appendix A characterizes the eigenstructure of \(\varvec{\mathcal {L}}_{i}\) in relation to that of the matrices \((\mathbf {B},\mathbf {A})\) and \(\varvec{\mathcal {R}}_{i,i}\). From Proposition 2, we deduce that \(\varvec{\mathcal {L}}_{i}\) is singular whenever the resonance \(\lambda _{{\varvec{\ell }}}=\lambda _{j}\) occurs for some \(\varvec{\ell }\in \Delta _{i,M}\), \(j\in \{1,\dots ,N\}\). In this case, the solvability of Eq. (31) depends on the nature of such resonances. Hence, these resonances are distinguished into inner and outer resonances as

$$\begin{aligned}&\text {Inner resonances:}\quad \lambda _{{\varvec{\ell }}} =\lambda _{j},\nonumber \\&\varvec{\ell }\in \Delta _{i,M},j\in \{1,\dots ,M\}, \end{aligned}$$
(40)
$$\begin{aligned}&\text {Outer resonances:}\quad \lambda _{{\varvec{\ell }}} =\lambda _{j},\nonumber \\&\quad \varvec{\ell }\in \Delta _{i,M},j\in \{M+1,\dots ,N\}. \end{aligned}$$
(41)

Both inner and outer resonances render the cohomological operator \(\varvec{\mathcal {L}}_{i}\) singular. The main difference between them is that the cohomological equation (30) can be solved in the presence of inner resonances by adjusting the parametrization choice of \(\mathbf {R}_{i}\) so that the right-hand side of (30) belongs to \(\mathrm {im(}\varvec{\mathcal {L}}_{i})\), as we will discuss shortly. In the presence of outer resonances, however, the right-hand side of equation (30) cannot be adjusted to lie in the range of the operator \(\varvec{\mathcal {L}}_{i}\) and, hence, system (30) has no solution. Indeed, the manifold does not exist in the presence of certain outer resonances (see Cabré et al. [6], Haller and Ponsioen [29]). Haro et al. [31] refer to inner and outer resonances as internal and cross-resonances; we instead adopt the terminology of Ponsioen et al. [61], as the term internal resonance carries a different meaning in the context of mechanics.
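In practice, the eigenvalue sums \(\lambda _{\varvec{\ell }}\) in (39) and the inner resonances (40) can be enumerated directly from the master spectrum. The MATLAB sketch below does so at a chosen order, assuming the column vector Lambda_E of master eigenvalues is available, and uses a numerical tolerance so that near-resonances are also captured (cf. Remark 3 below).

```matlab
% Sketch: enumerate Delta_{i,M}, the sums lambda_ell of Eq. (39), and flag (near-)inner
% resonances of Eq. (40) up to a tolerance. Lambda_E is the assumed master eigenvalue vector.
i_ord  = 3;   Mdim = numel(Lambda_E);   tol = 1e-3;
idx    = (0:Mdim^i_ord-1)';
tuples = zeros(Mdim^i_ord, i_ord);
for k = 1:i_ord                                     % base-M digits give the lexicographic tuples
    tuples(:, i_ord-k+1) = mod(floor(idx/Mdim^(k-1)), Mdim) + 1;
end
lam_ell = sum(Lambda_E(tuples), 2);                 % lambda_ell = lambda_{l1} + ... + lambda_{li}
[res_ell, res_j] = find(abs(lam_ell - Lambda_E.') < tol);   % inner-resonant pairs (ell, j)
```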

Next, we discuss two common choices for reduced dynamics parametrization, i.e., the normal form and the graph style parametrizations, which are useful for solving the cohomological equation (31) in the presence of inner resonances.

4.1.2 Normal form parametrization

Normal forms provide us tools for the qualitative and quantitative understanding of local bifurcations in dynamical systems. Normal form computations for any dynamical system involve successive near-identity coordinate transformations to simplify the transformed dynamics. The simplest form of dynamics that one can hope for is linear. In the presence of resonances, however, a transformation that linearizes the dynamics does not exist and the normal form procedure results in nonlinear dynamics that is only “as simple as possible.” This is achieved by systematically removing the nonessential terms from the Taylor series up to any given order (see, e.g., Guckenheimer and Holmes [27]).

Using the parametrization method, we can simultaneously compute the normal form parametrization for the reduced dynamics \(\mathbf {R}\) along with the parametrization \(\mathbf {W}\) for the manifold. As discussed earlier, in the absence of any inner resonances, the trivial choice (see Eq. (37)) leads to the simplest (i.e., linear) reduced dynamics, which is automatically obtained from the normal form procedure. However, when the inner resonance relations (40) hold, then the dynamics cannot be linearized. In such cases, we can compute the essential nonlinear terms at degree i following the normal form style of parametrization. This involves first projecting the invariance equation (31) onto \(\mathrm {ker}(\varvec{\mathcal {L}}_{i}^{\star })\) to eliminate the unknowns \(\mathbf {W}_{i}\) and then solving for the essential non-trivial terms in \(\mathbf {R}_{i}\) by computing a partial inverse (see Murdock [57]). In prior approaches, this is achieved by transforming the governing equations into diagonal coordinates [27, 31, 61, 63] which causes the matrix \(\varvec{\mathcal {L}}_{i}\) to be diagonal and hence simplifies the detection of its kernel. Here, we develop explicit expressions for the computation of normal form directly in physical coordinates using only the knowledge of the left-eigenvectors \(\mathbf {u}_{j}\) associated with the master subspace E, as summarized below.

We focus on the case of a system with \(r_{i}\in \mathbb {N}_{0}\) inner resonances and no outer resonances at order i. Taking \(\mathbf {D}=\mathcal {\varvec{R}}_{i,i}^{\top }\) and \(\mathbf {C}=\mathbf {I}_{M^{i}}\) in Proposition 2, we can directly determine \(\mathrm {ker}(\varvec{\mathcal {L}}_{i}^{\star })\) using only the eigenvalues \(\lambda _{j}\) and the corresponding eigenvectors of the master subspace E. Specifically, the generalized eigenvalues of the matrix pair \((\mathbf {I}_{M^{i}},\mathcal {\varvec{R}}_{i,i}^{\top })\) are given by \(\mu _{\varvec{\ell }}=\frac{1}{\lambda _{{\varvec{\ell }}}}\) with left-eigenvectors \(\mathbf {e}_{\varvec{\ell }}\) according to Eq. (39), and the left kernel \(\mathcal {N}_{i}\) of \(\varvec{\mathcal {L}}_{i}\) is given by

$$\begin{aligned} \mathcal {N}_{i}:= & {} \mathrm {ker}(\varvec{\mathcal {L}}_{i}^{\star })\nonumber \\= & {} \text {span}\Big (\left( \mathbf {e}_{\varvec{\ell }}\otimes \mathbf {u}_{j}\right) \in \mathbb {C}^{NM^{i}}| {\lambda _{{\varvec{\ell }}}}=\lambda _{j},\nonumber \\&\varvec{\ell }\in \Delta _{i,M},j\in \{1,\dots ,M\}\Big ). \end{aligned}$$
(42)

Now, let \(\mathbf {N}_{i}\in \mathbb {C}^{NM^{i}\times r_{i}}\) be a basis for \(\mathcal {N}_{i}\), which can be obtained by simply stacking the column vectors \(\left( \mathbf {e}_{\varvec{\ell }}\otimes \mathbf {u}_{j}\right) \) from Definition (42) that are associated with the modes with inner resonances (see Eq. (40)). Then, the reduced dynamics coefficients in the normal form parametrization are chosen by projecting the invariance equation (31) onto \(\mathbf {N}_{i}\) as

$$\begin{aligned} \mathbf {N}_{i}^{\star }\varvec{\mathcal {L}}_{i}\mathbf {w}_{i}=\mathbf {N}_{i}^{\star }\mathbf {h}_{i}(\mathbf {R}_{i}). \end{aligned}$$
(43)

The left-hand side of Eq. (43) is identically zero since the columns of \(\mathbf {N}_{i}\) belong to \(\mathrm {ker}(\varvec{\mathcal {L}}_{i}^{\star })\), i.e.,

$$\begin{aligned} \mathbf {N}_{i}^{\star }\varvec{\mathcal {L}}_{i}=\mathbf {0}. \end{aligned}$$
(44)

Hence, we are able to eliminate the unknowns \(\mathbf {W}_{i}\) from Eq. (43) to obtain

$$\begin{aligned} \mathbf {N}_{i}^{\star }\mathbf {D}_{i}\mathrm {vec}\left( \mathbf {R}_{i}\right) =\mathbf {N}_{i}^{\star }\mathrm {vec}\left( \mathbf {C}_{i}\right) . \end{aligned}$$
(45)

To solve Eq. (45), we may further simplify it using the normalization (11) which results in

$$\begin{aligned} \mathbf {N}_{i}^{\star }\mathbf {D}_{i}&={\mathbf {E}_{i}}^{\top }, \end{aligned}$$
(46)

where \(\mathbf {E}_{i}\in \mathbb {R}^{M^{i+1}\times r_{i}}\) is a matrix whose columns are of the form \(\left( \mathbf {e}_{\varvec{\ell }}\otimes \mathbf {e}_{j}\right) \), such that \(\left( {\varvec{\ell }},j\right) \) are pairs with inner resonances, i.e., \({\lambda _{{\varvec{\ell }}}}=\lambda _{j}\), and \(\mathbf {e}_{j}\in \mathbb {R}^{M}\) is the unit vector aligned along the \(j^{\mathrm {th}}\) coordinate axis. Here, the columns \(\left( \mathbf {e}_{\varvec{\ell }}\otimes \mathbf {e}_{j}\right) \) of \(\mathbf {E}_{i}\) must be arranged analogously to the columns \(\left( \mathbf {e}_{\varvec{\ell }}\otimes \mathbf {u}_{j}\right) \) of \(\mathbf {N}_{i}\). Using the relation (46), and noting that \(\mathbf {E}_{i}\) is a Boolean matrix with \({\mathbf {E}_{i}}^{\top }\mathbf {E}_{i}=\mathbf {I}\), we obtain the canonical solution to (45) for the coefficients \(\mathbf {R}_{i}\) as

$$\begin{aligned} \text {Normal form style:} \quad \mathrm {vec}\left( \mathbf {R}_{i}\right) = \mathbf {E}_{i}\mathbf {N}_{i}^{\star }\mathrm {vec} \left( \mathbf {C}_{i}\right) . \end{aligned}$$
(47)

Note that for each inner-resonant pair \(\left( {\varvec{\ell }},j\right) \) in the definition of \( \mathcal {N}_i \) (42), the solution choice (47) produces non-trivial coefficients in the \( j ^{\mathrm {th}}\) equation of the reduced dynamics (20) precisely for the monomial \( p_{\ell _1}\dots p_{\ell _i} \) corresponding to that inner resonance. As a result, Eq. (47) directly provides the normal form coefficients of the reduced dynamics on the manifold in physical coordinates, using only the knowledge of the master modes spanning the adjoint modal subspace \(E^{\star }\).
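Continuing the earlier sketches, the projection (47) can be evaluated without ever forming \(\varvec{\mathcal {L}}_{i}\) or \(\mathbf {N}_{i}\) explicitly: by the Kronecker identity \(\left( \mathbf {e}_{\varvec{\ell }}\otimes \mathbf {u}_{j}\right) ^{\star }\mathrm {vec}\left( \mathbf {C}_{i}\right) =\mathbf {u}_{j}^{\star }\mathbf {C}_{i}\mathbf {e}_{\varvec{\ell }}\), each resonant coefficient is a single projection of one column of \(\mathbf {C}_{i}\). A sketch, reusing the index pairs res_ell, res_j found above and assuming U_E and the array C_i (of size N by \(M^{i}\)) are in memory:

```matlab
% Sketch: normal-form-style reduced dynamics coefficients, Eq. (47). Only the monomials
% flagged as inner resonant receive nonzero coefficients; all other entries of R_i stay zero.
R_i = zeros(Mdim, Mdim^i_ord);
for k = 1:numel(res_ell)
    % (e_ell kron u_j)^* vec(C_i) reduces to u_j^* times the ell-th column of C_i
    R_i(res_j(k), res_ell(k)) = U_E(:, res_j(k))' * C_i(:, res_ell(k));
end
```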

Remark 3

(Near-resonances) Since the resonance relations (40)-(41) must hold for the real and imaginary parts of the eigenvalues simultaneously, they are seldom satisfied exactly. However, in lightly damped systems (where the real parts of the eigenvalues are small), near-resonances may exist between the imaginary parts of the eigenvalues. In such cases, it is desirable to include the corresponding near-resonant modes in the normal form parametrization of the reduced dynamics; otherwise, small divisors (ill-conditioning) arise in solving system (31) and the domain of validity of the Taylor approximation to the manifold shrinks (see, e.g., Guckenheimer and Holmes [27]).

4.1.3 Graph style parametrization

As the name suggests, a graph style of parametrization for the reduced dynamics is the result of expressing the manifold as a graph over the master subspace E (see Haro et al. [31]), as done in the graph transform method. The graph style of parametrization may be appealing in the context of center manifold computation, where an infinite number of inner resonances may arise. For instance, in a system with a two-dimensional center subspace with eigenvalues \(\lambda _{1,2}=\pm \mathrm {i}\omega \), the center manifold exhibits the inner resonances

$$\begin{aligned} \lambda _{1}&=\left( \ell +1\right) \lambda _{1}+\ell \lambda _{2}, \end{aligned}$$
(48)
$$\begin{aligned} \lambda _{2}&=\ell \lambda _{1}+\left( \ell +1\right) \lambda _{2},\quad \forall \ell \in \mathbb {N}. \end{aligned}$$
(49)

In our setting, a graph style of parametrization is achieved by projecting the invariance equations (31) onto the subspace \(\mathcal {G}_{i}\) defined as

$$\begin{aligned} \mathcal {G}_{i}:=\text {span} \left( \left( \mathbf {e}_{\varvec{\ell }}\otimes \mathbf {u}_{j}\right) \in \mathbb {C}^{NM^{i}},\varvec{\ell }\in \Delta _{i,M},j\in \{1,\dots ,M\} \right) . \end{aligned}$$
(50)

Note that in the case of inner resonances, \(\mathcal {N}_{i}\subset \mathcal {G}_{i}\) (cf. Definition (42)) and, hence, \(\mathcal {G}_{i}\) includes all possible resonant subspaces at order i. Then, similarly to the normal form style, we define a basis \(\mathbf {G}_{i}\in \mathbb {C}^{NM^{i}\times M^{i}}\) for \(\mathcal {G}_{i}\), obtained by stacking the column vectors \(\left( \mathbf {e}_{\varvec{\ell }}\otimes \mathbf {u}_{j}\right) \) in Definition (50). We obtain a graph style of parametrization by projecting (31) on to \(\mathcal {G}_{i}\) and equating the right-hand side to zero as

$$\begin{aligned} \mathbf {G}_{i}^{\star }\mathbf {h}_{i}(\mathbf {R}_{i})=\mathbf {0}. \end{aligned}$$
(51)

In contrast to the normal form style (47), where only the coefficients of resonant monomials are non-trivial, we generally obtain a larger set of monomials with non-trivial coefficients in the graph style. Hence, the normal form style retains only the minimal number of nonlinear terms in the reduced dynamics that are essential for solving the invariance equation (21) at each order i, whereas the graph style generally leads to more complex expressions of the reduced dynamics.

For the choice of \(\mathbf {W}_{1}=\mathbf {V}_{E}\) (see Eq. (29)) and the normalization condition (11), Eq. (51) can be simplified to obtain the reduced dynamics coefficients in the graph style as

$$\begin{aligned} \text {Graph style:}\quad \mathbf {R}_{i}=\mathbf {U}_{E}^{\star }\mathbf {C}_{i}. \end{aligned}$$
(52)

Note that Eq. (52) directly provides the reduced dynamics coefficients in the graph style without the evaluation of \(\mathbf {G}_{i}\). Hence, an advantage of using the graph style parametrization relative to the normal form style is that specific inner resonances need not be identified a priori.
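In the same illustrative notation as the sketch above, the graph style (52) requires no resonance detection at all:

% graph style, Eq. (52): no resonance detection needed
Ri = U' * Ci;    % R_i = U_E^* C_i, with U the master left eigenvectors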

More generally, a combination of graph and normal form styles of parametrization may also be used depending on the problem. This is referred to as a mixed style of parametrization as discussed by Haro et al. [31]. A mixed style may be particularly appealing in the context of parameter-dependent manifold computation, where the parameters are dummy dynamic variables and the associated modes always have trivial dynamics. Thus, it is desirable to choose a graph style for the parametric modes and a normal form style for remaining master modes which may feature inner resonances (see Haro et al. [31], Murdock [57]).

4.1.4 Computing the parametrization coefficients \(\mathbf {w}_{i}\)

Once the reduced dynamics coefficients \(\mathbf {R}_{i}\) (specific to the choice of parametrization style) are determined (see Eqs. (47), (52)), we can compute the manifold parametrization coefficients \(\mathbf {W}_{i}\) by solving Eq. (30).

When the coefficient matrix \(\varvec{\mathcal {L}}_{i}\) is (nearly) singular in the presence of (near) resonances, numerical blow-up errors (ill-conditioning) may occur due to the small divisors that arise in solving system (30) using conventional solvers (see also Remark 3). As an alternative, we adopt a norm-minimizing solution to (30) given by

$$\begin{aligned} \mathbf {w}_{i}=\arg \min _{\mathbf {x}\in \mathbb {C}^{NM^{i}},\,\varvec{\mathcal {L}}_{i}\mathbf {x}=\mathbf {h}_{i}(\mathbf {R}_{i})}\Vert \mathbf {x}\Vert ^{2}, \end{aligned}$$
(53)

which can be obtained using existing routines, such as \(\texttt {lsqminnorm}\) in MATLAB. Other commonly used techniques in the literature include the simultaneous solution of equations (31) and (45) or (51). This involves the inversion of a bordered matrix that extends \(\varvec{\mathcal {L}}_{i}\) and ends up being non-singular (see, e.g., Beyn and Kleß [2], Kuznetsov [50]).
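In MATLAB, a norm-minimizing solution of the form (53) amounts to a single call to \(\texttt {lsqminnorm}\). The names below are illustrative placeholders, with Li standing for \(\varvec{\mathcal {L}}_{i}\) and hi for the right-hand side \(\mathbf {h}_{i}(\mathbf {R}_{i})\):

wi = lsqminnorm(Li, hi);     % minimum-norm solution of the (nearly) singular system (30)
Wi = reshape(wi, N, M^i);    % recover the coefficient matrix W_i from vec(W_i)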

To summarize, we have developed an automated procedure for computing invariant manifolds attached to fixed points of system (6), and for choosing different styles of parametrization for their reduced dynamics (Eqs. (47), (52)), by solving the invariance equations (31) in physical coordinates and using only the eigenvectors associated with the master spectral subspace. The open-source MATLAB package [38] automates this computational procedure. Next, we illustrate applications of the computational procedure developed so far.

4.2 Applications

4.2.1 Parameter-dependent center manifolds and their reduced dynamics

We illustrate the automated procedure developed above to compute the center manifold in the Lorenz system and its normal form style of reduced dynamics without performing any diagonalization and using only the modes associated with the center subspace. In the following example, we compute the \(\rho \)-dependent center manifold and the normal form of the reduced dynamics to analyze the local bifurcation around \(\rho =1\) (see section 3.2 in Guckenheimer and Holmes [27]). Consider the Lorenz system

$$\begin{aligned} \begin{aligned}\dot{x}&=\sigma (y-x),\\ \dot{y}&=\rho x-y-xz,\\ \dot{z}&=-\beta z+xy, \end{aligned} \end{aligned}$$
(54)

where \((x,y,z)\in \mathbb {R}^{3},\sigma ,\rho ,\beta >0\). The basic steps are as follows.

  1.

    Setup: With a new variable \(\mu :=\rho -1\) in the Lorenz system (54), we obtain an extended system of the form (6) with

    $$\begin{aligned}&\mathbf {z}=\left[ \begin{array}{c} x\\ y\\ z\\ \mu \end{array}\right] ,\quad \mathbf {A}=\left[ \begin{array}{cccc} -\sigma &{} \sigma &{} 0 &{} 0\\ 1 &{} -1 &{} 0 &{} 0\\ 0 &{} 0 &{} -\beta &{} 0\\ 0 &{} 0 &{} 0 &{} 0 \end{array}\right] ,\nonumber \\&\quad \mathbf {B}=\mathbf {I}_{4},\quad \mathbf {F}(\mathbf {z})=\left[ \begin{array}{c} 0\\ x\mu -xz\\ xy\\ 0 \end{array}\right] . \end{aligned}$$
    (55)

    For \(\sigma =\beta =1\), the eigenvalues of \(\mathbf {A}\) are given by \(\lambda _{1}=\lambda _{2}=0\), \(\lambda _{3}=-2\), and \(\lambda _{4}=-1\). The nonlinearity \(\mathbf {F}\) is quadratic and can be expressed according to the Kronecker expansion (17) as

    $$\begin{aligned} \mathbf {F}(\mathbf {z})=\mathbf {F}_{2}\mathbf {z}^{\otimes 2}, \end{aligned}$$

    where the term \(\mathbf {z}^{\otimes 2}=(\mathbf {z}\otimes \mathbf {z})=(x^{2},xy,xz,x\mu ,\) \(yx,y^{2},yz,y\mu ,zx,zy,z^{2},z\mu ,\mu x,\mu y,\mu z,\mu ^{2})^{T}\) contains the monomials, and \(\mathbf {F}_{2}\in \mathbb {R}^{4\times 4^{2}}\) is a sparse matrix representation of the coefficients of the quadratic nonlinearity. The nonzero entries of \(\mathbf {F}_{2}\) corresponding to the monomials \(x\mu \), xz and xy in the definition of \(\mathbf {F}\) (see Eq. (55)) are given as

    $$\begin{aligned} \left( \mathbf {F}_{2}\right) _{24}=1,\quad \left( \mathbf {F}_{2}\right) _{23}=-1,\quad \left( \mathbf {F}_{2}\right) _{32}=1. \end{aligned}$$
  2.

    Choose master subspace: We construct a center manifold over the center-subspace E spanned by the eigenvectors corresponding to the two zero eigenvalues. We obtain the eigenvectors associated with this E satisfying the normalization condition (11), as

    $$\begin{aligned} \varvec{\Lambda }_{E}&=\left[ \begin{array}{cc} 0 &{} 0\\ 0 &{} 0 \end{array}\right] ,\quad \mathbf {V}_{E}=\mathbf {U}_{E}=\left[ \begin{array}{cc} \begin{array}{c} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\\ 0\\ 0 \end{array} &{} \begin{array}{c} 0\\ 0\\ 0\\ 1 \end{array}\end{array}\right] . \end{aligned}$$
    (56)
  3.

    Assemble invariance equations: At leading order, the parametrization coefficients for the center manifold and its reduced dynamics can be simply chosen as (see Eqs. (56) and (29))

    $$\begin{aligned} \mathbf {W}_{1}=\mathbf {V}_{E},\quad \mathbf {R}_{1}= \varvec{\Lambda }_{E}= \mathbf {0}\in \mathbb {R}^{2\times 2}. \end{aligned}$$

    To obtain the parametrization coefficients at order 2, we need to solve the vectorized invariance equation (31) for \(i=2\) (see the MATLAB sketch after this list), i.e.,

    $$\begin{aligned} \varvec{\mathcal {L}}_{2}\mathrm {vec}\left( \mathbf {W}_{2}\right) =\mathrm {vec}\left( \mathbf {C}_{2}\right) -\left( \mathbf {I}_{4}\otimes \mathbf {W}_{1}\right) \mathrm {vec}\left( \mathbf {R}_{2}\right) , \end{aligned}$$
    (57)

    where

    $$\begin{aligned} \varvec{\mathcal {L}}_{2}&=\left( \varvec{\mathcal {R}}_{2,2}^{\top }\otimes \mathbf {I}_{4}\right) -\left( \mathbf {I}_{4}\otimes \mathbf {A}\right) \in \mathbb {R}^{16\times 16},\\ \mathbf {C}_{2}&=\mathbf {F}_{2}\mathbf {W}_{1}^{\otimes 2}. \end{aligned}$$
  4.

    Resonance detection: We deduce the eigenvalues of \(\varvec{\mathcal {R}}_{2,2}^{\top }\) from the formula (39) as

    $$\begin{aligned} \lambda _{(i,j)}=\lambda _{i}+\lambda _{j}=0,\quad \forall i,j\in \{1,2\}. \end{aligned}$$

    Thus, as per Definition (40), we observe inner-resonances between the two (zero) eigenvalues of the center-subspace, i.e.,

    $$\begin{aligned} \lambda _{(i,j)}=\lambda _{k}=0,\quad \forall i,j,k\in \{1,2\}, \end{aligned}$$

    and no outer-resonances, i.e.,

    $$\begin{aligned} \lambda _{(i,j)}=0\ne \lambda _{k},\quad \forall i,j\in \{1,2\},\quad k\in \{3,4\}. \end{aligned}$$
  5.

    Choice of parametrization: Hence, the singular system (57) is solvable for a non-trivial choice of the reduced dynamics coefficients \(\mathbf {R}_{2}\). Using Eq. (47), we choose the normal form parametrization of the reduced dynamics, which results in the following nonzero entry in the coefficient array \(\mathbf {R}_{2}\):

    $$\begin{aligned} \left( \mathbf {R}_{2}\right) _{12}=\frac{1}{2}. \end{aligned}$$
  6.

    Recursion: This procedure can be applied recursively to obtain higher-order terms in the reduced dynamics on the center manifold. The reduced dynamics on the two-dimensional center manifold up to cubic terms is given as

    $$\begin{aligned} \dot{\mathbf {p}}=\left[ \begin{array}{c} \dot{p}_{1}\\ \dot{p}_{2} \end{array}\right] =\left[ \begin{array}{c} \frac{1}{2}p_{1}p_{2}-\frac{1}{4}p_{1}^{3}-\frac{1}{8}p_{1}p_{2}^{2}\\ 0 \end{array}\right] . \end{aligned}$$
    (58)

    Here, the variable \(p_{2}\) is the modal coordinate along the center direction associated with the parameter \(\mu \). The normal form parametrization automatically results in trivial dynamics along this direction. Indeed, the near-identity transformation associated with the normal form leaves the coordinate along the \(\mu \)-mode unchanged, which prompts us to replace \(p_{2}\) by \(\mu \) in Eq. (58). Hence, we obtain the parameter-dependent dynamics on the center manifold as

    $$\begin{aligned} \dot{p}=R_{\mu }(p)=\frac{1}{2}p\mu -\frac{1}{8}p\mu ^{2}-\frac{1}{4}p^{3}, \end{aligned}$$

    which recovers the pitchfork bifurcation (see section 3.4 in Guckenheimer and Holmes [27]) with respect to the parameter \(\mu \).
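The order-2 step of the above procedure (Eq. (57)) can be reproduced with the following minimal MATLAB sketch; the variable names are illustrative and the fragment is not taken from SSMTool [38].

sigma = 1;  beta = 1;
A  = [-sigma sigma 0 0; 1 -1 0 0; 0 0 -beta 0; 0 0 0 0];
F2 = sparse(4, 16);  F2(2,4) = 1;  F2(2,3) = -1;  F2(3,2) = 1;   % monomials x*mu, -x*z and x*y in Eq. (55)
W1 = [1/sqrt(2) 0; 1/sqrt(2) 0; 0 0; 0 1];                       % V_E = U_E, Eq. (56)
U1 = W1;
C2 = F2 * kron(W1, W1);                  % C_2 = F_2 W_1^{(x)2}, a 4-by-4 matrix
R2 = U1' * C2;                           % every quadratic monomial is inner-resonant here,
                                         % so (47) and (52) coincide; this gives (R_2)_{12} = 1/2
L2  = -kron(eye(4), A);                  % Eq. (57): the R_{2,2} term drops out since Lambda_E = 0
rhs = C2(:) - kron(eye(4), W1) * R2(:);
W2  = reshape(lsqminnorm(L2, rhs), 4, 4);   % norm-minimizing solution, Eq. (53)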

For more involved applications to center manifold computation, we refer to the work of Carini et al. [9], who analyze the stability of bifurcating flows using a similar methodology for computing parameter-dependent center manifolds and normal forms.

4.2.2 Lyapunov subcenter manifolds and conservative backbone curves

The Lyapunov subcenter manifolds (LSMs) form centerpieces of periodic response in conservative, unforced, mechanical systems (see Kerschen et al. [45], de la Llave and Kogelbauer [15]). We discuss how the above methodology can be applied in such systems to compute LSMs and directly extract conservative backbone curves, i.e., the functional relationship between amplitudes and frequency of the periodic orbits on the LSM.

We consider the following form of a conservative mechanical system

$$\begin{aligned} \mathbf {M}\ddot{\mathbf {x}}+\mathbf {K}\mathbf {x}+\mathbf {f}(\mathbf {x}, \dot{\mathbf {x}})=\mathbf {0},\quad \mathbf {x}\in \mathbb {R}^{n}, \end{aligned}$$
(59)

where \(\mathbf {M},\mathbf {K}\in \mathbb {R}^{n\times n}\) are positive definite mass and stiffness matrices and \(\mathbf {f}=\mathcal {O}(\Vert \mathbf {x}\Vert ^{2},\Vert \dot{\mathbf {x}}\Vert ^{2},\Vert \mathbf {x}\Vert \Vert \dot{\mathbf {x}}\Vert )\) is a conservative nonlinearity. The generalized eigenvalue problem

$$\begin{aligned} \mathbf {K}{\varvec{\varphi }}_{j}=\omega _{j}^{2}\mathbf {M} {\varvec{\varphi }}_{j},\quad j=1,\dots ,n \end{aligned}$$
(60)

provides us the vibration modes \({\varvec{\varphi }}_{j}\in \mathbb {R}^n\) and the corresponding natural frequencies \(\omega _{j}\) of system (59). In the first-order form (5) with \(\mathbf {C}=\mathbf {0},\mathbf {N}=\mathbf {M}\), the eigenvalues and eigenvectors can be expressed using Eq. (60) as

$$\begin{aligned} \lambda _{2j-1}&=\mathrm {i}\omega _{j},\quad \lambda _{2j}=\bar{\lambda }_{2j-1},\end{aligned}$$
(61)
$$\begin{aligned} \mathbf {v}_{2j-1}&=\left[ \begin{array}{c} {\varvec{\varphi }}_{j}\\ \lambda _{2j-1}{\varvec{\varphi }}_{j} \end{array}\right] ,\quad \mathbf {v}_{2j}=\bar{\mathbf {v}}_{2j-1},\quad j=1,\dots ,n. \end{aligned}$$
(62)

Any distinct pair of eigenvalues \(\pm \mathrm {i}\omega _{m}\), where \(m=1,\dots ,n\), spans a two-dimensional linear modal subspace. An LSM is a unique, analytic, two-dimensional, nonlinear extension to such a linear modal subspace and is guaranteed to exist if the master eigenfrequency \(\omega _{m}\) is not in resonance with any of the remaining eigenfrequencies of the system (Kelley [44]), i.e., under the non-resonance conditions

$$\begin{aligned} \omega _{i}\ne k\omega _{m},\quad \forall k\in \mathbb {Z},\quad i\in \{1,\dots ,n\}\backslash \{m\}. \end{aligned}$$
(63)

The LSM over the \(m^{\mathrm {th}}\) mode can be computed by solving the invariance equation (21) in the physical coordinates using only the master mode \({\varvec{\varphi }}_{m}\) that spans the two-dimensional modal subspace \(E=\mathrm {span}\left( \mathbf {v}_{2m-1},\mathbf {v}_{2m}\right) \). The leading-order coefficients in the parametrizations for the LSM and its reduced dynamics are given by Eq. (29) as

$$\begin{aligned} \mathbf {W}_{1}&=[\mathbf {v}_{2m-1},\mathbf {v}_{2m}], \end{aligned}$$
(64)
$$\begin{aligned} \mathbf {R}_{1}&={\varvec{\Lambda }}_{E}=\mathrm {diag}(\mathrm {i}\omega _{m},-\mathrm {i}\omega _{m}). \end{aligned}$$
(65)

Note that for any \(\ell \in \mathbb {N}\), the master subspace E satisfies the inner resonance relations

$$\begin{aligned} \lambda _{2m-1}&=\left( \ell +1\right) \lambda _{2m-1}+\ell \lambda _{2m}, \end{aligned}$$
(66)
$$\begin{aligned} \lambda _{2m}&=\ell \lambda _{2m-1}+\left( \ell +1\right) \lambda _{2m}, \end{aligned}$$
(67)

which result in the following reduced dynamics in the normal form parametrization style (see Eq. (45))

$$\begin{aligned} \dot{\mathbf {p}}=\mathbf {R}(\mathbf {p})=\left[ \begin{array}{c} i\omega _{m}p\\ -i\omega _{m}\bar{p} \end{array}\right] +\sum _{\ell \in \mathbb {N}}\left[ \begin{array}{c} \gamma _{\ell }p^{\ell +1}\bar{p}^{\ell }\\ \bar{\gamma }_{\ell }p^{\ell }\bar{p}^{\ell +1} \end{array}\right] , \end{aligned}$$
(68)

where the \(\gamma _{\ell }\) are the non-trivial coefficients associated with the monomials in the normal form (45). Then, the following statement directly provides us the conservative backbone associated with the \(m^{\mathrm {th}}\) mode.

Lemma 1

Under the non-resonance condition (63), the backbone curve, i.e., the functional relationship between the polar response amplitude \(\rho \) and the oscillation frequency \(\omega \) of the periodic orbits of the LSM associated with the mode \({\varvec{\varphi }}_{m}\) of the conservative mechanical system (59), is given as

$$\begin{aligned} \omega (\rho )=\omega _{m}+\sum _{\ell \in \mathbb {N}}\mathrm {Im} (\gamma _{\ell })\rho ^{2\ell }. \end{aligned}$$
(69)

Proof

See Appendix B. \(\square \)
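Once the coefficients \(\gamma _{\ell }\) are available, evaluating the backbone curve (69) is immediate. The MATLAB fragment below is a sketch with placeholder values for \(\omega _{m}\) and \(\gamma _{\ell }\):

omega_m = 5.17;                         % placeholder linear natural frequency [rad/s]
gamma   = [0.10+0.30i, 0.00-0.02i];     % placeholder normal-form coefficients gamma_1, gamma_2
rho     = linspace(0, 1, 200);          % polar amplitude on the LSM
omega   = omega_m * ones(size(rho));
for l = 1:numel(gamma)
    omega = omega + imag(gamma(l)) * rho.^(2*l);   % Eq. (69)
end
plot(rho, omega);                        % conservative backbone curve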

5 Invariant manifolds and their reduced dynamics under non-autonomous forcing

In the non-autonomous setting of system (2), i.e., for \(\epsilon >0\), the fixed point is typically replaced by an invariant torus created by the quasiperiodic term \(\epsilon \mathbf {F}^{ext}(\mathbf {z},\varvec{\Omega }t)\), or by a periodic orbit when \(\varvec{\Omega }\) is one-dimensional. Indeed, for small enough \(\epsilon >0\), the existence of a small-amplitude invariant torus \(\gamma _{\epsilon }\) in the extended phase space of system (2) is guaranteed if the origin is a hyperbolic fixed point in its \(\epsilon =0\) limit (see Guckenheimer and Holmes [27]). In this setting, we have an invariant manifold \(\mathcal {W}(E,\gamma _{\epsilon })\), which can be viewed as a fiber bundle that perturbs smoothly from the spectral subbundle \(\gamma _{\epsilon }\times E\) under the addition of the nonlinear terms, as long as appropriate resonance conditions hold (see Theorem 4 of Haller and Ponsioen [29], Theorem 4.1 of Haro and de la Llave [32,33,34]).

Fig. 5

Applying the parametrization method to system (2) with \(\epsilon >0\), we obtain a parametrization \(\mathbf {W}_{\epsilon }:\mathbb {C}^{M}\times \mathbb {T}^{K}\rightarrow \mathbb {R}^{N}\) of an \((M+K)-\)dimensional perturbed manifold (whisker) attached to a small-amplitude whiskered torus \(\gamma _{\epsilon }\) parametrized by the angular variables \(\varvec{\phi }\in \mathbb {T}^{K}\) with \(\dot{\varvec{\phi }}=\varvec{\Omega }\). This whisker is perturbed from the master spectral subspace E of the linear system (7) under the addition of nonlinear and \(\mathcal {O}(\epsilon )\) terms of system (2) (cf. Fig. 4). Furthermore, we have the freedom to choose a parametrization \(\mathbf {R}_{\epsilon }:\mathbb {C}^{M}\times \mathbb {T}^{K}\rightarrow \mathbb {C}^{M}\) of the reduced dynamics on the manifold such that the function \(\mathbf {W}_{\epsilon }\) also maps the reduced system trajectories \(\mathbf {p}(t)\) onto the full system trajectories on the invariant manifold, i.e., \(\mathbf {z}(t)=\mathbf {W}_{\epsilon }\left( \mathbf {p}(t),\mathcal {\varvec{\Omega }}t\right) \)

In contrast to the invariant manifold \(\mathcal {W}(E)\) from the autonomous setting, the perturbed manifold or whisker, \(\mathcal {W}(E,\gamma _{\epsilon })\), is attached to \(\gamma _{\epsilon }\) instead of the origin and \(\dim \left( \mathcal {W}(E,\gamma _{\epsilon })\right) =\dim \left( E\right) +\dim \left( \gamma _{\epsilon }\right) =M+K\), as shown in Fig. 5. From a computational viewpoint, now the manifold \(\mathcal {W}(E,\gamma _{\epsilon })\) and its reduced dynamics need to be additionally parametrized by the angular variables \(\varvec{\phi }\in \mathbb {T}^{K}\) that correspond to the multi-frequency vector \(\varvec{\Omega }\in \mathbb {R}^{K}\) as

$$\begin{aligned} \mathbf {W}_{\epsilon }(\mathbf {p},\varvec{\phi })&=\mathbf {W}(\mathbf {p})+\epsilon \mathbf {X}(\mathbf {p},\varvec{\phi })+\mathcal {O}\left( \epsilon ^{2}\right) , \end{aligned}$$
(70)
$$\begin{aligned} \mathbf {R}_{\epsilon }(\mathbf {p},\varvec{\phi })&=\mathbf {R}(\mathbf {p})+\epsilon \mathbf {S}(\mathbf {p},\varvec{\phi })+\mathcal {O}\left( \epsilon ^{2}\right) . \end{aligned}$$
(71)

Here, \(\mathbf {W}_{\epsilon }:\mathbb {C}^{M}\times \mathbb {T}^{K}\rightarrow \mathbb {R}^{N}\), \(\mathbf {R}_{\epsilon }:\mathbb {C}^{M}\times \mathbb {T}^{K}\rightarrow \mathbb {C}^{M}\) are parametrizations for the invariant manifold \(\mathcal {W}(E,\gamma _{\epsilon })\) and its reduced dynamics; \(\mathbf {W}(\mathbf {p}),\mathbf {R}(\mathbf {p})\) recover the manifold \(\mathcal {W}(E)\) and its reduced dynamics in the unforced limit of \(\epsilon =0\); and \(\mathbf {X}(\mathbf {p},\varvec{\phi }),\mathbf {S}(\mathbf {p},\varvec{\phi })\) denote the \(\mathcal {O}(\epsilon )\) terms, which depend on the angular variables \(\varvec{\phi }\) due to the presence of forcing \(\mathbf {F}^{ext}(\mathbf {z},\varvec{\Omega }t)\). Invoking the invariance of \(\mathcal {W}(E,\gamma _{\epsilon })\), we substitute the expansions (70)-(71) into the governing equations (2) and collect the \(\mathcal {O}(\epsilon )\) terms to obtain (cf. Ponsioen et al. [63])

$$\begin{aligned}&\mathbf {B}\left[ D\mathbf {W}(\mathbf {p})\mathbf {S} (\mathbf {p},\varvec{\phi })+\partial _{\mathbf {p}}\mathbf {X}(\mathbf {p}, \varvec{\phi })\mathbf {R}(\mathbf {p})+\partial _{{\varvec{\phi }}} \mathbf {X}(\mathbf {p},\varvec{\phi })\cdot \varvec{\Omega }\right] \nonumber \\&=\left[ \mathbf {A}+D\mathbf {F}(\mathbf {W}(\mathbf {p}))\right] \mathbf {X} (\mathbf {p},\varvec{\phi })+\mathbf {F}^{ext}(\varvec{\phi },\mathbf {W} (\mathbf {p})). \end{aligned}$$
(72)

The terms \(\mathbf {X}(\mathbf {p},\varvec{\phi }),\mathbf {S}(\mathbf {p},\varvec{\phi })\) can be further expanded into Taylor series in \(\mathbf {p}\) with coefficients that depend on the angular variables \(\varvec{\phi }\) as

$$\begin{aligned}&\mathbf {X}(\mathbf {p},\varvec{\phi })=\mathbf {X}_{0}(\varvec{\phi }) +\sum _{j=1}^{\Gamma _{S}}\mathbf {X}_{j}(\varvec{\phi })\mathbf {p}^{\otimes j}, \end{aligned}$$
(73)
$$\begin{aligned}&\mathbf {S}(\mathbf {p},\varvec{\phi })=\mathbf {S}_{0}(\varvec{\phi }) +\sum _{j=1}^{\Gamma _{R}}\mathbf {S}_{j}(\varvec{\phi })\mathbf {p}^{\otimes j}. \end{aligned}$$
(74)

Collecting the \(\mathcal {O}(1)\) terms in \(\mathbf {p}\) from the invariance equation (72), we obtain

$$\begin{aligned} \mathbf {B}\left[ \mathbf {W}_{1}\mathbf {S}_{0}(\varvec{\phi }) +\partial _{{\varvec{\phi }}}\mathbf {X}_{0}(\varvec{\phi }) \cdot \varvec{\Omega }\right] =\mathbf {A}\mathbf {X}_{0}(\varvec{\phi }) +\mathbf {F}^{ext}(\varvec{\phi }), \end{aligned}$$
(75)

which is a system of linear differential equations for the unknown, time-dependent coefficients \(\mathbf {X}_{0}(\varvec{\phi })\). Similarly to the autonomous setting, the choice of reduced dynamics \(\mathbf {S}_{0}(\varvec{\phi })\) again provides us the freedom to remove (near-) resonant terms via a normal form style of parametrization.

In this work, we restrict our attention to the computation of the leading-order non-autonomous contributions, i.e., \(\mathbf {X}_{0}(\varvec{\phi }),\mathbf {S}_{0}(\varvec{\phi })\). To this end, we perform a Fourier expansion of the different terms in Eq. (75) as

$$\begin{aligned}&\mathbf {F}^{ext}(\mathbf {z},\varvec{\phi })=\sum _{\varvec{\kappa }\in \mathbb {Z}^{K}}\mathbf {F}_{0,\varvec{\kappa }}e^{\mathrm {i}\left\langle \varvec{\kappa },\varvec{\phi }\right\rangle }+\mathcal {O}(|\mathbf {z}|), \end{aligned}$$
(76)
$$\begin{aligned}&\mathbf {X}_{0}(\varvec{\phi })=\sum _{\varvec{\kappa }\in \mathbb {Z}^{K}}\mathbf {x}_{0,\varvec{\kappa }}e^{\mathrm {i}\left\langle \varvec{\kappa },\varvec{\phi }\right\rangle }, \end{aligned}$$
(77)
$$\begin{aligned}&\mathbf {S}_{0}(\varvec{\phi })=\sum _{\varvec{\kappa }\in \mathbb {Z}^{K}}\mathbf {s}_{0,\varvec{\kappa }}e^{\mathrm {i}\left\langle \varvec{\kappa },\varvec{\phi }\right\rangle }, \end{aligned}$$
(78)

where \(\mathbf {F}_{0,\varvec{\kappa }}\in \mathbb {C}^{N}\) are the known Fourier coefficients of the forcing \(\mathbf {F}^{ext}(\mathbf {z},\varvec{\Omega }t)\), and \(\mathbf {x}_{0,\varvec{\kappa }}\in \mathbb {C}^{N}\), \(\mathbf {s}_{0,\varvec{\kappa }}\in \mathbb {C}^{M}\) are the unknown Fourier coefficients of the leading-order, non-autonomous components of \(\mathbf {X},\mathbf {S}\). Upon substituting Eqs. (76)-(78) into Eq. (75) and comparing Fourier coefficients at order \(\varvec{\kappa }\), we obtain linear equations in terms of the variables \(\mathbf {x}_{0,\varvec{\kappa }},\mathbf {s}_{0,\varvec{\kappa }}\) as

$$\begin{aligned} \varvec{\mathcal {L}}_{0,\varvec{\kappa }}\mathbf {x}_{0,\varvec{\kappa }}&=\mathbf {h}_{0,\varvec{\kappa }}(\mathbf {s}_{0,\varvec{\kappa }}),\quad \varvec{\kappa }\in \mathbb {Z}^{K}, \end{aligned}$$
(79)

where

$$\begin{aligned} \varvec{\mathcal {L}}_{0,\varvec{\kappa }}&:=\mathrm {i}\left\langle \varvec{\kappa },\varvec{\Omega }\right\rangle \mathbf {B}-\mathbf {A}\in \mathbb {C}^{N\times N},\\ \mathbf {h}_{0,\varvec{\kappa }}(\mathbf {s}_{0,\varvec{\kappa }})&:=\mathbf {F}_{0,\varvec{\kappa }}-\mathbf {B}\mathbf {W}_{1}\mathbf {s}_{0,\varvec{\kappa }}\in \mathbb {C}^{N}. \end{aligned}$$

The coefficient matrix \(\varvec{\mathcal {L}}_{0,\varvec{\kappa }}\) in (79) becomes (nearly) singular when the forcing is (nearly) resonant with any of the eigenvalues of the system (\(\mathbf {A},\mathbf {B})\), i.e., when

$$\begin{aligned} \mathrm {i}\left\langle \varvec{\kappa },\varvec{\Omega }\right\rangle \approx \lambda _{j},\quad j\in \{1,\dots ,N\}. \end{aligned}$$
(80)

Similarly to the autonomous setting, such nearly resonant forcing leads to small divisors when solving system (79) (cf. Remark 3), and hence it is desirable to include such terms in the reduced dynamics as per the normal form style of parametrization. This results in (cf. Eq. (47))

$$\begin{aligned} \text {Normal form style:}\quad \mathbf {s}_{0,\varvec{\kappa }}=\mathbf {e}_{j} \left( \mathbf {u}_{j}^{\star }\mathbf {F}_{0,\varvec{\kappa }}\right) \nonumber \\ \quad \forall \varvec{\kappa }\in \mathbb {Z}^{K},\quad j\in \{1,\dots ,M\}:\mathrm {i}\left\langle \varvec{\kappa },\varvec{\Omega }\right\rangle \approx \lambda _{j}. \end{aligned}$$
(81)

Alternatively, using a graph style parametrization, we obtain (cf. Eq. (52))

$$\begin{aligned}&\text {Graph style:}\quad \mathbf {s}_{0,\varvec{\kappa }} =\mathbf {e}_{j}\left( \mathbf {u}_{j}^{\star }\mathbf {F}_{0,\varvec{\kappa }} \right) \quad \forall \varvec{\kappa }\in \mathbb {Z}^{K},\nonumber \\&j\in \{1,\dots ,M\}. \end{aligned}$$
(82)

Note, however, that these choices are only available for the modes in the master subspace that are resonant with the external frequency \(\varvec{\Omega }\), i.e., \(j=1,\dots ,M\) in the approximation (80). If the near-resonance relation (80) holds for any eigenvalues outside the master subspace, i.e., \(j=M+1,\dots ,N\), then the domain of convergence of our Taylor approximations is reduced. Depending on the application, a workaround for this may be to include any nearly resonant modes in the master subspace from the start.

Finally, upon determining the reduced dynamics coefficients \(\mathbf {s}_{0,\varvec{\kappa }}\) specific to the chosen parametrization style (see Eqs. (81), (82)), we compute a norm-minimizing solution to (79) given by

$$\begin{aligned} \mathbf {x}_{0,\varvec{\kappa }}=\arg \min _{\mathbf {y}\in \mathbb {C}^{N},\, \varvec{\mathcal {L}}_{0,\varvec{\kappa }}\mathbf {y}= \mathbf {h}_{0,\varvec{\kappa }}(\mathbf {s}_{0,\varvec{\kappa }})} \Vert \mathbf {y}\Vert ^{2}, \end{aligned}$$
(83)

as we did in the autonomous setting (cf. Eq. (53)).
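For a single harmonic \(\varvec{\kappa }\), Eqs. (79)-(83) reduce to a small linear solve. The following sketch assumes that the system matrices A, B, the master data W1 (\(=\mathbf {V}_{E}\)), U (left eigenvectors), lambda (master eigenvalues), the forcing coefficient F0k (\(=\mathbf {F}_{0,\varvec{\kappa }}\)), the frequency vector Omega and a user-chosen near-resonance tolerance tol are available; all names are illustrative.

kOm = 1i * sum(kappa .* Omega);           % i <kappa, Omega>
L0k = kOm * B - A;                        % coefficient matrix in Eq. (79)
M   = numel(lambda);
s0k = zeros(M, 1);
for j = 1:M
    if abs(kOm - lambda(j)) < tol         % near-resonance (80) within the master subspace
        s0k(j) = U(:, j)' * F0k;          % normal form style, Eq. (81)
    end
end
h0k = F0k - B * (W1 * s0k);               % right-hand side h_{0,kappa}(s_{0,kappa})
x0k = lsqminnorm(L0k, h0k);               % norm-minimizing solution, Eq. (83)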

Remark 4

(Parallelization) For each \(\varvec{\kappa }\in \mathbb {Z}^{K}\), the reduced dynamics coefficients \(\mathbf {s}_{0,\varvec{\kappa }}\) and the manifold coefficients \(\mathbf {x}_{0,\varvec{\kappa }}\) can be determined independently of each other. Hence, parallel computation of these coefficients will result in high speedup due to minimal cross-communication across the processes (see also Remark 2).

5.1 Spectral submanifolds and forced response curves

In structural dynamics, predicting the steady-state response of mechanical systems to periodic forcing is often the end goal of the analysis. This response is commonly expressed in terms of the FRC, which depicts the response amplitude as a function of the external forcing frequency. FRCs are computationally expensive to obtain for large structural systems of engineering significance (see, e.g., Ponsioen et al. [63], Jain et al. [42]). The recent theory of spectral submanifolds (SSMs) [29], however, has enabled the fast extraction of such FRCs via exact reduced-order models. The analytic results of Breunung and Haller [3], Ponsioen et al. [62] make it possible to obtain FRCs from the normal form of the reduced dynamics on two-dimensional SSMs without any numerical simulation. These approaches, however, develop SSM computations in diagonal coordinates assuming semisimplicity of the matrix \(\mathbf {B}^{-1}\mathbf {A}\), which limits their applicability for high-dimensional finite element-based problems, as we have discussed in Sect. 3. Here, we revisit their results in our context.

We consider the mechanical system (1) under periodic, position-independent forcing as

$$\begin{aligned} \mathbf {M}\ddot{\mathbf {x}}+\mathbf {C}\dot{\mathbf {x}}+\mathbf {K}\mathbf {x}+\mathbf {f}(\mathbf {x},\dot{\mathbf {x}})=\epsilon \mathbf {f}^{ext}(\Omega t), \end{aligned}$$
(84)

where \(\Omega \in \mathbb {R}_{+}\) is the external forcing frequency and the periodic forcing \(\mathbf {F}^{ext}(\Omega t)\) can be expressed in a Fourier expansion as

$$\begin{aligned} \mathbf {F}^{ext}(\Omega t)=\sum _{\kappa \in \mathbb {Z}}\mathbf {F}_{\kappa }^{ext}e^{\mathrm {i}\kappa \Omega t}. \end{aligned}$$
(85)

We assume that the system (84) represents a lightly damped structure and that \(\Omega \) satisfies the following near-resonance relationship with a two-dimensional spectral subspace associated with the eigenvalues \(\lambda ,\bar{\lambda }\):

$$\begin{aligned} \lambda -\mathrm {i}\eta \Omega \approx 0,\quad \bar{\lambda }+\mathrm {i} \eta \Omega \approx 0, \end{aligned}$$
(86)

for some \(\eta \in \mathbb {N}\). The left and right eigenvectors associated with the eigenvalues \(\{\lambda ,\bar{\lambda }\}\) are \(\{\mathbf {u},\bar{\mathbf {u}}\}\) and \(\{\mathbf {v},\bar{\mathbf {v}}\}\). Furthermore, under light damping (i.e., \(\frac{|\mathrm {Re}(\lambda )|}{|\mathrm {Im}(\lambda )|}\ll 1\)), the near-resonance relationships

$$\begin{aligned} \lambda&\approx \left( \ell +1\right) \lambda +\ell \bar{\lambda },\quad \bar{\lambda }\approx \ell \lambda +\left( \ell +1\right) \bar{\lambda }, \end{aligned}$$
(87)

will hold for any finite \(\ell \in \mathbb {N}\) (see Szalai et al. [68]). As per Eqs. (47) and (81), the near-resonances (86)-(87) lead to the following normal form for the reduced dynamics (cf. Breunung and Haller [3]):

$$\begin{aligned}&\mathbf {R}_{\epsilon }(\mathbf {p},\Omega t)=\left[ \begin{array}{c} \lambda p\\ \bar{\lambda }\bar{p} \end{array}\right] +\sum _{\ell \in \mathbb {N}}\left[ \begin{array}{c} \gamma _{\ell }p^{\ell +1}\bar{p}^{\ell }\\ \bar{\gamma }_{\ell }p^{\ell }\bar{p}^{\ell +1} \end{array}\right] \nonumber \\&\quad +\epsilon \left[ \begin{array}{c} \mathbf {u}^{\star }\mathbf {F}_{\eta }^{ext}e^{\mathrm {i}\eta \Omega t}\\ \bar{\mathbf {u}}^{\star }\bar{\mathbf {F}}_{\eta }^{ext}e^{-\mathrm {i}\eta \Omega t} \end{array}\right] +\mathcal {O}(\epsilon |p|), \end{aligned}$$
(88)

where the coefficients \(\gamma _{\ell }\) are determined automatically from the normal form style parametrization (47) of the reduced dynamics on the two-dimensional SSM.

Theorem 3.8 of Breunung and Haller [3] provides explicit expressions for extracting FRCs from the reduced dynamics on two-dimensional SSMs near a resonance with the forcing frequency. Their expressions are derived under the assumption of proportional damping, mono-harmonic, cosinusoidal and synchronous forcing on the structure. The following statement generalizes their expressions to system (84) with periodic forcing (85) and provides us a tool to extract forced-response curves near resonance from two-dimensional SSMs in physical coordinates.

Lemma 2

Under the near-resonance relationships (86) and (87):

(i) Reduced-order model on SSMs: The reduced dynamics (88) in polar coordinates \((\rho ,\theta )\) is given by

$$\begin{aligned} \left[ \begin{array}{c} \dot{\rho }\\ \rho \dot{\psi } \end{array}\right]&=\mathbf {r}(\rho ,\psi ,\Omega ):=\left[ \begin{array}{c} a(\rho )\\ b(\rho ,\Omega ) \end{array}\right] \nonumber \\&+\left[ \begin{array}{cc} \cos \psi &{} \sin \psi \\ -\sin \psi &{} \cos \psi \end{array}\right] \left[ \begin{array}{c} \mathrm {Re}\left( f\right) \\ \mathrm {Im}\left( f\right) \end{array}\right] , \end{aligned}$$
(89)
$$\begin{aligned} \dot{\phi }&=\Omega , \end{aligned}$$
(90)

where

$$\begin{aligned} a(\rho )&=\mathrm {Re}\left( \rho \lambda +\sum _{\ell \in \mathbb {N}}\gamma _{\ell }\rho ^{2\ell +1}\right) ,\\ b(\rho ,\Omega )&=\mathrm {Im}\left( \rho \lambda +\sum _{\ell \in \mathbb {N}}\gamma _{\ell }\rho ^{2\ell +1}\right) -\eta \rho \Omega ,\\ f&=\epsilon \mathbf {u}^{\star }\mathbf {F}_{\eta }^{ext},\\ \psi&=\theta -\eta \phi . \end{aligned}$$

(ii) FRC: The fixed points of the system (89) correspond to periodic orbits with frequency \(\eta \Omega \) and are given by the zero level set of the scalar function

$$\begin{aligned} \mathcal {F}(\rho ,\Omega )&:=\left[ a(\rho )\right] ^{2}+\left[ b(\rho ,\Omega )\right] ^{2}-|f|^{2}. \end{aligned}$$
(91)

(iii) Phase shift: The constant phase shift \(\psi \) between the external forcing \(\mathbf {f}^{ext}(\Omega t)\) and a \(\rho \)-amplitude periodic response, obtained as a zero of Eq. (91), is given by

$$\begin{aligned} \psi =\arctan \left( \frac{b(\rho ,\Omega )\mathrm {Re}(f)-a(\rho )\mathrm {Im}(f)}{-a(\rho )\mathrm {Re}(f)-b(\rho ,\Omega )\mathrm {Im}(f)}\right) . \end{aligned}$$
(92)

(iv) Stability: The stability of the periodic response is determined by the eigenvalues of the Jacobian

$$\begin{aligned} J(\rho )=\left[ \begin{array}{cc} \partial _{\rho }a &{} -b(\rho ,\Omega )\\ \frac{\partial _{\rho }b(\rho ,\Omega )}{\rho } &{} \frac{a(\rho )}{\rho } \end{array}\right] . \end{aligned}$$
(93)

Proof

See Appendix C. \(\square \)

Note that the zero level set of \(\mathcal {F}\), which provides the FRC, can also be written as the zero-level set of the functions

$$\begin{aligned} \mathcal {G}^{\pm }(\rho ,\Omega ):=b(\rho ,\Omega )\pm \sqrt{|f|^{2} -\left[ a(\rho )\right] ^{2}}. \end{aligned}$$

Despite the equivalence of the zero-level sets of the functions \(\mathcal {F}\) and \(\mathcal {G}^{\pm }\), one may be preferred over the other to avoid numerical difficulties. The zero-level set of \(\mathcal {F}\) is a one-dimensional submanifold in the \((\rho ,\Omega )\) space for a given forcing of small enough amplitude |f|. The parameter values for which the FRC contains more than one connected component are referred to in the literature as the emergence of detached resonance curves or isolas. The non-spurious zeros of the polynomial \(a(\rho )\) result in the non-trivial steady state for the full system (see Ponsioen et al. [62]). The analytical formulas given in Lemma 2 enable us to compute the FRCs along with isolas, if they exist.
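Since \(b\) is affine in \(\Omega \), a direct way to trace the FRC is to solve \(\mathcal {G}^{\pm }(\rho ,\Omega )=0\) for \(\Omega \) at each amplitude \(\rho \). The MATLAB fragment below sketches this, assuming the SSM normal-form data lambda, gamma (\(\gamma _{1},\dots ,\gamma _{L}\)), the modal forcing f (\(=\epsilon \mathbf {u}^{\star }\mathbf {F}_{\eta }^{ext}\)) and the integer eta are available; names and the amplitude range are illustrative.

rho = linspace(1e-4, 0.5, 400).';          % amplitude grid (placeholder range)
a   = real(lambda) * rho;                  % a(rho), cf. Lemma 2
b0  = imag(lambda) * rho;                  % Im-part of b(rho,Omega) before the -eta*rho*Omega term
for l = 1:numel(gamma)
    a  = a  + real(gamma(l)) * rho.^(2*l+1);
    b0 = b0 + imag(gamma(l)) * rho.^(2*l+1);
end
disc = abs(f)^2 - a.^2;                    % |f|^2 - a(rho)^2
ok   = disc >= 0;                          % amplitudes reachable at this forcing level
Om   = [b0(ok) - sqrt(disc(ok)), b0(ok) + sqrt(disc(ok))] ./ (eta * rho(ok));   % zeros of G^{+/-}
plot(Om, rho(ok) * [1 1]);                 % the two branches of the FRC in the (Omega, rho) plane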

In the case of (near-) outer resonances of \(\lambda \) with any of the remaining eigenvalues of the system, such a two-dimensional SSM does not exist (see Haller and Ponsioen [29]) and one should include the resonant eigenvalues in the master modal subspace E, resulting in higher-dimensional SSMs with inner resonances. The reduced dynamics on such high-dimensional SSMs can again be used to compute FRCs via numerical continuation, as discussed by Li et al. [54].

The automated computation procedure developed here is also applicable for treating high-dimensional problems with inner resonances up to arbitrarily high order of accuracy. A numerical implementation of the computational methodology developed in this work is available in the form of the open-source MATLAB package, SSMTool 2.0 [38], which is integrated with a generic finite element solver (Jain et al. [37]) and coco  [13]. This allows us to treat high-dimensional mechanics problems, as we demonstrate over several numerical examples in the next section.

Fig. 6

The schematic of an \(n-\)degree-of-freedom, nonlinear oscillator chain where each spring has linear stiffness k [N/m] and cubic stiffness \(\kappa \) [\(\hbox {N/m}^{3}\)]; each damper has linear damping coefficient c [N s/m]; and each mass (m [kg]) is forced periodically at frequency \(\Omega \) [rad/s] (see Eq. (94))

6 Numerical examples

In the following examples, we perform local SSM computations on mechanical systems following the methodology discussed in Sects. 4 and 5, which involves the solution of the invariance equations (21) and (72). We use the reduced dynamics on two-dimensional SSMs attached to periodic orbits for obtaining FRCs of various nonlinear mechanical systems via Lemma 2.

The equations of motion governing the following examples are given in the general form:

$$\begin{aligned} \mathbf {M}\ddot{\mathbf {x}}+\mathbf {C}\dot{\mathbf {x}}+\mathbf {K}\mathbf {x} +\mathbf {f}(\mathbf {x})=\epsilon \mathbf {f}^{ext}(\Omega t),\qquad \mathbf {x}(t)\in \mathbb {R}^{n}. \end{aligned}$$
(94)

An SSM characterizes the deformation in the corresponding modal subspace that arises due to the addition of nonlinearities in the linearized counterpart of system (94). Specifically, the nonlinear terms in the Taylor expansions \(\mathbf {W}\), \(\mathbf {R}\) (see Eqs. (22) and (23)) end up being non-trivial precisely due to the presence of the nonlinearity \(\mathbf {f}\) in system (94). For each of the following examples, we illustrate this deformation of the modal subspace by taking a snapshot (Poincaré section) of the non-autonomous SSM along with its reduced dynamics at an arbitrary time instant, \(t=t_0\). We then plot the SSM as a graph over the modal coordinates \([\rho \cos \theta ,~\rho \sin \theta ]\), where \(\theta = (\psi + \eta \Omega t_0)\) (see Lemma 2).

To this end, we simply simulate the autonomous, two-dimensional ROM (89), which results in the reduced dynamics trajectories \(\rho (t)\) and \( \theta (t)\) on the SSM in polar coordinates. We then map these trajectories onto the SSM using the parametrization \(\mathbf {W}_{\epsilon } (\mathbf {p}(t), \Omega t_0) \), where

$$\begin{aligned} \mathbf {p}(t) = \rho (t) \left[ \begin{array}{c} e^{\mathrm {i}\theta (t)}\\ e^{-\mathrm {i}\theta (t)} \end{array}\right] = \rho (t) \left[ \begin{array}{c} e^{\mathrm {i}(\psi (t) + \eta \Omega t_0)}\\ e^{-\mathrm {i}(\psi (t) + \eta \Omega t_0)} \end{array}\right] . \end{aligned}$$
(95)
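As an illustration, the polar ROM (89)-(90) can be integrated with a standard solver and mapped through Eq. (95). The sketch below truncates a and b at cubic order and uses placeholder names (lambda, gamma1, f, eta, Omega, t0) for quantities introduced in Lemma 2; the time span and initial condition are arbitrary.

a   = @(rho) real(lambda * rho + gamma1 * rho.^3);                      % a(rho), truncated at cubic order
b   = @(rho) imag(lambda * rho + gamma1 * rho.^3) - eta * rho * Omega;  % b(rho,Omega) at fixed Omega
rom = @(t, x) [a(x(1)) + cos(x(2)) * real(f) + sin(x(2)) * imag(f); ...
              (b(x(1)) - sin(x(2)) * real(f) + cos(x(2)) * imag(f)) / x(1)];   % Eq. (89), x = [rho; psi]
[~, x] = ode45(rom, [0 500], [0.1; 0]);    % reduced trajectory on the SSM
theta  = x(:, 2) + eta * Omega * t0;       % snapshot angle at t = t0
p      = x(:, 1) .* [exp(1i*theta), exp(-1i*theta)];   % Eq. (95); map to physical coordinates via W_eps(p, Omega*t0)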

We also compare these results with global computational techniques involving numerical continuation of the periodic response via collocation, spectral and shooting-based approximations. While the local manifold computations we have discussed would benefit greatly from parallel computing (see Remarks 2 and 4), in this work we refrain from any parallel computations for a fair comparison of computation times, since the other tools we employ do not use parallelization. We perform all computations using openly available MATLAB packages on MATLAB version 2019b.

6.1 Finite element-type oscillator chain

As a first example, we consider the nonlinear oscillator chain example used by Jain et al. [42], whose computational implementation can be made to resemble a finite element assembly, with each of the nonlinear springs treated as an element.

Fig. 7

a Poincaré section of the non-autonomous SSM computed around the second mode (eigenvalues (102)) for \(\Omega = 0.6158\) rad/s, where the reduced dynamics in polar coordinates \(\rho \), \(\theta \) is obtained by simulating the ROM (89) (see Eq. (95)). The fixed points in blue and red directly provide us the stable and unstable periodic orbits on the FRC. b FRC obtained via local computations of SSM at \(\mathcal {O}(5)\) agrees with those obtained using global continuation methods involving the harmonic balance method (NLvib [46]) and collocation (coco [13]); the computation is performed for \(n=10\) (see Table 2 for computation times); and the plot shows the displacement amplitude for the \(5^{\mathrm {th}}\) (middle) degree of freedom

The equations of motion for the n-mass oscillator chain, shown in Fig. 6, are given by system (94) with

$$\begin{aligned} \mathbf {M}=m\mathbf {I}_{n},\quad \mathbf {K}=k\mathbf {L}_{n},\quad \mathbf {C}=c\mathbf {L}_{n},\quad \mathbf {f}(\mathbf {x}) =\kappa \mathbf {f}_{3}\mathbf {x}^{\otimes 3}, \end{aligned}$$
(96)

where \(\mathbf {L}_{n}\) is a Toeplitz matrix given as

$$\begin{aligned} \mathbf {L}_{n}=\left[ \begin{array}{ccccc} 2 &{} -1\\ -1 &{} 2 &{} -1\\ &{} \ddots &{} \ddots &{} \ddots \\ &{} &{} -1 &{} 2 &{} -1\\ &{} &{} &{} -1 &{} 2 \end{array}\right] \in \mathbb {R}^{n\times n}, \end{aligned}$$
(97)

and \(\mathbf {f}_{3}\in \mathbb {R}^{n\times n^{3}}\) is a sparse cubic coefficients array such that

$$\begin{aligned} \mathbf {f}_{3}\mathbf {x}^{\otimes 3}=\left[ \begin{array}{c} x_{1}^{3}-(x_{2}-x_{1})^{3}\\ (x_{2}-x_{1})^{3}-(x_{3}-x_{2})^{3}\\ \vdots \\ (x_{n}-x_{n-1})^{3}-x_{n}^{3} \end{array}\right] . \end{aligned}$$
(98)
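Assembling the model data (96)-(98) in MATLAB takes only a few lines. The sketch below uses the parameter values of Eq. (99) given next and implements the cubic force (98) as a function handle acting on an n-by-1 vector x; it is illustrative and not the implementation of [42].

n  = 10;  m = 1;  k = 1;  c = 0.1;  kappa = 0.3;
Ln = spdiags(ones(n,1) * [-1 2 -1], -1:1, n, n);                 % Toeplitz matrix (97)
M  = m * speye(n);   K = k * Ln;   C = c * Ln;                   % Eq. (96)
fnl = @(x) kappa * ([x(1); diff(x)].^3 - [diff(x); x(n)].^3);    % cubic force f(x), Eq. (98)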

We choose the parameter values

$$\begin{aligned}&m=1,\quad k=1,\quad c=0.1,\quad \kappa =0.3,\nonumber \\&\mathbf {f}^{ext} (\Omega t)=\mathbf {f}_{0}\cos (\Omega t),\quad \epsilon =0.1, \end{aligned}$$
(99)

where the forcing frequency \(\Omega \) lies in the range 0.23–1 rad/s and the forcing shape

$$\begin{aligned} \mathbf {f}_{0}= & {} [-0.386,-0.587,-0.521,-0.243,0.095,0.335,\nonumber \\&0.402,0.323, 0.188,0.075]^{\top } \end{aligned}$$
(100)

are chosen to excite the first three modes of the system. For the chosen parameter values, the pairs of eigenvalues associated with the first three modes are

$$\begin{aligned} \lambda _{1,2}&=-0.0041\pm 0.2846\mathrm {i}, \end{aligned}$$
(101)
$$\begin{aligned} \lambda _{3,4}&=-0.0159\pm 0.5632\mathrm {i}, \end{aligned}$$
(102)
$$\begin{aligned} \lambda _{5,6}&=-0.0345\pm 0.8301\mathrm {i}. \end{aligned}$$
(103)

For \(\Omega \in [0.23,1]\), these three pairs of eigenvalues (101)-(103) are nearly resonant with \(\Omega \) as per approximations (86) with \(\eta =1\). We subdivide the frequency range into three intervals around each of these near-resonant eigenvalue pairs. We then perform SSM computations up to quintic order to approximate the near-resonant FRC via Lemma 2 for each pair of near-resonant eigenvalues.

Table 2 Computation times for obtaining the FRC depicted in Fig. 7

Figure 7a illustrates the Poincaré section of the non-autonomous SSM computed around the second mode with eigenvalues (102) and near-resonant forcing frequency \(\Omega = 0.6158\) rad/s (period \(T = 2\pi /\Omega \)). Each curve of the reduced dynamics shown in Fig. 7a represents iterates of the period T-Poincaré map. In particular, any hyperbolic fixed points correspond to T-periodic orbits of the full system with the same hyperbolicity according to Lemma 2. Hence, we directly obtain unstable and stable periodic orbits on the FRC by investigating the stable (blue) and unstable (red) fixed points of the reduced dynamics on the SSM for different values of \(\Omega \), as shown in Fig. 7b.

Figure 7b further shows that the FRC obtained from these SSM computations agrees with the spectral (harmonic balance) and collocation-based approximations. We perform the harmonic balance approximations using an openly available MATLAB package, NLvib [46], which implements an alternating frequency–time (AFT) approach. We choose 5 harmonics for the approximations in the frequency domain and \(2^{7}\) time steps for the approximations in the time domain. For performing collocation-based continuation, we use the \(\texttt {po}\)-toolbox of \(\textsc {coco}\) [13] with default settings, adaptive refinement of the collocation mesh, and the one-dimensional atlas algorithm.

The total computation time consumed in model generation, coefficient assembly and computation of all eigenvalues of this system was less than 1 second on a Windows-PC with Intel Core i7-4790 CPU @ 3.60GHz and 32 GB RAM. We compare the computation times for obtaining the FRC using different methods in Table 2.

In this example, the SSM-based analytic approximation to FRC using Lemma 2 involves the computation of the \(\mathcal {O}(5)\)-autonomous SSM three times (once around each resonant pair). The leading-order non-autonomous SSM computation needs to be repeated for each \(\Omega \) in the frequency span [0.23, 1]. We emphasize that while each of these SSM computations is parallelizable (see Remark 2) in contrast to continuation-based global methods, we have reported computation times via a sequential implementation in Table 2. As expected, we observe from Table 2 that local approximations to SSMs are a much faster means to compute FRCs in comparison with global techniques that involve collocation or spectral (harmonic balance) approximations.

6.2 Von Kármán Beam

We now consider a finite element model of a geometrically nonlinear, cantilevered von Kármán beam (Jain et al. [41]), illustrated in Fig. 8a. The geometric and material properties of the beam are given in Table 3. The equations of motion are again given in the general form (94). This model is programmed in the finite element solver [37], which directly provides us the matrices \(\mathbf {M},\mathbf {C},\mathbf {K}\) and the coefficients of the nonlinearity \(\mathbf {f}\) in physical coordinates. We discretize this model using 10 elements resulting in \(n=30\) degrees of freedom.

Fig. 8

The schematic of a two-dimensional von Kármán beam model (Jain et al. [41]) with height h and length L, initially aligned with the \(x_{1}\) axis, see Table 3 for geometric and material properties

Table 3 Physical parameters of the von Kármán beam model (see Fig. 8a)
Fig. 9

Poincaré sections of the non-autonomous SSM computed around the first mode (eigenvalues (104)) of the beam for near-resonant forcing frequency \(\Omega = 5.4\) rad/s. The projection of the SSM onto the axial degree of freedom b located at the tip of the beam shows significant curvature in contrast to that onto the transverse degree of freedom (a), which appears relatively flat. The reduced dynamics in polar coordinates \(\rho \), \(\theta \) is obtained by simulating the ROM (89) (see Eq. (95)); the fixed points in blue and red directly provide us the stable and unstable periodic orbits on the FRC for different values of \(\Omega \) (see Fig. 10)

The eigenvalue pair associated with the first mode of vibration of the beam is given by

$$\begin{aligned} \lambda _{1,2}=-0.0019\pm 5.1681\mathrm {i}, \end{aligned}$$
(104)

and the external forcing is chosen as

$$\begin{aligned} \mathbf {f}^{ext}(\Omega t)=\mathbf {f}_{0}\cos (\Omega t),\quad \epsilon =10^{-3}, \end{aligned}$$
(105)

where \(\mathbf {f}_{0}\) represents a spatially uniform forcing vector with transverse forcing magnitude of 0.5N/m across the length of the beam. We choose the forcing frequency \(\Omega \) in the range 4.1-6.2 rad/s for which the eigenvalue pair \(\lambda _{1,2}\) (104) is nearly resonant with \(\Omega \) (see (86)). We then perform \(\mathcal {O}(5)\) SSM computations to approximate the near-resonant FRC around the first natural frequency via Lemma 2.

Figure 9 illustrates the Poincaré section of the non-autonomous SSM computed around the first mode with eigenvalues (104) and near-resonant forcing frequency \(\Omega = 5.4\) rad/s (period \(T = 2\pi /\Omega \)). We observe in Fig. 9a that the graph of the manifold is flat along the transverse degree of freedom, which gives the impression that there is no significant deformation of the modal subspace under the addition of nonlinearities in this system. At the same time, however, Fig. 9b depicts a significant curvature of the SSM along the axial degree of freedom, which is related to the bending–stretching coupling introduced by the geometric nonlinearities in any beam model. Hence, we note that the invariance computation automatically accounts for the important physical effects arising due to nonlinearities in the form of the parametrizations \(\mathbf {W}\) and \(\mathbf {R}\) of the manifold and its reduced dynamics. These effects, otherwise, are typically captured by a heuristic projection of the governing equation onto carefully selected modes (see Jain et al. [41], Buza et al. [5] for a discussion).

Finally, in Fig. 10, we obtain unstable and stable periodic orbits on the FRC by investigating the stable (blue) and unstable (red) fixed points of the reduced dynamics on the SSM for different values of \(\Omega \). Figure 10 also shows that the FRC obtained via local SSM computation closely approximates the FRCs obtained using various global continuation techniques: collocation approximations via \(\textsc {coco}\) [13] and harmonic balance approximations via NLvib [46]. These continuation runs were performed with the same settings as in the previous example.

Fig. 10

FRCs of the von Kármán beam model (see Fig. 8) with \(n=30\) degrees of freedom under harmonic, spatially uniform transverse loading (see Eq. (105)). The FRC obtained via local computations of SSM at \(\mathcal {O}(5)\) agrees with those obtained using global continuation methods involving the harmonic balance method (NLvib [46]) and collocation (\(\textsc {coco}\) [13]); the plot shows the displacement amplitude in the \(x_{3}\) direction at the tip of the beam (see Table 4 for computation times)

Table 4 Computation time for obtaining the FRCs depicted in Fig. 10

Once again, the total computation time spent on model generation, coefficient assembly and computing the first 10 eigenvalues of this system was less than 1 second on a Windows-PC with Intel Core i7-4790 CPU @ 3.60GHz and 32 GB RAM. Table 4 records the computation times to obtain FRCs via each of these methods. For the collocation-based response computation via coco [13], we also employ the atlas-\(k\hbox {d}\) algorithm (see Dankowicz et al. [14]) in addition to the default atlas-1d algorithm used in the previous example. Atlas-kd allows the user to choose the subspace of the continuation variables along which the continuation step size h is measured. Here, we choose this subset to be \( ({z}_{out}(0), \Omega , T) \), where \( T = \frac{2\pi }{\Omega } \) is the time period of the periodic response and \( {z}_{out} \) is the response at the output degree of freedom shown in Fig. 10. We allow the continuation step size to vary adaptively between \( h_{min} = 10^{-5} \) and \( h_{max} = 50 \) and set the maximum residual norm for the predictor step to 10. We found these settings to be optimal for this atlas-kd run, since relaxing these tolerances further has no effect on the continuation speed. Once again, the computation times in Table 4 indicate orders-of-magnitude higher speed in reliably approximating the FRC via local SSM computations in comparison with global techniques that involve collocation or spectral approximations.

6.3 Shallow-arch structure

Next, we consider a finite element model of a geometrically nonlinear shallow arch structure, illustrated in Fig. 11a (Jain and Tiso [39]).

Fig. 11

a schematic of a shallow-arch structure (Jain and Tiso [39]), see Table 5 for geometric and material properties. This plate is simply supported at the two opposite edges aligned along the y-axis. b The finite element mesh (containing 400 elements, 1320 degrees of freedom) deformed along first bending mode having undamped natural frequency of approximately 23.47 Hz

The geometrical and material properties of this curved plate are given in Table 5. The plate is simply supported at the two opposite edges aligned along the y-axis in Fig. 11a. The model is discretized using flat, triangular shell elements and contains 400 elements, resulting in \(n=1320\) degrees of freedom. The open-source finite element code [37] directly provides us the matrices \(\mathbf {M},\mathbf {C},\mathbf {K}\) and the coefficients of the nonlinearity \(\mathbf {f}\) in the equations of motion (94).

Table 5 Geometrical and material parameters of the shallow-arch structure in Fig. 11a

The first mode of vibration of this structure is shown in Fig. 11b, and the corresponding eigenvalue pair is given by

$$\begin{aligned} \lambda _{1,2}=-0.29\pm 147.45\mathrm {i}. \end{aligned}$$
(106)

The external forcing is again given by

$$\begin{aligned} \mathbf {f}^{ext}(\Omega t)=\mathbf {f}_{0}\cos (\Omega t),\quad \epsilon =0.1, \end{aligned}$$
(107)

where \(\mathbf {f}_{0}\) represents a concentrated load vector in the z-direction with a magnitude of 100 N at the mesh node located at \(x=\frac{L}{2},y=\frac{H}{2}\) in Fig. 11a. We choose the forcing frequency \(\Omega \) in the range 133–162 rad/s for which the eigenvalue pair \(\lambda _{1,2}\) is nearly resonant with \(\Omega \) (see (86)).

Fig. 12

a Poincaré section of the non-autonomous SSM of the shallow-arch structure (see Fig. 11) computed around the first mode (eigenvalues (106)) for near-resonant forcing frequency \(\Omega = 146.49\) rad/s. The reduced dynamics in polar coordinates \(\rho \), \(\theta \) is obtained by simulating the ROM (89) (see Eq. (95)); the fixed points in blue and red directly provide us the stable and unstable periodic orbits on the FRC b for different values of \(\Omega \). FRCs obtained using local SSM computations at \(\mathcal {O}(3),\mathcal {O}(5)\) and \(\mathcal {O}(7)\) agree with that obtained via global continuation based on the shooting method, which implements the Newmark time integration (see Table 6 for computation times); plots (a) and (b) show the displacements in the x and z-directions at the mesh node located at \(x=\frac{L}{2},y=\frac{H}{2}\) in Fig. 11

Table 6 Computation time for obtaining the FRCs depicted in Fig. 12

We then compute the near-resonant FRC around the first natural frequency via \(\mathcal {O}(3),\mathcal {O}(5),\) and \(\mathcal {O}(7)\) SSM computations using Lemma 2. Once again, Fig. 12a shows the Poincaré section of the non-autonomous SSM for the near-resonant forcing frequency \(\Omega = 146.49\) rad/s, where we directly obtain the unstable (red) and stable (blue) periodic orbits on the FRC as hyperbolic fixed points of the reduced dynamics (89) on the SSM. The three FRCs at \(\mathcal {O}(3),\mathcal {O}(5)\) and \(\mathcal {O}(7)\) appear to converge to the softening response shown in Fig. 12b. Note that we expect a softening behavior in the FRC of shallow arches (see, e.g., Buza et al. [4, 5]).

Due to excessive memory requirements, this FRC could not be computed using collocation approximations via \(\textsc {coco}\) [13] or using harmonic balance approximations via NLvib [46]. Instead, we compare this FRC to another global continuation technique based on the shooting method, which is still feasible (see Introduction).

For shooting, we use the classic Newmark time integration scheme (Newmark [58], see Géradin and Rixen [24] for a review), as the common Runge–Kutta schemes (e.g., \(\texttt {ode45}\) of MATLAB) struggle to converge in structural dynamics problems. We use an open-source toolbox [52], based on the atlas-1d algorithm of \(\textsc {coco}\) [13], for continuation of the periodic solution trajectory obtained via shooting (see Dankowicz et al. [14]). We use a constant time step throughout time integration, which is chosen by dividing the time span \(T=\frac{2\pi }{\Omega }\) into 100 equal intervals. We found this choice of time step to be nearly optimal for this problem, as larger time steps lead to non-quadratic convergence during Newton–Raphson iterations and smaller time steps result in slower computations. The stability of the response is computed by integrating the equations of variation around the converged periodic orbit.

Table 7 Geometrical and material parameters of the aircraft wing structure in Fig. 13a
Fig. 13

a A wing structure with NACA 0012 airfoil stiffened with ribs (Jain et al. [40]), see Table 7 for geometric and material properties. b The finite element mesh is illustrated after removing the skin panels. The wing is cantilevered at the \(z=0\) plane. The mesh contains 49,968 elements which results in \(n=133,920\) degrees of freedom

Fig. 14

a Poincaré section of the non-autonomous SSM of the aircraft wing structure with \( n= \)133,920 degrees of freedom (see Fig. 13) computed around the first mode (eigenvalues (108)) for near-resonant forcing frequency \(\Omega = 29.8\) rad/s. The reduced dynamics in polar coordinates \(\rho \), \(\theta \) is obtained by simulating the ROM (89) (see Eq. (95)); the fixed points in blue and red directly provide us the stable and unstable periodic orbits on the FRC b for different values of \(\Omega \). FRCs obtained using local SSM computations at \(\mathcal {O}(3),\mathcal {O}(5)\) and \(\mathcal {O}(7)\) converge toward a hardening response; plots a and b show the displacements in the x and y-directions at the tip-node 1 shown in Fig. 13b (see Table 8 for the computational resources consumed)

The total time consumed in model generation and coefficient assembly was 33 seconds on a Windows-PC with Intel Core i7-4790 CPU @ 3.60GHz and 32 GB RAM. This includes the time spent in computing the first 10 eigenvalues of this system, which took less than 1 second. Figure 12b shows that this shooting-based global continuation agrees with the SSM-based approximation to the FRC. Obtaining this FRC via the shooting method, however, takes more than 2 days, in contrast to the SSM-based approximation using the proposed computational methodology, which takes less than a minute even at \(\mathcal {O}(7)\), as shown in Table 6.

6.4 Aircraft Wing

As a final example, we consider the finite element model of a geometrically nonlinear aircraft wing originally presented by Jain et al. [40] (see Fig. 13). The wing is cantilevered at one of its ends, and the structure is meshed using flat triangular shell elements with 6 degrees of freedom per node. With 49,968 elements and 133,920 degrees of freedom, this model provides a physically relevant and computationally realistic problem that is beyond the reach of global continuation techniques based on collocation, spectral, and shooting methods, as demonstrated by the previous examples. The open-source finite element code [37] directly provides the matrices \(\mathbf {M},\mathbf {K}\) and the coefficients of the nonlinearity \(\mathbf {f}\) in the equations of motion (94) (Table 7).

For assembling coefficients for a problem of this size, we used the Euler supercomputing cluster at ETH Zurich. The total time consumed in model generation and coefficient assembly was 1 hour, 21 minutes and 38 seconds without any parallelization. This includes the time taken to compute the first 10 eigenvalues of this system, which was approximately 5 seconds. The main bottleneck was the memory consumption during the assembly of the coefficients of the nonlinearity \(\mathbf {f}\), where the peak memory consumption was around 183 GB. Once assembled, however, these coefficients occupy only about 1.8 GB of RAM. This excessive memory consumption during assembly is due to a suboptimal assembly procedure for sparse tensors [1]. To avoid such bottlenecks, parallel computing and distributed-memory architectures would need to be employed; these are currently not available in the packages we have used.

Table 8 Computation time and memory requirements for obtaining the three FRCs depicted in Fig. 14

In this example, we choose Rayleigh damping (see, e.g., Géradin and Rixen [24]), which is commonly employed in structural dynamics applications and constructs the damping matrix \(\mathbf {C}=\alpha \mathbf {M}+\beta \mathbf {K}\) as a linear combination of the mass and stiffness matrices. The constants \(\alpha ,\beta \) are chosen to ensure a damping ratio of 0.4% along the first two vibration modes (see the standard relations recalled below). The eigenvalue pair associated with the first mode of vibration is given by

$$\begin{aligned} \lambda _{1,2}=-0.0587\pm 29.3428\mathrm {i}. \end{aligned}$$
(108)
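
For reference, choosing \(\alpha ,\beta \) to achieve a prescribed damping ratio \(\zeta \) at two target angular frequencies \(\omega _{1},\omega _{2}\) amounts to solving the standard Rayleigh-damping relations

$$\begin{aligned} \zeta =\frac{1}{2}\left( \frac{\alpha }{\omega _{i}}+\beta \omega _{i}\right) ,\quad i=1,2,\qquad \Longrightarrow \qquad \alpha =\frac{2\zeta \omega _{1}\omega _{2}}{\omega _{1}+\omega _{2}},\quad \beta =\frac{2\zeta }{\omega _{1}+\omega _{2}}. \end{aligned}$$

Here \(\zeta =0.004\) and \(\omega _{1}\approx 29.34\) rad/s from (108); the second target frequency is not reported here.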

Once again, we choose harmonic external forcing given by

$$\begin{aligned} \mathbf {f}^{ext}(\Omega t)=\mathbf {f}_{0}\cos (\Omega t),\quad \epsilon =0.01, \end{aligned}$$
(109)

where \(\mathbf {f}_{0}\) represents a vector of concentrated loads at the tip nodes 1 and 2 (see Fig. 13b) in the transverse y-direction, each with a magnitude of 100 N. We choose the forcing frequency \(\Omega \) in the range 26.4–32.3 rad/s, for which the eigenvalue pair \(\lambda _{1,2}\) is nearly resonant with \(\Omega \) (see (86)). We then compute the near-resonant FRCs around the first natural frequency via \(\mathcal {O}(3),\mathcal {O}(5),\) and \(\mathcal {O}(7)\) SSM computations using Lemma 2.
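
As a minimal illustration of this forcing setup (the two DOF indices below are placeholders, since the actual indices of the y-translations of tip nodes 1 and 2 depend on the mesh numbering of the finite element model):

% Sketch of the load vector f_0 and frequency sweep in (109); indices are placeholders.
n = 133920;                        % total number of degrees of freedom
iy1 = 1; iy2 = 7;                  % placeholder indices of the y-translation DOFs of tip nodes 1 and 2
f0 = zeros(n,1);                   % concentrated load vector f_0
f0([iy1, iy2]) = 100;              % 100 N transverse load at each tip node
epsilon = 0.01;                    % forcing amplitude parameter
Omega = linspace(26.4, 32.3, 50);  % rad/s, sweep around Im(lambda_1) = 29.3428 rad/s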

As in the previous examples, Fig. 14a shows the Poincaré section of the non-autonomous SSM for the near-resonant forcing frequency \(\Omega = 29.8\) rad/s. The hyperbolic fixed points of the reduced dynamics (89) on the SSM directly provide the stable (blue) and unstable (red) periodic orbits on the FRC for different values of the forcing frequency \(\Omega \). On a macroscopic level, this wing resembles a cantilevered beam, for which we expect a hardening-type response. Indeed, the three FRCs at \(\mathcal {O}(3),\mathcal {O}(5),\) and \(\mathcal {O}(7)\) converge toward a hardening-type response, as shown in Fig. 14b.
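
To illustrate how such fixed points translate into FRC points, the following MATLAB sketch continues the fixed points of a generic cubic-order, rotating-frame normal form of the reduced dynamics over the frequency sweep; this is not the paper's Eq. (89), and the cubic coefficient and modal forcing amplitude below are placeholders.

% Generic rotating-frame ROM near 1:1 resonance (illustrative, not Eq. (89)):
%   q' = (lam - 1i*Om)*q + gam*q*|q|^2 + epsilon*f,   rho = |q|
lam = -0.0587 + 29.3428i;            % master eigenvalue pair, Eq. (108)
gam = 0.02 + 0.8i;                   % cubic normal-form coefficient (placeholder)
f = 1; epsilon = 0.01;               % modal forcing amplitude (placeholder)
OmRange = linspace(26.4, 32.3, 400); % forcing-frequency sweep
FRC = [];                            % rows: [Omega, rho, isStable]
for Om = OmRange
    d = imag(lam) - Om;              % detuning from exact resonance
    % fixed points satisfy a cubic polynomial in r = rho^2:
    coeffs = [abs(gam)^2, ...
              2*(real(lam)*real(gam) + d*imag(gam)), ...
              real(lam)^2 + d^2, ...
              -(epsilon*f)^2];
    r = roots(coeffs);
    r = real(r(abs(imag(r)) < 1e-10 & real(r) > 0));   % physical (real, positive) roots
    for rk = r.'
        q = -epsilon*f/(lam - 1i*Om + gam*rk);          % fixed point in Cartesian coordinates
        % Jacobian of the 2D (Re q, Im q) vector field at the fixed point
        A = (lam - 1i*Om) + gam*(rk + 2*q*real(q));
        B = 1i*(lam - 1i*Om) + gam*(1i*rk + 2*q*imag(q));
        J = [real(A) real(B); imag(A) imag(B)];
        FRC = [FRC; Om, sqrt(rk), all(real(eig(J)) < 0)]; %#ok<AGROW>
    end
end

Each positive root corresponds to a periodic orbit on the FRC; the sign of the real parts of the Jacobian eigenvalues separates the stable (blue) and unstable (red) branches of Fig. 14a.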

Table 8 reports the computational resources consumed in obtaining these three FRCs. The peaks in memory consumption reported in Table 8 occur during the composition of the nonlinearity (see Eq. (27)). These peaks are short-lived, however, as the average memory consumption during these computations is much lower. We remark that, in the context of finite element applications, these memory peaks can be significantly reduced by implementing the nonlinearity composition at the element level instead of the current implementation at the full-system level. Once again, the use of parallel computing and distributed-memory architectures would be greatly beneficial in this context.

7 Conclusions

In this work, we have reformulated the parametrization method for local approximations of invariant manifolds and their reduced dynamics in the context of high-dimensional nonlinear mechanics problems. In this class of problems, the classically used diagonalization of the system at the linear level is no longer feasible. Instead, we have developed expressions that enable the computation of invariant manifolds and their reduced dynamics in physical coordinates using only the master modes associated with the invariant manifold. Hence, these computations facilitate mathematically rigorous nonlinear model reduction in very high-dimensional problems. A numerical implementation of the proposed computational methodology is available in the open-source MATLAB package SSMTool 2.0 [38], which enables the computation of invariant manifolds in finite element-based discretized problems via an integrated finite element solver [37], as well as bifurcation analysis of the reduced dynamics on these invariant manifolds via its integration with \(\textsc {coco}\) [13].

We have connected this computational methodology to several applications of engineering significance, including the computation of parameter-dependent center manifolds; Lyapunov subcenter manifolds (LSM) and their associated conservative backbone curves; and spectral submanifolds (SSM) and their associated forced response curves (FRCs) in dissipative mechanical systems. We have also demonstrated fast and reliable computations of FRCs via a normal form style parametrization of SSMs in very large mechanical structures, which has been a computationally intractable task for other available approaches.

While our examples focused on applications of two-dimensional SSMs, the automated computation procedure and its numerical implementation [38] can treat higher-dimensional invariant manifolds as well. Specifically, the reduced dynamics on higher-dimensional SSMs can be used for the direct computation of FRCs in internally resonant mechanical systems featuring energy transfer among multiple modes, as will be demonstrated in forthcoming publications (Li et al. [54]; Li and Haller [53]). Furthermore, in the non-autonomous setting, we have restricted our expressions to the leading-order contributions from the forcing. Similar expressions, however, can also be obtained for higher-order terms at the non-autonomous level, which is relevant for the nonlinear analysis of parametrically excited systems. These expressions and the related numerical implementation are currently under development.

Finally, as we have noted, these computations will further benefit from parallelization since the invariance equations can be solved independently for each monomial/Fourier multi-index (see Remarks 2 and 4, and Sect. 6.4). This development is currently underway and will be reported elsewhere.