Koopman Spectrum and Stability of Cascaded Dynamical Systems

The Koopman Operator in Systems and Control

Part of the book series: Lecture Notes in Control and Information Sciences (LNCIS, volume 484)

Abstract

This chapter investigates the behavior of cascaded dynamical systems through the lens of the Koopman operator and, in particular, its so-called principal eigenfunctions. It is shown that there exist perturbation functions for the initial conditions of each component system that make the orbits for the cascaded system and the decoupled component systems have zero asymptotic relative error. This in turn implies that the evolutions are asymptotically equivalent. By analyzing the exact form of the initial condition perturbation functions, the maximum error between the trajectories of the decoupled systems and the cascaded system can be bounded. More colloquially, these results say that cascaded compositions of stable systems are stable. It is also shown that the process of wiring the component systems together in a cascade structure preserves the principal eigenvalues and these principal eigenvalues are preserved between topologically conjugate systems. Thus, the analysis of cascaded systems is reduced to the determination of the principal eigenvalues and eigenfunctions of each component’s Koopman operator and the form of the perturbation functions.


Notes

  1. We use \(\circ t\) instead of t to remind ourselves that these are compositions of nonlinear operators.

  2. In (5.5) and later, we are using the notation that if we have a collection of maps \(f_i : X \rightarrow X_i\), \((i=1,\ldots ,n)\), the vector-valued map \(\mathbf {f} : X \rightarrow X_1\times \cdots \times X_n\) defined by \(\mathbf {f}(\mathbf {x}) := (f_1(\mathbf {x}),\ldots ,f_n(\mathbf {x}))\) can be written as \(\mathbf {f}(\mathbf {x}) \equiv (f_1,\ldots , f_n)(\mathbf {x})\).

  3. Note that \(\mathbf {w}_{i,s} := (\mathbf {e}_{i,s}^*\mathbf {V}_i^{-1})^{*} = (\mathbf {V}_{i}^{*})^{-1} \mathbf {e}_{i,s}\) is the sth dual basis vector in system i; that is \(\langle {\mathbf {v}_{i,t}},{\mathbf {w}_{i,s}}\rangle _{\mathbb {C}^{d_i}} = \mathbf {w}_{i,s}^* \mathbf {v}_{i,t} = \delta _{s,t}\), where \(\mathbf {v}_{i,t}\) is the tth eigenvector of \(\mathbf {L}_i\).

  4. We take the empty sum \(\sum _{j=1}^{0}\) to be 0.

  5. We choose seven layers here merely for display purposes—graphs for the behavior of the six downstream component systems are easily displayed in a 2-by-3 table.

References

  1. Banaszuk, A., Fonoberov, V.A., Frewen, T.A., Kobilarov, M., Mathew, G., Mezić, I., Pinto, A., Sahai, T., Sane, H., Speranzon, A., Surana, A.: Scalable approach to uncertainty quantification and robust design of interconnected dynamical systems. Annu. Rev. Control 35(1), 77–98 (2011)

  2. Budišić, M., Mohr, R., Mezić, I.: Applied Koopmanism. Chaos 22(4), 047510 (2012)

  3. Callier, F., Chan, W., Desoer, C.: Input-output stability theory of interconnected systems using decomposition techniques. IEEE Trans. Circuits Syst. 23(12), 714–729 (1976)

  4. Heersink, B., Warren, M.A., Hoffmann, H.: Dynamic mode decomposition for interconnected control systems (2017). arXiv.org

  5. Lan, Y., Mezić, I.: On the architecture of cell regulation networks. BMC Syst. Biol. 5(1), 37 (2011)

  6. Lasota, A., Mackey, M.C.: Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Applied Mathematical Sciences, vol. 97, 2nd edn. Springer, Berlin (1994)

  7. Mauroy, A., Mezić, I.: Global stability analysis using the eigenfunctions of the Koopman operator. IEEE Trans. Autom. Control 61(11), 3356–3369 (2016)

  8. Mesbahi, A., Haeri, M.: Conditions on decomposing linear systems with more than one matrix to block triangular or diagonal form. IEEE Trans. Autom. Control 60(1), 233–239 (2015)

  9. Mezić, I.: Coupled nonlinear dynamical systems: asymptotic behavior and uncertainty propagation. In: 43rd IEEE Conference on Decision and Control, pp. 1778–1783. Atlantis, Paradise Island, Bahamas (2004)

  10. Mezić, I.: Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dyn. 41, 309–325 (2005)

  11. Mezić, I., Banaszuk, A.: Comparison of systems with complex behavior: spectral methods. In: 39th IEEE Conference on Decision and Control, pp. 1224–1231. Sydney, Australia (2000)

  12. Mezić, I., Banaszuk, A.: Comparison of systems with complex behavior. Phys. D: Nonlinear Phenom. 197(1), 101–133 (2004)

  13. Michel, A.N.: On the status of stability of interconnected systems. IEEE Trans. Autom. Control 28(6), 639–653 (1983)

  14. Mohr, R., Mezić, I.: Construction of eigenfunctions for scalar-type operators via Laplace averages with connections to the Koopman operator, 1–25 (2014). arXiv.org

  15. Mohr, R., Mezić, I.: Koopman principal eigenfunctions and linearization of diffeomorphisms (2016). arXiv.org

  16. Pichai, V., Sezer, M.E., Siljak, D.D.: A graph-theoretic algorithm for hierarchical decomposition of dynamic systems with applications to estimation and control. IEEE Trans. Syst. Man Cybern. 13(2), 197–207 (1983)

  17. Ryan, R.A.: Introduction to Tensor Products of Banach Spaces. Springer Monographs in Mathematics. Springer, London (2002)

  18. Shen-Orr, S.S., Milo, R., Mangan, S., Alon, U.: Network motifs in the transcriptional regulation network of Escherichia coli. Nature Genet. 31(1), 64–68 (2002)
Acknowledgements

This research was partially funded under a subcontract from HRL Laboratories, LLC under DARPA contract N66001-16-C-4053 and additionally funded by DARPA contract HR0011-16-C-0116.

  The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Distribution Statement “A”: Approved for Public Release, Distribution Unlimited.

Corresponding author

Correspondence to Ryan Mohr.

5.6 Appendix

Here we collect the technical proofs of the preceding results. This appendix can be skipped on a first reading.

5.6.1 Proof of Theorem 5.2: 0 Asymptotic Relative Error—Linear, Chained Cascades

The first lemma gives the general solution for the ith level of the chained linear cascade system.

Lemma 5.1

For all \(i=1,\ldots , n\) and \(t \ge 0\), denote by \(\mathbf {x}_i(t)\) the solution \(\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots ,\mathbf {x}_n)\) of the ith level of (5.31). For \(i\ge 2\), the general solution satisfies

$$\begin{aligned} \mathbf {x}_{i}(t) \equiv \varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n) = \mathbf {L}_{i}^{t} \mathbf {x}_i + \mathbf {L}_{i}^{t-1} \mathbf {V}_{i} \sum _{k=0}^{t-1} \varLambda _{i}^{-k} \mathbf {V}_{i}^{-1} \mathbf {C}_{i,i-1} \mathbf {x}_{i-1}(k). \end{aligned}$$
(5.55)

Proof

Repeatedly using (5.31), we have

$$\begin{aligned} \mathbf {x}_{i}(t)&= \mathbf {L}_{i} \mathbf {x}_{i}(t-1) + \mathbf {C}_{i,i-1}\mathbf {x}_{i-1}(t-1) \\&= \mathbf {L}_{i} \left[ \mathbf {L}_{i} \mathbf {x}_{i}(t-2) + \mathbf {C}_{i,i-1}\mathbf {x}_{i-1}(t-2) \right] + \mathbf {C}_{i,i-1}\mathbf {x}_{i-1}(t-1) \\&= \mathbf {L}_{i}^{2} \mathbf {x}_{i}(t-2) + \left[ \mathbf {L}_{i} \mathbf {C}_{i,i-1}\mathbf {x}_{i-1}(t-2) + \mathbf {C}_{i,i-1}\mathbf {x}_{i-1}(t-1) \right] \\&~~\vdots \\&= \mathbf {L}_{i}^{t} \mathbf {x}_{i}(0) + \left[ \mathbf {L}_{i}^{t-1} \mathbf {C}_{i,i-1} \mathbf {x}_{i-1}(0) + \mathbf {L}_{i}^{t-2} \mathbf {C}_{i,i-1} \mathbf {x}_{i-1}(1) + \cdots \right. \\&\qquad \qquad \qquad +\left. \mathbf {L}_{i}^{1} \mathbf {C}_{i,i-1} \mathbf {x}_{i-1}(t-2) + \mathbf {C}_{i,i-1} \mathbf {x}_{i-1}(t-1)\right] \\&= \mathbf {L}_{i}^{t} \mathbf {x}_{i}(0) + \sum _{k=0}^{t-1} \mathbf {L}_{i}^{t-1-k} \mathbf {C}_{i,i-1} \mathbf {x}_{i-1}(k) \\&= \mathbf {L}_{i}^{t} \mathbf {x}_{i}(0) + \mathbf {L}_{i}^{t-1}\sum _{k=0}^{t-1} \mathbf {L}_{i}^{-k} \mathbf {C}_{i,i-1} \mathbf {x}_{i-1}(k) . \end{aligned}$$

Replacing \(\mathbf {L}_{i}^{-k}\) with \(\mathbf {V}_{i} \varLambda _{i}^{-k} \mathbf {V}_{i}^{-1}\) in this final expression gives (5.55). \(\square \)
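The closed form can be checked numerically. The following sketch (assuming NumPy; the matrices are hypothetical stand-ins for \(\mathbf {L}_1\), \(\mathbf {L}_2\), and the coupling \(\mathbf {C}_{2,1}\)) iterates a two-level chained cascade directly and compares it against the penultimate line of the derivation above, which requires no invertibility or diagonalizability of \(\mathbf {L}_i\):

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, t_final = 3, 3, 20

# Hypothetical stand-ins for L_1, L_2, and the coupling C_{2,1}
L1 = 0.9 * rng.standard_normal((d1, d1)) / np.sqrt(d1)
L2 = 0.9 * rng.standard_normal((d2, d2)) / np.sqrt(d2)
C21 = rng.standard_normal((d2, d1))

x1_0, x2_0 = rng.standard_normal(d1), rng.standard_normal(d2)

# Direct iteration of the two-level chained cascade (5.31)
x1, x2 = x1_0.copy(), x2_0.copy()
for _ in range(t_final):
    x1, x2 = L1 @ x1, L2 @ x2 + C21 @ x1

# Closed form: x_2(t) = L_2^t x_2(0) + sum_k L_2^{t-1-k} C_{2,1} x_1(k)
x2_closed = np.linalg.matrix_power(L2, t_final) @ x2_0
for k in range(t_final):
    x1_k = np.linalg.matrix_power(L1, k) @ x1_0
    x2_closed = x2_closed + np.linalg.matrix_power(L2, t_final - 1 - k) @ C21 @ x1_k

print(np.allclose(x2, x2_closed))
```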

Lemma 5.2

Assume Condition 5.1 holds for (5.31) and each \(\mathbf {L}_i\) is diagonalized by \(\mathbf {L}_{i} = \mathbf {V}_{i} \varLambda _{i} \mathbf {V}_{i}^{-1}\). For any matrix \(\mathbf {B} \in \mathbb {C}^{d_i \times d_j}\), the following equality holds for any \(i,j \in \lbrace 1,\ldots , n \rbrace \) with \(i\ne j\):

$$\begin{aligned} \sum _{k=0}^{t-1} \varLambda _{i}^{-k} \mathbf {B} \varLambda _{j}^{k} = \tilde{\mathbf {B}} - \varLambda _{i}^{-t} \tilde{\mathbf {B}} \varLambda _{j}^{t}, \end{aligned}$$
(5.56)

where \(\tilde{\mathbf {B}} \in \mathbb {C}^{d_i \times d_j}\) is the matrix whose \((\ell ,m)\)th entry is given by

$$\begin{aligned}{}[\tilde{\mathbf {B}}]_{\ell ,m} = [ \mathbf {B} ]_{\ell ,m} \left( 1 - \frac{\lambda _{j,m} }{\lambda _{i,\ell }}\right) ^{-1}. \end{aligned}$$
(5.57)

Proof

For any matrix \(\mathbf {M}\), we denote its \((\ell ,m)\)th entry by \([\mathbf {M}]_{\ell ,m}\). The \((\ell ,m)\)th entry of each summand on the left-hand side of (5.56) is given by

$$\begin{aligned}{}[\varLambda _{i}^{-k} \mathbf {B} \varLambda _{j}^{k}]_{\ell ,m}&= \sum _{s=1}^{d_i} [\varLambda _{i}^{-k}]_{\ell ,s} [\mathbf {B} \varLambda _{j}^{k}]_{s,m} \\&= \sum _{s=1}^{d_i} [\varLambda _{i}^{-k}]_{\ell ,s} \sum _{u=1}^{d_j} [\mathbf {B}]_{s,u}[\varLambda _{j}^{k}]_{u,m}. \end{aligned}$$

Since \(\varLambda _j\) is diagonal, \([\varLambda _{j}^{k}]_{u,m} = 0\) for \(u\ne m\) and \([\varLambda _{j}^{k}]_{m,m} = \lambda _{j,m}^{k}\). This gives

$$\begin{aligned}{}[\varLambda _{i}^{-k} \mathbf {B} \varLambda _{j}^{k}]_{\ell ,m}&= \sum _{s=1}^{d_i} [\varLambda _{i}^{-k}]_{\ell ,s} [\mathbf {B}]_{s,m} \lambda _{j,m}^{k}. \end{aligned}$$

Since \(\varLambda _{i}^{-k}\) is diagonal, we have

$$\begin{aligned}{}[\varLambda _{i}^{-k} \mathbf {B} \varLambda _{j}^{k}]_{\ell ,m} = \lambda _{i,\ell }^{-k} [\mathbf {B}]_{\ell ,m} \lambda _{j,m}^k = [\mathbf {B}]_{\ell ,m} \left( \frac{\lambda _{j,m}}{\lambda _{i,\ell }}\right) ^k. \end{aligned}$$
(5.58)

Summing over \(k=0,\ldots , t-1\) gives

$$\begin{aligned} \sum _{k=0}^{t-1} [\varLambda _{i}^{-k} \mathbf {B} \varLambda _{j}^{k}]_{\ell ,m}&= \sum _{k=0}^{t-1} [\mathbf {B}]_{\ell ,m} \left( \frac{\lambda _{j,m}}{\lambda _{i,\ell }}\right) ^k \\&= [\mathbf {B}]_{\ell ,m} \frac{1 - \left( \frac{\lambda _{j,m}}{\lambda _{i,\ell }}\right) ^t}{1-\left( \frac{\lambda _{j,m}}{\lambda _{i,\ell }}\right) } \\&= [\tilde{\mathbf {B}}]_{\ell ,m} - [\tilde{\mathbf {B}}]_{\ell ,m} \left( \frac{\lambda _{j,m}}{\lambda _{i,\ell }}\right) ^t. \end{aligned}$$

Using (5.58), but with \(\mathbf {B}\) and k replaced by \(\tilde{\mathbf {B}}\) and t, respectively, we get

$$\begin{aligned}{}[\tilde{\mathbf {B}}]_{\ell ,m} \left( \frac{\lambda _{j,m}}{\lambda _{i,\ell }}\right) ^t = [\varLambda _{i}^{-t} \tilde{\mathbf {B}} \varLambda _{j}^{t}]_{\ell ,m}. \end{aligned}$$

Therefore,

$$\begin{aligned} \left[ \Big ( \sum _{k=0}^{t-1} \varLambda _{i}^{-k} \mathbf {B} \varLambda _{j}^{k} \Big )\right] _{\ell ,m}&= \sum _{k=0}^{t-1} [\varLambda _{i}^{-k} \mathbf {B} \varLambda _{j}^{k}]_{\ell ,m} \\&= [\tilde{\mathbf {B}}]_{\ell ,m} - [\tilde{\mathbf {B}}]_{\ell ,m} \left( \frac{\lambda _{j,m}}{\lambda _{i,\ell }}\right) ^t \\&= [\tilde{\mathbf {B}}]_{\ell ,m} - [\varLambda _{i}^{-t} \tilde{\mathbf {B}} \varLambda _{j}^{t}]_{\ell ,m} \\&= [\tilde{\mathbf {B}} -\varLambda _{i}^{-t} \tilde{\mathbf {B}} \varLambda _{j}^{t}]_{\ell ,m}. \end{aligned}$$

This is equivalent to (5.56). \(\square \)
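Identity (5.56) is elementary but easy to get wrong by an index. A minimal numerical sketch (assuming NumPy; the spectra and matrix \(\mathbf {B}\) are arbitrary, chosen so that generically \(\lambda _{j,m} \ne \lambda _{i,\ell }\)):

```python
import numpy as np

rng = np.random.default_rng(1)
di, dj, t = 4, 3, 7

# Arbitrary complex spectra, generically non-resonant
lam_i = rng.uniform(0.5, 1.5, di) * np.exp(1j * rng.uniform(0, 2 * np.pi, di))
lam_j = rng.uniform(0.5, 1.5, dj) * np.exp(1j * rng.uniform(0, 2 * np.pi, dj))
B = rng.standard_normal((di, dj))

# Left-hand side of (5.56): sum_k Lambda_i^{-k} B Lambda_j^k
lhs = sum(np.diag(lam_i ** (-k)) @ B @ np.diag(lam_j ** k) for k in range(t))

# B-tilde from (5.57): [B]_{l,m} (1 - lambda_{j,m}/lambda_{i,l})^{-1}
Bt = B / (1.0 - lam_j[None, :] / lam_i[:, None])

# Right-hand side of (5.56)
rhs = Bt - np.diag(lam_i ** (-t)) @ Bt @ np.diag(lam_j ** t)

print(np.allclose(lhs, rhs))
```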

Lemma 5.3

For each \(i=2,\ldots ,n\), the solution of (5.31) is

$$\begin{aligned} \varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots ,\mathbf {x}_n) = \sum _{j=1}^{i} (-1)^{i-j} \mathbf {D}_{i,j} \mathbf {L}_{j}^{t} \mathsf {pert}_{j}(\mathbf {x}_1,\ldots , \mathbf {x}_{j}), \end{aligned}$$
(5.59)

where

$$\begin{aligned} \mathbf {D}_{i,i}&= \mathbf {I}_{d_i}&i\in \lbrace 1,\ldots ,n \rbrace ,\end{aligned}$$
(5.60)
$$\begin{aligned} \mathbf {D}_{i,j}&= \mathbf {L}_{i}^{-1} \mathbf {V}_{i} \tilde{\mathbf {C}}_{i,j} \mathbf {V}_{j}^{-1}&i\in \lbrace 2,\ldots ,n \rbrace , j\in \lbrace 1,\ldots , i-1 \rbrace , \end{aligned}$$
(5.61)

and the matrix \(\tilde{\mathbf {C}}_{i,j} \in \mathbb {C}^{d_{i} \times d_{j}}\) has elements

$$\begin{aligned}{}[\tilde{\mathbf {C}}_{i,j}]_{\ell ,m} = \left[ \mathbf {V}_{i}^{-1} \mathbf {C}_{i,i-1} \mathbf {D}_{i-1,j} \mathbf {V}_{j}\right] _{\ell ,m} \left( 1 - \frac{\lambda _{j,m}}{\lambda _{i,\ell }} \right) ^{-1}, \qquad&i\in \lbrace 2,\ldots ,n \rbrace , \\&j\in \lbrace 1,\ldots , i-1 \rbrace . \nonumber \end{aligned}$$
(5.62)

The perturbation functions \(\mathsf {pert}_{i} : \mathbb {C}^{d_1} \times \cdots \times \mathbb {C}^{d_i} \rightarrow \mathbb {C}^{d_i}\) are multilinear maps defined inductively by

$$\begin{aligned} \mathsf {pert}_{1}(\mathbf {x}_1)&= \mathbf {x}_1 \end{aligned}$$
(5.63)
$$\begin{aligned} \mathsf {pert}_{i}(\mathbf {x}_1,\ldots , \mathbf {x}_i)&= \mathbf {x}_i + \sum _{j=1}^{i-1} (-1)^{i-1-j} \mathbf {D}_{i,j} \mathsf {pert}_{j}(\mathbf {x}_1, \ldots , \mathbf {x}_{j})&i\in \lbrace 2,\ldots ,n \rbrace . \end{aligned}$$
(5.64)

Proof

We prove the result using induction. First note that the solution for \(\varPi _1\circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots ,\mathbf {x}_n)\) can be written as

$$\begin{aligned} \mathbf {x}_1(t) = \varPi _1\circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots ,\mathbf {x}_n) = \mathbf {L}_{1}^{t} \mathbf {x}_1 \equiv \mathbf {D}_{1,1} \mathbf {L}_{1}^{t} \mathsf {pert}_1(\mathbf {x}_1). \end{aligned}$$
(5.65)

Seed step: Consider \(\mathbf {x}_2(t) = \varPi _2 \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots ,\mathbf {x}_n)\). By Lemma 5.1, Eq. (5.55), this is

$$\begin{aligned} \mathbf {x}_2(t)&= \mathbf {L}_{2}^{t} \mathbf {x}_2 + \mathbf {L}_{2}^{t-1} \mathbf {V}_{2} \sum _{k=0}^{t-1} \varLambda _{2}^{-k} \mathbf {V}_{2}^{-1} \mathbf {C}_{2,1} \mathbf {x}_{1}(k) \nonumber \\&= \mathbf {L}_{2}^{t} \mathbf {x}_2 + \mathbf {L}_{2}^{t-1} \mathbf {V}_{2} \sum _{k=0}^{t-1} \varLambda _{2}^{-k} \mathbf {V}_{2}^{-1} \mathbf {C}_{2,1} \mathbf {D}_{1,1} \mathbf {L}_{1}^{k} \mathsf {pert}_1(\mathbf {x}_1), \end{aligned}$$
(5.66)

where in the second line we have replaced \(\mathbf {x}_1(k)\) with (5.65) for \(t = k\). Using \(\mathbf {L}_{1}^{k} = \mathbf {V}_{1} \varLambda _{1}^{k}\mathbf {V}_{1}^{-1}\) in the second line gives

$$\begin{aligned} \mathbf {x}_2(t) = \mathbf {L}_{2}^{t} \mathbf {x}_2 + \mathbf {L}_{2}^{t-1} \mathbf {V}_{2} \left( \sum _{k=0}^{t-1} \varLambda _{2}^{-k} \mathbf {V}_{2}^{-1} \mathbf {C}_{2,1} \mathbf {D}_{1,1} \mathbf {V}_{1} \varLambda _{1}^{k}\right) \mathbf {V}_{1}^{-1} \mathsf {pert}_1(\mathbf {x}_1). \end{aligned}$$
(5.67)

Lemma 5.2, (5.56), with \(\mathbf {B} \equiv \mathbf {V}_{2}^{-1} \mathbf {C}_{2,1} \mathbf {D}_{1,1} \mathbf {V}_{1}\) gives that

$$\begin{aligned} \left( \sum _{k=0}^{t-1} \varLambda _{2}^{-k} \mathbf {V}_{2}^{-1} \mathbf {C}_{2,1} \mathbf {D}_{1,1} \mathbf {V}_{1} \varLambda _{1}^{k}\right) = \tilde{\mathbf {C}}_{2,1} - \varLambda _{2}^{-t} \tilde{\mathbf {C}}_{2,1} \varLambda _{1}^{t}, \end{aligned}$$
(5.68)

where the elements of \(\tilde{\mathbf {C}}_{2,1}\) are given as

$$\begin{aligned}&[\tilde{\mathbf {C}}_{2,1}]_{\ell ,m} = [ \mathbf {V}_{2}^{-1} \mathbf {C}_{2,1} \mathbf {D}_{1,1} \mathbf {V}_{1} ]_{\ell ,m} \Big (1 - \frac{\lambda _{1,m}}{\lambda _{2,\ell }} \Big )^{-1},&\ell \in \lbrace 1,\ldots ,d_2 \rbrace , m\in \lbrace 1,\ldots ,d_1 \rbrace . \end{aligned}$$
(5.69)

Equation (5.69) is the same as (5.62) for \(i=2\). Using (5.68) in (5.67) gives

$$\begin{aligned} \mathbf {x}_2(t)&= \mathbf {L}_{2}^{t} \mathbf {x}_2 + \mathbf {L}_{2}^{t-1} \mathbf {V}_{2} \left( \tilde{\mathbf {C}}_{2,1} - \varLambda _{2}^{-t} \tilde{\mathbf {C}}_{2,1} \varLambda _{1}^{t}\right) \mathbf {V}_{1}^{-1} \mathsf {pert}_1(\mathbf {x}_1) \\&= \mathbf {L}_{2}^{t}\Big ( \mathbf {x}_2 + \mathbf {L}_{2}^{-1} \mathbf {V}_{2} \tilde{\mathbf {C}}_{2,1}\mathbf {V}_{1}^{-1} \mathsf {pert}_1(\mathbf {x}_1)\Big ) - \mathbf {L}_{2}^{t-1} \mathbf {V}_{2}\varLambda _{2}^{-t} \tilde{\mathbf {C}}_{2,1} \varLambda _{1}^{t}\mathbf {V}_{1}^{-1} \mathsf {pert}_1(\mathbf {x}_1). \end{aligned}$$

Since \(\mathbf {L}_{2}^{t-1} \mathbf {V}_{2}\varLambda _{2}^{-t} = \mathbf {L}_{2}^{-1}\mathbf {V}_{2}\) and \(\varLambda _{1}^{t}\mathbf {V}_{1}^{-1} = \mathbf {V}_{1}^{-1} \mathbf {L}_{1}^{t}\), we get

$$\begin{aligned} \mathbf {x}_2(t) = \mathbf {L}_{2}^{t}\Big ( \mathbf {x}_2 + \mathbf {L}_{2}^{-1} \mathbf {V}_{2} \tilde{\mathbf {C}}_{2,1}\mathbf {V}_{1}^{-1} \mathsf {pert}_1(\mathbf {x}_1)\Big ) - \mathbf {L}_{2}^{-1}\mathbf {V}_{2} \tilde{\mathbf {C}}_{2,1} \mathbf {V}_{1}^{-1} \mathbf {L}_{1}^{t} \mathsf {pert}_1(\mathbf {x}_1). \end{aligned}$$
(5.70)

Defining \(\mathbf {D}_{2,1}\) as

$$\begin{aligned} \mathbf {D}_{2,1} = \mathbf {L}_{2}^{-1} \mathbf {V}_{2} \tilde{\mathbf {C}}_{2,1}\mathbf {V}_{1}^{-1} \end{aligned}$$
(5.71)

and \(\mathsf {pert}_{2} : \mathbb {C}^{d_{1}}\times \mathbb {C}^{d_{2}} \rightarrow \mathbb {C}^{d_{2}}\) as

$$\begin{aligned} \mathsf {pert}_{2}(\mathbf {x}_1,\mathbf {x}_2) = \mathbf {x}_2 + \mathbf {D}_{2,1} \mathsf {pert}_1(\mathbf {x}_1) \end{aligned}$$
(5.72)

gives

$$\begin{aligned} \mathbf {x}_{2}(t)&= \mathbf {L}_{2}^{t} \mathsf {pert}_{2}(\mathbf {x}_1,\mathbf {x}_2) - \mathbf {D}_{2,1} \mathbf {L}_1^{t} \mathsf {pert}_{1}(\mathbf {x}_1) \\&= (-1)^{0} \mathbf {D}_{2,2} \mathbf {L}_{2}^{t} \mathsf {pert}_{2}(\mathbf {x}_1,\mathbf {x}_2) - \mathbf {D}_{2,1} \mathbf {L}_1^{t} \mathsf {pert}_{1}(\mathbf {x}_1), \end{aligned}$$

since \(\mathbf {D}_{2,2} = \mathbf {I}_{d_2}\) by definition. Finally,

$$\begin{aligned} \mathbf {x}_{2}(t) = \sum _{s=0}^{1} (-1)^{s} \mathbf {D}_{2,2-s} \mathbf {L}_{2-s}^{t}\mathsf {pert}_{2-s}(\mathbf {x}_1,\ldots , \mathbf {x}_{2-s}). \end{aligned}$$
(5.73)

Using the change of variables \(j = 2 - s\), we have that

$$\begin{aligned} \mathbf {x}_{2}(t) = \sum _{j=1}^{2} (-1)^{2-j} \mathbf {D}_{2,j} \mathbf {L}_{j}^{t}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots , \mathbf {x}_{j}). \end{aligned}$$
(5.74)

Equations (5.71)–(5.74) are equivalent to Eqs. (5.59), (5.61), and (5.64) for \(i=2\).

Induction step: Assume (5.59)–(5.64) hold for all \(j \le i\), where \(i\in \lbrace 2,\ldots , n-1 \rbrace \). We show they hold for \(i+1\) as well.

Write \(\mathbf {x}_{i+1}(t) = \varPi _{i+1}\circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots ,\mathbf {x}_n)\). By Lemma 5.1, Eq. (5.55), the solution is

$$\begin{aligned} \mathbf {x}_{i+1}(t)&= \mathbf {L}_{i+1}^{t} \mathbf {x}_{i+1} + \mathbf {L}_{i+1}^{t-1} \mathbf {V}_{i+1} \sum _{k=0}^{t-1} \varLambda _{i+1}^{-k} \mathbf {V}_{i+1}^{-1} \mathbf {C}_{i+1,i} \mathbf {x}_{i}(k) . \end{aligned}$$

By the induction hypothesis,

$$\begin{aligned} \mathbf {x}_{i}(k) = \sum _{j=1}^{i} (-1)^{i-j} \mathbf {D}_{i,j} \mathbf {L}_{j}^{k} \mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}). \end{aligned}$$
(5.75)

Substituting this into the sum above and interchanging the finite sums gives

$$\begin{aligned} \mathbf {x}_{i+1}(t)&= \mathbf {L}_{i+1}^{t} \mathbf {x}_{i+1} + \sum _{j=1}^{i} (-1)^{i-j} \mathbf {L}_{i+1}^{t-1} \mathbf {V}_{i+1} \sum _{k=0}^{t-1} \varLambda _{i+1}^{-k} \mathbf {V}_{i+1}^{-1} \mathbf {C}_{i+1,i} \mathbf {D}_{i,j} \mathbf {L}_{j}^{k} \mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}). \end{aligned}$$

Since \(\mathbf {L}_{j}\) is diagonalizable, we substitute \(\mathbf {V}_{j} \varLambda _{j}^{k} \mathbf {V}_{j}^{-1}\) for \(\mathbf {L}_{j}^{k}\) in the above equation to get

$$\begin{aligned} \begin{aligned}&\mathbf {x}_{i+1}(t) \\&\quad = \mathbf {L}_{i+1}^{t} \mathbf {x}_{i+1} \\&\qquad + \sum _{j=1}^{i} (-1)^{i-j} \mathbf {L}_{i+1}^{t-1} \mathbf {V}_{i+1} \left( \sum _{k=0}^{t-1} \varLambda _{i+1}^{-k} \mathbf {V}_{i+1}^{-1} \mathbf {C}_{i+1,i} \mathbf {D}_{i,j} \mathbf {V}_{j} \varLambda _{j}^{k} \right) \mathbf {V}_{j}^{-1}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}) . \end{aligned} \end{aligned}$$
(5.76)

By Lemma 5.2, Eq. (5.56), with \(\mathbf {B} \equiv \mathbf {V}_{i+1}^{-1} \mathbf {C}_{i+1,i} \mathbf {D}_{i,j} \mathbf {V}_{j}\), we have

$$\begin{aligned} \sum _{k=0}^{t-1} \varLambda _{i+1}^{-k} \mathbf {V}_{i+1}^{-1} \mathbf {C}_{i+1,i} \mathbf {D}_{i,j} \mathbf {V}_{j} \varLambda _{j}^{k} = \tilde{\mathbf {C}}_{i+1,j} - \varLambda _{i+1}^{-t}\tilde{\mathbf {C}}_{i+1,j} \varLambda _{j}^{t}, \end{aligned}$$
(5.77)

where for \(j \in \lbrace 1,\ldots , i \rbrace \) the matrix \(\tilde{\mathbf {C}}_{i+1,j} \in \mathbb {C}^{d_{i+1}\times d_{j}} \) has elements

$$\begin{aligned} \left[ \tilde{\mathbf {C}}_{i+1,j}\right] _{\ell ,m} = \left[ \mathbf {V}_{i+1}^{-1} \mathbf {C}_{i+1,i} \mathbf {D}_{i,j} \mathbf {V}_{j}\right] _{\ell ,m} \left( 1 - \frac{\lambda _{j,m}}{\lambda _{i+1,\ell }}\right) ^{-1}. \end{aligned}$$
(5.78)

Equation (5.78) is (5.62) for \(i+1\). Plugging (5.77) into (5.76) gives

$$\begin{aligned} \mathbf {x}_{i+1}&(t)\\&= \mathbf {L}_{i+1}^{t} \mathbf {x}_{i+1} \\&\quad + \sum _{j=1}^{i} (-1)^{i-j} \mathbf {L}_{i+1}^{t-1} \mathbf {V}_{i+1} \left( \tilde{\mathbf {C}}_{i+1,j} - \varLambda _{i+1}^{-t}\tilde{\mathbf {C}}_{i+1,j} \varLambda _{j}^{t} \right) \mathbf {V}_{j}^{-1}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}) \\&= \mathbf {L}_{i+1}^{t} \mathbf {x}_{i+1} + \sum _{j=1}^{i} (-1)^{i-j} \mathbf {L}_{i+1}^{t-1} \mathbf {V}_{i+1} \tilde{\mathbf {C}}_{i+1,j} \mathbf {V}_{j}^{-1}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j})\\&\quad + \sum _{j=1}^{i} (-1)^{i+1-j} \mathbf {L}_{i+1}^{t-1} \mathbf {V}_{i+1} \varLambda _{i+1}^{-t}\tilde{\mathbf {C}}_{i+1,j} \varLambda _{j}^{t} \mathbf {V}_{j}^{-1}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}) \\&= \mathbf {L}_{i+1}^{t} \left[ \mathbf {x}_{i+1} + \sum _{j=1}^{i} (-1)^{i-j} \mathbf {L}_{i+1}^{-1} \mathbf {V}_{i+1} \tilde{\mathbf {C}}_{i+1,j} \mathbf {V}_{j}^{-1}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j})\right] \\&\quad + \sum _{j=1}^{i} (-1)^{i+1-j} \mathbf {L}_{i+1}^{t-1} \mathbf {V}_{i+1} \varLambda _{i+1}^{-t}\tilde{\mathbf {C}}_{i+1,j} \varLambda _{j}^{t} \mathbf {V}_{j}^{-1}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}). \end{aligned}$$

Since \(\mathbf {L}_{i+1}^{t-1} \mathbf {V}_{i+1} \varLambda _{i+1}^{-t} = \mathbf {L}_{i+1}^{-1} \mathbf {V}_{i+1}\) and \(\varLambda _{j}^{t} \mathbf {V}_{j}^{-1} = \mathbf {V}_{j}^{-1} \mathbf {L}_{j}^{t}\), then

$$\begin{aligned} \mathbf {x}_{i+1}(t)&= \mathbf {L}_{i+1}^{t} \left[ \mathbf {x}_{i+1} + \sum _{j=1}^{i} (-1)^{i-j} \left( \mathbf {L}_{i+1}^{-1} \mathbf {V}_{i+1} \tilde{\mathbf {C}}_{i+1,j} \mathbf {V}_{j}^{-1}\right) \mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j})\right] \\&\quad + \sum _{j=1}^{i} (-1)^{i+1-j} \left( \mathbf {L}_{i+1}^{-1} \mathbf {V}_{i+1} \tilde{\mathbf {C}}_{i+1,j} \mathbf {V}_{j}^{-1}\right) \mathbf {L}_{j}^{t}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}). \end{aligned}$$

For \(j=1,\ldots , i\), define

$$\begin{aligned} \mathbf {D}_{i+1,j} = \mathbf {L}_{i+1}^{-1} \mathbf {V}_{i+1} \tilde{\mathbf {C}}_{i+1,j} \mathbf {V}_{j}^{-1} \end{aligned}$$
(5.79)

as in Eq. (5.61) and \(\mathsf {pert}_{i+1} : \mathbb {C}^{d_1}\times \cdots \times \mathbb {C}^{d_{i+1}} \rightarrow \mathbb {C}^{d_{i+1}}\) as

$$\begin{aligned} \mathsf {pert}_{i+1}(\mathbf {x}_1,\ldots , \mathbf {x}_{i+1})&= \mathbf {x}_{i+1} + \sum _{j=1}^{i} (-1)^{i-j} \mathbf {L}_{i+1}^{-1} \mathbf {V}_{i+1} \tilde{\mathbf {C}}_{i+1,j} \mathbf {V}_{j}^{-1}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}) \\&= \mathbf {x}_{i+1} + \sum _{j=1}^{i} (-1)^{i-j} \mathbf {D}_{i+1,j}\mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}) \end{aligned}$$

as in Eq. (5.64) with the substitution \(i \mapsto i+1\). Substituting these definitions into the expression for the solution \(\mathbf {x}_{i+1}(t)\) and defining \(\mathbf {D}_{i+1,i+1} = \mathbf {I}_{d_{i+1}}\), we have

$$\begin{aligned} \mathbf {x}_{i+1}(t)&= \mathbf {L}_{i+1}^{t}\mathsf {pert}_{i+1}(\mathbf {x}_1,\ldots ,\mathbf {x}_{i+1}) + \sum _{j=1}^{i} (-1)^{i+1-j} \mathbf {D}_{i+1,j} \mathbf {L}_{j}^{t} \mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}) \\&= (-1)^0 \mathbf {D}_{i+1,i+1} \mathbf {L}_{i+1}^{t}\mathsf {pert}_{i+1}(\mathbf {x}_1,\ldots ,\mathbf {x}_{i+1}) \\&\qquad + \sum _{j=1}^{i} (-1)^{i+1-j} \mathbf {D}_{i+1,j} \mathbf {L}_{j}^{t} \mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}) \\&= \sum _{j=1}^{i+1} (-1)^{i+1-j} \mathbf {D}_{i+1,j} \mathbf {L}_{j}^{t} \mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j}) . \end{aligned}$$

Comparing with (5.59) under the substitution \(i\mapsto i+1\) completes the induction and the proof. \(\square \)
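For the seed case \(i=2\), the decomposition (5.59) with the definitions (5.61)–(5.64) can be verified numerically. A minimal sketch, assuming NumPy; the component matrices are hypothetical, built from chosen non-resonant spectra so that \(\tilde{\mathbf {C}}_{2,1}\) is well defined:

```python
import numpy as np

rng = np.random.default_rng(2)
t = 15

# Diagonalizable stable matrices with chosen, non-resonant spectra
# (hypothetical component systems; lambda_{1,m} != lambda_{2,l} for all l, m)
lam1, lam2 = np.array([0.9, 0.5, 0.3]), np.array([0.8, 0.6, 0.2])
V1, V2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
L1 = V1 @ np.diag(lam1) @ np.linalg.inv(V1)
L2 = V2 @ np.diag(lam2) @ np.linalg.inv(V2)
C21 = rng.standard_normal((3, 3))            # coupling C_{2,1}

# C-tilde_{2,1} from (5.62) and D_{2,1} from (5.61), with D_{1,1} = I
M = np.linalg.inv(V2) @ C21 @ V1
Ct = M / (1.0 - lam1[None, :] / lam2[:, None])
D21 = np.linalg.inv(L2) @ V2 @ Ct @ np.linalg.inv(V1)

x1_0, x2_0 = rng.standard_normal(3), rng.standard_normal(3)

# Perturbation functions (5.63)-(5.64): pert_1 = x_1, pert_2 = x_2 + D_{2,1} x_1
pert1 = x1_0
pert2 = x2_0 + D21 @ pert1

# Direct iteration of the chained cascade (5.31)
x1, x2 = x1_0.copy(), x2_0.copy()
for _ in range(t):
    x1, x2 = L1 @ x1, L2 @ x2 + C21 @ x1

# Decomposition (5.59): x_2(t) = L_2^t pert_2 - D_{2,1} L_1^t pert_1
x2_decomp = (np.linalg.matrix_power(L2, t) @ pert2
             - D21 @ np.linalg.matrix_power(L1, t) @ pert1)
print(np.allclose(x2, x2_decomp))
```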

Corollary 5.3

Assume that Condition 5.1 holds for (5.31). Then for all \(i\in \lbrace 2,\ldots ,n \rbrace \) and \(t \in \mathbb {N}\),

$$\begin{aligned} \left\| \varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1, \ldots , \mathbf {x}_n) - \mathbf {L}_i^{t} \mathsf {pert}_{i}(\mathbf {x}_1, \ldots , \mathbf {x}_i) \right\| \le \sum _{j=1}^{i-1} \Vert \mathbf {D}_{i,j}\Vert \, \Vert \mathbf {L}_{j}^t \mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j})\Vert , \end{aligned}$$
(5.80)

where \(\mathbf {D}_{i,j}\) and \(\mathsf {pert}_{j}\) are given by (5.61) and (5.64). Furthermore,

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{\left\| {\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n) - \mathbf {L}_i^{t} (\mathsf {pert}_{i}(\mathbf {x}_1, \ldots , \mathbf {x}_i) ) }\right\| }{\left\| {\mathbf {L}_i}\right\| ^t} = 0. \end{aligned}$$
(5.81)

Proof

Inequality (5.80) follows directly from Lemma 5.3, Eq. (5.59), and the fact that \(\mathbf {D}_{i,i} = \mathbf {I}_{d_{i}}\). Equation (5.81) follows from Condition 5.1, Eq. (5.34). \(\square \)

5.6.2 Proof of Theorem 5.3: Perturbation of Principal Eigenfunctions—Nominal, Linear System

We now prove Theorem 5.3. It is a straightforward application of Theorem 5.2.

Proof

(Proof of Theorem 5.3) We first show that (5.45) holds for all \(i \ge 1\) and \(t \ge 0\).

By definition,

$$\begin{aligned}&U_{\mathsf {Lin}}^{t} \phi _{(0,\ldots ,0,s_i,0,\ldots ,0)}(\mathbf {x}_1,\ldots , \mathbf {x}_n) \\&\qquad = \phi _{(0,\ldots ,0,s_i,0,\ldots ,0)}(\mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n)) \nonumber \\&\qquad = \phi _{i,s_i}(\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n)) \\&\qquad = \phi _{i,s_i}(\varPi _i\circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n))) \\&\qquad \quad + \phi _{i,s_i}(\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n) - \varPi _i \circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n))) \\&\qquad = (U_{\mathsf {Nom}}^{t}(\phi _{i,s_i}\circ \varPi _i))\circ \mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n) \\&\qquad \quad + \phi _{i,s_i}(\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n) - \varPi _i \circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n))) \\&\qquad = (U_{\mathsf {Nom}}^{t}\phi _{(0,\ldots ,0,s_i,0,\ldots ,0)})\circ \mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n) \\&\qquad \quad + \phi _{i,s_i}(\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n) - \varPi _i \circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n))). \end{aligned}$$

Therefore,

$$\begin{aligned}&{\left| U_{\mathsf {Lin}}^{t} \phi _{(0,\ldots ,0,s_i,0,\ldots ,0)}(\mathbf {x}_1,\ldots , \mathbf {x}_n) - U_{\mathsf {Nom}}^{t}\phi _{(0,\ldots ,0,s_i,0,\ldots ,0)}(\mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n))\right| } \nonumber \\&\qquad \le \Vert \phi _{i,s_i}\Vert \left\| {\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n) - \varPi _i\circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n)) }\right\| . \end{aligned}$$
(5.82)

By Theorem 5.2, Eq. (5.36), for all \(t \ge 0\) and \(i\ge 1\),

$$\begin{aligned}&\left\| {\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n) - \varPi _i \circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n)) }\right\| \nonumber \\&\qquad \le \sum _{j=1}^{i-1} \Vert \mathbf {D}_{i,j}\Vert \Vert \mathbf {L}_{j}^t \mathsf {pert}_{j}(\mathbf {x}_1,\ldots ,\mathbf {x}_{j})\Vert . \end{aligned}$$
(5.83)

This estimate along with (5.82) gives (5.45).

By Theorem 5.2, Eq. (5.43), for all \(i\in \lbrace 1,\ldots , n \rbrace \) and any \(\varepsilon > 0\),

$$\begin{aligned} \frac{\left\| {\varPi _i \circ \mathsf {Lin}^{\circ t}(\mathbf {x}_1,\ldots , \mathbf {x}_n) - \varPi _i \circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {x}_1,\ldots , \mathbf {x}_n)) }\right\| }{\Vert \mathbf {L}_i\Vert ^{t}} \le \varepsilon , \end{aligned}$$
(5.84)

for all t large enough. This is equivalent to (5.46). \(\square \)

The Generalized Laplace Analysis theorem uses Laplace averages of the Koopman operator to project a function onto an eigenspace; this projection is used in the proof of Corollary 5.2. The following definition and theorem are taken from [14].

Definition 5.5

(Dominating point spectrum) For \(r > 0\), let \(\mathbb D_r\) be the open disc of radius r centered at 0 in the complex plane and let \(\sigma (U; \mathbb D_r) = \sigma (U) \cap \mathbb D_r\). We say that \(\sigma (U)\) has a dominating point spectrum if there exists an \(R > 0\) such that \(\sigma (U)\setminus \mathbb D_R\) is not empty and, for every \(r > R\),

  1. if \(\sigma (U; \mathbb D_r) \cap \sigma _p(U) \ne \emptyset \), then the peripheral spectrum of \(\sigma (U; \mathbb D_r)\) is not empty, and

  2. the set \(\sigma (U) \setminus \mathbb D_r\) consists only of eigenvalues (i.e., \(\sigma (U) \setminus \mathbb D_r \subset \sigma _p(U)\)).

Theorem 5.6

Let \(\sigma (U)\) have a dominating point spectrum and assume that the point spectrum is concentrated on isolated circles in the complex plane. Let \(\lambda \) be an eigenvalue of U. The projection \(P_\lambda \) onto \(N(\lambda I - U)\), the \(\lambda \)-eigenspace of U, can be computed as

$$\begin{aligned} P_\lambda = \lim _{n\rightarrow \infty } \frac{1}{n} \sum _{k=0}^{n-1} \lambda ^{-k} U^k \left( I - \sum _{\mu \in \Omega } P_\mu \right) , \end{aligned}$$
(5.85)

where the limit exists in the strong operator topology and where \(\Omega = \lbrace \mu \in \sigma _p(U) : |\mu | > |\lambda | \rbrace \).
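In finite dimensions, (5.85) can be illustrated directly. For a diagonalizable matrix with a single dominant eigenvalue \(\lambda \), the set \(\Omega \) is empty and the Laplace average reduces to a Cesàro mean of \(\lambda ^{-k} U^k\). A sketch assuming NumPy; the matrix and its spectrum are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical diagonalizable operator with one dominant eigenvalue lam = 2
V = rng.standard_normal((3, 3))
W = np.linalg.inv(V)
lams = np.array([2.0, 0.5, 0.3])
U = V @ np.diag(lams) @ W
lam = lams[0]          # |lam| > |mu| for every other mu, so Omega is empty

# Cesàro mean of lam^{-k} U^k, accumulated iteratively via M = U / lam
n = 20000
M = U / lam
Mk = np.eye(3)
acc = np.zeros((3, 3))
for _ in range(n):
    acc += Mk
    Mk = M @ Mk
P = acc / n

# Exact spectral projection onto the lam-eigenspace: v_1 w_1^T
P_exact = np.outer(V[:, 0], W[0, :])

print(np.allclose(P, P_exact, atol=1e-2))
```

The average converges only at rate \(1/n\) (the subdominant terms decay geometrically but the Cesàro mean does not), hence the loose tolerance.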

5.6.3 Proof of Theorem 5.4: Asymptotic Equivalence for Nonlinear, Chained Cascades

We now prove Theorem 5.4. It is a straightforward application of Theorem 5.2 and the fact that the topological conjugacy is a homeomorphism.

Proof

(Proof of Theorem 5.4) Fix \(\varepsilon > 0\) and \(\mathbf {Y} = (\mathbf {y}_1,\ldots , \mathbf {y}_n) \in \mathbb {C}^{d_1}\times \cdots \times \mathbb {C}^{d_n}\). Define \(\mathbf {X} = \tau ^{-1}(\mathbf {Y})\). Denote by \(\overline{B_{i}}\) the closed unit ball centered at the origin in \(\mathbb {C}^{d_i}\). Condition 5.1 and Theorem 5.2, Eq. (5.43) guarantee that \(\mathsf {Lin}^{\circ t}(\mathbf {X})\) and \(\mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {X}))\) are in the compact set \(\overline{B_{1}} \times \cdots \times \overline{B_{n}}\) for all t large enough.

Since \(\tau \) is continuous, it is uniformly continuous on \(\overline{B_{1}} \times \cdots \times \overline{B_{n}}\). Let \(\delta > 0\) be such that if \(\mathbf {X}, \mathbf {X}' \in \overline{B_1} \times \cdots \times \overline{B_n}\) and \(\Vert \mathbf {X} - \mathbf {X}'\Vert _{\times } < \delta \), then \(\Vert \tau (\mathbf {X}) - \tau (\mathbf {X}')\Vert _{\times } < \varepsilon \).

By Corollary 5.1, there is a \(T \in \mathbb {N}\) such that \(t \ge T\) implies

$$\begin{aligned} \Vert \mathsf {Lin}^{\circ t}(\mathbf {X}) - \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {X})) \Vert _{\times } < \delta . \end{aligned}$$
(5.86)

The uniform continuity of \(\tau \) implies that

$$\begin{aligned} \Vert \tau \circ \mathsf {Lin}^{\circ t}(\mathbf {X}) - \tau \circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {X})) \Vert _{\times }&< \varepsilon ,&(t \ge T). \end{aligned}$$
(5.87)

Now, since \(\tau \) is a topological conjugacy, \(\mathsf {Lin}^{\circ t} = \tau ^{-1}\circ \mathsf {NonLin}^{\circ t} \circ \tau \). Plugging this into (5.87) gives, for all \(t \ge T\),

$$\begin{aligned} \varepsilon&> \Vert \tau \circ (\tau ^{-1}\circ \mathsf {NonLin}^{\circ t} \circ \tau )(\mathbf {X}) - \tau \circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {X})) \Vert _{\times } \\&= \Vert \mathsf {NonLin}^{\circ t}(\tau (\mathbf {X})) - \tau \circ \mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {X})) \Vert _{\times } \\&= \Vert \mathsf {NonLin}^{\circ t}(\tau (\mathbf {X})) - (\tau \circ \mathsf {Nom}^{\circ t}\circ \tau ^{-1})\circ \tau \circ \mathsf {pert}(\mathbf {X}) \Vert _{\times } \\&= \Vert \mathsf {NonLin}^{\circ t}(\tau (\tau ^{-1}(\mathbf {Y}))) - (\tau \circ \mathsf {Nom}^{\circ t}\circ \tau ^{-1})\circ \tau \circ \mathsf {pert}(\tau ^{-1}(\mathbf {Y})) \Vert _{\times } \\&= \Vert \mathsf {NonLin}^{\circ t}(\mathbf {Y}) - (\tau \circ \mathsf {Nom}^{\circ t}\circ \tau ^{-1})\circ (\tau \circ \mathsf {pert} \circ \tau ^{-1})(\mathbf {Y}) \Vert _{\times } . \end{aligned}$$

Therefore,

$$\begin{aligned} \lim _{t\rightarrow \infty } \Vert \mathsf {NonLin}^{\circ t}(\mathbf {Y}) - (\tau \circ \mathsf {Nom}^{\circ t}\circ \tau ^{-1})\circ (\tau \circ \mathsf {pert} \circ \tau ^{-1})(\mathbf {Y}) \Vert _{\times } = 0. \end{aligned}$$

This completes the proof. \(\square \)
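The conjugacy identity \(\mathsf {Lin}^{\circ t} = \tau ^{-1}\circ \mathsf {NonLin}^{\circ t} \circ \tau \) at the heart of this proof can be illustrated in one dimension. The example below is a sketch under hypothetical choices (it is not the chapter's construction): \(\tau (x) = x^3\) is a homeomorphism of \(\mathbb {R}\) conjugating the contraction \(\mathsf {Lin}(x) = 0.5x\) to \(\mathsf {NonLin} = \tau \circ \mathsf {Lin} \circ \tau ^{-1}\).

```python
import numpy as np

# Hypothetical 1-D conjugacy: tau(x) = x^3, Lin(x) = 0.5*x.
tau = lambda x: x ** 3
tau_inv = np.cbrt                           # sign-preserving cube root
Lin = lambda x: 0.5 * x
NonLin = lambda y: tau(Lin(tau_inv(y)))     # works out to 0.125 * y here

def iterate(f, x, t):
    """Compute the t-fold composition f^{o t}(x)."""
    for _ in range(t):
        x = f(x)
    return x

x0 = 1.7
for t in (1, 5, 20):
    lhs = iterate(Lin, x0, t)                    # Lin^{o t}(x0)
    rhs = tau_inv(iterate(NonLin, tau(x0), t))   # tau^{-1} o NonLin^{o t} o tau
    assert np.isclose(lhs, rhs)
```

Since \(\tau \) is uniformly continuous on compact sets, closeness of the conjugated trajectories transfers to closeness of the original ones, which is exactly how (5.86) yields (5.87).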

5.6.4 Proof of Theorem 5.5: Perturbation of Principal Eigenfunctions—Nominal, Nonlinear Cascades

To save space in the following proof, we will write \(\phi _{(0,\ldots ,0,s_i,0,\ldots ,0)}\) as \(\phi _{s_i \mathbf {e}_{n,i}}\), where \(\mathbf {e}_{n,i}\) is the ith canonical basis vector of length n.

Proof

(Proof of Theorem 5.5) Fix \(\mathbf {Y} = (\mathbf {y}_1,\ldots , \mathbf {y}_n) \in \mathbb {C}^{d_1}\times \cdots \times \mathbb {C}^{d_n}\) and let \(\mathbf {X} = \tau ^{-1}(\mathbf {Y})\). The topological conjugacy satisfies

$$\begin{aligned} \mathsf {Lin}^{\circ t}(\mathbf {X}) = (\tau ^{-1} \circ \mathsf {NonLin}^{\circ t} \circ \tau )(\mathbf {X}). \end{aligned}$$
(5.88)

Using this relation, we get

$$\begin{aligned} U_{\mathsf {Lin}}^{t} \phi _{s_i \mathbf {e}_{n,i}}(\mathbf {X})&= \phi _{s_i \mathbf {e}_{n,i}}(\mathsf {Lin}^{\circ t}(\mathbf {X})) \\&= \phi _{s_i \mathbf {e}_{n,i}}((\tau ^{-1} \circ \mathsf {NonLin}^{\circ t} \circ \tau )(\mathbf {X})) \\&= (\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})(\mathsf {NonLin}^{\circ t}(\tau (\mathbf {X}))) \\&= U_{\mathsf {NonLin}}^{t}(\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})(\mathbf {Y}). \end{aligned}$$

On the other hand,

$$\begin{aligned} U_{\mathsf {Nom}}^{t} \phi _{s_i \mathbf {e}_{n,i}}(\mathsf {pert}(\mathbf {X}))&= \phi _{s_i \mathbf {e}_{n,i}}(\mathsf {Nom}^{\circ t}(\mathsf {pert}(\mathbf {X}))) \\&= \phi _{s_i \mathbf {e}_{n,i}}(\tau ^{-1}\circ \tau \circ \mathsf {Nom}^{\circ t}\circ \tau ^{-1} \circ \tau (\mathsf {pert}(\mathbf {X}))) \\&= (\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})((\tau \circ \mathsf {Nom}^{\circ t}\circ \tau ^{-1}) \circ \tau (\mathsf {pert}(\mathbf {X}))) \\&= (\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})((\tau \circ \mathsf {Nom}^{\circ t}\circ \tau ^{-1}) (\tau \circ \mathsf {pert} \circ \tau ^{-1})(\mathbf {Y})) \\&= U_{\tau \circ \mathsf {Nom}\circ \tau ^{-1}}^{t}(\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})((\tau \circ \mathsf {pert} \circ \tau ^{-1})(\mathbf {Y})) . \end{aligned}$$

Combining these two expressions, we have

$$\begin{aligned}&{\left| U_{\mathsf {NonLin}}^{t}(\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})(\mathbf {Y}) - U_{\tau \circ \mathsf {Nom}\circ \tau ^{-1}}^{t}(\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})((\tau \circ \mathsf {pert} \circ \tau ^{-1})(\mathbf {Y}))\right| } \\&\qquad \qquad = {\left| U_{\mathsf {Lin}}^{t} \phi _{s_i \mathbf {e}_{n,i}}(\mathbf {X}) - U_{\mathsf {Nom}}^{t} \phi _{s_i \mathbf {e}_{n,i}}(\mathsf {pert}(\mathbf {X}))\right| }. \end{aligned}$$

Theorem 5.3, Eq. (5.46), implies

$$\begin{aligned} 0&= \lim _{t\rightarrow \infty } \frac{{\left| U_{\mathsf {Lin}}^{t} \phi _{s_i \mathbf {e}_{n,i}}(\mathbf {X}) - U_{\mathsf {Nom}}^{t} \phi _{s_i \mathbf {e}_{n,i}}(\mathsf {pert}(\mathbf {X}))\right| } }{\Vert \mathbf {L}_i\Vert ^t} \\&=\lim _{t\rightarrow \infty } \frac{{\left| U_{\mathsf {NonLin}}^{t}(\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})(\mathbf {Y}) - U_{\tau \circ \mathsf {Nom}\circ \tau ^{-1}}^{t}(\phi _{s_i \mathbf {e}_{n,i}}\circ \tau ^{-1})((\tau \circ \mathsf {pert} \circ \tau ^{-1})(\mathbf {Y}))\right| } }{\Vert \mathbf {L}_i\Vert ^t} . \end{aligned}$$

\(\square \)
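The mechanism of this proof, that composing a principal eigenfunction with \(\tau ^{-1}\) transports it to the conjugated dynamics with the same eigenvalue, admits a simple one-dimensional check. The example is hypothetical (not from the chapter): for \(\mathsf {Lin}(x) = 0.5x\), \(\phi (x) = x\) is a principal eigenfunction with eigenvalue 0.5, and with \(\tau (x) = x^3\) the function \(\phi \circ \tau ^{-1}\) is an eigenfunction of the Koopman operator of \(\tau \circ \mathsf {Lin}\circ \tau ^{-1}\) with the same eigenvalue.

```python
import numpy as np

# Hypothetical 1-D check of eigenvalue preservation under conjugacy.
lam = 0.5
tau = lambda x: x ** 3
tau_inv = np.cbrt                           # sign-preserving cube root
phi = lambda x: x                           # principal eigenfunction of Lin
NonLin = lambda y: tau(lam * tau_inv(y))    # conjugated dynamics (= 0.125*y)
psi = lambda y: phi(tau_inv(y))             # transported eigenfunction phi o tau^{-1}

for y in (-2.0, 0.3, 5.0):
    # Koopman action: (U_NonLin psi)(y) = psi(NonLin(y)) = lam * psi(y)
    assert np.isclose(psi(NonLin(y)), lam * psi(y))
```

This is the content of the eigenvalue-preservation claim: the principal eigenvalues of the component systems survive both the cascade wiring and the topological conjugacy.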


Copyright information

© 2020 Springer Nature Switzerland AG


Cite this chapter

Mohr, R., Mezić, I. (2020). Koopman Spectrum and Stability of Cascaded Dynamical Systems. In: Mauroy, A., Mezić, I., Susuki, Y. (eds) The Koopman Operator in Systems and Control. Lecture Notes in Control and Information Sciences, vol 484. Springer, Cham. https://doi.org/10.1007/978-3-030-35713-9_5
