
Low-Rank Dynamic Mode Decomposition: An Exact and Tractable Solution

Published in: Journal of Nonlinear Science

Abstract

This work studies the linear approximation of high-dimensional dynamical systems using low-rank dynamic mode decomposition (DMD). Searching for this approximation in a data-driven manner is formalized as a low-rank constrained optimization problem. This problem is non-convex, and state-of-the-art algorithms are all sub-optimal. This paper shows that there exists a closed-form solution, computable in polynomial time, and characterizes the \(\ell _2\)-norm of the optimal approximation error. The paper also proposes low-complexity algorithms that build reduced models from this optimal solution, based on singular value decomposition or eigenvalue decomposition. The algorithms are evaluated in numerical simulations using synthetic and physical data benchmarks.


Notes

  1. Diagonalizability is guaranteed if all the nonzero eigenvalues are distinct. However, this condition is only sufficient and the class of diagonalizable matrices is larger (Horn and Johnson 2012).

  2. The “DMD” of system (1) refers to the EVD of the solution of problem (8) without the low-rank constraint.

  3. We do not evaluate the sparse DMD approach since the error norm induced by this method will always be greater than the one induced by low-rank projected DMD, see details in Héas and Herzet (2021).

  4. The peak signal-to-noise ratio is defined as \(20 \log _{10} \frac{\max _{t,i}\Vert x_t(\theta _i)\Vert _\infty }{ \sigma }\), where \(\sigma \) denotes the standard deviation of the additive Gaussian noise.

References

  • Auliac, G., Caby, J.: Mathématiques 3e Année: Topologie et analyse, Objectif Licence, EdiScience (2005)

  • Bertsekas, D.: Nonlinear Programming. Athena Scientific, Belmont (1995)

  • Budišić, M., Mohr, R., Mezić, I.: Applied Koopmanism. Chaos Interdiscip. J. Nonlinear Sci. 22(4), 047510 (2012)

  • Chandrasekhar, S.: Hydrodynamic and Hydromagnetic Stability. Courier Corporation, Chelmsford (2013)

  • Chen, K.K., Tu, J.H., Rowley, C.W.: Variants of dynamic mode decomposition: boundary condition, Koopman, and Fourier analyses. J. Nonlinear Sci. 22(6), 887–915 (2012)

  • Cohen, A., DeVore, R.: Approximation of high-dimensional parametric PDEs. Acta Numerica 24, 1–159 (2015). https://doi.org/10.1017/S0962492915000033

  • Cui, T., Marzouk, Y.M., Willcox, K.E.: Data-driven model reduction for the Bayesian solution of inverse problems. Int. J. Numer. Methods Eng. 102, 966–990 (2015)

  • Dawson, S.T.M., Hemati, M.S., Williams, M.O., Rowley, C.W.: Characterizing and correcting for the effect of sensor noise in the dynamic mode decomposition. Exp. Fluids 57, 42 (2016). https://doi.org/10.1007/s00348-016-2127-7

  • Eckart, C., Young, G.: The approximation of one matrix by another of lower rank. Psychometrika 1(3), 211–218 (1936)

  • Fazel, M.: Matrix Rank Minimization with Applications, Stanford University, Ph.D. Thesis (2002)

  • Golub, G., Van Loan, C.: Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore (2013)

  • Hackbusch, W.: Tensor Spaces and Numerical Tensor Calculus, vol. 42. Springer, Berlin (2012)

  • Hasselmann, K.: PIPs and POPs: the reduction of complex dynamical systems using principal interaction and oscillation patterns. J. Geophys. Res. Atmos. 93(D9), 11015–11021 (1988)

  • Héas, P., Herzet, C.: Low-rank approximation of linear maps (2018)

  • Héas, P., Herzet, C.: State-of-the-art algorithms for low rank dynamic mode decomposition (2021)

  • Héas, P., Herzet, C., Combès, B.: Generalized kernel-based dynamic mode decomposition. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2020)

  • Hemati, M.S., Rowley, C.W., Deem, E.A., Cattafesta, L.N.: De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets. Theoret. Comput. Fluid Dyn. 31(4), 349–368 (2017)

  • Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (2012)

  • Jain, P., Meka, R., Dhillon, I.S.: Guaranteed rank minimization via singular value projection. In: Advances in Neural Information Processing Systems, pp. 937–945 (2010)

  • Jovanovic, M., Schmid, P., Nichols, J.: Low-rank and sparse dynamic mode decomposition. Center for Turbulence Research Annual Research Briefs, pp. 139–152 (2012)

  • Klus, S., Koltai, P., Schütte, C.: On the numerical approximation of the Perron-Frobenius and Koopman operator. arXiv preprint arXiv:1512.05997 (2015)

  • Kutz, J.N., Brunton, S.L., Brunton, B.W., Proctor, J.L.: Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. SIAM, Philadelphia (2016)

  • Lee, K., Bresler, Y.: Guaranteed minimum rank approximation from linear observations by nuclear norm minimization with an ellipsoidal constraint (2009)

  • Lee, K., Bresler, Y.: Admira: atomic decomposition for minimum rank approximation. IEEE Trans. Inf. Theory 56(9), 4402–4416 (2010)

  • Li, Q., Dietrich, F., Bollt, E.M., Kevrekidis, I.G.: Extended dynamic mode decomposition with dictionary learning: a data-driven adaptive spectral decomposition of the Koopman operator. arXiv preprint arXiv:1707.00225 (2017)

  • Mesbahi, M., Papavassilopoulos, G.P.: On the rank minimization problem over a positive semidefinite linear matrix inequality. IEEE Trans. Autom. Control 42(2), 239–243 (1997)

  • Mishra, B., Meyer, G., Bach, F., Sepulchre, R.: Low-rank optimization with trace norm penalty. SIAM J. Optim. 23(4), 2124–2149 (2013)

  • Parrilo, P.A., Khatri, S.: On cone-invariant linear matrix inequalities. IEEE Trans. Autom. Control 45(8), 1558–1563 (2000)

  • Penland, C., Magorian, T.: Prediction of Niño 3 sea surface temperatures using linear inverse modeling. J. Clim. 6(6), 1067–1076 (1993)

  • Quarteroni, A., Manzoni, A., Negri, F.: Reduced Basis Methods for Partial Differential Equations: An Introduction, vol. 92. Springer, Berlin (2015)

  • Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev 52(3), 471–501 (2010)

  • Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 656, 5–28 (2010)

  • Taylor, G., Green, A.: Mechanism of the production of small eddies from large ones. Proc. R. Soc. Lond. A 158(895), 499–521 (1937)

  • Tu, J.H., Rowley, C.W., Luchtenburg, D.M., Brunton, S.L., Kutz, J.N.: On dynamic mode decomposition: theory and applications. J. Comput. Dyn. 1(2), 391–421 (2014)

  • Williams, M.O., Kevrekidis, I., Rowley, C.: A data-driven approximation of the Koopman operator: extending dynamic mode decomposition. J. Nonlinear Sci. 25(6), 1307–1346 (2015)

  • Williams, M.O., Rowley, C.W., Kevrekidis, I.G.: A kernel-based approach to data-driven Koopman spectral analysis. arXiv preprint arXiv:1411.2260 (2014)

  • Wynn, A., Pearson, D., Ganapathisubramani, B., Goulart, P.J.: Optimal mode decomposition for unsteady flows. J. Fluid Mech. 733, 473–503 (2013)

  • Yeung, E., Kundu, S., Hodas, N.: Learning deep neural network representations for Koopman operators of nonlinear dynamical systems (2017)

Acknowledgements

The authors thank the “Agence Nationale de la Recherche” (ANR) which partially funded this research through the GERONIMO project (ANR-13-JS03-0002).

Author information

Corresponding author

Correspondence to Patrick Héas.

Additional information

Communicated by Paul Newton.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Proof of Theorem 1

We begin by showing the first part of the theorem, namely that \(A_k^\star ={U}_{{\mathbf {Z}},k} {{U}_{{\mathbf {Z}},k}}^\intercal {{\mathbf {Y}}}{{\mathbf {X}}}^{\dagger }\) is a solution of (9). We first prove the existence of a minimizer of (9). To this end, let us show that we can restrict our attention to a minimization problem over the set

$$\begin{aligned} {\mathcal {A}}=\{{\tilde{A}} \in \mathbb {R}^{n \times n} : \text {rank}({\tilde{A}}) \le k, \text {Im}({\tilde{A}}^\intercal ) \subseteq \text {Im}({{\mathbf {X}}})\}. \end{aligned}$$

Indeed, any matrix \( A \in \{{\tilde{A}} \in \mathbb {R}^{n \times n} : \text {rank}({\tilde{A}}) \le k\}\) can be decomposed into two components, \( A= A^\parallel + A^\perp \) with \(A^\parallel =A{{\mathbf {X}}}{{\mathbf {X}}}^\dag \) and \(A^\perp =A(I_n-{{\mathbf {X}}}{{\mathbf {X}}}^\dag )\), where \( A^\parallel \) belongs to the set \({\mathcal {A}}\) and the rows of \(A^\parallel \) are orthogonal to those of \(A^\perp \), i.e., \( A^\perp ( A^\parallel )^\intercal =0\). By construction, the rows of \(A^\perp \) are orthogonal to \(\text {Im}({{\mathbf {X}}})\), so that \(A^\perp {{\mathbf {X}}}=0\) and therefore \(\Vert {{\mathbf {Y}}}- A {{\mathbf {X}}}\Vert _F^2=\Vert {{\mathbf {Y}}}- A^\parallel {{\mathbf {X}}}\Vert _F^2\). Moreover, since \(A^\parallel \) is the product of A with an orthogonal projector, we have \( \text {rank}( A^\parallel ) \le \text {rank}( A)\). In consequence, if A is a minimizer of (9), then \( A^\parallel \) is also a minimizer: it leads to the same value of the cost function and it is admissible, since \(\text {rank}( A^\parallel ) \le \text {rank}( A) \le k\). Therefore, it is sufficient to find a minimizer over the set \({\mathcal {A}}\).
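This orthogonal splitting can be sketched numerically. The following Python/NumPy snippet is our own illustration (not code from the paper), assuming the natural choice \(A^\parallel = A{{\mathbf {X}}}{{\mathbf {X}}}^\dag \); it checks the properties used in the argument for a random rank-k matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 6, 4, 2
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # a rank-k matrix

P_X = X @ np.linalg.pinv(X)      # orthogonal projector onto Im(X) in R^n
A_par = A @ P_X                  # rows of A projected onto Im(X)
A_perp = A @ (np.eye(n) - P_X)   # residual component, rows orthogonal to Im(X)

fro = lambda M: np.linalg.norm(M, "fro")
assert np.allclose(A_perp @ A_par.T, 0)                # orthogonal row spaces
assert np.allclose(A_perp @ X, 0)                      # A_perp does not affect the fit
assert np.isclose(fro(Y - A @ X), fro(Y - A_par @ X))  # same cost value
assert np.linalg.matrix_rank(A_par) <= np.linalg.matrix_rank(A)
```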

Now, according to the Weierstrass theorem (Bertsekas 1995, Proposition A.8), existence is guaranteed if the admissible set \({\mathcal {A}}\) is closed and the objective function \(\Vert {{\mathbf {Y}}}- A {{\mathbf {X}}}\Vert _F^2\) is coercive on it. Let us prove these two properties. We first show that \({\mathcal {A}}\) is closed. According to Hackbusch (2012), Lemma 2.4, the set of matrices of rank at most k is closed. Moreover, it is well known that a linear sub-space of a finite-dimensional normed vector space is closed (Auliac and Caby 2005, Chapter 7.2), so that the set \(\{{\tilde{A}} \in \mathbb {R}^{n \times n} : \text {Im}({\tilde{A}}^\intercal ) \subseteq \text {Im}({{\mathbf {X}}})\}\) is closed. Since \({\mathcal {A}}\) is the intersection of two closed sets, we deduce that \({\mathcal {A}}\) is closed. Next, we show coercivity. Consider the SVD of any \(A\in {\mathcal {A}}\): \(A=U_A\varSigma _A V_A^\intercal \), where \(\varSigma _A=\text {diag}(\sigma _{A,1}\cdots \sigma _{A,k})\). From the definition of the Frobenius norm, we have for any \(A \in {\mathcal {A}}\), \( \Vert A\Vert _F =( \sum _{i=1}^k\sigma _{A,i}^2)^{1/2} \). Hence, \(\Vert A\Vert _F \rightarrow \infty \) if and only if a non-empty subset of singular values, say \(\{\sigma _{A,j}\}_{j \in {\mathcal {J}}}\), tends to infinity. Therefore, we have

$$\begin{aligned} \lim _{ \Vert A\Vert _F \rightarrow \infty :A \in {\mathcal {A}} } \Vert {{\mathbf {Y}}}- A {{\mathbf {X}}}\Vert _F^2&= \lim _{\Vert A\Vert _F \rightarrow \infty : A \in {\mathcal {A}} } \Vert {{\mathbf {Y}}}\Vert ^2_F -2 \,\text {trace}({{\mathbf {Y}}}^\intercal A {{\mathbf {X}}})+ \Vert A {{\mathbf {X}}}\Vert _F^2, \\&= \lim _{ \Vert A\Vert _F \rightarrow \infty :A \in {\mathcal {A}} } \Vert A {{\mathbf {X}}}\Vert _F^2 = \lim _{\Vert A\Vert _F \rightarrow \infty : A \in {\mathcal {A}} } \Vert \varSigma _A V_A^\intercal {{\mathbf {X}}}\Vert _F^2, \\&= \lim _{\sigma _{A,j} \rightarrow \infty : A \in {\mathcal {A}},j \in {\mathcal {J}}} \sum _{j=1}^k \sigma _{A,j}^2 \Vert {{\mathbf {X}}}^\intercal v_A^{j} \Vert _2^2 = \infty . \end{aligned}$$

The second equality holds because the dominant term as \(\Vert A\Vert _F \rightarrow \infty \) is the quadratic one, \( \Vert A {{\mathbf {X}}}\Vert _F^2\). The third equality follows from the invariance of the Frobenius norm under unitary transforms, while the last one is obtained by noticing that \( \Vert {{\mathbf {X}}}^\intercal v_A^{j} \Vert _2 \ne 0\), because \( v_A^{j} \in \text {Im}({{\mathbf {X}}})\) since \(A \in {\mathcal {A}}\). This shows that the objective function is coercive over the closed set \({\mathcal {A}}\). By the Weierstrass theorem, a minimizer of (9) therefore exists in \({\mathcal {A}}\), and thus in \(\{{\tilde{A}} \in \mathbb {R}^{n \times n} : \text {rank}({\tilde{A}}) \le k\}\). In the following, we no longer restrict our attention to the domain \({\mathcal {A}}\) and come back to the original problem (9) involving the set of low-rank matrices.

Next, problem (9) can be rewritten as the unconstrained minimization

$$\begin{aligned} A_k^\star \in&{{\,\mathrm{arg\,min}\,}}_{A=PQ^\intercal : P,Q \in \mathbb {R}^{n \times k}} \Vert {{\mathbf {Y}}}-A {{\mathbf {X}}}\Vert ^2_F. \end{aligned}$$
(21)

In the following, we use the first-order optimality condition of problem (21) to characterize its minimizers. A closed-form expression for a minimizer is then obtained by introducing an additional orthonormality property. The first-order optimality condition and this orthonormality property are stated in the following lemma, proven in “Appendix B”.

Lemma 1

Problem (21) admits a solution such that

$$\begin{aligned}&P^\intercal P=I_k \end{aligned}$$
(22)
$$\begin{aligned}&{{\mathbf {X}}}{{\mathbf {Y}}}^\intercal P= {{\mathbf {X}}}{{\mathbf {X}}}^\intercal Q. \end{aligned}$$
(23)

To find a closed-form expression of a minimizer of (21), we need to rewrite condition (23). We prove that this condition is equivalent to

$$\begin{aligned} {\mathbb {P}}_{{{\mathbf {X}}}^\intercal }{{\mathbf {Y}}}^\intercal P={{\mathbf {X}}}^\intercal Q. \end{aligned}$$
(24)

Indeed, we show that (23) implies that, for any solution of the form \(PQ^\intercal \), there exists \(Z\in \mathbb {R}^{m \times k}\) such that

$$\begin{aligned} {\mathbb {P}}_{{{\mathbf {X}}}^\intercal }{{\mathbf {Y}}}^\intercal P +Z={{\mathbf {X}}}^\intercal Q, \end{aligned}$$
(25)

with columns of Z in \(\ker ({{\mathbf {X}}})\). Indeed, set \(Z={{\mathbf {X}}}^\intercal Q-{\mathbb {P}}_{{{\mathbf {X}}}^\intercal }{{\mathbf {Y}}}^\intercal P\), so that (25) holds by construction. Multiplying on the left by \({{\mathbf {X}}}\) and using \({{\mathbf {X}}}{\mathbb {P}}_{{{\mathbf {X}}}^\intercal }={{\mathbf {X}}}\) (because \({{\mathbf {X}}}{{\mathbf {X}}}^\dag \) is the orthogonal projector onto the space spanned by the columns of \({{\mathbf {X}}}\)), we obtain \({{\mathbf {X}}}Z={{\mathbf {X}}}{{\mathbf {X}}}^\intercal Q-{{\mathbf {X}}}{{\mathbf {Y}}}^\intercal P\), which vanishes by (23). Hence, the columns of Z belong to \(\ker ({{\mathbf {X}}})\). This proves that (23) implies (25).

Now, since columns of the two terms in the left-hand side of (25) are orthogonal and since columns of the matrix in the right-hand side are in the image of \({{\mathbf {X}}}^\intercal \), we deduce that the only admissible choice is Z with columns belonging both to \(\ker ({{\mathbf {X}}})\) and \(\text {Im}({{\mathbf {X}}}^\intercal )\), i.e., Z is a matrix full of zeros. Therefore, we obtain the necessary condition (24).

We have shown on the one hand that (23) implies (24). On the other hand, by multiplying on the left both sides of (24) by \({{\mathbf {X}}}\), we obtain (23) (\({{\mathbf {X}}}{\mathbb {P}}_{{{\mathbf {X}}}^\intercal }={{\mathbf {X}}}\) because \({{\mathbf {X}}}{{\mathbf {X}}}^\dag \) is the orthogonal projector onto the space spanned by the columns of \({{\mathbf {X}}}\)). Therefore, the necessary conditions (23) and (24) are equivalent.

We are now ready to characterize a minimizer of (9). According to Lemma 1, we have

$$\begin{aligned}&\min _{A \in \mathbb {R}^{n \times n} : \text {rank}(A) \le k }\Vert {{\mathbf {Y}}}- A{{\mathbf {X}}}\Vert _F^2 \nonumber \\&\quad =\min _{( {{\tilde{P}}}, {{\tilde{Q}}}) \in \mathbb {R}^{n \times k} \times \mathbb {R}^{n \times k} }\Vert {{\mathbf {Y}}}- {{\tilde{P}}}{{\tilde{Q}}}^\intercal {{\mathbf {X}}}\Vert _F^2 \quad s.t. \quad \left\{ \begin{aligned}&{{\tilde{P}}}^\intercal {{\tilde{P}}}=I_k\\&\quad {{\mathbf {X}}}{{\mathbf {Y}}}^\intercal {{\tilde{P}}} = {{\mathbf {X}}}{{\mathbf {X}}}^\intercal {{\tilde{Q}}}\\ \end{aligned}\right. , \end{aligned}$$
(26)
$$\begin{aligned}&\quad =\min _{( {{\tilde{P}}}, {{\tilde{Q}}}) \in \mathbb {R}^{n \times k} \times \mathbb {R}^{n \times k} }\Vert {{\mathbf {Y}}}- {{\tilde{P}}}{{\tilde{Q}}}^\intercal {{\mathbf {X}}}\Vert _F^2 \quad s.t. \quad \left\{ \begin{aligned}&{{\tilde{P}}}^\intercal {{\tilde{P}}}=I_k \\&\quad {\mathbb {P}}_{{{\mathbf {X}}}^\intercal }{{\mathbf {Y}}}^\intercal {\tilde{P}}={{\mathbf {X}}}^\intercal {{\tilde{Q}}}\\ \end{aligned}\right. ,\nonumber \\&\quad =\min _{ {{\tilde{P}}} \in \mathbb {R}^{n \times k} }\Vert {{\mathbf {Y}}}- {{\tilde{P}}} {{\tilde{P}}}^\intercal {{\mathbf {Y}}}{\mathbb {P}}_{{{\mathbf {X}}}^\intercal } \Vert _F^2 \quad s.t. \quad {{\tilde{P}}}^\intercal {{\tilde{P}}}=I_k,\end{aligned}$$
(27)
$$\begin{aligned}&\quad =\min _{ {{\tilde{P}}} \in \mathbb {R}^{n \times k} }\Vert ({{\mathbf {Y}}}- {{\tilde{P}}} {{\tilde{P}}}^\intercal {{\mathbf {Y}}}){\mathbb {P}}_{{{\mathbf {X}}}^\intercal } + {{\mathbf {Y}}}(I_m-{\mathbb {P}}_{{{\mathbf {X}}}^\intercal }) \Vert _F^2 \quad s.t. \quad {{\tilde{P}}}^\intercal {{\tilde{P}}}=I_k, \nonumber \\&\quad =\min _{ {{\tilde{P}}} \in \mathbb {R}^{n \times k} }\Vert {\mathbf {Z}}- {{\tilde{P}}} {{\tilde{P}}}^\intercal {\mathbf {Z}}\Vert _F^2 + \Vert {{\mathbf {Y}}}(I_m-{\mathbb {P}}_{{{\mathbf {X}}}^\intercal }) \Vert _F^2 \quad s.t. \quad {{\tilde{P}}}^\intercal {{\tilde{P}}}=I_k. \end{aligned}$$
(28)

The second equality is obtained from the equivalence between (23) and (24). The third equality is obtained by introducing the second constraint in the cost function and noticing that projection operators are always symmetric, i.e., \(({\mathbb {P}}_{{{\mathbf {X}}}^\intercal })^\intercal = {\mathbb {P}}_{{{\mathbf {X}}}^\intercal }, \) while the last equality follows from the definition of \({\mathbf {Z}}\) given in (15) and the orthogonality of the columns of the two terms. Problem (28) is a proper orthogonal decomposition problem with the snapshot matrix \({\mathbf {Z}}\). The solution of this proper orthogonal decomposition problem is the matrix \({U}_{{\mathbf {Z}},k}\) (with orthonormal columns) defined in Sect. 4.1, see e.g., (Quarteroni et al. 2015, Proposition 6.1). We thus obtain from (27) that

$$\begin{aligned} \min _{A \in \mathbb {R}^{n \times n} : \text {rank}(A) \le k }\Vert {{\mathbf {Y}}}- A{{\mathbf {X}}}\Vert _F^2 =\Vert {{\mathbf {Y}}}-{U}_{{\mathbf {Z}},k}{{U}_{{\mathbf {Z}},k}}^\intercal {{\mathbf {Y}}}{\mathbb {P}}_{{{\mathbf {X}}}^\intercal } \Vert _F^2 =\Vert {{\mathbf {Y}}}- {\mathbb {P}}_{{\mathbf {Z}},k} {{\mathbf {Y}}}{\mathbb {P}}_{{{\mathbf {X}}}^\intercal } \Vert _F^2 . \end{aligned}$$
(29)

Furthermore, we verify that \(A_k^\star ={U}_{{\mathbf {Z}},k}{W}^\intercal \) with \({W}=({{\mathbf {X}}}^\intercal )^\dag {{\mathbf {Y}}}^\intercal {U}_{{\mathbf {Z}},k}\) is a minimizer of (21). Indeed, since \({{\mathbf {X}}}{{\mathbf {X}}}^\intercal {W}={{\mathbf {X}}}{{\mathbf {X}}}^\intercal ({{\mathbf {X}}}^\intercal )^\dag {{\mathbf {Y}}}^\intercal {U}_{{\mathbf {Z}},k}= {{\mathbf {X}}}{{\mathbf {Y}}}^\intercal {U}_{{\mathbf {Z}},k}\), we check that \(({U}_{{\mathbf {Z}},k},{W})\) is admissible for problem (26). We also check using (24) that \( \Vert {{\mathbf {Y}}}- {U}_{{\mathbf {Z}},k}{W}^\intercal {{\mathbf {X}}}\Vert _F^2=\Vert {{\mathbf {Y}}}- {\mathbb {P}}_{{\mathbf {Z}},k} {{\mathbf {Y}}}{\mathbb {P}}_{{{\mathbf {X}}}^\intercal }\Vert _F^2, \) i.e., that \(({U}_{{\mathbf {Z}},k},{W})\) reaches the minimum given in (29). In consequence, we have shown that problem (21) and equivalently problem (9) admit the minimizer \(A_k^\star ={U}_{{\mathbf {Z}},k}{W}^\intercal = {\mathbb {P}}_{{\mathbf {Z}},k} {{\mathbf {Y}}}{{\mathbf {X}}}^{\dagger }\).

It remains to prove the second part of the theorem, namely the characterization of the approximation error. The sought result follows from standard proper orthogonal decomposition analysis. Indeed, according to (Quarteroni et al. 2015, Proposition 6.1) the first term of the cost function in (28) evaluated at \(A_k^\star \) is \( \Vert {\mathbf {Z}}- {\mathbb {P}}_{{\mathbf {Z}},k} {\mathbf {Z}}\Vert _F^2= \sum _{i=k+1}^m \sigma _{{\mathbf {Z}},i}^2. \)
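Both parts of Theorem 1 can be checked numerically. The following Python/NumPy sketch (the variable names are ours, not the paper's) forms the closed-form minimizer \(A_k^\star ={\mathbb {P}}_{{\mathbf {Z}},k}{{\mathbf {Y}}}{{\mathbf {X}}}^\dagger \) from the SVD of \({\mathbf {Z}}={{\mathbf {Y}}}{\mathbb {P}}_{{{\mathbf {X}}}^\intercal }\), and verifies that the approximation error equals \(\sum _{i>k}\sigma _{{\mathbf {Z}},i}^2+\Vert {{\mathbf {Y}}}(I_m-{\mathbb {P}}_{{{\mathbf {X}}}^\intercal })\Vert _F^2\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 8, 6, 2

# Rank-deficient X so that the projector P_{X^T} is non-trivial.
X = rng.standard_normal((n, 3)) @ rng.standard_normal((3, m))
Y = rng.standard_normal((n, m))

X_pinv = np.linalg.pinv(X)        # X^dagger
P_Xt = X_pinv @ X                 # orthogonal projector onto Im(X^T), m x m
Z = Y @ P_Xt                      # snapshot matrix of the POD problem (28)
U_Z, s_Z, _ = np.linalg.svd(Z)
U_k = U_Z[:, :k]                  # k leading left singular vectors of Z

A_star = U_k @ U_k.T @ Y @ X_pinv  # closed-form rank-k minimizer of (9)

err = np.linalg.norm(Y - A_star @ X, "fro") ** 2
theory = np.sum(s_Z[k:] ** 2) \
       + np.linalg.norm(Y @ (np.eye(m) - P_Xt), "fro") ** 2

assert np.linalg.matrix_rank(A_star) <= k
assert np.isclose(err, theory)     # error characterization of Theorem 1
```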

Proof of Lemma 1

We begin by proving that any minimizer of (21) can be rewritten as \(PQ^\intercal \) where \( P^\intercal P=I_k\). Indeed, the existence of the SVD of \( {\tilde{A}}\) for any minimizer \({\tilde{A}} \in \mathbb {R}^{n \times n }\) guarantees that

$$\begin{aligned} \Vert {{\mathbf {Y}}}- {\tilde{A}} {{\mathbf {X}}}\Vert ^2_F = \Vert {{\mathbf {Y}}}- U_{ {\tilde{A}} }\varSigma _{ {\tilde{A}} }V^\intercal _{ {\tilde{A}} }{{\mathbf {X}}}\Vert ^2_F, \end{aligned}$$

where \(U_{ {\tilde{A}} } \in \mathbb {R}^{n \times k}\) possesses orthonormal columns. Making the identification \( P=U_{ {\tilde{A}} }\) and \( Q=V_{ {\tilde{A}} }\varSigma _{ {\tilde{A}} }\), we verify that \( \Vert {{\mathbf {Y}}}- {\tilde{A}} {{\mathbf {X}}}\Vert ^2_F= \Vert {{\mathbf {Y}}}- PQ^\intercal {{\mathbf {X}}}\Vert ^2_F\) and that \( P\) possesses orthonormal columns. Next, any solution \(PQ^\intercal \) of (21) should satisfy the first-order optimality condition with respect to the jth column denoted \(q_j\) of matrix Q, that is

$$\begin{aligned} 2\left[ -{{\mathbf {X}}}{{\mathbf {Y}}}^\intercal p_j + \sum _{i=1}^k (p_i^\intercal p_j) {{\mathbf {X}}}{{\mathbf {X}}}^\intercal q_i\right] =0, \end{aligned}$$

where the jth column of matrix P is denoted \(p_j\). In particular, a solution with \( P\) possessing orthonormal columns should satisfy \( {{\mathbf {X}}}{{\mathbf {Y}}}^\intercal p_j= {{\mathbf {X}}}{{\mathbf {X}}}^\intercal q_j , \) or in matrix form \({{\mathbf {X}}}{{\mathbf {Y}}}^\intercal P={{\mathbf {X}}}{{\mathbf {X}}}^\intercal Q. \quad \) \(\square \)
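The optimality conditions (22) and (23) can also be verified numerically for the pair \((P,Q)=({U}_{{\mathbf {Z}},k},{W})\) used in the proof of Theorem 1. A Python/NumPy sketch in our own notation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 7, 5, 2
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

X_pinv = np.linalg.pinv(X)
Z = Y @ (X_pinv @ X)                 # Z = Y P_{X^T}
U_k = np.linalg.svd(Z)[0][:, :k]     # k leading left singular vectors of Z

P = U_k                              # orthonormal factor
Q = np.linalg.pinv(X.T) @ Y.T @ U_k  # W = (X^T)^dagger Y^T U_{Z,k}

assert np.allclose(P.T @ P, np.eye(k))        # condition (22): P^T P = I_k
assert np.allclose(X @ Y.T @ P, X @ X.T @ Q)  # condition (23): X Y^T P = X X^T Q
```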

Proof of Proposition 1

We have \(A_k^\star = {\mathbb {P}}_{{\mathbf {Z}},k} {{\mathbf {Y}}}{{\mathbf {X}}}^\dagger ={U}_{{\mathbf {Z}},k} {W}^\intercal \) which implies that

$$\begin{aligned} {W}^\intercal {U}_{{\mathbf {Z}},k}={{U}_{{\mathbf {Z}},k}}^\intercal {{\mathbf {Y}}}{{\mathbf {X}}}^\dagger {U}_{{\mathbf {Z}},k}= {{U}_{{\mathbf {Z}},k}}^\intercal {\mathbb {P}}_{{\mathbf {Z}},k} {{\mathbf {Y}}}{{\mathbf {X}}}^\dagger {U}_{{\mathbf {Z}},k}= {{U}_{{\mathbf {Z}},k}}^\intercal {U}_{{\mathbf {Z}},k} {W}^\intercal {U}_{{\mathbf {Z}},k}. \end{aligned}$$

Using the definition of \(\zeta _i\)’s and \(\xi _i\)’s in (20), since the \(w^r_i\)’s and \(w^\ell _i\)’s are the right and left eigenvectors of \({W}^\intercal {U}_{{\mathbf {Z}},k}\), we verify that

$$\begin{aligned} A^\star _k \zeta _i= {U}_{{\mathbf {Z}},k} {W}^\intercal {U}_{{\mathbf {Z}},k} w^r_i= {U}_{{\mathbf {Z}},k} \lambda _i w^r_i =\lambda _i \zeta _i, \end{aligned}$$

and that

$$\begin{aligned} (A^\star _k)^\intercal \xi _i={W} {{U}_{{\mathbf {Z}},k}}^\intercal {W} w^\ell _i = {W} \lambda _i w^\ell _i=\lambda _i \xi _i. \end{aligned}$$

Finally, \(\xi _i^\intercal \zeta _i =1\) is a sufficient condition so that \(\xi _i^\intercal A_k^\star \zeta _i =\lambda _i. \quad \) \(\square \)
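Proposition 1 underlies the low-complexity eigenvalue algorithm: the nonzero eigenpairs of the \(n \times n\) matrix \(A_k^\star \) follow from an EVD of the small \(k \times k\) matrix \({W}^\intercal {U}_{{\mathbf {Z}},k}\). A Python/NumPy sketch of this reduction (our notation, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 9, 6, 3
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

X_pinv = np.linalg.pinv(X)
Z = Y @ (X_pinv @ X)                 # Z = Y P_{X^T}
U_k = np.linalg.svd(Z)[0][:, :k]
W = np.linalg.pinv(X.T) @ Y.T @ U_k  # W = (X^T)^dagger Y^T U_{Z,k}
A_star = U_k @ W.T                   # rank-k minimizer A_k^star

# Eigen-decomposition of the small k x k matrix W^T U_{Z,k}.
lam, w_r = np.linalg.eig(W.T @ U_k)  # eigenvalues and right eigenvectors
zeta = U_k @ w_r                     # zeta_i = U_{Z,k} w_i^r

assert np.allclose(A_star, U_k @ U_k.T @ Y @ X_pinv)  # two forms of A_k^star agree
assert np.allclose(A_star @ zeta, zeta * lam)         # A_k^star zeta_i = lambda_i zeta_i
```

Only a size-k eigenproblem is solved, instead of a size-n one.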

About this article

Cite this article

Héas, P., Herzet, C. Low-Rank Dynamic Mode Decomposition: An Exact and Tractable Solution. J Nonlinear Sci 32, 8 (2022). https://doi.org/10.1007/s00332-021-09770-w
