Randomized linear algebra for model reduction. Part I: Galerkin methods and error estimation

Abstract

We propose a probabilistic way for reducing the cost of classical projection-based model order reduction methods for parameter-dependent linear equations. A reduced order model is here approximated from its random sketch, which is a set of low-dimensional random projections of the reduced approximation space and the spaces of associated residuals. This approach exploits the fact that the residuals associated with approximations in low-dimensional spaces are also contained in low-dimensional spaces. We provide conditions on the dimension of the random sketch for the resulting reduced order model to be quasi-optimal with high probability. Our approach can be used for reducing both complexity and memory requirements. The provided algorithms are well suited for any modern computational environment. Major operations, except solving linear systems of equations, are embarrassingly parallel. Our version of proper orthogonal decomposition can be computed on multiple workstations with a communication cost independent of the dimension of the full order model. The reduced order model can even be constructed in a so-called streaming environment, i.e., under extreme memory constraints. In addition, we provide an efficient way for estimating the error of the reduced order model, which is not only more efficient than the classical approach but is also less sensitive to round-off errors. Finally, the methodology is validated on benchmark problems.
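
As a rough illustration of the idea (not taken from the paper; the diagonal operator, the dimensions, and the Gaussian sketching matrix below are arbitrary toy choices), the following computation estimates the norm of a reduced-basis residual from a single low-dimensional random projection instead of from the full-order vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 20_000, 20, 300          # full dimension, reduced dimension, sketch dimension (toy values)

d = np.linspace(1.0, 2.0, n)       # toy SPD operator A = diag(d)
b = rng.standard_normal(n)
U_r, _ = np.linalg.qr(rng.standard_normal((n, r)))   # reduced basis with orthonormal columns

# classical Galerkin reduced solution and its full-dimensional residual
a_r = np.linalg.solve(U_r.T @ (d[:, None] * U_r), U_r.T @ b)
residual = b - d * (U_r @ a_r)

# rescaled Gaussian sketching matrix (an oblivious l2 -> l2 subspace embedding)
Theta = rng.standard_normal((k, n)) / np.sqrt(k)
exact = np.linalg.norm(residual)
sketched = np.linalg.norm(Theta @ residual)
print(f"relative deviation of the sketched residual norm: {abs(exact - sketched) / exact:.2e}")
```

In this toy setting the sketched norm deviates from the exact one by roughly the embedding distortion, which shrinks as the sketch dimension k grows.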

Notes

  1. We have \(\forall \mathbf {x} \in \mathcal {S}, \exists \mathbf {y} \in \mathcal {N}\) such that \(\| \mathbf{x} - \mathbf{y} \| \leq \gamma\).

  2. Indeed, \(\exists \mathbf {n}_{0} \in \mathcal {N}\) such that \(\|\mathbf{n} - \mathbf{n}_{0}\| =: \alpha_{1} \leq \gamma\). Let α0 = 1. Then, assuming that \(\|\mathbf {n} - {\sum }^{m}_{i=0} \alpha _i \mathbf {n}_{i} \| =: \alpha _{m+1} \leq \gamma ^{m+1}\), \(\exists \mathbf {n}_{m+1} \in \mathcal {N}\) such that \(\| \frac {1}{\alpha _{m+1}} (\mathbf {n} - {\sum }^{m}_{i=0} \alpha _i \mathbf {n}_{i}) - \mathbf {n}_{m+1} \| \leq \gamma \), which implies \(\|\mathbf {n} - {\sum }^{m+1}_{i=0} \alpha _i \mathbf {n}_{i} \| \leq \alpha _{m+1} \gamma \leq \gamma ^{m+2} \).

References

1. Achlioptas, D.: Database-friendly random projections: Johnson-Lindenstrauss with binary coins. J. Comput. Syst. Sci. 66(4), 671–687 (2003)

2. Ailon, N., Liberty, E.: Fast dimension reduction using Rademacher series on dual BCH codes. Discret. Comput. Geom. 42(4), 615 (2009)

3. Alla, A., Kutz, J.N.: Randomized model order reduction. Tech. report, arXiv:1611.02316 (2016)

4. Baker, C.G., Gallivan, K.A., Dooren, P.V.: Low-rank incremental methods for computing dominant singular subspaces. Linear Algebra Appl. 436(8), 2866–2888 (2012)

5. Balabanov, O., Nouy, A.: Randomized linear algebra for model reduction. Part II: Minimal residual methods and dictionary-based approximation. arXiv:1910.14378 (2019)

6. Bebendorf, M.: Why finite element discretizations can be factored by triangular hierarchical matrices. SIAM J. Numer. Anal. 45(4), 1472–1494 (2007)

7. Bebendorf, M.: Hierarchical Matrices. Springer, Berlin (2008)

8. Benner, P., Cohen, A., Ohlberger, M., Willcox, K. (eds.): Model Reduction and Approximation: Theory and Algorithms. SIAM, Philadelphia, PA (2017)

9. Benner, P., Gugercin, S., Willcox, K.: A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 57(4), 483–531 (2015)

10. Boman, E.G., Hendrickson, B., Vavasis, S.: Solving elliptic finite element systems in near-linear time with support preconditioners. SIAM J. Numer. Anal. 46(6), 3264–3284 (2008)

11. Bourgain, J., Lindenstrauss, J., Milman, V.: Approximation of zonoids by zonotopes. Acta Math. 162(1), 73–141 (1989)

12. Boutsidis, C., Gittens, A.: Improved matrix algorithms via the subsampled randomized Hadamard transform. SIAM J. Matrix Anal. Appl. 34(3), 1301–1340 (2013)

13. Braconnier, T., Ferrier, M., Jouhaud, J.-C., Montagnac, M., Sagaut, P.: Towards an adaptive POD/SVD surrogate model for aeronautic design. Comput. Fluids 40(1), 195–209 (2011)

14. Buhr, A., Engwer, C., Ohlberger, M., Rave, S.: A numerically stable a posteriori error estimator for reduced basis approximations of elliptic equations. In: Oñate, E., Oliver, X., Huerta, A. (eds.) Proceedings of the 11th World Congress on Computational Mechanics, pp. 4094–4102. CIMNE, Barcelona (2014)

15. Buhr, A., Smetana, K.: Randomized local model order reduction. SIAM J. Sci. Comput. 40(4), A2120–A2151 (2018)

16. Casenave, F., Ern, A., Lelièvre, T.: Accurate and online-efficient evaluation of the a posteriori error bound in the reduced basis method. ESAIM: Mathematical Modelling and Numerical Analysis 48(1), 207–229 (2014)

17. Elman, H.C., Silvester, D.J., Wathen, A.J.: Finite Elements and Fast Iterative Solvers: with Applications in Incompressible Fluid Dynamics. Numerical Mathematics and Scientific Computation. Oxford University Press, Oxford (2014)

18. Engquist, B., Ying, L.: Sweeping preconditioner for the Helmholtz equation: hierarchical matrix representation. Commun. Pure Appl. Math. 64(5), 697–735 (2011)

19. Gross, D., Nesme, V.: Note on sampling without replacing from a finite collection of matrices. arXiv:1001.2738 (2010)

20. Haasdonk, B.: Reduced basis methods for parametrized PDEs – a tutorial introduction for stationary and instationary problems. In: Benner, P., Cohen, A., Ohlberger, M., Willcox, K. (eds.) Model Reduction and Approximation, pp. 65–136. SIAM, Philadelphia (2017)

21. Hackbusch, W.: Hierarchical Matrices: Algorithms and Analysis, vol. 49. Springer, Berlin (2015)

22. Halko, N., Martinsson, P.-G., Tropp, J.A.: Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53(2), 217–288 (2011)

23. Hesthaven, J.S., Rozza, G., Stamm, B.: Certified Reduced Basis Methods for Parametrized Partial Differential Equations, 1st edn. SpringerBriefs in Mathematics. Springer, Switzerland (2015)

24. Himpe, C., Leibner, T., Rave, S.: Hierarchical approximate proper orthogonal decomposition. SIAM J. Sci. Comput. 40(5), A3267–A3292 (2018)

25. Hochman, A., Villena, J.F., Polimeridis, A.G., Silveira, L.M., White, J.K., Daniel, L.: Reduced-order models for electromagnetic scattering problems. IEEE Trans. Antennas Propag. 62(6), 3150–3162 (2014)

26. Knezevic, D.J., Peterson, J.W.: A high-performance parallel implementation of the certified reduced basis method. Comput. Methods Appl. Mech. Eng. 200(13), 1455–1466 (2011)

27. Lee, Y.T., Sidford, A.: Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In: 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS), pp. 147–156 (2014)

28. Maday, Y., Nguyen, N.C., Patera, A.T., Pau, S.H.: A general multipurpose interpolation procedure: the magic points. Communications on Pure & Applied Analysis 8(1), 383 (2009)

29. Martinsson, P.-G.: A fast direct solver for a class of elliptic partial differential equations. J. Sci. Comput. 38(3), 316–330 (2009)

30. Oxberry, G.M., Kostova-Vassilevska, T., Arrighi, W., Chand, K.: Limited-memory adaptive snapshot selection for proper orthogonal decomposition. Int. J. Numer. Methods Eng. 109(2), 198–217 (2017)

31. Quarteroni, A., Manzoni, A., Negri, F.: Reduced Basis Methods for Partial Differential Equations: An Introduction, vol. 92. Springer, Berlin (2015)

32. Rozza, G., Huynh, D.B.P., Patera, A.T.: Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations. Arch. Comput. Meth. Eng. 15(3), 229 (2008)

33. Sarlos, T.: Improved approximation algorithms for large matrices via random projections. In: 2006 IEEE 47th Annual Symposium on Foundations of Computer Science (FOCS), pp. 143–152. IEEE (2006)

34. Sirovich, L.: Turbulence and the dynamics of coherent structures. Part I: Coherent structures. Q. Appl. Math. 45(3), 561–571 (1987)

35. Tropp, J.A.: Improved analysis of the subsampled randomized Hadamard transform. Adv. Adapt. Data Anal. 3(1–2), 115–126 (2011)

36. Tropp, J.A.: User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12(4), 389–434 (2012)

37. Tropp, J.A.: An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning 8(1–2), 1–230 (2015)

38. Woodruff, D.P.: Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science 10(1–2), 1–157 (2014)

39. Xia, J., Chandrasekaran, S., Gu, M., Li, X.S.: Superfast multifrontal method for large structured linear systems of equations. SIAM J. Matrix Anal. Appl. 31(3), 1382–1411 (2009)

40. Zahm, O., Nouy, A.: Interpolation of inverse operators for preconditioning parameter-dependent equations. SIAM J. Sci. Comput. 38(2), A1044–A1074 (2016)

Author information

Corresponding author

Correspondence to Anthony Nouy.

Additional information

Communicated by: Anthony Patera


Electronic supplementary material

The supplementary material is provided as a TEX file (21.9 KB).

Appendix

Here we provide the proofs of the propositions and theorems stated in the paper.

Proof of Proposition 2.2 (modified Cea’s lemma)

For all \(\mathbf{x} \in U_{r}\), it holds

$$ \begin{array}{@{}rcl@{}} \alpha_{r}(\mu) \| \mathbf{u}_{r}(\mu)- \mathbf{x} \|_{U} &\leq& \| \mathbf{r}(\mathbf{u}_{r}(\mu);\mu)- \mathbf{r}(\mathbf{x};\mu)\|_{U_{r}^{\prime}} \leq \| \mathbf{r}(\mathbf{u}_{r}(\mu);\mu) \|_{U_{r}^{\prime}}+ \| \mathbf{r}(\mathbf{x}; \mu) \|_{U_{r}^{\prime}}\\ &=& \| \mathbf{r}(\mathbf{x}; \mu) \|_{U_{r}^{\prime}} \leq \beta_{r}(\mu) \| \mathbf{u}(\mu)-\mathbf{x} \|_{U}, \end{array} $$

where the first and last inequalities directly follow from the definitions of αr(μ) and βr(μ), respectively. Now,

$$ \| \mathbf{u}(\mu)-\mathbf{u}_{r}(\mu)\|_{U} \!\leq\! \| \mathbf{u}(\mu)- \mathbf{x} \|_{U}+ \| \mathbf{u}_{r}(\mu)- \mathbf{x} \|_{U} \!\leq\! \| \mathbf{u}(\mu)- \mathbf{x} \|_{U}+ \frac{\beta_{r}(\mu)}{\alpha_{r}(\mu)} \| \mathbf{u}(\mu)- \mathbf{x} \|_{U}, $$

which completes the proof. □

Proof of Proposition 2.3

For all \(\mathbf {a} \in \mathbb {K}^{r}\) and \(\mathbf{x} := \mathbf{U}_{r}\mathbf{a}\), it holds

$$ \begin{array}{ll} \frac{\| \mathbf{A}_{r}(\mu)\mathbf{a} \|}{\| \mathbf{a} \|}&= \underset{\mathbf{z} \in \mathbb{K}^{r} \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{z}, \mathbf{A}_{r}(\mu)\mathbf{a} \rangle|}{\| \mathbf{z} \| \| \mathbf{a} \|} = \underset{ \mathbf{z} \in \mathbb{K}^{r} \backslash \{ \mathbf{0} \}} \max \frac{|\mathbf{z}^{\mathrm{H}}\mathbf{U}_{r}^{\mathrm{H}}\mathbf{A}(\mu)\mathbf{U}_{r}\mathbf{a}|}{\| \mathbf{z} \| \| \mathbf{a} \|} \\ &= \underset{\mathbf{y} \in U_{r} \backslash \{ \mathbf{0} \}} \max \frac{|\mathbf{y}^{\mathrm{H}}\mathbf{A}(\mu)\mathbf{x}|}{\| \mathbf{y} \|_{U} \| \mathbf{x} \|_{U}} = \frac{\| \mathbf{A}(\mu)\mathbf{x} \|_{U^{\prime}_{r}}}{ \| \mathbf{x} \|_{U}}. \end{array} $$

Then the proposition follows directly from the definitions of αr(μ) and βr(μ). □

Proof of Proposition 2.4

We have

$$ \begin{array}{@{}rcl@{}} |s(\mu)- s_{r}^{\text{pd}}(\mu)| &=& |s(\mu) - s_{r}(\mu)+ \langle \mathbf{u}_{r}^{\text{du}}(\mu), \mathbf{r}(\mathbf{u}_{r}(\mu); \mu) \rangle| \\ &=& |\langle \mathbf{l}(\mu), \mathbf{u}(\mu)- \mathbf{u}_{r}(\mu) \rangle+ \langle \mathbf{A} (\mu)^{\mathrm{H}} \mathbf{u}_{r}^{\text{du}}(\mu), \mathbf{u}(\mu)- \mathbf{u}_{r}(\mu) \rangle| \\ &=& |\langle \mathbf{r}^{\text{du}} (\mathbf{u}_{r}^{\text{du}}(\mu); \mu), \mathbf{u}(\mu)- \mathbf{u}_{r}(\mu) \rangle |\\ &\leq& \| \mathbf{r}^{\text{du}} (\mathbf{u}_{r}^{\text{du}}(\mu); \mu) \|_{U^{\prime}} \| \mathbf{u}(\mu) - \mathbf{u}_{r}(\mu) \|_{U}, \end{array} $$

and the result follows from definition (2.10). □

Proof of Proposition 2.5

To prove the first inequality, we notice that \(\mathbf {Q}\mathbf {P}_{U_{r}}\mathbf {U}_{m}\) has rank at most r. Consequently,

$$ \| \mathbf{Q}\mathbf{U}_{m} - {\mathbf{B}^{*}_{r}}\|^{2}_{F}\leq \| \mathbf{Q}\mathbf{U}_{m}- \mathbf{Q}\mathbf{P}_{{U_{r}}}\mathbf{U}_{m} \|^{2}_{F}= \sum\limits^{m}_{i=1} \| \mathbf{u} (\mu^{i}) - \mathbf{P}_{{U_{r}}} \mathbf{u} (\mu^{i})\|^{2}_{U}. $$

For the second inequality, let us denote the i-th column vector of Br by bi. Since \(\mathbf {Q}\mathbf {R}_{U}^{-1}\mathbf {Q}^{\mathrm {H}}= \mathbf {Q}\mathbf {Q}^{\dagger }\), with \(\mathbf{Q}^{\dagger}\) the pseudo-inverse of Q, is the orthogonal projection onto range(Q), we have

$$ \begin{array}{ll} \| \mathbf{Q}\mathbf{U}_{m}- {\mathbf{B}_{r}}\|^{2}_{F} &\geq \|\mathbf{Q} \mathbf{R}_{U}^{-1}\mathbf{Q}^{\mathrm{H}} (\mathbf{Q}\mathbf{U}_{m}- {\mathbf{B}_{r}})\|^{2}_{F}= \sum\limits_{i=1}^{m} \|\mathbf{u} (\mu^{i}) - \mathbf{R}_{U}^{-1}\mathbf{Q}^{\mathrm{H}}{\mathbf{b}_{i}} \|^{2}_{U} \\ &\geq \sum\limits_{i=1}^{m} \|\mathbf{u} (\mu^{i})- \mathbf{P}_{{U_{r}}}\mathbf{u} (\mu^{i}) \|^{2}_{U}. \end{array} $$

Proof of Proposition 3.3

It is clear that \(\langle \cdot , \cdot \rangle ^{\boldsymbol {\Theta }}_{X}\) and \(\langle \cdot , \cdot \rangle ^{\boldsymbol {\Theta }}_{X^{\prime }}\) satisfy (conjugate) symmetry, linearity, and positive semi-definiteness properties. The definiteness of \(\langle \cdot , \cdot \rangle ^{\boldsymbol {\Theta }}_{X}\) on Y and of \(\langle \cdot , \cdot \rangle ^{\boldsymbol {\Theta }}_{X^{\prime }}\) on \(Y^{\prime }\) follows directly from Definition 3.1 and Corollary 3.2. □

Proof of Proposition 3.4

Using Definition 3.1, we have

$$ \begin{array}{@{}rcl@{}} \| \mathbf{y}^{\prime} \|^{\boldsymbol{\Theta}}_{Z^{\prime}} &=& \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{R}_{X}^{-1}\mathbf{y}^{\prime}, \mathbf{x} \rangle^{\boldsymbol{\Theta}}_{X}|}{\| \mathbf{x} \|^{\boldsymbol{\Theta}}_{X}}\leq \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{R}_{X}^{-1}\mathbf{y}^{\prime}, \mathbf{x} \rangle_{X}|+ \varepsilon \| \mathbf{y}^{\prime} \|_{X^{\prime}} \| \mathbf{x} \|_{X}} {\| \mathbf{x} \|^{\boldsymbol{\Theta}}_{X}} \\ &\leq& \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{R}_{X}^{-1}\mathbf{y}^{\prime}, \mathbf{x} \rangle_{X}|+ \varepsilon \| \mathbf{y}^{\prime} \|_{X^{\prime}} \| \mathbf{x} \|_{X}} {\sqrt{1-\varepsilon}\| \mathbf{x} \|_{X}} \\ &\leq& \frac{1}{\sqrt{1-\varepsilon}} \left( \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{y}^{\prime}, \mathbf{x} \rangle|} {\| \mathbf{x} \|_{X}} + \varepsilon\| \mathbf{y}^{\prime} \|_{X^{\prime}} \right), \end{array} $$

which yields the right inequality. To prove the left inequality, we assume that \( \| \mathbf {y}^{\prime } \|_{Z^{\prime }}- \varepsilon \| \mathbf {y}^{\prime } \|_{X^{\prime }}\geq 0\). Otherwise, the relation is obvious because \(\| \mathbf {y}^{\prime } \|^{\boldsymbol {\Theta }}_{Z^{\prime }}\geq 0\). By Definition 3.1, we have

$$ \begin{array}{ll} \| \mathbf{y}^{\prime} \|^{\boldsymbol{\Theta}}_{Z^{\prime}} &=\underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{R}_{X}^{-1}\mathbf{y}^{\prime}, \mathbf{x} \rangle^{\boldsymbol{\Theta}}_{X}|} {\| \mathbf{x} \|^{\boldsymbol{\Theta}}_{X}}\geq \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{R}_{X}^{-1}\mathbf{y}^{\prime}, \mathbf{x} \rangle_{X}|- \varepsilon \| \mathbf{y}^{\prime} \|_{X^{\prime}} \| \mathbf{x} \|_{X} } {\| \mathbf{x} \|^{\boldsymbol{\Theta}}_{X}} \\ &\geq \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{R}_{X}^{-1}\mathbf{y}^{\prime}, \mathbf{x} \rangle_{X}|- \varepsilon \| \mathbf{y}^{\prime} \|_{X^{\prime}} \| \mathbf{x} \|_{X}} {\sqrt{1+\varepsilon}\| \mathbf{x} \|_{X}} \\ &\geq \frac{1}{\sqrt{1+\varepsilon}} \left( \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{y}^{\prime}, \mathbf{x} \rangle|} {\| \mathbf{x} \|_{X}}- \varepsilon \| \mathbf{y}^{\prime} \|_{X^{\prime}} \right), \end{array} $$

which completes the proof. □

Proof of Proposition 3.7

Let us start with the case \(\mathbb {K} = \mathbb {R}\). For the proof, we shall follow standard steps (see, e.g., [38, Section 2.1]). Given a d-dimensional subspace \(V \subseteq \mathbb {R}^{n}\), let \(\mathcal {S} = \{ \mathbf {x} \in V : \ \| \mathbf {x} \| = 1 \}\) be the unit sphere of V. According to [11, Lemma 2.4], for any γ > 0 there exists a γ-net \(\mathcal {N}\) of \(\mathcal {S}\) (see Note 1) satisfying \(\# \mathcal {N} \leq (1+2/\gamma )^{d}\). For η such that 0 < η < 1/2, let \(\boldsymbol {\Theta } \in \mathbb {R}^{k \times n}\) be a rescaled Gaussian or Rademacher matrix with \(k\geq 6\eta ^{-2} (2 d \log (1+2/\gamma )+ \log (1/\delta ))\). By [1, Lemmas 4.1 and 5.1] and a union bound argument, we obtain, for a fixed \(\mathbf{x} \in V\),

$$ \mathbb{P}(|\|\mathbf{x}\|^{2} - \|\boldsymbol{\Theta} \mathbf{x}\|^{2}| \leq \eta \|\mathbf{x}\|^{2} ) \geq 1 -2\exp(-k\eta^{2}/6). $$

Consequently, using a union bound for the probability of success, we have that the event

$$ \left\{\ | \| \mathbf{x} + \mathbf{y} \|^{2} - \|\boldsymbol{\Theta}(\mathbf{x} + \mathbf{y}) \|^{2} | \leq \eta \|\mathbf{x} + \mathbf{y} \|^{2}, \quad \forall \mathbf{x}, \mathbf{y} \in \mathcal{N}\right\}, $$

holds with probability at least 1 − δ. We then deduce that the event

$$ \left\{\ | \langle \mathbf{x} , \mathbf{y} \rangle - \langle \boldsymbol{\Theta} \mathbf{x} , \boldsymbol{\Theta} \mathbf{y} \rangle| \leq \eta, \quad \forall \mathbf{x}, \mathbf{y} \in \mathcal{N}\right\} $$
(7.1)

holds with probability at least 1 − δ. Now, let n be some vector in \(\mathcal {S}\). Assuming γ < 1, it can be proven by induction that \( \mathbf {n} = {\sum }_{i\ge 0} \alpha _{i} \mathbf {n}_{i},\) where \(\mathbf {n}_{i} \in \mathcal {N}\) and \(0 \leq \alpha_{i} \leq \gamma^{i}\) (see Note 2). If (7.1) is satisfied, then

$$ \begin{array}{@{}rcl@{}} \| \boldsymbol{\Theta} \mathbf{n} \|^{2} &=& \sum\limits_{i,j \geq 0} \langle \boldsymbol{\Theta} \mathbf{n}_{i}, \boldsymbol{\Theta}\mathbf{n}_{j} \rangle \alpha_{i}\alpha_{j} \\ &\leq& \sum\limits_{i,j \geq 0} ( \langle \mathbf{n}_{i}, \mathbf{n}_{j} \rangle \alpha_{i} \alpha_{j} + \eta \alpha_{i} \alpha_{j}) = 1 + \eta ( \sum\limits_{i \geq 0} \alpha_{i} )^{2} \le 1 + \frac{\eta}{(1-\gamma)^{2}}, \end{array} $$

and similarly \( \| \boldsymbol {\Theta } \mathbf {n} \|^{2} \geq 1-\frac {\eta }{(1-\gamma )^{2}}\). Therefore, if (7.1) is satisfied, we have

$$ | 1- \| \boldsymbol{\Theta} \mathbf{n} \|^{2}| \leq \eta / (1-\gamma)^{2}. $$
(7.2)

For a given \(\varepsilon \leq 0.5/(1-\gamma)^{2}\), let \(\eta = (1-\gamma)^{2}\varepsilon\). Since (7.2) holds for an arbitrary vector \(\mathbf {n} \in \mathcal {S}\), using the parallelogram identity, we easily obtain that

$$ \ \left| \langle \mathbf{x}, \mathbf{y} \rangle - \langle \boldsymbol{\Theta} \mathbf{x}, \boldsymbol{\Theta} \mathbf{y} \rangle \right|\leq \varepsilon \| \mathbf{x} \| \| \mathbf{y} \| $$
(7.3)

holds for all \(\mathbf{x}, \mathbf{y} \in V\) if (7.1) is satisfied. We conclude that if \(k\geq 6 \varepsilon ^{-2} (1-\gamma )^{-4} (2 d \log (1+2/\gamma )+ \log (1/\delta ))\), then Θ is an \(\ell_2 \to \ell_2\) ε-subspace embedding for V with probability at least 1 − δ. The lower bound for the number of rows of Θ is obtained by taking \(\gamma = \arg \min \limits _{x \in (0,1)}(\log (1+2/x)/(1-x)^{4})\approx 0.0656\).

The statement of the proposition for the case \(\mathbb {K} = \mathbb {C}\) can be deduced from the fact that if Θ is an (ε, δ, 2d) oblivious \(\ell_2 \to \ell_2\) subspace embedding for \(\mathbb {K} = \mathbb {R}\), then it is an (ε, δ, d) oblivious \(\ell_2 \to \ell_2\) subspace embedding for \(\mathbb {K} = \mathbb {C}\). To show this, we first note that the real part and the imaginary part of any vector from a d-dimensional subspace \(V^{*} \subseteq \mathbb {C}^{n}\) belong to a certain subspace \(W \subseteq \mathbb {R}^{n}\) with \(\dim (W) \leq 2d\). Further, one can show that if Θ is an \(\ell_2 \to \ell_2\) ε-subspace embedding for W, then it is an \(\ell_2 \to \ell_2\) ε-subspace embedding for \(V^{*}\). A detailed proof of this fact is provided in the supplementary material. □
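
An informal numerical check of this statement (not part of the proof; all dimensions below are arbitrary) can be obtained by drawing a rescaled Gaussian or Rademacher matrix and measuring the worst distortion it induces on a fixed low-dimensional subspace.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 5_000, 10, 2_000          # ambient dimension, subspace dimension, sketch dimension (arbitrary)

V, _ = np.linalg.qr(rng.standard_normal((n, d)))    # orthonormal basis of a random d-dimensional subspace

def max_distortion(Theta, V):
    # largest |1 - ||Theta x||^2| over unit-norm x in span(V), from the singular values of Theta V
    s = np.linalg.svd(Theta @ V, compute_uv=False)
    return max(abs(1.0 - s.max() ** 2), abs(1.0 - s.min() ** 2))

gaussian = rng.standard_normal((k, n)) / np.sqrt(k)
rademacher = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)
print("observed epsilon, Gaussian:  ", max_distortion(gaussian, V))
print("observed epsilon, Rademacher:", max_distortion(rademacher, V))
```

Both observed distortions decrease roughly like the square root of d/k as the sketch dimension grows.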

Proof of Proposition 3.9

Let \(\boldsymbol {\Theta } \in \mathbb {R}^{k\times n}\) be a P-SRHT matrix, let V be an arbitrary d-dimensional subspace of \(\mathbb {K}^{n}\), and let \(\mathbf {V} \in \mathbb {K}^{n\times d}\) be a matrix whose columns form an orthonormal basis of V. Recall that Θ is equal to the first n columns of the matrix \(\boldsymbol {\Theta }^{*} = k^{-1/2} (\mathbf {R}\mathbf {H}_{s}\mathbf {D}) \in \mathbb {R}^{k \times s}\). Next we shall use the fact that, for any matrix \(\mathbf{V}^{*} \in \mathbb {K}^{s\times d}\) with orthonormal columns, all singular values of the matrix \(\boldsymbol{\Theta}^{*}\mathbf{V}^{*}\) belong to the interval \([\sqrt {1-\varepsilon }, \sqrt {1+\varepsilon }]\) with probability at least 1 − δ. This result is basically a restatement of [12, Lemma 4.1] and [35, Theorem 3.1], including the complex case and with improved constants. It can be shown to hold by mimicking the proof in [35] with a few additional algebraic operations. For a detailed proof of the statement, see the supplementary material.

By taking \(\mathbf{V}^{*}\) with the first n × d block equal to V and zeros elsewhere, and using the fact that \(\boldsymbol{\Theta}^{*}\mathbf{V}^{*}\) and ΘV have the same singular values, we obtain that

$$ | \| \mathbf{V}\mathbf{z} \|^{2} - \| \boldsymbol{\Theta}\mathbf{V}\mathbf{z} \|^{2} | = |\mathbf{z}^{\mathrm{H}} (\mathbf{I} - \mathbf{V}^{\mathrm{H}}\boldsymbol{\Theta}^{\mathrm{H}}\boldsymbol{\Theta}\mathbf{V}) \mathbf{z}| \leq \varepsilon \| \mathbf{z} \|^{2} = \varepsilon \| \mathbf{V}\mathbf{z} \|^{2}, \quad \forall \mathbf{z} \in \mathbb{K}^{d} $$
(4)

holds with probability at least 1 − δ. Using the parallelogram identity, it can be easily proven that relation (4) implies

$$ \left| \langle \mathbf{x}, \mathbf{y} \rangle - \langle \boldsymbol{\Theta}\mathbf{x}, \boldsymbol{\Theta}\mathbf{y} \rangle \right| \leq \varepsilon \| \mathbf{x} \| \| \mathbf{y} \|, \quad \forall \mathbf{x}, \mathbf{y} \in V. $$

We conclude that Θ is an (ε, δ, d) oblivious \(\ell_2 \to \ell_2\) subspace embedding. □
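
For illustration only, a dense construction of a P-SRHT matrix along the lines of the description above could look as follows; the row-sampling scheme and the helper name p_srht are assumptions made for this sketch, and a practical implementation would apply the Hadamard transform with a fast O(s log s) algorithm rather than forming H_s explicitly.

```python
import numpy as np
from scipy.linalg import hadamard

def p_srht(k, n, rng):
    """Dense P-SRHT sketching matrix: first n columns of k**-0.5 * R @ H_s @ D (illustrative)."""
    s = 1 << (n - 1).bit_length()                # smallest power of two with s >= n
    D = rng.choice([-1.0, 1.0], size=s)          # random signs (diagonal of D)
    H = hadamard(s).astype(float)                # Walsh-Hadamard matrix with +-1 entries
    rows = rng.choice(s, size=k, replace=False)  # R: a selection of k rows
    return (H[rows] * D)[:, :n] / np.sqrt(k)

rng = np.random.default_rng(2)
Theta = p_srht(200, 1500, rng)
x = rng.standard_normal(1500)
print(np.linalg.norm(Theta @ x) / np.linalg.norm(x))   # should be close to 1
```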

Proof of Proposition 3.11

Let V be any d-dimensional subspace of X and let \(V^{*} := \{ \mathbf{Q}\mathbf{x} : \mathbf{x} \in V \}\). Since \(\langle \cdot , \cdot \rangle_{U} = \langle \mathbf{Q}\cdot , \mathbf{Q}\cdot \rangle_{2}\) and \(\langle \cdot , \cdot \rangle ^{\boldsymbol {\Theta }}_{U} = \langle \mathbf {Q} \cdot , \mathbf {Q} \cdot \rangle _{2}^{\boldsymbol {\Omega }}\), the sketching matrix Θ is an ε-embedding for V if and only if Ω is an ε-embedding for \(V^{*}\). It follows from the definition of Ω that Ω is an ε-embedding for \(V^{*}\) with probability at least 1 − δ, which completes the proof. □
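
A concrete way to realize such a sketch of the U-inner product (a minimal illustration, assuming R_U admits a Cholesky factorization; the toy R_U and the dimensions are arbitrary) is to compose a factor Q with R_U = Q^H Q with an ℓ2 → ℓ2 embedding Ω, i.e., Θ = ΩQ.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 2_000, 400                  # toy dimensions

# toy SPD matrix R_U defining the U-inner product <x, y>_U = y^T R_U x
M = rng.standard_normal((n, n)) / np.sqrt(n)
R_U = np.eye(n) + M.T @ M
Q = np.linalg.cholesky(R_U).T      # factor with R_U = Q^T Q, so ||x||_U = ||Q x||

Omega = rng.standard_normal((k, n)) / np.sqrt(k)   # oblivious l2 -> l2 embedding (Gaussian)
Theta = Omega @ Q                                  # candidate U -> l2 embedding

x = rng.standard_normal(n)
print("exact    ||x||_U^2:", x @ (R_U @ x))
print("sketched ||x||_U^2:", np.sum((Theta @ x) ** 2))
```

Only the product Theta = Omega @ Q has to be stored; Q itself is never applied online.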

Proof of Proposition 4.1 (sketched Cea’s lemma)

The proof exactly follows the one of Proposition 2.2 with \(\| \cdot \|_{U_{r}^{\prime }}\) replaced by \(\| \cdot \|^{\boldsymbol {\Theta }}_{U_{r}^{\prime }}\). □

Proof of Proposition 4.2

According to Proposition 3.4, and by definition of ar(μ), we have

$$ \begin{array}{ll} \alpha^{\boldsymbol{\Theta}}_{r}(\mu) &=\underset{\mathbf{x} \in U_{r} \backslash \{ \mathbf{0} \}} \min \frac{\| \mathbf{A}(\mu)\mathbf{x} \|^{\boldsymbol{\Theta}}_{U_{r}^{\prime}}}{\| \mathbf{x} \|_{U}}\geq \frac{1}{\sqrt{1+\varepsilon}}\underset{\mathbf{x} \in U_{r} \backslash \{ \mathbf{0} \}} \min \frac{(\| \mathbf{A}(\mu)\mathbf{x} \|_{U_{r}^{\prime}}- \varepsilon\| \mathbf{A}(\mu)\mathbf{x}\|_{U^{\prime}})}{\| \mathbf{x} \|_{U}} \\ &\geq \frac{1}{\sqrt{1+\varepsilon}}(1-\varepsilon a_{r}(\mu)) \underset{\mathbf{x} \in U_{r} \backslash \{ \mathbf{0} \}} \min \frac{\| \mathbf{A}(\mu)\mathbf{x} \|_{U_{r}^{\prime}}}{\| \mathbf{x} \|_{U}}. \end{array} $$

Similarly,

$$ \begin{array}{ll} \beta^{\boldsymbol{\Theta}}_{r}(\mu) &= \underset{ \mathbf{x} \in \left( \text{span} \{ \mathbf{u}(\mu) \}+ U_{r} \right) \backslash \{ \mathbf{0} \}} \max \frac{ \| \mathbf{A}(\mu)\mathbf{x} \|^{\boldsymbol{\Theta}}_{U_{r}^{\prime}}}{\| \mathbf{x} \|_{U}} \\ &\leq \frac{1}{\sqrt{1-\varepsilon}} \underset{\mathbf{x} \in \left( \text{span} \{ \mathbf{u}(\mu) \} + U_{r} \right) \backslash \{ \mathbf{0} \}} \max \frac{\| \mathbf{A}(\mu)\mathbf{x} \|_{U_{r}^{\prime}}+ \varepsilon\| \mathbf{A}(\mu)\mathbf{x} \|_{U^{\prime}}}{\| \mathbf{x} \|_{U}} \\ &\leq \frac{1}{\sqrt{1-\varepsilon}} \left( \underset{\mathbf{x} \in \left( \text{span} \{ \mathbf{u}(\mu) \}+ U_{r} \right) \backslash \{ \mathbf{0} \}} \max \frac{\| \mathbf{A}(\mu)\mathbf{x} \|_{U_{r}^{\prime}}}{\| \mathbf{x} \|_{U}}+ \varepsilon \underset{\mathbf{x} \in U \backslash \{ \mathbf{0} \}} \max \frac{\| \mathbf{A}(\mu)\mathbf{x} \|_{U^{\prime}}}{\| \mathbf{x} \|_{U}} \right). \end{array} $$

Proof of Proposition 4.3

Let \(\mathbf {a} \in \mathbb {K}^{r}\) and \(\mathbf{x} := \mathbf{U}_{r}\mathbf{a}\). Then

$$ \begin{array}{ll} \frac{\| \mathbf{A}_{r}(\mu)\mathbf{a} \|}{\| \mathbf{a} \|}&= \underset{\mathbf{z} \in \mathbb{K}^{r} \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{z}, \mathbf{A}_{r}(\mu)\mathbf{a} \rangle|}{\| \mathbf{z} \| \| \mathbf{a} \|} = \underset{ \mathbf{z} \in \mathbb{K}^{r} \backslash \{ \mathbf{0} \}} \max \frac{|\mathbf{z}^{\mathrm{H}}\mathbf{U}_{r}^{\mathrm{H}}\boldsymbol{\Theta}^{\mathrm{H}}\boldsymbol{\Theta}\mathbf{R}_{U}^{-1}\mathbf{A}(\mu)\mathbf{U}_{r}\mathbf{a}|}{\| \mathbf{z} \| \| \mathbf{a} \|} \\ &= \underset{\mathbf{y} \in U_{r} \backslash \{ \mathbf{0} \}} \max \frac{|\mathbf{y}^{\mathrm{H}}\boldsymbol{\Theta}^{\mathrm{H}}\boldsymbol{\Theta}\mathbf{R}_{U}^{-1}\mathbf{A}(\mu)\mathbf{x}|}{\| \mathbf{y} \|^{\boldsymbol{\Theta}}_{U} \| \mathbf{x} \|^{\boldsymbol{\Theta}}_{U}}= \underset{\mathbf{y} \in U_{r} \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{y}, \mathbf{R}_{U}^{-1}\mathbf{A}(\mu)\mathbf{x} \rangle^{\boldsymbol{\Theta}}_{U}|}{\| \mathbf{y} \|^{\boldsymbol{\Theta}}_{U} \| \mathbf{x} \|^{\boldsymbol{\Theta}}_{U}} \\ &= \frac{\| \mathbf{A}(\mu)\mathbf{x} \|^{\boldsymbol{\Theta}}_{U^{\prime}_{r}}}{ \| \mathbf{x} \|^{\boldsymbol{\Theta}}_{U}}. \end{array} $$
(7.4)

By definition,

$$ \sqrt{1-\varepsilon} \| \mathbf{x} \|_{U} \leq \| \mathbf{x} \|^{\boldsymbol{\Theta}}_{U} \leq \sqrt{1+\varepsilon} \| \mathbf{x} \|_{U}. $$
(7.5)

Combining (7.4) and (7.5) we conclude

$$ \frac{1}{\sqrt{1+\varepsilon}}\frac{\| \mathbf{A}(\mu)\mathbf{x} \|^{\boldsymbol{\Theta}}_{U^{\prime}_{r}}}{\| \mathbf{x} \|_{U}} \leq \frac{\|\mathbf{A}_{r}(\mu)\mathbf{a} \|}{\| \mathbf{a} \|}\leq \frac{1}{\sqrt{1-\varepsilon}}\frac{\| \mathbf{A}(\mu)\mathbf{x} \|^{\boldsymbol{\Theta}}_{U^{\prime}_{r}}}{\| \mathbf{x} \|_{U}}. $$

The statement of the proposition follows immediately from the definitions of \(\alpha ^{\boldsymbol {\Theta }}_{r}(\mu )\) and \(\beta ^{\boldsymbol {\Theta }}_{r}(\mu )\). □
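
As a minimal illustration of how the sketched reduced system appearing in this expression can be assembled (toy problem with R_U = I, a diagonal operator, and a Gaussian embedding; this is not the paper's implementation), one can compare the sketched Galerkin solution with the classical one.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, k = 10_000, 15, 1_000          # toy dimensions; R_U = I for simplicity

d = np.linspace(1.0, 3.0, n)         # toy SPD operator A = diag(d)
b = rng.standard_normal(n)
U_r, _ = np.linalg.qr(rng.standard_normal((n, r)))

Theta = rng.standard_normal((k, n)) / np.sqrt(k)                       # oblivious Gaussian embedding
S_U, S_A, S_b = Theta @ U_r, Theta @ (d[:, None] * U_r), Theta @ b     # the random sketch

a_sketched = np.linalg.solve(S_U.T @ S_A, S_U.T @ S_b)                 # U_r^T Theta^T Theta A U_r system
a_galerkin = np.linalg.solve(U_r.T @ (d[:, None] * U_r), U_r.T @ b)    # classical Galerkin system
# relative difference between sketched and classical reduced coordinates (decreases as k grows)
print(np.linalg.norm(a_sketched - a_galerkin) / np.linalg.norm(a_galerkin))
```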

Proof of Proposition 4.4

The proposition directly follows from relations (2.10), (3.1), (3.2), and (4.7). □

Proof of Proposition 4.6

We have

$$ \begin{array}{ll} |s^{\text{pd}}(\mu)- s_{r}^{\text{spd}}(\mu)|&= | \langle \mathbf{u}_{r}^{\text{du}}(\mu), \mathbf{R}_{U}^{-1}\mathbf{r}(\mathbf{u}_{r}(\mu);\mu) \rangle_{U}-\langle \mathbf{u}_{r}^{\text{du}}(\mu), \mathbf{R}_{U}^{-1}\mathbf{r}(\mathbf{u}_{r}(\mu);\mu) \rangle^{\boldsymbol{\Theta}}_{U}| \\ &\leq \varepsilon \| \mathbf{r}(\mathbf{u}_{r}(\mu); \mu) \|_{U^{\prime}} \| \mathbf{u}_{r}^{\text{du}}(\mu) \|_{U} \\ &\leq \varepsilon \| \mathbf{r}(\mathbf{u}_{r}(\mu);\mu) \|_{U^{\prime}} \frac{\| \mathbf{A}(\mu)^{\mathrm{H}} \mathbf{u}_{r}^{\text{du}}(\mu) \|_{U^{\prime}}}{\eta(\mu)} \\ &\leq \varepsilon \| \mathbf{r}(\mathbf{u}_{r}(\mu);\mu) \|_{U^{\prime}} \frac{\| \mathbf{r}^{\text{du}}(\mathbf{u}_{r}^{\text{du}}(\mu);\mu)\|_{U^{\prime}}+ \| \mathbf{l}(\mu) \|_{U^{\prime}}}{\eta (\mu)}, \end{array} $$
(7.6)

and (4.10) follows by combining (7.6) with (2.15). □

Proof of Proposition 5.1

In total, there are at most \(\binom{m}{r}\) r-dimensional subspaces that could be spanned from m snapshots. Therefore, by using the definition of Θ, the fact that \(\dim (Y_{r}(\mu ))\leq 2 r+1\), and a union bound for the probability of success, we deduce that Θ is a \(U \to \ell_2\) ε-subspace embedding for Yr(μ), for a fixed \(\mu \in \mathcal {P}_{\text {train}}\), with probability at least \(1 - m^{-1}\delta\). The proposition then follows from another union bound. □

Proof of Proposition 5.4

We have

$$ {\varDelta}^{\text{POD}}(V) = \frac{1}{m} \| \mathbf{U}_{m}^{\boldsymbol{\Theta}} - \boldsymbol{\Theta} \mathbf{P}^{\boldsymbol{\Theta}}_{V} \mathbf{U}_{m}\|_{F}. $$

Moreover, the matrix \(\boldsymbol {\Theta } \mathbf {P}^{\boldsymbol {\Theta }}_{U_{r}} \mathbf {U}_{m}\) is the rank-r truncated SVD approximation of \(\mathbf {U}_{m}^{\boldsymbol {\Theta }}\). The statements of the proposition can then be derived from the standard properties of the SVD. □
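
To make the computation concrete, a rough sketched-POD illustration (toy snapshot matrix, plain Gaussian ℓ2 embedding rather than a U → ℓ2 embedding, and not the authors' distributed or streaming algorithm) extracts the basis from the SVD of the sketched snapshot matrix, so the full snapshots are only touched to form the sketch and to assemble the basis.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, r, k = 10_000, 100, 10, 500      # toy dimensions

# toy snapshot matrix with rapidly decaying singular values
U_m = (rng.standard_normal((n, 30)) * np.exp(-np.arange(30.0))) @ rng.standard_normal((30, m))

Theta = rng.standard_normal((k, n)) / np.sqrt(k)       # Gaussian l2 -> l2 embedding (illustrative)
sketch = Theta @ U_m                                    # k x m sketch of the snapshots

_, sing, Vt = np.linalg.svd(sketch, full_matrices=False)
T_r = Vt[:r].T                                          # r dominant right singular vectors
U_r = U_m @ (T_r / sing[:r])                            # sketched POD basis, U_r = U_m T_r diag(1/sigma_i)

# projection error of the snapshots onto span(U_r), measured in the sketched norm;
# (Theta @ U_r) @ (sing[:r] * T_r).T equals the rank-r truncated SVD of the sketch
P_sketch = (Theta @ U_r) @ (sing[:r] * T_r).T
print(np.linalg.norm(sketch - P_sketch) / np.linalg.norm(sketch))
```

The printed error is small here only because the toy snapshots have rapidly decaying singular values.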

Proof of Theorem 5.5

Clearly, if Θ is a \(U \to \ell_2\) ε-subspace embedding for Y, then \(\text {rank}(\mathbf {U}^{\boldsymbol {\Theta }}_{m})\geq r\). Therefore Ur is well-defined. Let \(\{( \lambda _{i}, \mathbf {t}_{i}) \}_{i=1}^{l}\) and Tr be given by Definition 5.3. In general, \(\mathbf {P}^{\boldsymbol {\Theta }}_{U_{r}}\) defined by (5.3) may not be unique. Let us further assume that \(\mathbf {P}^{\boldsymbol {\Theta }}_{U_{r}}\) is provided for \(\mathbf{x} \in U_{m}\) by \( \mathbf {P}^{\boldsymbol {\Theta }}_{U_{r}}\mathbf {x}:= \mathbf {U}_{r}\mathbf {U}_{r}^{\mathrm {H}}\boldsymbol {\Theta }^{\mathrm {H}}\boldsymbol {\Theta }\mathbf {x}, \) where \(\mathbf {U}_{r}= \mathbf {U}_{m}[\frac {1}{\sqrt {\lambda _{1}}}\mathbf {t}_{1}, ..., \frac {1}{\sqrt {\lambda _{r}}}\mathbf {t}_{r}]\). Observe that \( \mathbf {P}^{\boldsymbol {\Theta }}_{U_{r}} \mathbf {U}_{m} = \mathbf {U}_{m} \mathbf {T}_{r} \mathbf {T}_{r}^{\mathrm {H}}. \) For the first part of the theorem, we establish the following inequalities. Let \(\mathbf {Q} \in \mathbb {K}^{n\times n}\) denote the adjoint of a Cholesky factor of RU; then

$$ \begin{array}{@{}rcl@{}} &&\frac{1}{m} \sum\limits_{i=1}^{m} \| (\mathbf{I} -\mathbf{P}_{Y})(\mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i})) \|_{U}^{2}= \frac{1}{m} \| \mathbf{Q}(\mathbf{I} -\mathbf{P}_{Y})\mathbf{U}_{m} (\mathbf{I} - \mathbf{T}_{r}\mathbf{T}_{r}^{\mathrm{H}}) \|_{F}^{2}\\ &\leq& \frac{1}{m} \| \mathbf{Q}(\mathbf{I}- \mathbf{P}_{Y})\mathbf{U}_{m}\|_{F}^{2} \| \mathbf{I}- \mathbf{T}_{r}\mathbf{T}_{r}^{\mathrm{H}} \|^{2}= {\varDelta}_{Y} \| \mathbf{I}- \mathbf{T}_{r}\mathbf{T}_{r}^{\mathrm{H}} \|^{2}\leq {\varDelta}_{Y}, \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} &&\frac{1}{m} \sum\limits_{i=1}^{m} \left( \| (\mathbf{I}- \mathbf{P}_{Y})(\mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i})) \|^{\boldsymbol{\Theta}}_{U} \right)^{2}= \frac{1}{m} \| \boldsymbol{\Theta}(\mathbf{I}- \mathbf{P}_{Y})\mathbf{U}_{m}(\mathbf{I}- \mathbf{T}_{r}\mathbf{T}_{r}^{\mathrm{H}}) \|_{F}^{2} \\ &\leq& \frac{1}{m} \| \boldsymbol{\Theta}(\mathbf{I}- \mathbf{P}_{Y})\mathbf{U}_{m} \|_{F}^{2} \| \mathbf{I}- \mathbf{T}_{r}\mathbf{T}_{r}^{\mathrm{H}} \|^{2} \leq(1+\varepsilon) {\varDelta}_{Y} \| \mathbf{I}- \mathbf{T}_{r}\mathbf{T}_{r}^{\mathrm{H}} \|^{2}\leq (1+\varepsilon){\varDelta}_{Y}. \end{array} $$

Now, we have

$$ \begin{array}{@{}rcl@{}} &&\frac{1}{m} \sum\limits_{i=1}^{m} \| \mathbf{u}(\mu^{i})- \mathbf{P}_{U_{r}}\mathbf{u}(\mu^{i}) \|_{U}^{2}\leq \frac{1}{m} \sum\limits_{i=1}^{m} \| \mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i}) \|_{U}^{2} \\ &=& \frac{1}{m} \sum\limits_{i=1}^{m} \left( \| \mathbf{P}_{Y}(\mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i})) \|_{U}^{2}+ \| (\mathbf{I}- \mathbf{P}_{Y})(\mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i})) \|_{U}^{2} \right) \\ & \leq& \frac{1}{m} \sum\limits_{i=1}^{m} \| \mathbf{P}_{Y}(\mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i})) \|_{U}^{2}+ {\varDelta}_{Y} \leq \frac{1}{m} \frac{1}{1-\varepsilon} \sum\limits_{i=1}^{m} \left( \| \mathbf{P}_{Y}(\mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i})) \|^{\boldsymbol{\Theta}}_{U} \right)^{2}+ {\varDelta}_{Y} \\ &\leq& \frac{1}{1-\varepsilon} \frac{1}{m} \sum\limits_{i=1}^{m} 2 \left( \left( \| \mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i}) \|^{\boldsymbol{\Theta}}_{U} \right)^{2} + \left( \| (\mathbf{I}- \mathbf{P}_{Y})(\mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i})) \|^{\boldsymbol{\Theta}}_{U} \right)^{2} \right)+ {\varDelta}_{Y} \\ &\leq& \frac{1}{1-\varepsilon} \frac{1}{m} \sum\limits_{i=1}^{m} 2 \left( \| \mathbf{u}(\mu^{i})- \mathbf{P}_{U^{*}_{r}}\mathbf{u}(\mu^{i}) \|^{\boldsymbol{\Theta}}_{U} \right)^{2}+ (\frac{2(1+\varepsilon)}{1-\varepsilon}+1){\varDelta}_{Y} \\ &\leq& \frac{2(1+\varepsilon)}{1-\varepsilon} \frac{1}{m} \sum\limits_{i=1}^{m} \| \mathbf{u}(\mu^{i})- \mathbf{P}_{U^{*}_{r}}\mathbf{u}(\mu^{i}) \|_{U}^{2}+ (\frac{2(1+\varepsilon)}{1-\varepsilon}+1){\varDelta}_{Y}, \end{array} $$

which is equivalent to (5.7).

The second part of the theorem can be proved as follows. Assume that Θ is a \(U \to \ell_2\) ε-subspace embedding for Um; then

$$ \begin{array}{@{}rcl@{}} &&\frac{1}{m} \sum\limits_{i=1}^{m} \| \mathbf{u}(\mu^{i})- \mathbf{P}_{U_{r}}\mathbf{u}(\mu^{i}) \|_{U}^{2} \leq \frac{1}{m} \sum\limits_{i=1}^{m} \| \mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i}) \|_{U}^{2}\\ &\leq& \frac{1}{m} \frac{1}{1-\varepsilon}\sum\limits_{i=1}^{m} \left( \| \mathbf{u}(\mu^{i})- \mathbf{P}^{\boldsymbol{\Theta}}_{U_{r}}\mathbf{u}(\mu^{i}) \|^{\boldsymbol{\Theta}}_{U} \right)^{2} \leq \frac{1}{m} \frac{1}{1-\varepsilon} \sum\limits_{i=1}^{m} \left( \| \mathbf{u}(\mu^{i})- \mathbf{P}_{U^{*}_{r}}\mathbf{u}(\mu^{i}) \|^{\boldsymbol{\Theta}}_{U} \right)^{2}\\ &\leq& \frac{1}{m} \frac{1+\varepsilon}{1-\varepsilon} \sum\limits_{i=1}^{m} \| \mathbf{u}(\mu^{i})- \mathbf{P}_{U^{*}_{r}}\mathbf{u}(\mu^{i}) \|_{U}^{2}, \end{array} $$

which completes the proof. □

Cite this article
Cite this article

Balabanov, O., Nouy, A. Randomized linear algebra for model reduction. Part I: Galerkin methods and error estimation. Adv Comput Math 45, 2969–3019 (2019). https://doi.org/10.1007/s10444-019-09725-6
