
Offline-Enhanced Reduced Basis Method Through Adaptive Construction of the Surrogate Training Set


Abstract

The reduced basis method (RBM) is a popular certified model reduction approach for solving parametrized partial differential equations. One critical stage of the offline portion of the algorithm is a greedy algorithm, requiring maximization of an error estimate over parameter space. In practice this maximization is usually performed by replacing the parameter domain continuum with a discrete “training” set. When the dimension of parameter space is large, it is necessary to significantly increase the size of this training set in order to effectively search parameter space. Large training sets diminish the attractiveness of RBM algorithms since this proportionally increases the cost of the offline phase. In this work we propose novel strategies for offline RBM algorithms that mitigate the computational difficulty of maximizing error estimates over a training set. The main idea is to identify a subset of the training set, a “surrogate training set” (STS), on which to perform greedy algorithms. The STS we construct is much smaller in size than the full training set, yet our examples suggest that it is accurate enough to induce the solution manifold of interest at the current offline RBM iteration. We propose two algorithms to construct the STS: our first algorithm, the successive maximization method, is inspired by inverse transform sampling for non-standard univariate probability distributions. The second constructs an STS by identifying pivots in the Cholesky decomposition of an approximate error correlation matrix. We demonstrate the algorithm through numerical experiments, showing that it is capable of accelerating offline RBM procedures without degrading accuracy, assuming that the solution manifold has rapidly decaying Kolmogorov width.


Notes

  1. In implementations, in order to ameliorate ill-conditioning issues that may arise in (2.7), we apply the Gram–Schmidt process with respect to the \((\cdot , \cdot )_{X}\) inner product each time a new snapshot \(u^{\mathcal {N}}(\varvec{\mu }^{{N}})\) is generated, obtaining a \((\cdot , \cdot )_{X}\)-orthonormal basis \(\{\xi _{{N}}^{\mathcal {N}}\}_{{N} = 1}^{N_\mathrm{max}}\). We omit explicit notation for this orthogonalization procedure.

  2. “Offline” is a standard descriptor for this general portion of the full RBM algorithm; see the “Appendix”.

  3. Available for download at http://www.ians.uni-stuttgart.de/MoRePaS/software/index.html.

References

  1. Babuška, I., Nobile, F., Tempone, R.: A stochastic collocation method for elliptic partial differential equations with random input data. SIAM Rev. 52(2), 317–355 (2010)

  2. Barrault, M., Maday, Y., Nguyen, N.C., Patera, A.T.: An ‘empirical interpolation’ method: application to efficient reduced-basis discretization of partial differential equations. C. R. Math. Acad. Sci. Paris 339(9), 667–672 (2004)

  3. Berkooz, G., Holmes, P., Lumley, J.L.: The proper orthogonal decomposition in the analysis of turbulent flows. Annu. Rev. Fluid Mech. 25, 539–575 (1993)

  4. Binev, P., Cohen, A., Dahmen, W., DeVore, R., Petrova, G., Wojtaszczyk, P.: Convergence rates for greedy algorithms in reduced basis methods. SIAM J. Math. Anal. 43(3), 1457–1472 (2011)

  5. Chen, P., Quarteroni, A.: Accurate and efficient evaluation of failure probability for partial differential equations with random input data. Comput. Methods Appl. Mech. Eng. 267, 233–260 (2013)

  6. Chen, P., Quarteroni, A., Rozza, G.: A weighted reduced basis method for elliptic partial differential equations with random input data. SIAM J. Numer. Anal. 51(6), 3163–3185 (2013)

  7. Chen, P., Quarteroni, A., Rozza, G.: Multilevel and weighted reduced basis method for stochastic optimal control problems constrained by Stokes equations. Numer. Math. 133(1), 67–102 (2016)

  8. Chen, Y.: A certified natural-norm successive constraint method for parametric inf–sup lower bounds. Appl. Numer. Math. 99, 98–108 (2016)

  9. Chen, Y., Hesthaven, J.S., Maday, Y., Rodríguez, J.: Certified reduced basis methods and output bounds for the harmonic Maxwell’s equations. SIAM J. Sci. Comput. 32(2), 970–996 (2010)

  10. Feldmann, P., Freund, R.W.: Efficient linear circuit analysis by Padé approximation via the Lanczos process. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 14(5), 639–649 (1995)

  11. Grepl, M.A., Maday, Y., Nguyen, N.C., Patera, A.T.: Efficient reduced-basis treatment of nonaffine and nonlinear partial differential equations. ESAIM Math. Model. Numer. Anal. 41(3), 575–605 (2007)

  12. Grepl, M.A., Patera, A.T.: A posteriori error bounds for reduced-basis approximations of parametrized parabolic partial differential equations. ESAIM Math. Model. Numer. Anal. 39(1), 157–181 (2005)

  13. Gugercin, S., Antoulas, A.C.: A survey of model reduction by balanced truncation and some new results. Int. J. Control 77(8), 748–766 (2004)

  14. Haasdonk, B.: Reduced basis methods for parametrized PDEs: a tutorial introduction for stationary and instationary problems. Reduced Order Modelling, Luminy Book Series (2014)

  15. Haasdonk, B.: RBmatlab (2016)

  16. Haasdonk, B., Dihlmann, M., Ohlberger, M.: A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space. Math. Comput. Model. Dyn. Syst. 17(4), 423–442 (2011)

  17. Harbrecht, H., Peters, M., Schneider, R.: On the low-rank approximation by the pivoted Cholesky decomposition. Appl. Numer. Math. 62(4), 428–440 (2012)

  18. Hesthaven, J.S., Stamm, B., Zhang, S.: Efficient greedy algorithms for high-dimensional parameter spaces with applications to empirical interpolation and reduced basis methods. ESAIM Math. Model. Numer. Anal. 48(1), 259–283 (2014)

  19. Huynh, D.B.P., Knezevic, D.J., Chen, Y., Hesthaven, J.S., Patera, A.T.: A natural-norm successive constraint method for inf–sup lower bounds. Comput. Methods Appl. Mech. Eng. 199(29–32), 1963–1975 (2010)

  20. Huynh, D.B.P., Rozza, G., Sen, S., Patera, A.T.: A successive constraint linear optimization method for lower bounds of parametric coercivity and inf–sup stability constants. C. R. Math. Acad. Sci. Paris 345(8), 473–478 (2007)

  21. Jiang, J., Chen, Y., Narayan, A.: A goal-oriented reduced basis methods-accelerated generalized polynomial chaos algorithm. SIAM/ASA J. Uncertain. Quantif. 4(1), 1398–1420 (2016)

  22. Lassila, T., Quarteroni, A., Rozza, G.: A reduced basis model with parametric coupling for fluid-structure interaction problems. SIAM J. Sci. Comput. 34(2), A1187–A1213 (2012)

  23. Maday, Y., Patera, A.T., Turinici, G.: Global a priori convergence theory for reduced-basis approximations of single-parameter symmetric coercive elliptic partial differential equations. C. R. Math. Acad. Sci. Paris 335(3), 289–294 (2002)

  24. Moore, B.C.: Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Autom. Control 26(1), 17–32 (1981)

  25. Noor, A.K.: Recent advances in reduction methods for nonlinear problems. Comput. Struct. 13(1–3), 31–44 (1981)

  26. Noor, A.K., Peters, J.M.: Reduced basis technique for nonlinear analysis of structures. AIAA J. 18(4), 455–462 (1980)

  27. Patera, A., Rozza, G.: Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations. Copyright MIT (2007)

  28. Pinkus, A.: \(n\)-Widths in Approximation Theory. Ergebnisse der Mathematik und ihrer Grenzgebiete (3), vol. 7. Springer-Verlag, Berlin (1985)

  29. Porsching, T.A.: Estimation of the error in the reduced basis method solution of nonlinear equations. Math. Comput. 45(172), 487–496 (1985)

  30. Porsching, T.A., Lee, M.-Y.L.: The reduced basis method for initial value problems. SIAM J. Numer. Anal. 24(6), 1277–1287 (1987)

  31. Quarteroni, A., Rozza, G.: Numerical solution of parametrized Navier–Stokes equations by reduced basis methods. Numer. Methods Partial Differ. Equ. 23(4), 923–948 (2007)

  32. Quarteroni, A., Rozza, G., Manzoni, A.: Certified reduced basis approximation for parametrized partial differential equations and applications. J. Math. Ind. 1, Art. 3 (2011)

  33. Rozza, G., Huynh, D.B.P., Patera, A.T.: Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations: application to transport and continuum mechanics. Arch. Comput. Methods Eng. 15(3), 229–275 (2008)

  34. Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. Society for Industrial and Applied Mathematics (2003)

  35. Sen, S.: Reduced-basis approximation and a posteriori error estimation for many-parameter heat conduction problems. Numer. Heat Transf. Part B Fundam. 54(5), 369–389 (2008)

  36. Urban, K., Volkwein, S., Zeeb, O.: Greedy sampling using nonlinear optimization. In: Reduced Order Methods for Modeling and Computational Reduction, pp. 137–157. Springer (2014)

  37. Veroy, K., Patera, A.T.: Certified real-time solution of the parametrized steady incompressible Navier–Stokes equations: rigorous reduced-basis a posteriori error bounds. Int. J. Numer. Methods Fluids 47(8–9), 773–788 (2005)

  38. Willcox, K., Peraire, J.: Balanced model reduction via the proper orthogonal decomposition. AIAA J. 40(11), 2323–2330 (2002)

  39. Xiu, D., Hesthaven, J.S.: High-order collocation methods for differential equations with random inputs. SIAM J. Sci. Comput. 27(3), 1118–1139 (2005)


Author information


Corresponding author

Correspondence to Jiahua Jiang.

Additional information

Y. Chen and J. Jiang were partially supported by National Science Foundation Grants DMS-1216928 and DMS-1719698. A. Narayan is partially supported by NSF DMS-1552238, AFOSR FA9550-15-1-0467, and DARPA N660011524053.

A Classical RBM Specifics: Greedy Algorithms, Efficiency, and Operational Count

This “Appendix” collects the mathematical and algorithmic components of RBM algorithms that are not directly the subject of this manuscript. These specifics are well known in the RBM literature and community; we include them mainly for completeness. Section A.1 discusses the mathematical justification for why the greedy procedure (2.11) is a good selection of parameter snapshots. Section A.2 gives an overview of the RBM procedure and quantifies the computational complexity of the RBM algorithm. Careful scrutiny of this operation count illustrates why RBM algorithms can simulate parameterized problems with \(\mathcal {N}\)-independent complexity in the online phase of the algorithm.

Finally, Sect. A.3 discusses an efficient methodology to compute entries of the approximate Gramian \(\widetilde{G}\) used by (3.4) in the CDM algorithm. This procedure is a relatively straightforward application of the offline–online decomposition already employed by RBM algorithms.

A.1 Greedy and Weak Greedy Algorithms

The best N-dimensional RB space \(X_{N}^{\mathcal {N}} \subset X^{\mathcal {N}}\) for approximating the solution manifold \(u\left( \cdot ; \mathcal {D}\right) \) is, in theory, the one attaining the Kolmogorov N-width \(d_N\) [28]:

$$\begin{aligned} d_N \left[ u\left( \cdot ; \mathcal {D}\right) \right] := \inf _{\begin{array}{c} X_N \subset X^{\mathcal {N}} \\ \dim X_N = N \end{array}}\;\; \sup _{\varvec{\mu } \in \mathcal {D}}\;\; \inf _{v \in X_N} \left\| u(\cdot ; \varvec{\mu }) - v \right\| _X \end{aligned}$$
(A.1)

Identifying an exact infimizer for the outer “inf” is usually infeasible, but a prominent approach is to employ a greedy strategy that locates this N-dimensional space hierarchically. A first sample set \(S_{1} = \{\varvec{\mu }^{1}\}\) is formed by randomly selecting \(\varvec{\mu }^1\) from \(\Xi _\mathrm{train}\); its associated reduced basis space \(X_{1}^{\mathcal {N}} = \text {span}\{u^{\mathcal {N}}(\varvec{\mu }^{1})\}\) is computed. Subsequently, parameter values are greedily chosen as sub-optimal solutions to an \(L^{\infty }(\Xi _\mathrm{train}; X)\) optimization problem [33]: for \(N = 2, \dots , N_\mathrm{max}\), we find

$$\begin{aligned} \varvec{\mu }^{N} = \underset{\varvec{\mu } \in \Xi _\mathrm{train}}{\text {argmax} }~ || u^{\mathcal {N}}(\varvec{\mu }) - u^{\mathcal {N}}_{N-1}(\varvec{\mu })||_{X^{\mathcal {N}}} \end{aligned}$$
(A.2)

where \(u^{\mathcal {N}}_{N-1}(\varvec{\mu })\) is the RB solution (2.8) in the current \((N-1)\)-dimensional subspace. Direct computation of \(u^{\mathcal {N}}(\varvec{\mu })\) for every \(\varvec{\mu }\) in order to solve this optimization problem is impractical. Therefore, an even weaker greedy algorithm is usually employed, in which the error \(||u^{\mathcal {N}}(\varvec{\mu }) - u^{\mathcal {N}}_{N-1}(\varvec{\mu })||_{X}\) is replaced by an inexpensive and computable a posteriori error bound \(\Delta _{N-1}(\varvec{\mu })\) (see the next section). After \(\varvec{\mu }^N\) is identified, the parameter snapshot set and the reduced basis space are augmented as \(S_{N} = S_{N-1} \cup \{\varvec{\mu }^{N}\}\) and \(X^{\mathcal {N}}_{N} = X^{\mathcal {N}}_{N-1} \oplus \text {span}\{u^{\mathcal {N}}(\varvec{\mu }^{N})\}\), respectively.
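
For concreteness, a minimal Python sketch of this weak greedy loop follows; the callables truth_solve and error_estimator are hypothetical stand-ins for the truth solver (2.5) and the a posteriori bound \(\Delta _{N-1}(\varvec{\mu })\), not part of this paper's implementation.

```python
import numpy as np

def weak_greedy(training_set, truth_solve, error_estimator, N_max, tol):
    """Weak greedy selection of snapshot parameters: (A.2) with the true error
    replaced by the a posteriori bound Delta_{N-1}(mu).

    training_set    : list of parameter vectors (the discrete surrogate for D)
    truth_solve     : mu -> high-fidelity snapshot u^N(mu), cost W_s
    error_estimator : (basis, mu) -> Delta_{N-1}(mu); internally performs the
                      cheap RB solve (2.9) and the residual evaluation (A.5)
    """
    rng = np.random.default_rng(0)
    # random initial sample mu^1 and one-dimensional RB space X_1
    S = [training_set[rng.integers(len(training_set))]]
    basis = [truth_solve(S[0])]      # orthonormalize w.r.t. (.,.)_X in practice
    for _ in range(2, N_max + 1):
        # sweep the training set with the inexpensive estimator, not the true error
        estimates = np.array([error_estimator(basis, mu) for mu in training_set])
        n_star = int(np.argmax(estimates))
        if estimates[n_star] < tol:
            break
        S.append(training_set[n_star])                     # S_N = S_{N-1} U {mu^N}
        basis.append(truth_solve(training_set[n_star]))    # augment X_N
    return S, basis
```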

A.2 Offline–Online Decomposition

The last component of RBM that we review in this section is the Offline–Online decomposition procedure [33]. The offline stage, whose complexity depends on \(\mathcal {N}\), is performed only once in preparation for the subsequent online computations, whose complexity is independent of \(\mathcal {N}\). It is in this \(\mathcal {N}\)-independent online stage that RBM achieves certifiable orders-of-magnitude speedup compared with other many-query approaches. This paper addresses acceleration of the offline portion of the RBM algorithm. To put this contribution in context, in this section we perform a detailed complexity analysis of the decomposition.

We let \(N_\mathrm{train} = |\Xi _\mathrm{train}|\) denote the cardinality (size) of \(\Xi _\mathrm{train}\); \(N \le N_\mathrm{max}\) is the dimension of the reduced basis approximation computed in the offline stage. Computation of the lower bound \(\alpha _{LB}^{\mathcal {N}}(\varvec{\mu })\) is accomplished via the Successive Constraint Method [20].

For any new \(\varvec{\mu }\), the online cost of evaluating \(\alpha _{LB}^{\mathcal {N}}(\varvec{\mu })\) is negligible; we use \(W_{\alpha }\) to denote the average cost of evaluating these values over \(\Xi _\mathrm{train}\), including the offline cost. \(W_{s}\) is the operational complexity of solving problem (2.5) once by the chosen numerical method; for most discretizations, \(\mathcal {N}^2 \lesssim W_{s} \le \mathcal {N}^3\). Finally, \(W_{m}\) is the work required to evaluate the \(X^{\mathcal {N}}\)-inner product \((f, g)_{X^{\mathcal {N}}}\), which usually satisfies \(\mathcal {N} \lesssim W_{m} \lesssim \mathcal {N}^2\). Using this notation, we present a rough operation count for the three components of the algorithm.

A.2.1 Online Solve and Its Preparation

The system (2.9) is usually of small size: a set of N linear algebraic equations for N unknowns, with \(N \ll \mathcal {N}\). However, the formation of the stiffness matrix involves \(u^{\mathcal {N}}(\varvec{\mu }^n)\) for \(1 \le n \le N\); direct computation with these quantities requires \(\mathcal {N}\)-dependent complexity. It is the affine parameter assumption (2.4) that allows us to circumvent this \(\mathcal {N}\)-dependent cost in the online stage. By (2.4), the stiffness matrix for (2.9) can be expressed as

$$\begin{aligned} \sum _{m=1}^{N}\sum _{q = 1}^{Q_{a}} \Theta ^{q}(\varvec{\mu })a^{q}\left( u^{\mathcal {N}}\left( \varvec{\mu }^m\right) , u^{\mathcal {N}}\left( \varvec{\mu }^n\right) \right) u_{Nm}^{\mathcal {N}}(\varvec{\mu })&= f\left( u^{\mathcal {N}}\left( \varvec{\mu }^n\right) \right) ,&n&=1, \ldots , N \end{aligned}$$
(A.3)

During the offline stage, we can precompute the \(Q_a\) reduced matrices with entries \(a^{q}\left( u^{\mathcal {N}}\left( \varvec{\mu }^m\right) , u^{\mathcal {N}}\left( \varvec{\mu }^n\right) \right) \), \(q=1, \ldots , Q_a\), at a cost of order \(\mathcal {N}^2 N^2 Q_a\). During the online phase, we need only assemble the reduced stiffness matrix according to (A.3) and solve the reduced \(N \times N\) system. The total online operation count is thus of order \(Q_{a}N^3 + N^4\).
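
The following minimal Python sketch (with hypothetical names, not the implementation used in this paper) illustrates the offline projection of the affine terms and the inexpensive online assembly and solve implied by (A.3).

```python
import numpy as np

def offline_reduce(A_q_full, f_full, basis):
    """Project each affine term A^q and the load f onto the RB basis.

    A_q_full : list of Q_a (N_truth x N_truth) arrays (or sparse matrices)
    f_full   : length-N_truth load vector
    basis    : (N_truth x N) array whose columns span X_N
    This step touches truth-sized objects and is performed once, offline.
    """
    A_q_red = [basis.T @ (A @ basis) for A in A_q_full]   # N x N reduced matrices
    f_red = basis.T @ f_full                              # reduced load, length N
    return A_q_red, f_red

def online_solve(theta, A_q_red, f_red):
    """Online stage: assemble sum_q Theta^q(mu) A^q_red and solve the N x N system."""
    A_mu = sum(t * A for t, A in zip(theta, A_q_red))
    return np.linalg.solve(A_mu, f_red)                   # RB coefficients u_{Nm}(mu)
```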

A.2.2 Error Estimator Calculations

With a cost of order \(Q_a N \mathcal {N}\) in the offline stage, we can calculate the Riesz representers \(\mathcal {C}\) and \(\mathcal {L}_{m}^{q}\), \(1\le m \le N\), \(1 \le q \le Q_{a}\), defined by

$$\begin{aligned} {\left\{ \begin{array}{ll} (\mathcal {C}, v)_{X^{\mathcal {N}}} = f(v) \quad \forall v \in X^{\mathcal {N}} \\ (\mathcal {L}_{m}^{q}, v)_{X^{\mathcal {N}}} = -a^{q}(u^{\mathcal {N}}\left( \varvec{\mu }^m\right) , v) \quad \forall v \in X^{\mathcal {N}}. \end{array}\right. } \end{aligned}$$
(A.4)

Here, we assume that the X-inner product can be “inverted” with cost of order \(\mathcal {N}\), i.e. that the mass matrix is block diagonal. The availability of \(\mathcal {C}\) and \(\mathcal {L}_m^q\) facilitates an Offline–Online decomposition of the term \(||r_N(\cdot ; \varvec{\mu })||_{(X^{\mathcal {N}})'}\) in the error estimate (2.13), because its square can be written as

$$\begin{aligned}&(\mathcal {C}, \mathcal {C})_{X^{\mathcal {N}}} + 2\sum _{q = 1}^{Q_{a}}\sum _{m = 1}^{N}\Theta ^{q}(\varvec{\mu })u^{\mathcal {N}}_{Nm}(\varvec{\mu })(\mathcal {C}, \mathcal {L}_{m}^{q})_{X^{\mathcal {N}}}\nonumber \\&\quad + \sum _{q = 1}^{Q_{a}}\sum _{m = 1}^{N}\Theta ^{q}(\varvec{\mu })u^{\mathcal {N}}_{Nm}(\varvec{\mu }) \left\{ \sum _{q' = 1}^{Q_{a}}\sum _{m' = 1}^{N}\Theta ^{q'}(\varvec{\mu })u^{\mathcal {N}}_{Nm'}(\varvec{\mu })(\mathcal {L}_{m}^{q}, \mathcal {L}_{m'}^{q'})_{X^{\mathcal {N}}}\right\} . \end{aligned}$$
(A.5)

Therefore, in the offline stage we calculate and store \((\mathcal {C}, \mathcal {C})_{X^{\mathcal {N}}}\), \((\mathcal {C},\mathcal {L}_{m}^{q})_{X^{\mathcal {N}}}\), and \((\mathcal {L}_{m}^{q}, \mathcal {L}_{m'}^{q'})_{X^{\mathcal {N}}}\) for \(1 \le m, m' \le N_\mathrm{RB}\), \(1 \le q, q' \le Q_{a}\). This cost is of order \(Q_a^2 N^3 W_m\). During the online stage, given any parameter \(\varvec{\mu }\), we only need to evaluate \(\Theta ^{q}(\varvec{\mu })\), \(1 \le q \le Q_a\), and \(u^{\mathcal {N}}_{Nm}(\varvec{\mu })\), \(1 \le m \le N\), and compute the sum (A.5). Thus, the online operation count for each \(\varvec{\mu }\) is \(O(Q^2_{a}N^3)\).
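
For illustration, a minimal Python sketch of this online evaluation is given below; the arrays CC, CL, and LL are hypothetical containers for the stored inner products just listed.

```python
import numpy as np

def residual_norm_sq(theta, u_N, CC, CL, LL):
    """Evaluate the squared dual norm of the residual via (A.5), N-independent cost.

    theta : length-Q_a array of Theta^q(mu)
    u_N   : length-N array of RB coefficients u_{Nm}(mu)
    CC    : scalar (C, C)_X
    CL    : (Q_a, N) array with CL[q, m] = (C, L_m^q)_X
    LL    : (Q_a, N, Q_a, N) array with LL[q, m, q', m'] = (L_m^q, L_{m'}^{q'})_X
    """
    w = np.outer(theta, u_N)                      # w[q, m] = Theta^q(mu) u_{Nm}(mu)
    cross = 2.0 * np.sum(w * CL)                  # middle term of (A.5)
    quad = np.einsum('qm,qmrn,rn->', w, LL, w)    # final double sum of (A.5)
    return CC + cross + quad
```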

A.2.3 Greedy Sweeping

In the offline phase of the algorithm, we repeatedly sweep \(\Xi _\mathrm{train}\) to maximize the error estimator \(\Delta _n(\varvec{\mu })\), \(1 \le n \le N\). The offline cost includes the following items (a rough numerical illustration is given after the list):

  • computing the lower bound \(\alpha _{LB}^{\mathcal {N}}(\varvec{\mu })\). The operation count is \(O(N_\mathrm{train}W_{\alpha })\),

  • sweeping the training set by computing the reduced basis solution and the a posteriori error estimate at each parameter value. The operation count for the former is \(O(N_\mathrm{train}(Q_{a}N_{RB}^3 + N_{RB}^4))\); for the latter it is \(O(N_\mathrm{train}Q^2_{a}N^{3}_{RB})\).

  • solving system (2.5) N times. The total operation count is \(O(N W_{s})\).
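
As a rough, purely illustrative accounting (the values below are hypothetical and not taken from the experiments in this paper): with \(N_\mathrm{train} = 10^4\), \(Q_a = 5\), and \(N_{RB} = 40\), the sweep alone contributes on the order of \(N_\mathrm{train}\big (Q_a N_{RB}^3 + N_{RB}^4 + Q_a^2 N_{RB}^3\big ) \approx 10^4 \times (3.2\times 10^{5} + 2.56\times 10^{6} + 1.6\times 10^{6}) \approx 4.5\times 10^{10}\) \(\mathcal {N}\)-independent operations, while the truth solves contribute \(N_{RB} W_s \gtrsim 40\,\mathcal {N}^{2}\) operations. Which term dominates depends on the relative sizes of \(N_\mathrm{train}\) and \(\mathcal {N}\), which is precisely why reducing the effective size of \(\Xi _\mathrm{train}\) can accelerate the offline stage.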

A.2.4 Summary

The total offline portion of the algorithm therefore has complexity of the order

$$\begin{aligned} O\left( N W_{s} + N_\mathrm{train} W_{\alpha } + \mathcal {N}^2 N^2 Q_a + Q_a^2 N^3 W_m + N_\mathrm{train}\left( Q_{a}N^3 + N^4 + Q^2_{a}N^{3}\right) \right) . \end{aligned}$$

The total online cost including the error certification is of order \(Q^2_{a}N^3\).

A.3 Offline–Online Decomposition for the Approximate CDM-RBM Gramian \(\widetilde{G}\)

The entries of the matrix \(\widetilde{G}\) defined in (3.4) can be efficiently computed, assuming that we can compute the approximate errors \(\left\{ \widetilde{e}(\varvec{\mu }): \varvec{\mu }\in \Xi _\mathrm{train}\right\} \) in an offline–online fashion. To accomplish this, note that \(\mathbb {A}_{\mathcal {N}}(\varvec{\mu })(\varvec{v}) = \sum _{k = 1}^{Q_{a}}\theta ^a_{k}(\varvec{\mu })A_{k}(\varvec{v})\) by (2.4), where \(A_{k}(\cdot )\) is a parameter-independent matrix operator, so that

$$\begin{aligned} \begin{aligned} \widetilde{e}(\varvec{\mu })&= \left( \sum _{m = 1}^{Q}u^{\mathcal {N}}_{Nm}(\varvec{\mu }) \mathbb {A}^{-1}_{\mathcal {N}}(\varvec{\mu }^{m})\right) \left( f^{\mathcal {N}} - \mathbb {A}_{\mathcal {N}}(\varvec{\mu })\left( \sum ^{N}_{m=1}u^{\mathcal {N}}_{Nm}(\varvec{\mu }) u^{\mathcal {N}}\left( \varvec{\mu }^m\right) \right) \right) \\&=\sum _{m = 1}^{Q} u^{\mathcal {N}}_{Nm}(\varvec{\mu }) \bigg ( \mathbb {A}^{-1}_{\mathcal {N}}(\varvec{\mu }^{m})f^{\mathcal {N}}\bigg ) - \sum _{m = 1}^{Q}\sum _{m' = 1}^{N} u^{\mathcal {N}}_{Nm}(\varvec{\mu })u^{\mathcal {N}}_{Nm'}(\varvec{\mu }) \bigg (\mathbb {A}^{-1}_{\mathcal {N}}(\varvec{\mu }^{m})\mathbb {A}_{\mathcal {N}}(\varvec{\mu })\left( u^{\mathcal {N}}\left( \varvec{\mu }^{m'}\right) \right) \bigg )\\&= \sum _{m = 1}^{Q} u^{\mathcal {N}}_{Nm}(\varvec{\mu }) \bigg ( \mathbb {A}^{-1}_{\mathcal {N}}(\varvec{\mu }^{m})f^{\mathcal {N}}\bigg ) - \sum _{m = 1}^{Q}\sum _{m' = 1}^{N} \sum _{k = 1}^{Q_{a}} \theta ^a_{k}(\varvec{\mu })u^{\mathcal {N}}_{Nm}(\varvec{\mu })u^{\mathcal {N}}_{Nm'}(\varvec{\mu }) \bigg (\mathbb {A}^{-1}_{\mathcal {N}}(\varvec{\mu }^{m})A_{k}( u^{\mathcal {N}}\left( \varvec{\mu }^{m'}\right) )\bigg ), \end{aligned} \end{aligned}$$

Therefore, we can split this computation into offline and online components as follows (a brief code sketch follows the list):

  • Offline Calculate \(\mathbb {A}^{-1}_{\mathcal {N}}(\varvec{\mu }^{m})f^{\mathcal {N}}\) and \(\mathbb {A}^{-1}_{\mathcal {N}}(\varvec{\mu }^{m})A_{k}\left( u^{\mathcal {N}}\left( \varvec{\mu }^{m'}\right) \right) \) for \(1 \le m' \le N\), \(1 \le m \le Q\), \(1 \le k \le Q_{a}\), with \(Q \le N\). The complexity is \(O(\mathcal {N}^2Q N Q_{a})\).

  • Online Evaluate the coefficients \(u^{\mathcal {N}}_{Nm}(\varvec{\mu })\) and \(\theta ^a_{k}(\varvec{\mu })u^{\mathcal {N}}_{Nm}(\varvec{\mu })u^{\mathcal {N}}_{Nm'}(\varvec{\mu })\) and form \(\widetilde{e}(\varvec{\mu })\). The online computation has complexity \(O(\mathcal {N}Q N Q_{a})\).
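
A minimal Python sketch of this two-stage computation follows; the function and variable names are hypothetical, and the solves with \(\mathbb {A}_{\mathcal {N}}(\varvec{\mu }^{m})\) are abstracted as callables.

```python
import numpy as np

def offline_error_factors(apply_Ainv, apply_Ak, f, snapshots, Q):
    """Offline: precompute the truth-sized vectors appearing in the expression above.

    apply_Ainv : list of callables; apply_Ainv[m](v) applies A_N(mu^m)^{-1} to v
    apply_Ak   : list of Q_a callables; apply_Ak[k](v) applies the affine term A_k to v
    f          : truth load vector f^N
    snapshots  : list of the N snapshots u^N(mu^{m'})
    Q          : number of terms retained in the approximate inverse (Q <= N)
    """
    B_f = [apply_Ainv[m](f) for m in range(Q)]                # A^{-1}(mu^m) f
    B_A = [[[apply_Ainv[m](Ak(u)) for u in snapshots]         # A^{-1}(mu^m) A_k u(mu^{m'})
            for Ak in apply_Ak] for m in range(Q)]
    return B_f, B_A

def online_error(u_N, theta, B_f, B_A):
    """Online: assemble e_tilde(mu) as a linear combination of the stored vectors."""
    Q = len(B_f)
    e = sum(u_N[m] * B_f[m] for m in range(Q))
    for m in range(Q):
        for k, theta_k in enumerate(theta):
            for mp, u_mp in enumerate(u_N):
                e = e - theta_k * u_N[m] * u_mp * B_A[m][k][mp]
    return e
```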


Cite this article

Jiang, J., Chen, Y. & Narayan, A. Offline-Enhanced Reduced Basis Method Through Adaptive Construction of the Surrogate Training Set. J Sci Comput 73, 853–875 (2017). https://doi.org/10.1007/s10915-017-0551-3
