# Wave propagation characteristics of Parareal

## Abstract

The paper derives and analyses the (semi-)discrete dispersion relation of the Parareal parallel-in-time integration method. It investigates Parareal’s wave propagation characteristics with the aim to better understand what causes the well documented stability problems for hyperbolic equations. The analysis shows that the instability is caused by convergence of the amplification factor to the exact value from above for medium to high wave numbers. Phase errors in the coarse propagator are identified as the culprit, which suggests that specifically tailored coarse level methods could provide a remedy.

## Keywords

Parareal · Convergence · Dispersion relation · Hyperbolic problem · Advection-dominated problem

## 1 Introduction

Parallel computing has become ubiquitous in science and engineering, but requires suitable numerical algorithms to be efficient. Parallel-in-time integration methods have been identified as a promising direction to increase the level of concurrency in computer simulations that involve the numerical solution of time dependent partial differential equations [5]. A variety of methods has been proposed [8, 10, 12, 22, 25], the earliest going back to 1964 [27]. While even complex diffusive problems can be tackled successfully [11, 14, 24, 33]—although parallel efficiencies remain low—hyperbolic or advection-dominated problems have proved to be much harder to parallelise in time. This currently prevents the use of parallel-in-time integration for most problems in computational fluid dynamics, even though many applications struggle with excessive solution times and could benefit greatly from new parallelisation strategies.

For the Parareal parallel-in-time algorithm there is some theory available illustrating its limitations in this respect. Bal shows that Parareal with a sufficiently damping coarse method is unconditionally stable for parabolic problems but not for hyperbolic equations [2]. An early investigation of Parareal’s stability properties showed instabilities for imaginary eigenvalues [31]. Gander and Vandewalle [17] give a detailed analysis of Parareal’s convergence and show that even for the simple advection equation \(u_t + u_x = 0\), Parareal is either unstable or inefficient. Numerical experiments reveal that the instability emerges in the nonlinear case as a degradation of convergence with increasing Reynolds number [32]. Approaches exist to stabilise Parareal for hyperbolic equations [3, 4, 7, 13, 15, 23, 29], but typically with significant overhead, leading to further degradation of efficiency, or limited applicability.

Since a key characteristic of hyperbolic problems is the existence of waves propagating with finite speeds, understanding Parareal’s wave propagation characteristics is important to understand and, hopefully, resolve these problems. However, no such analysis exists that gives insight into *how* the instability emerges. A better understanding of the instability could show the way to novel methods that allow the efficient and robust parallel-in-time solution of flows governed by advection. Additionally, just like for “classical” time stepping methods, detailed knowledge of Parareal’s theoretical properties for test cases will help understanding its performance for complex test problems where mathematical theory is not available.

To this end, the paper derives a discrete dispersion relation for Parareal to study how plane wave solutions \(u(x,t) = \exp (-i \omega t) \exp (i \kappa x)\) are propagated in time. It studies the discrete phase speed and amplification factor and how they depend on e.g. the number of processors, choice of coarse propagator and other parameters. The analysis reveals that the source of the instability is a convergence from above in the amplification factor in higher wave number modes. In diffusive problems, where high wave numbers are naturally strongly damped, this does not cause any problems, but in hyperbolic problems with little or no diffusion it causes the amplification factors to exceed a value of one and thus triggers instability. Furthermore, the paper identifies phase errors in the coarse propagator as the source of these issues. This suggests that controlling coarse level phase errors could be key to devising efficient parallel-in-time methods for hyperbolic equations.

All results presented in this paper have been produced using pyParareal, a simple open-source Python code. It is freely available [28] to maximise the usefulness of the analysis presented here, allowing other researchers to test different equations or to explore sets of parameters not analysed in this paper.

## 2 Parareal for linear problems

Parareal decomposes the time interval [0, *T*] into *P* time slices \([T_{j-1}, T_{j})\), \(j = 1, \ldots , P\), with *P* indicating the number of processors. Denote as \(\mathcal {F}_{\delta t}\) and \(\mathcal {G}_{\Delta t}\) two “classical” time integration methods with time steps of length \(\delta t\) and \(\Delta t\) (e.g. Runge–Kutta methods). For the sake of simplicity, assume that all slices \([T_{j-1}, T_{j})\) have the same length \(\Delta T\) and that this length is an integer multiple of both \(\delta t\) and \(\Delta t\) so that \(\Delta T = N_{\text {c}} \Delta t\) and \(\Delta T = N_{\text {f}} \delta t\). Below, \(\delta t\) will always denote the time step size of the fine method and \(\Delta t\) the time step size of the coarse method, so that we omit the indices and just write \(\mathcal {G}\) and \(\mathcal {F}\) to avoid clutter. Standard serial time marching using the method denoted as \(\mathcal {F}\) would correspond to evaluating

\[ u_{j+1} = \mathcal {F}(u_j), \quad j = 0, \ldots , P-1. \qquad (3) \]

Parareal replaces this serial evaluation by the iteration

\[ u^{k+1}_{j+1} = \mathcal {G}\left( u^{k+1}_{j}\right) + \mathcal {F}\left( u^{k}_{j}\right) - \mathcal {G}\left( u^{k}_{j}\right) , \qquad (4) \]

in which the expensive evaluations of \(\mathcal {F}\) can be performed concurrently across all *P* processors. If the number of iterations *K* is small enough and the coarse method much cheaper than the fine, iteration (4) can run in less wall clock time than serially computing (3).
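For a scalar linear test problem \(u' = \lambda u\), where both propagators reduce to multiplication by complex numbers, iteration (4) can be sketched in a few lines. This is a minimal illustration, not the pyParareal implementation; the names `parareal`, `f_slice` and `g_slice` are hypothetical:

```python
import numpy as np

def parareal(u0, f_slice, g_slice, P, K):
    """Run K Parareal iterations for u' = lam*u over P time slices.
    f_slice and g_slice are the complex update factors of the fine and
    coarse propagator over one slice. Returns the approximations at
    all slice boundaries T_0, ..., T_P."""
    u = np.zeros(P + 1, dtype=complex)
    u[0] = u0
    for j in range(P):                  # initial coarse prediction
        u[j + 1] = g_slice * u[j]
    for k in range(K):                  # Parareal corrections, iteration (4)
        u_old = u.copy()
        for j in range(P):
            u[j + 1] = g_slice * u[j] + f_slice * u_old[j] - g_slice * u_old[j]
    return u
```

After \(K = P\) iterations the serial fine solution is reproduced exactly, the finite termination property used again in Sect. 2.4.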

### 2.1 The Parareal iteration in matrix form

### 2.2 Stability function of Parareal

Because Parareal is a linear iteration when applied to a linear problem, we can express *K* iterations as multiplication by a single matrix. The fine propagator solution satisfies a lower bidiagonal all-at-once system \(\mathbf {M}_{\mathcal {F}} \mathbf {u} = \mathbf {b}\), with an analogous matrix \(\mathbf {M}_{\mathcal {G}}\) for the coarse propagator acting as preconditioner, and a selection matrix extracts the final *n* entries out of \(\mathbf {u}^k\), that is, the approximation at the end of the last time slice. Now, a full Parareal update from some initial value \(u_0\) to an approximation \(u_{P}\) using *K* iterations can be written compactly as \(u_P = \mathbf {R} u_0\), where the stability function \(\mathbf {R}\) depends on *K*, *T*, \(\Delta T\), *P*, \(\Delta t\), \(\delta t\), \(\mathcal {F}\), \(\mathcal {G}\) and the system matrix *A*. Note that Staff and Rønquist derived the stability function for the scalar case using Pascal’s tree [31].
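For scalar propagators the stability function admits a closed form; the following sketch gives my reading of the binomial-type expansion behind the Pascal's-tree argument of Staff and Rønquist, stated here without derivation and checked only against the iteration itself:

```python
from math import comb

def parareal_R(f, g, P, K):
    """Scalar Parareal stability function after K iterations over P
    time slices: each term counts how often the fine-coarse difference
    (f - g) enters the update."""
    return sum(comb(P, j) * (f - g)**j * g**(P - j) for j in range(K + 1))
```

For \(K = P\) the binomial theorem collapses the sum to \(((f-g)+g)^P = f^P\), recovering the fine solution as expected.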

### 2.3 Weak scaling versus longer simulation times

There are two different application scenarios for Parareal that we can study when increasing the number of processors *P*. If we fix the final time *T*, increasing *P* will lead to better resolution since the coarse time step \(\Delta t\) cannot be larger than the length of a time slice \(\Delta T\)—the coarse method has to perform at least one step per time slice. In this scenario, more processors are used to absorb the cost of higher temporal resolution (“weak scaling”).

Alternatively, we can use additional processors to compute until a later final time *T* and this is the scenario investigated here. Consequently, we study here the case where *T* and *P* increase together and always assume that \(T=P\), that is each time slice has length one and increasing *P* means parallelising over more time slices covering a longer time interval. Since dispersion properties of numerical methods are typically analysed for a unit interval, this causes some issues that we resolve by “normalising” the Parareal stability function, see Sect. 3.1.

### 2.4 Maximum singular value

After *P* iterations Parareal will always reproduce the fine solution exactly. Therefore, all eigenvalues of the error propagation matrix \(\mathbf {E}\) are zero and the spectral radius is not useful to analyse convergence. Below, to investigate convergence, we will therefore compute the maximum singular value \(\sigma \) of \(\mathbf {E}\) instead. Since \(\mathbf {E}\) is non-normal, a maximum singular value above one does not preclude eventual convergence: the error can grow transiently for up to *P* / 2 iterations before beginning to decrease [16]. However, achieving fast convergence and good efficiency will typically require \(\sigma \ll 1\). Note that if the coarse method is used to generate \(\mathbf {u}_0\), it follows from (9) and (14) that the initial error is determined by the defect between fine and coarse propagator accumulated over all *P* time slices. Instead of iterating until the fine solution is reproduced, at least for linear problems, we can fix the number of iterations *k* such that \(\sigma ^k\) indicates a sufficient reduction of this initial error.
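For the scalar case, the error propagation matrix \(\mathbf {E}\) and its maximum singular value can be computed directly from the all-at-once formulation. A sketch, assuming the preconditioned fixed point view described above (the matrix names `Mf` and `Mg` are illustrative):

```python
import numpy as np

def error_matrix(f, g, P):
    """Parareal error propagation matrix E = I - inv(M_g) M_f for a
    scalar problem: M_f and M_g are the lower bidiagonal all-at-once
    matrices of the fine and coarse propagator over P slices."""
    I = np.eye(P + 1, dtype=complex)
    Mf = I - np.diag(np.full(P, f, dtype=complex), -1)
    Mg = I - np.diag(np.full(P, g, dtype=complex), -1)
    return I - np.linalg.solve(Mg, Mf)

E = error_matrix(np.exp(-1j), 1.0 / (1.0 + 1j), P=4)
sigma = np.linalg.svd(E, compute_uv=False)[0]   # maximum singular value
```

Since \(\mathbf {E}\) is strictly lower triangular, all its eigenvalues are zero while \(\sigma > 0\), which illustrates why the spectral radius carries no information here.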

### 2.5 Convergence and (in)stability of Parareal

Convergence of Parareal is governed by how quickly the defect against the fine solution is reduced with the iteration number *k*. A configuration of Parareal is referred to as *unstable* if it leads to an amplification factor of more than unity. This corresponds to an artificial increase in wave amplitudes and, just as for classical time stepping methods, would result in a diverging numerical approximation if the method is used recursively: with *P* processors it would mean computing one window \([0, T_{P}]\), then restarting with the final approximation as initial value for the next window \([T_P, 2 T_{P}]\) and so on.

## 3 Discrete dispersion relation for the advection–diffusion equation

Applying a Fourier transform in space to the advection–diffusion equation \(u_t + U u_x = \nu u_{xx}\) yields, for each wave number \(\kappa \), the ordinary differential equation \(\hat{u}_t = (-i \kappa U - \nu \kappa ^2) \hat{u}\). A single step of a classical integrator then corresponds to multiplication by a complex number, \(\hat{u}_{n+1} = R \, \hat{u}_n\), where *R* is the stability function of the employed method. Now assume that the approximation of \(\hat{u}\) is a discrete plane wave so that the solution at the end of the *n*th time slice is given by \(\hat{u}_n = \exp (-i \omega n \Delta T)\). Writing *R* in polar coordinates, that is \(R = \left| R \right| \exp (i \theta )\) with \(\theta = \texttt {angle}(R)\), we get

\[ \omega = \frac{i \log (R)}{\Delta T} = \frac{-\theta + i \log \left| R \right| }{\Delta T}. \]

Solving this relation for the numerically computed *R* instead of the exact one, we get some approximate \(\omega = \omega _{r} + i \omega _{i}\) with \(\omega _{r}, \omega _{i} \in \mathbb {R}\). The resulting semi-discrete solution becomes

\[ u_n(x) = \exp (\omega _i \, n \Delta T) \exp \left( i \kappa \left( x - \frac{\omega _r}{\kappa } \, n \Delta T \right) \right) , \]

where \(\omega _r / \kappa \) is called the *phase velocity* while \(\exp (\omega _i)\) is called the *amplification factor*. In the continuous case, the phase speed is equal to the advection velocity *U* and the amplification factor equal to \(e^{-\nu \kappa ^2}\). Note that the exact phase speed is independent of the wave number \(\kappa \). However, the discrete phase speed of a numerical method often will change with \(\kappa \), thus introducing numerical dispersion. Also note that for \(\nu > 0\) higher wave numbers decay faster because the amplification factor decreases rapidly as \(\kappa \) increases.
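For the exact propagator this procedure can be checked in a few lines. A sketch, assuming \(\Delta T = 1\) and \(\kappa U < \pi \) so that the principal logarithm is the correct branch:

```python
import numpy as np

U, nu, kappa = 1.0, 0.1, 2.0                   # advection speed, diffusivity, wave number
R = np.exp((-1j*kappa*U - nu*kappa**2) * 1.0)  # exact update over one unit slice
omega = 1j * np.log(R)                         # omega = i log(R) / Delta_T, Delta_T = 1
phase_speed = omega.real / kappa               # should equal U
amp_factor = np.exp(omega.imag)                # should equal exp(-nu*kappa**2)
```

For a numerical method, `R` is simply replaced by its stability function evaluated at the same \(\kappa \), and the deviations of `phase_speed` from *U* and of `amp_factor` from \(e^{-\nu \kappa ^2}\) quantify dispersion and amplitude error.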

### 3.1 Normalisation

The update function R for Parareal in Eq. (19) denotes not an update over [0, 1] but over \([0,T_P]\) where \(T_P = P\) is the number of processors. A phase speed of \(\omega _{r}/\kappa = 1.0\), for example, indicates a wave that travels across a whole interval [0, 1] during the step. If scaled up to an interval [0, *P*], the corresponding phase speed would become \(\omega _{r}/\kappa = P\) instead.

While mathematically the identity \(\log \left( e^{-i \kappa U T - \nu \kappa ^2 T} \right) = -i \kappa U T - \nu \kappa ^2 T\) holds for all values of \(\kappa \), *U*, *T* and \(\nu \), it is not necessarily satisfied when computing the complex logarithm using `np.log`, which returns the principal value with imaginary part in \((-\pi , \pi ]\). For example, for \(\kappa = 2\), \(U = 1\), \(\nu = 0\) and \(T>0\), the exact stability function is \(R = e^{-2 i T}\). In Python, we obtain for \(T=1\) the correct value \(\log (R) = -2i\), but for \(T=2\) the principal value \((2\pi - 4) i\) instead of \(-4i\): `np.log` returns *a* complex logarithm but not necessarily the right one in terms of the dispersion relation.
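The branch problem is easy to reproduce with NumPy, using \(\kappa = 2\), \(U = 1\), \(\nu = 0\) as above:

```python
import numpy as np

# T = 1: the exponent -2i lies inside the principal branch (-pi, pi],
# so np.log recovers it correctly.
log_T1 = np.log(np.exp(-2j))    # imaginary part is -2
# T = 2: the exponent -4i lies outside the principal branch, so np.log
# returns (2*pi - 4)i: a valid complex logarithm, but the wrong branch
# for the dispersion relation.
log_T2 = np.log(np.exp(-4j))    # imaginary part is 2*pi - 4, not -4
```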

To resolve this, we normalise the stability function *R* for Parareal to the unit interval [0, 1]. To this end, decompose the update over \([0, T_P]\) into *P* identical updates over intervals of length one, that is, replace *R* by its *P*th root on each of the *P* unit intervals making up [0, *P*]. Since there are *P* many roots \(\root P \of {R}\), we have to select the right one. First, we use the `roots` function of NumPy to find all *P* complex roots \(z_i\) of the polynomial \(z^P - R\).

We compute \(\omega \) and the resulting phase speed and amplification factor for a range of wave numbers \(0 \le \kappa _1 \le \kappa _2 \le \cdots \le \kappa _{N} \le \pi \). For \(\kappa _1\), \(\theta _\mathrm{targ}\) is set to the angle of the frequency \(\omega \) computed from the analytic dispersion relation. After that, \(\theta _\mathrm{targ}\) is set to the angle of the root selected for the previous value of \(\kappa \). The rationale is that small changes in \(\kappa \) should only result in small changes of frequency and phase so that \(\theta (\omega _{i-1}) \approx \theta (\omega _i)\) if the increment between wave numbers is small enough. From the selected root \(\root P \of {R}\) we then compute \(\omega \) using (30), the resulting discrete phase speed and amplification factor and the target angle \(\theta _\mathrm{targ}\) for the next wave number.
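The selection step described above can be sketched as follows; the function name `select_root` is illustrative and `theta_targ` carries the angle selected for the previous wave number:

```python
import numpy as np

def select_root(R, P, theta_targ):
    """Return the Pth root of R whose angle is closest to theta_targ.
    All P roots are obtained as zeros of the polynomial z**P - R."""
    coeffs = np.zeros(P + 1, dtype=complex)
    coeffs[0], coeffs[-1] = 1.0, -R     # polynomial z**P - R
    candidates = np.roots(coeffs)
    # pick the candidate with the smallest angular distance to the target
    return candidates[np.argmin(np.abs(np.angle(candidates) - theta_targ))]
```

From the selected root, \(\omega \) follows via the complex logarithm as before. A more robust variant would compare angles modulo \(2\pi \) to avoid wrap-around near \(\pm \pi \); the simple difference suffices when the increments between wave numbers are small.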

## 4 Analysis of the dispersion relation

After showing how to derive Parareal’s dispersion relation and normalising it to the unit interval, this section now provides a detailed analysis of different factors influencing its discrete phase speed and amplification factor.

### 4.1 Influence of diffusion

In both cases, the discrete phase speed of Parareal converges almost monotonically toward the continuous phase speed. Even for ten iterations, Parareal still causes significant slowing of medium to large wave number modes. Parareal requires almost the full number of iterations, \(k=15\), before it faithfully reproduces the correct phase speed across most of the spectrum. However, for any number of iterations where speedup might still be possible, Parareal will introduce significant numerical dispersion. Slight artificial acceleration is also observed for high wave number modes for \(k=15\) in the non-diffusive and \(k=10\) in the diffusive case, but generally phase speeds are quite similar in the diffusive and non-diffusive case.

For medium to high wave numbers, the amplification factor of Parareal converges to the exact value *from above*. For the diffusive case, where the exact values are smaller than one, this does not cause instabilities. In the non-diffusive case, however, any overestimation of the analytical amplification factor immediately causes instability. There is, in a sense, “no room” for the amplification factor to converge to the correct value from above. This also means that using a non-diffusive method as coarse propagator, for example the trapezoidal rule, leads to disastrous consequences (not shown) where most parts of the spectrum are unstable for almost any value of *k*.

Figure 4 illustrates how the phase speed and amplitude errors discussed above manifest themselves. It shows a single Gauss peak advected with a velocity of \(U=1.0\) with \(\nu = 0.0\) on a spatial domain [0, 4] over a time interval [0, 16] distributed over \(P=16\) processors and \(N_c = 2\) coarse time steps per slice. A spectral discretisation is used in space, allowing the spatial derivative to be represented exactly. For \(k=5\) iterations, most higher wave numbers are damped out and the result looks essentially like a low wave number sine function. The artificially amplified medium to high wave number modes create a “bulge” for \(k=10\) while dispersion leads to a significant trough at the sides of the domain. After fifteen iterations, the solution approximates the main part of the Gauss peak reasonably well, but dispersion still leads to visible errors along the flanks. The right figure shows a part of the resulting spectrum. For \(k=5\), only the lowest wave number modes are present, leading to the sine shaped solution. After \(k=10\) iterations, most of the spectrum is still being truncated but a small range of wave numbers around \(\kappa = 0.05\) is being artificially amplified which creates the “bulge” seen in the left figure. Finally, for \(k=15\) iterations, Parareal starts to correctly capture the spectrum but the still significant overestimation of low wave number amplitudes and underestimation of higher modes causes visible errors.

### Observation 1

The amplification factor in Parareal for higher wave numbers converges “from above”. In diffusive problems these wave numbers are damped, so the exact amplification factor is significantly smaller than one, leaving room for Parareal to overestimate it without crossing the threshold to instability. For non-diffusive problems where the exact amplification factor is one, every overestimation causes the mode to become unstable.

### 4.2 Low order finite difference in coarse method

The dispersion properties of the implicit Euler method combined with a first order upwind finite difference are qualitatively similar to those of implicit Euler with the exact symbol (not shown).^{1} However, numerical slowdown increases (up to the point where modes at the higher end of the spectrum almost do not propagate at all) and numerical diffusion becomes somewhat stronger. As a result, Parareal’s dispersion properties (also not shown) are also relatively similar, except that phase speeds remain too small even for \(k=15\).

In Parareal, this causes a situation similar to what happens when using the trapezoidal rule as coarse propagator, albeit less drastic. Figure 6 shows again the phase speed (left) and amplification factor (right) for the same configuration as before but with implicit Euler plus a centred finite difference for \(\mathcal {G}\). The failure of the coarse method to remove high wave number modes again leads to an earlier triggering of the instability. Whereas Parareal using the exact symbol on the coarse level was still stable at iteration \(k=5\) (see Fig. 3), it is now unstable. For iterations \(k=10\) and \(k=15\), large parts of the spectrum remain unstable. Also, the stronger numerical slowdown of the coarse method makes it harder for Parareal to correct for phase speed errors. Where before Parareal with \(k=15\) iterations captured the exact phase speed reasonably well, in Fig. 6 we still see significant numerical slowdown of the higher wave number modes.

### Observation 2

The choice of finite difference stencil used in the coarse propagator can have a significant effect on Parareal. It seems that centred stencils that fail to remove high wave number modes cause similar problems as non-diffusive time stepping methods, suggesting that stencils with upwind-bias are a much better choice.

### 4.3 Influence of phase and amplitude errors

To investigate whether *phase errors* or *amplitude errors* in the coarse method trigger the instability, we construct coarse propagators where either the phase or the amplitude is exact. Denote as \(R_{\text {euler}}\) the stability function of backward Euler and as \(R_{\text {exact}}\) the stability function of the exact integrator. Then, a method with the amplification factor of backward Euler and no phase speed error can be constructed as

\[ R_1 = \left| R_{\text {euler}} \right| \exp \left( i \, \texttt {angle}(R_{\text {exact}}) \right) . \]
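Both artificial propagators are straightforward to assemble from the two stability functions. A sketch for pure advection with \(\kappa = 2\), \(U = 1\); the label `R2` for the complementary propagator (exact amplitude, backward Euler phase) is my naming:

```python
import numpy as np

lam, dt = -2j, 1.0                   # advection symbol -i*kappa*U and slice length
R_exact = np.exp(lam * dt)           # exact integrator
R_euler = 1.0 / (1.0 - lam * dt)     # backward Euler
# amplitude of backward Euler, phase of the exact integrator (no phase error)
R1 = np.abs(R_euler) * np.exp(1j * np.angle(R_exact))
# amplitude of the exact integrator, phase of backward Euler (no amplitude error)
R2 = np.abs(R_exact) * np.exp(1j * np.angle(R_euler))
```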

Figure 8 shows the solution for the same setup that was used for Fig. 4, except using the \(R_1\) artificial coarse propagator without phase errors instead of backward Euler. For \(k=5\) iterations, the peak is strongly damped but, because \(\mathcal {G}\) has no phase errors, in the correct place. After \(k=10\) iterations, Parareal has corrected for most of the numerical damping and already provides a reasonable approximation, even though the amplitude of most wave numbers in the spectrum is still severely underestimated. However, the lack of phase errors and resulting numerical dispersion avoids the “bulge” and distortions that were present in Fig. 4. Finally, for \(k=15\) iterations, the solution provided by Parareal is indistinguishable from the exact solution. Small underestimation of the amplitudes of larger wave numbers can still be seen in the spectrum, but the effect is minimal. Note that this does not mean that Parareal will provide speedup—in a realistic scenario, where \(\mathcal {F}\) is not exact but a time stepping method, too, it would depend on how many iterations are required for Parareal to be as accurate as the fine method run serially and the actual runtimes of both propagators. All that can be said so far is that avoiding coarse propagator phase errors avoids the instability and leads to faster convergence.

In summary, these results strongly suggest that *phase errors* in the coarse method are responsible for the instability, which is in line with previous findings that Parareal can quickly correct even for very strong numerical diffusion as long as a wave is placed at roughly the correct position by the coarse predictor [29].

### Observation 3

The instability in Parareal seems to be caused by phase errors in the coarse propagator while amplitude errors are quickly corrected by the iteration.

#### 4.3.1 Relation to asymptotic Parareal

#### 4.3.2 Phase error or mismatch

### Observation 4

Analysing further the issue of phase errors shows that the instability seems to arise from mismatches between the phase speed of coarse and fine propagator.

### 4.4 Coarse time step

Using a smaller time step for the coarse method will obviously reduce its phase error and can thus be expected to benefit Parareal convergence. Figure 12 shows that this is indeed the case. It shows the discrete phase speed (left) and amplification factor (right) for the same configuration as used for Fig. 3, except now using two coarse steps per time slice instead of one. Since the coarse propagator alone is now already significantly more accurate, Parareal with \(k=5\) and \(k=10\) iterations provides more accurate phase speeds and, for \(k=15\), reproduces the exact phase speed. The reduced phase errors translate into a milder instability. For \(k=10\), some wave numbers have amplification factors above one, but both the range of unstable wave numbers and the severity of the instability are much smaller than if only a single coarse time step is used. This explains why configurations can be quite successful where both \(\mathcal {F}\) and \(\mathcal {G}\) use nearly identical time steps and the difference in runtime is achieved by other means, e.g. an expensive high order spatial discretisation for the fine and a cheap low order discretisation on the coarse level.
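The effect on the coarse phase error can be quantified directly. A sketch for implicit Euler applied to pure advection with \(U = 1\), \(\kappa = 1\) over a slice of length one (parameter choices are illustrative, not taken from the figures):

```python
import numpy as np

lam, dT = -1j, 1.0                       # advection symbol and slice length
theta_exact = np.angle(np.exp(lam*dT))   # exact phase change over one slice
errors = {}
for Nc in (1, 2):                        # coarse steps per time slice
    dt = dT / Nc
    Rg = (1.0 / (1.0 - lam*dt))**Nc      # implicit Euler over the full slice
    errors[Nc] = abs(np.angle(Rg) - theta_exact)
```

Here, doubling the number of coarse steps reduces the phase error from about 0.21 to about 0.07, consistent with the milder instability seen in Fig. 12.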

### Observation 5

Since phase errors of the coarse method obviously depend on its time step size, reducing the coarse time step helps to reduce the range of unstable wave numbers and the severity of the instability.

### 4.5 Number of time slices

All examples so far only considered \(P=16\) time slices and processors. To illustrate the effect of increasing *P*, Fig. 13 shows the discrete dispersion relation for standard Parareal for \(P=64\) time slices or processors (same configuration as in Fig. 3 except for *P*). Even for \(k=15\) iterations, Parareal reproduces the correct phase speed (left figure) very poorly—waves across large parts of the spectrum suffer from substantial numerical slowdown. Also, convergence is slow: there is only marginal improvement from \(k=5\) to \(k=15\) iterations. Convergence is somewhat faster for the amplification factor (right figure) with more substantial improvement for \(k=15\) over the coarse method. However, there also remains significant numerical attenuation of the upper half of the spectrum. If integrating the Gauss peak with this configuration, the result at \(T=64\) after \(k=5\) iterations is essentially a straight line (not shown) as almost all modes beyond \(\kappa =0\) are strongly damped. A small overshoot at around \(\kappa =1\) is noticeable for \(k=15\) iterations and this will worsen as *k* increases. In general, as *P* increases, it takes more iterations to trigger the instability since the slow convergence requires longer to correct for the strong diffusivity of the coarse method.

These results suggest that the high wave numbers are the slowest to converge and that convergence deteriorates as *P* increases. This is confirmed by Fig. 14, showing the maximum singular value for three wave numbers plotted against *P*. Convergence generally gets worse as *P* increases, but note that even for \(P=64\) the low wave number mode (blue circles) still converges monotonically while the high wave number mode (green crosses) might already converge non-monotonically for only \(P=4\) processors. There also seems to be a limit for \(\sigma \) as *P* increases, with higher wave numbers levelling off at higher values of \(\sigma \).

### Observation 6

While convergence becomes slower as the number of processors *P* increases, low wave numbers converge monotonically even for large *P*, whereas high wave numbers can already converge non-monotonically for \(P=4\).

### 4.6 Wave number

The defect between Parareal and the fine solution for the high wave number mode decreases only slowly until the iteration number *k* comes close to \(P=16\). Interestingly, this is the case for both \(\nu =0\) and \(\nu =0.1\). While the “bulge” is even more pronounced in the diffusive case, the absolute values of the defect are orders of magnitude smaller, since modes decay in amplitude proportional to \(e^{-\nu \kappa ^2 t}\). Therefore, at least until \(k=7\) iterations, it is no longer the high wave number mode \(\kappa = 2.69\) that will restrict performance, but rather the lower wave number \(\kappa = 0.9\). Then, the instability for the high wave number kicks in and for \(k \ge 8\) wave number \(\kappa = 2.69\) is again causing the largest defect. As \(\nu \) increases, however, the defects for \(\kappa = 2.69\) will reduce further, the cross-over point will move to later iterations and eventually lower wave numbers will determine convergence for all iterations. In a sense, in line with the analysis above, Parareal propagates high wave number modes very inaccurately in both cases, but since these modes are quickly attenuated if \(\nu > 0\), this matters little in the diffusive case.

### Observation 7

Since diffusion naturally damps higher wave numbers, it removes the issue of slow or no convergence at the end of the spectrum. Therefore, as the diffusivity parameter \(\nu \) increases, the wave number that converges the slowest and determines performance of Parareal becomes smaller.

## 5 Conclusions

Efficient parallel-in-time integration of hyperbolic and advection-dominated problems has been shown to be problematic. This prevents application of a promising new parallelisation strategy to many problems in computational fluid dynamics, despite the urgent need for better utilisation of massively parallel computers. For the Parareal parallel-in-time method, mathematical theory has shown that the algorithm is either unstable or inefficient when applied to hyperbolic equations, but so far no detailed analysis exists of how exactly the instability emerges.

The paper presents the first detailed analysis of how Parareal propagates waves and the ways in which the instability is triggered. It uses a formulation of Parareal as a preconditioned fixed point iteration for linear problems to derive its stability function. From there, a discrete dispersion relation is obtained that allows studying the phase speed and amplitude errors of Parareal when computing wave-like solutions. To deal with issues arising from increasing the time interval together with the number of processors, a simple procedure is introduced to normalise the stability function to the unit interval.

Analysis of the discrete dispersion relation and the maximum singular value of the error propagation matrix then allows a range of observations, illustrating where the issues of Parareal for wave problems originate. A key finding is that the source of the instability is a mismatch between the discrete phase speeds on the coarse and fine level, which causes instability of higher wave number modes. Interestingly, the overestimation of high wave number amplitudes is present in diffusive problems, too, but since there these amplitudes are naturally strongly damped, it does not trigger instabilities. Further analysis addresses the role of the number of processors, the coarse time step size and comments on possible connections to asymptotic Parareal and multi-grid methods for the Helmholtz equation.

The analysis presented here will be useful to interpret and understand the performance of Parareal for more complex problems in computational fluid dynamics. A natural line of future research would be to attempt to develop a new, more stable, parallel-in-time method for hyperbolic problems based on the provided observations. For example, the update in Parareal proceeds component-wise. That means that if the coarse propagator moves a wave at the wrong speed, the update will not know that a simple shift of entries could provide a good correction. Attempting to somehow modify the Parareal update to take into account this type of information seems promising, even though probably challenging to do in 3D. Extending the analysis presented here to systems with multiple waves, e.g. the shallow water equations, or to nonlinear problems where wave numbers interact would be another interesting line of inquiry. Furthermore, the framework used here to analyse Parareal is straightforward to adopt for other parallel-in-time integration methods as long as a matrix representation for them is available.

## Footnotes

- 1. The script `plot_ieuler_dispersion.py` supplied together with the Python code can be used to visualize the dispersion properties of the coarse propagator alone.

## Notes

### Acknowledgements

I would like to thank Martin Gander, Martin Schreiber and Beth Wingate for the very interesting discussions at the 5th Workshop on Parallel-in-time integration at the Banff International Research Station (BIRS) in November 2016, which led to the comments about asymptotic Parareal and multi-grid for 1D Helmholtz equation in the paper. Parts of the pseudo-spectral code used to illustrate the effect of numerical dispersion come from David Ketcheson’s short course PseudoSpectralPython [21]. The pyParareal code written for this paper relies heavily on the open Python packages NumPy [34], SciPy [20] and Matplotlib [19].

## References

- 1. Amodio, P., Brugnano, L.: Parallel solution in time of ODEs: some achievements and perspectives. Appl. Numer. Math. **59**(3–4), 424–435 (2009). https://doi.org/10.1016/j.apnum.2008.03.024
- 2. Bal, G.: On the convergence and the stability of the Parareal algorithm to solve partial differential equations. In: Kornhuber, R., et al. (eds.) Domain Decomposition Methods in Science and Engineering, Lecture Notes in Computational Science and Engineering, vol. 40, pp. 426–432. Springer, Berlin (2005). https://doi.org/10.1007/3-540-26825-1_43
- 3. Chen, F., Hesthaven, J.S., Zhu, X.: On the use of reduced basis methods to accelerate and stabilize the Parareal method. In: Quarteroni, A., Rozza, G. (eds.) Reduced Order Methods for Modeling and Computational Reduction, MS&A—Modeling, Simulation and Applications, vol. 9, pp. 187–214. Springer, Berlin (2014). https://doi.org/10.1007/978-3-319-02090-7_7
- 4. Dai, X., Maday, Y.: Stable Parareal in time method for first- and second-order hyperbolic systems. SIAM J. Sci. Comput. **35**(1), A52–A78 (2013). https://doi.org/10.1137/110861002
- 5. Dongarra, J., et al.: Applied mathematics research for exascale computing. Technical Report LLNL-TR-651000, Lawrence Livermore National Laboratory (2014). http://science.energy.gov/%7E/media/ascr/pdf/research/am/docs/EMWGreport.pdf
- 6. Durran, D.R.: Numerical Methods for Fluid Dynamics, Texts in Applied Mathematics, vol. 32. Springer, New York (2010). https://doi.org/10.1007/978-1-4419-6412-0
- 7. Eghbal, A., Gerber, A.G., Aubanel, E.: Acceleration of unsteady hydrodynamic simulations using the Parareal algorithm. J. Comput. Sci. (2016). https://doi.org/10.1016/j.jocs.2016.12.006
- 8. Emmett, M., Minion, M.L.: Toward an efficient parallel in time method for partial differential equations. Commun. Appl. Math. Comput. Sci. **7**, 105–132 (2012). https://doi.org/10.2140/camcos.2012.7.105
- 9. Ernst, O.G., Gander, M.J.: Multigrid methods for Helmholtz problems: a convergent scheme in 1D using standard components. In: Direct and Inverse Problems in Wave Propagation and Applications. De Gruyter (2013). https://doi.org/10.1515/9783110282283.135
- 10. Falgout, R.D., Friedhoff, S., Kolev, T.V., MacLachlan, S.P., Schroder, J.B.: Parallel time integration with multigrid. SIAM J. Sci. Comput. **36**, C635–C661 (2014). https://doi.org/10.1137/130944230
- 11. Falgout, R.D., Manteuffel, T.A., O’Neill, B., Schroder, J.B.: Multigrid reduction in time for nonlinear parabolic problems: a case study. SIAM J. Sci. Comput. **39**(5), S298–S322 (2016). https://doi.org/10.1137/16M1082330
- 12. Farhat, C., Chandesris, M.: Time-decomposed parallel time-integrators: theory and feasibility studies for fluid, structure, and fluid–structure applications. Int. J. Numer. Methods Eng. **58**(9), 1397–1434 (2003). https://doi.org/10.1002/nme.860
- 13. Farhat, C., Cortial, J., Dastillung, C., Bavestrello, H.: Time-parallel implicit integrators for the near-real-time prediction of linear structural dynamic responses. Int. J. Numer. Methods Eng. **67**, 697–724 (2006). https://doi.org/10.1002/nme.1653
- 14. Fischer, P.F., Hecht, F., Maday, Y.: A Parareal in time semi-implicit approximation of the Navier–Stokes equations. In: Kornhuber, R., et al. (eds.) Domain Decomposition Methods in Science and Engineering, Lecture Notes in Computational Science and Engineering, vol. 40, pp. 433–440. Springer, Berlin (2005). https://doi.org/10.1007/3-540-26825-1_44
- 15. Gander, M.J., Petcu, M.: Analysis of a Krylov subspace enhanced Parareal algorithm for linear problems. ESAIM: Proc. **25**, 114–129 (2008). https://doi.org/10.1051/proc:082508
- 16. Gander, M.J., Vandewalle, S.: Analysis of the Parareal time-parallel time-integration method. SIAM J. Sci. Comput. **29**(2), 556–578 (2007). https://doi.org/10.1137/05064607X
- 17. Gander, M.J., Vandewalle, S.: On the superlinear and linear convergence of the parareal algorithm. In: Widlund, O.B., Keyes, D.E. (eds.) Domain Decomposition Methods in Science and Engineering, Lecture Notes in Computational Science and Engineering, vol. 55, pp. 291–298. Springer, Berlin (2007). https://doi.org/10.1007/978-3-540-34469-8_34
- 18. Haut, T., Wingate, B.: An asymptotic parallel-in-time method for highly oscillatory PDEs. SIAM J. Sci. Comput. **36**(2), A693–A713 (2014). https://doi.org/10.1137/130914577
- 19. Hunter, J.D.: Matplotlib: a 2D graphics environment. Comput. Sci. Eng. **9**(3), 90–95 (2007). https://doi.org/10.1109/MCSE.2007.55
- 20. Jones, E., Oliphant, T., Peterson, P., et al.: SciPy: open source scientific tools for Python (2001–). http://www.scipy.org/. Accessed 28 May 2018
- 21. Ketcheson, D.I.: PseudoSpectralPython (2015). https://github.com/ketch/PseudoSpectralPython. Accessed 28 May 2018
- 22. Kiehl, M.: Parallel multiple shooting for the solution of initial value problems. Parallel Comput.
**20**(3), 275–295 (1994). https://doi.org/10.1016/S0167-8191(06)80013-X MathSciNetCrossRefzbMATHGoogle Scholar - 23.Kooij, G., Botchev, M., Geurts, B.: A block Krylov subspace implementation of the time-parallel Paraexp method and its extension for nonlinear partial differential equations. J. Comput. Appl. Math.
**316**, 229–246 (2017). https://doi.org/10.1016/j.cam.2016.09.036.. (Selected papers from NUMDIFF-14)MathSciNetCrossRefzbMATHGoogle Scholar - 24.Kreienbuehl, A., Naegel, A., Ruprecht, D., Speck, R., Wittum, G., Krause, R.: Numerical simulation of skin transport using Parareal. Comput. Vis. Sci.
**17**, 99–108(2015). https://doi.org/10.1007/s00791-015-0246-y - 25.Lions, J.L., Maday, Y., Turinici, G.: A “Parareal” in time discretization of PDE’s. Comptes Rendus l’Acad. Sci. Ser. I Math.
**332**, 661–668 (2001). https://doi.org/10.1016/S0764-4442(00)01793-6 zbMATHGoogle Scholar - 26.Minion, M.L.: A hybrid Parareal spectral deferred corrections method. Commun. Appl. Math. Comput. Sci.
**5**(2), 265–301 (2010). https://doi.org/10.2140/camcos.2010.5.265 MathSciNetCrossRefzbMATHGoogle Scholar - 27.Nievergelt, J.: Parallel methods for integrating ordinary differential equations. Commun. ACM
**7**(12), 731–733 (1964). https://doi.org/10.1145/355588.365137 MathSciNetCrossRefzbMATHGoogle Scholar - 28.Ruprecht, D.: v2.0 Parallel-in-time/pyparareal: wave propagation characteristics of parareal (2017). https://doi.org/10.5281/zenodo.1012274
- 29.Ruprecht, D., Krause, R.: Explicit parallel-in-time integration of a linear acoustic–advection system. Comput. Fluids
**59**, 72–83 (2012). https://doi.org/10.1016/j.compfluid.2012.02.015 MathSciNetCrossRefzbMATHGoogle Scholar - 30.Schreiber, M., Peixoto, P.S., Haut, T.,Wingate, B.: Beyond spatial scalability limitations with a massively parallel method for linear oscillatory problems. Int. J. High Perform. Comput. Appl. (2017). https://doi.org/10.1177/1094342016687625
- 31.Staff, G.A., Rønquist, E.M.: Stability of the Parareal algorithm. In: Kornhuber, R., et al. (eds.) Domain Decomposition Methods in Science and Engineering, Lecture Notes in Computational Science and Engineering, vol. 40, pp. 449–456. Springer, Berlin (2005). https://doi.org/10.1007/3-540-26825-1_46 CrossRefGoogle Scholar
- 32.Steiner, J., Ruprecht, D., Speck, R., Krause, R.: Convergence of Parareal for the Navier–Stokes equations depending on the Reynolds number. In: Abdulle, A., Deparis, S., Kressner, D., Nobile, F., Picasso, M. (eds.) Numerical Mathematics and Advanced Applications—ENUMATH 2013, Lecture Notes in Computational Science and Engineering, vol. 103, pp. 195–202. Springer, Berlin (2015). https://doi.org/10.1007/978-3-319-10705-9_19 Google Scholar
- 33.Trindade, J.M.F., Pereira, J.C.F.: Parallel-in-time simulation of the unsteady Navier–Stokes equations for incompressible flow. Int. J. Numer. Methods Fluids
**45**(10), 1123–1136 (2004). https://doi.org/10.1002/fld.732 CrossRefzbMATHGoogle Scholar - 34.van der Walt, S., Colbert, S.C., Varoquaux, G.: The numpy array: a structure for efficient numerical computation. Comput. Sci. Eng.
**13**(2), 22–30 (2011). https://doi.org/10.1109/MCSE.2011.37 CrossRefGoogle Scholar

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.