Abstract
The order of accuracy of any convergent time integration method for systems of differential equations can be increased by the sequence acceleration technique known as Richardson extrapolation and its variants (classical Richardson extrapolation and multiple Richardson extrapolation). The original (classical) version of Richardson extrapolation consists in taking a linear combination of numerical solutions obtained by the same numerical method with two different time-step sizes, h and h/2. Multiple Richardson extrapolation is a generalization of this procedure, where the extrapolation is applied to the combination of some underlying numerical method and the classical Richardson extrapolation. This procedure increases the order of accuracy of the underlying method from p to \(p+2\), and with each repetition the order is further increased by one. In this paper we investigate the convergence of multiple Richardson extrapolation in the case where the underlying method is an explicit Runge–Kutta method, and we also examine its computational efficiency.
1 Introduction
Some problems arising in science and engineering are so complex that their solutions cannot be obtained in closed form. Therefore, various numerical techniques have been developed which provide approximations to the exact solution. The most popular ones include Heun’s method by Karl Heun, the third- and fourth-order Runge–Kutta methods by Carl Runge and Wilhelm Kutta, and the Runge–Kutta–Fehlberg method by Erwin Fehlberg. However, the accuracy they provide is sometimes insufficient.
In the early 20th century, L. F. Richardson elaborated a general method that can be used to increase the rate of convergence of a numerical approximation. This so-called Richardson extrapolation is useful in solving differential equations, too. In its first version, the so-called classical Richardson extrapolation (CRE), the problem is solved on two different meshes with time-step sizes h and h/2 by the same numerical method, and the two solutions are combined linearly [13, 14]. Under an appropriate smoothness condition, this procedure enhances the consistency order of the underlying method from p to \(p+1\) [3, 4, 8]. Later, a straightforward generalization of this method was described, called repeated Richardson extrapolation (RRE) [16,17,18]. Here more than two meshes are used, and the corresponding numerical solutions are combined linearly to achieve an even higher order of accuracy. For the RRE, even with several repetitions, mainly the issue of absolute stability has been investigated, for explicit as well as implicit Runge–Kutta methods as underlying methods. Attempts to use the advanced versions of the RRE adaptively showed promising results in an atmospheric chemistry model [19, 20].
In this paper we investigate another possible generalization of CRE called multiple Richardson extrapolation (MRE) [2]. The idea of this procedure is to apply the Richardson extrapolation method to the combination of some underlying method and the CRE. This procedure can increase the order of the underlying method by two. When the obtained numerical method is combined again with CRE, the order can be further increased.
In [2] we investigated the absolute stability properties of several explicit Runge–Kutta methods combined with MRE, and found greater stability regions than those obtained for CRE. In this paper we focus on the convergence analysis of multiple Richardson extrapolation when combined with explicit Runge–Kutta methods.
The structure of the paper is as follows. In Sect. 2 we give an overview of the classical Richardson extrapolation, and introduce the idea of multiple Richardson extrapolation in comparison with the repeated Richardson extrapolation. In Sect. 3 we move on to the convergence analysis, based on the Lipschitz property of the right-hand side function f in the system of ODEs to be solved. We consider the general form of one-step explicit methods, which covers all explicit Runge–Kutta methods, and recall their convergence analysis following [15]. Then we present the Butcher tableau of an explicit Runge–Kutta method combined with CRE, which shows that the combined method is again an explicit Runge–Kutta method. This observation serves as a basis for our convergence theorem about any ERK method combined with multiple Richardson extrapolation. In Sect. 4 we illustrate our theoretical results with numerical experiments. In Sect. 5 we compare the efficiency of the trapezoidal (TP) method and its combinations with CRE and MRE.
2 Mathematical foundation
2.1 The classical Richardson extrapolation (CRE)
Consider the Cauchy problem for a system of ODEs
$$\begin{aligned} y'(t)=f(t,y(t)),\quad t\in (0,T],\qquad y(0)=y_0, \end{aligned}$$
(1)
where the unknown function y is of the type \(\mathbb {R}\rightarrow \mathbb {R}^d\) and \(y_0\in \mathbb {R}^d\) is given. Assume that a numerical method is applied to solve (1), which is consistent in order p. We divide [0, T] into N sub-intervals of length h, and define two meshes on [0, T]:
$$\begin{aligned} \Omega _h^0=\left\{ t_n=nh:\ n=0,1,\ldots ,N\right\} \end{aligned}$$
and
$$\begin{aligned} \Omega _h^1=\left\{ t=\frac{nh}{2}:\ n=0,1,\ldots ,2N\right\} . \end{aligned}$$
Note that \(\Omega _h^0\subset \Omega _{h}^1\). Let us denote the numerical solution obtained by the chosen underlying method at time \(t_n\) of the coarse mesh by \(z_n\). If we solve the problem with the same method on the finer mesh \(\Omega _h^1\), too, then we obtain approximate values at the points of the fine mesh, but it is only at the points of the coarse mesh that both numerical solutions are available. Let us denote by \(w_n\) the solution value obtained in the second case at point \(t_n\) of the coarse mesh. (So \(w_n\) denotes an approximation to the solution at the same time point of [0, T] as \(z_n\), but \(w_n\) is obtained in 2n time steps. In the notation we always use the index of the time point on the coarse mesh which the numerical approximation refers to, and not the number of time steps carried out to obtain the given numerical solution value.) The CRE consists in combining these solutions at the points of the coarse mesh to obtain a higher order of consistency. If the exact solution y(t) is \(p+1\) times continuously differentiable, then the Richardson extrapolation is defined by the formula
$$\begin{aligned} y_n^{CRE}=\frac{2^p w_n-z_n}{2^p-1},\qquad n=0,1,\ldots ,N. \end{aligned}$$
(4)
This formula describes the so-called passive CRE if \(z_n\) and \(w_n\) are calculated independently. However, in this paper we will only deal with the active implementation of the CRE, where in the n-th time-step we perform the calculations on both meshes starting from the combined solution. This approach yields a method whose consistency order is increased by one for explicit methods under suitable smoothness conditions [3, 4, 8].
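The active CRE step described above can be sketched in a few lines of Python (the paper's experiments used MATLAB; function names such as `cre_step` are ours, and explicit Euler serves as the underlying method):

```python
import math

def ee_step(f, t, y, h):
    # one explicit Euler step (the underlying method, p = 1)
    return y + h * f(t, y)

def cre_step(f, t, y, h, p=1):
    # active CRE over one coarse step: one step of size h (z) and
    # two steps of size h/2 (w), combined by formula (4)
    z = ee_step(f, t, y, h)
    w = ee_step(f, t + h / 2, ee_step(f, t, y, h / 2), h / 2)
    return (2**p * w - z) / (2**p - 1)

def solve(step, f, y0, T, N):
    # march the combined method over N coarse steps
    t, y, h = 0.0, y0, T / N
    for _ in range(N):
        y = step(f, t, y, h)
        t += h
    return y

# test problem y' = -y, y(0) = 1, with exact solution e^{-t}
f = lambda t, y: -y
errs = [abs(solve(cre_step, f, 1.0, 1.0, N) - math.exp(-1.0)) for N in (20, 40)]
order = math.log2(errs[0] / errs[1])  # observed order, close to 2 for EE + CRE
```

Halving h should divide the error by roughly \(2^{p+1}=4\), confirming the order increase from 1 to 2.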
2.2 The multiple Richardson extrapolation (MRE)
Consider the general IVP (1) and choose an underlying method with order of consistency p. In the case of the CRE we had numerical solutions on two meshes, which were combined linearly according to formula (4). The question arises naturally whether, using more than two meshes, the numerical solutions can be combined to further enhance the accuracy. We divide [0, T] into N sub-intervals of length h. Consider the meshes
$$\begin{aligned} \Omega _h^r=\left\{ t=\frac{nh}{2^r}:\ n=0,1,\ldots ,2^rN\right\} , \end{aligned}$$
where \(r\in \mathbb {N}_0\). Clearly, by increasing r the mesh is refined, and \(\Omega _h^r\subset \Omega _h^{r+1}\) for all \(r\in \mathbb {N}_0\). In particular, for \(r=2\) we have three meshes, \(\Omega _h^0\), \(\Omega _h^1\) and \(\Omega _h^2\). One possibility to use all three meshes is the repeated Richardson extrapolation (RRE), where the three numerical solutions are combined directly to achieve order \(p+2\). Another approach is the multiple Richardson extrapolation (MRE), where we apply the CRE on the pairs of meshes \((\Omega _h^0, \Omega _h^1)\) and \((\Omega _h^1, \Omega _h^2)\), resulting in two numerical solutions of order \(p+1\), and then these are combined again to get order \(p+2\) on the mesh \(\Omega _h^0\). So, the MRE is defined by applying CRE to the combined method (CRE + underlying method). Figure 1 illustrates the operation of this method in comparison with RRE, resulting in the same consistency order. The numbers of sections along the dashed arrows show the number of time-steps performed within one large step of the coarsest grid.
The general formula of the MRE can be derived as follows. Denote the combined solution (calculated with the underlying method combined with CRE) at time \(t_n\) of the coarse mesh by \(z_{CRE,n}\). Then we can again combine the numerical solutions calculated on the meshes \(\Omega _h^1\) and \(\Omega _h^2\), resulting in a numerical solution valid at the points of \(\Omega _h^1\), whose value at \(t_n\) is denoted by \(w_{CRE,n}\). We combine \(z_{CRE,n}\) and \(w_{CRE,n}\) in all points \(t_n,\ n=0,1,\ldots , N\) of \(\Omega _h^0\), taking into account that the combined method has order \(p+1\). This yields the formula of the multiple Richardson extrapolation:
$$\begin{aligned} y_n^{MRE}=\frac{2^{p+1}w_{CRE,n}-z_{CRE,n}}{2^{p+1}-1},\qquad n=0,1,\ldots ,N. \end{aligned}$$
Both RRE and MRE can be used with arbitrarily many repetitions. The RRE with several repetitions leads to the schemes denoted by 2TRRE, 3TRRE etc., studied in [16,17,18]. The scheme obtained by MRE can again be combined with CRE to get a new method denoted as 2MRE and so on. Since in the normal MRE the CRE is applied twice, the notation qMRE will be used when the CRE is applied \(q+1\) times. Figure 2 shows the comparison of the 2MRE and 2TRRE schemes, both yielding order \(p+3\) when the underlying numerical method has order p. All of the presented methods can be implemented in a passive and active way, but we will only deal with the latter one.
Note that the MRE is defined in such a way that numerical solutions of order \(p+1\) are calculated during the solution process. (And for q repetitions numerical solutions of order \(p+1\), \(p+2\),..., \(p+q-1\) are available.) This observation can be advantageous for inner error estimations. The same is not true for the RRE (qTRRE), where we do not have intermediate-order solutions.
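Since the MRE is just the CRE applied to the already-combined method, it can be written as a wrapper that is applied repeatedly — a minimal Python sketch under our own naming conventions, with explicit Euler as the underlying method:

```python
import math

def ee_step(f, t, y, h):
    # explicit Euler, the underlying method of order p = 1
    return y + h * f(t, y)

def cre(step, p):
    # wrap a one-step method of order p into its active CRE combination
    # (order p+1); applying this wrapper again realizes the MRE
    def combined(f, t, y, h):
        z = step(f, t, y, h)
        w = step(f, t + h / 2, step(f, t, y, h / 2), h / 2)
        return (2**p * w - z) / (2**p - 1)
    return combined

# MRE = CRE applied to (underlying + CRE): order 1 -> 2 -> 3
mre_step = cre(cre(ee_step, 1), 2)

def solve(step, f, y0, T, N):
    t, y, h = 0.0, y0, T / N
    for _ in range(N):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y   # test problem with exact solution e^{-t}
errs = [abs(solve(mre_step, f, 1.0, 1.0, N) - math.exp(-1.0)) for N in (10, 20)]
order = math.log2(errs[0] / errs[1])  # expected close to 3
```

Further applications of `cre` (with p incremented each time) would give the qMRE schemes.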
We will only deal with the MRE in the sequel. Convergence analysis for the RRE should be the object of further studies.
3 Convergence analysis
One of the basic requirements of a numerical method is convergence, which means that the numerical solution tends to the exact solution (at fixed points \(t^*\) of the time interval) as the grid is refined, i.e., the global error, defined as the difference between the exact solution and the numerical solution, tends to zero as the time-step size tends to zero.
First we recall the convergence analysis for general explicit one-step methods following [15], where the Lipschitz property of the right-hand side function is assumed.
3.1 Convergence analysis for general explicit one-step methods
In [15, p. 10] a general explicit one-step method of the form
$$\begin{aligned} y_{n+1}=y_n+h\,\Phi (t_n,y_n;h),\qquad n=0,1,\ldots ,N-1, \end{aligned}$$
(7)
is considered, where the increment function \(\Phi \) is a continuous function of its variables. Assume that this method has order of consistency p, i.e., for the truncation error
$$\begin{aligned} \Psi _n:=\frac{y(t_{n+1})-y(t_n)}{h}-\Phi (t_n,y(t_n);h) \end{aligned}$$
the estimate \(|\Psi _n|\le Kh^p\) (with K independent of h) is valid for \(0<h\le h_0\) for some positive constant \(h_0\). Then the following theorem holds.
Definition 1
We say that \(\Phi \) satisfies the Lipschitz condition with respect to its second argument, uniformly in t, if there exist positive constants \(L_{\Phi }\) and \(h_0\) such that for all \(0< h \le h_0\)
$$\begin{aligned} |\Phi (t,y;h)-\Phi (t,\tilde{y};h)|\le L_{\Phi }\,|y-\tilde{y}|\quad \text {for all } t\in [0,T],\ y,\tilde{y}\in \mathbb {R}^d. \end{aligned}$$
Theorem 1
Consider a one-step method (7) with order of consistency p, where \(\Phi \) satisfies the Lipschitz condition according to Definition 1. Then, in case \(e_0=\mathcal {O}(h^p)\), the global error \(e_n:=y(t_n)-y_n\) satisfies
$$\begin{aligned} \max _{0\le n\le N}|e_n|=\mathcal {O}(h^p), \end{aligned}$$
so the method is convergent of order p.
Proof
Substituting the exact solution into the numerical scheme we get the equality
$$\begin{aligned} y(t_{n+1})=y(t_n)+h\,\Phi (t_n,y(t_n);h)+h\Psi _n. \end{aligned}$$
(9)
Subtracting (7) from (9) leads to the relationship
$$\begin{aligned} e_{n+1}=e_n+h\left( \Phi (t_n,y(t_n);h)-\Phi (t_n,y_n;h)\right) +h\Psi _n. \end{aligned}$$
For a p-th order scheme \(|\Psi _n|\le Kh^p\) if \(h\le h_0\), so taking the absolute value we obtain
$$\begin{aligned} |e_{n+1}|\le (1+hL_{\Phi })|e_n|+Kh^{p+1}. \end{aligned}$$
Since this relationship holds for any n, we can write
$$\begin{aligned} |e_n|\le (1+hL_{\Phi })^n|e_0|+Kh^{p+1}\sum _{j=0}^{n-1}(1+hL_{\Phi })^j. \end{aligned}$$
Apply the inequalities \(1+hL_{\Phi }\le e^{hL_{\Phi }}\), \(1<e^{hL_{\Phi }}\), \(hn\le T\), and use the assumption \(e_0=\mathcal {O}(h^p)\) to get
$$\begin{aligned} |e_n|\le e^{nhL_{\Phi }}|e_0|+nKh^{p+1}e^{nhL_{\Phi }}\le e^{TL_{\Phi }}|e_0|+TKe^{TL_{\Phi }}h^p=\mathcal {O}(h^p). \end{aligned}$$
\(\square \)
The well-known explicit Euler (EE) method clearly belongs to the class of one-step methods (7), where
$$\begin{aligned} \Phi (t,y;h)=f(t,y). \end{aligned}$$
Therefore, as a consequence of Theorem 1, if f is continuous and has the Lipschitz property in its second argument uniformly in t on \(\tilde{R}:=[0,T]\times \mathbb {R}^d\), i.e., there exists a constant \(L>0\) such that
$$\begin{aligned} |f(t,y)-f(t,\tilde{y})|\le L|y-\tilde{y}|\quad \text {for all } (t,y),(t,\tilde{y})\in \tilde{R}, \end{aligned}$$
(10)
then the method is convergent.
3.2 Convergence analysis for explicit Runge–Kutta methods
The EE method is the simplest representative of explicit Runge–Kutta methods. These methods have the general form
$$\begin{aligned} y_{n+1}=y_n+h\sum _{i=1}^{s}b_ik_i \end{aligned}$$
with
$$\begin{aligned} k_1&=f(t_n,y_n),\\ k_i&=f\Big (t_n+c_ih,\ y_n+h\sum _{j=1}^{i-1}a_{ij}k_j\Big ),\qquad i=2,\ldots ,s, \end{aligned}$$
(12)
where \(b_i, c_i\) and \(a_{ij} \in \mathbb {R}\) are given constants, and s is the number of stages. Such a method is widely represented by the Butcher tableau
where \(c=(0,c_2,c_3,\ldots ,c_s)^T\), \(b=(b_1,b_2,\ldots ,b_s)^T\) and the matrix \(A=[a_{ij}]_{i,j=1}^s\) is strictly lower triangular.
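A generic ERK step driven by its Butcher tableau can be sketched as follows (an illustrative Python helper of ours, not from the paper; Heun's method serves as the test tableau):

```python
import math

def erk_step(A, b, c, f, t, y, h):
    # one step of an explicit RK method given by its Butcher tableau;
    # A is strictly lower triangular, so each k_i uses only k_1..k_{i-1}
    s = len(b)
    k = []
    for i in range(s):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(b[i] * k[i] for i in range(s))

# Heun's method (explicit trapezoidal rule): a 2-stage, 2nd-order ERK
A = [[0.0, 0.0], [1.0, 0.0]]
b = [0.5, 0.5]
c = [0.0, 1.0]

def solve(f, y0, T, N):
    t, y, h = 0.0, y0, T / N
    for _ in range(N):
        y = erk_step(A, b, c, f, t, y, h)
        t += h
    return y

f = lambda t, y: -y   # exact solution e^{-t}
errs = [abs(solve(f, 1.0, 1.0, N) - math.exp(-1.0)) for N in (20, 40)]
order = math.log2(errs[0] / errs[1])  # observed order, close to 2 for Heun
```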
It is easy to see that ERK methods are of the form (7) with
$$\begin{aligned} \Phi (t_n,y_n;h)=\sum _{i=1}^{s}b_ik_i, \end{aligned}$$
with \(k_i,\ i=1,2,\ldots ,s\) given under (12). Therefore, Theorem 1 is applicable, and the following theorem is valid.
Theorem 2
A p-th order consistent ERK method is convergent in order p when the right-hand side function f in (1) is continuous and has the Lipschitz property in its second argument (10).
Proof
Clearly, \(\Phi \) satisfies the Lipschitz condition in its second argument if \(k_i\) has this property for all \(i=1,2,\ldots , s\), and one can show that this property is inherited by \(k_i\) from the Lipschitz property (10) of the function f. Obviously, \(k_1\) satisfies this property with Lipschitz constant L, since \(k_1=f(t_n,y_n)\). Denote \(a_{\max }=\max _{i,j}|a_{ij}|.\)
Then
$$\begin{aligned} |k_2(t_n,y_n)-k_2(t_n,\tilde{y}_n)|&=\left| f\big (t_n+c_2h,\ y_n+ha_{21}k_1(t_n,y_n)\big )-f\big (t_n+c_2h,\ \tilde{y}_n+ha_{21}k_1(t_n,\tilde{y}_n)\big )\right| \\&\le L\left( 1+ha_{\max }L\right) |y_n-\tilde{y}_n|. \end{aligned}$$
Assume that \(k_1, k_2,\ldots , k_{i-1}\) all satisfy the Lipschitz condition, i.e.,
$$\begin{aligned} |k_j(t_n,y_n)-k_j(t_n,\tilde{y}_n)|\le L_j|y_n-\tilde{y}_n|,\qquad j=1,2,\ldots ,i-1. \end{aligned}$$
Then
$$\begin{aligned} |k_i(t_n,y_n)-k_i(t_n,\tilde{y}_n)|&\le L\Big |y_n-\tilde{y}_n+h\sum _{j=1}^{i-1}a_{ij}\big (k_j(t_n,y_n)-k_j(t_n,\tilde{y}_n)\big )\Big |\\&\le L\Big (1+Ta_{\max }\sum _{j=1}^{i-1}L_j\Big )|y_n-\tilde{y}_n|, \end{aligned}$$
where we used the length of the interval [0, T] as an upper bound for h. So, \(k_i\) satisfies the Lipschitz condition with Lipschitz constant
$$\begin{aligned} L_i=L\Big (1+Ta_{\max }\sum _{j=1}^{i-1}L_j\Big ). \end{aligned}$$
\(\square \)
3.3 Convergence analysis for ERK + CRE
For a useful observation, first consider the EE method combined with MRE, which means that (a) the CRE is applied to the EE method, and (b) the resulting scheme is again combined with the CRE. For stage (a) we apply the EE method over a time step of length h:
$$\begin{aligned} z_{n+1}=y_n+hf(t_n,y_n). \end{aligned}$$
Then we take two steps of length \(\frac{h}{2}\):
$$\begin{aligned} y_{n+1/2}&=y_n+\frac{h}{2}f(t_n,y_n),\\ w_{n+1}&=y_{n+1/2}+\frac{h}{2}f\Big (t_n+\frac{h}{2},\,y_{n+1/2}\Big ). \end{aligned}$$
So the numerical solution obtained by the EE+CRE method reads
$$\begin{aligned} y_{n+1}=2w_{n+1}-z_{n+1}=y_n+hf\Big (t_n+\frac{h}{2},\,y_n+\frac{h}{2}f(t_n,y_n)\Big ). \end{aligned}$$
This is exactly the explicit midpoint method (MP), which is a known ERK method. This observation gives rise to the conjecture that the combination of an ERK method in general form with CRE will always result in a certain ERK method, which will be the basis for the proof of the following theorem.
Theorem 3
Any p-th order consistent ERK method combined with the CRE is convergent in order \(p+1\) if the right-hand side function f satisfies the Lipschitz property.
Proof
In [5] it is mentioned that any Runge–Kutta method combined with CRE results in a Runge–Kutta method with the Butcher tableau given in Table 1. This tableau can be obtained from the following formulas.
(1) First we take a step of length h with the general ERK method:
$$\begin{aligned} z_{n+1}=y_n+h\sum _{i=1}^{s}b_ik_i, \end{aligned}$$
where
$$\begin{aligned} k_i=f\Big (t_n+c_ih,\ y_n+h\sum _{j=1}^{i-1}a_{ij}k_j\Big ),\qquad i=1,2,\ldots ,s. \end{aligned}$$
(2) From the initial value \(y_n\) we take two steps of length h/2. The first step results in
$$\begin{aligned} y_{n+1/2}=y_n+\frac{h}{2}\sum _{i=1}^{s}b_ik_i^{(1)}, \end{aligned}$$
where
$$\begin{aligned} k_i^{(1)}=f\Big (t_n+c_i\frac{h}{2},\ y_n+\frac{h}{2}\sum _{j=1}^{i-1}a_{ij}k_j^{(1)}\Big ),\qquad i=1,2,\ldots ,s. \end{aligned}$$
The result of the second step is
$$\begin{aligned} w_{n+1}=y_{n+1/2}+\frac{h}{2}\sum _{i=1}^{s}b_ik_i^{(2)}, \end{aligned}$$
where
$$\begin{aligned} k_i^{(2)}=f\Big (t_n+\frac{h}{2}+c_i\frac{h}{2},\ y_{n+1/2}+\frac{h}{2}\sum _{j=1}^{i-1}a_{ij}k_j^{(2)}\Big ),\qquad i=1,2,\ldots ,s. \end{aligned}$$
(3) We combine the numerical solutions \(z_{n+1}\) and \(w_{n+1}\):
$$\begin{aligned} y_{n+1}=\frac{2^pw_{n+1}-z_{n+1}}{2^p-1}. \end{aligned}$$
Let \(\rho =\frac{1}{2^p-1}\); then we have
$$\begin{aligned} y_{n+1}=y_n+h\sum _{i=1}^{s}\left( \frac{1+\rho }{2}\,b_ik_i^{(1)}+\frac{1+\rho }{2}\,b_ik_i^{(2)}-\rho \,b_ik_i\right) , \end{aligned}$$
with the stages \(k_i^{(1)}\), \(k_i^{(2)}\) and \(k_i\) defined above; i.e., the combined method is a 3s-stage Runge–Kutta method with coefficients
$$\begin{aligned} \tilde{A}=\begin{pmatrix} \frac{1}{2}A &{} 0 &{} 0\\ \frac{1}{2}eb^T &{} \frac{1}{2}A &{} 0\\ 0 &{} 0 &{} A \end{pmatrix},\qquad \tilde{b}=\begin{pmatrix} \frac{1+\rho }{2}b\\ \frac{1+\rho }{2}b\\ -\rho b \end{pmatrix},\qquad \tilde{c}=\begin{pmatrix} \frac{1}{2}c\\ \frac{1}{2}(e+c)\\ c \end{pmatrix}. \end{aligned}$$
This yields the tableau in Table 1, where the notation e stands for the vector \((1,1,\ldots ,1)^T\in \mathbb {R}^s\).
Clearly, if the underlying method is an ERK method, then A is strictly lower triangular, and so the \(3s\times 3s\) matrix in the upper right corner of the Butcher tableau given in Table 1 is also strictly lower triangular. Therefore, the combination of any ERK method with CRE results in an ERK method. Moreover, the consistency order of ERK + CRE is \(p+1\). So, in view of Theorem 2, we have proved that any p-th order consistent ERK method combined with CRE results in a convergent numerical method under the Lipschitz property (10) of the function f, and the order of convergence is \(p+1\). \(\square \)
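The block structure used in the proof can also be built programmatically. The sketch below constructs a 3s-stage "ERK + CRE" tableau; the stage ordering (first half step, second half step, full step) and the helper names are our own reconstruction, not necessarily the layout of Table 1. As a check, for EE (p = 1) the combined tableau reproduces the explicit midpoint step:

```python
def cre_tableau(A, b, c, p):
    # Butcher tableau of "ERK + CRE" with 3s stages, built block-wise:
    # stages 1..s    : first half step   (A/2,            c/2)
    # stages s+1..2s : second half step  (e b^T/2, A/2,  (e+c)/2)
    # stages 2s+1..3s: full step         (A,              c)
    s = len(b)
    rho = 1.0 / (2**p - 1)
    n = 3 * s
    AA = [[0.0] * n for _ in range(n)]
    for i in range(s):
        for j in range(s):
            AA[i][j] = A[i][j] / 2
            AA[s + i][s + j] = A[i][j] / 2
            AA[2 * s + i][2 * s + j] = A[i][j]
        for j in range(s):
            AA[s + i][j] = b[j] / 2     # the completed first half step
    bb = [(1 + rho) * bi / 2 for bi in b] * 2 + [-rho * bi for bi in b]
    cc = [ci / 2 for ci in c] + [(1 + ci) / 2 for ci in c] + list(c)
    return AA, bb, cc

def erk_step(A, b, c, f, t, y, h):
    # generic explicit RK step from a Butcher tableau
    k = []
    for i in range(len(b)):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# check: for explicit Euler (p = 1) the combined tableau gives
# one explicit midpoint step
A1, b1, c1 = [[0.0]], [1.0], [0.0]
AA, bb, cc = cre_tableau(A1, b1, c1, 1)
f = lambda t, y: -2.0 * t * y           # an arbitrary smooth example
t, y, h = 0.2, 0.9, 0.1
combined = erk_step(AA, bb, cc, f, t, y, h)
midpoint = y + h * f(t + h / 2, y + (h / 2) * f(t, y))
diff = abs(combined - midpoint)         # agree up to rounding
```

Note that the weights always sum to \((1+\rho )\sum b_i-\rho \sum b_i=\sum b_i\), so consistency of the weights is preserved.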
Remark 1
In [6] it was shown that any explicit Runge–Kutta method combined with the classical Richardson extrapolation is convergent if the right-hand side function f satisfies the Lipschitz property. During the proof, the Lipschitz property was shown for the numerical solution operator of the ERK + CRE scheme. There the authors consider a general multistep method as a starting point, which is useful instead of (7) when their convergence result is extended to DIRK methods in [7]. However, the order of convergence was not investigated.
Now, let us check the Butcher tableau in Table 1 by choosing EE as the underlying method.
The EE method is defined as
$$\begin{aligned} y_{n+1}=y_n+hf(t_n, y_n)=y_n+hk_1. \end{aligned}$$
The Butcher tableau of EE is given in Table 2. From this table
$$\begin{aligned} A=\left( 0\right) ,\qquad b^T=\left( 1\right) ,\qquad c=\left( 0\right) . \end{aligned}$$
The Butcher tableau of EE + CRE according to Table 1 is given in Table 3.
This tableau corresponds to the following method:
$$\begin{aligned} y_{n+1}=y_n+hf\Big (t_n+\frac{h}{2},\,y_n+\frac{h}{2}f(t_n,y_n)\Big ), \end{aligned}$$
which is exactly the explicit midpoint (MP) formula; it can also be considered as a two-stage method with the Butcher tableau given in Table 4.
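As a sanity check, one can verify numerically that one step of EE + CRE coincides (up to rounding) with one explicit midpoint step; the right-hand side below is an arbitrary smooth example of ours:

```python
f = lambda t, y: -2.0 * t * y   # any smooth right-hand side works here

t, y, h = 0.3, 0.7, 0.05

# EE + CRE over one coarse step (p = 1, so the weights are 2 and -1)
z = y + h * f(t, y)                             # one step of size h
y_half = y + (h / 2) * f(t, y)                  # first half step
w = y_half + (h / 2) * f(t + h / 2, y_half)     # second half step
cre = 2.0 * w - z

# one explicit midpoint step
mp = y + h * f(t + h / 2, y + (h / 2) * f(t, y))

diff = abs(cre - mp)   # agree up to floating-point rounding
```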
The Butcher tableau of EE + MRE according to Table 1 is given in Table 5.
3.4 Convergence analysis for MRE
The MRE is defined by applying the CRE to the combined method “underlying method + CRE”. However, according to our previous observation, the latter combination also yields an ERK method. Consequently, the following convergence theorem is valid.
Theorem 4
Any p-th order consistent ERK method combined with MRE will always be convergent in order \(p+2\) if f has the Lipschitz property (10).
Note that when the numerical scheme obtained by MRE is combined again with CRE, even arbitrarily many times, always an ERK method is obtained. (The number of the stages is tripled with each further application of the CRE.) Since with each application of the CRE the order increases by one, we are led to the following
Theorem 5
Any p-th order consistent ERK method combined with qMRE will always be convergent in order \(p+q+1\) if f has the Lipschitz property (10).
4 Numerical experiments
The accuracy of EE + MRE was tested in [2]. Here we test the advanced versions of MRE.
Example 1. Consider the scalar problem
$$\begin{aligned} y'(t)=-2t\sin y(t),\quad t\in [0,1],\qquad y(0)=1. \end{aligned}$$
The exact solution is \(y(t)=2\cot ^{-1}\left( e^{t^2}\cot (\frac{1}{2})\right) \). The global errors were calculated in absolute value at the end of the time interval [0, 1], repeatedly reducing h by a factor of 2. The expected increase of the convergence order by one with the advanced versions of MRE, when the first-order explicit Euler (EE) method and the second-order trapezoidal rule (TP) are used as underlying methods, can be seen from Tables 6 and 7, respectively.
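A quick way to reproduce the first column of such a table is to run the underlying method alone and estimate the observed order from successive halvings of h. The Python sketch below assumes the right-hand side \(y'=-2t\sin y\), \(y(0)=1\) (which the stated exact solution satisfies), and checks that EE alone shows first order:

```python
import math

def exact(t):
    # y(t) = 2*arccot(e^{t^2} * cot(1/2)); arccot(x) = atan(1/x) for x > 0
    u = math.exp(t * t) / math.tan(0.5)
    return 2.0 * math.atan(1.0 / u)

f = lambda t, y: -2.0 * t * math.sin(y)   # assumed right-hand side of Example 1

def ee_solve(N):
    # explicit Euler over [0, 1] with N steps
    t, y, h = 0.0, 1.0, 1.0 / N
    for _ in range(N):
        y = y + h * f(t, y)
        t += h
    return y

errs = [abs(ee_solve(N) - exact(1.0)) for N in (64, 128)]
order = math.log2(errs[0] / errs[1])  # EE alone should show first order
```

Replacing the EE step by its CRE or MRE combinations reproduces the higher observed orders in the tables.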
Example 2. The theta model [9, 12] is a simple one-dimensional biological neuron model that describes the state of a neuron with phase variable \(\theta \) as
$$\begin{aligned} \frac{d\theta }{dt}=1-\cos \theta +(1+\cos \theta )f(t). \end{aligned}$$
The input function f(t) is typically chosen to be periodic. Near the bifurcation point, the theta model is equivalent to the quadratic integrate-and-fire (QIF) model via the simple coordinate transform \(y(t)=\tan (\theta /2)\):
$$\begin{aligned} y'(t)=y^2(t)+f(t). \end{aligned}$$
By the choice \(f(t)= \cos t -\sin ^2(t)\), the exact solution is \(y(t)=\sin t\). The global errors were calculated in absolute value at the end of the time interval [0, 1] and repeatedly reducing h by a factor of 2.
The expected increase of the convergence order by one with the advanced versions of MRE, when the first-order explicit Euler (EE) method and the second-order trapezoidal rule (TP) are used as underlying methods, can be seen from Tables 8 and 9, respectively. We remark that the same error orders are obtained when the numerical solutions obtained above for y(t) are transformed back to \(\theta (t)\) values by the transformation \(\theta (t)=2\arctan (y(t))\).
5 Efficiency of the methods
Computational efficiency measures the amount of time or memory required to perform a given task, counted in elementary operations, namely additions, subtractions, multiplications and divisions [1, 10].
Higher-order methods take more work per step than low-order methods. However, they usually require fewer steps to obtain a prescribed accuracy. The most efficient method for a particular problem is the one that gives the desired accuracy with the least amount of work.
We compared the efficiency of TP, TP + CRE and TP + MRE when solving Example 1 using MATLAB with the tic–toc command and by taking the average of ten trials, on a machine with processor 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (64-bit machine).
We can draw the following conclusions from Table 10.
With MRE, a solution with a global error falling in the interval [1E\(-\)5, 1E\(-\)4] was obtained 4.57 times faster than by the underlying TP method alone, and 1.29 times faster than with TP + CRE. To achieve a global error in the interval [1E\(-\)10, 1E\(-\)9], MRE was 77.69 times faster than TP, and 1.33 times faster than CRE. So, in our experiments TP combined with MRE proved to be more efficient than the underlying method TP alone or combined with CRE, and as a rule, the higher the prescribed accuracy, the more time was saved by the extrapolation methods.
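Measured run times are machine-dependent, but the work ratio can be reproduced exactly by counting right-hand-side evaluations. The sketch below (our Python illustration; TP is taken here to be the explicit trapezoidal rule, i.e. Heun's method) shows that TP + CRE costs exactly three times as many f-evaluations per coarse step while producing a much smaller error:

```python
import math

calls = {"n": 0}
def f(t, y):
    # instrumented right-hand side for y' = -y, counting evaluations
    calls["n"] += 1
    return -y

def tp_step(t, y, h):
    # explicit trapezoidal rule (Heun's method), our stand-in for TP
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + h * (k1 + k2) / 2

def cre_step(t, y, h, p=2):
    # active CRE wrapped around TP: one full step and two half steps
    z = tp_step(t, y, h)
    w = tp_step(t + h / 2, tp_step(t, y, h / 2), h / 2)
    return (2**p * w - z) / (2**p - 1)

def run(step, N):
    # integrate over [0, 1], returning (global error, f-evaluation count)
    calls["n"] = 0
    t, y, h = 0.0, 1.0, 1.0 / N
    for _ in range(N):
        y = step(t, y, h)
        t += h
    return abs(y - math.exp(-1.0)), calls["n"]

err_tp, cost_tp = run(tp_step, 40)
err_cre, cost_cre = run(cre_step, 40)
# CRE costs 3x the f-evaluations per coarse step but gains an order in h
```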
5.1 The role of parallelization in increasing the efficiency
In the above experiments all calculations were performed sequentially. However, nowadays parallel machines are widely available, which enables us to perform certain parts of the computation (those which are not built on each other) at the same time. It is worth examining how much time we can gain by the different versions of Richardson extrapolation when we exploit the possibility of parallelization.
Suppose that the underlying method requires \(\tau \) time to advance the solution over one large time step. Then its combination with the CRE and the MRE requires roughly \(3\tau \) and \(9\tau \) time per large time step, respectively, when all calculations are performed sequentially. However, if we use parallelization, then the CRE requires roughly just \(2\tau \) time per large time step and the MRE requires \(4\tau \). (Here we neglect the time required for calculating the linear combinations of the solutions, which is not significant.)
Comparing the two procedures, parallelization is roughly 1.5 times faster than the sequential procedure for the CRE and 2.25 times faster for the MRE. Hence, parallelization is an efficient means of reducing the run times required by the traditional (sequential) implementation.
6 Conclusion
Multiple Richardson extrapolation (MRE) is a generalization of the classical Richardson extrapolation method, where the extrapolation is applied to the combination of some underlying method and the classical Richardson extrapolation. We showed that whenever a p-th order consistent explicit Runge–Kutta method is combined with MRE, even arbitrarily many times, the obtained new scheme will be convergent when the right-hand side function f of the differential equation has the Lipschitz property in its second argument. The order of convergence is \(p+2\) when the underlying method is of order p, and the order is further increased by one with each repetition. In numerical experiments we showed that MRE is more efficient than CRE and the basis method TP applied alone, even if the calculations are performed sequentially. In practice, the application of parallel machines will further enhance the efficiency of the Richardson extrapolation methods.
References
U.M. Ascher, C. Greif, A First Course in Numerical Methods (Society for Industrial and Applied Mathematics, Philadelphia, 2011)
T. Bayleyegn, Á. Havasi, Multiple Richardson extrapolation applied to explicit Runge–Kutta methods, in Advances in High Performance Computing. ed. by S. Fidanova, I. Dimov (Springer, Berlin, 2021), pp.262–270
T. Bayleyegn, I. Faragó, Á. Havasi, On the consistency order of Runge–Kutta methods combined with active Richardson extrapolation. Lecture Notes in Computer Science, vol. 13127 (2022), pp. 101–108
T. Bayleyegn, I. Faragó, Á. Havasi, On the consistency and convergence of classical Richardson extrapolation as applied to explicit one-step methods, in Mathematical Modelling and Analysis (accepted for publication) (2022)
R. Chan, A. Murua, Extrapolation of symplectic methods for Hamiltonian problems. Appl. Numer. Math. 34, 189–205 (2000)
I. Faragó, Á. Havasi, Z. Zlatev, The convergence of explicit Runge–Kutta methods combined with Richardson extrapolation, in Applications of Mathematics 2012 (Institute of Mathematics AS CR, Prague, 2012), pp. 99–106
I. Faragó, Á. Havasi, Z. Zlatev, The convergence of diagonally implicit Runge–Kutta methods combined with Richardson extrapolation. Comput. Math. Appl. 65, 395–401 (2013)
E. Hairer, S.P. Nørsett, G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems (Springer, Berlin, 1987)
H. Liu, W. Sun, Computational efficiency of numerical approximations of tangent moduli for finite element implementation of a fiber-reinforced hyperelastic material model. Comput. Methods Biomech. Biomed. Eng. 19(11), 1171–1180 (2016). https://doi.org/10.1080/10255842.2015.1118467
J.D. Lambert, Numerical Methods for Ordinary Differential Equations (Wiley, New York, 1991)
S. McKennoch, T. Voegtlin, L. Bushnell, Spike-timing error backpropagation in theta neuron networks. Neural Comput. 21(1), 9–45 (2008). https://doi.org/10.1162/neco.2009.09-07-610
L.F. Richardson, The approximate arithmetical solution by finite differences of physical problems including differential equations, with an application to the stresses in a masonry dam. Philos. Trans. Roy. Soc. Lond. Ser. A 210, 307–357 (1911)
L.F. Richardson, The deferred approach to the limit, I. Single lattice. Philos. Trans. Roy. Soc. Lond. Ser. A 226, 299–349 (1927)
E. Süli, Numerical Solution of Ordinary Differential Equations (Mathematical Institute, University of Oxford, Oxford, 2014)
Z. Zlatev, I. Dimov, I. Faragó, K. Georgiev, Á. Havasi, Explicit Runge–Kutta methods combined with advanced versions of the Richardson extrapolation. Comput. Methods Appl. Math. (2019). https://doi.org/10.1515/cmam-2019-0016
Z. Zlatev, I. Dimov, I. Faragó, K. Georgiev, Á. Havasi, Stability properties of repeated Richardson extrapolation applied together with some implicit Runge–Kutta methods, in Finite Difference Methods: Theory and Applications Cham. ed. by I. Dimov, I. Faragó, L. Vulkov (Springer, Cham, 2019), pp.114–125
Z. Zlatev, I. Dimov, I. Faragó, K. Georgiev, Á. Havasi, Absolute stability and implementation of the two-times repeated Richardson extrapolation together with explicit Runge–Kutta methods, in Finite Difference Methods: Theory and Applications. ed. by I. Dimov, I. Faragó, L. Vulkov (Springer, Cham, 2019), pp.678–686
Z. Zlatev, I. Dimov, I. Faragó, K. Georgiev, Á. Havasi, Running an atmospheric chemistry scheme from a large air pollution model by using advanced versions of the Richardson extrapolation. Lecture Notes in Computer Science, vol. 13127 (2022), pp. 188–197
Z. Zlatev, I. Dimov, I. Faragó, K. Georgiev, Á. Havasi, Efficient implementation of advanced Richardson extrapolation in an atmospheric chemical scheme. J. Math. Chem. 60(1), 219–238 (2022)
Acknowledgements
The project has been supported by the Hungarian Scientific Research Fund OTKA SNN125119 and also OTKA K137699.
Funding
Open access funding provided by Eötvös Loránd University.
Ethics declarations
Conflicts of interest
The authors certify that there is no actual or potential conflict of interest in relation to this article.
Cite this article
Bayleyegn, T., Faragó, I. & Havasi, Á. On the convergence of multiple Richardson extrapolation combined with explicit Runge–Kutta methods. Period. Math. Hung. 88, 335–353 (2024). https://doi.org/10.1007/s10998-023-00557-y