A new family of fourth-order energy-preserving integrators

For Hamiltonian systems with non-canonical structure matrices, a new family of fourth-order energy-preserving integrators is presented. The integrators take the form of a combination of Runge--Kutta methods and continuous-stage Runge--Kutta methods and feature a set of free parameters that offer greater flexibility and efficiency. Specifically, we demonstrate that, by carefully choosing these free parameters, a simplified Newton iteration applied to the fourth-order integrators can be parallelized. This results in faster and more efficient integrators than existing fourth-order energy-preserving integrators.


Introduction
In this paper, we are concerned with the numerical integration of a system of ordinary differential equations (ODEs) of the form

(d/dt) y = S(y)∇H(y), y(0) = y_0, (1.1)

where y : [0, T) → R^d is a dependent variable, S : R^d → R^{d×d} is a skew-symmetric matrix function, and H : R^d → R is a real-valued function, which we call the energy. The two functions S and H are assumed to be sufficiently regular. Along the solution to (1.1), the function H is constant:

(d/dt) H(y(t)) = ∇H(y(t))^⊤ ẏ(t) = ∇H(y(t))^⊤ S(y(t))∇H(y(t)) = 0,

where the dot stands for differentiation with respect to t. Conversely, a system of ODEs having a first integral can always be formulated in the form (1.1) with an appropriate skew-symmetric matrix function S(y) [14], although the expression of S(y) might not be unique. When S(y) is constant, and in particular S = J^{-1} with the canonical symplectic matrix J = ( O I ; −I O ), the system is called a Hamiltonian system and the corresponding H is often referred to as the Hamiltonian.
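As a quick numerical illustration of why skew-symmetry forces energy conservation, the following self-contained Python check evaluates the quadratic form ∇H^⊤ S ∇H; the 3×3 matrix S and the vector g standing in for ∇H(y) are assumed examples, not taken from the paper:

```python
def quad_form(S, g):
    """Return g^T S g for a matrix S (list of rows) and a vector g."""
    n = len(g)
    return sum(g[i] * S[i][j] * g[j] for i in range(n) for j in range(n))

# An assumed skew-symmetric example: S^T = -S.
S = [[0, 1, -2],
     [-1, 0, 3],
     [2, -3, 0]]
g = [1.0, 2.0, 3.0]  # stands in for gradH(y) at some state y

print(quad_form(S, g))  # 0.0: dH/dt = gradH^T S gradH vanishes
```

The cancellation is exact (not merely small) because each pair of terms g_i S_{ij} g_j and g_j S_{ji} g_i sums to zero.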
In more general cases, where S depends on y, the system is called a Poisson system if the associated Poisson bracket satisfies the Jacobi identity (see, e.g., [8, Chapter VII.2]). In this paper, we always call a system of the form (1.1) a Poisson system, even if the Poisson bracket does not satisfy the Jacobi identity, bearing in mind that this terminology is for convenience only. Furthermore, depending on the structure of S(y), the system of the form (1.1) exhibits rich geometric properties, such as symplecticity. This paper focuses on energy-preserving integrators, which are a typical branch of geometric numerical integrators [8]. In this paper, an energy-preserving integrator refers to a one-step method y_0 → y_1, where y_1 ≈ y(h), such that H(y_1) = H(y_0). For such a method, H(y_n) = H(y_0) holds even if the step size h is controlled adaptively.
The projection method, while conceptually straightforward, encompasses a variety of approaches for projecting solutions onto the appropriate manifold. Its effectiveness, particularly concerning long-term behaviour, may depend on both the selected projection technique and the underlying integrator [8, Chapter IV.4]. Therefore, caution is advised in employing the projection method. A more sophisticated and systematic approach is the discrete gradient method, which was first formulated by Gonzalez [6] (see also McLachlan, Quispel and Robidoux [9]), although a similar idea had been known for quite some time. The discrete gradient method usually produces a second-order energy-preserving integrator. The average vector field (AVF) method, proposed by Quispel and McLaren [15], is a subclass of the discrete gradient method; it is a B-series method and of order two when S is a constant skew-symmetric matrix. Over the past decade, extensions of the AVF method to higher order have been studied extensively. For a constant S, Hairer proposed the AVF collocation method [7], and Brugnano, Iavernaro and Trigiante proposed the Hamiltonian boundary value method [3]. These methods are based on so-called continuous-stage Runge--Kutta methods. Also worth mentioning is a relatively recent work [5], which establishes a general order theory for discrete gradient methods.
Roughly speaking, the computational cost of the AVF collocation method is almost the same as that of the Gauss method of the same order. Miyatake and Butcher [12] characterize the condition for a continuous-stage Runge--Kutta (CSRK) method to be energy-preserving in terms of the symmetry of an s × s matrix M defining the CSRK method, and find that the order condition can also be characterized in terms of M^{-1}. The discussion in terms of M is fruitful in that one can construct an integrator of an intended order with some degrees of freedom. By tuning the remaining parameters, one can, for example, construct parallelizable integrators.
For Poisson systems, considerable effort has also been devoted to developing energy-preserving methods, and the aforementioned methods have been extended to this more general class. For example, the AVF collocation method is extended to Poisson systems by introducing a new class of integration methods, which generalizes the CSRK method to partitioned systems [4] (see [1,2] for the extension of the Hamiltonian boundary value method to Poisson systems). We refer to this new class of integration methods as the partitioned continuous-stage Runge--Kutta (PCSRK) methods. The s-degree PCSRK method is characterized by the s × s matrices M_i (i = 1, . . ., s) and the nodes c_i (i = 1, . . ., s). The results given in [11] suggest that a PCSRK method is energy-preserving if all the M_i are symmetric, but it is not clear whether the order condition can be concisely characterized in terms of the matrices M_i and nodes c_i, as is done for the CSRK methods. This task is not trivial; there are several difficulties associated with it. For example, recall that for constant S the order condition is characterized in terms of the inverse of M; however, for Poisson systems, although the highest order of the s-degree PCSRK methods is 2s [4], the corresponding M_i's are singular. Thus, one cannot expect the order conditions to be characterized in terms of the inverses of the M_i's. Other difficulties are discussed in Remark 3.1.
Against this background, we focus on fourth-order methods and address the following issues.
- The fourth-order PCSRK method exists with s = 2 [4], and it is unique if the degree is restricted to s = 2. In this paper, we set s = 3 and characterize the conditions for the method to be of order 4 in terms of M_1, M_2, M_3 and c_1, c_2, c_3. The key idea is to require that the PCSRK method reduce to the CSRK method discussed in [12] when S is constant and that the method be symmetric, and then to simplify the order conditions for the bi-coloured rooted trees with three vertices having a black root. The result can be viewed as a three-degree PCSRK method with some degrees of freedom, and the parameters can be tuned from another perspective.
- As discussed in [12], an advantage of a numerical method with some degrees of freedom is that much more efficient variants may be explored. In general, the larger the degree of a CSRK method, the higher the computational cost. However, this is not always the case; for example, if the matrix M has a specific eigenstructure, the method can be implemented on a parallel architecture with almost the same cost as the case s = 1, though the memory usage grows. We show that a similar structure holds for the PCSRK methods.
- Based on the above two points, we develop three-degree PCSRK integrators with some degrees of freedom, which are energy-preserving for Poisson systems, of order four, and efficiently implementable on a parallel architecture. The proposed integrators reduce to the ones developed in [12], with similar properties, for Hamiltonian systems.
The paper is organized as follows. In Section 2, after reviewing energy-preserving CSRK methods for constant S and their properties, we discuss the formulation of energy-preserving PCSRK methods for general cases. We develop a family of energy-preserving PCSRK integrators in Section 3. We discuss implementation issues and optimal parameter choices in Section 4. Concluding remarks are given in Section 5.

Hamiltonian systems and CSRK methods
Let us consider the system (1.1) where S is a constant skew-symmetric matrix, but not necessarily J^{-1}. Hamiltonian systems are a typical example of this class. The average vector field (AVF) method reads

y_1 = y_0 + h ∫_0^1 f((1 − τ) y_0 + τ y_1) dτ, (2.2)

where f(y) = S∇H(y). This method is of second order and energy-preserving: H(y_1) = H(y_0). The method can be regarded as a continuous-stage Runge--Kutta method.
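To make the AVF formula concrete, here is a minimal Python sketch for an assumed linear example, not from the paper: S is the 2×2 canonical rotation matrix and H(y) = ½‖y‖². For this quadratic H the averaged gradient equals ∇H at the midpoint, so the AVF step reduces to the implicit midpoint rule and the 2×2 linear system can be solved by hand:

```python
def avf_step_linear(y, h):
    """One AVF step for dy/dt = S y with S = [[0, 1], [-1, 0]] and
    H(y) = 0.5*(y[0]**2 + y[1]**2).  For quadratic H the averaged
    gradient equals grad H at the midpoint, so the AVF step is the
    implicit midpoint rule: (I - (h/2) S) y1 = (I + (h/2) S) y0."""
    a = h / 2.0
    # Right-hand side b = (I + a*S) y0.
    b0 = y[0] + a * y[1]
    b1 = -a * y[0] + y[1]
    # Solve the 2x2 system by hand: (I - a*S) = [[1, -a], [a, 1]],
    # whose inverse is (1/det) [[1, a], [-a, 1]] with det = 1 + a*a.
    det = 1.0 + a * a
    return [(b0 + a * b1) / det, (b1 - a * b0) / det]
```

One step from y_0 = (1, 0) with h = 0.1 preserves H = ½‖y‖² to machine precision, illustrating the exact energy preservation claimed above for this example.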
A one-step method y_0 → y_1 given by

Y_τ = y_0 + h ∫_0^1 A_{τ,ζ} f(Y_ζ) dζ (τ ∈ [0, 1]), y_1 = Y_1,

where A_{τ,ζ} is a polynomial in τ, is called an s-degree continuous-stage Runge--Kutta (CSRK) method.
The above definition does not constrain the degree of A_{τ,ζ} in terms of ζ, but let us focus on A_{τ,ζ} which is a polynomial of degree s in τ and s − 1 in ζ. Such an A_{τ,ζ} will be denoted by

A_{τ,ζ} = (τ, τ²/2, . . ., τ^s/s) M (1, ζ, . . ., ζ^{s−1})^⊤

with a constant matrix M ∈ R^{s×s}, so that the polynomial is identified with the matrix M. When s = 1 and M = (1), the method reduces to the AVF method.
A sufficient condition for energy-preservation can be characterized in terms of M.
The symmetry of M means that (∂/∂τ)A_{τ,ζ} is symmetric with respect to τ and ζ. The condition is also necessary under a mild assumption [12].
Several characterizations of the order conditions with respect to M are given in [12]. We note that the discussion there is based on the simplifying assumptions (see also [7]). A CSRK method is energy-preserving and of order at least p = 2η if the symmetric matrix M ∈ R^{s×s} satisfies the conditions below, where i_k denotes the k-th column of the s × s identity matrix. If we choose η = s, the method is of order 2s and coincides with the AVF collocation method of order 2s [7]. For example, when s = 2, we have the matrix (2.3). As another illustrative example, when s = 3, the above characterization indicates that the method with the matrix (2.4) is of order four with η = 2, except for α = 1/5. In this way, one can construct high-order energy-preserving integrators with some degrees of freedom. When α ≠ 7/36, by introducing a new parameter α̃ = 1/(36α − 7), the matrix M can be expressed more explicitly as follows.

Poisson systems and PCSRK methods
Note that the AVF method (2.2) is not energy-preserving when applied to (1.1) with a non-constant S(y).
A straightforward modification,

y_1 = y_0 + h S((y_0 + y_1)/2) ∫_0^1 ∇H((1 − τ) y_0 + τ y_1) dτ, (2.5)

is energy-preserving and symmetric, and thus of order two. Here, S(y) is discretized by using the mid-point rule to ensure that the method is symmetric. Other choices, such as S(y_0) and S(y_1), still guarantee the energy preservation, though the resulting integrator is of order one. It should be noted that in (2.5) the ∇H(y) term is discretized in a CSRK manner, while S(y) is discretized in a standard RK manner. This observation leads to the following class of numerical integrators, applied to a partitioned system: find Y_τ, Z_1, . . ., Z_s, y_1 and z_1 such that they satisfy the defining relations (2.6)--(2.7) with y_0 = z_0. A one-step method y_0 → y_1 of this form is called an s-degree partitioned CSRK (PCSRK) method.
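The modified scheme (2.5) can be sketched in a few lines of Python for an assumed test problem, not from the paper: the free rigid body ẏ = y × ∇H(y) with quadratic H, so that the averaged gradient reduces to ∇H at the midpoint. The plain fixed-point loop and the problem data are illustrative choices:

```python
def cross(a, b):
    """Cross product of two 3-vectors; y x v realises the skew map S(y)v."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def ep_step_rigid_body(y, h, inertia=(1.0, 2.0, 3.0), iters=50):
    """One step of a scheme in the spirit of (2.5) for the free rigid body
    dy/dt = y x grad H(y), H(y) = 0.5 * sum(y_i**2 / I_i) (assumed test
    problem).  H is quadratic, so the averaged gradient in (2.5) equals
    grad H at the midpoint; S is also evaluated at the midpoint."""
    ynew = list(y)
    for _ in range(iters):  # plain fixed-point iteration (small h assumed)
        mid = [(a + b) / 2.0 for a, b in zip(y, ynew)]
        grad = [m / I for m, I in zip(mid, inertia)]
        ynew = [a + h * c for a, c in zip(y, cross(mid, grad))]
    return ynew
```

At the fixed point, H(y_1) − H(y_0) = ∇H(mid)·(y_1 − y_0) = h ∇H(mid)·(mid × ∇H(mid)) = 0, so the energy error of one step is at the level of the iteration residual.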
We note that by definition Z_i = Y_{c_i} and y_1 = z_1; thus, the scheme can be written in a more compact form. The expression in Definition 2.2, however, is useful for discussing the order conditions.
As A_{i,τ,j,ζ} depends on τ, j and ζ, but not on i, it is convenient to express it in terms of constant matrices M_j ∈ R^{s×s} for j = 1, . . ., s. For example, the second-order method proposed by Cohen and Hairer [4] is given by the matrices below with c_1, c_2 = 1/2 ∓ √3/6. We observe that M_1 + M_2 coincides with M in (2.3). Thus, the method reduces to the AVF collocation method [7] when S is constant. It should be noted that both M_1 and M_2 are singular while M_1 + M_2 is nonsingular. This indicates that one cannot expect the M_i to be characterized as the inverses of some matrices. Sufficient conditions for PCSRK methods to be energy-preserving are characterized in terms of A_{i,τ,j,ζ} or, equivalently, the matrices M_i.
Theorem 2.2 ([11]) When applied to (1.1), a PCSRK method is energy-preserving if all the M_i are symmetric.

The conditions for the method to be symmetric are characterized as follows.
Theorem 2.3 (cf. [4])

Order conditions will be discussed in the next section.

A family of fourth-order energy-preserving integrators
In this section, we derive a family of fourth-order energy-preserving PCSRK methods. Specifically, we set s = 3 and aim to derive a 3-degree PCSRK integrator with some free parameters. We begin with a few remarks on the order conditions. In the formulation of the PCSRK method (Definition 2.2), it is not necessary to calculate z_1, because only y_1 is required as an output to proceed with the subsequent steps. The expression for y_1 is given as a P-series, where φ, σ and F denote the elementary weights, the symmetry coefficients and the elementary differentials, respectively. The symbol T_{Py} denotes the set of bi-coloured trees with black roots.
For more details about the order conditions with bi-coloured trees, refer to [8].
A PCSRK method is of order p if it satisfies the order conditions for all trees τ with |τ| ≤ p, where |τ| denotes the order of τ, i.e., the number of vertices of τ.
Here, e(τ) for the mono-coloured trees is defined as follows, and e(τ) for the bi-coloured trees takes the same value as for the corresponding mono-coloured trees.
Remark 3.1 Since checking all the order conditions is an immense task, one approach to avoid checking every condition relies on the simplifying assumptions. For the PCSRK methods, the simplifying assumptions are formulated analogously. As discussed in [4], a method satisfying B(ρ), C(η), Ĉ(η), D(ξ), D̂(ξ) has order at least p = min(ρ, 2η + 2, ξ + η + 1). Utilizing simplifying assumptions is effective for deriving energy-preserving CSRK methods with some degrees of freedom for Hamiltonian systems. However, there is a subtle yet crucial difference between CSRK methods applied to Hamiltonian systems and PCSRK methods applied to Poisson systems. For Hamiltonian systems, if a CSRK method is energy-preserving, i.e., M = M^⊤, then B(1) implies that B(ρ) is satisfied for all ρ = 1, 2, . . . . In other words, a consistent energy-preserving CSRK method automatically guarantees B(ρ). This is a very important property, which further simplifies the discussion based on the simplifying assumptions. However, this is not the case for Poisson systems.

Now we set s = 3, and our strategy is as follows. First, as a sufficient condition for the method to be energy-preserving and of order 4, we require that the method coincide with the 3-degree energy-preserving CSRK methods (2.4) when S is constant. We also require the sufficient conditions for being energy-preserving (Theorem 2.2) and symmetric (Theorem 2.3). The remaining task is to ensure the order conditions for the trees with three vertices and to characterize M_1, M_2, M_3 and c_1, c_2, c_3 satisfying all assumptions and requirements as simply as possible.
As a sufficient condition, we require (3.1). In addition to this, we require that M_1, M_2, M_3 be symmetric and that the method itself be symmetric. Under the assumption that M_1, M_2, M_3 are symmetric, Theorem 2.3 indicates that the method is symmetric if a certain condition on the M_i holds and c_2 = 1/2, c_1 + c_3 = 1. Note that M_2 can be arbitrary as long as it is symmetric.
To ensure that the method is of order 4, the method needs to satisfy the order conditions for the bi-coloured trees with up to four vertices. Note that several of these order conditions are automatically satisfied, because assuming (3.1) means that the method is already of order at least 4 when applied to a Hamiltonian system. More precisely, the order conditions for the trees of order up to 4 which have only black nodes are automatically satisfied.
The assumptions of the following proposition aid in constructing the intended integrators.
Proposition 3.1 Assume that a 3-degree PCSRK method satisfies the conditions (3.2)--(3.6). Then, when applied to Poisson systems, the method is energy-preserving, symmetric, and of order at least 4.
Proof From Theorem 2.2, the condition (3.2) ensures the energy preservation. The condition (3.3) guarantees that the method is of order at least 4 when applied to the cases where S is constant, indicating that, for Poisson systems, the order conditions for the trees of order up to 4 with only black nodes are automatically satisfied. The conditions (3.4) and (3.5) relate to the symmetry of the method, implying that only trees with three vertices must be taken into account to guarantee that the method is of order at least 4.
Using M = M_1 + M_2 + M_3 and (3.5), we observe that the condition (3.6) can be rewritten accordingly, and the first and second constraints (rows) are evidently compatible. Consequently, we only need to consider (3.7). The remaining task is to determine or characterize M_1, M_2, M_3 and c_1, c_2, c_3 so that they satisfy the conditions (3.2)--(3.7). The condition (3.7) can be rewritten as (3.8) and (3.9). It is found that the symmetric matrix M_3 satisfying (3.8) and (3.9) can be expressed in terms of parameters γ_1, γ_2, γ_3, γ_4. We can choose c_1 < 1/2 arbitrarily as long as c_1 ≠ 0, and γ_1, γ_2, γ_3, γ_4 are arbitrary real numbers. Accordingly, M_1 is determined from (3.4). Thus, we can regard c_1, γ_1, γ_2, γ_3, γ_4 and α in (3.3) as free parameters of the fourth-order methods.

Remark 3.2
The exploration of PCSRK methods of degree greater than three that are energy-preserving and of order at least four remains a challenge. Of course, formulating the conditions corresponding to (3.2)--(3.7) is relatively straightforward. However, special attention must be paid to the (3.6)-type condition, because it no longer represents a single constraint. For example, when s = 4 or 5, the condition implies two independent constraints. This condition corresponds to the simplifying assumption C(η) with k = 2 and l = 1, which allows for a reduction in the number of order conditions. Specifically, in the above derivation, C(η) with k = 2 and l = 1 constitutes a single constraint, which simplifies the calculation.
When s > 3, it is crucial to carefully consider whether to employ the assumption (3.10) or to treat the relevant elementary weights independently. The advantage of imposing (3.10) lies in the fact that the conditions on the M_i remain linear.
The author believes that a similar approach might help construct higher-order methods. However, in order to streamline the derivation process, simplifying assumptions or their variants should be incorporated. One challenge to address is expressing these assumptions in terms of the M_i matrices.
This paper has focused on the derivation of 3-degree fourth-order methods because of their apparent practical utility.
Remark 3.3 The discussion presented above pertains to the case where C_{i,τ} = τ. However, more general cases can also be examined, as has been done for Hamiltonian systems [12]. Further exploration of these cases is not pursued here because of the cumbersome presentation and the limited practical advantages they appear to offer compared with the methods derived above.

Remark 3.4 A function C(y) is referred to as a Casimir function if the condition ∇C(y)^⊤ S(y) = 0 holds for all y. When the Casimir takes the quadratic form C(y) = y^⊤ A y with a symmetric constant matrix A, the s-degree 2s-order PCSRK method [4] exactly inherits the Casimir. However, the newly introduced fourth-order integrators cannot generally preserve the Casimir. This limitation may be considered a potential drawback of the new family.

Parameter selection
This section addresses implementation concerns. We begin by outlining a strategy for implementing PCSRK methods in a general context and then proceed to investigate parallelizable integrators. As shown below, the parallelizability is characterized in terms of α only. We also investigate the choice of the other parameters.

Solving the nonlinear system of equations
Let us express Y_τ in terms of the polynomials l_i(τ) defined with c_0 = 0. Note that Y_{c_i} = Z_i. We now regard (2.6) and (2.7) as a system of nonlinear equations in terms of Y_{c_1}, . . ., Y_{c_s}. It is not necessary to evaluate (2.6) at c_1, . . ., c_s as long as it is evaluated at s distinct points; however, we use c_1, . . ., c_s to avoid a cumbersome presentation. In the first '≈', the term involving ∇_{Y_{c_j}} S(Y_{c_i}) is omitted to avoid the appearance of tensors; even if such a term is retained, the following discussion remains valid. In this way, we obtain an approximate Jacobian matrix, and a simplified Newton-like method gives the iteration formula.
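The structure of such a simplified Newton-like iteration, with the approximate Jacobian factored once and reused across iterations, can be sketched generically as follows; the function names and the scalar toy problem in the usage are illustrative assumptions, not the paper's system (4.2):

```python
def simplified_newton(F, W_solve, Y0, tol=1e-12, maxit=50):
    """Generic simplified Newton iteration: the approximate Jacobian W
    of F is factored once and reused, so each iteration costs only one
    residual evaluation plus one linear solve (W_solve returns W^{-1} r)."""
    Y = list(Y0)
    for _ in range(maxit):
        r = F(Y)
        if max(abs(x) for x in r) < tol:  # residual-based stopping test
            break
        d = W_solve(r)
        Y = [y - di for y, di in zip(Y, d)]
    return Y

# Toy usage: solve Y[0]**2 - 2 = 0 with the Jacobian frozen at Y0 = 1.5,
# i.e. W = F'(1.5) = 3 throughout the iteration.
root = simplified_newton(lambda Y: [Y[0] ** 2 - 2.0],
                         lambda r: [r[0] / 3.0],
                         [1.5])
```

Because W is frozen, the convergence is linear rather than quadratic, but each step is cheap; this is the trade-off exploited below when W has an exploitable Kronecker structure.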

Parallelizable integrators
As is the case with implicit Runge--Kutta methods and continuous-stage Runge--Kutta methods, if the matrix E has only real, distinct eigenvalues, the linear system (4.2) of size sN can be solved efficiently on parallel architectures. If all eigenvalues of E are real and distinct, there exists a matrix T such that T^{-1}ET is diagonal, and the transformed system is block diagonal. Therefore, each ρ_l can be calculated independently. The key point is that the linear system for the ρ_l of size sN decouples into s linear systems of size N. Thus, the most computationally heavy part of updating Y^l can be computed in parallel. We note that T is a 3-by-3 matrix, and thus the explicit computation of T^{-1} and the multiplication of T^{-1} ⊗ I_N with a vector are not particularly demanding. We also note that E is identical to the matrix appearing in the study of energy-preserving methods for Hamiltonian systems [12]. Theorem 4.1 indicates that the parallelizability does not depend on the c_i values: if the method is parallelizable when applied to Hamiltonian systems, it is also parallelizable when applied to Poisson systems.
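The diagonalisation-based solve can be sketched as follows, assuming NumPy is available. The sketch solves a system with the assumed form (I_{sN} − h (E ⊗ J)) x = b, which mirrors the Kronecker structure described above; the matrices E and J and the right-hand side used for testing are toy data, with J standing in for the frozen N×N Jacobian block:

```python
import numpy as np

def solve_stage_system(E, J, h, rhs):
    """Solve (I_{sN} - h * kron(E, J)) x = rhs by diagonalising E.
    Assumes E (s x s) has real, distinct eigenvalues; J is N x N.
    The sN x sN solve decouples into s independent N x N solves,
    which could run on s cores in parallel."""
    s, N = E.shape[0], J.shape[0]
    lam, T = np.linalg.eig(E)        # E = T diag(lam) T^{-1}
    lam, T = lam.real, T.real        # valid under the real-eigenvalue assumption
    r = np.linalg.solve(T, rhs.reshape(s, N))  # apply T^{-1} (x) I_N
    x = np.empty_like(r)
    for l in range(s):               # s independent size-N solves
        x[l] = np.linalg.solve(np.eye(N) - h * lam[l] * J, r[l])
    return (T @ x).reshape(-1)       # map back with T (x) I_N
```

In a parallel implementation, the loop body is what runs concurrently; the T^{-1} and T applications are small dense operations, matching the remark that they are not demanding.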
For the method satisfying the assumptions in Proposition 3.1, only the coefficients for the bi-coloured trees of the shapes expressed in terms of (C) or (D) cannot coincide with the exact ones if α lies in the range (4.3).
We also note that, apart from constructing efficient fourth-order integrators, if we set α = 5, all the coefficients for the trees with 5 vertices can be exact. However, the condition (4.4) indicates that there remains one degree of freedom. Therefore, 3-degree sixth-order integrators are not unique, which suggests that there exists a family of s-degree 2s-order energy-preserving integrators (for s = 1 and 2, such integrators are unique).

Numerical verification
As a simple test confirming the order of accuracy and the effect of the choice of parameters for the proposed method, we employ the Lotka--Volterra system. For the numerical experiment, the parameters were set to a = −2, b = −1, c = −0.5, ν = 1, µ = 2, and the initial values to y_0 = (1.0, 1.9, 0.5). A high-order quadrature with tolerance 10^{−12} is used to approximate the integrals appearing in the integrator. The integrator indeed preserves the Hamiltonian H(y). For instance, when the step size is set to h = 0.05, the error between the true and numerical Hamiltonians grows as the time integration proceeds, owing to rounding errors and the use of quadrature; however, at t = 10, this error remains below 10^{−12}. This level of accuracy highlights the energy preservation, especially when compared with the error in the Casimir invariant at the same point in time, which exceeds 0.01.
We now check that the proposed integrator is actually of order 4. We set γ_1, . . ., γ_4 to the values in (4.5); this choice corresponds to the 6th-order integrator presented in [4] when α = 5. In Fig. 4.1, the error behaviour of the proposed method with this choice of parameters and α = −234 is compared with the 2nd- and 4th-order integrators proposed in [4]. This figure confirms that the proposed integrator attains 4th-order accuracy. While the error of the proposed method is substantially larger than that of AVF(4), efficiency could still favour the proposed method if its computation is more than twice as fast as that of AVF(4).

Fig. 4.1 Error at t = 1 of numerical solutions for the Lotka--Volterra system. The proposed method with the parameters (4.5) and α = −234 is compared with the 2nd- and 4th-order methods proposed in [4]. Dashed and dotted lines show the slopes for 2nd- and 4th-order convergence, respectively.
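The slopes in a convergence plot like Fig. 4.1 can be turned into observed orders mechanically by step-halving; a small helper, with synthetic fourth-order error data standing in for measured errors:

```python
import math

def observed_order(errors):
    """Estimated convergence orders from errors at successively halved
    step sizes: p ~ log2(e_h / e_{h/2}).  For a method of order p the
    error drops by a factor 2^p per halving, so each entry should
    approach p."""
    return [math.log2(errors[i] / errors[i + 1])
            for i in range(len(errors) - 1)]

# Synthetic data e = C * h^4 at h, h/2, h/4 (illustrative, not measured):
print(observed_order([16.0, 1.0, 0.0625]))  # [4.0, 4.0]
```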
Remark 4.1 We discuss the efficiency of the proposed method in more detail. It has been observed that, to achieve a comparable level of accuracy, the proposed method requires roughly half the step size. This means that if a single time step of the proposed method can be computed at least twice as fast as a step of AVF(4), the proposed method is more efficient.
A comparative analysis of the two methods is as follows. We examine a scenario where the linear system of size 2d required for AVF(4) is solved using a direct method on a single core, whereas the three linear systems of size d for the proposed method are solved using the same direct method, but potentially on three cores in parallel. Assuming that the simplified Newton iterations for both methods require a nearly identical number of iterations to converge, the proposed method is then approximately eight times faster per step, because the computational cost of a direct solver typically scales with the cube of the system size.
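The eight-fold figure is just cubic-cost arithmetic; a back-of-envelope helper making the assumed cost model explicit (a model only, not a measured timing):

```python
def speedup_estimate(d, cores=3):
    """Cost ratio under the cubic direct-solver model from the text:
    AVF(4) solves one dense system of size 2d on one core,
    cost ~ (2d)^3, while the proposed method solves three size-d
    systems, cost ~ 3*d^3 spread over `cores` cores."""
    avf4_cost = (2 * d) ** 3
    proposed_cost = 3 * d ** 3 / cores  # wall-clock: solves overlap
    return avf4_cost / proposed_cost

print(speedup_estimate(1000))  # 8.0 -- independent of d with 3 cores
```

With a single core the model still predicts a factor 8/3, since three size-d solves remain cheaper than one size-2d solve.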
However, it is important to note that such estimates are contingent on the specifics of the problem and the choice of linear solvers. In the case of AVF(4), further parallelization might be achievable; nonetheless, most parallelization techniques can also be applied to the proposed method. Therefore, expecting the proposed method to compute a single step at least twice as fast as AVF(4) appears to be a reasonable assumption in most instances, particularly for large-scale problems.

Concluding remarks
In this paper, we have presented a family of fourth-order energy-preserving integrators based on the partitioned CSRK methods. The integrators can be implemented on a parallel architecture if one of the parameters is chosen so that it satisfies a certain inequality. The actual implementation must be tailored to the problem at hand, and it is anticipated that these integrators, with their parallelizability, will demonstrate substantial efficiency when applied to large-scale problems. A more detailed evaluation of their performance in large-scale contexts will be the subject of forthcoming research, as a similar evaluation was carried out for the CSRK methods [16].
Table A.5 Coefficients of the elementary differentials with trees of the indicated shape with the black root for the P-series expansions of the exact solution, the fourth-order AVF collocation method, and the proposed method.

Table A.6 Coefficients of the elementary differentials with trees of the indicated shape with the black root for the P-series expansions of the exact solution, the fourth-order AVF collocation method, and the proposed method.

Theorem 4.1 ([12]) The eigenvalues of the matrix E defined in (4.1) are independent of the c_i values.