Performance of implicit A-stable time integration methods for multibody system dynamics

This paper illustrates the performance of several representative implicit, A-stable time integration methods with algorithmic dissipation for multibody system dynamics, formulated as a set of mixed implicit first-order differential and algebraic equations. The integrators include linear multi-step methods with two to four steps, single-step reformulations of the linear multi-step methods, and explicit first-stage, singly diagonally-implicit Runge–Kutta methods. All methods are implemented in the free, general-purpose multibody solver MBDyn; their formulations and implementation are presented. Based on linear analysis and numerical experiments, some general conclusions on the selection of integration schemes and their implementation are drawn. Although all of these methods can predict reasonably accurate solutions, each has specific advantages in different situations, which are discussed.


Introduction
Multibody system dynamics problems can typically be formulated as a set of Differential-Algebraic Equations (DAEs), often in semi-explicit form. The numerical treatment of DAEs is more challenging than that of Ordinary Differential Equations (ODEs). Typically, two strategies are employed: direct discretization of the DAEs, and discretization after reformulation [3]. Reformulation usually consists of some sort of index reduction that converts the DAEs into ODEs, and thus allows the problems to be solved using relatively conventional methods. However, this process may be costly, since it may require substantial user intervention, and it may be convenient only when a substantial reduction in coordinates can be achieved, which is not the case, for example, when the analyzed mechanisms are made of flexible components. Direct discretization has therefore gained more attention in software development.
Direct discretization poses strict requirements on time integration methods. Explicit integrators, such as the central difference method [13] and the explicit Runge-Kutta methods [10], are difficult to use to solve DAEs directly because they cannot satisfy the algebraic constraint equations at the position level. Implicit integrators, including the linear multi-step methods [23,24], the implicit Runge-Kutta methods [9] and many direct integration methods [30], are required. They are designed to possess good accuracy, A-stability (unconditional stability), and tunable algorithmic dissipation to provide accurate and robust solutions.
(H. Zhang, huimin.zhang@polimi.it; 1 School of Aeronautic Science and Engineering, Beihang University, Beijing 100083, China; 2 Dipartimento di Scienze e Tecnologie Aerospaziali, Politecnico di Milano, Milano 20156, Italy)
Implicit time integrators are briefly reviewed here. In the field of structural dynamics, several single-step single-solve methods, such as the Newmark method [25], the Wilson-θ method [31], the HHT-α method [17] (proposed by Hilber, Hughes and Taylor), the generalized-α method [11,22], the GSSSS (Generalized Single-Step Single-Solve) method [37], and many others [18,32], have been developed since the 1950s. Most of these methods have second-order accuracy, A-stability and tunable algorithmic dissipation from linear analysis. They were initially designed to solve second-order ODEs in structural dynamics, and some of their improved formulations can also be used to solve DAEs and general first-order differential equations [2,8,20]. Some comparisons between the linear two-step method and single-step methods have already been presented in [34,36] and are not reproduced here. Therefore, these single-step methods are not considered further in this work.
In the class of multi-step methods, the linear two-step method [24] and several backward difference formulas (BDFs) [19,28] have been efficiently used in multibody system dynamics. The optimal parameters of the linear three- and four-step methods with second-order accuracy, A-stability and tunable algorithmic dissipation were given in [36]. According to Dahlquist's second barrier [12], the linear multi-step methods cannot exceed second-order accuracy while possessing A-stability. To eliminate the additional starting procedures of the multi-step methods, their equivalent single-step reformulations, obtained by introducing a few auxiliary variables, have also been proposed in [36]. These multi-step and equivalent single-step integrators are designed for systems of first-order differential equations, and can be generalized to solve second- and higher-order differential equations in a straightforward way.
Another important branch of implicit integrators are the multi-stage methods, represented by the Runge-Kutta family [10]. They evaluate the states at intermediate time points within each step, and compute the states at the discrete time points using a quadrature-like formula. For solving DAEs, the stiffly-accurate Runge-Kutta methods, which omit the final quadrature step, are more practical, because the constraints are satisfied at the final stage but may not be satisfied after applying the quadrature formula. Multi-stage methods can be designed to have higher-order accuracy and A-stability simultaneously, as in [1,6,21,35]. Considering the computational cost, the singly diagonally-implicit Runge-Kutta methods [27], which perform the computation of the stages in sequence, are more convenient and recommended. Consequently, a few recently proposed second- and higher-order stiffly-accurate, singly diagonally-implicit Runge-Kutta methods with an explicit first stage [21,35] are employed in this work.
The purpose of this work is to present a comparative study of several representative implicit, A-stable and algorithmically dissipative time integration methods for multibody system dynamics. The time integration methods employed include the linear multi-step methods [23,24,36], their equivalent single-step methods [36], and stiffly-accurate, explicit first-stage, singly diagonally-implicit Runge-Kutta (ESDIRK) methods [21,35]. These methods are implemented in the free general-purpose multibody dynamics analysis software MBDyn [24]. Their formulation is presented in Sect. 2, whereas Sect. 3 describes their implementation in MBDyn. Considering a linear problem, the accuracy, algorithmic dissipation and dispersion properties of these methods are discussed in Sect. 4. These methods are then applied to solve some benchmark multibody dynamics problems in Sect. 5. By comparing the numerical results, several general conclusions on the implementation and selection of time integration methods are finally summarized in Sect. 6.

Formulation
Initial-value problems in MBDyn are formulated as a set of implicit first-order DAEs, whose general form is r(y, ẏ, t) = 0, where y collects the differential and algebraic variables, and the overdot indicates the derivative with respect to time t. The initial condition y_0 is given, and ẏ_0 needs to be solved from r(y_0, ẏ_0, t_0) = 0. For a detailed problem description, the reader should refer to [24]. Problems associated with constrained dynamics, as discussed later in Sect. 5, are formulated as index-3 DAEs, by directly enforcing the holonomic kinematic constraints at the position level through algebraic relations between the coordinates of the problem using Lagrange multipliers. No elaborate scaling strategy is used, except for scaling the Jacobian matrix contribution by the inverse of Δt b_0, the coefficient that multiplies the derivative of the state at the current time step in implicit numerical schemes, as proposed in [7]. No hidden constraints, e.g., at the velocity level, are considered.
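To make the formulation concrete, the following minimal sketch (a hypothetical stand-alone Python example, not MBDyn code; names and sign conventions are illustrative) writes a planar pendulum as an implicit first-order index-3 DAE r(y, ẏ, t) = 0, with the holonomic constraint enforced at the position level through a Lagrange multiplier, and verifies a consistent set of initial derivatives:

```python
import numpy as np

# Planar pendulum as an implicit index-3 DAE r(y, ydot, t) = 0.
# State y = [x, z, vx, vz, lam]; the constraint is kept at position level.
m, g, L = 1.0, 9.81, 1.0

def residual(y, ydot, t):
    x, z, vx, vz, lam = y
    xd, zd, vxd, vzd, _ = ydot
    return np.array([
        xd - vx,                      # kinematic relations
        zd - vz,
        m * vxd + 2.0 * lam * x,      # momentum balance + constraint reaction
        m * vzd + 2.0 * lam * z + m * g,
        x * x + z * z - L * L,        # holonomic constraint (position level)
    ])

# Consistent initial derivatives: with y0 = [L, 0, 0, 0, 0] at rest,
# r(y0, ydot0, t0) = 0 is satisfied by ydot0 = [0, 0, 0, -g, 0].
y0 = np.array([L, 0.0, 0.0, 0.0, 0.0])
ydot0 = np.array([0.0, 0.0, 0.0, -g, 0.0])
print(np.max(np.abs(residual(y0, ydot0, 0.0))))  # ~0
```

Here ẏ_0 was found by hand; in general it is obtained by solving r(y_0, ẏ_0, t_0) = 0 numerically, as stated above.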

Linear multi-step methods
The linear multi-step methods can be expressed as

y_k = Σ_{j=1}^{r} α_j y_{k−j} + Δt Σ_{j=0}^{r} β_j ẏ_{k−j},

where y_k and ẏ_k represent the numerical solution at time t_k, Δt = t_k − t_{k−1} is the time step size, and α_j and β_j are the control parameters of the r-step method. Considering the conditions of second-order accuracy, A-stability, and tunable algorithmic dissipation, the optimal parameters of the linear two-, three-, and four-step methods, referred to as LMS2, LMS3 and LMS4, can be found in [36]. They are controlled by a single parameter ρ∞ ∈ [0, 1] that adjusts the degree of algorithmic dissipation; the dissipation becomes stronger as ρ∞ decreases. When ρ∞ = 0, they reduce to a second-order BDF. The parameters of LMS2 for variable-time-step cases were given in [24], whereas so far LMS3 and LMS4 have been formulated only for fixed time steps. The integration procedure of the multi-step methods is initialized by employing the trapezoidal rule, which is A-stable and second-order accurate, for the first few steps.
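As a concrete instance, the ρ∞ = 0 member of the two-step family reduces to the classical second-order BDF; the sketch below (illustrative only, with the implicit relation solved in closed form for the linear test equation ẏ = λy) exercises it:

```python
import numpy as np

# BDF2 (the rho_inf = 0 member of the two-step family) on ydot = lam*y:
# y_k = (4/3) y_{k-1} - (1/3) y_{k-2} + (2/3) dt * lam * y_k,
# solved in closed form here; a nonlinear problem would need Newton.
lam, dt, n = -1.0, 0.01, 100
y = np.empty(n + 1)
y[0] = 1.0
y[1] = np.exp(lam * dt)        # exact value to start the two-step scheme
for k in range(2, n + 1):
    rhs = (4.0 * y[k - 1] - y[k - 2]) / 3.0
    y[k] = rhs / (1.0 - (2.0 / 3.0) * dt * lam)

err = abs(y[n] - np.exp(lam * dt * n))
print(err)   # small: the scheme is second-order accurate
```

In practice the first step would be produced by the trapezoidal rule, as described above; the exact value is used here only to isolate the behaviour of the two-step scheme itself.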

Equivalent single-step methods
The single-step method proposed in [36], which can be used as an equivalent alternative to the linear multi-step method, has the form of Eqs. (4a)-(4e), where ẏ_k^j (j = 1, 2, …, r − 1) are auxiliary variables and γ_i are control parameters. At the beginning, ẏ_0^1 = ẏ_0^2 = ⋯ = ẏ_0^{r−1} = ẏ_0 is used. The parameters of the single-step methods equivalent to LMS2, LMS3 and LMS4, referred to as SS2, SS3 and SS4, respectively, were also given in [36]. SS3 and SS4 have complex parameters, so the auxiliary variables they produce may be complex numbers, but the state variables remain real, as demonstrated in [36].

Explicit first-stage, singly diagonally-implicit Runge-Kutta (ESDIRK) methods
The stiffly-accurate s-stage ESDIRK is represented by its Butcher tableau [10], whose entries γ, a_ij, c_i and b_i are the control parameters. Considering the implicit first-order equation (1), the intermediate stages are formulated as

r(y_{k−1+c_i}, ẏ_{k−1+c_i}, t_{k−1+c_i}) = 0,  y_{k−1+c_i} = y_{k−1} + Δt Σ_{j=1}^{i} a_{ij} ẏ_{k−1+c_j},  i = 2, …, s − 1,

and the last stage is

r(y_k, ẏ_k, t_k) = 0,  y_k = y_{k−1} + Δt Σ_{j=1}^{s} a_{sj} ẏ_{k−1+c_j},

where c_1 = 0 and c_s = 1. The computation of the stages can be performed in sequence. Except for the explicit first stage, every other stage uses an implicit single-step or multi-step formula; all implicit stages share the same effective stiffness matrix in linear analysis and the same form of the Jacobian matrix in nonlinear iterations. In this procedure the state variables at the discrete time points are output, whereas those at the intermediate stages are not. Therefore, the multi-stage methods are essentially single-step schemes, which use only information from the last step to update the current one. The two-sub-step ρ∞-Bathe method [26], which can be seen as a 3-stage ESDIRK with second-order accuracy, A-stability and tunable algorithmic dissipation, is also considered; its parameters are given in [26]. The multi-sub-step methods MSSTC(n) and MSSTH(n), (n + 1)-stage ESDIRKs proposed in [35], are employed as well. MSSTC(n) is designed to have second-order accuracy, A-stability and tunable algorithmic dissipation, preserving the low-frequency dynamics as much as possible. It employs the trapezoidal rule from the second to the nth stage, and a general formula in the last one. The optimal parameters of MSSTC(3,4,5) were given in [35].
MSSTH(n) is designed to have nth-order accuracy, A-stability, and tunable algorithmic dissipation. However, since only linear analysis was considered in [35], its parameters are modified here to satisfy the overall-order and stage-order conditions for Runge-Kutta methods [10,16,21], without changing the linear characteristics. The design of the modified MSSTH(3,4,5) considers the following conditions:
- each stage, from the second one, has at least second-order accuracy;
- the overall accuracy is of nth order and, on this basis, the local truncation error is minimized;
- A-stability and tunable algorithmic dissipation hold in linear analysis.
Under the premise of A-stability, MSSTH(n) is designed to achieve the highest possible accuracy for a given ρ ∞ . The details of the parameter selection are shown in the Appendix.
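A generic stiffly-accurate ESDIRK step can be sketched as follows. Since the ρ∞-Bathe and MSSTH coefficients are not reproduced here, this illustrative example uses the well-known TR-BDF2 tableau (γ = 1 − √2/2), itself a 3-stage ESDIRK with an explicit first stage; the implicit stages are solved in closed form for the linear test problem ẏ = λy:

```python
import numpy as np

# Minimal stiffly-accurate ESDIRK step with the classical TR-BDF2 tableau,
# standing in for the rho_inf-Bathe / MSSTC / MSSTH coefficients of the text.
gam = 1.0 - np.sqrt(2.0) / 2.0
A = np.array([[0.0, 0.0, 0.0],
              [gam, gam, 0.0],
              [np.sqrt(2.0) / 4.0, np.sqrt(2.0) / 4.0, gam]])
c = np.array([0.0, 2.0 * gam, 1.0])   # stiffly accurate: b equals the last row

lam = -2.0                            # test problem ydot = lam * y
f = lambda t, y: lam * y

def esdirk_step(y, t, dt):
    k = np.zeros(3)
    k[0] = f(t, y)                    # explicit first stage: no solve needed
    for i in (1, 2):
        rhs = y + dt * sum(A[i, j] * k[j] for j in range(i))
        # implicit stage, in closed form for this linear problem; a nonlinear
        # problem would use Newton with the same diagonal coefficient gam
        k[i] = f(t + c[i] * dt, rhs) / (1.0 - dt * A[i, i] * lam)
    return y + dt * (A[2, :] @ k)     # last-stage value (c_s = 1)

y, dt = 1.0, 0.05
for n in range(20):
    y = esdirk_step(y, n * dt, dt)
print(abs(y - np.exp(lam)))           # small: second-order accurate
```

Note how the diagonal coefficient γ is the same in every implicit stage, which is exactly what makes the stages share one effective stiffness matrix, as stated above.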

Implementation
The detailed problem description, solution phases and implementation structure of MBDyn were presented in [24]. Time integrators are defined in the class ImplicitStepIntegrator. The multi-step, single-step, and multi-stage integrators are constructed using the template classes tplStepNIntegrator, tplSingleStepIntegrator and tplStageNIntegrator, respectively. They provide an Advance() method to perform a complete step. For multi-stage integrators, the operations of all stages are encapsulated in Advance(), so the solutions at the internal stages are hidden.
Starting from the initial condition at t_0, the multi-step and single-step integrators sequentially calculate the state variables at t_1, t_2, t_3, …, t_k, …, whereas the ESDIRKs solve, in turn, the results at t_{0+c_2}, t_{0+c_3}, …, t_{0+c_{s−1}}, t_1, t_{1+c_2}, t_{1+c_3}, …, t_{1+c_{s−1}}, t_2, …, t_k, …. Note that the explicit first stage of the ESDIRKs does not require any calculation. All methods show a very similar structure at each time point; the difference lies in the parameters and in the number of previous time points used. The single-step methods must additionally solve and store the auxiliary variables. Therefore, the implementation at a generic discrete time point t_N, which represents all time points, including those at the end of each step t_k and the internal time points of the ESDIRKs, is explained uniformly in this section. For the multi-step and single-step methods, the information at t_{N−1}, t_{N−2}, …, used at the current time point t_N consists of the states of the previous steps. For the ESDIRKs, it consists of the states of the previously solved time points, which can belong to the last step and the last stages.
At the discrete time point t_N, the state variables y_N and ẏ_N are obtained by solving Eqs. (8a)-(8b), where Eq. (8a) is the residual equation r(y_N, ẏ_N, t_N) = 0 and Eq. (8b) represents the integration scheme: Eq. (2) for the multi-step integrators, and Eq. (5) or Eq. (6) for the ESDIRKs. For the single-step methods, the scheme needs to be reorganized from Eqs. (4a)-(4e); SS2 and SS3 use different reorganized forms.

Prediction
A predictor-corrector approach [24] is used to solve Eqs. (8a)-(8b) in MBDyn. Prediction and correction are two separate, independent, and consecutive phases. The objective of the prediction phase is to determine a tentative value of the solution to start the correction. Here ẏ_N^{(0)} denotes the predicted derivative, from which y_N^{(0)} can be obtained directly through the integration scheme. The scheme used for prediction has minimal effect on the accuracy of the solution, but it may affect the number of iterations required, as shown in Sect. 5: the closer the predicted value is to the final solution, the fewer iterations are required. Consequently, the criterion adopted here for choosing the prediction scheme is that it should not employ extra information that is not used in the integration scheme, and that it should have an accuracy order similar to that of the integration scheme.
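The effect of the predictor on the iteration count can be demonstrated on a toy problem. The sketch below is illustrative only: it uses backward Euler as a stand-in scheme on the implicit problem r = ẏ + y³ = 0 and compares a constant predictor with a simple linear extrapolation; the better prediction should never need more Newton iterations:

```python
# One backward Euler step solves g(y) = y - y_prev + dt*y^3 = 0 by Newton.
def newton_step(y_prev, dt, guess, tol=1e-12):
    y, it = guess, 0
    while True:
        g = y - y_prev + dt * y ** 3
        if abs(g) < tol or it >= 50:
            return y, it
        y -= g / (1.0 + 3.0 * dt * y ** 2)   # g'(y) = 1 + 3*dt*y^2
        it += 1

def run(extrapolate, dt=0.1, nsteps=20):
    y, prev, total = 1.0, None, 0
    for _ in range(nsteps):
        # constant predictor vs. linear extrapolation of the state
        guess = 2.0 * y - prev if (extrapolate and prev is not None) else y
        ynew, it = newton_step(y, dt, guess)
        total += it
        prev, y = y, ynew
    return total

it_const, it_extrap = run(False), run(True)
print(it_const, it_extrap)   # the better predictor needs no more iterations
```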
For the multi-step integrators, ẏ_N^{(0)} is predicted by a second-order explicit scheme, Eq. (12). Its local truncation error is obtained by expanding the right-hand side at t_N by Taylor's theorem; requiring the error to vanish up to second order, i.e., imposing second-order accuracy, yields the prediction coefficients. For the single-step integrators, the integration schemes use only the states of the last step, so, to make them truly single-step, the constant prediction ẏ_N^{(0)} = ẏ_{N−1} of Eq. (15) is used. For the ESDIRKs, if t_N is in the second stage, which uses only the states of the last time point, the constant prediction of Eq. (15) is employed. If t_N is in the third to (s − 1)th stage, since the integration schemes of these stages are all second-order accurate, the second-order prediction of Eq. (12) is used. Note that in these cases t_N is the current time and t_{N−1} is the time of the last stage, so t_N − t_{N−1} in Eq. (12) is not the time step size Δt. The accuracy of the last stage has the same order as the overall accuracy, so the second-order ρ∞-Bathe method and MSSTC(3,4,5), as well as the third-order MSSTH(3), ESDIRK3(2)4L[2]SA and ESDIRK3(2)5L[2]SA, still use the second-order prediction in the last stage; for the last stage of the fourth-order MSSTH(4) and ESDIRK4(3)6L[2]SA and of the fifth-order MSSTH(5), a fourth-order prediction scheme is employed instead, whose parameters are determined by imposing fourth-order accuracy. In terms of rotations, the orientation and angular velocity of each node used for spatial modelling in MBDyn are stored in the orientation matrix R and the vector ω, respectively.
To predict the orientation at t_N, the Cayley-Gibbs-Rodrigues (CGR) orientation parameters are assumed to be zero at the last point t_{N−1}, namely g_{N−1} ≡ 0, and those of the other previous steps, if needed, are extracted from the respective orientation matrices relative to the orientation at time t_{N−1}. This procedure resembles typical approaches to Lie group integration (e.g., [14]), and was inspired by the spatial interpolation of finite rotations on 1D domains, originally formulated for geometrically exact beam analysis.
The CGR parameter derivatives are evaluated accordingly, as ġ = G(g)^{−1} ω, where the matrix G(g) expresses the transformation from the rotation parameter derivatives to the angular velocity, ω = G ġ. For the detailed expressions of g(R) and G(g) please refer to [24].
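A round trip through one common CGR convention (the Gibbs vector g = tan(θ/2) e; the exact convention and scaling used in MBDyn may differ, see [24]) can be checked numerically. Extraction uses g = axial(R − Rᵀ)/(1 + tr R), and reconstruction uses the Cayley map:

```python
import numpy as np

# Gibbs-vector round trip: R -> g -> R (illustrative convention).
def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot(axis, theta):                       # Rodrigues formula
    axis = axis / np.linalg.norm(axis)
    S = skew(axis)
    return np.eye(3) + np.sin(theta) * S + (1.0 - np.cos(theta)) * (S @ S)

def g_from_R(R):
    ax = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return ax / (1.0 + np.trace(R))         # valid for |theta| < pi

def R_from_g(g):                            # Cayley map (I - S)^-1 (I + S)
    S = skew(g)
    return np.linalg.solve(np.eye(3) - S, np.eye(3) + S)

R = rot(np.array([1.0, 2.0, 2.0]), 0.7)
g = g_from_R(R)
print(np.max(np.abs(R_from_g(g) - R)))      # ~machine precision
```

The extraction formula degenerates as θ → π, which mirrors the singularity limitation on the relative rotation magnitude discussed below.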
The single-step integrators must additionally prepare the auxiliary derivative variables ġ^p_{N−1}; SS2 and SS3 use different combinations, which are not reproduced here. Naturally, the other derivatives involved in ẏ_{N−1} must be updated in the same way before computing at t_N. The auxiliary variables of SS3 and SS4 may be complex numbers, so their real and imaginary parts are computed and stored separately. Because of the approximation of rotations, these single-step integrators are no longer exactly equivalent to the corresponding multi-step methods, but their solutions in the numerical experiments are still very close, as shown in Sect. 5. This indicates that the approximation does not cause any obvious reduction in accuracy.
Having obtained the required g_{N−j}, ġ_{N−j} (j ≥ 1) and ġ^p_{N−1} (1 ≤ p ≤ r − 1), the CGR parameters and their derivatives at time t_N, g_N^{(0)} and ġ_N^{(0)}, are predicted using the previously illustrated schemes. The predicted orientation matrix and angular velocity are then computed as R_N^{(0)} = R(g_N^{(0)}) R_{N−1} and ω_N^{(0)} = G(g_N^{(0)}) ġ_N^{(0)}. Of course, the relative rotation between the involved time steps is assumed to be limited, to avoid the indeterminacies and singularities inherent in all three-parameter parameterizations, and specifically those of the CGR parameters (the magnitude of the relative rotations must be significantly smaller than π). Such an assumption is considered acceptable when the time step of the integration is dictated by accuracy requirements.

Correction
After the prediction phase, y_N and ẏ_N are corrected by a Newton-like iteration, in which the increment of the derivative is obtained from

(r_y ∂y_N/∂ẏ_N + r_ẏ) Δẏ_N^{(l)} = −r(y_N^{(l)}, ẏ_N^{(l)}, t_N),

where r_y and r_ẏ are the partial derivatives of r with respect to y and ẏ, respectively; ∂y_N/∂ẏ_N = β_0 Δt for the multi-step integrators, ∂y_N/∂ẏ_N = γ_0 γ_2 Δt/γ_1 for SS2, with an analogous expression for SS4, and ∂y_N/∂ẏ_N = γ Δt for the ESDIRKs; l denotes the iteration number, with l = 0 initially.
In the correction phase, the state variables are updated at each iteration. The final solution at t_N is accepted when the prescribed convergence condition is satisfied; the procedure then moves on to the next time point.
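The correction loop can be sketched on a scalar problem. The example below is hypothetical: it uses backward Euler, so ∂y_N/∂ẏ_N = Δt, whereas the schemes above only change this coefficient (e.g., β_0 Δt for the multi-step methods). It iterates on the derivative unknown with the chain-rule Jacobian r_y ∂y_N/∂ẏ_N + r_ẏ:

```python
# Newton-like correction on the derivative unknown for the scalar implicit
# problem r(y, ydot) = ydot + y^3 = 0 with the relation y_N = y_{N-1} + dt*ydot_N.
def r(y, yd):      return yd + y ** 3
def r_y(y, yd):    return 3.0 * y ** 2
def r_yd(y, yd):   return 1.0

def correct(y_prev, dt, yd0, tol=1e-12, itmax=20):
    yd = yd0                                  # predicted derivative
    for l in range(itmax):
        y = y_prev + dt * yd                  # integration scheme (Eq. 8b role)
        res = r(y, yd)
        if abs(res) < tol:
            return y, yd, l
        J = r_y(y, yd) * dt + r_yd(y, yd)     # chain rule: dy_N/dydot_N = dt
        yd -= res / J
    raise RuntimeError("no convergence")

y, yd, iters = correct(y_prev=1.0, dt=0.1, yd0=-1.0)
print(y, iters)   # converged state and number of iterations
```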

Generalization and extension
All the previously described methods, summarized in Fig. 1, are available in MBDyn's public source code repository. Other time integration schemes can be easily added using the provided class templates. To define a new integrator that belongs to the families of linear multi-step, equivalent single-step, or multi-stage methods, one simply needs to derive a new class from the corresponding template class, define the number of steps or stages (an integer template value), and overload the virtual methods that provide the coefficients for the prediction of the state and of its derivative.
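This extension pattern can be mimicked in a few lines of Python (MBDyn itself is C++, so the class and method names below are illustrative analogues, not MBDyn's actual API): a new r-step integrator derives from a small base class, declares its number of steps, and supplies its coefficients, while the stepping logic is shared.

```python
import math

class MultiStepIntegrator:
    steps = 0                                   # analogue of the template value

    def coeffs(self, dt):                       # (alpha_j, beta_j) of the scheme
        raise NotImplementedError

    def advance(self, f, hist_y, hist_yd, dt):
        # one step of y_k = sum_j alpha_j y_{k-j} + dt sum_j beta_j yd_{k-j};
        # the implicit term is resolved by fixed-point iteration for brevity
        # (a Newton-like correction would be used in practice)
        alpha, beta = self.coeffs(dt)
        expl = sum(a * y for a, y in zip(alpha, hist_y)) + \
               dt * sum(b * yd for b, yd in zip(beta[1:], hist_yd))
        y = hist_y[0]                           # constant prediction
        for _ in range(100):
            y = expl + dt * beta[0] * f(y)
        return y

class BDF2(MultiStepIntegrator):                # the rho_inf = 0 two-step member
    steps = 2
    def coeffs(self, dt):
        return [4.0 / 3.0, -1.0 / 3.0], [2.0 / 3.0, 0.0, 0.0]

# usage on ydot = -y, with exact start-up values for simplicity
integ, dt, f = BDF2(), 0.1, lambda y: -y
hist_y = [math.exp(-dt), 1.0]                   # [y_1, y_0]
hist_yd = [-math.exp(-dt), -1.0]
for k in range(2, 11):
    y = integ.advance(f, hist_y, hist_yd, dt)
    hist_y = [y, hist_y[0]]
    hist_yd = [f(y), hist_yd[0]]
err = abs(hist_y[0] - math.exp(-1.0))
print(err)   # small: second-order accurate
```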
The implementation of these methods in MBDyn granted the possibility to evaluate their performance when applied to non-trivial, general-purpose multibody system dynamics problems. Some examples taken from the literature are investigated in Sect. 5.

Linear analysis
The dissipation and dispersion accuracy, as well as the degree of algorithmic dissipation, of the employed methods are compared in this section considering the single-degree-of-freedom problem ẍ + ω²x = 0, an undamped oscillator. The numerical solution of the problem at time t_k can be expressed as

x_k = e^{−ξ̄ω̄t_k} (c_1 cos(ω̄_d t_k) + c_2 sin(ω̄_d t_k)),  ω̄_d = ω̄ √(1 − ξ̄²),

where c_1 and c_2 are constants determined by the initial conditions, and ω̄ and ξ̄ are the numerical natural frequency and damping ratio, respectively. Here ξ̄ and (T̄ − T)/T = ω/ω̄ − 1 are known as the amplitude decay ratio and the period elongation ratio, respectively. They are typically used to measure the dissipation and dispersion accuracy in the low-frequency range, and can be obtained from the characteristic roots of the method, as in [37]. Besides, the spectral radius of the method is used to measure the degree of algorithmic dissipation. Figures 2, 3 and 4 summarize the percentage amplitude decay (AD(%)), percentage period elongation (PE(%)) and spectral radius (ρ) of the methods with tunable algorithmic dissipation, respectively. Considering the intrinsic spectral equivalence of the single-step methods and the corresponding multi-step methods, only the results of the multi-step methods are shown. Figures 5, 6 and 7 show AD(%), PE(%) and ρ of the higher-order ESDIRKs with ρ∞ = 0.0. Because the s-stage ESDIRKs perform s − 1 implicit stages per step, to compare the results under similar computational cost, the abscissa is set to Δt/(nT) in the figures, where n = 1 for the multi-step methods, and n = s − 1 for the s-stage ESDIRKs.
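The spectral radius is straightforward to evaluate from the characteristic polynomial of a scheme. As an illustration, the sketch below does so for BDF2 (the ρ∞ = 0 member of the two-step family) applied to ẏ = λy, confirming A-stability along the imaginary axis (the undamped oscillator) and the annihilation of the high-frequency response:

```python
import numpy as np

# BDF2 update y_k = (4 y_{k-1} - y_{k-2})/3 + (2/3) h*lam*y_k yields the
# characteristic polynomial (1 - (2/3) z) zeta^2 - (4/3) zeta + 1/3 = 0,
# with z = h*lam; the spectral radius is the largest root magnitude.
def spectral_radius(z):
    roots = np.roots([1.0 - (2.0 / 3.0) * z, -4.0 / 3.0, 1.0 / 3.0])
    return np.max(np.abs(roots))

# check along the imaginary axis, z = i*w*h (undamped oscillator)
rhos = [spectral_radius(1j * w) for w in np.logspace(-2, 3, 200)]
print(max(rhos) <= 1.0 + 1e-12)   # no root leaves the unit disk: A-stable
print(spectral_radius(1j * 1e6))  # ~0: high frequencies are annihilated
```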
The employed methods are all A-stable and provide algorithmic dissipation. Among them, LMS2, LMS3, LMS4, Bathe and MSSTC(3,4,5) are second-order accurate, while the other ESDIRKs have higher-order accuracy. From the comparison, one can observe that the second-order methods, especially LMS3, LMS4 and MSSTC(3,4,5), are clearly superior in preserving the frequency content: they show very small numerical damping ratios and spectral radii very close to 1 when Δt/(nT) ≤ 0.1.
As the number of steps or stages increases, both the dissipation and the dispersion accuracy of the second-order methods improve, so LMS4 and MSSTC(5) have better accuracy and can retain more frequency content. As ρ∞ grows, these second-order methods also show higher dissipation and dispersion accuracy. Compared to the multi-step methods, the second-order multi-stage ones show a better filtering ability for high-frequency content, since their spectral radii drop to ρ∞ faster and earlier.
Compared to the second-order methods, the higher-order ESDIRKs exhibit higher dispersion accuracy, which gives them some accuracy advantage in transient simulations. On the other hand, MSSTH(3,4,5) and the ESDIRK3(2) schemes introduce excessive algorithmic dissipation. Although MSSTH(3,4,5) have tunable algorithmic dissipation, their accuracy cannot be improved by increasing ρ∞, and when ρ∞ = 1.0 they exhibit unexpected algorithmic dissipation at intermediate frequencies.
For these reasons, a small ρ∞, such as 0.0, is recommended for MSSTH(3,4,5) to improve the stability of the algorithm.
From the linear analysis, it appears that the second-order methods are very effective in preserving the amplitude, while the higher-order methods have better phase accuracy. Among second-order methods, the use of more steps or stages or a larger ρ ∞ helps to improve the dissipation and dispersion accuracy. Multi-step methods can retain more frequency content than multi-stage ones with the same ρ ∞ . Except for ESDIRK4 (3)

Numerical experiments
The performance of the previously discussed integrators is illustrated in this section by solving some benchmark problems in MBDyn. Two values of ρ∞, 0.0 and 0.6, are used for the second-order methods, and ρ∞ = 0.0 is used for MSSTH(3,4,5). In all examples, the integrators use the same Δt/n, so that the results are compared under comparable computational cost. The results of the second-order integrators with ρ∞ = 0.0, the second-order integrators with ρ∞ = 0.6, and the higher-order integrators are presented separately. Table 1 lists the line styles and symbols used for the different integrators in the result figures of this section. The accuracy and stability of the numerical results, as well as the computational efficiency of the methods, are discussed.

Andrews' squeezer mechanism
Problem description Andrews' squeezer mechanism [29], as shown in Fig. 8, is a planar system composed of seven rigid links. The coordinates of noteworthy points in the local reference frames, the mass, and the moment of inertia of each part are listed in Table 2. The origin of each local reference frame is placed at the first point in the name of each link. For links with only two points, the local x-axis is along the line connecting the points. For link E-B-D, the local y-axis is aligned with E-B, pointing towards B. The coordinates of the center of mass (x_C, y_C) are described in the local reference frame, and the rotational inertia I_z of each body is expressed about its center of mass. In the global reference frame Oxy, the coordinates of points A, B and C are (−0.06934 m, −0.00227 m), (−0.03635 m, 0.03273 m) and (0.01400 m, 0.07200 m), respectively. Point D is connected to point C by a spring, whose stiffness is k = 4530 N/m and whose natural length is l_0 = 0.07785 m. Link O-F is driven by a constant torque T = 0.033 N·m, starting from an initial angle β_0 = −0.0620 rad. Gravity is not considered.
Numerical results This example is used to check how well the employed methods preserve the mechanical energy of the system. Since the driving torque T is constant, the work it performs must balance the variation of the total mechanical energy E, which collects the kinetic and potential energy of the system; the resulting energy error, denoted ΔE, should remain zero throughout the simulation in the absence of physical damping. With Δt/n = 10⁻⁴ s, Figs. 9 and 10 show, respectively, the angular velocity ω of bar O-F and the energy error. With Δt/n = 10⁻⁵ s, Fig. 11 shows that all methods can predict accurate results for ω. However, from Fig. 12 it is clear that the energy instability, i.e. ΔE > 0 [5], is still observed for some of the higher-order integrators, including the MSSTH schemes. From this example we can conclude that most of the employed higher-order integrators have worse stability in this type of problem, and are not recommended for energy-conserving purposes. Among the second-order integrators, the linear analysis in Sect. 4 indicates that they can preserve more modes as the number of steps or stages increases, but the multi-stage methods do not follow this rule here. This is because they use the trapezoidal rule from the second to the nth stage, and the non-dissipative trapezoidal rule is likely to give unreliable or unstable results for nonlinear problems [33]. Therefore, the multi-step and single-step integrators, especially LMS3, SS3, LMS4 and SS4 with a large ρ∞, such as 0.6, are better candidates in terms of energy conservation for general problems.
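The qualitative distinction between non-dissipative and dissipative schemes can be reproduced on a linear oscillator. In the illustrative sketch below, the trapezoidal rule (the non-dissipative stage formula mentioned above) preserves the oscillator energy exactly, while BDF2 (standing in for a dissipative ρ∞ = 0 scheme) lets it decay:

```python
import numpy as np

# Undamped oscillator xddot + w^2 x = 0 as ydot = A y, y = [x, v],
# with energy E = (v^2 + w^2 x^2) / 2.
w, h, n = 5.0, 0.02, 500
A = np.array([[0.0, 1.0], [-w * w, 0.0]])
I = np.eye(2)
E = lambda y: 0.5 * (y[1] ** 2 + w * w * y[0] ** 2)
y0 = np.array([1.0, 0.0])

# trapezoidal rule: (I - h/2 A) y_k = (I + h/2 A) y_{k-1}
T = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)
yt = y0.copy()
for _ in range(n):
    yt = T @ yt

# BDF2: (I - 2h/3 A) y_k = (4 y_{k-1} - y_{k-2}) / 3, trapezoidal start-up
yb_prev, yb = y0.copy(), T @ y0
M = np.linalg.inv(I - (2.0 * h / 3.0) * A)
for _ in range(n - 1):
    yb, yb_prev = M @ ((4.0 * yb - yb_prev) / 3.0), yb

print(E(yt) / E(y0))   # ~1: the trapezoidal rule conserves the energy
print(E(yb) / E(y0))   # < 1: BDF2 dissipates energy
```

For this linear problem the trapezoidal rule is exactly energy-preserving; the nonlinear instabilities mentioned above [33] are a different, genuinely nonlinear phenomenon that this linear sketch cannot exhibit.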

Flexible four-bar mechanism
Problem description Some of the benchmark problems for flexible mechanisms proposed in [4] are solved. Figure 13 shows the configuration of a flexible four-bar mechanism. Bars A-B, B-C, C-D and the ground are connected through revolute joints, and the initial angles between them are all 90 deg. The bars' lengths are L_1 = L_3 = 0.12 m and L_2 = 0.24 m. The rotation axis of the revolute joint at point C is rotated by +5 deg about the y direction, to simulate an assembly defect that would lock the mechanism if the bars were rigid. The inertia and stiffness properties of the bars are listed in Table 3. Each bar is modeled in MBDyn using 5 three-node beam elements [15]. The angular velocity of bar A-B at point A is prescribed as constant, 0.6 rad/s, during the simulation.
Numerical results Figure 16 plots the results of ω_1 with a smaller step size, Δt = 1 × 10⁻³ s. From Fig. 14 it is seen that the rotation angles θ computed by the employed methods agree very well with each other. However, as shown in Figs. 15 and 16, since the angular velocity changes rapidly at certain moments, high-frequency oscillations can be observed, and they become more pronounced with a smaller time step size or a larger ρ∞. LMS4 and SS4 with ρ∞ = 0.6 and Δt = 1 × 10⁻³ s exhibit the most significant oscillations. This example shows the importance of algorithmic dissipation for problems containing high-frequency pollution, which often appears in the solutions of velocities, accelerations and forces. For such problems, Bathe and the higher-order integrators, which show strong algorithmic dissipation in the linear analysis of Sect. 4, with ρ∞ = 0.0 and a suitable, not too small, Δt are recommended.
Figure 17 shows the configuration of a beam actuated by a crank-link mechanism. The beam is clamped at one end; the other end is connected to the link by a spherical joint.
To simulate an initial imperfection, the plane of the crank-link mechanism is offset from the plane of the beam by a distance d; the relevant properties are listed in Table 4. The beam is meshed with 5 three-node beam elements, and the link and the crank are modelled with 1 three-node beam element each. The rotation angle of the crank is prescribed as

where T = 0.4 s.

Numerical results
The simulation was run in the interval [0, 0.8] s using Δt/n = 10⁻³ s. The rotation angle θ and angular velocity ω_1 about the x axis, and the shear force F_3 along the z axis at the mid-span of the beam, are summarized in Figs. 18, 19 and 20, respectively. Driven by the crank-link mechanism, the beam buckled laterally rather quickly. As indicated by the linear analysis in Sect. 4, LMS2 and SS2 with ρ∞ = 0.0 and most of the higher-order schemes have larger algorithmic dissipation than the other schemes. Therefore, when the contribution of the high frequencies needs to be retained in the solution, the second-order methods, especially LMS3, SS3, LMS4 and SS4 with a large ρ∞, such as 0.6, are recommended.
Figure 21 shows the configuration of a rotating shaft. Its end R is connected to the ground by a revolute joint, whereas the other end T is connected to the ground by a cylindrical joint that allows rotation about and displacement along the shaft's axis. A rigid disk is attached to the shaft at the mid-span; its center of mass is offset from the reference axis of the shaft by d = 0.05 m in the z direction, thus inertially unbalancing the system. The shaft length is L = 6 m; its other properties are listed in Table 5. It is modelled with 16 three-node beam elements. The disk has mass m_d = 70.573 kg, radius r_d = 0.24 m, and thickness t_d = 0.05 m. Its inertia tensor with respect to the center of mass is diag(2.0325, 1.0163, 1.0163) kg·m². The angular velocity of the revolute joint at point R is prescribed as a smooth function of time with parameters A_1 = 0.8, A_2 = 1.2, T_1 = 0.5 s, T_2 = 1 s, T_3 = 1.25 s and ω = 60 rad/s, i.e., it gently grows to an amplitude slightly below ω, pauses there for some time, and then gently grows again above ω, which corresponds roughly to a characteristic frequency of the system.

Numerical results
The simulation was run in the interval [−1, 2.5] s using t/n = 10 −3 s. Figures 22, 23 and 24 show, respectively, the displacement u 3 , velocity v 3 and acceleration a 3 along the z axis at the shaft's mid-span within [0, 2.5] s. In this case, LMS4 with ρ ∞ = 0.6 failed to complete the solution. Excellent agreement can be observed between the results  computed by the other employed methods, since the time step is sufficient to accurately describe the participating modes regardless of the algorithmic dissipation.
Convergence Based on this example, the convergence rates of the employed methods are investigated. Figures 25, 26 and 27 plot the relative errors of u_3, v_3 and a_3 versus Δt/n for the employed methods. The relative error RE is defined as

RE = sqrt( Σ_{j=1}^{N} (x_j − x(t_j))² / Σ_{j=1}^{N} x(t_j)² ),

where N is the total number of steps, and x(t_j) and x_j denote the exact and numerical solutions at time t_j, respectively. Since no exact solution can be obtained analytically for problems like this one, it is replaced by the reference solution obtained using ESDIRK4(3)6L[2]SA with a very small step size, Δt = 10⁻⁵ s. As can be seen, the second-order methods all exhibit a second-order convergence rate for displacement, velocity, and acceleration, and the multi-step and equivalent single-step schemes produce almost coincident errors. Among the higher-order methods, the third-order MSSTH(3) and ESDIRK3(2)4L[2]SA show a third-order convergence rate, and the third-order ESDIRK3(2)5L[2]SA shows a convergence rate exceeding third order. The fourth-order MSSTH(4) and ESDIRK4(3)6L[2]SA show a convergence rate of about fourth order when Δt/n is close to 0.001 s, but when Δt/n becomes smaller their convergence rates decrease, especially for a_3. We believe that this is because the reference solution, obtained with a very small step size, includes the contribution of a very large frequency range containing many high frequencies that are not present in the numerical results. The treatment of these high frequencies brings additional errors, which become more significant in the accelerations and in the cases where the error of the integrator, in the frequency range that it can retain, is already quite small, as for the higher-order integrators with a small step size.
However, the fifth-order MSSTH(5) was never able to reach the fifth-order convergence rate. We believe this may be related to the smoothness of the problem itself: since the accuracy order follows from a Taylor expansion, the higher-order convergence rates require correspondingly higher smoothness of the solution, which this problem may not provide.
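The error metric and the observed convergence rates discussed above can be computed in a few lines. The following is a minimal sketch (the function names are ours, not from the paper), assuming the common RMS form of the relative error:

```python
import math

def relative_error(x_num, x_ref):
    """RMS relative error over all N steps:
    RE = sqrt( sum_j (x_j - x(t_j))^2 / sum_j x(t_j)^2 )."""
    num = sum((xn - xr) ** 2 for xn, xr in zip(x_num, x_ref))
    den = sum(xr ** 2 for xr in x_ref)
    return math.sqrt(num / den)

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Empirical convergence order from the errors at two step sizes
    that differ by the given refinement factor."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```

For a method of order p, halving the step size divides the error by about 2^p, so `observed_order` returns roughly p; for instance, errors of 1e-2 and 2.5e-3 at consecutive halvings indicate second order.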

Average number of iterations
Since the same Δt/n is used in the examples, the number of steps or sub-steps required by all methods is the same. However, as discussed in Sect. 2, the scheme used for prediction also plays an important role in computational efficiency. Therefore, the average number of iterations required per step/sub-step in all examples solved in this section is listed in Fig. 28 to illustrate the cost of the employed methods. Two time step sizes are considered for each example. As shown in Fig. 28, the average number of iterations decreases with a smaller step size in all cases, and the numbers of iterations spent by the different integrators do not differ dramatically. However, each single-step method always requires more iterations than the corresponding multi-step method. As discussed in Sect. 3, the single-step methods use constant prediction, while the multi-step methods use an explicit second-order scheme for prediction. This observation therefore supports the conclusion that an explicit second-order prediction scheme helps improve the computational efficiency of these second-order methods.
Moreover, the higher-order methods require more iterations than the second-order methods in most cases. This can likewise be explained by the accuracy of the prediction scheme not being close enough to that of the time integration scheme. The second-order multi-step and multi-stage methods usually require the fewest iterations of all methods. The data in Fig. 28 thus support the view that a suitable prediction scheme, which should be explicit and of accuracy close to that of the integration scheme, helps to reduce computational cost.
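The effect of the predictor on the Newton iteration count can be illustrated on a scalar nonlinear ODE. The sketch below (not MBDyn code; the test equation y' = −y³ and the Adams–Bashforth extrapolation are our illustrative stand-ins for the second-order explicit predictor) integrates with implicit Euler and counts Newton iterations for a constant predictor versus an explicit second-order one:

```python
def f(y):
    # simple nonlinear test ODE: y' = -y**3
    return -y ** 3

def df(y):
    # derivative of f, used in the Newton update
    return -3.0 * y ** 2

def implicit_euler(y0, h, nsteps, predictor, tol=1e-12, maxit=50):
    """Integrate y' = f(y) with implicit Euler and return
    (final value, total Newton iterations).
    predictor='constant' reuses y_n as the initial guess;
    predictor='ab2' uses the explicit second-order
    Adams-Bashforth extrapolation from the last two f values."""
    y = y0
    f_prev = f(y0)
    total_iters = 0
    for k in range(nsteps):
        fy = f(y)
        if predictor == "ab2" and k > 0:
            guess = y + h * (1.5 * fy - 0.5 * f_prev)
        else:
            guess = y  # constant prediction (also used on the first step)
        f_prev = fy
        z = guess
        for _ in range(maxit):
            r = z - y - h * f(z)           # implicit Euler residual
            if abs(r) < tol:
                break
            z -= r / (1.0 - h * df(z))     # Newton update
            total_iters += 1
        y = z
    return y, total_iters
```

Both predictors converge to the same implicit-Euler solution, but the second-order extrapolation typically starts each Newton solve closer to it and so spends fewer iterations overall, mirroring the trend observed in Fig. 28.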

Conclusions
In this work, the performance of several representative implicit, A-stable time integration methods is discussed in view of their application to multibody system dynamics. The employed methods include the linear two-, three-, and four-step methods, referred to as LMS2, LMS3 and LMS4; their equivalent single-step methods, indicated as SS2, SS3 and SS4; and several explicit first-stage, singly diagonally-implicit Runge-Kutta methods (ESDIRKs), indicated as Bathe, MSSTC(3,4,5), MSSTH(3,4,5), ESDIRK3(2)4L[2]SA, ESDIRK3(3)5L[2]SA and ESDIRK4(3)6L[2]SA₂ [21,35,36]. The parameters of MSSTH(3,4,5) are modified here to satisfy the overall and stage order conditions. The formulations of the employed methods, and their implementation in the free, general-purpose multibody solver MBDyn, are presented.
In terms of properties, the linear multi-step, single-step, Bathe and MSSTC(3,4,5) methods have second-order accuracy and tunable algorithmic dissipation, whereas the other ESDIRKs can achieve higher-order accuracy. Several general conclusions can be drawn from the linear analysis and numerical experiments:
- With a suitable step size, all employed methods can predict accurate solutions.
- LMS3, SS3, LMS4 and SS4 with a large ρ∞, such as 0.6, show robust stability and good energy-conserving properties, making them suitable for long-term simulations and other cases where a large range of modes must be preserved; however, these methods are not as good as the others at filtering out high-frequency oscillations.
- LMS2 and SS2 with ρ∞ = 0.0 (namely the second-order Backward Difference Formula), as well as most of the employed higher-order methods, show strong algorithmic dissipation even in the low-frequency range, so their solutions are more likely to exhibit obvious amplitude decay, and consequently a loss of accuracy, at large time steps.
- Bathe and the higher-order integrators with ρ∞ = 0.0 filter out high-frequency dynamics faster, so they are more useful for problems polluted by high-frequency content.