1 Introduction

The dual time-stepping (DTS) technique can be used for solving large systems of nonlinear equations. The DTS procedure consists of adding a pseudo-time derivative of the solution with respect to the so-called dual time and marching in dual time to steady-state. It was employed in [1] for solving the compressible Euler equations and in [2] for the incompressible Euler and Navier–Stokes equations. Other examples in which derivatives in pseudo time are used to solve systems of nonlinear equations arise in various engineering fields, such as magnetohydrodynamics [3], simulations of launch environments [4] and electronics [5].

One drawback of the dual time-stepping technique is that the pseudo-time iterations must be fully converged in order to preserve time accuracy [6]. Moreover, if the dual-time integration is carried out with an explicit scheme, the method may become unstable for dual time-steps exceeding the physical ones [7]. These two limitations may require a large number of iterations and hence lead to a computationally expensive method.

For these reasons, significant efforts have been made during the last decade to improve the performance of DTS. One strategy to accelerate the convergence is to introduce a preconditioner multiplying the pseudo-time derivative [2, 8, 9]. Other improvements can be achieved by developing hybrid discretizations involving the physical-time derivative. In [6, 10] the alternating-direction implicit (ADI) scheme [11] is used in conjunction with the common second-order backward difference formula (BDF2). Another example is provided in [12], where the hybrid scheme is built with the lower-upper symmetric-Gauss–Seidel (LU-SGS) method [13]. A further improvement of DTS is proposed in [14], where it is combined with a local time-stepping approach.

The goal of this paper is to explore whether the convergence of DTS can be accelerated by adding a second-order pseudo-time derivative. The article is organized as follows: in Sect. 2, the DTS technique is presented and its convergence properties are shown. Section 3 describes a new class of dual time-marching procedures and introduces the second-derivative DTS. In Sect. 4, numerical simulations that corroborate the theoretical results are presented, while in Sect. 5 the drawbacks of the scheme and alternative formulations are discussed. In Sect. 6, conclusions are drawn.

2 The Dual Time-Stepping Technique

We start by illustrating how the conventional DTS is used.

2.1 A Hyperbolic Model Problem

Consider the one-dimensional advection equation

$$\begin{aligned} u_{t} + a u_{x} = 0, \quad x \in \varOmega , \quad t > 0, \end{aligned}$$
(1)

where a is a positive constant and \(\varOmega \) the spatial domain. Let \(u_{x} \approx D \mathbf {u}\) be a general discretization of the spatial derivative, where \(\mathbf {u}\) is the vector approximating the solution on a spatial grid. By applying the Euler-backward scheme in time to (1) and indicating the time-step by \(\varDelta t\), we get

$$\begin{aligned} \frac{\mathbf {u}^{n+1} - \mathbf {u}^{n}}{\varDelta t} + a D \mathbf {u}^{n+1} = 0. \end{aligned}$$
(2)

Here, \(\mathbf {u}^{n+1}\) and \(\mathbf {u}^{n}\) represent the approximate solutions at the times \(t^{n+1} = (n+1) \varDelta t\) and \(t^{n} = n \varDelta t\), respectively. Computing \(\mathbf {u}^{n+1}\) by directly inverting the matrix \(\left( I/\varDelta t + a D\right) \) in (2), where I is the identity matrix, may be excessively expensive.

Instead, we apply the DTS technique by replacing \(\mathbf {u}^{n+1}\) with \(\mathbf {w}\) and adding the dual-time derivative \(\mathbf {w}_{\tau }\) to the left-hand side of (2) to obtain

$$\begin{aligned} \mathbf {w}_{\tau } + \frac{\mathbf {w} - \mathbf {u}^{n}}{\varDelta t} + a D \mathbf {w} = 0, \quad \tau > 0. \end{aligned}$$
(3)

If the solution \(\mathbf {w}\) of (3) reaches steady-state, it satisfies (2) and hence equals \(\mathbf {u}^{n+1}\). The scheme (3) can be rewritten in the compact form

$$\begin{aligned} \mathbf {w}_{\tau } + F\mathbf {w} = \mathbf {R}, \quad \tau > 0, \end{aligned}$$
(4)

where \(F = I/\varDelta t + a D\) and \(\mathbf {R} = \mathbf {u}^{n}/\varDelta t\) is given data.
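
To make the procedure concrete, the following sketch marches (4) to steady-state for the advection problem (1). The dual-time integrator (forward Euler), the periodic first-order upwind operator D and all parameter values are illustrative assumptions, not the discretization used later in the paper.

```python
import numpy as np

# Sketch of the classical DTS (3)-(4) for the advection equation (1).
# Periodic first-order upwind operator and parameters are assumptions.
N, a, dt, dtau = 100, 1.0, 0.1, 1e-3
h = 1.0 / N
x = np.linspace(0.0, 1.0, N, endpoint=False)

S = np.roll(np.eye(N), -1, axis=1)   # shift operator: (S u)_j = u_{j-1}
D = (np.eye(N) - S) / h              # upwind difference, valid for a > 0

un = np.sin(2 * np.pi * x)           # known solution at time level n
F = np.eye(N) / dt + a * D           # F = I/dt + a D, as in (4)
R = un / dt                          # R = u^n / dt

w = un.copy()                        # initial guess for u^{n+1}
for _ in range(10000):
    dw = R - F @ w                   # w_tau, the right-hand side of (4)
    w = w + dtau * dw
    if np.linalg.norm(dw) < 1e-8:    # pseudo-time residual is negligible
        break
```

At steady-state the pseudo-time residual vanishes and w coincides with the implicit Euler solution \(\mathbf {u}^{n+1}\) of (2).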

2.2 Nonlinear Problems

Under mild restrictions, nonlinear differential problems can be related to linear formulations. As an example, consider a fully discretized problem using Euler-backward in time,

$$\begin{aligned} \frac{\mathbf {u}^{n+1} - \mathbf {u}^{n}}{\varDelta t} + L\left( \mathbf {u}^{n+1}\right) = \mathbf {0}. \end{aligned}$$
(5)

In (5), \(L\left( \mathbf {u}\right) \) is a nonlinear operator, typically coming from a nonlinear space approximation. Assuming small variations of the solution in time, a linearization of L can be performed:

$$\begin{aligned} L\left( \mathbf {u}^{n+1}\right) = L\left( \mathbf {u}^{n}\right) + \frac{\partial L}{\partial \mathbf {u}} \varDelta \mathbf {u} + O\left( \left\| \varDelta \mathbf {u}\right\| ^{2}\right) , \end{aligned}$$
(6)

where \(\partial L / \partial \mathbf {u}\) is the Jacobian matrix of L and \(\varDelta \mathbf {u} := \mathbf {u}^{n+1} - \mathbf {u}^{n}\). By substituting (6) into (5) and neglecting higher order terms, we obtain

$$\begin{aligned} \left( \frac{I}{\varDelta t} + \frac{\partial L}{\partial \mathbf {u}}\right) \varDelta \mathbf {u} \simeq - \,L\left( \mathbf {u}^{n}\right) . \end{aligned}$$
(7)

The linear problem (7) can be solved using the DTS technique (4), with \(F = I/\varDelta t + \partial L/\partial \mathbf {u}\) and \(\mathbf {R} = -\, L\left( \mathbf {u}^{n}\right) \). Hence, one can relate nonlinear problems to the linear setting, as long as the Jacobian matrix \(\partial L / \partial \mathbf {u}\) is well-defined. The assumption of small variations in time can always be fulfilled by considering sufficiently small time steps \(\varDelta t\).
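
As a sketch of how (7) can be set up in practice, the snippet below approximates the Jacobian by forward differences and performs one linearized step by a direct solve; the Burgers-type residual \(L(\mathbf {u}) = \mathbf {u} \left( D\mathbf {u}\right) \) and all parameters are assumptions for illustration.

```python
import numpy as np

# Sketch of the linearized step (7); the Jacobian dL/du is approximated
# by forward differences instead of being derived analytically.
def jacobian(L, u, eps=1e-7):
    n = u.size
    J = np.empty((n, n))
    Lu = L(u)
    for j in range(n):
        up = u.copy()
        up[j] += eps
        J[:, j] = (L(up) - Lu) / eps
    return J

def linearized_step(L, un, dt):
    # Solve (I/dt + dL/du) du = -L(u^n); in the DTS setting this direct
    # solve is replaced by pseudo-time marching on (4).
    A = np.eye(un.size) / dt + jacobian(L, un)
    return un + np.linalg.solve(A, -L(un))

# Example: a periodic Burgers-type residual L(u) = u (D u), an assumption.
n, h = 50, 1.0 / 50
D = (np.roll(np.eye(n), -1, axis=0) - np.roll(np.eye(n), 1, axis=0)) / (2 * h)
L = lambda u: u * (D @ u)
u0 = 2.0 + np.sin(2 * np.pi * np.arange(n) * h)
u1 = linearized_step(L, u0, dt=0.01)
```

Repeating this step advances the nonlinear problem one physical time-step at a time.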

2.3 Convergence

A general linear time-space discretization of a differential problem has the form

$$\begin{aligned} F\mathbf {u} = \mathbf {R}, \end{aligned}$$
(8)

where F is a nonsingular matrix and \(\mathbf {R}\) is given and independent of \(\mathbf {u}\). To simplify the upcoming analysis, we assume that F is diagonalizable, i.e. \(F = X \varLambda X^{-1}\) where X, \(\varLambda \) are the matrices containing the eigenvectors and eigenvalues of F, respectively.

By adding a dual-time derivative to the left-hand side of (8) we obtain (4), which converges in dual time to the solution of (8) under the assumptions of the following proposition.

Proposition 1

Let all the eigenvalues of the diagonalizable F have positive real parts. Then the solution of the dual-time dependent problem (4) converges to the solution of (8).

Proof

Applying the eigendecomposition of F to (4) yields

$$\begin{aligned} \mathbf {v}_{\tau } + \varLambda \mathbf {v} = X^{-1} \mathbf {R}, \quad \tau > 0, \end{aligned}$$
(9)

where \(\mathbf {v} = X^{-1} \mathbf {w}\). By multiplying (9) with \(e^{\varLambda \tau }\) from the left and integrating we find

$$\begin{aligned} \mathbf {v}\left( \tau \right) = e^{-\varLambda \tau } \left( \mathbf {v}\left( 0\right) - \left( X\varLambda \right) ^{-1} \mathbf {R}\right) + \left( X \varLambda \right) ^{-1} \mathbf {R}, \end{aligned}$$

which converges as \(\tau \rightarrow + \infty \) if all the eigenvalues of F have positive real parts. The steady-state solution \(\mathbf {w} = F^{-1} \mathbf {R}\) is recovered by multiplying \(\mathbf {v}\) with X. \(\square \)

Remark 1

The eigenvalue of F with the smallest real part determines the convergence rate of (4).

2.4 A Note on Preconditioning

To increase the convergence rate we may introduce a preconditioner \(\varPi \) which multiplies the first-derivative term in (4), yielding

$$\begin{aligned} \varPi ^{-1} \mathbf {w}_{\tau } + F\mathbf {w} = \mathbf {R}. \end{aligned}$$
(10)

The optimal choice of \(\varPi \) in (10) depends on the specific problem, and will not be discussed in detail in this paper. We simply observe that the choice \(\varPi = cF^{-1}\), with \(c > 0\), leads to a problem whose convergence does not depend on the eigenvalues of F, since (10) becomes

$$\begin{aligned} \mathbf {w}_{\tau } + c \mathbf {w} = c F^{-1} \mathbf {R}. \end{aligned}$$
(11)

Note that, according to Proposition 1, this formulation is always convergent. On the other hand, even though c can be chosen to yield fast convergence of (11), the formulation requires the inverse of F.

2.5 Model Problem

The proof of Proposition 1 indicates that, rather than considering the matrix-vector problem (8) at once, one may instead study the scalar model problem

$$\begin{aligned} w_{\tau } + \lambda w = r, \quad \tau > 0. \end{aligned}$$
(12)

The problem (12) is defined by considering each row in (9) separately, with the corresponding steady-state solution

$$\begin{aligned} u = r/\lambda , \quad \lambda \in {\mathbb {C}} {\setminus } \left\{ 0\right\} . \end{aligned}$$
(13)
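
For later reference, integrating (12) directly gives

$$\begin{aligned} w\left( \tau \right) = e^{-\lambda \tau } \left( w\left( 0\right) - u\right) + u, \end{aligned}$$

so the error \(w\left( \tau \right) - u\) decays like \(e^{-\text {Re}\left( \lambda \right) \tau }\), in agreement with Remark 1.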

3 The Second-Derivative DTS Technique

To possibly get an even faster decay to steady-state, we add two pseudo-time derivatives to the fully-discretized problem (8),

$$\begin{aligned} \mathbf {w}_{\tau \tau } + 2 G \mathbf {w}_{\tau } + F \mathbf {w} = \mathbf {R}, \quad \tau > 0, \end{aligned}$$
(14)

where G is a matrix to be chosen in order to improve the convergence.

Remark 2

A matrix multiplying the second-derivative term in (14) would play the same role as \(\varPi ^{-1}\) in (10) does for the classical DTS formulation. Hence we consider (14) to be the general second-derivative DTS formulation.

We choose a diagonalizable matrix \(G = X \varGamma X^{-1}\) in (14) with the same eigenvectors as F. This allows us to rewrite (14) as a system of independent scalar ODEs of the form

$$\begin{aligned} w_{\tau \tau } + 2 \gamma w_{\tau } + \lambda w = r, \quad \tau > 0, \end{aligned}$$
(15)

where \(\gamma , \lambda \) are eigenvalues of G and F, respectively. Note that the steady-state solution of (15) is given by (13). Thus, the convergence properties of the classical and second-derivative DTS can be compared by studying the scalar equations (12) and (15).

The second-order ordinary differential equation (15) can be written as a system of first-order equations

$$\begin{aligned} \mathbf {z}_{\tau } + A \mathbf {z} = \mathbf {b}, \quad \text {where} \quad \mathbf {z} = \begin{bmatrix} w \\ w_{\tau } \end{bmatrix}, \quad A = \begin{bmatrix} 0&\quad -\,1 \\ \lambda&\quad 2 \gamma \end{bmatrix}, \quad \mathbf {b} = \begin{bmatrix} 0 \\ r \end{bmatrix}. \end{aligned}$$
(16)

Using matrix exponential notation, the solution to the system (16),

$$\begin{aligned} \mathbf {z}\left( \tau \right) = e^{-A\tau } \left( \mathbf {z}\left( 0\right) - A^{-1}\mathbf {b}\right) + A^{-1}\mathbf {b} \end{aligned}$$
(17)

converges to \(A^{-1} \mathbf {b} = \left[ u,0\right] ^{T}\), i.e. \(w\left( \tau \right) \rightarrow u\), for any \(\mathbf {z}\left( 0\right) \) as \(\tau \rightarrow + \infty \), if the eigenvalues of A have positive real parts.
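
As a quick numerical illustration, the following sketch integrates both scalar problems (12) and (15) with the classical fourth-order Runge–Kutta scheme. The values of \(\lambda \), r and \(\varDelta \tau \) are illustrative assumptions, and \(\gamma = \sqrt{\lambda }\) anticipates the optimal choice derived below.

```python
import numpy as np

# Scalar comparison of the classical DTS (12) and the second-derivative
# DTS (15)-(16); lam, r, dtau are assumed toy values.
lam, r, dtau, nsteps = 0.09, 1.0, 0.01, 2000
gamma = np.sqrt(lam)
u = r / lam                                   # steady state (13)

def rk4(f, y, dt):
    k1 = f(y); k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

w = 0.0                                       # classical DTS state
z = np.zeros(2)                               # z = [w, w_tau] for (16)
for _ in range(nsteps):
    w = rk4(lambda w: r - lam * w, w, dtau)
    z = rk4(lambda z: np.array([z[1], r - 2 * gamma * z[1] - lam * z[0]]), z, dtau)

print(abs(w - u), abs(z[0] - u))              # second-derivative error is smaller
```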

Remark 3

The matrix exponential \(e^{-A\tau }\) can be obtained from the Jordan form \(A = VJV^{-1}\), where V is invertible and J is a triangular matrix composed of Jordan blocks. In particular, \(e^{-A\tau } = Ve^{-J\tau }V^{-1}\) and the eigenvalues of A characterize the convergence of (16). For distinct eigenvalues, J and V are the matrices containing the eigenvalues and eigenvectors of A, respectively.

3.1 Initial Convergence Analysis

If the coefficients \(\gamma \) and \(\lambda \) are real, (15) can be interpreted as a damped harmonic oscillator [15]. This system converges to steady-state if both \(\gamma \) and \(\lambda \) in (15) are positive. Furthermore, the system approaches steady-state as quickly as possible, without oscillating, when it is critically damped, i.e. when \(\gamma = \sqrt{\lambda }\). In this section we will prove these results and use them as guidelines for the case with complex coefficients. We start by considering \(\gamma \in {\mathbb {R}}\), \(\lambda \in {\mathbb {R}} {\setminus } \left\{ 0\right\} \).

From Proposition 1, the classical DTS in (12) converges to the steady-state solution as \(\tau \rightarrow +\infty \) if \(\lambda > 0\). For the second-derivative DTS (15) we prove

Proposition 2

Let \(\gamma \) and \(\lambda \) be real coefficients. The solution to the problem (15) converges to its steady-state solution as \(\tau \rightarrow +\infty \) if \(\gamma \) and \(\lambda \) are positive.

Proof

The solution (17) converges to the steady-state solution if the eigenvalues of A, given by

$$\begin{aligned} \mu _{1,2} = \gamma \pm \sqrt{\gamma ^{2} - \lambda }, \end{aligned}$$
(18)

have positive real parts. If \(\gamma \) and \(\lambda \) are positive, then both real parts of \(\mu _{1,2}\) in (18) are positive and convergence follows. \(\square \)

Next, our aim is to find conditions on \(\gamma \) that lead to faster convergence than the classical DTS technique in (12), i.e. we need

$$\begin{aligned} \text {Re}\left( \mu _{1,2}\right) \ge \text {Re}\left( \lambda \right) , \end{aligned}$$
(19)

where \(\mu _{1}\) and \(\mu _{2}\) are given by (18). Condition (19) gives rise to

Proposition 3

The solution to the unsteady problem (15) converges to steady-state faster than the solution to (12) as \(\tau \rightarrow + \infty \) if

$$\begin{aligned} \left( \lambda ,\gamma \right) \in S := \left\{ \left( \lambda ,\gamma \right) \in {\mathbb {R}}^{2} \mid 0 < \lambda \le 1, \ \ \lambda \le \gamma \le \frac{1 + \lambda }{2}\right\} . \end{aligned}$$
(20)

Proof

We prove each constraint in S, starting from \(\lambda \le \gamma \). By substituting (18) into (19) and observing that \(\gamma , \lambda \in {\mathbb {R}}\), we find that

$$\begin{aligned} \gamma \pm \text {Re}\left( \sqrt{\gamma ^{2} - \lambda }\right) \ge \lambda \end{aligned}$$

must hold. Since the principal square root always has a nonnegative real part, the inequality with the minus sign is the most restrictive case, and it suffices to satisfy it. Moreover, if \(\gamma < \lambda \) the condition

$$\begin{aligned} \text {Re}\left( \sqrt{\gamma ^{2} - \lambda }\right) \le \gamma - \lambda \end{aligned}$$
(21)

cannot be fulfilled, since its left-hand side is nonnegative while its right-hand side is negative. Hence \(\gamma \ge \lambda \) is required.

Due to the constraint \(\gamma \ge \lambda \), the DTS with two derivatives (15) can be faster than the classical DTS (12) only if \(0 < \lambda \le 1\). We prove this by contradiction: if \(\lambda > 1\), then \(\gamma ^{2} \ge \lambda ^{2} > \lambda \) and (21) becomes

$$\begin{aligned} \sqrt{\gamma ^{2} - \lambda } = \text {Re}\left( \sqrt{\gamma ^{2} - \lambda }\right) \le \gamma - \lambda . \end{aligned}$$

On the other hand, \(\gamma \ge \lambda \) implies \(2\gamma - \lambda \ge \lambda > 1\) and

$$\begin{aligned} \gamma - \lambda = \sqrt{\gamma ^{2} - \lambda \left( 2\gamma - \lambda \right) }< \sqrt{\gamma ^{2} - \lambda } \le \gamma - \lambda \quad \Rightarrow \quad \gamma - \lambda < \gamma - \lambda \end{aligned}$$

which is a contradiction. Hence \(0 < \lambda \le 1\) is required.

Finally, the remaining inequality in (20) is obtained by considering (21) again. This relation is trivially fulfilled for \(\lambda \le \gamma < \sqrt{\lambda }\), since then \(\gamma ^{2} - \lambda < 0\), so the real part of the square root vanishes while \(\gamma - \lambda \ge 0\). Now, let \(\gamma \ge \sqrt{\lambda }\). By squaring (21) we get

$$\begin{aligned} \lambda \left[ \lambda + \left( 1 - 2\gamma \right) \right] \ge 0 \quad \Rightarrow \quad \gamma \le \frac{1+\lambda }{2}. \end{aligned}$$

\(\square \)

Proposition 3 provides conditions on the coefficient \(\gamma \) that lead to faster decay of (15) with respect to (12) for any fixed \(\lambda \in \left( 0,1\right] \). It is legitimate to ask whether there exists an optimal choice of \(\gamma \).

Proposition 4

The choice \(\gamma = \sqrt{\lambda }\) provides the fastest decay for the second-derivative DTS formulation (15).

Proof

The eigenvalue of the matrix A in (16) with the smallest real part determines the decay to the steady-state solution. According to (18), this eigenvalue has a real part given by

$$\begin{aligned} \text {Re}\left( \mu _{1}\right) = \left\{ \begin{array}{lr} \gamma , &{}\quad \text {if }\,\, \gamma < \sqrt{\lambda },\\ \gamma - \sqrt{\gamma ^{2} - \lambda }, &{}\quad \text {if }\,\, \gamma \ge \sqrt{\lambda }. \end{array}\right. \end{aligned}$$
(22)

Since the real part of \(\mu _{1}\) increases for \(\gamma \) less than \(\sqrt{\lambda }\) and decreases for \(\gamma \) greater than \(\sqrt{\lambda }\), we conclude that \(\gamma = \sqrt{\lambda }\) maximizes the real part of \(\mu _{1}\). \(\square \)

From (18), the optimal value of \(\gamma \) implies that the eigenvalues of A in (16) are \(\mu _{1} = \mu _{2} = \sqrt{\lambda }\). The optimal DTS formulation (15) hence becomes

$$\begin{aligned} w_{\tau \tau } + 2\sqrt{\lambda } w_{\tau } + \lambda w = r. \end{aligned}$$
(23)

This formulation leads to convergence if \(\lambda > 0\). Moreover, faster decay with respect to (12) is achieved if \(0 < \lambda \le 1\), since in this case \(\sqrt{\lambda } \ge \lambda \). Note that small perturbations of \(\gamma \) around the optimal value \(\sqrt{\lambda }\), i.e. \(\gamma \approx \sqrt{\lambda }\), also allow for faster convergence, see Fig. 1. In particular, if \(\gamma = \sqrt{\lambda } + \delta \), (20) leads to

$$\begin{aligned} 0 < \lambda \le 1, \quad \sqrt{\lambda }\left( \sqrt{\lambda } - 1\right) \le \delta \le \frac{\left( \sqrt{\lambda } - 1\right) ^{2}}{2}. \end{aligned}$$
(24)
Fig. 1 Region S of the values \(\left( \lambda ,\gamma \right) \in {\mathbb {R}}^{2}\) in (15) which lead to faster convergence than the conventional DTS (12), according to Proposition 3. The optimal choice \(\gamma = \sqrt{\lambda }\) is contained in S for \(0 < \lambda \le 1\). Note that also \(\gamma \approx \sqrt{\lambda }\) may lead to faster convergence

A detailed convergence analysis of (15) with \(\gamma , \lambda \in {\mathbb {C}}\) is beyond the scope of this study. We restrict ourselves to the formulation (23) with \(\lambda \in {\mathbb {C}} {\setminus }\left\{ 0\right\} \) and prove

Proposition 5

The solution to the problem (23) converges to its steady-state solution as \(\tau \rightarrow +\infty \) if, and only if, \(\lambda \) is not a negative real number.

Proof

The problem (23) can be written as the system of first-order equations (16) with \(\gamma = \sqrt{\lambda }\). The eigenvalues of A are \(\mu _{1} = \mu _{2} = \sqrt{\lambda }\) and lead to convergence if \(\text {Re}(\mu _{1,2}) > 0\). The number \(\sqrt{\lambda }\), interpreted as the principal square root of \(\lambda \), always has a non-negative real part, and this real part vanishes exactly when \(\lambda \) is a negative real number. Hence convergence fails if, and only if, \(\lambda \) lies on the negative real axis. \(\square \)
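
This behavior of the principal branch is easy to check numerically; in the sketch below, numpy's square root of a complex argument returns the principal root, whose real part is non-negative and vanishes exactly on the negative real axis.

```python
import numpy as np

# Principal square roots of a few complex numbers (assumed toy values).
for lam in (1 + 4j, -2 + 0.001j, -2 + 0j):
    print(lam, np.sqrt(lam))   # Re is positive except for lam = -2 + 0j
```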

In conclusion, we will consider \(G = F^{\frac{1}{2}} = X \varLambda ^{\frac{1}{2}} X^{-1}\) as the optimal choice in (14). In \(\varLambda ^{\frac{1}{2}}\) only the principal square roots are considered, i.e. the square roots with non-negative real parts.

3.2 The New DTS Technique

Consider the new DTS technique applied to the original problem (8)

$$\begin{aligned} \mathbf {w}_{\tau \tau } + 2 F^{\frac{1}{2}} \mathbf {w}_{\tau } + F \mathbf {w} = \mathbf {R}. \end{aligned}$$
(25)

This formulation generalizes the critically damped harmonic oscillator [15] and converges to steady-state at the optimal rate, see Remark 3 and Proposition 4. We can now prove

Proposition 6

The decay to steady-state for the new DTS formulation (25) is determined by the square roots of the eigenvalues of F.

Proof

The pseudo-time differential problem (25) can be written as a system of first-order equations

$$\begin{aligned} \begin{bmatrix} \mathbf {w} \\ \mathbf {w}_{\tau } \end{bmatrix}_{\tau } + \begin{bmatrix} I&\quad 0 \\ F^{\frac{1}{2}}&\quad I \end{bmatrix}^{-1} \begin{bmatrix} F^{\frac{1}{2}}&\quad -\,I \\ 0&\quad F^{\frac{1}{2}} \end{bmatrix} \begin{bmatrix} I&\quad 0 \\ F^{\frac{1}{2}}&\quad I \end{bmatrix} \begin{bmatrix}\mathbf {w} \\ \mathbf {w}_{\tau } \end{bmatrix} = \begin{bmatrix} \mathbf {0} \\ \mathbf {R} \end{bmatrix}. \end{aligned}$$
(26)

Clearly, the convergence of the system is determined by the eigenvalues of \(F^{\frac{1}{2}}\). Note that (26) can be rewritten using the auxiliary variable \(\mathbf {v} = \mathbf {w}_{\tau } + F^{\frac{1}{2}} \mathbf {w}\), which leads to

$$\begin{aligned} \begin{bmatrix} \mathbf {w} \\ \mathbf {v} \end{bmatrix}_{\tau } + \begin{bmatrix} F^{\frac{1}{2}}&\quad -\,I \\ 0&\quad F^{\frac{1}{2}} \end{bmatrix} \begin{bmatrix} \mathbf {w} \\ \mathbf {v} \end{bmatrix} = \begin{bmatrix} \mathbf {0} \\ \mathbf {R} \end{bmatrix}. \end{aligned}$$

\(\square \)

The main consequence of Proposition 6 is that the new DTS formulation (25) converges to steady-state if the eigenvalues of F are non-zero and do not lie on the negative real axis. If the eigenvalues of F have positive real parts, then both DTS formulations (4) and (25) converge in dual time. In particular, the decay rates are determined by the eigenvalues with the smallest real parts of F and \(F^{\frac{1}{2}}\), respectively.

Remark 4

The new DTS technique (25) can drive the solution to steady-state in cases where the classical one (4) fails to do so, in accordance with Proposition 5.

Remark 5

The square root maps a complex number close to the imaginary axis to one farther away from it. Similarly, applied to a number of large magnitude, the square root returns a number closer to the origin. These two effects are illustrated in Fig. 2.

Fig. 2 Complex numbers with nonnegative real parts (left) and their square roots (right). Note that the distribution of points near the origin of the complex plane tends to rarefy

As pointed out in Remark 5, if F has eigenvalues close to the imaginary axis, the second-derivative DTS decays faster than the classical formulation. Another important effect of the square root is that it narrows the spectrum of F. Pushing the eigenvalues away from the imaginary axis increases the decay rate, while contracting the spectrum enables the use of larger dual time-steps for an explicit time-integrator. Both these characteristics (important for fast convergence) could possibly be obtained by using a matrix G other than \(F^{\frac{1}{2}}\) in (14).

Remark 6

The real and scalar analysis made in Sect. 3.1 only highlights the first of these two effects for (25). The effect of modifying the spectrum is discussed further below.
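
Both spectral effects of Remark 5 can be observed numerically. The sketch below uses an assumed symmetric positive definite test matrix whose eigenvalues lie roughly in \(\left( 0,4\right] \): the smallest real part grows (eigenvalues of magnitude less than one), while the spectral radius shrinks (magnitudes greater than one).

```python
import numpy as np
from scipy.linalg import sqrtm

# Illustration of the two spectral effects of the principal square root.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
F = M @ M.T / 50 + 0.01 * np.eye(50)            # SPD test matrix (assumption)

eig_F = np.linalg.eigvals(F)
eig_sqrtF = np.linalg.eigvals(sqrtm(F))

print(eig_F.real.min(), eig_sqrtF.real.min())   # smallest real part grows
print(abs(eig_F).max(), abs(eig_sqrtF).max())   # spectral radius shrinks
```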

4 Numerical Experiments

In this section we perform numerical tests for both the classical (4) and the new DTS technique (25). In all experiments, the discrete operators are sixth-order accurate in the interior and the matrix square root is computed with the MATLAB function sqrtm [19].

4.1 First-Order Ordinary Differential Equations

Consider the steady problem

$$\begin{aligned} \begin{array}{rcll} u_{x} &{}= &{}f, &{}0< x < 1, \\ u\left( 0\right) &{}= &{}g, \end{array} \end{aligned}$$
(27)

where \(f\left( x\right) = 10\pi \cos \left( 10 \pi x\right) \) and \(g = 1\). The analytical solution to (27) is \(u\left( x\right) = \sin \left( 10 \pi x\right) + 1\).

To discretize (27), we use an (N + 1)-point uniform grid over \(\left[ 0,1\right] \), where \(x_{j} = jh\), \(j = 0, \dots , N\) and \(h = 1/N\). Let \(\mathbf {f}\) be a grid function such that \(f_{j} = f\left( x_{j}\right) \) and \(\mathbf {u}\) the approximate solution to (27). By applying a Summation-by-Parts (SBP) discretization to (27) for the derivative and a Simultaneous-Approximation-Term (SAT) to impose the boundary condition (see “Appendix A and B” for details and [16] for references), we get

$$\begin{aligned} P^{-1}Q \mathbf {u} = \mathbf {f} + \sigma P^{-1} \mathbf {e}_{0} \left( u_{0} - g\right) , \end{aligned}$$
(28)

where \(\sigma \) is a penalty parameter and \(\mathbf {e}_{0} = \left[ 1,0,\dots ,0\right] ^{T} \in {\mathbb {R}}^{\left( N+1\right) }\). Note that (28) has the form (8) with

$$\begin{aligned} F = P^{-1}\left( Q - \sigma E_{0}\right) , \quad \mathbf {R} = \mathbf {f} - \sigma P^{-1} \mathbf {e}_{0} g, \quad E_{0} = \text {diag}\left( 1,0,\dots ,0\right) . \end{aligned}$$
(29)

The penalty term in (28) makes the classical DTS technique (4) stable in the P-norm \(\left\| \mathbf {w}\right\| _{P} = \sqrt{\mathbf {w}^{T}P\mathbf {w}}\) if \(\sigma < - 1/2\) (see “Appendix B”). Also, for these values of the penalty parameter the new DTS (25) applied to (28) gives rise to a stable scheme since Proposition 6 holds.

Remark 7

For \(\sigma < -1/2\) the classical pseudo-time marching technique (4) is convergent since all the eigenvalues of F have positive real parts. The new DTS formulation also converges since \(F^{\frac{1}{2}}\) has only eigenvalues with positive real part [17].

Consider \(\sigma = -1\). We use a spatial increment \(h = 0.01\) to represent the solution on \(\left[ 0,1\right] \) and the fourth-order Runge–Kutta scheme as time-integrator, unless otherwise specified. For both schemes (4) and (25) we use \(\mathbf {w} = \mathbf {1} := \left[ 1,\dots ,1\right] ^{T}\) as the initial guess, and for (25) we additionally set \(\mathbf {w}_{\tau } = \mathbf {0}\) initially. Let \(\mathbf {w}^{n}\) be the solution to either (4) or (25) at the time \(\tau ^{n} = n\varDelta \tau \). We consider the solution to be converged if \(\left\| \mathbf {w}^{n} - \mathbf {u}\right\| _{P} < 10^{-6}\), where \(\mathbf {u}\) is the solution to (28).
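
A compressed version of this experiment is sketched below. For brevity it assumes a standard second-order accurate SBP pair (P, Q) rather than the sixth-order operators used in the paper, so the iteration counts it produces differ from those reported; the dual time-steps are likewise illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

# Sketch of the steady problem (27)-(29) with sigma = -1 and a
# second-order SBP pair (an assumption; the paper uses sixth order).
N = 100; h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
P = h * np.diag([0.5] + [1.0] * (N - 1) + [0.5])
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[N, N] = -0.5, 0.5

sigma = -1.0
E0 = np.zeros((N + 1, N + 1)); E0[0, 0] = 1.0
Pinv = np.linalg.inv(P)
F = Pinv @ (Q - sigma * E0)                       # F from (29)
R = 10 * np.pi * np.cos(10 * np.pi * x) - sigma * Pinv[:, 0]   # g = 1
u = np.linalg.solve(F, R)                         # discrete solution of (28)
G = np.real(sqrtm(F))                             # principal square root

def rk4(f, y, dt):
    k1 = f(y); k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def pnorm(e):
    return np.sqrt(e @ (P @ e))                   # the P-norm

def march(f, y, get_w, dtau, tol=1e-6, itmax=100000):
    for n in range(1, itmax + 1):
        y = rk4(f, y, dtau)
        if pnorm(get_w(y) - u) < tol:
            return n
    return itmax

# Classical DTS (4): w_tau = R - F w, with an illustrative dual time-step.
n_old = march(lambda w: R - F @ w, np.ones(N + 1), lambda w: w, 0.01)

# New DTS (25) in first-order form, z = [w, w_tau], with w_tau(0) = 0.
def rhs(z):
    w, wt = z[:N + 1], z[N + 1:]
    return np.concatenate([wt, R - 2 * (G @ wt) - F @ w])
n_new = march(rhs, np.concatenate([np.ones(N + 1), np.zeros(N + 1)]),
              lambda z: z[:N + 1], 0.1)

print(n_old, n_new)   # the new DTS should need far fewer iterations
```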

The improved convergence can be seen directly by comparing the spectra of F and \(F^{\frac{1}{2}}\). From Fig. 3, it is clear that the second-derivative DTS has better convergence properties, since its eigenvalue with minimum real part is further away from the imaginary axis. The minimum number of iterations to convergence for the classical DTS is 177, corresponding to \(\varDelta \tau = 0.01775\). Figure 4 shows that the new DTS (25) allows for larger dual time-steps, since this formulation is less stiff than the classical one. The minimum number of iterations for the new DTS formulation is 36, reached for \(\varDelta \tau = 0.198\). We conclude that the new DTS formulation is approximately five times more efficient than the old one.

Fig. 3 The spectrum of F and of its square root for the first-order problem with \(\sigma = -1\)

Fig. 4 Number of iterations for the classical and new dual-time marching techniques for \(\sigma = -1\). The new DTS is less stiff than the classical formulation and a larger dual time-step can be chosen

In Fig. 5, we have also tested the classical DTS with two fourth-order accurate Kinnmark–Gray integration methods (with \(K = 6\) and \(K = 8\)), which maximize the allowable dual time-step for hyperbolic problems [18]. For \(K = 6\), the optimal dual time-step is \(\varDelta \tau = 0.0229\) and convergence is achieved in 135 iterations. The Kinnmark–Gray method with \(K = 8\) converges in 152 iterations for \(\varDelta \tau = 0.0235\). The results show that the new DTS formulation (25) is almost four times faster than the classical DTS (4), despite the classical DTS being paired with time-integrators specifically tailored to the problem (27).

Fig. 5 Number of iterations for the classical dual-time marching technique for \(\sigma = -1\) with two Kinnmark–Gray methods as time-integrators [\(K = 6\) (left) and \(K = 8\) (right)]

The convergence results in Fig. 5 are shown in a neighborhood of the optimal dual time-step. It is important to mention that these methods are stable and convergent for \(\varDelta \tau \le 0.0309\) and \(\varDelta \tau \le 0.0437\), respectively, see Fig. 6. In other words, the number of iterations to convergence is not monotonically decreasing with increasing \(\varDelta \tau \). Hence, the optimal choice of the dual time-step is not necessarily the largest \(\varDelta \tau \) which leads to a stable numerical method. A similar consideration (although less dramatic) holds for the fourth-order Runge–Kutta scheme applied to the classical and the new DTS formulations, see Fig. 7.

Fig. 6 Number of iterations for the classical dual-time marching technique for \(\sigma = -1\) with two Kinnmark–Gray methods as time-integrators [\(K = 6\) (left) and \(K = 8\) (right)]. All the dual time-steps which lead to convergence are shown

Fig. 7 Number of iterations for the classical and new dual-time marching techniques for \(\sigma = -1\). All the dual time-steps which lead to convergence are shown

Remark 8

The convergence results in Figs. 4 and 5 show that the choice of the time-integration scheme and its fit to the spectrum have a significant influence on the convergence rate of DTS formulations.

Now consider \(\sigma = -1/2\). The DTS technique (4) applied to (28) is not provably stable, but all the eigenvalues of the matrix F have positive real parts, as shown in Fig. 8. As a result, both (4) and (25) converge to steady-state. In Fig. 9, the number of iterations to convergence is shown as a function of \(\varDelta \tau \) for both procedures. The optimal dual time-step for the classical DTS is \(\varDelta \tau = 0.01778\), leading to 284 iterations. The minimum number of iterations for the new DTS formulation is 36, corresponding to \(\varDelta \tau = 0.1964\). The new DTS formulation is approximately eight times more efficient than the old one.

Fig. 8 The spectrum of F and of its square root for the first-order problem with \(\sigma = -1/2\)

Fig. 9 Number of iterations for the classical and new dual-time marching techniques for \(\sigma = -1/2\). The new DTS is less stiff than the classical formulation and a larger dual time-step can be chosen

For \(\sigma = -1/4\) the classical DTS is not energy-stable and F in (29) has eigenvalues with negative real parts. Nonetheless, the second-derivative DTS drives the solution to steady-state, since no eigenvalue of F lies on the negative real axis (see Proposition 5). This situation, predicted by Remark 4, is illustrated in Fig. 10, where the spectra of both F and \(F^{\frac{1}{2}}\) are presented. As a result, the classical DTS fails to converge for any dual time-step (for example, \(\varDelta \tau = 0.01\) in Fig. 11). Conversely, the new DTS technique is convergent and the minimum number of iterations to convergence is 35, reached for \(\varDelta \tau = 0.1996\).

Fig. 10 The spectrum of F and of its square root for the first-order problem with \(\sigma = -1/4\)

Fig. 11 In the left figure, the classical DTS fails to drive the solution to steady-state with \(\varDelta \tau = 0.01\) and \(\sigma = -1/4\). In the right figure, the number of iterations needed for convergence of the new dual-time marching technique is shown for \(\sigma = -1/4\)

4.2 A Model of the Time-Dependent Compressible Navier–Stokes Equations

Next, we study both DTS approaches applied to the following system

$$\begin{aligned} \begin{array}{rclll} \mathbf {u}_{t} + A \mathbf {u}_{x} &{}= &{}\varepsilon B \mathbf {u}_{xx} + \mathbf {F}\left( x,t\right) , &{}0< x< 1, &{}t> 0, \\ \mathbf {u}\left( x,0\right) &{}= &{}\mathbf {f}\left( x\right) , &{}0< x < 1, \\ \left( u_{1} + \sqrt{2} u_{2} - \varepsilon u_{2,x}\right) \left( 0,t\right) &{}= &{}g_{0}\left( t\right) , &{}t> 0, \\ \left( u_{1} - \sqrt{2} u_{2} - \varepsilon u_{2,x}\right) \left( 1,t\right) &{}= &{}g_{1}\left( t\right) , &{}t > 0, \end{array} \end{aligned}$$
(30)

where \(\mathbf {u}\left( x,t\right) = \left[ u_{1}\left( x,t\right) ,u_{2}\left( x,t\right) \right] ^{T}\), \(\varepsilon = 10^{-2}\). The matrices A and B are real and given by

$$\begin{aligned} A = \begin{bmatrix} 0&\quad 1 \\ 1&\quad 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0&\quad 0 \\ 0&\quad 1 \end{bmatrix} \end{aligned}$$

while \(\mathbf {F}\left( x,t\right) \), \(\mathbf {f}\left( x\right) \), \(g_{0}\left( t\right) \), \(g_{1}\left( t\right) \) are given data.

The boundary conditions in (30) make the linear Navier–Stokes-like system strongly well-posed, i.e. a unique solution to (30) exists and its norm is bounded by the boundary and initial data. Moreover, the corresponding semi-discrete problem is strongly stable if the SBP-SAT approach is used. These theoretical results are shown in “Appendix C and D”.

Here we limit ourselves to the study of the fully-discrete problem

$$\begin{aligned} \frac{3 \mathbf {v}^{n+1} - 4 \mathbf {v}^{n} + \mathbf {v}^{n-1}}{2 \varDelta t} + D \otimes A \mathbf {v}^{n+1} = \varepsilon D_{2} \otimes B \mathbf {v}^{n+1} + \widetilde{\mathbf {F}}^{n+1} + \mathbf {SAT}^{n+1}, \end{aligned}$$
(31)

with \(\mathbf {v}^{0} = \widetilde{\mathbf {f}}\). The formulation (31) is obtained from (30) by discretizing in space with SBP-SAT and using BDF2 in time. This two-step method requires also \(\mathbf {v}^{1}\) as initial data, which is computed using the same space discretization and Euler backward in time.

We consider a grid with \(x_{j} = jh\), \(j = 0,\dots ,N\) where \(h = 1/N\) is the grid spacing, and the grid functions \(\widetilde{\mathbf {f}}\), \(\widetilde{\mathbf {F}}^{n} \in {\mathbb {R}}^{2\left( N+1\right) }\) which approximate \(\mathbf {f}, \mathbf {F}\left( t^{n}\right) \) in the continuous problem (30). With each grid point we associate the approximate solution \(\mathbf {v} \in {\mathbb {R}}^{2\left( N+1\right) }\), such that

$$\begin{aligned} v_{2j}^{n} \cong u_{1}\left( x_{j},t^{n}\right) , \ v_{2j+1}^{n} \cong u_{2}\left( x_{j},t^{n}\right) , \quad j = 0, \dots , N. \end{aligned}$$

In the fully-discrete problem (31), the symbol \(\otimes \) denotes the Kronecker product defined by

$$\begin{aligned} A = \left\{ a_{ij}\right\} \in {\mathbb {R}}^{m \times n}, \ \ B \in {\mathbb {R}}^{p \times q}, \quad \ \ A \otimes B = \begin{bmatrix} a_{11} B&\quad \cdots&\quad a_{1n} B \\ \vdots&\quad \ddots&\quad \vdots \\ a_{m1} B&\quad \cdots&\quad a_{mn} B \end{bmatrix} \in {\mathbb {R}}^{mp \times nq}. \end{aligned}$$
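
For instance, with numpy the Kronecker products appearing in (31) can be formed directly; the 3-point operator D below is a toy assumption:

```python
import numpy as np

# D (x) A applies the difference operator D across the grid and the
# 2x2 system matrix A componentwise.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.array([[-1.0, 1.0, 0.0],
              [-0.5, 0.0, 0.5],
              [0.0, -1.0, 1.0]])
print(np.kron(D, A).shape)   # (6, 6), i.e. (3*2) x (3*2)
```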

Moreover, D and \(D_{2}\) are SBP operators for the first and second derivatives and the vector \(\mathbf {SAT}\) collects the penalty terms for the boundary conditions. The \(\mathbf {SAT}^{n+1}\) term in (31) can be written as

$$\begin{aligned} \mathbf {SAT}^{n+1} =&-\left( P^{-1} E_{0} \otimes \varSigma \right) \left[ \left( I_{N+1} \otimes H_{0}\right) \mathbf {v} - \varepsilon \left( D \otimes H_{D}\right) \mathbf {v} - \widetilde{\mathbf {g}}_{0}\right] ^{n+1} \nonumber \\&+ \left( P^{-1} E_{N} \otimes \varSigma \right) \left[ \left( I_{N+1} \otimes H_{N}\right) \mathbf {v} - \varepsilon \left( D \otimes H_{D}\right) \mathbf {v} - \widetilde{\mathbf {g}}_{N} \right] ^{n+1}, \end{aligned}$$
(32)

where \(E_{0} = \text {diag}\left( 1,0,\dots ,0\right) \), \(E_{N} = \text {diag}\left( 0,\dots ,0,1\right) \) and \(I_{M}\) indicates the \(M \times M\) identity matrix. Furthermore, we have used

$$\begin{aligned} \varSigma = \begin{bmatrix} 0&\quad 0 \\ 0&\quad 1 \end{bmatrix}, \quad H_{0} = \begin{bmatrix} 1&\quad \sqrt{2} \\ 1&\quad \sqrt{2} \end{bmatrix}, \quad H_{N} = \begin{bmatrix} 1&\quad -\,\sqrt{2} \\ 1&\quad -\,\sqrt{2} \end{bmatrix}, \quad H_{D} = \begin{bmatrix} 0&\quad 1 \\ 0&\quad 1 \end{bmatrix} \end{aligned}$$
(33)

and \(\widetilde{\mathbf {g}}_{0}^{n+1} = g_{0}\left( t^{n+1}\right) \mathbf {1}\), \(\widetilde{\mathbf {g}}_{N}^{n+1} = g_{1}\left( t^{n+1}\right) \mathbf {1}\) are vectors in \({\mathbb {R}}^{2\left( N+1\right) }\).

To solve the discrete problem (31) we can write the classical (4) and the new DTS formulation (25) by defining

$$\begin{aligned} F&= \frac{3}{2\varDelta t} I_{2\left( N+1\right) } + D \otimes A - \varepsilon D_{2} \otimes B \nonumber \\&\quad + \,\left( P^{-1}E_{0} \otimes \varSigma \right) \left[ \left( I_{N+1} \otimes H_{0}\right) - \varepsilon \left( D \otimes H_{D}\right) \right] \nonumber \\&\quad - \,\left( P^{-1}E_{N} \otimes \varSigma \right) \left[ \left( I_{N+1} \otimes H_{N}\right) - \varepsilon \left( D \otimes H_{D}\right) \right] \end{aligned}$$
(34)

and

$$\begin{aligned} \mathbf {R} = \frac{2\mathbf {v}^{n}}{\varDelta t} - \frac{\mathbf {v}^{n-1}}{2\varDelta t} + \left( P^{-1} E_{0} \otimes \varSigma \right) \widetilde{\mathbf {g}}_{0}^{n+1} - \left( P^{-1} E_{N} \otimes \varSigma \right) \widetilde{\mathbf {g}}_{N}^{n+1} + \widetilde{\mathbf {F}}^{n+1}. \end{aligned}$$
(35)
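
To make the construction concrete, the following sketch assembles F in (34) with numpy Kronecker products. The second-order SBP pair (P, Q) and the shortcut \(D_{2} = D^{2}\) are simplifying assumptions; the paper uses sixth-order operators and a genuine SBP second-derivative operator.

```python
import numpy as np

# Illustrative assembly of F in (34); SBP pair and D2 are assumptions.
N = 200; h = 1.0 / N; eps = 1e-2; dt = 0.1
P = h * np.diag([0.5] + [1.0] * (N - 1) + [0.5])
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[N, N] = -0.5, 0.5
Pinv = np.linalg.inv(P)
D = Pinv @ Q
D2 = D @ D                       # wide-stencil stand-in for the SBP D2

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])
Sig = np.array([[0.0, 0.0], [0.0, 1.0]])                       # Sigma in (33)
H0 = np.array([[1.0, np.sqrt(2.0)], [1.0, np.sqrt(2.0)]])
HN = np.array([[1.0, -np.sqrt(2.0)], [1.0, -np.sqrt(2.0)]])
HD = np.array([[0.0, 1.0], [0.0, 1.0]])

E0 = np.zeros((N + 1, N + 1)); E0[0, 0] = 1.0
EN = np.zeros((N + 1, N + 1)); EN[N, N] = 1.0
IN1 = np.eye(N + 1)

F = (1.5 / dt * np.eye(2 * (N + 1)) + np.kron(D, A) - eps * np.kron(D2, B)
     + np.kron(Pinv @ E0, Sig) @ (np.kron(IN1, H0) - eps * np.kron(D, HD))
     - np.kron(Pinv @ EN, Sig) @ (np.kron(IN1, HN) - eps * np.kron(D, HD)))
```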

To obtain the computational results, we have used the manufactured solution

$$\begin{aligned} u_{1}\left( x,t\right) = \cos \left( 10\pi x - t\right) , \quad u_{2}\left( x,t\right) = \sin \left( 10\pi x - t\right) , \end{aligned}$$

with a spatial increment \(h = 0.005\) and a physical time-step \(\varDelta t = 0.1\). The new DTS (25) is less stiff than the classical time-marching technique (4) for this problem as well, see Fig. 12.

Fig. 12 The spectrum of F and of its square root for the linearized Navier–Stokes equations

Figure 13 shows the number of iterations needed for convergence with the fourth-order Runge–Kutta scheme as pseudo-time integrator. The optimal dual time-step for the classical DTS (4) is \(\varDelta \tau = 1.119 \times 10^{-3}\). With the stopping criterion \(\left\| \mathbf {w}^{n} - \mathbf {u}\right\| _{P} < 10^{-6}\), this formulation reaches steady-state in 542 iterations. The optimal choice for the second-derivative DTS is \(\varDelta \tau = 0.052\), which leads to convergence in 57 inner iterations. This implies that the new DTS is approximately ten times more efficient than the classical one.

Fig. 13 Number of iterations to convergence using the classical and the new DTS

5 Main Drawbacks and Open Questions

The previous numerical tests show that the new DTS formulation (25) has better convergence properties than the conventional time-marching technique (4). However, when we rewrite (25) in first-order form as in (16), we obtain a system with twice the dimension of the one in (4). Moreover, the computation of the principal square root of F may be expensive if the dimension of the system (8) is large. In Table 1, the computational times of both DTS techniques (4) and (25) are shown for the numerical experiment in Sect. 4.1. The last column provides the elapsed time for computing \(F^{\frac{1}{2}}\) with the routine sqrtm presented in [19].

Table 1 Execution times of the DTS schemes with optimal smoothing step for the steady problem (27)

If the square root of F is given, then Table 1 shows that, as the number of nodes increases, the second-derivative DTS (25) outperforms the classical technique (4). However, the computation of the square root becomes expensive. Therefore, we are interested in suboptimal formulations of (14) which do not involve fractional or negative powers of F.

5.1 Alternative Formulations

Our goal is to provide provably convergent DTS schemes of the form (14), while avoiding the computation of \(F^{\frac{1}{2}}\). This system of second-order differential equations can be written as a first-derivative formulation

$$\begin{aligned} \mathbf {z}_{\tau } + A \mathbf {z} = \mathbf {b}, \quad \text {where} \quad \mathbf {z} = \begin{bmatrix} \mathbf {w} \\ \mathbf {w}_{\tau } \end{bmatrix}, \quad A = \begin{bmatrix} 0&\quad -\,I \\ F&\quad 2G \end{bmatrix}, \quad \mathbf {b} = \begin{bmatrix} \mathbf {0} \\ \mathbf {R} \end{bmatrix}. \end{aligned}$$

Let \(K = K\left( F\right) \) be a function of F. By choosing \(G = \left( K^{-1}F + K\right) /2\), we can transform the system into

$$\begin{aligned} \begin{bmatrix} \mathbf {w} \\ \mathbf {w}_{\tau } \end{bmatrix}_{\tau } + \begin{bmatrix} I&\quad 0 \\ K^{-1}F&\quad I \end{bmatrix}^{-1} \begin{bmatrix} K^{-1}F&\quad -\,I \\ 0&\quad K \end{bmatrix} \begin{bmatrix} I&\quad 0 \\ K^{-1}F&\quad I \end{bmatrix} \begin{bmatrix} \mathbf {w} \\ \mathbf {w}_{\tau } \end{bmatrix} = \begin{bmatrix} \mathbf {0} \\ \mathbf {R} \end{bmatrix}. \end{aligned}$$
(36)

The optimal formulation with \(G = K = F^{\frac{1}{2}}\) is included in (36) and leads to (26).

There are two straightforward alternatives for K. The first one is \(K = \kappa I\) with \(\kappa > 0\). This choice gives rise to a convergent formulation with a decay determined by \(\kappa \) and \(\lambda /\kappa \), where \(\lambda \) is any eigenvalue of F. If \(\kappa \) is large, the damping of the system is reduced by the scaled eigenvalues \(\lambda /\kappa \), and we recover the behavior of a preconditioned classical DTS (10) with eigenvalues scaled to \(\lambda /\kappa \). For small values of \(\kappa \), every mode of the solution to (36) converges to steady-state uniformly, but slowly. The second choice is \(K = F\), which leads to the same damping as the classical DTS (4) if all the eigenvalues of F have real parts less than one. Otherwise, the convergence is dominated by the spurious eigenvalues equal to one. Therefore, these two choices do not lead to an improved formulation compared to the classical DTS (4); the eigenvalue claim for \(K = \kappa I\) is verified on a single mode in the sketch below.
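
A minimal check, assuming toy values of \(\lambda \) and \(\kappa \): with \(2\gamma = \lambda /\kappa + \kappa \), the scalar companion matrix of (15) has eigenvalues \(\kappa \) and \(\lambda /\kappa \).

```python
import numpy as np

# Companion matrix of (15) with 2*gamma = lam/kappa + kappa;
# lam and kappa are assumed toy values.
lam, kappa = 0.5 + 2.0j, 4.0
C = np.array([[0.0, -1.0], [lam, lam / kappa + kappa]])
print(np.linalg.eigvals(C))   # approximately kappa and lam/kappa
```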

All other alternatives for K that we have investigated lead to a matrix G which involves inverse matrices or fractional powers of F. For this reason, we conclude that the choice \(G = \left( K^{-1}F + K\right) /2\) in (14) leads to either inefficient or expensive DTS schemes. The existence of alternative formulations not affected by these two effects is still a matter of research.

5.2 Approximations of the Matrix Square Root

Table 1 shows that the direct computation of the square root with the MATLAB function sqrtm is costly. A possible alternative is to approximate \(F^{\frac{1}{2}}\) through an iterative method, as in [20, 21], and to consider an approximation \(G = F^{\frac{1}{2}} + \varDelta \) such that \(\varDelta \) is small in some norm. In this case, the DTS with two pseudo-time derivatives (14) can be rewritten as

$$\begin{aligned} \mathbf {w}_{\tau \tau } + 2 F^{\frac{1}{2}} \mathbf {w}_{\tau } + F \mathbf {w} = \mathbf {R} - 2 \varDelta \mathbf {w}_{\tau } \end{aligned}$$

or, equivalently,

$$\begin{aligned} \begin{bmatrix} \mathbf {w} \\ \mathbf {v} \end{bmatrix}_{\tau } + \underbrace{\begin{bmatrix} F^{\frac{1}{2}}&\quad -\,I \\ -\,2\varDelta F^{\frac{1}{2}}&\quad F^{\frac{1}{2}} + 2 \varDelta \end{bmatrix}}_{M} \begin{bmatrix} \mathbf {w} \\ \mathbf {v} \end{bmatrix} = \begin{bmatrix} \mathbf {0} \\ \mathbf {R} \end{bmatrix}, \quad \mathbf {v} = \mathbf {w}_{\tau } + F^{\frac{1}{2}} \mathbf {w}. \end{aligned}$$
(37)

If \(G = F^{\frac{1}{2}}\), i.e. \(\varDelta = 0\), this formulation is equivalent to (25), whose convergence is determined by the eigenvalues of \(F^{\frac{1}{2}}\). If \(\left\| \varDelta \right\| < \varepsilon \), we may retain faster convergence than the classical DTS (4) for small values of \(\varepsilon \), as suggested by (24) and Fig. 1.

As an example, consider the matrix F in (29) with \(\sigma = -1\). In Fig. 14 the eigenvalues of M in (37) are shown for \(\varDelta = \xi R\), where \(\xi \in \left\{ 10^{-3}, 10^{-1}\right\} \) and R is a random matrix with entries \(r_{ij} \in \left[ -1/2,1/2\right] \). In both cases, the dual time-stepping with two derivatives converges faster than the classical DTS, which required at least 177 iterations. For \(\xi = 10^{-3}\) convergence is achieved in 37 iterations (\(\varDelta \tau = 0.193\)), while the perturbation with \(\xi = 10^{-1}\) requires 48 iterations for \(\varDelta \tau = 0.1778\) (see Fig. 15).
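
The test can be reproduced in outline as follows. The sketch rebuilds the second-order SBP version of F from the sketch in Sect. 4.1 (an assumption; the paper uses sixth-order operators) and prints the smallest real part of the spectrum of the first-order system matrix, which is similar to M in (37) and hence has the same eigenvalues.

```python
import numpy as np
from scipy.linalg import sqrtm

# Perturbed square root G = F^(1/2) + Delta, with Delta = xi * R.
N = 100; h = 1.0 / N
P = h * np.diag([0.5] + [1.0] * (N - 1) + [0.5])
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[N, N] = -0.5, 0.5
E0 = np.zeros((N + 1, N + 1)); E0[0, 0] = 1.0
F = np.linalg.inv(P) @ (Q + E0)            # (29) with sigma = -1

n = N + 1
G0 = np.real(sqrtm(F))
rng = np.random.default_rng(1)
for xi in (1e-3, 1e-1):
    Delta = xi * (rng.random((n, n)) - 0.5)
    C = np.block([[np.zeros((n, n)), -np.eye(n)],
                  [F, 2 * (G0 + Delta)]])
    # the eigenvalue with smallest real part governs the decay (Remark 1)
    print(xi, np.linalg.eigvals(C).real.min())
```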

Fig. 14 The spectra of M in (37) for \(\varDelta = \xi R\), where R is a random matrix with entries \(r_{ij} \in \left[ -1/2,1/2\right] \). The eigenvalues of M are shown for \(\xi = 10^{-3}\) (left) and \(\xi = 10^{-1}\) (right)

Fig. 15 Number of iterations for convergence using the second-derivative DTS with an approximated matrix square root \(G = F^{\frac{1}{2}} + \varDelta \) in (14) and \(\varDelta = \xi R\), where R is a random matrix with entries \(r_{ij} \in \left[ -1/2,1/2\right] \) and \(\xi \in \left\{ 10^{-3},10^{-1}\right\} \)

However, the iterative methods that could be used to approximate the matrix square root of F usually require matrix inversions, which are as costly as solving the problem (8). Therefore, the possibility of approximating the matrix square root also requires more research or new ideas.

6 Conclusions and Future Work

A new second-derivative dual time-stepping technique has been proposed, analyzed and optimized theoretically. The formulation involves a matrix multiplying the first derivative in dual time, which can be chosen to obtain the highest possible decay rate.

We have compared the performance of the new formulation with that of the classical DTS. Our technique improves the decay rate compared to the classical time-marching technique if the eigenvalues of the operator representing the system are near the imaginary axis. Furthermore, if the spectrum is not contained within the unit circle, the new second-derivative technique provides a system of equations which is less stiff than the classical DTS formulation.

Numerical computations for a first-order ordinary differential equation and a system modeling the Navier–Stokes equations corroborate the theoretical results. The simulations reveal that the new formulation is more efficient than the standard one as the size of the problem increases, provided that the required matrix \(F^{\frac{1}{2}}\) is available.

However, if the computation of \(F^{\frac{1}{2}}\) is required, the new DTS formulation is less efficient than the classical dual time-stepping technique. Nonetheless, our findings show that it is theoretically possible to achieve faster convergence to steady-state by employing second derivatives in pseudo-time. Our findings also point to the possibility of using other preconditioners than \(F^{\frac{1}{2}}\) in order to produce a suitable spectrum for the time-integration method. This is an interesting avenue for future work.