Exact analytical approach to differential equations with variable coefficients

This paper shows how to build an analytical solution for a differential equation of arbitrary order and with variable coefficients. It proves that the best-known approximate solutions for such a problem can be derived from the analytical expression presented in the paper. The formalism can easily be extended to the infinite-dimensional case, such as the quantum time-dependent Hamiltonian problem, and used for eigenvalue problems.


Introduction
Linear differential equations are ubiquitous in all areas of science. Very typically, a scientific problem is described by a mathematical formalism and, eventually, this implies a differential equation, possibly linear, that is to say

y^(n)(z) + a_(n−1)(z) y^(n−1)(z) + . . . + a_0(z) y(z) = 0,   (1)

where y^(j) denotes the j-th derivative of y(z). It is virtually impossible to list a complete bibliography on the topic. Its applicability ranges from quantum mechanics [1] and astrophysics [2] to chemistry and stochastic processes [3][4][5][6][7][8], just to cite textbooks or paper collections where the reader can find problems ending in a linear differential equation with variable coefficients. Also, integro-differential and non-linear differential equations can be reduced to a finite or infinite series of linear differential equations (see for example [9]). A very famous example is the Riccati equation, s'(z) + s(z)^2 = f(z), which, via the transformation s(z) = y'(z)/y(z), can be reduced to the second-order differential equation y''(z) = f(z) y(z). It is well known that eq. (1) can be written as a first-order differential equation in the matrix formalism, i.e.
or, alternatively,

d/dz Y(z) = M(z) Y(z),   (3)

where Y(z) is the vector Y(z) = {y(z), y'(z), . . . , y^(n−1)(z)}. More generally, the matrix M(z) describes a linear system of differential equations and may have an infinite dimension, as in the time-dependent Hamiltonian problem. Indeed, eqs. (2) and (3) include, in the infinite-dimensional case, the time-dependent Schrödinger equation

iℏ (∂/∂t) ψ(x, t) = H(t) ψ(x, t).   (5)
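The rewriting of eq. (1) as the first-order system above amounts to building the companion matrix of the coefficients a_j(z). A minimal sketch in Python (the function name and the sample equation are our illustrative choices, not taken from the paper):

```python
import numpy as np

def companion_matrix(a_funcs, z):
    """Companion matrix M(z) of y^(n) + a_{n-1}(z) y^(n-1) + ... + a_0(z) y = 0,
    acting on the vector Y = (y, y', ..., y^(n-1))."""
    n = len(a_funcs)
    M = np.zeros((n, n))
    M[:-1, 1:] = np.eye(n - 1)            # (y^(j))' = y^(j+1)
    M[-1, :] = [-a(z) for a in a_funcs]   # last row encodes the ODE itself
    return M

# example: y'' + z y' + 2 y = 0, i.e. a_0(z) = 2, a_1(z) = z, evaluated at z = 1
M = companion_matrix([lambda z: 2.0, lambda z: z], z=1.0)
```

The off-diagonal identity block implements the chain of derivative relations; only the last row carries the variable coefficients.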
In particular, the time-dependent Hamiltonian is a relevant field of research [10][11][12][13][14]. The aim of this paper is to find a formal analytical solution of eq. (2) (or eq. (3)). There are several formal approaches of great theoretical interest [15][16][17], but they are hard to handle for practical purposes. In this work it will be shown that, starting from the analytical solution, it is possible to recover and improve the most important approximations known in the literature as parts of one mathematical scheme (see [18] for an excellent review and textbook on approximate methods). The paper is organized as follows. Section 2 shows how to build a formal solution for a differential equation with variable coefficients. Section 3 applies the results of the previous section to the time-dependent Hamiltonian problem. Section 4 shows how to recover the solution of an ordinary differential equation using the time-dependent Hamiltonian approach previously developed. Section 5 presents and tests an analytical formula (derived from the previous sections) to evaluate the eigenvalues in the problem of a differential equation with boundary conditions. Finally, sect. 6 summarizes the obtained results and future work.

Formal solution for linear n-th-order differential equations with variable coefficients
To solve the problem set by eq. (1), we may solve the equivalent problem in the form of eq. (2) (or eq. (3)). This section is devoted to building a formal solution for differential equations. Our starting point is to evaluate the high-order derivatives of eq. (2). Considering the second derivative we have

Y''(z0) = [M'(z0) + M(z0)^2] Y(z0) = [O M(z0)] Y(z0),

where we defined the operator O ≡ ∂_z0 + M(z0). It is straightforward to show that, for n ≥ 1,

Y^(n)(z0) = [O^(n−1) M(z0)] Y(z0).

Performing the Taylor expansion of the vectorial function Y(z0 + z) around the point z0, we may write the whole series in terms of powers of O. Without loss of generality we set z0 = 0 and, after a few algebraic steps, we can write the formal solution for Y(z) as

Y(z) = [I + ∫_0^z du exp[uO] M(z0)|_(z0=0)] Y(0),   (10)

where I is the identity matrix and it is understood that O acts on the right, i.e. on M(z0). The symbol exp[uO]M(z0)|_(z0=0) means that the expression must be evaluated at z0 = 0. Analogously we may write the solution in a dual form, eq. (12), where it is understood that O acts on the left. Since the operator O is the sum of two operators, we can use several tools available in the literature to manipulate the solution given by eqs. (10) or (12). It is straightforward to find a closed-form expression when M(z) = M0, i.e. a constant matrix. Since [d/dz, M0] = 0, we may factorize the exponential and, resolving the integral of eq. (10), we obtain the well-known result Y(z) = exp[M0 z] Y(0). Further exact solutions can be found in more complex cases where M(z) has polynomial elements. Polynomial coefficients in differential equations are widely studied in the literature and there are many works on this topic (here a non-exhaustive list of references [19][20][21][22][23][24][25]). For the sake of brevity we limit ourselves to noting that, when the matrix elements are polynomials satisfying suitable requirements, a closed-form expression for eq. (10) can be found. The validity of the solving formula (10) or (12) extends to all matrices with integrable elements. To prove this we need to use the Trotter product formula [26] and write

exp[uO] = lim_(N→∞) (exp[(u/N) ∂_z0] exp[(u/N) M(z0)])^N.   (15)

Using the properties of the translation operator exp[(u/N) ∂_z0] we may rewrite eq. (15) as an ordered product of factors exp[(u/N) M(z0 + j u/N)], eq. (16). We now consider the norm of eq. (16).
Exploiting the well-known properties of the matrix norm, we obtain a bound given by the exponential of the Riemann sum (u/N) Σ_(j=0)^(N−1) ||M(z0 + j u/N)||, which, in the limit N → ∞, converges to the integral

∫_0^u ds ||M(z0 + s)||.

This proves that the operator exp[uO]M(z0), and consequently solution (10), exists for all matrices with integrable elements.
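The product formula (15), truncated at finite N, also provides a direct numerical check of the formal solution: the ordered product of matrix exponentials converges to Y(z). A minimal sketch, with an Airy-type test equation chosen by us for illustration:

```python
import numpy as np
from scipy.linalg import expm

def product_solution(M, z, N, Y0):
    """Ordered product  prod_j exp(M(j z/N) z/N) applied to Y0,
    the finite-N form of the Trotter-type product formula (15)."""
    dz = z / N
    Y = np.array(Y0, dtype=float)
    for j in range(N):
        Y = expm(M(j * dz) * dz) @ Y
    return Y

# Airy-type test (our choice): y'' = t y, Y = (y, y'), y(0) = 1, y'(0) = 0
M = lambda t: np.array([[0.0, 1.0], [t, 0.0]])
Y = product_solution(M, z=1.0, N=2000, Y0=[1.0, 0.0])
# the power series of the exact solution gives y(1) ≈ 1.17233
```

The ordering matters: the factor evaluated at the earliest point acts first on Y0, mirroring the action of the translation operator in eq. (16).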
What is left is to manipulate the exact solution to produce analytical results. There are mainly two important ways to proceed: the Baker-Campbell-Hausdorff formula [27][28][29] (and its dual, the Zassenhaus formula [16,17]) and the Trotter product formula [26]. The former is left to future work while, in this paper, we will focus on the latter.

Time-dependent Hamiltonian
The solution of a time-dependent Hamiltonian is a fundamental topic in quantum mechanics, quantum field theory, the quantum theory of many particles, etc. (see for example [1] for an excellent text and review book). Using the previous results we may write a formal solution of the quantum problem under consideration. From eq. (5) we infer that the operator O has the following expression:

O = ∂_t0 − (i/ℏ) H(t0).

Using the formal solution (12), via the Trotter formula, we may rewrite the exponential of the operator O as eq. (20), where for simplicity we set ℏ = 1 and it is understood that the operator exp[(u/N) ∂_t0] acts on the left. Combining eqs. (12) and (20) we can write the ket corresponding to the solution of eq. (5) as eq. (21). Note that the operator exp[(u/N) ∂_t0] translates the argument of the Hamiltonian H(t0) by a quantity u/N. The wave function is ψ(x, t) = ⟨x|ψ_t⟩, eq. (22). It is worth stressing that in this formalism the time dependence of the Hamiltonian, i.e. the dependence on t0, is transformed into a parameter dependence. The integration variable that gives rise to the time dependence is u. We achieve a first result. Taking the time derivative of eq. (21), after a little algebra, we find that the state |ψ_t⟩ satisfies eq. (23). The above expression is nothing but the solution to the Hamiltonian problem using a discrete approach [30].
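The discrete form (23) can be sketched numerically: for a Hermitian H(t), every factor of the ordered product of exponentials is unitary, so the norm of the state is preserved at each step. The driven two-level Hamiltonian below is purely illustrative, not taken from the paper (ℏ = 1):

```python
import numpy as np
from scipy.linalg import expm

# illustrative driven two-level Hamiltonian (our choice)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda t: sz + 0.5 * np.sin(t) * sx

T, N = 2.0, 1000
dt = T / N
psi = np.array([1.0, 0.0], dtype=complex)
for j in range(N):                 # discrete ordered product, as in eq. (23)
    psi = expm(-1j * H(j * dt) * dt) @ psi

norm = np.linalg.norm(psi)         # stays 1 up to round-off: each factor is unitary
```

The unitarity of each factor is what makes this discrete scheme a faithful approximation of the quantum evolution even at finite N.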
We now assume that we know the orthonormal eigenstates of the Hamiltonian, i.e. H(t0) |n_t0⟩ = E_n(t0) |n_t0⟩,
where, for what we said, the "time" t0 has to be considered as a parameter of a time-independent Hamiltonian. Using the closure relation [31] Σ_n |n_t0⟩⟨n_t0| = I, we may rewrite eq. (22) as eq. (25), where ψ(x, 0) = ⟨x|ψ_0⟩ and |n_j⟩ ≡ |n_j, j u/N⟩. Inserting a new set of states |x⟩ with the usual completeness and orthogonality properties, we obtain the alternative expression (27). Equations (25) and (27) represent the exact solution of the starting problem, eq. (5). Making a first-order approximation on the overlaps of consecutive eigenstates, where we defined z ≡ j u/N, the sums over the states |n_j⟩ can be performed using the Kronecker delta for two consecutive indices, and at first order for N → ∞ we arrive at eq. (30), where, by definition, c_n = ⟨n_0|ψ_0⟩ and we used the continuous limit of the discrete product. Via a simple integration by parts we obtain eq. (32). The first term on the right-hand side represents the adiabatic approximation (first introduced by [32]) while the second is the correction to the adiabatic term. Let us now evaluate the next-order approximation. Keeping in eq. (27) the terms up to order u/N, we may write the correction as eq. (33), where we defined ⟨ṅ, z| ≡ ∂_z ⟨n, z|. In terms of the wave function this gives eq. (34), where for compactness we introduced the function P_kn(z) = ∫_(−∞)^(∞) dy ψ_k(y, z) ∂_z ψ†_n(y, z). In order for the above expression to represent a correction to eq. (30), the correction δψ(x, t) must remain small compared with ψ(x, t), condition (35), where ψ(x, t) is given by eq. (30). The above inequality fixes the range of validity of our approach. It is evident that, if the ψ_n(x, t) are slowly varying functions of time, then, due to the presence of the derivative of ψ_n(x, t) in eqs. (32) and (34), δψ(x, t) is a small correction and in eq. (32) only the adiabatic approximation term survives.
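The dominance of the adiabatic term for slowly varying Hamiltonians can be checked numerically: evolving the instantaneous ground state under a slowly driven two-level Hamiltonian (an illustrative choice of ours, ℏ = 1) leaves the state, up to a phase, in the instantaneous ground state:

```python
import numpy as np
from scipy.linalg import expm

# slowly driven two-level Hamiltonian (our illustrative choice):
# H(t) = cos(theta) sigma_z + sin(theta) sigma_x, theta = 0.3 t / T
T, N = 50.0, 5000
dt = T / N
def H(t):
    th = 0.3 * t / T
    return np.array([[np.cos(th), np.sin(th)],
                     [np.sin(th), -np.cos(th)]], dtype=complex)

psi = np.linalg.eigh(H(0.0))[1][:, 0].astype(complex)  # ground state of H(0)
for j in range(N):                                     # discrete evolution
    psi = expm(-1j * H(j * dt) * dt) @ psi

ground_T = np.linalg.eigh(H(T))[1][:, 0]               # ground state of H(T)
overlap = abs(np.vdot(ground_T, psi))                  # near 1 in the adiabatic regime
```

A fast drive (large dθ/dt relative to the spectral gap) would degrade the overlap, which is exactly the regime where the correction term of eq. (32) becomes essential.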

Ordinary differential equations
With some caution, the previous approach can also be extended to ordinary differential equations, namely to the case where the matrix driving the system is not a Hermitian operator. In the previous section an approximate solution was obtained using the closure relation and the orthonormality condition. Since the matrix of the system is not Hermitian, we need to define two sets of vectors satisfying Σ_n |n⟩⟨n| = I, where ⟨n| in general does not coincide with the Hermitian conjugate of |n⟩. Repeating the same steps of the previous section, we end up with a similar approximate expression, eq. (46), where c_n = ⟨n_0|Y_0⟩ and λ_n(u), |n(u)⟩ are the eigenvalues and eigenvectors of M(u), respectively. For consistency with the previous section we also introduced the symbols |Y_t⟩ ≡ Y(t), |Y_0⟩ ≡ Y(0). To better explain the procedure and the notation, let us consider the second-order differential equation

y''(t) = f(t) y(t).   (39)

Writing it in the matrix formalism we have

d/dt (y, y')^T = [[0, 1], [f(t), 0]] (y, y')^T.

The eigenvalues and eigenvectors of the driving matrix are

λ_±(t) = ±√f(t),  |±(t)⟩ = N_i (1, ±√f(t))^T,   (41)

with N_i a factor that has to be chosen so as to satisfy conditions (37). It is then straightforward to obtain the dual vectors and, consequently, the coefficients c_n. We now consider an example of a differential equation and compare the exact solution with the approximate solution, and with the approximate solution supplemented by the correction given by eq. (33). Making formula (33) explicit for the problem under consideration, we obtain eq. (48), where λ(u) = √f(u), ḟ(z) = ∂_z f(z), c_n = ⟨n_0|Y(0)⟩ and the vectors |n⟩ are given by eq. (41). Let us consider for example the following differential equation: we compare its exact solution with the first-order approximation given by eq. (46) and with the approximation supplemented by correction (48). The result shows an excellent agreement when the correction is added (see figs. 1 and 2).
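A minimal numerical sketch of the first-order approximation in the spirit of eq. (46) for y'' = f(t) y, f > 0: keep only the exponentials of ∫ λ_± du = ±∫ √f du, with coefficients fixed by projecting the initial data on the eigenvectors (1, ±√f(0)), and drop the slowly varying eigenvector prefactor. The choice f(t) = 1 + t is ours for illustration, not the paper's example:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

f = lambda t: 1.0 + t          # illustrative choice with f > 0
y0, dy0 = 1.0, 0.0             # initial data y(0), y'(0)

def y_approx(t):
    """First-order approximation: c+ e^{+∫√f} + c- e^{-∫√f}."""
    S = quad(lambda u: np.sqrt(f(u)), 0.0, t)[0]
    cp = 0.5 * (y0 + dy0 / np.sqrt(f(0.0)))
    cm = 0.5 * (y0 - dy0 / np.sqrt(f(0.0)))
    return cp * np.exp(S) + cm * np.exp(-S)

# high-accuracy reference solution of y'' = f(t) y
sol = solve_ivp(lambda t, Y: [Y[1], f(t) * Y[0]], (0.0, 2.0), [y0, dy0],
                rtol=1e-10, atol=1e-12)
exact, appr = sol.y[0, -1], y_approx(2.0)
```

For this f the bare approximation tracks the exact solution to within roughly 20% at t = 2; as discussed in the text, adding correction (48) is what closes the remaining gap.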
More generally, it can be shown that for a function asymptotically behaving as a power law, f(t) ∼ ±t^α, the following properties hold for the differential equation (39) (for the sake of brevity we omit the analytical calculations): i) In order for condition (35) to hold, for an asymptotic power-law behaviour with f(t) > 0, it is sufficient that α > −2. This constraint on the power law coincides with the region of validity of the WKB approximation. ii) For f(t) < 0 the approximate solution (46) gives the same asymptotic results as the WKB approximation plus an extra constant for α > −2. For correction (48) to balance the extra constant it is sufficient that −2 < α < 1. In this range of the parameter α, correction (48) improves the result with respect to the WKB approximation.
It is worth stressing that condition i) is sufficient but not necessary. A simple example is given by the discontinuous function f(t) = aθ(1 − t), where θ(z) is the step function: it is easy to check that formula (46) gives the exact solution. We can better appreciate the novelty of our approach by studying problems in a finite interval containing the turning points, i.e. the points satisfying the equation f(t) = 0. As we will see, the accuracy of the presented approximation is very good, and in the next section we will show an analytical formulation for the eigenvalue problem associated with a differential equation with boundary conditions. We will devote the rest of the paper to this important problem. To better show this, we consider the following example, where the WKB approximation cannot be used, at least not directly. Let us study the following differential equation: its analytical solution can be found in terms of Mathieu functions. In figs. 3 and 4 we see that the approximation given by eq. (46) shows a small distortion with respect to the exact solution in the neighbourhood of the turning point t = π/2, while the WKB approximation does not hold, except in a region near the origin. As a final remark of this section, we note that the presented approach can be straightforwardly extended to higher-order differential equations.
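The smooth behaviour of the true solution at a turning point, against the divergence of the WKB amplitude |f|^(−1/4), can be seen numerically. The equation below, y'' + cos(t) y = 0 with turning point t* = π/2, is an illustrative stand-in of ours, not the paper's Mathieu example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' + cos(t) y = 0: oscillatory for t < pi/2, turning point at t* = pi/2
sol = solve_ivp(lambda t, Y: [Y[1], -np.cos(t) * Y[0]],
                (0.0, np.pi / 2), [1.0, 0.0], rtol=1e-10, atol=1e-12)
y_tp = sol.y[0, -1]                               # finite value at the turning point
wkb_amp = abs(np.cos(np.pi / 2 - 1e-6)) ** -0.25  # WKB prefactor blowing up near t*
```

The numerical solution remains of order one at t*, while the WKB prefactor already exceeds 30 at a distance 10^(-6) from the turning point; this is the distortion-free behaviour of eq. (46) exploited in the next section.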

Eigenvalue problem
In this section we will face the problem of finding the eigenvalues of a differential equation with boundary conditions. To do so, we will build and study several ad hoc analytical examples. This will allow us to better appreciate the accuracy of the proposed approach, in particular in cases where the WKB approximation cannot be used. Let us consider the following differential equation, defined in the interval t ∈ [−1, 1]:

d^2 y/dt^2 + λ(1 − t^2) y = 0,  y(0) = 0,  y(1) = 0,   (52)

with λ > 0. The problem admits a general analytical solution, eq. (53), where H_ν(z) is the Hermite function and F(α, β, z) is the Kummer confluent hypergeometric function. Imposing the condition y(0) = 0 we obtain eq. (54), while from the right boundary condition, i.e. y(1) = 0, we obtain the numerical values of λ that give non-trivial solutions to eq. (52). We now evaluate the eigenvalues given by the WKB approximation. In spite of the fact that the WKB approximation diverges at t = 1, it still gives finite values of λ that make the approximate function vanish at t = 1.
Note that the applicability of the WKB approximation rests on the fact that the divergent factor is compensated by the vanishing of the sine function at t = 1. The comparison between the exact result and the approximate WKB result is given in table 1. The percent error is defined as 100 |λ_ex − λ_appr| / λ_ex, where λ_ex is the exact value and λ_appr is the approximate value.
As an alternative, we may evaluate the eigenvalues using the approach presented in this paper. From eq. (46), via the condition y(0) = 0, we obtain eq. (56). Using the condition y(1) = 0, eq. (56) gives an equation for the eigenvalues. In table 2 we compare the exact values of λ with the present approximation; the percent error is defined as before.
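The exact reference eigenvalues used in tables 1 and 2 can be reproduced independently by a standard shooting method: integrate from t = 0 and locate the zeros of y(1) as a function of λ. A sketch (grid and tolerances are arbitrary choices of ours):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# shooting for  y'' + lam (1 - t^2) y = 0,  y(0) = 0, y(1) = 0
def y_at_1(lam):
    sol = solve_ivp(lambda t, Y: [Y[1], -lam * (1.0 - t**2) * Y[0]],
                    (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]          # value of y at the right boundary

grid = np.linspace(1.0, 200.0, 200)            # scan range chosen by us
vals = [y_at_1(l) for l in grid]
eigs = [brentq(y_at_1, grid[i], grid[i + 1])   # refine each sign change
        for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]
```

The normalization of the shooting slope y'(0) = 1 is immaterial: the eigenvalue condition y(1) = 0 is homogeneous in the initial data.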
In the next example we consider a case where, due to the divergence at the turning points, the WKB approximation cannot be used directly. Let us modify the problem with the boundary conditions stated in eq. (52) as follows. The general exact solution of this differential equation is still eq. (53). Imposing the boundary conditions, we obtain the exact sequence of eigenvalues (see table 3). Using the approximate expression for y(t) given by eq. (46), we may write an equation for the eigenvalues and obtain a sequence of approximate eigenvalues in good agreement with the exact ones (see table 3). Note that the boundary condition is at the turning point, i.e. t = 1, and we cannot use the WKB approximation directly, since the corresponding approximate function diverges at the turning point. Finally, we study a differential equation with mixed Dirichlet-Neumann boundary conditions, i.e.

d^2 y/dt^2 + λ(1 − t^2) y = 0,  y(0) = 0,  ẏ(1) = 0,

where ẏ(1) ≡ (d/dt) y|_(t=1) and λ > 0. Evaluating the solution at t = 0 we obtain again expression (54). Making the derivative of y(t) vanish at t = 1, we obtain the eigenvalues of the problem. Since the condition on the derivative is at the turning point, i.e. t = 1, we cannot use the WKB approximation directly. On the other hand, we may use the approximate expression for the derivative of y(t) given by eq. (47), which yields the equation for the eigenvalues. The values of λ so obtained are compared with the exact ones in table 4. In general, we can use eqs. (46) and (47) as formulas to find, analytically, the eigenvalues of a differential equation with boundary conditions defined in a finite interval.
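The exact eigenvalues of the mixed Dirichlet-Neumann problem can likewise be obtained by shooting on the derivative condition ẏ(1) = 0 at the turning point (grid and tolerances are again our choices):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# shooting for  y'' + lam (1 - t^2) y = 0,  y(0) = 0, y'(1) = 0
def dy_at_1(lam):
    sol = solve_ivp(lambda t, Y: [Y[1], -lam * (1.0 - t**2) * Y[0]],
                    (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]          # derivative of y at the right boundary

grid = np.linspace(0.5, 150.0, 300)              # scan range chosen by us
vals = [dy_at_1(l) for l in grid]
eigs = [brentq(dy_at_1, grid[i], grid[i + 1])    # refine each sign change
        for i in range(len(vals) - 1) if vals[i] * vals[i + 1] < 0]
```

Nothing special happens numerically at the turning point: the integration is smooth there, which is precisely why a shooting reference is available where the WKB function is not.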

Concluding remarks
In this paper a formal analytical solution for a linear differential equation of arbitrary order n and with variable coefficients has been presented. The solution has been found by exploiting the reduction of the problem to a first-order differential equation through the matrix formalism. This approach also provides a solution for differential equations involving infinite-dimensional operators, such as the time-dependent Schrödinger equation. It has been shown that the most important approximations available in the literature can be deduced from the proposed solution. Exact expressions suitable for computational purposes (see sect. 3 and eq. (27)) have been proved. With respect to ordinary differential equations, a noteworthy fact is that the proposed approximation is smooth around the turning points, and this allows one to study solutions of differential equations at points where the WKB approximation, in general, cannot be used. Examples of analytical eigenvalue evaluation have been shown. Even though we studied examples of second-order differential equations, the approach can easily be extended to higher-order differential equations.