1 Introduction

Most of the notable achievements in theoretical economics over the last fifty years have been related to finance. The first Nobel Memorial Prize in Economic Sciences was awarded in 1969 jointly to Frisch and Tinbergen ‘for having developed and applied dynamic models for the analysis of economic processes’. Many of their works were devoted to the development of mathematical methods for the analysis of economic processes, including mathematical modeling of financial processes [1]. The second Nobel Memorial Prize in Economic Sciences was awarded in 1970 to Samuelson ‘for the scientific work through which he has developed static and dynamic economic theory and actively contributed to raising the level of analysis in economic science’. In his works he studied the role of expectations in the theory of finance.

The growing importance of finance theory in economics is linked to two trends: the ever wider use of mathematics in the modeling of economic processes, and the application of the results of theoretical economics in practice. Both trends are closely related to finance. Mathematical modeling assumes the exact determination of parameters, which are usually expressed in financial terms; application of the theory in practice requires a description of cash flows and of the risk of using the models.

Significant development of the theory of finance, which includes the theory of corporate finance and the theory of investment, occurred in the twentieth century. Until then, the theory of finance had developed as a theory of state finance; in the twentieth century it became the theory of capital markets. A large number of significant works on the theory of finance were written between 1950 and 2000. Bachelier, the founder of the modern theory of finance, deserves the credit for giving the theory of finance a mathematical basis. In his works he anticipated many ideas of the twentieth century: the relationship between random and diffusion processes, Markov processes, the theory of Brownian motion, and much more that today reaches well beyond investment theory alone. One of the first models of the loanable funds market was built in the early twentieth century by Fisher. Equations for the balance between savings and investments (known as the IS-LM model and the Mundell-Fleming model) are the basis of modern macroeconomics. Their authors Hicks and Mundell are Nobel Prize winners. Mundell, in addition, created the theory of optimum currency areas, which allows us to call him the father of the euro. In the theory of financial investment, there is no concept that has been so widely tested and yet is so little credible as ‘efficient markets’. The so-called efficient market hypothesis performs a primary function: it justifies the use of probabilistic calculus in the analysis of capital markets. But if markets are ‘nonlinear stochastic dynamical systems’, the use of standard statistical analysis can lead to erroneous results, especially if it is based on the model of random walks.

One of the methods that permits an examination of the stability of stochastic systems is the classical method of Lyapunov functions, which was developed, for example, in the works by Barbashin [2], Hasminski [3], Valeev [4], Zubov [5] and others.

Investigating the mean stability or mean square stability of solutions of differential equations with random coefficients depending on a Markov process is a current problem. The theory of Markov processes was studied in the works by Chung [6], Davis [7], Dynkin [8, 9], Kolmogorov [10], Lévy [11], Skorokhod [12] and others. The use of the theory of Markov processes in the study of various economic processes can be found in the works by Elliot, Kopp [13], Malliaris, Brock [14] and Williams [15].

Dynamic systems considered in the present paper belong to the class of the so-called systems with random states. The works by Artem’ev [16], Katz, Krasovskii [17] and others are dedicated to such systems.

We offer a new approach to simulation by creating algorithms for the construction of moment equations and their quantification. The origin of the theory of moment equations and their use in the examination of the stability can be found in the works by Valeev [4] and his scientific school (e.g., [18]).

In the present paper we derive the functional equations for the particular density functions and the moment equations for the system, which are used in the investigation of solvability and mean square stability. We also show how the results can be applied to various practical problems.

2 Statement of the problem

Let (Ω,F,P) be a probability space (see, for example, [19]). On the probability space, we consider the initial value problem formulated for the stochastic system

\[
\frac{dx(t)}{dt} = A\bigl(t,\xi(t)\bigr)x(t) + B\bigl(t,\xi(t)\bigr), \tag{1}
\]
\[
x(0) = \varphi(\omega), \tag{2}
\]

where A is an \(m\times m\) matrix with random elements, B is an m-dimensional column vector function whose elements are random variables, \(\varphi\colon \Omega\to\mathbb{R}^m\), \(\varphi\in C(\Omega)\), and \(\xi(t)\) is a random Markov process with a finite number of states \(\theta_k\), \(k=1,2,\dots,q\), the probabilities of which are

\[
p_k(t) = P\bigl\{\xi(t) = \theta_k\bigr\}, \quad k=1,2,\dots,q, \tag{3}
\]

and satisfy the system of linear differential equations

\[
\frac{dp_k(t)}{dt} = \sum_{s=1}^{q} \pi_{ks}(t)\,p_s(t) \tag{4}
\]

with the transition matrix \((\pi_{ks}(t))_{k,s=1}^{q}\).

Definition 1 The m-dimensional random vector function x(t), the components of which are random variables, is called a solution of the initial value problem (1), (2) if x(t) satisfies (1) and the initial condition (2) in the sense of a strong solution (as defined in [20]) of the Cauchy problem.

Our task is to obtain a reliable and simple method for investigating the stability of solutions of this class of systems. To solve this task, we present below the method of moment equations. On a series of examples, we demonstrate that the method is effective and useful.

Definition 2 Let \(x\in\mathbb{R}^m\) be a continuous random variable depending on a random Markov process \(\xi(t)\) with q possible states \(\theta_k\), \(k=1,2,\dots,q\). The matrices

\[
E(t) = \sum_{k=1}^{q} E^{(k)}(t), \qquad D(t) = \sum_{k=1}^{q} D^{(k)}(t),
\]

where

\[
E^{(k)}(t) = \int_{E_m} x\,f_k(t,x)\,dx, \qquad
D^{(k)}(t) = \int_{E_m} x x^{\top} f_k(t,x)\,dx, \quad k=1,2,\dots,q,
\]

are called the moments of the first and second order of the random variable x, respectively. The values \(E^{(k)}(t)\) and \(D^{(k)}(t)\), \(k=1,2,\dots,q\), are called the particular moments of the first and second order, respectively.

In Definition 2, \(E_m\) denotes the m-dimensional Euclidean space, and the functions \(f_k(t,x)\), \(k=1,2,\dots,q\), are the particular density functions of the random variable x.

Remark 1 The moments of the random variable x in the scalar case, \(x\in\mathbb{R}\), are defined for any \(s=1,2,\dots\) and are called moments of the s-th order. The particular moments are defined by the formula

\[
E_s^{(k)}(t) = \int_{-\infty}^{\infty} x^s f_k(t,x)\,dx, \quad s=1,2,\dots, \; k=1,2,\dots,q.
\]

Several different stability notions are possible. Here we recall the definition of mean square stability, which is based on that given in [3].

Definition 3 The trivial solution of the homogeneous system associated with system (1) is said to be mean square stable on the interval \([0,\infty)\) if for each \(\varepsilon>0\) there exists \(\delta>0\) such that any solution x(t) of the associated system, corresponding to the initial data \(x_0\), exists for all \(t\ge 0\) and the mathematical expectation

\[
E\bigl(\|x(t)\|^2\bigr) < \varepsilon \quad \text{whenever } t\ge 0 \text{ and } \|x_0\| < \delta.
\]

3 Moment equations for linear differential equations

Before investigating the initial value problem (1), (2) formulated in the previous section, we study a simpler problem. First we derive the moment equations in the scalar case of system (1), that is, when instead of the system there is a single equation. In the first part of this section, we consider a linear homogeneous differential equation whose coefficient depends on a random Markov process with two states only. In the second part, the moment equations are derived for nonhomogeneous linear differential equations with q possible states of the random process on which the coefficients depend.

3.1 Homogeneous linear differential equations

On the probability space (Ω,F,P), we consider initial value problem (1), (2) where instead of system (1) there is a stochastic linear homogeneous differential equation of the first order of the form

\[
\frac{dx(t)}{dt} = a\bigl(\xi(t)\bigr)x(t), \tag{5}
\]

where a is a scalar function of a random variable. We suppose that the function a depends on the random Markov process ξ(t), which has only two states θ 1 , θ 2 with probabilities

\[
p_k(t) = P\bigl\{\xi(t) = \theta_k\bigr\}, \quad k=1,2,
\]

that satisfy the system of linear differential equations

\[
\frac{dp_1(t)}{dt} = -\lambda p_1(t) + \nu p_2(t), \qquad
\frac{dp_2(t)}{dt} = \lambda p_1(t) - \nu p_2(t), \qquad \lambda\ge 0,\ \nu\ge 0. \tag{6}
\]

In the following, we use the notation

\[
a_1 = a(\theta_1), \qquad a_2 = a(\theta_2).
\]

Theorem 1 The moment equations of any order \(s=0,1,2,\dots\) for equation (5) are of the form

\[
\begin{aligned}
\frac{dE_s^{(1)}(t)}{dt} &= s a_1 E_s^{(1)}(t) - \lambda E_s^{(1)}(t) + \nu E_s^{(2)}(t), \\
\frac{dE_s^{(2)}(t)}{dt} &= s a_2 E_s^{(2)}(t) + \lambda E_s^{(1)}(t) - \nu E_s^{(2)}(t).
\end{aligned} \tag{7}
\]

Proof We divide the time line \([0,\infty)\) into intervals of length h. Next we replace the considered differential equation (5) by an approximating difference equation. If we denote \(t_n = nh\), \(h>0\), \(n=0,1,2,\dots\), and approximate \(dx(t_n)/dt\) by \((x(t_{n+1})-x(t_n))/h\), then the approximating equation to (5) can be written in the form

\[
x(t_{n+1}) = \bigl(1 + h a(\xi(t_n))\bigr)x(t_n), \quad n=0,1,2,\dots,
\]

and the approximating system to system (6) is of the form

\[
\begin{aligned}
p_1(t_{n+1}) &= (1 - h\lambda)\,p_1(t_n) + h\nu\,p_2(t_n), \\
p_2(t_{n+1}) &= h\lambda\,p_1(t_n) + (1 - h\nu)\,p_2(t_n).
\end{aligned} \tag{8}
\]

In accordance with the formula for total probability, we obtain relationships for the particular density functions f k ( t n ,x), k=1,2, which satisfy the following system of functional equations:

\[
f_1(t_{n+1},x) = \frac{1-h\lambda}{1+h a_1}\, f_1\!\left(t_n, \frac{x}{1+h a_1}\right) + \frac{h\nu}{1+h a_2}\, f_2\!\left(t_n, \frac{x}{1+h a_2}\right), \tag{9}
\]
\[
f_2(t_{n+1},x) = \frac{h\lambda}{1+h a_1}\, f_1\!\left(t_n, \frac{x}{1+h a_1}\right) + \frac{1-h\nu}{1+h a_2}\, f_2\!\left(t_n, \frac{x}{1+h a_2}\right). \tag{10}
\]

Rename \(t_n\) to t and suppose that the particular density functions can be expanded in powers of the parameter h by the Taylor formula. Let the functions in (9) be represented as

\begin{align*}
f_1(t_{n+1},x) &= f_1(t+h,x) = f_1(t,x) + \frac{\partial f_1(t,x)}{\partial t}\,h + O(h^2), \\
\frac{1-h\lambda}{1+h a_1}\, f_1\!\left(t_n, \frac{x}{1+h a_1}\right)
&= \bigl(1 - h(\lambda + a_1) + O(h^2)\bigr)\, f_1\bigl(t,\, x - h x a_1 + O(h^2)\bigr) \\
&= \bigl(1 - h(\lambda + a_1) + O(h^2)\bigr)\left(f_1(t,x) - \frac{\partial f_1(t,x)}{\partial x}\,h x a_1 + O(h^2)\right) \\
&= f_1(t,x) - h\lambda f_1(t,x) - h a_1 f_1(t,x) - h a_1 x \frac{\partial f_1(t,x)}{\partial x} + O(h^2), \\
\frac{h\nu}{1+h a_2}\, f_2\!\left(t_n, \frac{x}{1+h a_2}\right) &= h\nu\, f_2(t,x) + O(h^2),
\end{align*}

where O is the Landau order symbol. Now, using the obtained expressions, comparing the left-hand side with the right-hand side of (9), and letting \(h\to 0\), we get

\[
\frac{\partial f_1(t,x)}{\partial t} = -a_1 \frac{\partial}{\partial x}\bigl(x f_1(t,x)\bigr) - \lambda f_1(t,x) + \nu f_2(t,x). \tag{11}
\]

Similarly, decomposition of the particular density functions in (10) gives the second equation

\[
\frac{\partial f_2(t,x)}{\partial t} = -a_2 \frac{\partial}{\partial x}\bigl(x f_2(t,x)\bigr) + \lambda f_1(t,x) - \nu f_2(t,x). \tag{12}
\]

Finally, multiplying equations (11), (12) by \(x^s\), \(s=0,1,2,\dots\), and integrating them by parts from −∞ to ∞, in accordance with Definition 2, we obtain the system (7) of linear differential equations with constant coefficients. □

Let us note that the moment equations (7) can be derived in a different way. If system (8) of difference equations for the probabilities is known, then the particular moments of the s-th order satisfy the following relations:

\[
\begin{aligned}
E_s^{(1)}(t_{n+1}) &= (1-h\lambda)(1+h a_1)^s E_s^{(1)}(t_n) + h\nu\,(1+h a_2)^s E_s^{(2)}(t_n), \\
E_s^{(2)}(t_{n+1}) &= h\lambda\,(1+h a_1)^s E_s^{(1)}(t_n) + (1-h\nu)(1+h a_2)^s E_s^{(2)}(t_n).
\end{aligned} \tag{13}
\]

Particular moments contained in the first equation of (13) can be expressed in powers of parameter h by the Taylor formula:

\begin{align*}
E_s^{(1)}(t_{n+1}) &= E_s^{(1)}(t+h) = E_s^{(1)}(t) + \frac{dE_s^{(1)}(t)}{dt}\,h + O(h^2), \\
(1-h\lambda)(1+h a_1)^s E_s^{(1)}(t_n) &= \bigl(1 + h s a_1 - h\lambda + O(h^2)\bigr) E_s^{(1)}(t), \\
h\nu\,(1+h a_2)^s E_s^{(2)}(t_n) &= h\nu\, E_s^{(2)}(t) + O(h^2).
\end{align*}

If we substitute the obtained expressions into the first equation of (13) and let \(h\to 0\), we get the first equation of system (7). In the same way, using the second equation of (13), the second equation of system (7) can be constructed.
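The agreement between the moment equations (7) and the underlying stochastic equation (5) can be checked numerically. The following sketch (all parameter values are illustrative, not taken from the paper) simulates the Euler scheme used in the proof for many paths of the two-state Markov process and compares the Monte Carlo estimate of the second moment \(E x^2(T)\) with the sum \(E_2^{(1)}(T) + E_2^{(2)}(T)\) obtained by integrating (7) for s = 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: intensities lam, nu and coefficients a1, a2.
a1, a2 = -1.0, 0.3
lam, nu = 2.0, 3.0
h, T = 1e-3, 2.0
n = int(T / h)

# Monte Carlo: Euler scheme x(t_{n+1}) = (1 + h a(xi(t_n))) x(t_n), with
# xi switching 1 -> 2 with probability h*lam and 2 -> 1 with h*nu, as in (8).
n_paths = 20000
x = np.ones(n_paths)
st = np.zeros(n_paths, dtype=int)     # start in state theta_1
for _ in range(n):
    x *= 1.0 + h * np.where(st == 0, a1, a2)
    u = rng.random(n_paths)
    st = np.where(np.where(st == 0, u < h * lam, u < h * nu), 1 - st, st)

# Moment equations (7) for s = 2, integrated with the same Euler step;
# E1 + E2 = E_2^{(1)} + E_2^{(2)} is the second moment of x.
s = 2
E1, E2 = 1.0, 0.0   # x(0) = 1 and state theta_1 almost surely
for _ in range(n):
    dE1 = s * a1 * E1 - lam * E1 + nu * E2
    dE2 = s * a2 * E2 + lam * E1 - nu * E2
    E1, E2 = E1 + h * dE1, E2 + h * dE2

print(np.mean(x**2), E1 + E2)   # the two estimates should agree closely
```

Both computations use the same step h, so the remaining discrepancy is dominated by the Monte Carlo sampling error.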

Example 1 Let us establish conditions for s-mean stability of linear differential equation (5). The characteristic equation for the system of moment equations (7) is written as follows:

\[
\begin{vmatrix} z - s a_1 + \lambda & -\nu \\ -\lambda & z - s a_2 + \nu \end{vmatrix}
= z^2 + z\bigl(\lambda + \nu - s(a_1 + a_2)\bigr) + s^2 a_1 a_2 - s\nu a_1 - s\lambda a_2 = 0.
\]

Therefore, the conditions of asymptotic stability of solutions of the moment equations (7), in accordance with the Hurwitz criterion, are of the following form (we assume \(s\ne 0\); the case \(s=0\) is considered below):

\[
a_1 + a_2 < \frac{\lambda+\nu}{s}, \qquad a_1 a_2 > \frac{\nu a_1 + \lambda a_2}{s}.
\]

Let us use the notation

\[
\gamma \equiv \frac{\lambda+\nu}{s}, \qquad
a \equiv \frac{\nu a_1 + \lambda a_2}{\lambda + \nu} = a_1 p_1^0 + a_2 p_2^0,
\]

where \(p_k^0 = \lim_{t\to+\infty} p_k(t)\), \(k=1,2\), and a is the mean value of the coefficients \(a_1\), \(a_2\). This allows us to write the above conditions in a simpler form:

\[
a_1 + a_2 < \gamma, \qquad a_1 a_2 > a\gamma.
\]

The domains of stability for moments of various orders are determined by their boundaries, as shown in Figure 1. Every domain of stability includes the third quadrant, where both coefficients are negative, i.e., \(a_1<0\), \(a_2<0\).

Figure 1. Domains of stability for equation (5).
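The equivalence between the Hurwitz conditions above and the eigenvalue criterion can be verified numerically. In the sketch below (the parameter values are illustrative), for several pairs \((a_1, a_2)\) we compare the sign condition on the eigenvalues of the coefficient matrix of system (7) with the two conditions \(a_1 + a_2 < \gamma\) and \(a_1 a_2 > a\gamma\).

```python
import numpy as np

# Illustrative parameter values.
lam, nu, s = 2.0, 3.0, 2
gamma = (lam + nu) / s

results = []
for a1, a2 in [(-1.0, -0.5), (0.4, -3.0), (1.0, 1.0)]:
    # Coefficient matrix of the moment system (7)
    M = np.array([[s * a1 - lam, nu],
                  [lam, s * a2 - nu]])
    eig_stable = np.linalg.eigvals(M).real.max() < 0
    a_mean = (nu * a1 + lam * a2) / (lam + nu)
    hurwitz = (a1 + a2 < gamma) and (a1 * a2 > a_mean * gamma)
    results.append((bool(eig_stable), hurwitz))

print(results)   # eigenvalue test and Hurwitz conditions agree in each case
```

The second pair shows that one coefficient may even be positive while the moments remain stable, provided the switching favors the stable state; the third pair lies outside the stability domain.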

Using moment equations, it is also possible to determine the domain of stability for the deterministic equation

\[
\frac{dx(t)}{dt} = a\,x(t),
\]

where a does not depend on the random process ξ(t). This case corresponds to the moment equations of the zeroth order, i.e., s=0.

3.2 Nonhomogeneous linear differential equation

We have derived the system of moment equations for a linear homogeneous equation with a random coefficient under the assumption that the random process can only be in two states. This simple case allowed us to understand the process of deriving the system of moment equations. Now we establish a system of moment equations in the same way for the linear nonhomogeneous differential equation

\[
\frac{dx(t)}{dt} = a\bigl(t,\xi(t)\bigr)x(t) + b\bigl(t,\xi(t)\bigr), \tag{14}
\]

where ξ(t) is a Markov process with q possible states \(\theta_1,\theta_2,\dots,\theta_q\) and probabilities \(p_k(t) = P\{\xi(t)=\theta_k\}\), \(k=1,2,\dots,q\). We suppose that the probabilities satisfy the system of linear differential equations

\[
\frac{dp_k(t)}{dt} = \sum_{s=1}^{q} \pi_{ks}(t)\,p_s(t), \tag{15}
\]

where the transition matrix ( π k s ( t ) ) k , s = 1 q satisfies the following relationships:

\[
\sum_{k=1}^{q} \pi_{ks}(t) \equiv 0, \qquad
\pi_{ks}(t) \begin{cases} \ge 0, & k \ne s, \\ \le 0, & k = s. \end{cases}
\]

Since the coefficients of the studied system (14) depend on t, we denote

\[
a_k(t) = a(t,\theta_k), \qquad b_k(t) = b(t,\theta_k), \quad k=1,2,\dots,q.
\]

Theorem 2 The moment equations of any order \(s=1,2,\dots\) for equation (14) are of the form

\[
\frac{dE_s^{(k)}(t)}{dt} = s a_k(t) E_s^{(k)}(t) + s b_k(t) E_{s-1}^{(k)}(t) + \sum_{r=1}^{q} \pi_{kr}(t)\,E_s^{(r)}(t), \quad k=1,2,\dots,q. \tag{16}
\]

Proof By dividing the time line into intervals of length h, we obtain the approximated system

\[
x(t_{n+1}) = \bigl(1 + h a(t_n,\xi(t_n))\bigr)x(t_n) + h b\bigl(t_n,\xi(t_n)\bigr)
\]

to the considered system (14) and

\[
p_k(t_{n+1}) = p_k(t_n) + h \sum_{s=1}^{q} \pi_{ks}(t_n)\,p_s(t_n), \quad k=1,2,\dots,q
\]

to system (15).

The particular probability density functions \(f_k(t_n,x)\) satisfy, in this case, the system of difference equations

\[
f_k(t_{n+1},x) = \frac{1}{1+h a_k(t_n)}\, f_k\!\left(t_n, \frac{x - h b_k(t_n)}{1+h a_k(t_n)}\right)
+ h \sum_{s=1}^{q} \frac{\pi_{ks}(t_n)}{1+h a_s(t_n)}\, f_s\!\left(t_n, \frac{x - h b_s(t_n)}{1+h a_s(t_n)}\right). \tag{17}
\]

As in the proof of Theorem 1, we assume that the particular density functions can be expanded in powers of the parameter h by the Taylor formula, and in the same way as in the proof of Theorem 1, we get

\[
\frac{\partial f_k(t,x)}{\partial t} = -\frac{\partial}{\partial x}\Bigl(\bigl(a_k(t)x + b_k(t)\bigr) f_k(t,x)\Bigr) + \sum_{s=1}^{q} \pi_{ks}(t)\,f_s(t,x), \quad k=1,2,\dots,q.
\]

The system of moment equations (16) can be derived from the last system for particular probability density functions by using the same modifications as in the proof of Theorem 1. □
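As a numerical illustration of Theorem 2 (with hypothetical constant coefficients and q = 2), the sketch below compares a Monte Carlo estimate of \(E x(T)\) for equation (14) with the value obtained by integrating the moment equations (16) for s = 1 together with the probability equations (15); note that \(E_0^{(k)}(t) = p_k(t)\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical constant coefficients for q = 2 states.
a = np.array([-1.0, -0.4])   # a_1, a_2
b = np.array([0.5, -0.2])    # b_1, b_2
# Transition-intensity matrix (pi_ks): columns sum to zero.
Pi = np.array([[-2.0, 3.0],
               [2.0, -3.0]])
h, T = 1e-3, 1.5
n = int(T / h)

# Monte Carlo: Euler scheme for dx/dt = a(xi) x + b(xi).
n_paths = 20000
x = np.ones(n_paths)
st = np.zeros(n_paths, dtype=int)   # start in state theta_1 (index 0)
for _ in range(n):
    x += h * (a[st] * x + b[st])
    u = rng.random(n_paths)
    rate = np.where(st == 0, -Pi[0, 0], -Pi[1, 1])  # intensity of leaving
    st = np.where(u < h * rate, 1 - st, st)

# Moment equations (16) for s = 1 (with E_0^{(k)} = p_k) and system (15).
p = np.array([1.0, 0.0])
E = np.array([1.0, 0.0])   # particular first moments E_1^{(k)}(0)
for _ in range(n):
    dp = Pi @ p
    dE = a * E + b * p + Pi @ E
    p, E = p + h * dp, E + h * dE

print(x.mean(), E.sum())   # both approximate E x(T)
```

The sum of the particular moments reproduces the Monte Carlo mean up to sampling error, while costing only the integration of q coupled scalar ODEs.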

4 Moment equations for the linear differential system

Now we come back to the initial problem (1), (2) that we have formulated in Section 2. We also suppose that the matrix A and vector B depend on a random Markov process ξ(t) with q possible states, the probabilities of which (3) satisfy the system of linear differential equations (4).

Moreover, we use the notation

\[
A_k(t) = A(t,\theta_k), \qquad B_k(t) = B(t,\theta_k), \quad k=1,2,\dots,q.
\]

Theorem 3 The moment equations of the first and second order, respectively, for system (1) are of the form

\[
\frac{dE^{(k)}(t)}{dt} = A_k(t)E^{(k)}(t) + B_k(t)p_k(t) + \sum_{j=1}^{q} \pi_{kj}(t)E^{(j)}(t), \tag{18}
\]
\[
\frac{dD^{(k)}(t)}{dt} = A_k(t)D^{(k)}(t) + D^{(k)}(t)A_k^{\top}(t) + B_k(t)\bigl(E^{(k)}(t)\bigr)^{\top} + E^{(k)}(t)B_k^{\top}(t) + \sum_{j=1}^{q} \pi_{kj}(t)D^{(j)}(t), \quad k=1,2,\dots,q. \tag{19}
\]

Proof The philosophy of the proof is the same as in the proof of Theorem 1; only the calculations are more complicated because we now work with the matrix case. In a similar way, by dividing the time line into intervals of length h, for the particular density functions \(f_k(t,x)\), \(k=1,2,\dots,q\), we get the system of equations

\[
f_k(t_{n+1},x) = f_k(t_n,Y_k)\,\upsilon_k + h\sum_{j=1}^{q} \pi_{kj}(t_n)\,f_j(t_n,Y_j)\,\upsilon_j, \quad k=1,2,\dots,q, \tag{20}
\]

where

\[
Y_j = \bigl(I + h A_j(t_n)\bigr)^{-1}\bigl(x - h B_j(t_n)\bigr), \qquad
\upsilon_j = \det\bigl(I + h A_j(t_n)\bigr)^{-1}, \quad j=1,2,\dots,q,
\]

and I is the identity matrix.

Assume that the particular density functions can be expanded in powers of the parameter h by the Taylor formula. If we put \(t_n = t\), then the expansions of the functions on the left-hand and right-hand sides of (20) are

\begin{align*}
f_k(t_{n+1},x) &= f_k(t+h,x) = f_k(t,x) + \frac{\partial f_k(t,x)}{\partial t}\,h + O(h^2), \\
\upsilon_j &= \det\bigl(I - h A_j(t_n) + O(h^2)\bigr) = 1 - h\operatorname{Tr}\bigl(A_j(t)\bigr) + O(h^2), \\
Y_j &= \bigl(I - h A_j(t) + O(h^2)\bigr)\bigl(x - h B_j(t)\bigr) = x - h\bigl(A_j(t)x + B_j(t)\bigr) + O(h^2), \\
f_k(t,Y_k) &= f_k\bigl(t,\, x - h\bigl(A_k(t)x + B_k(t)\bigr) + O(h^2)\bigr) \\
&= f_k(t,x) - h\operatorname{grad} f_k(t,x)\cdot\bigl(A_k(t)x + B_k(t)\bigr) + O(h^2), \quad k,j=1,2,\dots,q,
\end{align*}

where

\[
\operatorname{grad} f(t,x) = \left(\frac{\partial f(t,x)}{\partial x_1}, \frac{\partial f(t,x)}{\partial x_2}, \dots, \frac{\partial f(t,x)}{\partial x_m}\right)
\]

and \(\operatorname{Tr}(A)\) is the trace of the matrix A.

Using the obtained expressions, comparing the left-hand side with the right-hand side of equation (20), and letting \(h\to 0\), we get the system of differential equations for the particular density functions

\[
\frac{\partial f_k(t,x)}{\partial t} = -f_k(t,x)\operatorname{Tr}\bigl(A_k(t)\bigr) - \operatorname{grad} f_k(t,x)\cdot\bigl(A_k(t)x + B_k(t)\bigr) + \sum_{j=1}^{q} \pi_{kj}(t)\,f_j(t,x), \quad k=1,2,\dots,q. \tag{21}
\]

Finally, multiplying equation (21) by x and integrating by parts over the Euclidean space \(E_m\), in accordance with Definition 2, we obtain the system (18) of linear equations for the particular moments of the first order. The particular moments of the second order satisfy the matrix system of differential equations (19), which we get in the same way; the difference is that (21) is multiplied by the matrix \(x x^{\top}\) and then integrated over the Euclidean space \(E_m\). □

Remark 2 The moment equations (18), (19) are deterministic and can be solved by standard methods, see, e.g., [21].
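For illustration, the first-moment equations (18), coupled with the probability system (4), can be integrated with any standard ODE solver. The sketch below (all matrices and initial data are hypothetical) uses SciPy's `solve_ivp` for q = 2 states and dimension m = 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical constant data for q = 2, m = 2.
A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])
A2 = np.array([[-0.5, 0.0], [0.0, -1.0]])
B1 = np.array([1.0, 0.0])
B2 = np.array([0.0, 1.0])
Pi = np.array([[-2.0, 3.0], [2.0, -3.0]])   # (pi_kj), columns sum to zero

def rhs(t, y):
    # y = (E^{(1)}, E^{(2)}, p_1, p_2); equations (18) coupled with (4)
    E1, E2, p = y[:2], y[2:4], y[4:]
    dE1 = A1 @ E1 + B1 * p[0] + Pi[0, 0] * E1 + Pi[0, 1] * E2
    dE2 = A2 @ E2 + B2 * p[1] + Pi[1, 0] * E1 + Pi[1, 1] * E2
    dp = Pi @ p
    return np.concatenate([dE1, dE2, dp])

# x(0) = (1, 1) and state theta_1 almost surely
y0 = np.concatenate([np.ones(2), np.zeros(2), [1.0, 0.0]])
sol = solve_ivp(rhs, (0.0, 5.0), y0, rtol=1e-8, atol=1e-10)
E = sol.y[:2, -1] + sol.y[2:4, -1]    # E x(5) = E^{(1)} + E^{(2)}
print(E)
```

Since the moment system is a plain linear ODE, any stiff or non-stiff solver (or the matrix exponential, for constant coefficients) applies directly.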

The following examples illustrate the use of moment equations for the investigation of stability.

Example 2 Let us investigate the mean square stability of solutions of the homogeneous linear differential equation

\[
\frac{dx}{dt} = A\bigl(\xi(t)\bigr)x, \tag{22}
\]

where the Markov process ξ(t) can be in two states θ 1 , θ 2 with probabilities

\[
p_k(t) = P\bigl\{\xi(t) = \theta_k\bigr\}, \quad k=1,2,
\]

which satisfy the system of differential equations

\[
\frac{dp_1}{dt} = -\lambda p_1 + \lambda p_2, \qquad
\frac{dp_2}{dt} = \lambda p_1 - \lambda p_2,
\]

where \(\lambda \ge 0\). We establish the system of moment equations for the considered system. Let the values of the matrix A(ξ(t)) corresponding to the states of the Markov process be

\[
A_1 = A(\theta_1) = \begin{pmatrix} 0 & \omega \\ -\omega & 0 \end{pmatrix}, \qquad
A_2 = A(\theta_2) = \begin{pmatrix} 0 & -\omega \\ \omega & 0 \end{pmatrix}, \qquad \omega \ne 0.
\]

Then the system of moment equations of the second order (19) for equation (22) is of the form

\[
\frac{dD^{(1)}}{dt} = A_1 D^{(1)} + D^{(1)} A_1^{\top} - \lambda D^{(1)} + \lambda D^{(2)}, \qquad
\frac{dD^{(2)}}{dt} = A_2 D^{(2)} + D^{(2)} A_2^{\top} + \lambda D^{(1)} - \lambda D^{(2)}.
\]

Denote

\[
D^{(1)} = \begin{pmatrix} d_1 & d_2 \\ d_2 & d_3 \end{pmatrix}, \qquad
D^{(2)} = \begin{pmatrix} l_1 & l_2 \\ l_2 & l_3 \end{pmatrix}
\]

and rewrite the last system of moment equations into the scalar form

\begin{align*}
\frac{dd_1}{dt} &= 2\omega d_2 - \lambda d_1 + \lambda l_1, &
\frac{dl_1}{dt} &= -2\omega l_2 + \lambda d_1 - \lambda l_1, \\
\frac{dd_2}{dt} &= \omega d_3 - \omega d_1 - \lambda d_2 + \lambda l_2, &
\frac{dl_2}{dt} &= -\omega l_3 + \omega l_1 + \lambda d_2 - \lambda l_2, \\
\frac{dd_3}{dt} &= -2\omega d_2 - \lambda d_3 + \lambda l_3, &
\frac{dl_3}{dt} &= 2\omega l_2 + \lambda d_3 - \lambda l_3.
\end{align*}

The obtained system of moment equations is a system of ordinary linear differential equations; its stability is determined by the eigenvalues of its coefficient matrix. The characteristic equation

\[
\begin{vmatrix}
z+\lambda & -2\omega & 0 & -\lambda & 0 & 0 \\
\omega & z+\lambda & -\omega & 0 & -\lambda & 0 \\
0 & 2\omega & z+\lambda & 0 & 0 & -\lambda \\
-\lambda & 0 & 0 & z+\lambda & 2\omega & 0 \\
0 & -\lambda & 0 & -\omega & z+\lambda & \omega \\
0 & 0 & -\lambda & 0 & -2\omega & z+\lambda
\end{vmatrix} = 0
\]

can be transformed into the following equation:

\[
\begin{vmatrix}
(z+\lambda)^2 + 2\omega^2 - \lambda^2 & 0 & 2\omega^2 \\
0 & (z+\lambda)^2 + 4\omega^2 - \lambda^2 & 0 \\
2\omega^2 & 0 & (z+\lambda)^2 + 2\omega^2 - \lambda^2
\end{vmatrix} = 0,
\]

the roots of which are

\[
z_{1,2} = -\lambda + \sqrt{\lambda^2 - 4\omega^2}, \qquad
z_{3,4} = -\lambda - \sqrt{\lambda^2 - 4\omega^2}, \qquad
z_5 = -2\lambda, \qquad z_6 = 0.
\]

It is easy to see that the real parts of all eigenvalues are negative or equal to zero. Therefore, the solutions of equation (22) are stable in the mean square.
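The stated spectrum can be confirmed numerically. In the sketch below (the values of λ and ω are illustrative), we build the coefficient matrix of the scalar moment system above, for the ordering \((d_1, d_2, d_3, l_1, l_2, l_3)\), and check that its eigenvalues match the roots \(z_1,\dots,z_6\).

```python
import numpy as np

lam, om = 1.5, 1.0   # illustrative values of lambda and omega

# Coefficient matrix of the moment system for (d1, d2, d3, l1, l2, l3)
M = np.array([
    [-lam,  2*om,  0.0,   lam,   0.0,   0.0],
    [-om,  -lam,   om,    0.0,   lam,   0.0],
    [0.0,  -2*om, -lam,   0.0,   0.0,   lam],
    [lam,   0.0,   0.0,  -lam,  -2*om,  0.0],
    [0.0,   lam,   0.0,   om,   -lam,  -om],
    [0.0,   0.0,   lam,   0.0,   2*om, -lam],
])

eigs = np.linalg.eigvals(M)
root = np.sqrt(complex(lam**2 - 4 * om**2))
expected = [-lam + root, -lam - root, -2 * lam, 0.0]
# every stated root occurs in the computed spectrum ...
print([bool(np.min(np.abs(eigs - z)) < 1e-8) for z in expected])
# ... and no eigenvalue has a positive real part
print(eigs.real.max() <= 1e-8)
```

With these values \(\lambda^2 - 4\omega^2 < 0\), so the double roots \(z_{1,2}\) and \(z_{3,4}\) form a complex conjugate pair with real part \(-\lambda\), consistent with mean square stability.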