1 Introduction

For decades, stochastic processes have been a popular model for fluctuations over time; the main advantage of a stochastic model is the inclusion of a noise term. The Ornstein–Uhlenbeck process is one of the best-known stochastic processes and is used in many research areas such as mathematical finance [1], physics [2], and biology [3]. It was introduced by G. E. Uhlenbeck and L. S. Ornstein (1930). This process is defined as the solution of the stochastic differential equation

$$ dX(t)=\theta \bigl(\mu -X(t)\bigr)\,dt+\sigma \,dW(t), $$
(1)

where \(\theta \neq 0\), μ, and \(\sigma >0\) are constant parameters, and \(W(t)\) is the Wiener process. The parameter μ is the long-term mean, θ is the speed of mean reversion (the friction coefficient in the physical interpretation), and σ is the diffusion coefficient. Its analytic solution, together with its mean, variance, and covariance functions over time t, has been derived. An important feature of this process (with positive θ) is mean reversion: the process tends to its long-term mean μ as t tends to infinity. At any moment in time, if the value of the process is greater than the long-term mean, then the drift is negative, so the process is pulled down toward the long-term mean; similarly, if the value is smaller than the long-term mean, then the drift is positive, so the process is pushed up toward it.
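
To see mean reversion concretely, the following minimal sketch (not part of the original derivation; all parameter values are illustrative) simulates (1) with an Euler–Maruyama discretization:

```python
import numpy as np

# Euler-Maruyama discretization of dX = theta*(mu - X) dt + sigma dW.
# All parameter values are illustrative.
rng = np.random.default_rng(0)
theta, mu, sigma = 2.0, 1.0, 0.3   # reversion speed, long-term mean, noise scale
T, n = 10.0, 10_000                # time horizon and number of steps
dt = T / n

x = np.empty(n + 1)
x[0] = 5.0                         # start far above the long-term mean
for k in range(n):
    dW = rng.normal(scale=np.sqrt(dt))
    x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dW

print(x[0], x[-1])  # the path is pulled from 5.0 down toward mu = 1.0
```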

The multivariate Ornstein–Uhlenbeck process is a generalization to multiple dimensions of the Ornstein–Uhlenbeck process. It is defined as the solution of the multivariate stochastic differential equation

$$ \begin{aligned}[b] d\boldsymbol{X}(t)=\boldsymbol{\theta } \bigl(\boldsymbol{\mu }-\boldsymbol{X}(t) \bigr)\,dt+\boldsymbol{\sigma } \,d \boldsymbol{W}(t), \end{aligned} $$
(2)

where θ is an \(n\times n\) invertible real matrix, μ is an n-dimensional real vector, σ is an \(n\times m\) real matrix, and \(\boldsymbol{W}(t)\) is an m-dimensional standard Wiener process. This generalization arises when we deal with more than one quantity simultaneously. The univariate Ornstein–Uhlenbeck process forces us to model each component of \(\boldsymbol{X}(t)\) independently, which is not a realistic assumption and certainly fails when the quantities are related in some way. Consequently, many researchers have applied this process, subject to various limitations, in their own fields [4–7].
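
A direct multivariate analogue of the univariate sketch above, again purely illustrative; the matrix θ below is made up and chosen with positive eigenvalues so that paths revert to μ:

```python
import numpy as np

# Euler-Maruyama discretization of dX = theta (mu - X) dt + sigma dW
# for n = m = 2; all parameter values are illustrative.
rng = np.random.default_rng(1)
theta = np.array([[3.0, 1.0],
                  [0.0, 2.0]])     # eigenvalues 3 and 2, both positive
mu = np.array([1.0, -1.0])
sigma = np.array([[0.4, 0.0],
                  [0.1, 0.3]])
T, n_steps = 10.0, 10_000
dt = T / n_steps

x = np.array([5.0, 5.0])           # start away from mu
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=2)
    x = x + theta @ (mu - x) * dt + sigma @ dW

print(x)  # close to mu = (1, -1), up to the stationary noise level
```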

In most previous research, this process was treated as the solution of stochastic differential equation (2): the equation was solved analytically, and the distribution, mean, covariance, and cross-covariance function matrices were then computed from the solution. This work differs in that we derive the distribution and its parameters without solving (2) analytically: we instead treat the probability density function as a solution of the Fokker–Planck equation.

2 Preliminaries

In this section, we introduce some well-known definitions and results, which can be found in [8,9,10,11].

Proposition 1

Let \(\boldsymbol{X}(t)\) be a multivariate Itô process defined by the stochastic differential equation

$$ d\boldsymbol{X}(t)=\boldsymbol{\mu }\bigl(\boldsymbol{X}(t),t\bigr)\,dt+ \boldsymbol{\sigma }\bigl(\boldsymbol{X}(t),t\bigr)\,d\boldsymbol{W}(t), $$
(3)

where \(\boldsymbol{\mu }(\boldsymbol{X}(t),t)\) is an n-dimensional vector, \(\boldsymbol{\sigma }(\boldsymbol{X}(t),t)\) is an \(n\times m\) matrix, and \(\boldsymbol{W}(t)\) is an m-dimensional standard Wiener process. The probability density function \(p(\boldsymbol{x},t)\) of \(\boldsymbol{X}(t)\) satisfies the Fokker–Planck equation

$$ \frac{\partial }{\partial t}p(\boldsymbol{x},t) =-\frac{\partial }{\partial \boldsymbol{x}}\cdot \bigl[\boldsymbol{\mu }(\boldsymbol{x},t)p(\boldsymbol{x},t)\bigr] +\frac{1}{2}\,\frac{\partial ^{2}}{\partial \boldsymbol{x}^{2}}:\bigl[\boldsymbol{D}(\boldsymbol{x},t)p(\boldsymbol{x},t)\bigr], $$
(4)

where \(\boldsymbol{D}(\boldsymbol{x},t)=\boldsymbol{\sigma }(\boldsymbol{x},t)\boldsymbol{\sigma }^{T}(\boldsymbol{x},t)\). This equation is also known as the Kolmogorov forward equation.

Let \(f:\mathbb{R}^{n}\rightarrow \mathbb{R}\) be continuous. The n-dimensional Fourier transform of f is the function \(\mathfrak{F}(f):\mathbb{R}^{n}\rightarrow \mathbb{C}\) defined by

$$ \mathfrak{F}(f) (\boldsymbol{u})= \int _{\mathbb{R}^{n}}f(\boldsymbol{x})e^{-i(\boldsymbol{x}\cdot \boldsymbol{u})}\,d\boldsymbol{x}, $$

where i is the imaginary unit.
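
As a sanity check on this sign convention, the transform of a one-dimensional Gaussian can be approximated by quadrature and compared with its known closed form; a small sketch:

```python
import numpy as np

# With the convention F(f)(u) = \int f(x) e^{-i x u} dx, the transform
# of exp(-x^2/2) is sqrt(2*pi) * exp(-u^2/2).  Check by a Riemann sum.
x = np.linspace(-20.0, 20.0, 40_001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

for u in (0.0, 0.5, 1.0, 2.0):
    numeric = np.sum(f * np.exp(-1j * x * u)) * dx
    exact = np.sqrt(2 * np.pi) * np.exp(-u**2 / 2)
    print(u, abs(numeric - exact))  # errors near machine precision
```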

Lemma 1

Let \(f:\mathbb{R}^{n}\rightarrow \mathbb{R}\) be a continuously differentiable function such that \(\lim_{\|\boldsymbol{x}\|\rightarrow \infty }f(\boldsymbol{x})=0\). For any \(n\times n\) real matrix A and n-dimensional real vector c, the following properties hold:

  1. \(\mathfrak{F}(\frac{\partial }{\partial \boldsymbol{x}}\cdot \boldsymbol{c}f( \boldsymbol{x}))=i\boldsymbol{u}^{T}\boldsymbol{c}\mathfrak{F}(f)(\boldsymbol{u})\),

  2. \(\mathfrak{F}(\frac{\partial }{\partial \boldsymbol{x}}\frac{\partial }{\partial \boldsymbol{x}}:\boldsymbol{A}f(\boldsymbol{x}))=-\boldsymbol{u}^{T}\boldsymbol{A}\boldsymbol{u} \mathfrak{F}(f)(\boldsymbol{u})\),

  3. \(\mathfrak{F}(\frac{\partial }{\partial \boldsymbol{x}}\cdot \boldsymbol{Ax}f( \boldsymbol{x}))= - (\frac{\partial \mathfrak{F}(f)(\boldsymbol{u})}{\partial \boldsymbol{u}} )^{T}\boldsymbol{A}^{T}\boldsymbol{u}\).

For any square matrix A, we define the exponential of A, denoted \(e^{\boldsymbol{A}}\), as \(\sum_{k=0}^{\infty }\frac{\boldsymbol{A} ^{k}}{k!}\), where \(\boldsymbol{A}^{0}\) is the identity matrix I. Note that this series always converges, so the exponential is well-defined.
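
In computations one would call a library routine rather than sum the series naively, but a truncated sum does agree with SciPy's implementation; a quick sketch with an arbitrary matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, -1.0],
              [0.3,  0.2]])      # any square matrix works

# Truncated power series sum_{k=0}^{K} A^k / k!, with A^0 = I.
S = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ A / k          # term is now A^k / k!
    S += term

print(np.max(np.abs(S - expm(A))))  # agreement to machine precision
```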

Lemma 2

For any square matrix A, the following properties hold:

  1. \(\boldsymbol{A}e^{\boldsymbol{A}}=e^{\boldsymbol{A}}\boldsymbol{A}\),

  2. \((e^{\boldsymbol{A}})^{T}=e^{\boldsymbol{A}^{T}}\),

  3. \(e^{\boldsymbol{A}}\) is invertible with \(e^{-\boldsymbol{A}}\) as its inverse,

  4. \(\frac{de^{\boldsymbol{A}t}}{dt}=\boldsymbol{A}e^{\boldsymbol{A}t}\), so if A is invertible, then \(\int e^{\boldsymbol{A}t}\,dt=\boldsymbol{A}^{-1}e^{\boldsymbol{A}t}\).
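
All four properties are easy to confirm numerically; the sketch below checks them for a random matrix, approximating the derivative in property 4 by a central difference:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))       # an arbitrary square matrix
t, h = 0.7, 1e-6

print(np.allclose(A @ expm(A), expm(A) @ A))          # property 1
print(np.allclose(expm(A).T, expm(A.T)))              # property 2
print(np.allclose(expm(A) @ expm(-A), np.eye(3)))     # property 3
dEdt = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
print(np.allclose(dEdt, A @ expm(A * t), atol=1e-5))  # property 4
```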

3 Main results

Theorem 1

The characteristic function of the n-dimensional Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) satisfying (2) with the initial value \(\boldsymbol{X}(0)=\boldsymbol{x}_{0}\) is given by

$$ \phi (\boldsymbol{u},t)=\exp \biggl[i\boldsymbol{u}^{T} \bigl(e^{-\boldsymbol{\theta } t}\boldsymbol{x} _{0}+\bigl(\boldsymbol{I}-e^{-\boldsymbol{\theta } t} \bigr)\boldsymbol{\mu } \bigr) -\frac{1}{2}\boldsymbol{u} ^{T} \biggl( \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\boldsymbol{\sigma } \boldsymbol{\sigma } ^{T}e^{\boldsymbol{\theta }^{T} (s-t)}\,ds \biggr)\boldsymbol{u} \biggr]. $$
(5)

Proof

The Fokker–Planck equation of (2) is given by

$$ \frac{\partial p}{\partial t} =- \biggl[ \frac{\partial }{\partial \boldsymbol{x}}\cdot \boldsymbol{\theta \mu }p -\frac{\partial }{\partial \boldsymbol{x}}\cdot \boldsymbol{\theta x}p \biggr] +\frac{1}{2}\, \frac{\partial ^{2}}{\partial \boldsymbol{x}^{2}}:\boldsymbol{D}p, $$
(6)

where \(\boldsymbol{D}=\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\), with initial condition

$$ p(\boldsymbol{x},0)=\delta ^{n} ( \boldsymbol{x}-\boldsymbol{x}_{0} ), $$
(7)

First, taking the n-dimensional Fourier transform of equation (6) and applying Lemma 1, we get

$$ \frac{\partial \hat{p}}{\partial t} =-i\boldsymbol{u}^{T}\boldsymbol{ \theta \mu } \hat{p} - \biggl(\frac{\partial \hat{p}}{\partial \boldsymbol{u}} \biggr)^{T} \boldsymbol{\theta }^{T}\boldsymbol{u} -\frac{1}{2} \boldsymbol{u}^{T}\boldsymbol{D}\boldsymbol{u}\hat{p}, $$
(8)

where \(\hat{p}(\boldsymbol{u},t)\) is the n-dimensional Fourier transform of \(p(\boldsymbol{x},t)\).

The initial condition (7) becomes

$$ \hat{p}_{0}:=\hat{p}(\boldsymbol{u}_{0},0)=\exp \bigl(-i \boldsymbol{u}_{0}^{T}\boldsymbol{x}_{0} \bigr). $$
(9)

Note that equation (8) is a first-order linear partial differential equation, so we apply the method of characteristics.

Consider the system

$$ \frac{d\boldsymbol{u}}{dt} =\boldsymbol{\theta }^{T}\boldsymbol{u} $$

with initial condition \(\boldsymbol{u}(0)=\boldsymbol{u}_{0}\). The solution of this system is

$$ \boldsymbol{u}=e^{\boldsymbol{\theta }^{T}t}\boldsymbol{u}_{0}. $$
(10)

Along these characteristic curves, \(\hat{p}\) satisfies the ordinary differential equation

$$ \frac{d\hat{p}}{d t} = \biggl[-i\boldsymbol{u}^{T} \boldsymbol{\theta \mu } -\frac{1}{2} \boldsymbol{u}^{T} \boldsymbol{D}\boldsymbol{u} \biggr]\hat{p}. $$
(11)

Substituting u from (10) into (11), we get

$$ \frac{d\hat{p}}{\hat{p}} = \biggl[-i\boldsymbol{u}_{0}^{T}e^{\boldsymbol{\theta }t} \boldsymbol{\theta \mu } -\frac{1}{2}\boldsymbol{u}_{0}^{T}e^{\boldsymbol{\theta }t} \boldsymbol{D}e ^{\boldsymbol{\theta }^{T}t}\boldsymbol{u}_{0} \biggr]\,dt. $$
(12)

So

$$ \begin{aligned}[b] \hat{p} & =\hat{p}_{0} \exp \biggl[-i\boldsymbol{u}_{0}^{T}\bigl(e^{\boldsymbol{\theta }t}- \boldsymbol{I}\bigr)\boldsymbol{\mu } -\frac{1}{2}\boldsymbol{u}_{0}^{T} \biggl( \int _{0}^{t}e^{ \boldsymbol{\theta }s}\boldsymbol{D}e^{\boldsymbol{\theta }^{T}s} \,ds \biggr)\boldsymbol{u}_{0} \biggr]. \end{aligned} $$
(13)

Then, substituting \(\hat{p}_{0}\) from (9) into (13) and replacing \(\boldsymbol{u}_{0}\) by \(e^{-\boldsymbol{\theta }^{T}t}\boldsymbol{u}\), which follows from inverting (10), we get

$$ \begin{aligned}[b] \hat{p} & =\exp \biggl[-i\boldsymbol{u}_{0}^{T} \boldsymbol{x}_{0} -i\boldsymbol{u}_{0}^{T}\bigl(e ^{\boldsymbol{\theta }t}-\boldsymbol{I}\bigr)\boldsymbol{\mu } -\frac{1}{2} \boldsymbol{u}_{0}^{T} \biggl( \int _{0}^{t}e^{\boldsymbol{\theta }s}\boldsymbol{D}e^{\boldsymbol{\theta }^{T}s} \,ds \biggr)\boldsymbol{u}_{0} \biggr] \\ & =\exp \biggl[-i\boldsymbol{u}^{T}e^{-\boldsymbol{\theta } t} \boldsymbol{x}_{0} -i\boldsymbol{u}^{T}e ^{-\boldsymbol{\theta } t} \bigl(e^{\boldsymbol{\theta }t}-\boldsymbol{I}\bigr)\boldsymbol{\mu } -\frac{1}{2} \boldsymbol{u}^{T}e^{-\boldsymbol{\theta } t} \biggl( \int _{0}^{t}e^{\boldsymbol{\theta }s} \boldsymbol{D}e^{\boldsymbol{\theta }^{T}s} \,ds \biggr)e^{-\boldsymbol{\theta }^{T}t}\boldsymbol{u} \biggr] \\ & =\exp \biggl[-i\boldsymbol{u}^{T}e^{-\boldsymbol{\theta } t} \boldsymbol{x}_{0}-i\boldsymbol{u}^{T}\bigl( \boldsymbol{I}-e^{-\boldsymbol{\theta } t}\bigr)\boldsymbol{\mu } -\frac{1}{2} \boldsymbol{u}^{T} \biggl( \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{ \boldsymbol{\theta }^{T} (s-t)}\,ds \biggr)\boldsymbol{u} \biggr]. \end{aligned} $$
(14)

Since the characteristic function is the Fourier transform with the opposite sign in the complex exponential, that is, \(\phi (\boldsymbol{u},t)=\hat{p}(-\boldsymbol{u},t)\), we are done. □
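
Formula (5) is straightforward to evaluate numerically: the mean term uses the matrix exponential, and the covariance integral can be approximated by a midpoint Riemann sum. A sketch with illustrative parameters (the helper name ou_char_fn is ours, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

def ou_char_fn(u, t, theta, mu, sigma, x0, n_quad=2000):
    """Evaluate the characteristic function (5) at frequency u and time t.

    The covariance integral in (5) is approximated by a midpoint rule.
    """
    n = len(x0)
    mean = expm(-theta * t) @ x0 + (np.eye(n) - expm(-theta * t)) @ mu
    D = sigma @ sigma.T
    s_grid = (np.arange(n_quad) + 0.5) * t / n_quad
    cov = sum(expm(theta * (s - t)) @ D @ expm(theta.T * (s - t))
              for s in s_grid) * (t / n_quad)
    return np.exp(1j * u @ mean - 0.5 * u @ cov @ u)

# Illustrative parameters with the same shapes as in (2).
theta = np.array([[3.0, 1.0], [0.0, 2.0]])
mu = np.array([1.0, -1.0])
sigma = np.array([[0.4, 0.0], [0.1, 0.3]])
x0 = np.array([5.0, 5.0])
print(ou_char_fn(np.array([0.5, -0.2]), 1.0, theta, mu, sigma, x0))
```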

Corollary 1

The n-dimensional Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) satisfying (2) with \(\boldsymbol{X}(0)=\boldsymbol{x}_{0}\) has an n-dimensional normal distribution with mean vector

$$ \boldsymbol{M}(t)=e^{-\boldsymbol{\theta } t}\boldsymbol{x}_{0}+ \bigl(\boldsymbol{I}-e^{-\boldsymbol{\theta } t}\bigr) \boldsymbol{\mu } $$
(15)

and covariance matrix

$$ \boldsymbol{\varSigma }(t)= \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{\boldsymbol{\theta }^{T} (s-t)}\,ds. $$
(16)

Moreover, the probability density function of \(\boldsymbol{X}(t)\) is given by

$$ p(\boldsymbol{x},t)=\frac{\exp (-\frac{1}{2} (\boldsymbol{x}-\boldsymbol{M}(t) )^{T} \boldsymbol{\varSigma }^{-1}(t) (\boldsymbol{x}-\boldsymbol{M}(t) ) )}{\sqrt{ \vert 2\pi \boldsymbol{\varSigma }(t) \vert }}. $$
(17)

Proof

Comparing (5) with the characteristic function of a multivariate normal distribution with mean vector M and covariance matrix Σ,

$$ \phi (\boldsymbol{u})=\exp \biggl[i\boldsymbol{u}^{T}\boldsymbol{M}- \frac{1}{2}\boldsymbol{u}^{T} \boldsymbol{\varSigma } \boldsymbol{u}\biggr], $$
(18)

we obtain the result. □
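
As a hedged numerical illustration of the corollary, one can simulate an ensemble of Euler–Maruyama paths of (2) and compare the sample mean and covariance at time t with (15) and (16); all parameters below are made up:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
theta = np.array([[3.0, 1.0], [0.0, 2.0]])
mu = np.array([1.0, -1.0])
sigma = np.array([[0.4, 0.0], [0.1, 0.3]])
x0 = np.array([5.0, 5.0])
t, n_steps, n_paths = 1.0, 1000, 20_000
dt = t / n_steps

# Euler-Maruyama ensemble: one row of X per path.
X = np.tile(x0, (n_paths, 1))
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, 2))
    X = X + (mu - X) @ theta.T * dt + dW @ sigma.T

# Closed-form mean (15) and covariance (16), midpoint-rule quadrature.
M = expm(-theta * t) @ x0 + (np.eye(2) - expm(-theta * t)) @ mu
D = sigma @ sigma.T
s_grid = (np.arange(2000) + 0.5) * t / 2000
Sigma = sum(expm(theta * (s - t)) @ D @ expm(theta.T * (s - t))
            for s in s_grid) * (t / 2000)

print(np.max(np.abs(X.mean(axis=0) - M)))   # small, Monte Carlo error
print(np.max(np.abs(np.cov(X.T) - Sigma)))  # small
```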

Theorem 2

The cross-covariance function matrix of an n-dimensional Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) satisfying (2) is given by

$$ \boldsymbol{\varGamma }(s,t)= \int _{0}^{\min (s,t)} e^{-\boldsymbol{\theta } (s-u)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T} (t-u)} \,du. $$
(19)

Proof

Let \(\boldsymbol{\varGamma }(s,t)=\mathbb{E}[(\boldsymbol{X}(s)-\boldsymbol{M}(s))(\boldsymbol{X}(t)- \boldsymbol{M}(t))^{T}]\). From (15) we can see that \(\boldsymbol{M}'(t)=- \boldsymbol{\theta }(\boldsymbol{M}(t)-\boldsymbol{\mu })\). Then

$$ \begin{aligned}[b] \frac{\partial ^{2}\boldsymbol{\varGamma }}{\partial s\,\partial t} & = \mathbb{E} \bigl[\bigl( \boldsymbol{X}'(s)-\boldsymbol{M}'(s) \bigr) \bigl(\boldsymbol{X}'(t)-\boldsymbol{M}'(t) \bigr)^{T}\bigr] \\ & = \mathbb{E}\bigl[\bigl(-\boldsymbol{\theta }\bigl(\boldsymbol{X}(s)- \boldsymbol{M}(s)\bigr)+\boldsymbol{\sigma } \boldsymbol{\xi }(s)\bigr) \bigl(- \boldsymbol{\theta }\bigl(\boldsymbol{X}(t)-\boldsymbol{M}(t)\bigr)+\boldsymbol{ \sigma } \boldsymbol{\xi }(t)\bigr)^{T}\bigr] \\ & = \boldsymbol{\theta } \boldsymbol{\varGamma } \boldsymbol{\theta }^{T} - \boldsymbol{\theta } \boldsymbol{K}(s,t) \boldsymbol{\sigma }^{T}-\boldsymbol{\sigma } \boldsymbol{L}(s,t)\boldsymbol{\theta }^{T}+ \boldsymbol{\sigma }\mathbb{E}\bigl[\boldsymbol{\xi }(s) \boldsymbol{\xi }^{T}(t)\bigr]\boldsymbol{\sigma }^{T}, \end{aligned} $$
(20)

where \(\boldsymbol{\xi }(t)\) is an m-dimensional white noise, \(\boldsymbol{K}(s,t)= \mathbb{E}[(\boldsymbol{X}(s)-\boldsymbol{M}(s))\boldsymbol{\xi }^{T}(t)]\), and \(\boldsymbol{L}(s,t)= \mathbb{E}[\boldsymbol{\xi }(s)(\boldsymbol{X}(t)-\boldsymbol{M}(t))^{T}]\).

Taking the derivative of \(\boldsymbol{K}(s,t)\) with respect to s, we get

$$ \begin{aligned}[b] \frac{\partial \boldsymbol{K}}{\partial s} & = \mathbb{E}\bigl[\bigl( \boldsymbol{X}'(s)-\boldsymbol{M}'(s)\bigr) \boldsymbol{\xi }^{T}(t)\bigr] \\ & = \mathbb{E}\bigl[-\boldsymbol{\theta }\bigl(\boldsymbol{X}(s)- \boldsymbol{M}(s)\bigr)\boldsymbol{\xi }^{T}(t)+ \boldsymbol{\sigma } \boldsymbol{\xi }(s)\boldsymbol{\xi }^{T}(t)\bigr] \\ & = -\boldsymbol{\theta }\boldsymbol{K}(s,t)+\boldsymbol{\sigma }\mathbb{E}\bigl[ \boldsymbol{\xi }(s) \boldsymbol{\xi }^{T}(t)\bigr]. \end{aligned} $$
(21)

Since \(\mathbb{E}[\boldsymbol{\xi }(s)\boldsymbol{\xi }^{T}(t)]=\delta (s-t)\boldsymbol{I}\) and \(\boldsymbol{K}(0,t)=\boldsymbol{0}\), we get the solution

$$ \boldsymbol{K}(s,t)= \textstyle\begin{cases} e^{-\boldsymbol{\theta }(s-t)}\boldsymbol{\sigma } & \text{for } s>t, \\ 0 & \text{for } s< t. \end{cases} $$
(22)

Similarly, we get

$$ \boldsymbol{L}(s,t)= \textstyle\begin{cases} \boldsymbol{0} & \text{for } s>t, \\ \boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T}(t-s)} & \text{for } s< t. \end{cases} $$
(23)

So, if \(t>s\), then

$$ \begin{aligned}[b] \frac{\partial ^{2}\boldsymbol{\varGamma }}{\partial s\,\partial t} & =\boldsymbol{ \theta } \boldsymbol{\varGamma } \boldsymbol{\theta }^{T} - \boldsymbol{\sigma }\boldsymbol{\sigma }^{T} e^{-\boldsymbol{\theta }^{T}(t-s)}\boldsymbol{\theta }^{T} + \boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\delta (s-t) \end{aligned} $$
(24)

with initial conditions \(\boldsymbol{\varGamma }(0,t)=\boldsymbol{\varGamma }(s,0)=\boldsymbol{0}\). This equation has the solution

$$ \boldsymbol{\varGamma }(s,t)= \int _{0}^{s} e^{-\boldsymbol{\theta } (s-u)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T} (t-u)} \,du. $$
(25)

On the other hand, if \(s>t\), then we similarly obtain that

$$ \boldsymbol{\varGamma }(s,t)= \int _{0}^{t} e^{-\boldsymbol{\theta } (s-u)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T} (t-u)} \,du. $$
(26)

This completes the proof. □

From this result it follows that if we let \(s=t\), then the cross-covariance function matrix becomes the covariance matrix as in (16).
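
This consistency is easy to check numerically: approximating (19) by a midpoint rule and evaluating at \(s=t\) reproduces the quadrature value of (16). A sketch (helper name and parameters are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def ou_cross_cov(s, t, theta, sigma, n_quad=2000):
    """Approximate Gamma(s, t) from (19) by a midpoint Riemann sum."""
    D = sigma @ sigma.T
    upper = min(s, t)
    u = (np.arange(n_quad) + 0.5) * upper / n_quad
    return sum(expm(-theta * (s - ui)) @ D @ expm(-theta.T * (t - ui))
               for ui in u) * (upper / n_quad)

theta = np.array([[3.0, 1.0], [0.0, 2.0]])
sigma = np.array([[0.4, 0.0], [0.1, 0.3]])
t = 1.0

# Sigma(t) from (16), by the same midpoint rule.
D = sigma @ sigma.T
s_grid = (np.arange(2000) + 0.5) * t / 2000
Sigma_t = sum(expm(theta * (s - t)) @ D @ expm(theta.T * (s - t))
              for s in s_grid) * (t / 2000)

print(np.max(np.abs(ou_cross_cov(t, t, theta, sigma) - Sigma_t)))  # ~0
```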

If the parameter θ of the univariate Ornstein–Uhlenbeck process is positive, then the process is mean-reverting. For the multivariate case, there is an analogous condition for mean reversion, stated in the following theorem.

Theorem 3

The n-dimensional Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) satisfying (2) is mean-reverting if all eigenvalues of θ have positive real parts.

Proof

Since \(e^{-\boldsymbol{\theta } t}\) tends to the zero matrix as t tends to infinity whenever all eigenvalues of θ have positive real parts, we conclude from (15) that, under this condition, \(\boldsymbol{M}(t)\) tends to μ.

For \(\boldsymbol{\varSigma }(t)\), the situation is different, since we cannot take t in (16) to infinity directly as we do for \(\boldsymbol{M}(t)\). We apply the identity \(\operatorname{vec}(\boldsymbol{ABC})=(\boldsymbol{C}^{T}\otimes \boldsymbol{A}) \operatorname{vec}( \boldsymbol{B})\), where ⊗ is the Kronecker product defined in [12], and \(\operatorname{vec}(\boldsymbol{A})\) is defined as the column vector made of the columns of A stacked atop one another from left to right. Then

$$ \operatorname{vec}\bigl(\boldsymbol{\varSigma }(t)\bigr)= \biggl( \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\otimes e^{ \boldsymbol{\theta }(s-t)}\,ds \biggr) \operatorname{vec}\bigl(\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\bigr). $$
(27)

Now we use another identity, \(e^{\boldsymbol{A}}\otimes e^{\boldsymbol{B}}=e^{\boldsymbol{A}\oplus \boldsymbol{B}}\), where ⊕ is the Kronecker sum \(\boldsymbol{A}\oplus \boldsymbol{B}=\boldsymbol{A}\otimes \boldsymbol{I}+\boldsymbol{I}\otimes \boldsymbol{B}\). Then we obtain

$$ \begin{aligned}[b] \operatorname{vec}\bigl(\boldsymbol{\varSigma }(t) \bigr) & = \biggl( \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\otimes e ^{\boldsymbol{\theta }(s-t)}\,ds \biggr) \operatorname{vec}\bigl(\boldsymbol{\sigma }\boldsymbol{ \sigma }^{T}\bigr) \\ & = \biggl( \int _{0}^{t} e^{(\boldsymbol{\theta }\oplus \boldsymbol{\theta })(s-t)}\,ds \biggr) \operatorname{vec}\bigl( \boldsymbol{\sigma }\boldsymbol{\sigma }^{T} \bigr) \\ & = (\boldsymbol{\theta }\oplus \boldsymbol{\theta })^{-1} \bigl( \boldsymbol{I}- e^{-( \boldsymbol{\theta }\oplus \boldsymbol{\theta })t} \bigr) \operatorname{vec}\bigl(\boldsymbol{\sigma }\boldsymbol{\sigma } ^{T}\bigr). \end{aligned} $$
(28)

Since all eigenvalues of \(\boldsymbol{\theta }\oplus \boldsymbol{\theta }\), being sums of pairs of eigenvalues of θ, still have positive real parts, the covariance matrix converges to a constant matrix Σ such that \(\operatorname{vec}(\boldsymbol{\varSigma })= (\boldsymbol{\theta }\oplus \boldsymbol{\theta })^{-1} \operatorname{vec}(\boldsymbol{\sigma }\boldsymbol{\sigma }^{T})\). □
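
Numerically, the limiting relation \(\operatorname{vec}(\boldsymbol{\varSigma })=(\boldsymbol{\theta }\oplus \boldsymbol{\theta })^{-1}\operatorname{vec}(\boldsymbol{\sigma }\boldsymbol{\sigma }^{T})\) can be formed directly with np.kron; since vec stacks columns, it corresponds to column-major (Fortran-order) flattening in NumPy. By the vec identity, the same Σ solves the Lyapunov equation \(\boldsymbol{\theta }\boldsymbol{\varSigma }+\boldsymbol{\varSigma }\boldsymbol{\theta }^{T}=\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\), so scipy.linalg.solve_continuous_lyapunov gives an independent check. A sketch with illustrative parameters:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

theta = np.array([[3.0, 1.0], [0.0, 2.0]])
sigma = np.array([[0.4, 0.0], [0.1, 0.3]])
D = sigma @ sigma.T
n = theta.shape[0]
I = np.eye(n)

# Kronecker sum theta (+) theta = theta (x) I + I (x) theta.
ks = np.kron(theta, I) + np.kron(I, theta)

# vec() stacks columns, i.e. column-major (Fortran) flattening in NumPy.
vec_Sigma = np.linalg.solve(ks, D.flatten(order="F"))
Sigma = vec_Sigma.reshape((n, n), order="F")

# Independent check: Sigma solves theta Sigma + Sigma theta^T = sigma sigma^T.
print(np.max(np.abs(Sigma - solve_continuous_lyapunov(theta, D))))  # ~0
```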

4 Conclusion

In this paper, we propose a new method to derive the distribution of the multivariate Ornstein–Uhlenbeck process by solving its forward equation. We apply the method of characteristics and the Fourier transform to solve the equation, obtaining the characteristic function of the multivariate Ornstein–Uhlenbeck process and, from it, the density function. Our explicit result shows that the multivariate Ornstein–Uhlenbeck process is, at any time, a multivariate normal random variable. We also derive the mean vector, covariance matrix, and cross-covariance function matrix and obtain a mean-reverting condition that extends the univariate case. It is well known that the univariate Ornstein–Uhlenbeck process is mean-reverting when the parameter θ is positive. In our study of the multivariate case, we have found that the process is mean-reverting, as t increases, whenever all eigenvalues of the matrix θ have positive real parts.