1 Introduction

Although stochastic differential equations play an essential role in real-world modelling, ranging from diffusion processes and financial markets to phenomena in quantum mechanics, they have received only limited attention in the field of Clifford analysis; see [10, 18] for more details on modelling with stochastic differential equations. The classical theory of stochastic equations comprises two general approaches: (i) a semi-group approach utilising the semi-group generated by the differential operator of a stochastic DE, see e.g. [9], and (ii) a Wick-product approach, where the classical objects of stochastic calculus, such as Brownian motion, white noise, and the Itô integral, are transferred to the Wick setting, see [18] for details. Independently of the approach, the analysis of stochastic DEs in a Clifford setting requires, first of all, a generalisation of the classical stochastic calculus to Clifford analysis.

Several authors have taken first steps towards generalising tools of stochastic calculus to a hypercomplex setting in recent years. In [2], positive definite functions have been studied in a quaternionic setting, where quaternionic random variables and stochastic processes have been introduced. Another group of works [3,4,5] has been devoted to stochastic calculus in general Grassmann algebras. Additionally, it is important to mention that a connection between \(\alpha \)-hyperbolic harmonics and hyperbolic Brownian motion has been presented in [12]. Nonetheless, a general Clifford setting has not been addressed so far.

Further ideas on generalising stochastic calculus to Clifford analysis have been presented in [7], where, among other results, Clifford random variables, Clifford white noise, and a Clifford chaos expansion in terms of Hermite polynomials have been introduced. In this paper, we further extend and refine the results from [7] by discussing martingales, Brownian motion, and the Itô formula in the Clifford setting. In this way, we broaden ideas from [13, 14, 24] on complex Brownian motion and the Itô formula to the hypercomplex setting. Additional motivation for generalising stochastic analysis to the Clifford setting comes from the fact that tools of stochastic analysis provide a new point of view on classical deterministic problems and results in harmonic analysis, such as the solution of Dirichlet boundary value problems and properties of harmonic functions, see for example [22, 23]. In this paper we show that similar techniques can be used in the context of Clifford analysis by studying a Dirichlet problem for monogenic functions and proving the canonical Liouville theorem with the help of the tools of stochastic Clifford analysis developed here.

The paper is organised as follows: preliminaries from the classical Clifford analysis, as well as some supplementary results, are presented in Sect. 2; Sect. 3 presents basics of stochastic analysis in the Clifford setting, namely random variables, stochastic processes, and martingales; Sect. 4 discusses Brownian motions in the Clifford setting, as well as shows its applications to some classical problems in Clifford analysis; finally, Sect. 5 is devoted to the Itô formula and related results.

2 Preliminaries and Notations

Following [8, 16], we recall in this section some well-known facts from Clifford analysis. Let us consider the standard orthonormal basis \(\{ {\mathbf {e}}_{1},{\mathbf {e}}_{2},\ldots ,{\mathbf {e}}_{n} \}\) of the Euclidean vector space \({\mathbb {R}}^{n}\). From now on, \({\mathcal {C}}\ell (n)\) will denote the \(2^{n}\)-dimensional real Clifford algebra over \({\mathbb {R}}^{n}\) with the classical multiplication rules for basis vectors:

$$\begin{aligned} {\mathbf {e}}_{j}^{2}=-1, \qquad {\mathbf {e}}_{j}{\mathbf {e}}_{k} + {\mathbf {e}}_{k}{\mathbf {e}}_{j} = 0, \quad j\ne k, \quad j,k=1,2,\ldots ,n. \end{aligned}$$

As usual, we identify \({\mathbf {e}}_{0}\) with the multiplicative identity 1 of the real field. The Euclidean vector space \({\mathbb {R}}^{n+1}\) can be straightforwardly embedded in \({\mathcal {C}}\ell (n)\) by identifying the element \({\mathbf {x}}=(x_{0},x_{1},x_{2},\ldots ,x_{n})\) with the Clifford para-vector \({\mathbf {x}}\) given by

$$\begin{aligned} {\mathbf {x}} = x_{0} + \sum \limits _{k=1}^{n}x_{k}{\mathbf {e}}_{k} \in {\mathcal {A}}:=\mathrm {span}_{{\mathbb {R}}}\{1, {\mathbf {e}}_1, \ldots , {\mathbf {e}}_n\}. \end{aligned}$$

Hence, the vector space \({\mathcal {A}}\) is algebraically isomorphic to \({\mathbb {R}}^{n+1}\).

For an arbitrary para-vector \({\mathbf {x}}\), \({{\mathbf {S}}}{{\mathbf {c}}}({\mathbf {x}}):= x_0\) and \(\mathbf {Vec}({\mathbf {x}}):= \sum _{k=1}^{n} x_k {\mathbf {e}}_k\) denote its scalar part and its vector part, respectively. A vector space basis for \({\mathcal {C}}\ell (n)\) is given by

$$\begin{aligned} \left\{ {\mathbf {e}}_0=1, {\mathbf {e}}_A = {\mathbf {e}}_{i_1} \cdots {\mathbf {e}}_{i_k}: A=\{ i_1, \ldots , i_k \}, 1\le i_1< \cdots < i_k \le n \right\} , \end{aligned}$$

implying that each element of \(a\in {\mathcal {C}}\ell (n)\) (a Clifford number) can be represented in the form

$$\begin{aligned} a=\sum _{A}a_{A}{\mathbf {e}}_{A}, \quad \text{ where } a_{A}\in {\mathbb {R}}. \end{aligned}$$

Additionally, the conjugation in \({\mathcal {C}}\ell (n)\) is defined as the involutory anti-automorphism \({\bar{\cdot }}: {\mathcal {C}}\ell (n) \rightarrow {\mathcal {C}}\ell (n)\) given by its action on the basis elements:

$$\begin{aligned} \overline{a b} = {{\bar{b}}} ~{{\bar{a}}}, \quad a, b \in {\mathcal {C}}\ell (n), \text{ and } \bar{{\mathbf {e}}}_{0} = {\mathbf {e}}_{0}, \quad \bar{{\mathbf {e}}}_{k} = - {\mathbf {e}}_{k}, ~k=1, \ldots , n. \end{aligned}$$

Thus, we have \( {\mathbf {x}} \bar{{\mathbf {x}}} = \bar{{\mathbf {x}}}{\mathbf {x}} = \sum _{k=0}^n x_k^2 =: |{\mathbf {x}}|^2\), which is the squared Euclidean norm of \({\mathbf {x}}\) when identified with \(x \in {\mathbb {R}}^{n+1}.\)
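The multiplication rules above are easy to mechanise. The following minimal Python sketch (not part of the paper; the dictionary-of-blades representation and the function names are our own) implements the product and the conjugation in \({\mathcal {C}}\ell (n)\) and checks that \({\mathbf {x}} \bar{{\mathbf {x}}} = |{\mathbf {x}}|^{2}\) for a para-vector:

```python
def blade_mul(A, B):
    """Product of basis blades e_A e_B in Cl(n) with e_j^2 = -1 and
    e_j e_k = -e_k e_j for j != k.  A, B are sorted index tuples.
    Returns (sign, C) such that e_A e_B = sign * e_C."""
    sign, result = 1, list(A)
    for b in B:
        pos = len(result)
        while pos > 0 and result[pos - 1] > b:
            pos -= 1                      # e_b anticommutes past each e_j, j > b
        sign *= (-1) ** (len(result) - pos)
        if pos > 0 and result[pos - 1] == b:
            sign *= -1                    # e_b e_b = -1
            result.pop(pos - 1)
        else:
            result.insert(pos, b)
    return sign, tuple(result)

def cl_mul(x, y):
    """Multiply multivectors stored as {blade_tuple: real coefficient}."""
    out = {}
    for A, a in x.items():
        for B, b in y.items():
            s, C = blade_mul(A, B)
            out[C] = out.get(C, 0.0) + s * a * b
    return {C: c for C, c in out.items() if c != 0.0}

def conj(x):
    """Clifford conjugation: bar(e_A) = (-1)^{k(k+1)/2} e_A for |A| = k."""
    return {A: a * (-1) ** (len(A) * (len(A) + 1) // 2) for A, a in x.items()}

# para-vector x = 2 + 3 e1 - e2, so x * conj(x) should equal 4 + 9 + 1 = 14
x = {(): 2.0, (1,): 3.0, (2,): -1.0}
```

The sign bookkeeping is exactly the anticommutation rule together with \({\mathbf {e}}_{j}^{2}=-1\); the conjugation sign combines the reversal factor \((-1)^{k(k-1)/2}\) with one factor \(-1\) per generator.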

Let now \(\Omega \) be an open subset of \({\mathbb {R}}^{n+1}\) with a piecewise smooth boundary. A \({\mathcal {C}}\ell (n)\)-valued function is a mapping

$$\begin{aligned} f:\Omega \rightarrow {\mathcal {C}}\ell (n) \text{ with } f({\mathbf {x}})=\sum _{A}f_A({\mathbf {x}}){\mathbf {e}}_{A}, \quad {\mathbf {x}}\in \Omega , \end{aligned}$$

where the coordinates \(f_A\) are real-valued functions defined in \(\Omega \), i.e., \(f_A:\Omega \rightarrow {\mathbb {R}}\), for all A. Continuity, differentiability, and integrability of f are defined coordinate-wise. In the special case of para-vector-valued functions we write \(f({\mathbf {x}})=\sum _{k=0}^{n}f^{k}({\mathbf {x}}){\mathbf {e}}_{k}\).

Definition 2.1

For continuously real-differentiable functions \(f:\Omega \subset {\mathbb {R}}^{n+1}\rightarrow {\mathcal {C}}\ell (n)\), denoted for simplicity by \(f\in C^{1}(\Omega ,{\mathcal {C}}\ell (n))\), the operator

$$\begin{aligned} D_{{\mathbf {x}}}:=\sum \limits _{k=1}^{n}{\mathbf {e}}_{k}\partial _{x_{k}} \end{aligned}$$

is called the Dirac operator.

Additionally, we also consider the generalised Cauchy–Riemann operator in \({\mathbb {R}}^{n+1}\), \(n\ge 1\), which is given by

$$\begin{aligned} D = \partial _{x_0}+ {\mathbf {e}}_1 \partial _{x_1} + \cdots +{\mathbf {e}}_n \partial _{x_n} = \partial _{x_0} + D_{{\mathbf {x}}}, \end{aligned}$$

and its conjugate operator

$$\begin{aligned} {\overline{D}} = \partial _{x_0} - {\mathbf {e}}_1 \partial _{x_1} - \cdots -{\mathbf {e}}_n \partial _{x_n} = \overline{ \partial _{x_0} + D_{{\mathbf {x}}}} = \partial _{x_0} - D_{{\mathbf {x}}}. \end{aligned}$$

Following [17, 20] and introducing the hypercomplex variables

$$\begin{aligned} z_k = x_k - x_0{\mathbf {e}}_k = -\frac{{\mathbf {x}}{\mathbf {e}}_k + {\mathbf {e}}_k{\mathbf {x}}}{2}, \end{aligned}$$

we can consider n copies \({\mathbb {C}}_k\) of \({\mathbb {C}}\) by identifying \(i\cong {\mathbf {e}}_k\) \((k=1,\ldots ,n)\), \(x_0 = \mathrm {Re}\, z\), \(x_k \cong \mathrm {Im}\, z\), where \(z\in {\mathbb {C}}\), and then taking \({\mathbb {C}}_k := -{\mathbf {e}}_k {\mathbb {C}}.\) Defining the Cartesian product \({\mathbb {H}}_k := {\mathbb {C}}_1 \times \cdots \times {\mathbb {C}}_k\) \((k=1,\ldots ,n)\), we get the real linear vector space

$$\begin{aligned} {\mathbb {H}}_n= \{ {\mathbf {z}} \,:\, {\mathbf {z}} = (z_1,\ldots , z_n) = (x_1,\ldots , x_n) - x_0({\mathbf {e}}_1,\ldots , {\mathbf {e}}_n)\} \cong {\mathbb {R}}^{n+1} \cong {\mathcal {A}}. \end{aligned}$$

The hypercomplex variables \(z_{k}\) and the real linear vector space \({\mathbb {H}}_{n}\) introduced above will be used later in the paper for proving the Itô formula in Clifford analysis.

Definition 2.2

A function \(f\in C^{1}(\Omega ,{\mathcal {C}}\ell (n))\) is called left (resp. right) monogenic in \(\Omega \) if

$$\begin{aligned} D f=0 \quad \text{ in } \quad \Omega \quad (\text{ resp. }, f D=0 \quad \text{ in } \quad \Omega ). \end{aligned}$$

For discussing the basics of stochastic analysis in the Clifford setting, we first need to introduce a proper Hilbert space structure, which is provided by a right Hilbert module [8]:

Definition 2.3

A right (unitary) module over \({\mathcal {C}}\ell (n)\) is a vector space V together with an algebra morphism \(R:{\mathcal {C}}\ell (n) \rightarrow \mathrm {End}(V)\) (also called right multiplication), such that

$$\begin{aligned} R(ab+c)= R(b)R(a)+R(c). \end{aligned}$$

In the particular case where \(V={\mathcal {H}}\otimes {\mathcal {C}}\ell (n),\) with \({\mathcal {H}}\) a Hilbert space, we say that V is a right Hilbert module over \({\mathcal {C}}\ell (n)\).

The inner product \((\cdot , \cdot )_{{\mathcal {H}}}\) in \({\mathcal {H}}\) gives rise to a Clifford algebra-valued inner product in V:

$$\begin{aligned} \langle f, g \rangle _{{\mathcal {H}}} := \sum _{A, B} (f_A, g_B)_{{\mathcal {H}}} \overline{{\mathbf {e}}}_{A} {\mathbf {e}}_{B}, \end{aligned}$$

which is, strictly speaking, not a classical inner product due to the lack of the positivity axiom. Nonetheless, by restricting the Clifford algebra-valued inner product to its scalar part, a real-valued inner product can be defined.

By considering \(\mu \)-measurable Clifford algebra-valued functions in \({\mathbb {R}}^{n+1}\), the \(L^{2}\)-inner product can be stated as follows

$$\begin{aligned} \langle f,g\rangle _{L^{2}({\mathbb {R}}^{n+1},{\mathcal {C}}\ell (n))} = \int \limits _{{\mathbb {R}}^{n+1}} \overline{f({\mathbf {x}})}g({\mathbf {x}}) d\mu . \end{aligned}$$
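As an illustration, for \(n=1\) a \({\mathcal {C}}\ell (1)\)-valued function is \(f=f_{0}+f_{1}{\mathbf {e}}_{1}\), and the inner product above reduces to \(\int (f_{0}g_{0}+f_{1}g_{1})\,d\mu + \int (f_{0}g_{1}-f_{1}g_{0})\,d\mu \,{\mathbf {e}}_{1}\). The following numerical sketch (our own illustration, with Lebesgue measure on [0, 1] and a plain midpoint quadrature) exhibits the Clifford algebra-valued inner product and the non-negativity of its scalar part on the diagonal:

```python
import numpy as np

def cl1_l2_inner(f0, f1, g0, g1, m=20_000):
    """<f, g> = int_0^1 conj(f(x)) g(x) dx for Cl(1)-valued f = f0 + f1 e1,
    g = g0 + g1 e1.  Returns (scalar part, e1 part), via a midpoint rule."""
    x = (np.arange(m) + 0.5) / m                       # midpoints of [0, 1]
    scalar = np.mean(f0(x) * g0(x) + f1(x) * g1(x))    # Sc part: positive on f=g
    vector = np.mean(f0(x) * g1(x) - f1(x) * g0(x))    # e1 part
    return scalar, vector
```

For \(f=g=\cos + \sin \,{\mathbf {e}}_{1}\) the scalar part is \(\int _{0}^{1}(\cos ^{2}+\sin ^{2})\,dx = 1\) and the \({\mathbf {e}}_{1}\) part vanishes, in line with the restriction-to-the-scalar-part construction above.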

3 Random Variables, Stochastic Processes and Martingales in the Clifford Setting

In this section we summarise some basic facts regarding stochastic analysis in the Clifford setting. In particular, a probability space, random variables, and stochastic processes will be introduced. Some of the results presented in this section build upon the work [7], where first steps towards stochastic Clifford analysis have been made. The final goal of this section is to introduce martingales in the Clifford setting. Moreover, for better readability, we provide all basic definitions needed for the construction of a stochastic analysis in the Clifford setting, although some of these definitions extend straightforwardly from the classical real case.

Similar to [2], we start by defining the probability space and Clifford random variables:

Definition 3.1

The triple \((\Omega ,{\mathcal {F}},P)\) is called a probability space, where \(\Omega \subset {\mathbb {R}}^{n+1}\) is a given set, \({\mathcal {F}}\) is a \(\sigma \)-algebra on \(\Omega \), and P is a standard probability measure. A Clifford random variable X is an \({\mathcal {F}}\)-measurable function \(X:\Omega \rightarrow {\mathcal {C}}\ell (n)\).

It is important to remark that, since a Clifford random variable X is a \({\mathcal {C}}\ell (n)\)-valued function, according to the discussion in Sect. 2 it can be expressed as follows

$$\begin{aligned} X=\sum _{A}X_{A}{\mathbf {e}}_{A}, \end{aligned}$$
(3.1)

and X is a Clifford random variable if and only if all components \(X_{A}\) are real random variables. With the help of this representation, the classical proposition, see for example [10], can be straightforwardly generalised to the Clifford setting:

Proposition 3.2

Let X and Y be Clifford random variables, then \(\alpha X + \beta Y\) is a Clifford random variable for any \(\alpha ,\beta \in {\mathcal {C}}\ell (n)\).

Next, following the standard theory of stochastic processes, see for example [22], the probability measure \(\mu _{X}\) on \({\mathcal {C}}\ell (n)\) induced by a random variable X,

$$\begin{aligned} \mu _{X}(A) = P\left( X^{-1}(A)\right) , \quad A\in {\mathcal {F}}_{{\mathcal {C}}\ell (n)}, \end{aligned}$$

is called the distribution of X, where \({\mathcal {F}}_{{\mathcal {C}}\ell (n)}\) is a \(\sigma \)-algebra in \({\mathcal {C}}\ell (n)\). Similarly, the number

$$\begin{aligned} E[X] := \int \limits _{\Omega } X(\omega )\,dP(\omega ) = \int \limits _{{\mathcal {C}}\ell (n)} {\mathbf {x}}\, d\mu _{X}({\mathbf {x}}), \qquad \left( \text{ if } \int \limits _{\Omega } |X(\omega )|\,dP(\omega )<\infty \right) \end{aligned}$$

is called the expectation of X (w.r.t. P). By taking into account representation (3.1) of X, the expectation can also be written as follows

$$\begin{aligned} E[X]=\sum _{A}E[X_{A}]{\mathbf {e}}_{A}. \end{aligned}$$
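The componentwise form of the expectation is convenient numerically: one simply averages each coordinate \(X_{A}\) separately. A toy Monte Carlo illustration (the blades-as-index-tuples encoding and the concrete component distributions are our own hypothetical choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
# toy Cl(2)-valued random variable X = X_0 + X_1 e1 + X_12 e1e2,
# with blades encoded as sorted index tuples
samples = {
    (): rng.normal(1.0, 1.0, N),       # E[X_0]  = 1
    (1,): rng.uniform(0.0, 2.0, N),    # E[X_1]  = 1
    (1, 2): rng.exponential(0.5, N),   # E[X_12] = 0.5
}
# E[X] = sum_A E[X_A] e_A, estimated blade by blade
EX = {A: xs.mean() for A, xs in samples.items()}
```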

Analogously to the classical theory [10], we define now a stochastic Clifford process as follows:

Definition 3.3

Let V be a right Hilbert-module over \({\mathcal {C}}\ell (n)\), \((\Omega ,{\mathcal {F}},P)\) be a probability space and let I be an interval of \({\mathbb {R}}\). An arbitrary family \(X=\left\{ X(t)\right\} _{t\in I}\) of V-valued Clifford random variables X(t), \(t\in I\), defined on \(\Omega \) is called a stochastic Clifford process. Additionally, we set \(X_{t}(\omega )=X(t,\omega )\) for all \(t\in I\) and \(\omega \in \Omega \). Functions \(X(\cdot ,\omega )\) are called trajectories (or paths) of X(t). Further, considering representation (3.1), a stochastic Clifford process can be written as follows

$$\begin{aligned} X(t) = \sum _{A}X_{A}(t){\mathbf {e}}_{A}. \end{aligned}$$
(3.2)

The parametrisation of Clifford random variables introduced in this definition is realised component-wise, similar to the case of vectors of random variables. Let us further introduce the following classical notions related to stochastic processes, which generalise straightforwardly to the Clifford setting via representation (3.2):

  • A stochastic Clifford process X(t) is said to be integrable (resp. square integrable) if

    $$\begin{aligned} E[|X_{A}(t)|]<\infty \, (\text{ resp. } E[|X_{A}(t)|^{2}]<\infty ) \text{ for } \text{ all } t\in I \text{ and } A; \end{aligned}$$
  • A stochastic Clifford process X(t) is said to be bounded in \(L^{p}\) if

    $$\begin{aligned} \sup \limits _{t\in I} E[|X_{A}(t)|^{p}]<\infty \text{ for } \text{ all } A; \end{aligned}$$
  • A stochastic Clifford process X(t) is continuous if its trajectories are continuous.

Further definitions of regularity in a stochastic sense can also be adapted directly from the classical case.

Before introducing Clifford martingales, we need to recall the notion of a filtration:

Definition 3.4

  1. (i)

    A filtration on a probability space \((\Omega ,{\mathcal {F}},P)\) is a family \(\left\{ {\mathcal {F}}_{t}:t\ge 0\right\} \) of \(\sigma \)-algebras such that \({\mathcal {F}}_{s}\subset {\mathcal {F}}_{t}\subset {\mathcal {F}}\) for all \(s<t\).

  2. (ii)

    A probability space together with a filtration is called a filtered probability space.

  3. (iii)

    A stochastic Clifford process \(\left\{ X(t):t\ge 0\right\} \) defined on a filtered probability space with filtration \(\left\{ {\mathcal {F}}_{t}:t\ge 0\right\} \) is called adapted if X(t) is \({\mathcal {F}}_{t}\)-measurable for every \(t\ge 0\).

Finally, we can introduce Clifford martingales:

Definition 3.5

Let V be a right Hilbert-module over \({\mathcal {C}}\ell (n)\). An integrable V-valued stochastic Clifford process \(\left\{ X(t):t\ge 0\right\} \) is a martingale with respect to a filtration \(\left\{ {\mathcal {F}}_{t}:t\ge 0\right\} \) if it is adapted to the filtration and

$$\begin{aligned} E(X(t)|{\mathcal {F}}_{s}) = X(s), \end{aligned}$$

almost surely for any pair of times \(0\le s\le t\). The process is called a submartingale if \(\ge \) holds, and a supermartingale if \(\le \) holds in the formula above.

Next, for a fixed number \(T>0\), let us denote by \({\mathcal {M}}_{T}^{2}(V)\) the space of all V-valued continuous, square integrable martingales M, such that \(M(0)=0\). We have now the following proposition:

Proposition 3.6

The space \({\mathcal {M}}_{T}^{2}(V)\), equipped with the norm

$$\begin{aligned} \Vert M\Vert _{{\mathcal {M}}_{T}^{2}(V)} := \left( E\left[ \sup \limits _{t\in [0,T]}\Vert M(t)\Vert ^{2}\right] \right) ^{\frac{1}{2}}, \end{aligned}$$
(3.3)

is a right-Banach module.

Proof

The proof of this proposition follows the standard arguments from the classical case, see for example [10]; therefore, we only recall a few basic ideas. First, since we are in the Clifford setting, it is important to underline that the norm in (3.3) must be kept real-valued to ensure that \(\Vert M(t)\Vert \) is a submartingale. In practice this implies that, in the case of an inner product-induced norm, i.e. of a Hilbert space, the norm is constructed by using only the scalar component of the inner product. Thus, if \(\Vert M(t)\Vert \) is a submartingale, then identity (3.3) defines a norm.

For proving completeness, we consider a Cauchy sequence \(\left\{ M_{n}\right\} \), which in the stochastic setting means

$$\begin{aligned} E\left( \sup \limits _{t\in [0,T]} \Vert M_{n}(t)-M_{m}(t)\Vert ^{2}\right) \rightarrow 0, \text{ as } n,m\rightarrow \infty . \end{aligned}$$

After that, by using properties of Cauchy sequences and the Borel–Cantelli lemma, it is possible to show that the limit process M is a continuous stochastic Clifford process. The final step of the proof is to recall that for a subsequence \(\left\{ M_{n_{k}}\right\} \) it holds that

$$\begin{aligned} E(M_{n_{k}}(t)|{\mathcal {F}}_{s}) = M_{n_{k}}(s), \end{aligned}$$

almost surely for \(0\le s\le t \le T\) and every \(k\in {\mathbb {N}}\). Letting k tend to infinity, we get \(E(M(t)|{\mathcal {F}}_{s}) = M(s)\) almost surely, implying that \(M\in {\mathcal {M}}_{T}^{2}(V)\) and \(M_{n}\rightarrow M\) in \({\mathcal {M}}_{T}^{2}(V)\). \(\square \)

In Sect. 5, a generalisation of the stochastic integral and the Itô formula to the Clifford setting will be discussed. To prepare this discussion, we need to introduce a few more notions related to martingales, namely local martingales and continuous semimartingales [23]:

Definition 3.7

A stochastic Clifford process \(\left\{ X(t):t\ge 0\right\} \) is called a local martingale, if

  1. (i)

    X(0) is \({\mathcal {F}}_{0}\)-measurable,

  2. (ii)

    \(\left\{ X(t)-X(0) :t\ge 0\right\} \in {\mathcal {M}}_{0,loc}\),

where \({\mathcal {M}}_{0,loc}\) is the space of continuous local martingales null at \(t=0\), i.e.

$$\begin{aligned} {\mathcal {M}}_{0,loc} := \{M :M(t), t\ge 0 \text { is a continuous local martingale and } M_0= 0\}. \end{aligned}$$

Additionally to the space \({\mathcal {M}}_{0,loc}\), the following space needs to be introduced

$$\begin{aligned} {\mathbb {L}}^2_{loc}(M) := \left\{ F :\begin{array}{l} \left\{ F(t):t\ge 0\right\} \text { is progressively measurable } \\ \text {and } \int _0^{\infty } F^2 d\langle M \rangle < \infty \end{array} \right\} , \end{aligned}$$

where the term progressively measurable implies that a stochastic process is measurable with respect to the \(\sigma \)-algebra \({\mathcal {B}}([0,t])\otimes {\mathcal {F}}_{t}\) with \({\mathcal {B}}([0,t])\) denoting the Borel \(\sigma \)-algebra on [0, t].

Finally, we need the notion of a continuous semimartingale:

Definition 3.8

A stochastic Clifford process \(X(t), t\ge 0\) is a continuous semimartingale if it can be written as the sum

$$\begin{aligned} X(t) = X(0) + M(t) + A(t), \end{aligned}$$

where X(0) is \({\mathcal {F}}_0\)-measurable, M is a continuous local martingale with \(M(0)=0\) and A(t) is a continuous adapted process with paths of locally finite variation with \(A(0)=0\). The processes M and A are known as the martingale and finite variational parts of X, respectively.

In the next section, after introducing Clifford Brownian motion, we will provide several examples of Clifford martingales and link them to the Clifford Brownian motion.

Remark 3.9

As a summary of this section, we would like to underline that many constructions from classical stochastic analysis can be directly transferred to the Clifford setting. The technical aspects of working with Clifford structures are, first of all, related to the definitions of spaces, norms, and inner products, which are generally “hidden” behind the classically formulated definitions.

4 Brownian Motion and Monogenic Functions

The aim of this section is to discuss Brownian motion in the Clifford context and relate it to monogenic functions. From now on we will only consider the white noise measure \(\mu _{X}\), i.e. the measure provided by the Bochner–Minlos theorem, see [18] for details. With the help of the white noise measure, a Clifford Brownian motion can be introduced:

Definition 4.1

A para-vector-valued stochastic process \(\left\{ {\mathbf {B}}(t):t\ge 0 \right\} \) of the form

$$\begin{aligned} {\mathbf {B}}(t) := B_{0}(t) + {\mathbf {e}}_{1}B_{1}(t) + \cdots + {\mathbf {e}}_{n}B_{n}(t), \end{aligned}$$

is called a (linear) Clifford Brownian motion with start \({\mathbf {x}}\in {\mathbb {R}}^{n+1}\), where \(B_{i}(t)\), \(i=0,1,\ldots ,n\), are classical one-dimensional Brownian motions, meaning that

  • \(B_{i}(0)=x_{i}\), \(i=0,1,\ldots ,n\);

  • the process has independent increments, i.e. for all times \(0\le t_{1}\le t_{2}\le \cdots \le t_{k}\) the increments \(B_{i}(t_{k})-B_{i}(t_{k-1})\), \(B_{i}(t_{k-1})-B_{i}(t_{k-2})\), \(\ldots \), \(B_{i}(t_{2})-B_{i}(t_{1})\) are independent random variables;

  • for all \(t\ge 0\) and \(\sigma >0\), the increments \(B_{i}(t+\sigma )-B_{i}(t)\) are normally distributed with expectation zero and variance \(\sigma \);

  • almost surely, the function \(t\mapsto B_{i}(t)\) is continuous.

Further, analogous to the classical case, we say that \(\left\{ {\mathbf {B}}(t):t\ge 0 \right\} \) is a standard Clifford Brownian motion if \({\mathbf {x}}=0\).
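Definition 4.1 suggests a direct simulation recipe: sample \(n+1\) independent one-dimensional Brownian motions and stack them along the basis \(1,{\mathbf {e}}_{1},\ldots ,{\mathbf {e}}_{n}\). A hedged Python sketch (the discretisation step, array layout, and function name are our own choices):

```python
import numpy as np

def clifford_bm(n, T, steps, n_paths, x0=None, seed=0):
    """Discretised para-vector Clifford Brownian motion on [0, T].
    Returns an array of shape (n_paths, steps+1, n+1); entry i of the last
    axis is the coordinate of e_i (with e_0 = 1)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    # independent Gaussian increments, N(0, dt), per path / step / component
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, steps, n + 1))
    B = np.zeros((n_paths, steps + 1, n + 1))
    B[:, 1:, :] = np.cumsum(dB, axis=1)
    if x0 is not None:
        B += np.asarray(x0, dtype=float)   # start: B_i(0) = x_i
    return B
```

The test below checks the defining properties empirically: increments over a time interval of length 1 have mean 0 and variance 1 per component, and disjoint increments are (nearly) uncorrelated.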

It is important to underline that only para-vector-valued Clifford Brownian motions are considered from now on. This restriction is necessary for providing a clear connection between Clifford Brownian motion and monogenic functions, as presented in Theorem 4.7.

Next, we formulate the following theorem:

Theorem 4.2

Let \(\{{\mathbf {B}}(t):t\ge 0\}\) be a Clifford Brownian motion started at \({\mathbf {x}}\in {\mathbb {R}}^{n+1}\). Then, for every \(s> 0\), the process \(\{{\mathbf {B}}(t+s) - {\mathbf {B}}(s) :t> 0\}\) is a Clifford Brownian motion started at the origin and is independent of \(\{{\mathbf {B}}(t): 0\le t \le s \}\).

Proof

This is an immediate consequence of the independence of increments of Brownian motion. \(\square \)

In the sequel, we will use a Clifford Brownian motion adapted to a filtration. Let us briefly illustrate how such a Brownian motion can be constructed. Suppose \(\{{\mathbf {B}}(t), t\ge 0\}\) is a Clifford Brownian motion defined on some probability space, then a filtration \(\left\{ {\mathcal {F}}_{t}^{0}:t\ge 0\right\} \) can be defined as follows

$$\begin{aligned} {\mathcal {F}}_{t}^{0} = \sigma \left\{ {\mathbf {B}}(s):0\le s\le t\right\} , \end{aligned}$$

which is the \(\sigma \)-algebra generated by the random variables \({\mathbf {B}}(s)\) for \(0\le s\le t\). An important property of Brownian motion is the independence of its increments from the filtration:

Theorem 4.3

For all \(s\ge 0\) the random process \(\{{\mathbf {B}}(t+s) - {\mathbf {B}}(s), t\ge 0\}\) is independent of \({\mathcal {F}}_{s^{+}} := \bigcap _{u>s} {\mathcal {F}}_{u}^{0}\).

Proof

Because of the continuity of the stochastic process \(\{{\mathbf {B}}(t+s) - {\mathbf {B}}(s), t\ge 0\}\), the following equality holds for a strictly decreasing sequence \(\{s_n:n\in {\mathbb {N}}\}\) converging to s:

$$\begin{aligned} {\mathbf {B}}(t+s) - {\mathbf {B}}(s) = \lim _{n\rightarrow \infty } {\mathbf {B}}(s_n+t) - {\mathbf {B}}(s_n). \end{aligned}$$

Then the Markov property implies that the right-hand side of the above equation is independent of \({\mathcal {F}}_{s^{+}}\). \(\square \)

Further we list two theorems, whose proofs are straightforward [21], and, therefore, omitted:

Theorem 4.4

(Strong Markov property) For every almost surely finite stopping time T, the process \(\{{\mathbf {B}}(T+t) - {\mathbf {B}}(T):t\ge 0\}\) is a standard Clifford Brownian motion independent of \({\mathcal {F}}_{T^{+}}\).

Theorem 4.5

(Reflection principle) If T is a stopping time and \(\{{\mathbf {B}}(t):t\ge 0 \}\) is a standard Clifford Brownian motion, then the random process

$$\begin{aligned} {\mathbf {B}}^*(t):= {\mathbf {B}}(t){\mathbbm {1}}_{\{t\le T\}} + (2{\mathbf {B}}(T)-{\mathbf {B}}(t)){\mathbbm {1}}_{\{t >T\}} =\left\{ \begin{array}{cl} {\mathbf {B}}(t), &{} \text { if } 0\le t \le T \\ 2{\mathbf {B}}(T) - {\mathbf {B}}(t), &{} \text { if } t > T \end{array} \right. \end{aligned}$$

is a standard Clifford Brownian motion, where \({\mathbbm {1}}\) is the classical characteristic function of a set.
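On a discretised path the reflection is a one-line operation. A toy Python sketch for a single scalar component (the stopping time here is the first hitting time of a fixed level, our own choice for illustration):

```python
import numpy as np

def reflect_at(B, level):
    """Reflection of a discretised scalar path at T = first index with
    B[T] >= level: B*(t) = B(t) for t <= T and 2 B(T) - B(t) for t > T."""
    hit = np.flatnonzero(B >= level)
    out = B.copy()
    if hit.size == 0:
        return out                     # the level is never hit: B* = B
    T = hit[0]
    out[T:] = 2 * B[T] - B[T:]         # note out[T] = B[T]: the paths agree at T
    return out
```

By construction the reflected path agrees with the original up to T, is continuous at T, and reflecting twice restores the original path.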

4.1 Clifford Brownian Motion and Clifford Martingales

In this subsection, we present two basic examples of Clifford martingales, linking them to the Clifford Brownian motion introduced in Definition 4.1. These examples serve as a basis for future studies of Clifford Brownian motion. We start with the following example:

Example

For a Clifford Brownian motion \(\left\{ {\mathbf {B}}(t):t\ge 0 \right\} \) we have

$$\begin{aligned} E\left[ {\mathbf {B}}(t)|{\mathcal {F}}_{s}^{+}\right] = E\left[ {\mathbf {B}}(t)-{\mathbf {B}}(s)|{\mathcal {F}}_{s}^{+}\right] + {\mathbf {B}}(s) = E\left[ {\mathbf {B}}(t)-{\mathbf {B}}(s)\right] + {\mathbf {B}}(s) = {\mathbf {B}}(s) \end{aligned}$$

for \(0\le s\le t\), where Theorem 4.3 has been used. Hence, Clifford Brownian motion is a Clifford martingale, as expected.

As the next example, we present the following lemma:

Lemma 4.6

Let \(\{{\mathbf {B}}(t):t\ge 0\}\) be a Clifford Brownian motion adapted to a filtration \({\mathcal {F}}_{t^{+}}\), then the process

$$\begin{aligned} \{{\mathbf {B}}^{2}(t) - t :t\ge 0\} \end{aligned}$$

is a martingale.

Proof

The proof is done by straightforward calculations and by using the fact that the increments of Brownian motion are independent of the filtration. We compute

$$\begin{aligned}&\displaystyle E\left[ {\mathbf {B}}^{2}(t) - t |{\mathcal {F}}_{s}^{+} \right] = E\left[ \left( {\mathbf {B}}(t)-{\mathbf {B}}(s)+{\mathbf {B}}(s)\right) ^{2} - t |{\mathcal {F}}_{s}^{+} \right] \\&\quad = \displaystyle E\left[ \left( {\mathbf {B}}(t)-{\mathbf {B}}(s)\right) ^{2}|{\mathcal {F}}_{s}^{+} \right] + 2E\left[ {\mathbf {B}}(t)-{\mathbf {B}}(s)|{\mathcal {F}}_{s}^{+} \right] {\mathbf {B}}(s) + E\left[ {\mathbf {B}}^{2}(s)|{\mathcal {F}}_{s}^{+} \right] - t \\&\quad = \displaystyle (t-s) + {\mathbf {B}}^{2}(s) - t = {\mathbf {B}}^{2}(s) - s, \end{aligned}$$

and thus the lemma is proved. It is only important to underline that, since para-vector-valued Brownian motions are considered, the left and right multiplications must be carried through carefully. \(\square \)

The result of this lemma provides a basis for proving the second Wald’s lemma, which goes beyond the scope of the current paper and will be addressed in future work.
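For a single scalar component the martingale identity of Lemma 4.6 can be checked by simulation. A hedged Monte Carlo sketch (one-dimensional case only; the concrete values of s, t and the realised value \(B(s)=0.7\) are our own toy choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, s, t = 400_000, 0.4, 1.0

# unconditional check: E[B(t)^2 - t] = 0
B_t = rng.normal(0.0, np.sqrt(t), N)
est0 = (B_t**2 - t).mean()

# conditional check: given B(s) = 0.7, E[B(t)^2 - t | F_s] = B(s)^2 - s
B_s = 0.7
incr = rng.normal(0.0, np.sqrt(t - s), N)   # B(t) - B(s), independent of F_s
est1 = ((B_s + incr)**2 - t).mean()
```

Here `est0` should be close to 0 and `est1` close to \(B(s)^{2}-s = 0.49-0.4 = 0.09\), matching the computation in the proof above.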

4.2 Clifford Brownian Motion and Monogenic Functions

Let now \(B_{r}({\mathbf {x}})\) denote the ball of radius r centred at \({\mathbf {x}}\), and let \(\partial B_{r}({\mathbf {x}})\) be its boundary, as usual. Further, we will need the classical Hardy spaces \(H^{p}\left( \partial \Omega \right) \), \(0<p<\infty \), of \(L^{p}\)-functions on \(\partial \Omega \) that are boundary values of functions monogenic in \(\Omega \). We start this section with the following theorem, which provides a connection between Clifford Brownian motion and monogenic functions:

Theorem 4.7

Let \(\Omega \) be a domain, let \((\Omega ,{\mathcal {F}},P)\) be the probability space on which the Clifford Brownian motion \(\{{\mathbf {B}}(t), t\ge 0\}\) started inside \(\Omega \) is defined, and suppose that \(\left( {\mathcal {F}}_{t}:t\ge 0\right) \) is a filtration to which the Clifford Brownian motion is adapted such that the strong Markov property holds. Further, let \(\tau = \tau (\partial \Omega ) = \min \left\{ t\ge 0 :{\mathbf {B}}(t)\in \partial \Omega \right\} \) be the first hitting time of the boundary \(\partial \Omega \). Let \(\varphi :\partial \Omega \rightarrow {\mathcal {C}}\ell (n)\) be measurable, belong to \(H^{p}\left( \partial \Omega \right) \), and be such that the function \(u:\Omega \rightarrow {\mathcal {C}}\ell (n)\) with

$$\begin{aligned} u({\mathbf {x}}) = E\left[ \varphi \left( {\mathbf {B}}(\tau )\right) {\mathbbm {1}}_{\tau <\infty }\right] , \quad \text{ for } \text{ every } {\mathbf {x}}\in \Omega , \end{aligned}$$

is locally bounded. Then u is a monogenic function.

Proof

For a ball \(B_{\delta }({\mathbf {x}})\subset \Omega \) let \({\tilde{\tau }}=\inf \left\{ t>0 :{\mathbf {B}}(t)\notin B_{\delta }({\mathbf {x}})\right\} \), then the strong Markov property implies that

$$\begin{aligned} \begin{array}{lcl} \displaystyle u({\mathbf {x}}) &{} = &{} \displaystyle E\left[ E\left[ \varphi ({\mathbf {B}}(\tau )){\mathbbm {1}}_{\tau <\infty } | {\mathcal {F}}^{+}({\tilde{\tau }})\right] \right] \\ &{} = &{} \displaystyle E\left[ u({\mathbf {B}}({\tilde{\tau }}))\right] = \int \limits _{\partial B_{\delta }({\mathbf {x}})} u({\mathbf {y}})\omega _{{\mathbf {x}},\delta }(d{\mathbf {y}}), \end{array} \end{aligned}$$

where \(\omega _{{\mathbf {x}},\delta }\) is the uniform distribution on the sphere \(\partial B_{\delta }({\mathbf {x}})\). Therefore, u possesses the mean value property, and since it has \(\varphi \in H^{p}(\partial \Omega )\) as a boundary value, it is monogenic in \(\Omega \); see [6] for a detailed discussion of the relation between the Hardy space \( H^{p}(\partial \Omega )\) and monogenicity in \(\Omega \). \(\square \)
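Theorem 4.7 also suggests a numerical method: estimate \(u({\mathbf {x}})\) by averaging boundary values over simulated exit points of the Brownian motion. A hedged sketch for the unit disk in \({\mathbb {R}}^{2}\) (Euler discretisation with projection of the overshoot onto the circle; the step size, path count, and the harmonic test function \(u_{0}(x,y)=x^{2}-y^{2}\) are our own choices):

```python
import numpy as np

def exit_value_mc(x0, u0, n_paths=4000, dt=2e-4, seed=0):
    """Monte Carlo estimate of u(x0) = E[u0(B(tau))] for planar Brownian
    motion started at x0 inside the unit disk, tau = first exit time."""
    rng = np.random.default_rng(seed)
    pos = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)
    exit_pts = np.zeros_like(pos)
    while alive.any():
        # advance only the paths that have not exited yet
        pos[alive] += rng.normal(0.0, np.sqrt(dt), size=(alive.sum(), 2))
        r = np.linalg.norm(pos, axis=1)
        exited = alive & (r >= 1.0)
        exit_pts[exited] = pos[exited] / r[exited, None]  # project onto circle
        alive &= r < 1.0
    return u0(exit_pts).mean()
```

For a function harmonic in the disk, optional stopping gives \(E[u_{0}({\mathbf {B}}(\tau ))] = u_{0}({\mathbf {x}})\), so starting at (0.3, 0.2) the estimate should be close to \(0.3^{2}-0.2^{2}=0.05\) up to Monte Carlo and discretisation error.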

Next, we want to illustrate the application of tools constructed in this section for studying some known problems from the classical Clifford analysis. We start with the following Dirichlet problem for monogenic functions:

Definition 4.8

(Dirichlet problem) Let \(\Omega \) be a domain in \({\mathbb {R}}^{n+1}\) with boundary \(\partial \Omega \), and assume further \(u_0\in H^{2}(\partial \Omega )\cap C(\partial \Omega )\). Find a function \(u\in C({\overline{\Omega }})\), such that u is monogenic in \(\Omega \) and satisfies the boundary condition \(u({\mathbf {x}})=u_0({\mathbf {x}})\) for \({\mathbf {x}}\in \partial \Omega \).

For discussing solvability of the Dirichlet problem, we first need to recall the Poincaré cone condition:

Definition 4.9

(Poincaré cone) Let \(\Omega \subset {\mathbb {R}}^{n+1}\) be a domain. We say that \(\Omega \) satisfies the Poincaré cone condition at \({\mathbf {x}}\in \partial \Omega \) if there exists a cone V based at \({\mathbf {x}}\) with opening angle \(\alpha >0\), and \(h>0\) such that \(V\cap B_{h}({\mathbf {x}})\subset \Omega ^{c}\).

It is worth recalling that Lipschitz domains satisfy the cone condition, see for example [1]; hence, the subsequent results are applicable to a large class of problems.

As the final preparation step for proving solvability of the Dirichlet problem, we need the following lemma:

Lemma 4.10

[21] Let \(0<\alpha < 2\pi \), let \(C_{\mathbf {0}}(\alpha ) \subset {\mathbb {R}}^{n+1}\) be a cone based at the origin with opening angle \(\alpha \), and set

$$\begin{aligned} a = \sup _{{\mathbf {x}}\in B_{\frac{1}{2}}({\mathbf {0}})} P_{\mathbf {x}}\{\tau (\partial B_1({\mathbf {0}})) < \tau (C_{\mathbf {0}}(\alpha ))\}. \end{aligned}$$

Then \(a<1\) and, for any positive integer k and \(h>0,\) we have

$$\begin{aligned} P_{\mathbf {x}}\{ \tau (\partial B_{h}({\mathbf {z}})) < \tau (C_{\mathbf {z}}(\alpha ))\} \le a^k , \end{aligned}$$

for all \({\mathbf {x}},{\mathbf {z}}\in {\mathbb {R}}^{n+1}\) with \(|{\mathbf {x}}-{\mathbf {z}}|< 2^{-k}h\), where \(C_{\mathbf {z}}(\alpha )\) is a cone based at \({\mathbf {z}}\) with opening angle \(\alpha \).

We provide a short proof of this lemma for the convenience of the reader:

Proof

Obviously \(a < 1\). If \({\mathbf {x}}\in B_{2^{-k}}({\mathbf {0}})\), then by the strong Markov property

$$\begin{aligned} P_{\mathbf {x}}\{ \tau (\partial B_{1}({\mathbf {0}}))< \tau (C_{\mathbf {0}}(\alpha ))\} \le \prod _{i=0}^{k-1} \sup _{{\mathbf {y}}\in B_{2^{-k+i}}({\mathbf {0}})} P_{\mathbf {y}}\{ \tau (\partial B_{2^{-k+i+1}}({\mathbf {0}})) < \tau (C_{\mathbf {0}}(\alpha ))\} = a^k . \end{aligned}$$

Therefore, for any positive integer k and \(h>0\), we have by scaling

$$\begin{aligned} P_{\mathbf {x}}\{ \tau (\partial B_{h}({\mathbf {z}})) < \tau (C_{\mathbf {z}}(\alpha ))\} \le a^k, \end{aligned}$$

for all \({\mathbf {x}}\) with \(|{\mathbf {x}}-{\mathbf {z}}|<2^{-k}h\). \(\square \)

Now, we can formulate the main theorem for the Dirichlet boundary value problem:

Theorem 4.11

(Dirichlet problem) Suppose \(\Omega \subset {\mathbb {R}}^{n+1}\) is a bounded domain satisfying the Poincaré cone condition at every boundary point, and \(u_{0}\) is a continuous function on \(\partial \Omega \). Let \(\tau (\partial \Omega ) = \inf \{t>0 :{\mathbf {B}}(t) \in \partial \Omega \}\). Then the function

$$\begin{aligned} u({\mathbf {x}}) = E_{\mathbf {x}}\left[ u_0({\mathbf {B}}(\tau (\partial \Omega )))\right] , \quad \text {for } {\mathbf {x}}\in {\overline{\Omega }}, \end{aligned}$$

is the unique function, continuous on \({\overline{\Omega }}\) and monogenic on \(\Omega \), satisfying \(u({\mathbf {x}}) = u_0({\mathbf {x}})\) for all \({\mathbf {x}}\in \partial \Omega \), provided additionally \(u_{0}\in H^{2}(\partial \Omega )\cap C(\partial \Omega )\).

Proof

The uniqueness of the solution follows from the uniqueness of monogenic functions with given boundary values. Since the boundary values belong to the Hardy space \(H^{2}(\partial \Omega )\) of inner monogenic functions, the function u is monogenic by Theorem 4.7. We only need to prove that the Poincaré cone condition implies that the boundary values are attained continuously. For a fixed \({\mathbf {y}}\in \partial \Omega \) there are a cone \(C_{\mathbf {y}}(\alpha )\) based at \({\mathbf {y}}\) with opening angle \(\alpha >0\) and an \(h>0\) such that \(C_{\mathbf {y}}(\alpha )\cap B_h({\mathbf {y}})\subset \Omega ^c.\) Using Lemma 4.10, for any positive integer k, we have

$$\begin{aligned} P_{\mathbf {x}}\{ \tau (\partial B_{h}({\mathbf {y}})) < \tau (C_{\mathbf {y}}(\alpha ))\} \le a^k \end{aligned}$$

for all \({\mathbf {x}}\) with \(|{\mathbf {x}}-{\mathbf {y}}|< 2^{-k}h\). Given \(\varepsilon >0\), there is a \(0<\delta \le h\) such that \(|u_0({\mathbf {v}})-u_0({\mathbf {y}})| < \varepsilon \) for all \({\mathbf {v}}\in \partial \Omega \) with \(|{\mathbf {v}}-{\mathbf {y}}|< \delta \). For all \({\mathbf {x}}\in {\overline{\Omega }}\) such that \(|{\mathbf {y}}-{\mathbf {x}}|<2^{-k}\delta \),

$$\begin{aligned} |u({\mathbf {x}})-u({\mathbf {y}})| &= \left| E_{\mathbf {x}}\left[ u_0({\mathbf {B}}(\tau (\partial \Omega )))\right] - u_0({\mathbf {y}})\right| \\ &\le E_{\mathbf {x}}\left| u_0({\mathbf {B}}(\tau (\partial \Omega )))- u_0({\mathbf {y}})\right| . \end{aligned}$$
(4.1)

If the Brownian motion hits the cone \(C_{\mathbf {y}}(\alpha )\), which is outside the domain \(\Omega \), before the sphere \(\partial B_{\delta }({\mathbf {y}})\), then \(|{\mathbf {y}}-{\mathbf {B}}(\tau (\partial \Omega ))| < \delta \), and \(u_0({\mathbf {B}}(\tau (\partial \Omega )))\) is close to \(u_0({\mathbf {y}})\). Hence, (4.1) is bounded from above by

$$\begin{aligned} 2 \Vert u_0\Vert _{\infty }\, P_{\mathbf {x}}\{\tau (\partial B_{\delta }({\mathbf {y}})) < \tau (C_{\mathbf {y}}(\alpha ))\} + \varepsilon \, P_{\mathbf {x}}\{\tau (\partial \Omega ) < \tau (\partial B_{\delta }({\mathbf {y}}))\} \le 2 \Vert u_0\Vert _{\infty }\, a^k + \varepsilon , \end{aligned}$$

where \(\Vert \cdot \Vert _{\infty }\) is the classical \(L^{\infty }\)-norm. This implies that u is continuous on \({\overline{\Omega }}\). \(\square \)

Remark 4.12

If the domain \(\Omega \) fulfils the cone condition, a solution of the Dirichlet problem can be simulated by running many independent Brownian motions starting at \({\mathbf {x}}\in \Omega \) until they hit the boundary \(\partial \Omega \), and letting \(u({\mathbf {x}})\) be the average of the values of \(u_{0}\) at the hitting points.
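In the planar case, where the components of a monogenic function are harmonic, this recipe can be sketched as follows. The code below is an illustrative implementation with names and parameters of our own choosing, not taken from the text. It uses the walk-on-spheres shortcut: instead of discretising the whole Brownian path, it jumps to a uniformly distributed point on the largest circle around the current position contained in the unit disc, which has the same hitting distribution by the mean value property, and averages the boundary data at the resulting exit points.

```python
import math, random

def dirichlet_mc(x0, u0, n_paths=20000, eps=1e-3, seed=0):
    """Walk-on-spheres estimate of the harmonic extension of u0 to the unit disc."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, y = x0
        while True:
            r = 1.0 - math.hypot(x, y)         # distance from (x, y) to the unit circle
            if r < eps:                        # close enough to the boundary: stop
                break
            phi = rng.uniform(0.0, 2.0 * math.pi)
            x += r * math.cos(phi)             # jump to a uniform point on the
            y += r * math.sin(phi)             # circle of radius r around (x, y)
        total += u0(math.atan2(y, x))          # evaluate the boundary data there
    return total / n_paths

# Boundary values of the harmonic function u(x, y) = x^2 - y^2 on the unit circle;
# exact harmonic extension at (0.3, 0.2): 0.3^2 - 0.2^2 = 0.05.
estimate = dirichlet_mc((0.3, 0.2), lambda theta: math.cos(2.0 * theta))
```

With \(2\cdot 10^{4}\) paths the estimate should agree with the exact value \(0.05\) up to a Monte Carlo error of the order of \(10^{-2}\).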

Next, we show how the classical Liouville’s theorem can be proved with the help of stochastic Clifford analysis:

Theorem 4.13

(Liouville’s theorem) Any bounded monogenic function in \({\mathbb {R}}^{n+1}\) is constant.

Proof

Let f be a monogenic function such that \(|f({\mathbf {x}})|< M < \infty \) for all \({\mathbf {x}}\in {\mathbb {R}}^{n+1}\). Further, let \({\mathbf {x}},{\mathbf {y}}\) be two distinct points in \({\mathbb {R}}^{n+1}\), and let H be the hyperplane such that the reflection in H takes \({\mathbf {x}}\) to \({\mathbf {y}}\). Let \(\{{\mathbf {B}}(t) :t\ge 0\}\) be a Clifford Brownian motion started at \({\mathbf {x}}\), and \(\{{\mathbf {B}}^*(t) :t \ge 0\}\) its reflection in H. Setting \(\tau (H)=\inf \{t\ge 0 :{\mathbf {B}}(t)\in H \}\), the strong Markov property and the symmetry of Brownian motion yield

$$\begin{aligned} \{ {\mathbf {B}}(t) :t\ge \tau (H)\} {\mathop {=}\limits ^{d}} \{{\mathbf {B}}^*(t) :t\ge \tau (H)\} . \end{aligned}$$
(4.2)

The monogenicity of f implies that \(E\left[ f({\mathbf {B}}(t))\right] = f({\mathbf {x}})\), and decomposing this expectation into the events \(\{t<\tau (H)\}\) and \(\{t\ge \tau (H)\}\) we get

$$\begin{aligned} f({\mathbf {x}}) = E\left[ f({\mathbf {B}}(t)) {\mathbbm {1}}_{\{t<\tau (H)\}}\right] + E\left[ f({\mathbf {B}}(t)) {\mathbbm {1}}_{\{t\ge \tau (H)\}}\right] . \end{aligned}$$

Using (4.2) we obtain

$$\begin{aligned} \left| f({\mathbf {x}})-f({\mathbf {y}})\right| &= \left| E\left[ f({\mathbf {B}}(t)){\mathbbm {1}}_{\{t< \tau (H)\}}\right] - E\left[ f({\mathbf {B}}^*(t)){\mathbbm {1}}_{\{t< \tau (H)\}}\right] \right| \\ &\le 2 M\, P\{t< \tau (H)\} \rightarrow 0, \quad \text { as } t\rightarrow \infty . \end{aligned}$$

Thus \(f({\mathbf {x}})=f({\mathbf {y}}),\) and since \({\mathbf {x}}\) and \({\mathbf {y}}\) were chosen arbitrarily, f must be constant. \(\square \)
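The proof above rests on the quantitative fact that \(P\{t< \tau (H)\}\rightarrow 0\) as \(t\rightarrow \infty \). Projecting the Brownian motion onto the unit normal of H reduces this to a one-dimensional hitting time, for which the reflection principle gives \(P\{\tau (H)>t\}={\text {erf}}\big (d/\sqrt{2t}\big )\) for a starting point at distance d from H. The following sketch (parameters and names are our own, chosen for illustration) checks this decay by direct simulation.

```python
import numpy as np
from math import erf, sqrt

def survival_prob(d=1.0, t=4.0, n_paths=20000, n_steps=4000, seed=1):
    """Estimate P{tau(H) > t} for Brownian motion started at distance d from
    the hyperplane H; by projection this is a one-dimensional computation."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    pos = np.full(n_paths, d)
    alive = np.ones(n_paths, dtype=bool)   # paths that have not yet touched H
    for _ in range(n_steps):
        pos[alive] += rng.normal(0.0, sqrt(dt), alive.sum())
        alive &= pos > 0.0                 # stop every path that crossed H
    return alive.mean()

estimate = survival_prob()
exact = erf(1.0 / sqrt(2.0 * 4.0))         # reflection principle, d = 1, t = 4
```

The discrete monitoring of the barrier slightly overestimates the survival probability, but the estimate is consistent with the closed-form value and visibly decays as t grows.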

5 Stochastic Integral and Itô Formula

We start this section by defining the Itô integral:

Definition 5.1

Let \(X=M+A\) be a semimartingale with martingale part \(M\in {\mathcal {M}}_{0, loc}\) and finite variation part A. Further, let \(F\in {\mathbb {L}}^2_{loc}(M)\); then the Itô integral of F with respect to X is the stochastic process of the form

$$\begin{aligned} \int _0^* F(s) dX(s) = \int _0^* F(s) dM(s) + \int _0^* F(s) dA(s), \end{aligned}$$

where the first term on the right-hand side is an Itô integral with respect to the martingale M, and the second term is a (pathwise) Lebesgue–Stieltjes integral.

Before introducing the Itô formula, let us fix the notation: we write, e.g., \(X_{i}(\cdot )\) for the \({\mathbf {e}}_{i}\)-component of an n-dimensional stochastic process. The Itô formula is presented in the following theorem:

Theorem 5.2

(Itô’s formula, [13, 23]) Let \(f: [0,\infty )\times {\mathbb {R}}^n \rightarrow {\mathbb {R}}\) and suppose that the derivatives \(f_t\), \(f_{x_i}\) and \(f_{x_ix_j}\) exist and are continuous for all \(1\le i,j \le n.\) For \(i=1, \ldots , n,\) suppose that

$$\begin{aligned} X_{i}(\cdot ) = X_{i}(0) + M_{i}(\cdot ) + A_{i}(\cdot ) \end{aligned}$$

is a continuous semimartingale with martingale part \(M_i\) and finite variation part \(A_i\). Then, the following relation holds for all \(t\):

$$\begin{aligned} f(t, X(t)) - f(0, X(0)) &= \int _0^t f_t(s, X(s))\, ds + \sum _{i=1}^n \int _0^t f_{x_i}(s, X(s))\, dM_i(s) \\ &\quad + \sum _{i=1}^n \int _0^t f_{x_i}(s, X(s))\, dA_i(s) \\ &\quad + \frac{1}{2} \sum _{i,j=1}^n \int _0^t f_{x_{i}x_{j}}(s, X(s))\, d\langle M_i, M_j \rangle _s. \end{aligned}$$

Remark 5.3

[13, 23] Very often, Itô’s formula is abbreviated as follows

$$\begin{aligned} df(t,X(t)) &= f_t(t,X(t))\,dt + \sum _{i=1}^n f_{x_i}(t, X(t))\,dX_i(t) \\ &\quad + \frac{1}{2} \sum _{i,j=1}^n f_{x_{i}x_{j}}(t, X(t))\, dX_i(t)\,dX_j(t), \end{aligned}$$

with the convention that \(dX_i(t) = dM_i(t) + dA_i(t)\), where

$$\begin{aligned} dA_i(t) dA_j(t) = dA_i(t) dM_j(t) = 0, \text{ and } dM_i(t)dM_j(t) = d\langle M_i, M_j \rangle _t . \end{aligned}$$
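These conventions can be checked numerically in the scalar case. For \(f(x)=x^{2}\) and \(X=B\) a standard Brownian motion, the abbreviated formula yields \(d(B(t)^{2})=2B(t)\,dB(t)+dt\), i.e. \(B(t)^{2}=2\int _{0}^{t}B\,dB+t\). The sketch below (illustrative code of our own, not from the text) approximates the Itô integral by left-endpoint Riemann sums; the non-anticipating left-endpoint evaluation is precisely what distinguishes the Itô integral from the Stratonovich one.

```python
import numpy as np

def ito_check(t=1.0, n_steps=100000, seed=2):
    """Verify B_t^2 = 2 * int_0^t B dB + t along one simulated Brownian path."""
    rng = np.random.default_rng(seed)
    dB = rng.normal(0.0, np.sqrt(t / n_steps), n_steps)  # Brownian increments
    B = np.concatenate(([0.0], np.cumsum(dB)))           # path on the time grid
    ito_integral = np.sum(B[:-1] * dB)  # left endpoints: non-anticipating sums
    return B[-1] ** 2, 2.0 * ito_integral + t

lhs, rhs = ito_check()
```

The discrepancy between the two sides equals \(\sum _i (\Delta B_i)^2 - t\), which vanishes at rate \(O(n^{-1/2})\) in the number of steps.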

The next lemma provides a generalisation of the Itô formula to the Clifford setting:

Lemma 5.4

Let \(f:{\mathcal {C}}\ell (n) \rightarrow {\mathcal {C}}\ell (n)\) be twice continuously differentiable, and let \(X(t)=X_0(t)+{\mathbf {e}}_{1}X_{1}(t)+\cdots + {\mathbf {e}}_{n}X_{n}(t)\) be a continuous Clifford martingale. Then the Itô formula in the Clifford setting is given by

$$\begin{aligned} df(t,X(t)) &= f_t(t,X(t))\,dt + dZ_0(t)(D f) + dZ_1(t)\, \partial _{x_1}f + \cdots + dZ_n(t)\, \partial _{x_n}f \\ &\quad + \frac{1}{2} \sum _{i=0}^n \left( dZ_0(t)\,dX_i(t)\, D\left( \frac{\partial f}{\partial x_i}\right) + \sum _{j=0}^n dZ_j(t)\,dX_i(t)\, \frac{\partial ^2 f}{\partial x_j \partial x_i }\right) , \end{aligned}$$

where \(dZ_0(t) = dX_0(t)\) and \(dZ_j(t) = {\mathbf {e}}_j dX_0(t) + dX_j (t).\)

Proof

The proof of this lemma is based on the classical Itô formula; additionally, we need to compute the \({\mathcal {C}}\ell (n)\)-valued differential 1- and 2-forms. Using the isomorphism between \({\mathbb {H}}_n\) and \({\mathbb {R}}^{n+1}\), we consider the mapping \(f: {\mathbb {R}}^{n+1} \mapsto C\ell (n)\) as a function of the form

$$\begin{aligned} f: {\mathbb {H}}_n \mapsto C\ell (n), \end{aligned}$$

and its differential at \({\mathbf {z}}\in {\mathbb {H}}_n\) is then given by an \({\mathbb {R}}\)-linear map \(df_{{\mathbf {z}}}: {\mathbb {H}}_n\mapsto C\ell (n)\). By identifying the tangent space at each point of \({\mathbb {H}}_n\) with \({\mathbb {H}}_n\) itself, the differential, as a \(C\ell (n)\)-valued 1-form, can be written as follows

$$\begin{aligned} df = \partial _{x_0}f dx_0 + \partial _{x_1}f dx_1 + \cdots + \partial _{x_n}f dx_n. \end{aligned}$$

Applying \(x_0 = z_0\), \(x_k = z_0{\mathbf {e}}_k + z_k\) (resp. \(x_0 = z_0\), \(x_k = {\mathbf {e}}_kz_0 + z_k)\), \(k=1,\ldots , n\), we get

$$\begin{aligned} df &= (f D)\,dz_0 + \partial _{x_1}f\, dz_1 + \cdots + \partial _{x_n}f\, dz_n \\ &= dz_0(D f) + dz_1\, \partial _{x_1}f + \cdots + dz_n\, \partial _{x_n}f, \end{aligned}$$

where the \(n+1\) basic hypercomplex 1-forms \(dz_k\) are defined by \(dz_0 = dx_0\) and \(dz_k = -{\mathbf {e}}_k dx_0 + dx_k\) for \(k=1,\ldots , n\). It is well known that \(d^{2}=0\) in the classical calculus of differential forms. Nonetheless, for introducing a Clifford Itô formula, we need to compute the second differential of f explicitly with the help of df:

$$\begin{aligned} d^2f = d(df) &= \sum _{j=0}^n \frac{\partial ^2 f}{\partial x_j\partial x_0}\, dx_j\,dx_0 + \sum _{i=1}^n\sum _{j=0}^n \frac{\partial ^2 f}{\partial x_{i}\partial x_{j}}\, dx_j\,dx_i \\ &= dz_0\,dx_0\, D\left( \frac{\partial f}{\partial x_0}\right) + \sum _{j=0}^n dz_j\, dx_0\, \frac{\partial }{\partial x_j}\left( \frac{\partial f}{\partial x_0}\right) \\ &\quad + \sum _{i=1}^n \left( dz_0\,dx_i\, D\left( \frac{\partial f}{\partial x_i}\right) + \sum _{j=0}^n dz_j\, dx_i\, \frac{\partial }{\partial x_j}\left( \frac{\partial f}{\partial x_i}\right) \right) \\ &= \sum _{i=0}^n \left( dz_0\,dx_i\, D\left( \frac{\partial f}{\partial x_i}\right) + \sum _{j=0}^n dz_j\,dx_i\, \frac{\partial ^2 f}{\partial x_j \partial x_i }\right) . \end{aligned}$$

Comparing these calculations with the classical Itô formula, we obtain the statement of the lemma.

\(\square \)

Lemma 5.4 presents a general form of the Itô formula. However, more specific forms, for example when the stochastic process is a Brownian motion [11, 22], are also useful in practical applications. For connecting the discussion in Sect. 4 with the Itô formula, let us now consider the case where \({\mathbf {X}}(t)\) is a para-vector-valued standard Brownian motion with \({\mathbf {X}}(0) = {\mathbf {0}}\). The process \({\mathbf {B}}(t) = {\mathbf {x}}+{\mathbf {X}}(t)\) is called a Brownian motion starting from \({\mathbf {x}}\) (in general, one can take for \({\mathbf {B}}(0)\) any random variable independent of the process \({\mathbf {X}}\)). Since \(\langle X_i,X_j\rangle _t =\delta _{ij} t\), we can write Itô’s formula, applied to a function f, as follows

$$\begin{aligned} f({\mathbf {B}}(t)) &= f({\mathbf {B}}(0)) + \int _0^t \nabla f({\mathbf {B}}(s)) \cdot d{\mathbf {B}}(s) + \frac{1}{2} \int _0^t \Delta f({\mathbf {B}}(s))\, ds \\ &= f({\mathbf {B}}(0)) + \int _0^t dZ_0(s)(D f) + dZ_1(s)\, \partial _{x_1}f + \cdots + dZ_n(s)\, \partial _{x_n}f \\ &\quad + \frac{1}{2} \int _0^t \Delta f({\mathbf {B}}(s))\, ds, \end{aligned}$$

where \(dZ_0(t) = dB_0(t)\) and \(dZ_j(t) = {\mathbf {e}}_j dB_0(t) + dB_j(t)\). If f is a monogenic function, the formula reduces to

$$\begin{aligned} f({\mathbf {B}}(t)) = f({\mathbf {B}}(0)) + \int _0^t dZ_1(s)\, \partial _{x_1}f + \cdots + dZ_n(s)\, \partial _{x_n}f. \end{aligned}$$
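In the planar case this reduction reflects the fact that \(f({\mathbf {B}}(t))\) is a martingale when f is monogenic, so in particular \(E\left[ f({\mathbf {B}}(t))\right] = f({\mathbf {B}}(0))\) componentwise. A minimal Monte Carlo sanity check of this identity for the harmonic component \(u(x,y)=x^{2}-y^{2}\) (illustrative code; the function, starting point, and parameters are our own choices):

```python
import numpy as np

def mean_of_harmonic(x0, t=1.0, n_samples=200000, seed=3):
    """Monte Carlo estimate of E[u(B(t))] for the harmonic component
    u(x, y) = x^2 - y^2, with B a planar Brownian motion started at x0."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, np.sqrt(t), size=(n_samples, 2))  # increments B(t) - B(0)
    x, y = x0[0] + W[:, 0], x0[1] + W[:, 1]
    return float(np.mean(x ** 2 - y ** 2))

# Martingale property: the estimate should match u(0.5, 0.1) = 0.5^2 - 0.1^2 = 0.24.
estimate = mean_of_harmonic((0.5, 0.1))
```

The drift terms cancel exactly because \(\Delta u = 0\); only the Monte Carlo sampling error remains.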

Remark 5.5

It is important to remark that the Itô formula presented in Lemma 5.4 is a first attempt to develop this tool in the context of Clifford analysis. Moreover, in contrast to the classical results on conformal martingales and complex Brownian motion, see again [13, 24], the Itô formula in Lemma 5.4 does not simplify as significantly as one would expect when f is a monogenic function. Therefore, further analysis and studies are needed in this direction.

Remark 5.6

Additionally, we would like to remark that all constructions presented in this section would also work for a general Clifford-valued function f, and not only for the para-vector-valued functions considered above. However, because the Clifford Brownian motion is para-vector-valued and serves as the variable in the Itô formula, it seems more appropriate to work with para-vector-valued functions at the moment. Alternatively, a more general notion of Clifford Brownian motion could be introduced, but this goes beyond the scope of the current paper.

6 Summary and Outlook

In this paper, we have further developed ideas towards stochastic Clifford analysis by discussing random variables, stochastic processes, martingales, Brownian motion, and the Itô formula in the Clifford setting. As a practical application of the tools developed in this paper, we have illustrated how classical results in Clifford analysis, such as the solvability of a Dirichlet problem for monogenic functions and Liouville’s theorem, can be proved from the stochastic point of view. Further, the Itô formula introduced in this paper is a basis for addressing stochastic partial differential equations in Clifford analysis, which is the scope of our future work.