1 Introduction

A Lévy process is a random process \(\left\{ X(t): t \ge 0\right\} \) whose increments \(X(t) - X(s)\) are independent and stationary, in the sense that the distribution \(\mu _{s,t}\) of \(X(t) - X(s)\) depends only on \(t-s\),

$$\begin{aligned} \mu _{s,t} = \mu _{t-s}. \end{aligned}$$

A Lévy process is a Markov process, and its transition operators \(\mathcal{K }_{s,t}\) defined via

$$\begin{aligned} \mathbb E \left[f(X(t)) | s\right] = (\mathcal{K }_{s,t} f)(X(s)) \end{aligned}$$

are also stationary, \(\mathcal{K }_{s,t} = \mathcal{K }_{t-s}\); in fact

$$\begin{aligned} \mathcal{K }_{s,t}(f)(x) = \int _{\mathbb{R }} f(x + y) \,d\mu _{t-s}(y). \end{aligned}$$

Here \(\mathbb E \left[\cdot | s\right]\) is the conditional expectation onto time \(s\). The maps \(\left\{ \mathcal{K }_t : t \ge 0\right\} \) form a semigroup, which has a generator \(A\). In fact \(A = \ell (- i \partial _x)\), where \(\ell \) is the cumulant generating function of the process. It can also be expressed in terms of the Lévy measure of the process. See Sect. 3 for more details.

In a groundbreaking paper [10], Biane showed that processes with freely independent increments, in the context of free probability [31, 35], are also Markov processes. He also noted that there are two distinct classes of such processes, which can be called (additive) free Lévy processes (Biane also investigated multiplicative processes, which we will not study here). Free Lévy processes of the first kind (FL1) have stationary increments, in the sense that each \(X(t) - X(s)\) has distribution \(\mu _{t-s}\). Then \(\left\{ \mu _t: t \ge 0\right\} \) form a semigroup with respect to free convolution \(\boxplus \). These processes are Markov, but their transition operators typically are not stationary. Free Lévy processes of the second kind (FL2) have stationary transition operators \(\mathcal{K }_t\), which form a semigroup, but their increments are typically not stationary: if \(X(t) - X(s)\) has distribution \(\mu _{s,t}\), then we only have the property \(\mu _{s,t} \boxplus \mu _{t,u} = \mu _{s,u}\) (so that the measures form a free convolution hemigroup).

In this paper we compute the generators of free Lévy processes with finite variance. Since for FL1, the transition operators do not form a semigroup, they have a family of generators \(\left\{ A_t: t \ge 0\right\} \). In the case of FL2, there is a genuine generator \(A\). If the distribution of the process has finite moments, using the free Itô formula from [2], one can express the generators in terms of the \(R\)-transform, the free analog of \(\ell \). However, it is unclear whether such an expression can be assigned a meaning in the absence of moments. But there is an alternative description. For a measure \(\nu \), denote

$$\begin{aligned} L_\nu (f)(x) = \int _{\mathbb{R }} \frac{f(x) - f(y)}{x - y} \,d\nu (y), \end{aligned}$$

a singular integral operator. A free convolution semigroup with finite variance is characterized by the free canonical pair \((\alpha , \rho )\), where \(\alpha \in \mathbb{R }\), and (with appropriate normalization) \(\rho \) is a probability measure. Further, denote by \(\gamma _t\) the semicircular distribution at time \(t\), so that \(\rho \boxplus \gamma _t\) is the free analog of heat flow started at \(\rho \). Then the generator of the corresponding free Lévy process of the first kind is

$$\begin{aligned} \alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _t}. \end{aligned}$$
(1)
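For an atomic measure \(\nu \), \(L_\nu \) is simply a weighted sum of divided differences, so the definition is easy to experiment with numerically. A minimal sketch (the two-atom \(\nu \) and all names are illustrative, not from the paper):

```python
# L_nu(f)(x) = integral of (f(x) - f(y))/(x - y) dnu(y), for an atomic measure nu.
def L(f, atoms, weights, x):
    total = 0.0
    for y, w in zip(atoms, weights):
        if abs(x - y) < 1e-12:
            # on the diagonal the divided difference becomes f'(x)
            h = 1e-6
            total += w * (f(x + h) - f(x - h)) / (2 * h)
        else:
            total += w * (f(x) - f(y)) / (x - y)
    return total

# For f(x) = x^2, (f(x) - f(y))/(x - y) = x + y, so L_nu(f)(x) = x + mean(nu).
nu_atoms, nu_weights = [-1.0, 1.0], [0.5, 0.5]   # symmetric Bernoulli, mean 0
print(L(lambda t: t * t, nu_atoms, nu_weights, 0.7))   # ≈ 0.7
```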

In fact, we show that for \(\alpha = 0\), the full Markov structure of this process coincides with that of the free Brownian motion \(\left\{ Y_t : t \ge 0\right\} \) with \(Y_0\) having distribution \(\rho \). This statement clearly has no classical analogue.

In addition to free probability theory, there are only two other “natural” non-commutative probability theories [28], the Boolean and the monotone. These theories do not, at least at this point, approach the wealth of structure of free or classical probability. However, one reason to study them is that they turn out to have unexpected connections to free probability. Indeed, we show that the generator of a free Lévy process of the second kind is \(\alpha \partial _x + \partial _x L_{\rho }\), where \(\rho \) now is the monotone canonical measure. In fact, Biane in [10] already noted that each FL2 is associated to a semigroup of analytic maps, and Franz in [22] observed that exactly such semigroups are associated with monotone Lévy processes: the measures \(\mu _{0,t}\) do not form a free semigroup, but they do form a semigroup under monotone convolution. In the monotone case itself there is no distinction between the Lévy processes of the first and second kind (so the free case is really special in this respect), and the generator of a monotone Lévy process is related to its monotone Lévy measure in the expected way [21].

We also compute the generators of the \(q\)-Brownian motion. This non-commutative process was constructed in [15], and investigated in detail in [13]. Building on the work of [19], we prove a functional Itô formula for it (for polynomial functions), from which the formulas for generators easily follow.

We note that the study of “time-dependent generators”, or more usually the inverse problem—how to reconstruct \(\left\{ \mathcal{K }_{s,t}\right\} \) from \(\left\{ A_t\right\} \)—goes back to [24]. This is typically formulated as the linear non-autonomous Cauchy problem, and a significant number of general results on its solution is known; see for example Sect. VI.9 of [20, 29], and their extensive references. We do not use these general results in the paper, but this may be a matter for further study.

The paper is organized as follows. After the introduction and some general results in Sect. 2, in Sect. 3 we give a short overview of the generators for classical processes. The next section, covering free Lévy processes, is the main part of the paper. We show that transition operators for such a process form a strongly continuous contractive family on \(L^p(\mathbb{R }, \,dx)\), and that their generators are given by formula (1) on large domains in \(L^p(\mathbb{R }, \,dx)\) and \(C_0(\mathbb{R })\). In Sect. 4.5 we find the closures of these generators. We also compute the generators of FL2 processes. In Sect. 4.6, we show that the operator \(L_\nu \) itself is an isometry between certain \(L^2\) spaces, and compute the “carré du champ” operator corresponding to \(\partial _x L_\nu \). In Sect. 5, we compute the Itô formula and generators for the \(q\)-Brownian motion, and in a short final section we apply similar analysis to the two-state free Brownian motions from [5].

2 Preliminaries

2.1 Generalities and definitions

Let \((\mathcal{M }, \mathbb{E })\) be a tracial non-commutative probability space, where \(\mathcal{M }\) is a von Neumann algebra and \(\mathbb{E }\) is a tracial normal state on it. Possibly unbounded random variables are self-adjoint elements of the algebra \(\widetilde{\mathcal{M }}\) of operators affiliated to \(\mathcal{M }\).

A process is a family of (possibly non-commutative) random variables \(\left\{ X(t): t\!\ge \! 0\right\} \) in a (possibly non-commutative) probability space \((\mathcal{M }, \mathbb{E })\).

We will assume that \(X(0) = 0\), and will denote by \(\mu _{s,t}\) the distribution of \(X(t) - X(s)\) with respect to \(\mathbb{E }\) (for \(s \le t\)), by \(\mu _t = \mu _{0,t}\) the distribution of \(X(t)\), and \(\mu = \mu _1\). If \(\star \) is a convolution operation corresponding to some non-commutative independence, and the increments of \(\left\{ X(t)\right\} \) are independent in that sense, then

$$\begin{aligned} \mu _{s,t} \star \mu _{t,u} = \mu _{s,u}. \end{aligned}$$

For an unbounded operator \(X\), we will denote by \(\mathcal{D }(X)\) its domain, and by \((X, \mathcal{D })\) its restriction to a smaller domain \(\mathcal{D }\).

Definition 1

For a family of distributions \(\left\{ \mu _t\right\} \), we say that the functional \(L_t\) is its generator at time \(t\) with domain \(\mathcal{D }(L_t)\) if

$$\begin{aligned} \partial _t \int _{\mathbb{R }} f(x) \,d\mu _t(x) = L_t[f] \end{aligned}$$

for \(f \in \mathcal{D }(L_t)\). Frequently,

$$\begin{aligned} L_t[f] = \int _{\mathbb{R }} (A_t f)(x) \,d\mu _t(x) \end{aligned}$$

for an operator \(A_t\). If \(\left\{ X(t)\right\} \) is a process with distributions \(\left\{ \mu _t\right\} \), this is equivalent to

$$\begin{aligned} \partial _t \mathbb E \left[f(X(t))\right] = \mathbb E \left[(A_t f)(X(t))\right]. \end{aligned}$$
(2)

Note however that this property does not determine \(A_t\), even on \(\mathcal{D }(L_t)\).

For operators \(\left\{ \mathcal{K }_{s,t}\right\} \) on a Banach space \(\mathcal{A }\), we write

$$\begin{aligned} \left. \frac{\partial }{\partial t} \right|_{t=s} \mathcal{K }_{s,t} = A_s \end{aligned}$$

if

$$\begin{aligned} \lim _{h \rightarrow 0^+} \left\Vert{\frac{1}{h} \left( \mathcal{K }_{s, s+h} f - \mathcal{K }_{s,s} f \right) - A_s f}\right\Vert= 0. \end{aligned}$$
(3)

In this case we say that \(A_s\) is the generator of the family \(\left\{ \mathcal{K }_{s,t}\right\} \) at time \(s\). Its domain \(\mathcal{D }(A_s) \subset \mathcal{A }\) consists of all \(f \in \mathcal{A }\) for which the limit (3) holds.

Now suppose that the process \(\left\{ X(t)\right\} \) is a Markov process. That is, denoting by \(\mathbb E \left[\cdot | \le s\right]\) the \(\mathbb{E }\)-preserving conditional expectation onto the von Neumann algebra generated by \(\left\{ X_u : u \le s\right\} \), for any \(f \in L^\infty (\mathbb{R }, \,dx)\), \(\mathbb E \left[f(X(t)) | \le s\right]\) is in the von Neumann algebra generated by \(X(s)\). (See the Introduction and Sect. 4 of [10] for more details, and also for a weaker requirement, sufficient for our purposes, that the classical version of \(\left\{ X(t)\right\} \) is a Markov process.) In this case, the corresponding transition operators are determined by

$$\begin{aligned} \mathbb E \left[f(X(t)) | \le s\right] = (\mathcal{K }_{s,t} f)(X(s)). \end{aligned}$$

We say that the operator \(A_s\) is the generator of the process at time \(s\) if it is the generator of its family of transition operators. Note that if \(A_t\) exists, it has the property in Eq. (2).

Proposition 1

Let \((\mathcal{A }, \Vert \cdot \Vert )\) be a Banach space, \(\left\{ \mathcal{K }_{s,t}\right\} \) a family of contractions on \(\mathcal{A }\) such that

$$\begin{aligned} \mathcal{K }_{s,t} \circ \mathcal{K }_{t,v} = \mathcal{K }_{s,v}, \quad \mathcal{K }_{s,s} = I, \end{aligned}$$

and \(\mathcal{K }_{s,t}\) is strongly continuous in \(t\). Let \(\left\{ A_t\right\} \) be the generators of \(\left\{ \mathcal{K }_{s,t}\right\} \) in the sense of Eq. (3), and consider a subspace \(\mathcal{D } \subset \bigcap _{t} \mathcal{D }(A_t)\) such that for any \(f \in \mathcal{D }\), \(A_t f\) is a continuous function of \(t\).

  1. (a)

    Each \(A_t\) is dissipative and closable.

  2. (b)

    Let \(\mathcal{B } \subset \mathcal{A }\) be another subspace such that \(\mathcal{D } \subset \mathcal{B }\), and \(\Vert \cdot \Vert _{\mathcal{B }}\) be another norm on \(\mathcal{B }\) such that \(\mathcal{D }\) is \(\Vert \cdot \Vert _{\mathcal{B }}\)-dense in \(\mathcal{B },\; \Vert f \Vert \le \Vert f \Vert _{\mathcal{B }}\), and for \(f \in \mathcal{D }\),

    $$\begin{aligned} \Vert A_t f \Vert \le \Vert f \Vert _{\mathcal{B }}. \end{aligned}$$
    (4)

    Then Eq. (3) holds for \(f \in \mathcal{B }\), so that \(\mathcal{B } \subset \mathcal{D }(A_t)\) for all \(t\).

  3. (c)

    The closure \(\overline{\mathcal{D }}^{\Vert \cdot \Vert _A}\) of \(\mathcal{D }\) in the sup-graph norm

    $$\begin{aligned} \Vert f \Vert _A = \Vert f \Vert + \sup _t \Vert A_t f \Vert \end{aligned}$$

    is in \(\mathcal{D }(A_t)\) for all \(t\).

Remark 1

Note that strong continuity of \(\mathcal{K }_{s,t}\) does not imply continuity of \(\left\{ A_t\right\} \). Indeed, already in one dimension, if \(\mathcal{K }_{s,t} = e^{f(t) - f(s)}\), then \(A_t = f^{\prime }(t)\).
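This one-dimensional example is easy to check numerically (the concrete choice \(f(t) = t^2\) below is purely illustrative):

```python
# K_{s,t} = exp(f(t) - f(s)) acting on scalars; its generator at time s is f'(s).
import math

def K(s, t, f):
    return math.exp(f(t) - f(s))

f = lambda t: t * t               # so the generator should be A_s = f'(s) = 2s
s, h = 1.5, 1e-6
numeric_A = (K(s, s + h, f) - K(s, s, f)) / h
print(numeric_A)                  # ≈ 2*s = 3.0
```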

Proof

For part (a), recall from Sect. X.8 of [32] that for \(f \in \mathcal{A }\), a normalized tangent functional \(\phi _f\) is an element of \(\mathcal{A }^*\) such that \(\Vert \phi _f \Vert = \Vert f \Vert \) and \(\phi _f[f] = \Vert f \Vert ^2\). For any such functional,

$$\begin{aligned} \mathfrak R \phi _f[A_s f]&= \lim _{h \rightarrow 0^+} \frac{1}{h} \mathfrak R \phi _f[\mathcal{K }_{s, s+h} f - f] \le \lim _{h \rightarrow 0^+} \frac{1}{h} \left( \vert \phi _f[\mathcal{K }_{s, s+h} f] \vert - \Vert f \Vert ^2 \right) \\&\le \lim _{h \rightarrow 0^+} \frac{1}{h} \left( \Vert f \Vert \cdot \Vert \mathcal{K }_{s, s+h} f \Vert - \Vert f \Vert ^2 \right) \le 0 \end{aligned}$$

since \(\mathcal{K }_{s, s+h}\) is a contraction, so \(A_s\) is dissipative. Combining this with Theorem II.3.23 and Proposition II.3.14(iv) of [20], it follows that \(A_s\) is closable.

For part (b), we first note that for \(s \le t\), since \(\mathcal{K }_{s,t}\) is a contraction, for \(f \in \mathcal{D }\),

$$\begin{aligned} \lim _{h \rightarrow 0^+} \left\Vert{\frac{1}{h} \left( \mathcal{K }_{s, t+h} f \!- \!\mathcal{K }_{s,t} f \right) \!- \!\mathcal{K }_{s,t}(A_t f)}\right\Vert&= \lim _{h \rightarrow 0^+} \left\Vert\! {\mathcal{K }_{s,t} \left( \frac{1}{h} \left( \mathcal{K }_{t, t+h} f \!-\! f \right) \!- \!A_t f\!\right)} \!\right\Vert\\&\le \lim _{h \rightarrow 0^+} \left\Vert{\frac{1}{h} \left( \mathcal{K }_{t, t+h} f - f \right) - A_t f}\right\Vert= 0, \end{aligned}$$

so

$$\begin{aligned} \partial _t \mathcal{K }_{s,t}(f) = \mathcal{K }_{s,t}(A_t f). \end{aligned}$$

Also, since \(\mathcal{K }_{s,v}\) is a contraction, \(\mathcal{K }_{s,v} A_v f\) is continuous in \(v\). Therefore we have the Riemann integral identity

$$\begin{aligned} \mathcal{K }_{s,t}(f) = f + \int _s^t \mathcal{K }_{s,v}(A_v f) \,dv. \end{aligned}$$

Since for \(f \in \mathcal{D }\), (4) holds, \(A_t\) has a continuous extension \((\mathcal{B }, \Vert \cdot \Vert _{\mathcal{B }}) \rightarrow \mathcal{A }\) satisfying the same property. We will show that this continuous extension \(\tilde{A}_t\) coincides with \(A_t\).

Fix \(g \in \mathcal{B }\) and a time \(t\). For each \(\varepsilon > 0\), we can find an \(f \in \mathcal{D }\) such that \(\Vert f - g \Vert _{\mathcal{B }} < \varepsilon \), so that \(\Vert f - g \Vert < \varepsilon \),

$$\begin{aligned} \Vert \mathcal{K }_{s,t} f - \mathcal{K }_{s,t} g \Vert < \varepsilon , \end{aligned}$$

and

$$\begin{aligned} \left\Vert{\mathcal{K }_{s,v} (\tilde{A}_v f) - \mathcal{K }_{s,v} (\tilde{A}_v g)}\right\Vert\le \left\Vert{\tilde{A}_v f - \tilde{A}_v g}\right\Vert< \varepsilon \end{aligned}$$

for all \(s \le v \le t\). Then

$$\begin{aligned} \left\Vert{\mathcal{K }_{s,t} g - g - \int _s^t \mathcal{K }_{s,v}(\tilde{A}_v g) \,dv}\right\Vert< 2 \varepsilon + (t-s) \varepsilon . \end{aligned}$$

So

$$\begin{aligned} \mathcal{K }_{s,t}(g) = g + \int _s^t \mathcal{K }_{s,v}(\tilde{A}_v g) \,dv \end{aligned}$$
(5)

(in particular, the integral is well defined), and

$$\begin{aligned} A_s g = \left. \frac{\partial }{\partial t} \right|_{t=s} \mathcal{K }_{s,t} g = \tilde{A}_s g. \end{aligned}$$

Finally, for part (c) we take \(\mathcal{B } = \overline{\mathcal{D }}^{\Vert \cdot \Vert _A}\) and \(\Vert \cdot \Vert _{\mathcal{B }} = \Vert \cdot \Vert _A\), and apply part (b). \(\square \)

Remark 2

Under the assumptions of the preceding proposition, from Eq. (5) we also get

$$\begin{aligned} \left. \frac{\partial }{\partial s} \right|_{s=t} \mathcal{K }_{s,t} g&= - A_t g,\\ A_s \mathcal{K }_{s,t}(g)&= - \partial _s \mathcal{K }_{s,t}(g), \end{aligned}$$

which in turn implies

$$\begin{aligned} \mathcal{K }_{s,t}(g) = g + \int _s^t A_v \mathcal{K }_{v,t}(g) \,dv. \end{aligned}$$

Lemma 2

Let \(\left\{ X(t)\right\} \) be a non-commutative Markov process, with transition operators \(\left\{ \mathcal{K }_{s,t}\right\} \) and distributions \(\left\{ \mu _t\right\} \). Let \((\mathcal{A }, \Vert \cdot \Vert )\) be either \((C_0(\mathbb{R }), \Vert \cdot \Vert _\infty )\) or \(L^p(\mathbb{R }, \,dx)\), and suppose that \(\left\{ \mathcal{K }_{s,t}\right\} \), their generators \(\left\{ A_s\right\} \), and \(\mathcal{D } \subset \mathcal{A }\) satisfy the hypotheses of Proposition 1. Then for \(f \in \mathcal{D }\),

$$\begin{aligned} f(X(t)) - \int _0^t (A_v f)(X(v)) \,dv \end{aligned}$$
(6)

is a martingale. Conversely, suppose \(\left\{ B_s\right\} \) is another family of operators strongly continuous on \(\mathcal{D }\) such that (6) is a martingale. Then for \(f \in \mathcal{D }\), \(A_s f = B_s f\) in the restriction of \(\mathcal{A }\) to \(\mathrm{supp }(\mu _s)\).

Proof

As shown in Proposition 1, under these assumptions, for \(f \in \mathcal{D }\)

$$\begin{aligned} \mathcal{K }_{s,t}(f) = f + \int _s^t \mathcal{K }_{s,v}(A_v f) \,dv. \end{aligned}$$

It follows that

$$\begin{aligned}&\mathbb E \left[\left. f(X(t)) - \int _0^t (A_v f)(X(v)) \,dv \right| \le s\right] \\&\quad = (\mathcal{K }_{s,t}(f))(X(s)) - \int _0^s (A_v f)(X(v)) \,dv - \int _s^t \mathcal{K }_{s,v}(A_v f)(X(s)) \,dv \\&\quad = f(X(s)) - \int _0^s (A_v f)(X(v)) \,dv \end{aligned}$$

and the process is a martingale.

Conversely, suppose that \(f(X(t)) - \int _0^t (B_v f)(X(v)) \,dv\) is a martingale. The last equality then implies that

$$\begin{aligned} \mathcal{K }_{s,t}(f)(X(s)) = f(X(s)) + \int _s^t \mathcal{K }_{s,v}(B_v f)(X(s)) \,dv. \end{aligned}$$

It follows that in \(C_0(\mathrm{supp }(\mu _s))\) or \(L^p(\mathrm{supp }(\mu _s), \,dx)\),

$$\begin{aligned} \mathcal{K }_{s,t}(f) = f + \int _s^t \mathcal{K }_{s,v}(B_v f) \,dv, \end{aligned}$$

and therefore in this space

$$\begin{aligned} B_v f = \left. \frac{\partial }{\partial t} \right|_{t=v} \mathcal{K }_{v,t}(f) = A_v f. \end{aligned}$$

\(\square \)

2.2 Cumulants

Let \(\left\{ \mu _t\right\} \) be a convolution semigroup with respect to some convolution operation \(\star \). In all cases we will consider, \(\mu _0 = \delta _0\), \(\mu _t[x] = t \cdot \mu [x]\), and \(\left\{ \mu _t\right\} \) is weakly continuous. Almost by definition (see Property (K1\(^{\prime }\)) in Sect. 3 of [23]), the cumulant functional of \(\mu \) corresponding to the convolution operation \(\star \) is

$$\begin{aligned} C_\mu [f] = \left. \frac{\partial }{\partial t} \right|_{t=0} \mu _t[f]. \end{aligned}$$
(7)

This approach works for all the convolutions associated to natural types of independence (tensor, free, Boolean, monotone), but also for other operations such as the \(q\)-convolution from [1].

Proposition 3

Assume that \(\mu \) has finite moments of all orders. Then, at least on the space \(\mathcal{P }\) of polynomials,

$$\begin{aligned} C_\mu [f] = \alpha f^{\prime }(0) + \int _{\mathbb{R }} \frac{f(y) - f(0) - y f^{\prime }(0)}{y^2} \,d\rho (y) \end{aligned}$$
(8)

for a finite measure \(\rho \).

Proof

First note that \(C_\mu [1] = 0\) and \(C_\mu [x] = \mu [x] = \alpha \) for some \(\alpha \). Since each \(\mu _t\) is positive, it follows that \(C_\mu \) is a conditionally positive functional on polynomials, so it has the canonical representation \(C_\mu [x^n] = \rho [x^{n-2}]\) for \(n \ge 2\), where \(\rho \) is a finite measure, the canonical measure of the semigroup. We compute, for \(f(x) = x^n\),

$$\begin{aligned} C_\mu [x^n] = \rho [x^{n-2}]&= \alpha f^{\prime }(0) + \rho \left[ \frac{f(x) - f(0) - x f^{\prime }(0)}{x^2} \right] \\&= \alpha f^{\prime }(0) + \rho \left[ \left. \frac{\partial }{\partial x} \right|_{x=0} \frac{f(x) - f(y)}{x - y} \right] \end{aligned}$$

and this formula is also valid for \(f(x) = 1\) and \(x\). \(\square \)

3 Classical Lévy processes

See Sect. 1.2 of [9] (except for a small misprint) for the following results.

Theorem

Let \(\left\{ X(t)\right\} \) be a Lévy process corresponding to the convolution semigroup \(\left\{ \mu _t\right\} \). Denote

$$\begin{aligned} \ell (\theta ) = \log \mathbb E \left[e^{i \theta X(1)}\right] = \log \int _{\mathbb{R }} e^{i \theta x} \,d\mu (x) \end{aligned}$$

the cumulant generating function of the process. Then the generator of the process is the pseudo-differential operator \(\ell (- i \partial _x)\) with dense domain

$$\begin{aligned} \left\{ f \in L^2(\mathbb{R }, dx): \int _{\mathbb{R }} \vert \ell (\theta ) \vert ^2 \vert \hat{f}(\theta ) \vert ^2 \,d\theta < \infty \right\} . \end{aligned}$$

In other words, if the process has the Lévy–Khintchine representation

$$\begin{aligned} \ell (\theta ) = i \alpha \theta - \frac{1}{2} V \theta ^2 + \int _{\mathbb{R } \backslash \{0\}} (e^{i y \theta } - 1 - i y \theta \mathbf 1 _{\vert y \vert < 1}) \Pi (dy), \end{aligned}$$

then

$$\begin{aligned} A f(x) = \alpha f^{\prime }(x) + \frac{1}{2} V f^{\prime \prime }(x) + \int _{\mathbb{R }} \left(f(x+y) - f(x) - \mathbf 1 _{\vert y \vert < 1} y f^{\prime }(x) \right) \Pi (dy). \end{aligned}$$

If \(\mu \) has mean \(\alpha \) and finite variance, we also have the Kolmogorov representation,

$$\begin{aligned} \ell (\theta ) = i \alpha \theta + \int _{\mathbb{R }} (e^{i y \theta } - 1 - i y \theta ) y^{-2} \,d\rho (y), \end{aligned}$$

where \(\rho \) is the canonical measure. In this case the generator is

$$\begin{aligned} A f (x) = \alpha f^{\prime }(x) + \int _{\mathbb{R }} \frac{f(x+y) - f(x) - y f^{\prime }(x)}{y^2} \,d\rho (y). \end{aligned}$$
(9)
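For example, for a rate-\(\lambda \) Poisson process, \(\alpha = \lambda \) and \(\rho = \lambda \delta _1\), and (9) collapses to \(A f(x) = \lambda (f(x+1) - f(x))\). A numerical sketch of property (2) in this case (illustrative, not from [9]):

```python
# Poisson process: alpha = lam, rho = lam*delta_1, so A f(x) = lam*(f(x+1) - f(x)).
# We check d/dt E[f(X(t))] = E[(A f)(X(t))] for X(t) ~ Poisson(lam*t).
import math

lam, t, h = 2.0, 0.7, 1e-5
f = lambda x: x ** 3

def E(g, t, terms=80):            # expectation under the Poisson(lam*t) distribution
    return sum(g(k) * math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
               for k in range(terms))

lhs = (E(f, t + h) - E(f, t - h)) / (2 * h)      # d/dt E[f(X(t))]
rhs = E(lambda x: lam * (f(x + 1) - f(x)), t)    # E[(A f)(X(t))]
print(lhs, rhs)                                  # the two agree numerically
```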

If the process has (say) finite exponential moments, we have moreover

$$\begin{aligned} \ell (\theta ) = \sum _{n=1}^\infty \frac{1}{n!} c_n (i \theta )^n, \end{aligned}$$

where \(\left\{ c_n\right\} \) are the cumulants [33] of (the distribution of) the process. So the generator of the process is

$$\begin{aligned} \sum _{n=1}^\infty \frac{1}{n!} c_n \partial _x^n. \end{aligned}$$

Note also that \(c_1 = \alpha \) and

$$\begin{aligned} c_n = \int _{\mathbb{R }} x^{n-2} \,d\rho (x) \end{aligned}$$

for \(n \ge 2\).
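These relations can be verified exactly in a small example: for the Poisson distribution with rate \(\lambda \), the canonical measure is \(\rho = \lambda \delta _1\), so \(c_1 = \alpha = \lambda \) and \(c_n = \rho [x^{n-2}] = \lambda \) for all \(n \ge 2\). A sketch in exact rational arithmetic (helper names illustrative):

```python
# Recover the cumulants of Poisson(lam) from its moments and check c_n = lam for all n.
from math import comb
from fractions import Fraction

lam = Fraction(3, 2)

def stirling2(n, k):              # Stirling numbers of the second kind
    if n == k == 0: return 1
    if n == 0 or k == 0: return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Touchard's formula: m_n = sum_k S(n, k) lam^k
N = 8
m = [sum(stirling2(n, k) * lam ** k for k in range(n + 1)) for n in range(N + 1)]

# moment-cumulant recursion: m_n = sum_{k=1}^n C(n-1, k-1) c_k m_{n-k}
c = [Fraction(0)] * (N + 1)
for n in range(1, N + 1):
    c[n] = m[n] - sum(comb(n - 1, k - 1) * c[k] * m[n - k] for k in range(1, n))

print(c[1:])                      # all entries equal lam = 3/2
```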

4 Free Lévy processes

4.1 Background

Let \(\mu \) be a probability measure on \(\mathbb{R }\). Its Cauchy transform is the analytic function \(G_\mu :\mathbb{C }^+ \rightarrow \mathbb{C }^-\) defined by

$$\begin{aligned} G_\mu (z) = \int _{\mathbb{R }} \frac{1}{z - x} \,d\mu (x). \end{aligned}$$

We will also denote \(F_\mu (z) = \frac{1}{G_\mu (z)}\), so that \(F_\mu : \mathbb{C }^+ \rightarrow \mathbb{C }^+\).

\(G_\mu \) is invertible in a Stolz angle near infinity, and Voiculescu’s \(R\)-transform is defined by

$$\begin{aligned} R_\mu (z) = G_\mu ^{-1}(z) - \frac{1}{z}. \end{aligned}$$
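As an illustrative sanity check (not from the paper): for the standard semicircle law, \(G(z) = (z - \sqrt{z^2 - 4})/2\), so \(G^{-1}(w) = 1/w + w\) and \(R(w) = w\):

```python
# For the standard semicircle law, G(z) = (z - sqrt(z^2 - 4))/2 and R(w) = w,
# i.e. G(1/w + w) = w; we verify this at a small real w (z then lies outside [-2, 2]).
import cmath

def G_semicircle(z):
    return (z - cmath.sqrt(z * z - 4)) / 2

w = 0.1
z = 1 / w + w                     # candidate for G^{-1}(w); here z = 10.1
print(G_semicircle(z))            # ≈ 0.1, confirming R(w) = w
```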

A free convolution \(\mu _1 \boxplus \mu _2\) of two probability measures \(\mu _1, \mu _2\) is characterized by the property that

$$\begin{aligned} G_{\mu _1 \boxplus \mu _2}^{-1}(z) = G_{\mu _1}^{-1}(z) + R_{\mu _2}(z). \end{aligned}$$

\(\mu \) is \(\boxplus \)-infinitely divisible if and only if it can be included as \(\mu = \mu _1\) in a free convolution semigroup \(\left\{ \mu _t : t \ge 0\right\} \), \(\mu _s \boxplus \mu _t = \mu _{s+t}\). This is the case if and only if \(R_\mu \) extends to an analytic function \(R_\mu : \mathbb{C }^+ \rightarrow \mathbb{C }^+ \cup \mathbb{R }\). In this case, we have the free Lévy–Khintchine representation (Theorem 5.10 of [8])

$$\begin{aligned} R_\mu (z) = \alpha + \int _{\mathbb{R }} \frac{z + x}{1 - x z} \,d\nu (x). \end{aligned}$$

Here \(\alpha \in \mathbb{R }\) and \(\nu \) is a finite measure. Moreover, if \(\mu \) has finite variance, we also have the free Kolmogorov representation,

$$\begin{aligned} R_\mu (z) = \alpha + \int _{\mathbb{R }} \frac{z}{1 - x z} \,d\rho (x). \end{aligned}$$

Here \(\alpha \) is the mean of \(\mu \), \(\rho \) is a finite measure, the free canonical measure for the semigroup \(\left\{ \mu _t\right\} \), and \((\alpha , \rho )\) is the free canonical pair. For convenience, throughout most of the paper we will rescale time so that the variance

$$\begin{aligned} \mathrm{Var }[\mu ] = \mathrm{Var }[\mu _1] = 1, \end{aligned}$$
(10)

in which case \(\rho \) is a probability measure.

We will also encounter two other convolution operations, the monotone convolution \(\triangleright \) and the Boolean convolution \(\uplus \), determined by

$$\begin{aligned} F_{\mu _1 \triangleright \mu _2}(z) = F_{\mu _1}(F_{\mu _2}(z)) \end{aligned}$$

and

$$\begin{aligned} F_{\mu _1 \uplus \mu _2}(z) = F_{\mu _1}(z) + F_{\mu _2}(z) - z. \end{aligned}$$

We will denote

$$\begin{aligned} d\gamma _t(x) = \frac{1}{2 \pi t} \sqrt{4 t - x^2} \mathbf 1 _{[-2 \sqrt{t}, 2 \sqrt{t}]}(x) \,dx \end{aligned}$$

the semicircular distributions, the analogs of the normal distributions in free probability. They form a free convolution semigroup with the free canonical pair \((0, \delta _0)\).
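A quick numerical check of the normalization (a crude midpoint rule, purely illustrative): \(\gamma _t\) has total mass \(1\) and variance \(t\).

```python
# Midpoint-rule check that gamma_t is a probability measure with variance t (t = 1).
import math

t, n = 1.0, 200000
a = 2 * math.sqrt(t)              # the support is [-2 sqrt(t), 2 sqrt(t)]
dx = 2 * a / n
mass = var = 0.0
for i in range(n):
    x = -a + (i + 0.5) * dx
    d = math.sqrt(4 * t - x * x) / (2 * math.pi * t) * dx
    mass += d
    var += x * x * d
print(mass, var)                  # ≈ 1.0 and ≈ t
```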

Remark 3

If \(\nu \) is a probability measure, there exists a probability measure \(\mu = \Phi _t[\nu ]\) with mean zero and variance \(t\) such that

$$\begin{aligned} F_{\Phi _t[\nu ]}(z) = z - t G_\nu (z). \end{aligned}$$

For the map \(\Phi = \Phi _1\) itself, see [7] and Proposition 2.2 of [27]; moreover,

$$\begin{aligned} \Phi _t[\nu ] = \Phi [\nu ]^{\uplus t}. \end{aligned}$$

Conversely, if \(\mu \) is a probability measure with mean \(\alpha \) and variance \(\beta > 0\), there exists a probability measure \(\nu = \mathcal{J }[\mu ]\) such that

$$\begin{aligned} F_\mu (z) = z - \alpha - \beta G_{\mathcal{J }[\mu ]}(z). \end{aligned}$$
(11)

Note that \(\mathcal{J } \circ \Phi _t = \mathrm Id \), while \(\Phi _t[\mathcal{J }[\mu ]] = \mu \) if \(\mu \) has mean zero and variance \(t\). If \(\mu \) has finite moments of all orders, its Cauchy transform has a continued fraction expansion

$$\begin{aligned} G_\mu (z) = \frac{1}{ z - \alpha _0 - \frac{\beta _1}{ z - \alpha _1 - \frac{\beta _2}{ z - \alpha _2 - \frac{\beta _3}{ z - \ldots }}}}. \end{aligned}$$

Here \(\beta _0 = 1\), \(\alpha _0\) is the mean of \(\mu \), \(\beta _1\) is the variance of \(\mu \), and in general \(\left\{ \alpha _n, \beta _n\right\} \) are its Jacobi parameters. Then for \(\nu = \Phi _t[\mu ]\),

$$\begin{aligned} G_\nu (z) = \frac{1}{ z - \frac{t}{ z - \alpha _0 - \frac{\beta _1}{ z - \alpha _1 - \frac{\beta _2}{ z - \alpha _2 - \frac{\beta _3}{ z - \ldots }}}}}, \end{aligned}$$

while \(\mathcal{J }\) is the inverse map, namely coefficient stripping [18]. Note that there are also related maps which involve general finite measures rather than only probability measures, but because of the normalization (10), we will not need to consider them.
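The continued fraction is easy to test numerically: for the semicircle law all \(\alpha _n = 0\) and \(\beta _n = 1\), and the truncations converge to the closed form \(G(z) = (z - \sqrt{z^2 - 4})/2\) for \(z\) outside the support (an illustrative sketch):

```python
# Truncated Jacobi continued fraction with alpha_n = 0, beta_n = 1 (semicircle law),
# compared with the closed form at a real point z outside [-2, 2].
import math

def G_cfrac(z, depth):
    val = 0.0
    for _ in range(depth):        # evaluate the continued fraction from the bottom up
        val = 1.0 / (z - val)
    return val

z = 3.0
exact = (z - math.sqrt(z * z - 4)) / 2     # = (3 - sqrt(5))/2
print(G_cfrac(z, 50), exact)               # the truncation matches to machine precision
```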

4.2 Transition operators

The following is a reformulation of Theorem 3.1 of [10].

Theorem

Let \(X\) and \(Y\) be freely independent. Then the transition operator \(\mathcal{K }\) defined via

$$\begin{aligned} \mathbb E \left[f(X+Y) | X\right] = (\mathcal{K } f)(X) \end{aligned}$$

is a map on \(C_0(\mathbb{R })\) (which extends to a map on \(L^\infty (\mathbb{R }, \,dx)\)) such that for any \(z \in \mathbb{C }{\setminus }\mathbb{R }\)

$$\begin{aligned} \mathcal{K } \left[ \frac{1}{z - x} \right] = \frac{1}{F(z) - x} = \frac{1}{F_\nu (z) - x}. \end{aligned}$$

Here \(F(z) = F_\nu (z)\) for some probability measure \(\nu \), and \(F\) is uniquely determined by

$$\begin{aligned} G_{X+Y}(z) = G_X(F(z)). \end{aligned}$$

Proposition 4

For processes with freely independent increments, the transition operators \(\mathcal{K }\) have the form

$$\begin{aligned} \mathcal{K }[f](x) = \int _{\mathbb{R }} f(y) \,d(\delta _x \triangleright \nu )(y). \end{aligned}$$

For the free Lévy processes of the second kind, for \(\mathcal{K }_t\) we have \(\nu _t = \mu _t\). For the free Lévy processes of the first kind, for \(\mathcal{K }_{s,t}\), \(\nu _{s,t}\) is determined by

$$\begin{aligned} G_{\mu _t}(z) = G_{\mu _s}(F_{\nu _{s,t}}(z)). \end{aligned}$$

In other words, \(\nu = \nu _{s,t} = \mu _{t-s} \square \!\!\!{\vdash }\mu _s\), the subordination distribution, see [25, 30].

Proof

According to Theorem 3.1 of [10],

$$\begin{aligned} \mathcal{K } \left[\! \frac{1}{z \!-\! x} \!\right] \!=\! \frac{1}{F(z) \!-\! x} \!=\! \frac{1}{F_\nu (z) \!-\! x} \!=\! \frac{1}{F_{\delta _x \triangleright \nu }(z)} \!= G_{\delta _x \triangleright \nu }(z) \!= \!\int _{\mathbb{R }} \frac{1}{z \!-\! y} \,d(\delta _x \triangleright \nu )(y), \end{aligned}$$

and, still according to [10], this property entirely determines \(\mathcal{K }\). For FL2, \(F_{s,t} = F_{t-s}\) and \(F_{\mu _t} = F_{\mu _s} \circ F_{t-s}\), so for \(s = 0\),

$$\begin{aligned} F_{\mu _t} = F_{\delta _0} \circ F_t = F_t = F_{\nu _t}. \end{aligned}$$

For FL1, \(\nu _{s,t} = \mu _{t-s} \square \!\!\!{\vdash }\mu _s\) by definition. \(\square \)

Remark 4

Note that

$$\begin{aligned} \mathcal{K }(x, dy) = d(\delta _x \triangleright \nu )(y) = d(\delta _x \uplus \nu )(y) = d(\nu \uplus \delta _x)(y). \end{aligned}$$

In fact, measures \(\delta _x \triangleright \nu \) are well-known in classical spectral theory. Indeed, if \(X\) is an operator with cyclic vector \(\xi \) and corresponding distribution \(\nu \), then \(\delta _x \triangleright \nu \) is the distribution with respect to \(\xi \) of the rank-one perturbation \(X - x \left\langle \cdot , \xi \right\rangle \xi \). Finally, note that for the classical processes and convolution, we can also write

$$\begin{aligned} \mathcal{K }[f](x)= \int _{\mathbb{R }} f(y) \,d\mu (y - x) = \int _{\mathbb{R }} f(y) \,d (\delta _x *\mu )(y). \end{aligned}$$
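As a concrete illustration (with \(\nu \) chosen for convenience, not taken from the paper): for \(\nu = \frac{1}{2}(\delta _{-1} + \delta _1)\) one has \(F_\nu (z) = (z^2 - 1)/z\), so \(G_{\delta _x \triangleright \nu }(z) = 1/(F_\nu (z) - x) = z/(z^2 - xz - 1)\), and \(\delta _x \triangleright \nu \) is supported on the two roots of \(z^2 - xz - 1\), with weights given by the residues:

```python
# delta_x |> nu for nu = (delta_{-1} + delta_1)/2: since F_{delta_x |> nu} = F_nu - x,
# the Cauchy transform is z/(z^2 - x z - 1), an atomic measure on its poles.
import math

def monotone_shift(x):
    s = math.sqrt(x * x + 4)
    roots = [(x + s) / 2, (x - s) / 2]
    weights = [r / (2 * r - x) for r in roots]   # residues of z/(z^2 - x z - 1)
    return roots, weights

atoms, weights = monotone_shift(0.5)
print(sum(weights))                                  # ≈ 1: a probability measure
print(sum(w * a for w, a in zip(weights, atoms)))    # ≈ 0.5: the mean shifts by x
```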

Proposition 5

\(\mathcal{K }\) is a contraction on each \(L^p(\mathbb{R }, dx)\) for \(1 \le p \le \infty \).

Proof

For \(f \in L^\infty (\mathbb{R }, dx)\),

$$\begin{aligned} \Vert \mathcal{K }[f] \Vert _\infty \le \mathop {\mathrm{esssup }}\limits _{x \in \mathbb{R }} \int _{\mathbb{R }} \vert f(y) \vert \,d(\delta _x \triangleright \nu )(y) \le \mathop {\mathrm{esssup }}\limits _{x \in \mathbb{R }} \Vert f \Vert _\infty = \Vert f \Vert _\infty , \end{aligned}$$

so \(\mathcal{K }\) is a contraction on \(L^\infty (\mathbb{R }, dx)\). On the other hand, Alexandrov’s averaging theorem (Theorem 11.8 from [34]) states that for \(f \in L^1(\mathbb{R }, \,dx)\)

$$\begin{aligned} \int _{\mathbb{R }} \mathcal{K }[f](x) \,dx = \int _{\mathbb{R }} \left( \int _{\mathbb{R }} f(y) \,d(\delta _x \triangleright \nu )(y) \right) \,dx = \int _{\mathbb{R }} f(x) \,dx, \end{aligned}$$

so that \(\Vert \mathcal{K } f \Vert _1 = \Vert f \Vert _1\) for \(f \ge 0\), and \(\mathcal{K }\) is a contraction on \(L^1(\mathbb{R }, \,dx)\). For \(1 < p < \infty \), the result now follows by Riesz–Thorin interpolation, see Sect. IX.4 from [32]. \(\square \)

Remark 5

Unless stated otherwise, we will work in the Banach space \(C_0(\mathbb{R })\) of continuous functions converging to zero at infinity, with the maximum norm. Also, again until stated otherwise, we will denote

$$\begin{aligned} \mathcal{D } = \mathrm Span \left\{ \frac{1}{z_1 - x} - \frac{1}{z_2 - x} : z_1, z_2 \in \mathbb{C }{\setminus }\mathbb{R }\right\} . \end{aligned}$$
(12)

We will also denote by \(C_c(\mathbb{R })\) the compactly supported continuous functions, and by \(\mathcal{P }\) the polynomials.

For \(1 \le p \le \infty \), denote

$$\begin{aligned} \Vert f \Vert _{k, p} = \sum _{i=0}^k \Vert f^{(i)} \Vert _p \sim \Vert f^{(k)} \Vert _p + \Vert f \Vert _p \end{aligned}$$

the Sobolev norm. For \(p < \infty \), denote by \(W^{k, p}(\mathbb{R })\) the corresponding Sobolev space, while for \(p = \infty \) we will denote by \(W^{k, \infty }\) the corresponding subspace of \(C_0(\mathbb{R })\). Note that we will identify the Lipschitz norm

$$\begin{aligned} \sup _{x \ne y} \left|{\frac{f(x) - f(y)}{x - y}}\right| \end{aligned}$$

with \(\Vert f^{\prime } \Vert _\infty \), since a Lipschitz function is differentiable almost everywhere.

Finally, we abbreviate

$$\begin{aligned} W^\infty = \left\{ f \in C_0(\mathbb{R }) | f^{\prime \prime } \in C_0(\mathbb{R })\right\} \end{aligned}$$

with norm \(\Vert f \Vert _\infty + \Vert f^{\prime \prime } \Vert _\infty \), and

$$\begin{aligned} W^p = W^\infty \cap W^{1,p} \end{aligned}$$

with norm

$$\begin{aligned} \Vert f \Vert ^{\prime } = \Vert f \Vert _\infty + \Vert f^{\prime \prime } \Vert _\infty + \Vert f \Vert _p + \Vert f^{\prime } \Vert _p. \end{aligned}$$
(13)

The following argument is reminiscent of Sect. 2.3 from [26].

Lemma 6

\(\mathcal{D }\) is dense in \(W^p\), \(1 \le p \le \infty \), \(L^p(\mathbb{R }, \,dx)\), \(1 \le p < \infty \), and \(C_0(\mathbb{R })\), with their respective norms.

Proof

We will prove that \(\mathcal{D }\) is dense in \(W^p\); the other arguments are similar, and more standard. Note that \(\mathcal{D } \subset W^p\).

Step 1. By an elementary “cut-off plus smoothing” argument,

$$\begin{aligned} C_c(\mathbb{R }) \cap W^p = C_c(\mathbb{R }) \cap W^\infty \end{aligned}$$
(14)

is dense in \(W^p\) with respect to the norm (13). So it suffices to check that every element of this space can be approximated by elements of \(\mathcal{D }\).

Step 2. For

$$\begin{aligned} P_\varepsilon (x) = \frac{1}{\pi } \frac{\varepsilon }{x^2 + \varepsilon ^2} \end{aligned}$$

the Poisson kernel and \(f\) in the set (14), we know that

$$\begin{aligned} (P_\varepsilon *f)^{\prime \prime } = P_\varepsilon *f^{\prime \prime } = P_\varepsilon ^{\prime \prime } *f. \end{aligned}$$
(15)

Moreover, as \(\varepsilon \rightarrow 0^+\), \((P_\varepsilon *f) \rightarrow f\) and \((P_\varepsilon *f)^{\prime \prime } \rightarrow f^{\prime \prime }\) uniformly, and so (since the support of \(f\) is compact) also in norm (13).

Step 3. For a fixed \(\varepsilon \), if

$$\begin{aligned} \sum \frac{1}{\pi } \frac{\varepsilon }{(x - a_i)^2 + \varepsilon ^2} f(a_i) \Delta _i \end{aligned}$$

is a Riemann sum for \((P_\varepsilon *f)\), then by (15)

$$\begin{aligned} \sum \frac{1}{\pi } \left(\frac{\varepsilon }{(x - a_i)^2 + \varepsilon ^2}\right)^{\prime \prime } f(a_i) \Delta _i \end{aligned}$$

is a Riemann sum for \((P_\varepsilon *f)^{\prime \prime }\). Since \(f\) is uniformly continuous, both sets of Riemann sums converge uniformly, so \(P_\varepsilon *f\) is a limit of such Riemann sums in the norm (13).

It remains to note that

$$\begin{aligned} \frac{b}{(x - a)^2 + b^2} = \frac{1}{2i} \left( \frac{1}{a - b i - x} - \frac{1}{a + b i - x} \right) \in \mathcal{D }. \end{aligned}$$

\(\square \)
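The mechanics of Steps 2–3 can be checked numerically. The following sketch (the bump function, grid sizes, and tolerances are our choices, not from the text) assembles the Poisson kernel out of two resolvents and verifies that the resulting Riemann-sum smoothing is uniformly close to a compactly supported \(f\) for small \(\varepsilon \).

```python
import math

def bump(x):
    # smooth test function supported on (-1, 1)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def poisson_from_resolvents(x, a, eps):
    # b/((x-a)^2 + b^2) = (1/(2i)) (1/(a - bi - x) - 1/(a + bi - x)), b = eps
    z_plus, z_minus = complex(a, eps), complex(a, -eps)
    val = (1.0 / (z_minus - x) - 1.0 / (z_plus - x)) / 2j
    return val.real / math.pi   # the imaginary parts cancel exactly

eps, h = 0.01, 0.001
nodes = [-5.0 + k * h for k in range(10001)]

def smoothed(x):
    # Riemann sum for (P_eps * f)(x): a linear combination of resolvents
    return sum(poisson_from_resolvents(x, a, eps) * bump(a) * h for a in nodes)

# the kernel is (numerically) a probability density, and P_eps * f is
# uniformly close to f on the support
mass = sum(poisson_from_resolvents(0.0, a, eps) * h for a in nodes)
sup_err = max(abs(smoothed(x) - bump(x)) for x in [-0.9 + 0.1 * k for k in range(19)])
```

Here `mass` is close to \(1\) (up to the truncated Cauchy tails) and `sup_err` is small, illustrating the uniform convergence used in Step 2.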

Proposition 7

For a free Lévy process of the first kind, on each \(L^p(\mathbb{R }, dx)\), \(1 \le p < \infty \) and on \((C_0(\mathbb{R }), \Vert \cdot \Vert _\infty )\), \(\mathcal{K }_{s,t}\) is strongly continuous in \(s, t\).

Proof

Since \(\mathcal{D }\) is dense in \(L^p(\mathbb{R }, \,dx)\) and in \(C_0(\mathbb{R })\), and \(\mathcal{K }_{s,t}\) is a contraction on each of these spaces, it suffices to prove continuity for \(f \in \mathcal{D }\). Indeed,

$$\begin{aligned} \left|{\mathcal{K }_{s^{\prime }, t^{\prime }} \left[\frac{1}{z - x} \right] - \mathcal{K }_{s, t} \left[\frac{1}{z - x} \right]}\right| = \left|{\frac{F_{s^{\prime }, t^{\prime }}(z) - F_{s, t}(z)}{(F_{s^{\prime }, t^{\prime }}(z) - x)(F_{s, t}(z) - x)}}\right|. \end{aligned}$$

It remains to note that for a fixed \(z\), \((F_{s^{\prime }, t^{\prime }}(z) - F_{s, t}(z)) \rightarrow 0\) as \(s^{\prime } \rightarrow s\), \(t^{\prime } \rightarrow t\), and that for any \(z, w \not \in \mathbb{R }\),

$$\begin{aligned} \left\Vert{\frac{1}{(z - x)(w - x)}}\right\Vert_{{p}} \le \left\Vert{\frac{1}{z - x}}\right\Vert_{{2p}} \left\Vert{\frac{1}{w - x}}\right\Vert_{{2p}} \le C \frac{1}{(\vert \mathfrak I z \vert \vert \mathfrak I w \vert )^{1 - (2p)^{-1}}}\qquad \end{aligned}$$
(16)

for all \(p \ge 1\). \(\square \)
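A quick numerical sanity check of the scaling in this estimate (the grid and cutoff parameters below are arbitrary choices): taking \(z = w = ib\), the left-hand norm can be computed exactly, and it decays like \(b^{1/p - 2}\), i.e. like a power \(1 - (2p)^{-1}\) of \(\vert \mathfrak I z \vert \vert \mathfrak I w \vert = b^2\).

```python
import math

def resolvent_product_norm(b, p, h=0.01, cutoff=100.0):
    # L^p(dx) norm of 1/((z - x)(w - x)) at z = w = ib, i.e. of 1/(x^2 + b^2)
    n = int(cutoff / h)
    s = sum(((k * h) ** 2 + b * b) ** (-p) * h for k in range(-n, n + 1))
    return s ** (1.0 / p)

p = 2.0
n1, n2 = resolvent_product_norm(1.0, p), resolvent_product_norm(2.0, p)
# the norm should scale like b^(1/p - 2) = b^(-3/2) for p = 2
observed_exponent = math.log(n2 / n1) / math.log(2.0)
```

For \(p = 2\) the exact value at \(b = 1\) is \(\sqrt{\pi /2}\), which the quadrature reproduces.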

4.3 Free Lévy processes of the first kind

Lemma 8

Let \(\left\{ \mu _t\right\} \) be a free convolution semigroup, where \(\mu = \mu _1\) has mean \(\alpha \) and variance \(1\). Then

$$\begin{aligned} R_\mu (G_{\mu _t}(z)) = \alpha + G_{\nu _t}(z), \end{aligned}$$

where \(\nu _t = \mathcal{J }[\mu _t]\).

Proof

By definition (11),

$$\begin{aligned} G_{\nu _t}(z) = \frac{1}{t} \left( z - \alpha t - \frac{1}{G_{\mu _t}(z)} \right) = \frac{1}{t} \left( z - \frac{1}{G_{\mu _t}(z)} \right) - \alpha . \end{aligned}$$

On the other hand, by the definition of the \(R\)-transform

$$\begin{aligned} G_{\mu _t}^{-1}(z) = \frac{1}{z} + t R_\mu (z), \end{aligned}$$

so

$$\begin{aligned} R_{\mu }(z) = \frac{1}{t} \left( G_{\mu _t}^{-1}(z) - \frac{1}{z} \right). \end{aligned}$$

Putting these together, it follows that

$$\begin{aligned} \alpha + G_{\nu _t}(z) = R_\mu (G_{\mu _t}(z)) \end{aligned}$$

on a domain, and hence, by analytic continuation, on \(\mathbb{C }^+\). \(\square \)
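As a concrete illustration (our choice of example, with a midpoint-rule quadrature): take \(\mu \) to be the free Poisson (Marchenko–Pastur) law with rate \(1\), so that \(\alpha = 1\) and \(R_\mu (w) = 1/(1-w)\). At \(t = 1\) the lemma, combined with the formula for \(G_{\nu _t}\), reduces to \(1/(1 - G_{\mu _1}(z)) = z - 1/G_{\mu _1}(z)\), which we can test with \(G_{\mu _1}\) computed by numerical integration of the density.

```python
import math

def G_mp(z, n=20000):
    # Cauchy transform of the free Poisson law (rate 1): density
    # (1/(2 pi)) sqrt((4 - x)/x) on (0, 4]; substitute x = u^2 to tame
    # the singularity at the origin
    h = 2.0 / n
    total = 0.0 + 0.0j
    for k in range(n):
        u = (k + 0.5) * h              # midpoint rule in u
        total += math.sqrt(4.0 - u * u) / (z - u * u) * h
    return total / math.pi

residuals = []
for z in (1.0 + 1.0j, -0.5 + 2.0j, 3.0 + 0.7j):
    G = G_mp(z)
    lhs = 1.0 / (1.0 - G)              # R_mu(G(z)) with R_mu(w) = 1/(1 - w)
    rhs = z - 1.0 / G                  # alpha + G_{nu_1}(z) with alpha = 1
    residuals.append(abs(lhs - rhs))
```

The residuals vanish up to quadrature error, and \(G_{\mu _1}(z) \approx 1/z\) for large \(z\), as it should.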

Remark 6

In [6], Belinschi and Nica defined a family of transformations

$$\begin{aligned} \mathbb{B }_t[\nu ] = \left( \nu ^{\boxplus (1 + t)} \right)^{\uplus \frac{1}{1 + t}}. \end{aligned}$$

They showed that these transformations form a semigroup under composition, and \(\mathbb{B }_1 = \mathbb{B }\) is the Boolean-to-free Bercovici–Pata bijection, defined via

$$\begin{aligned} z - F_\nu (z) = z R_{\mathbb{B }[\nu ]}(1/z). \end{aligned}$$

The domain of \(\mathbb{B }\) consists of all probability measures, while its image are all the freely infinitely divisible measures. Moreover, they proved the following evolution equation:

$$\begin{aligned} \Phi [\rho \boxplus \gamma _t] = \mathbb{B }_t[\Phi [\rho ]]. \end{aligned}$$

We found this equation quite mysterious. We now re-interpret it as follows: a single coefficient stripping, applied to a free convolution semigroup (with finite variance), produces a semicircular evolution started at the free canonical measure of the semigroup.

Proposition 9

Let \(\rho \) be a probability measure on \(\mathbb{R }\). Then

$$\begin{aligned} \mu _t = \Phi _t[\rho \boxplus \gamma _t] \end{aligned}$$

is a free convolution semigroup with mean zero and finite variance, such that \(\rho \) is the corresponding free canonical measure. Moreover, each such free convolution semigroup with \(\mathrm{Var }[\mu _1] = 1\) arises in this way. In particular, for any such free convolution semigroup,

$$\begin{aligned} \mathcal{J }[\mu _t] = \rho \boxplus \gamma _t. \end{aligned}$$

Proof

We compute

$$\begin{aligned} \mu _t = \Phi [\rho \boxplus \gamma _t]^{\uplus t} = \mathbb{B }_t[\Phi [\rho ]]^{\uplus t} = \mathbb{B }_{t-1}[\mathbb{B }[\Phi [\rho ]]]^{\uplus t} = \mathbb{B }[\Phi [\rho ]]^{\boxplus t}, \end{aligned}$$

so \(\left\{ \mu _t\right\} \) form a free convolution semigroup, with \(\mu = \mu _1 = \mathbb{B }[\Phi [\rho ]]\). Also,

$$\begin{aligned} \mathcal{J }[\mu _t] = \mathcal{J }[\Phi _t[\rho \boxplus \gamma _t]] = \rho \boxplus \gamma _t. \end{aligned}$$

If \(R_\mu \) is the \(R\)-transform corresponding to \(\left\{ \mu _t\right\} \), then

$$\begin{aligned} G_{\rho \boxplus \gamma _t}(z) = G_{\mathcal{J }[\mu _t]}(z) = R_\mu (G_{\mu _t}(z)) \end{aligned}$$

by Lemma 8. In particular, since \(\mu _0 = \delta _0\) and \(G_{\mu _0}(z) = \frac{1}{z}\),

$$\begin{aligned} G_{\rho }(z) = R_\mu (1/z) = \int _{\mathbb{R }} \frac{1}{z - x} \,d\rho (x), \end{aligned}$$

so \(\rho \) is the free canonical measure for \(\left\{ \mu _t\right\} \). Finally, such a representation holds precisely for any free convolution semigroup with mean zero and \(\mathrm{Var }[\mu _1] = 1\). \(\square \)

Corollary 10

For \(\left\{ \mu _t\right\} \) a free convolution semigroup satisfying (10), with free canonical pair \((\alpha , \rho )\),

$$\begin{aligned} R_\mu (G_{\mu _s}(z)) = \alpha + G_{\rho \boxplus \gamma _s}(z). \end{aligned}$$

Definition 2

For a finite measure \(\nu \), we denote by \(L_\nu \) the operator

$$\begin{aligned} L_\nu [f] = \int _{\mathbb{R }} \frac{f(x) - f(y)}{x - y} \,d\nu (y) = (I \otimes \nu ) (\partial f), \end{aligned}$$

where \(\partial \) is the difference quotient operator. Such operators were studied in [4], but also in other sources, for example [26].
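On resolvents, \(L_\nu \) acts explicitly: \(L_\nu \left[ \frac{1}{z - \cdot } \right](x) = \frac{G_\nu (z)}{z - x}\), the form used repeatedly below. A minimal numerical check with a discrete measure (the atoms and weights are an arbitrary choice):

```python
# L_nu applied to a resolvent, for nu = sum of weighted point masses:
# sum_i w_i (f(x) - f(y_i)) / (x - y_i) should equal G_nu(z) / (z - x)
atoms = [(-1.0, 0.2), (0.5, 0.5), (2.0, 0.3)]   # (position, weight)

def resolvent(z, x):
    return 1.0 / (z - x)

def L_nu(z, x):
    return sum(w * (resolvent(z, x) - resolvent(z, y)) / (x - y)
               for y, w in atoms)

def G_nu(z):
    return sum(w / (z - y) for y, w in atoms)

z, x = 1.0 + 1.0j, 0.3   # x chosen away from the atoms
err = abs(L_nu(z, x) - G_nu(z) / (z - x))
```

The identity is exact, since \(\frac{1}{x-y}\left( \frac{1}{z-x} - \frac{1}{z-y} \right) = \frac{1}{(z-x)(z-y)}\).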

Lemma 11

For \(\rho \), \(\left\{ \mu _t\right\} \) as in the preceding corollary,

$$\begin{aligned} (\alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _s}) \frac{1}{z-x} = R_\mu (G_{\mu _s}(z)) \frac{1}{(z - x)^2} \end{aligned}$$

and (for each fixed \(z \in \mathbb{C }{\setminus }\mathbb{R }\)) this is a continuous function of \(s\) into \(L^p(\mathbb{R }, \,dx)\).

Proof

Using the preceding corollary,

$$\begin{aligned} (\alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _s}) \frac{1}{z-x} = \alpha \frac{1}{(z - x)^2} + \frac{1}{(z - x)^2} G_{\rho \boxplus \gamma _s}(z) = R_\mu (G_{\mu _s}(z)) \frac{1}{(z - x)^2}. \end{aligned}$$

Continuity follows from Eq. (16). \(\square \)

Proposition 12

Let \(A_t\) be a generator of a free Lévy process corresponding to the free convolution semigroup \(\left\{ \mu _t\right\} \) with free canonical pair \((\alpha , \rho )\). Then for \(\mathcal{D }\) from Eq. (12), \(\mathcal{D } \subset \mathcal{D }(A_t)\), and on this domain

$$\begin{aligned} A_t f(x) = \alpha \partial _x f(x) + \int _{\mathbb{R }} \partial _x \frac{f(x) - f(y)}{x - y} \,d(\rho \boxplus \gamma _t)(y), \end{aligned}$$
(17)

which we will abbreviate as

$$\begin{aligned} A_t = \alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _t}. \end{aligned}$$
(18)

Proof

For any free convolution semigroup \(\left\{ \mu _t\right\} \), the following evolution equation holds:

$$\begin{aligned} \partial _t G_{\mu _t}(z) = - R_\mu (G_{\mu _t}(z)) \ G_{\mu _t}^{\prime }(z), \end{aligned}$$
(19)

see Eq. (3.18) in [35]. Also according to Theorem 3.1 of [10], the transition operators of a free Lévy process (of the first kind) have the property that

$$\begin{aligned} \mathcal{K }_{s,t} \left[ \frac{1}{z - x} \right] = \frac{1}{F_{s,t} - x}, \end{aligned}$$

where

$$\begin{aligned} G_{\mu _t}(z) = G_{\mu _s}(F_{s,t}(z)). \end{aligned}$$

This implies that

$$\begin{aligned} \partial _t F_{s,t}(z) = \frac{\partial _t G_{\mu _t}(z)}{G_{\mu _s}^{\prime }(F_{s,t}(z))} = - \frac{G_{\mu _t}^{\prime }(z) R_\mu (G_{\mu _t}(z))}{G_{\mu _s}^{\prime }(F_{s,t}(z))}. \end{aligned}$$

So using Lemma 11, we compute

$$\begin{aligned}&\left|{\frac{1}{h} \left(\mathcal{K }_{s, s+h} \frac{1}{z - x} - \frac{1}{z - x} \right) - \left(\alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _s} \right) \frac{1}{z - x}}\right| \\&\quad = \left|{ \frac{1}{h} \left( \frac{1}{F_{s, s+h}(z) - x} - \frac{1}{z - x} \right) - \frac{1}{(z-x)^2} R_\mu (G_{\mu _s}(z))}\right| \\&\quad = \left|{- \frac{1}{h} \frac{F_{s, s+h}(z) - F_{s,s}(z)}{(F_{s,s+h}(z) - x) (z-x)} - \frac{1}{(z-x)^2} R_\mu (G_{\mu _s}(z))}\right| \\&\quad = \left|{\left( \frac{1}{h} \int _0^h \frac{G_{\mu _{s+u}}^{\prime }(z)}{G_{\mu _s}^{\prime }(F_{s,s+u}(z))} \frac{R_\mu (G_{\mu _{s+u}}(z))}{(F_{s,s+h}(z) - x)} \,du - \frac{R_\mu (G_{\mu _s}(z))}{(z-x)} \right) \frac{1}{z-x}}\right| \\&\quad = \left|{\left( \frac{1}{h} \int _0^h \frac{G_{\mu _{s+u}}^{\prime }(z)}{G_{\mu _s}^{\prime }(F_{s,s+u}(z))} \frac{R_\mu (G_{\mu _{s+u}}(z))}{R_\mu (G_{\mu _s}(z))} \,du \right) \frac{1}{(F_{s,s+h}(z) - x)} - \frac{1}{(z-x)}}\right|\\&\qquad \times \left|{\frac{R_\mu (G_{\mu _s}(z))}{z-x}}\right|. \end{aligned}$$

Now using Eq. (16) and \(\frac{G_{\mu _{s+u}}^{\prime }(z)}{G_{\mu _s}^{\prime }(F_{s,s+u}(z))} \frac{R_\mu (G_{\mu _{s+u}}(z))}{R_\mu (G_{\mu _s}(z))} \rightarrow 1\) as \(u \rightarrow 0\), the difference above converges to zero in \(L^p(\mathbb{R }, \,dx)\), \(1 \le p \le \infty \). The result follows. \(\square \)

The appearance of the semicircular distributions in the generator formula above is explained by the following theorem.

Theorem 13

Let \(\left\{ X_t : t \ge 0\right\} \) be a centered free Lévy process of the first kind with finite variance, normalized so that \(\mathrm{Var }[X_1] = 1\). Let \(\rho \) be the canonical measure for the corresponding free convolution semigroup \(\left\{ \mu _t\right\} \). Finally, let \(\left\{ Y_t : t \ge 0\right\} \) be the free Brownian motion started at \(Y_0\) with distribution \(\rho \). Then the transition operators of the processes \(\left\{ X_t\right\} \) and \(\left\{ Y_t\right\} \) coincide.

Proof

It suffices to check the equality of transition operators on \(\mathcal{D }\), in other words we need to verify the equality of analytic functions \(F_{s,t}\). We check that indeed,

$$\begin{aligned} G_{\mu _s}^{-1} \circ G_{\mu _t}(z)&= \left(G_{\mu _t}^{-1} - (t-s) R_\mu \right) \circ G_{\mu _t}(z) = z - (t-s) R_\mu \circ G_{\mu _t}(z) \\&= z - (t-s) G_{\rho \boxplus \gamma _t}(z) = \left(G_{\rho \boxplus \gamma _t}^{-1} - (t-s) z \right) \circ G_{\rho \boxplus \gamma _t}(z) \\&= G_{\rho \boxplus \gamma _s}^{-1} \circ G_{\rho \boxplus \gamma _t}(z). \end{aligned}$$

\(\square \)

Remark 7

For readers familiar with the properties of the subordination distribution, we provide an alternative proof, see [30] for the results used. We compute

$$\begin{aligned} \mu _{t-s} \square \!\!\!{\vdash }\mu _s&= (\mu \square \!\!\!{\vdash }\mu _s)^{\boxplus (t-s)}\\&= (\mu _s^{\boxplus (1/s)} \square \!\!\!{\vdash }\mu _s)^{\boxplus (t-s)} = \mathbb{B }[\mu _s]^{\boxplus (1/s)(t-s)} = \mathbb{B }\left[\left(\mu ^{\boxplus s}\right)^{\uplus (1/s)}\right]^{\boxplus (t-s)} \\&= \left( \mathbb{B } \circ \mathbb{B }_{s-1} [\mu ] \right)^{\boxplus (t-s)} = \left( \mathbb{B }_{s} \circ \mathbb{B }[\Phi [\rho ]] \right)^{\boxplus (t-s)} = \left(\mathbb{B } \circ \mathbb{B }_s[\Phi [\rho ]] \right)^{\boxplus (t-s)} \\&= \mathbb{B }[\Phi [\rho \boxplus \gamma _s]]^{\boxplus (t-s)} = \left(\gamma \square \!\!\!{\vdash }(\rho \boxplus \gamma _s) \right)^{\boxplus (t-s)} = \gamma _{t-s} \square \!\!\!{\vdash }(\rho \boxplus \gamma _s). \end{aligned}$$

Note also that the preceding theorem is false for a process with non-zero mean; indeed, the generator of a free Brownian motion with drift is not (18) but rather \(\alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _t \boxplus \delta _{\alpha t}}\).

Proposition 14

Let \(\nu \) be a finite measure. Then

$$\begin{aligned} \Vert \partial _x L_\nu f \Vert _\infty \le \nu (\mathbb{R }) \Vert f^{\prime \prime } \Vert _\infty . \end{aligned}$$

and for \(1 \le p < \infty \),

$$\begin{aligned} \Vert \partial _x L_\nu f \Vert _1 \le C_p \nu (\mathbb{R }) (\Vert f^{\prime \prime } \Vert _\infty + \Vert f^{\prime } \Vert _p). \end{aligned}$$

It follows that \(\partial _x L_\nu \) is a bounded operator \(W^\infty \rightarrow C_0(\mathbb{R })\) and \(W^p \rightarrow L^p(\mathbb{R }, \,dx)\).

Proof

By Taylor’s theorem,

$$\begin{aligned} \left|{\partial _x \frac{f(x) - f(y)}{x - y}}\right|&= \left|{\frac{f(y) - f(x) - (y-x) f^{\prime }(x)}{(y-x)^2}}\right| \\&= \left|{\frac{1}{(y - x)^2} \int _x^y (y - u) f^{\prime \prime }(u) \,du}\right| \le \sup _{x \le u \le y} \left|{\frac{y - u}{y - x} f^{\prime \prime }(u)}\right| \le \Vert f^{\prime \prime } \Vert _\infty . \end{aligned}$$

So

$$\begin{aligned} \Vert \partial _x L_\nu f \Vert _\infty \le \nu (\mathbb{R }) \Vert f^{\prime \prime } \Vert _\infty . \end{aligned}$$
(20)

Since \(\partial _x L_\nu (\mathcal{D }) \subset C_0(\mathbb{R })\), \(\mathcal{D }\) is dense in \(W^\infty \), and \(C_0(\mathbb{R })\) is closed, it follows that in fact

$$\begin{aligned} \partial _x L_\nu (W^\infty ) \subset C_0(\mathbb{R }). \end{aligned}$$

Next, (20) implies that for \(p > 1\) and \(q\) the dual exponent,

$$\begin{aligned} \Vert \partial _x L_\nu f \Vert _1&= \int _{\mathbb{R }} \int _{[y - a, y + a]} \left|{ \partial _x \frac{f(x) - f(y)}{x - y}}\right| \,dx \,d\nu (y) \\&\quad + \int _{\mathbb{R }} \int _{[y - a, y + a]^c} \left|{ \frac{f^{\prime }(x)}{x - y} - \frac{f(x) - f(y)}{(x - y)^2}}\right| \,dx \,d\nu (y) \\&\le \int _{\mathbb{R }} 2 a \Vert f^{\prime \prime } \Vert _\infty \,d\nu (y) + \int _{\mathbb{R }} \Vert f^{\prime } \Vert _p \left( \int _{[y - a, y + a]^c} \frac{1}{\vert x - y \vert ^q} \,dx \right)^{1/q} \,d\nu (y) \\&\quad + \int _{\mathbb{R }} \int _{[y - a, y + a]^c} \frac{1}{(x - y)^2} \left( \int _y^x \,du \right)^{1/q} \left( \int _y^x \vert f^{\prime }(u) \vert ^p \,du \right)^{1/p} \,dx \,d\nu (y) \\&\le 2 a \nu (\mathbb{R }) \Vert f^{\prime \prime } \Vert _\infty + \frac{(2 (p-1))^{1/q}}{a^{1/p}} \nu (\mathbb{R }) \Vert f^{\prime } \Vert _p + \frac{2 p}{a^{1/p}} \nu (\mathbb{R }) \Vert f^{\prime } \Vert _p. \end{aligned}$$

So

$$\begin{aligned} \Vert \partial _x L_\nu f \Vert _1 \le C_p \nu (\mathbb{R }) \left( \Vert f^{\prime \prime } \Vert _{\infty } + \Vert f^{\prime } \Vert _{p} \right). \end{aligned}$$

A similar argument works for \(p=1\). The final result follows by interpolation. \(\square \)
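The pointwise kernel bound at the start of the proof is easy to test numerically. With \(f = \sin \), so that \(\Vert f^{\prime \prime } \Vert _\infty = 1\), the sampled values of the kernel (the grid is an arbitrary choice) never exceed \(1\):

```python
import math

# kernel of d/dx [(f(x) - f(y))/(x - y)], rewritten via Taylor's theorem as
# (f(y) - f(x) - (y - x) f'(x)) / (y - x)^2; for f = sin, ||f''||_inf = 1
f, fp = math.sin, math.cos

def kernel(x, y):
    return (f(y) - f(x) - (y - x) * fp(x)) / (y - x) ** 2

grid = [-10 + 0.37 * k for k in range(55)]
worst = max(abs(kernel(x, y)) for x in grid for y in grid
            if abs(x - y) > 1e-9)
```

In fact the Taylor remainder gives the sharper bound \(\Vert f'' \Vert _\infty / 2\); the constant \(\nu (\mathbb{R }) \Vert f'' \Vert _\infty \) in the proposition is simply what the sup-estimate above yields after integrating in \(y\).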

Theorem 15

Let \(\rho \) be a probability measure, \(\alpha \in \mathbb{R }\), \(\left\{ \mu _t\right\} \) the free convolution semigroup with the free canonical pair \((\alpha , \rho )\), \(\left\{ X(t)\right\} \) the corresponding free Lévy process, and \(\left\{ A_t\right\} \) its generators. Then on \(C_0(\mathbb{R })\), \((A_t, W^\infty )\) equals

$$\begin{aligned} A_t = \alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _t}. \end{aligned}$$

and on \(L^p(\mathbb{R }, \,dx)\), \((A_t, W^p)\) is given by the same formula.

Proof

Use the estimates in Proposition 14 and Lemma 6, and apply Proposition 1 with \(\mathcal{A } = C_0(\mathbb{R })\), \(\mathcal{B } = W^\infty \), respectively, \(\mathcal{A } = L^p(\mathbb{R }, \,dx)\), \(\mathcal{B } = W^p\), to show that these sets are in the domain of the generators. Since the same estimate shows that \(\alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _t}\) continuously extends to \(\mathcal{B }\), the result follows. \(\square \)

Example 1

A free Meixner distribution \(\mu _{b,c}^{\boxplus t}\) is the probability measure with Jacobi parameters

$$\begin{aligned} \alpha _0 = 0, \quad \alpha _n = b \ (n \ge 1), \quad \beta _1 = t, \quad \beta _n = t + c \ (n \ge 2). \end{aligned}$$

For \(c \ge 0\), these distributions form a free convolution semigroup with respect to the parameter \(t\). Clearly the corresponding \(\nu _t = \mathcal{J }[\mu _{b,c}^{\boxplus t}] = \delta _{b} \boxplus \gamma _{t + c}\) are the semicircular distributions with mean \(b\) and variance \((t+c)\); thus we recover a weaker version of the result of [16]. On the other hand, for \(\mu = \mu _{b,c}\) we also have

$$\begin{aligned} R_\mu (z) = z \left(1 + b R_\mu (z) + c (R_\mu (z))^2 \right)\!, \end{aligned}$$

which implies that

$$\begin{aligned} R_{\mu }(z) = \int _{\mathbb{R }} \frac{z}{1 - z x} \,d\rho (x) \end{aligned}$$

for \(\rho = \delta _{b} \boxplus \gamma _c\) semicircular with mean \(b\) and variance \(c\). So the free canonical measure in this case is also semicircular. The reason for this coincidence is that, as pointed out in Proposition 9,

$$\begin{aligned} \nu _t = \rho \boxplus \gamma _t = \left( \delta _{b} \boxplus \gamma _c \right) \boxplus \gamma _t = \delta _{b} \boxplus \gamma _{t + c}. \end{aligned}$$

In the particular case \(b = c = 0\), we have \(\mu _t = \nu _t = \gamma _t\) (and \(\rho = \delta _0\)). The corresponding process is the free Brownian motion, whose generator \(\partial _x L_{\gamma _t}\) was found at the end of Sect. 4 in [12], see also Example 4.9 in [13].
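Both descriptions of \(R_\mu \) in this example can be compared numerically (the parameter values and quadrature below are our choices): parametrizing \(\rho = \delta _b \boxplus \gamma _c\) by \(x = b + 2\sqrt{c} \sin \theta \), the integral form of \(R_\mu \) should satisfy the quadratic functional equation above.

```python
import math

# rho = semicircle with mean b and variance c; check that
# R(z) = int z/(1 - z x) drho(x) satisfies R = z (1 + b R + c R^2)
b, c, z = 0.5, 1.0, 0.2        # |z| small, so 1 - z x != 0 on the support
n = 20000
h = math.pi / n
R = 0.0
for k in range(n):
    th = -math.pi / 2 + (k + 0.5) * h       # x = b + 2 sqrt(c) sin(th),
    x = b + 2.0 * math.sqrt(c) * math.sin(th)  # drho = (2/pi) cos^2(th) dth
    R += (2.0 / math.pi) * math.cos(th) ** 2 * (z / (1.0 - z * x)) * h

residual = abs(R - z * (1.0 + b * R + c * R * R))
```

This works because \(\int z/(1-zx) \,d\rho (x) = G_\rho (1/z)\), and the Cauchy transform of the semicircular law satisfies the quadratic \(c G^2 - (w - b) G + 1 = 0\).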

Example 2

(Generator of the Cauchy process) The mean of the Cauchy distribution is undefined. Nevertheless, we can still compute the generator of the free Cauchy process, because the Cauchy distributions form both a free and a usual convolution semigroup. Indeed, the Fourier transform of the standard Cauchy distribution is \(e^{-\vert x \vert }\), so the generator of the corresponding process is \(- \vert -i \partial _x \vert = - \vert \partial _x \vert \). Note that \(\vert x \vert = \mathrm{sgn }(x) x\) and

$$\begin{aligned} \mathcal{F }(H f)(x) = - i \mathrm{sgn }(x) \mathcal{F }(f)(x), \end{aligned}$$

where \(\mathcal{F }\) is the Fourier transform and \(H\) is the Hilbert transform. We conclude that the generator is

$$\begin{aligned} A f(x) = - \partial _x (H f)(x). \end{aligned}$$

This is consistent with the relation \(R_\mu (z) = -i\) and

$$\begin{aligned} \partial _t G_{\mu _t}(z) = i \, G_{\mu _t}^{\prime }(z). \end{aligned}$$

Remark 8

The generator of a classical process is a pseudo-differential operator. The generator of the free process can be given a similar interpretation. Indeed, note first that

$$\begin{aligned} L_{\delta _0} f(x) = \frac{f(x) - f(0)}{x}. \end{aligned}$$

This operator is the crucial object in [3]; note also that \(\partial _x L_{\delta _0}\) is the generator, but only at time zero, of the free Brownian motion. Suppose now that all the moments \(m_n(\nu )\) of \(\nu \) are finite, and let

$$\begin{aligned} M_\nu (z) = \sum _{n=0}^\infty m_n(\nu ) z^n \end{aligned}$$

be its moment generating function. Then, at least for polynomial \(f\),

$$\begin{aligned} L_\nu f = M_\nu (L_{\delta _0}) L_{\delta _0} f. \end{aligned}$$

Indeed, for \(f(x) = x^n\),

$$\begin{aligned} M_\nu (L_{\delta _0}) L_{\delta _0} f(x)&= \sum _{k=0}^n m_k(\nu ) L_{\delta _0}^{k+1} x^n = \sum _{k=0}^{n-1} m_k(\nu ) x^{n-k-1}\\&= \int _{\mathbb{R }} \frac{x^n - y^n}{x - y} \,d\nu (y) = L_\nu f(x). \end{aligned}$$

By writing \(L_\nu = G_\nu (L_{\delta _0}^{-1})\), we can interpret it as a pseudo-differential-type operator even if the moments of \(\nu \) are not finite.
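For a discrete \(\nu \) and \(f(x) = x^6\), the identity \(L_\nu f = M_\nu (L_{\delta _0}) L_{\delta _0} f\) can be verified directly on coefficient lists (the measure and the evaluation points below are arbitrary choices):

```python
# polynomials as coefficient lists [a_0, a_1, ...];
# L_{delta_0} p = (p(x) - p(0))/x drops the constant term and shifts down
def L_delta0(coeffs):
    return coeffs[1:]

def peval(coeffs, x):
    return sum(a * x ** k for k, a in enumerate(coeffs))

atoms = [(-1.0, 0.3), (1.5, 0.7)]            # discrete nu: (position, weight)
moments = [sum(w * y ** k for y, w in atoms) for k in range(10)]

n = 6
p = [0.0] * n + [1.0]                        # f(x) = x^6

# sum_k m_k L_{delta_0}^{k+1} f; the series terminates on polynomials
rhs = [0.0] * (n + 1)
q = L_delta0(p)
for k in range(n + 1):
    for j, a in enumerate(q):
        rhs[j] += moments[k] * a
    q = L_delta0(q)
    if not q:
        break

def L_nu(x):
    return sum(w * (peval(p, x) - peval(p, y)) / (x - y) for y, w in atoms)

err = max(abs(L_nu(x) - peval(rhs, x)) for x in [0.3, 2.0, -2.7])
```

Both sides reduce to \(\sum _{k=0}^{5} m_k(\nu ) x^{5-k}\), matching the computation in the remark.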

Remark 9

Suppose \(\mu \) is compactly supported. In particular, we can expand

$$\begin{aligned} R_{\mu }(z) = \sum _{n=1}^\infty r_n z^{n-1}, \end{aligned}$$

where \(\left\{ r_n\right\} \) are the free cumulants of \(\mu \). Then it follows from Proposition 12, Lemma 8, and the preceding remark that the generator of the corresponding process is

$$\begin{aligned} A_s&= \alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _s} = \partial _x \left(\alpha + G_{\rho \boxplus \gamma _s}(L_{\delta _0}^{-1}) \right)\nonumber \\&= \partial _x R_\mu (G_{\mu _s}(L_{\delta _0}^{-1})) = \partial _x R_\mu (L_{\mu _s}) = \sum _{n=1}^\infty r_n \partial _x L_{\mu _s}^{n-1}. \end{aligned}$$
(21)

On the other hand, by Lemmas 2 and 3 of [2], in this case the higher diagonal measures

$$\begin{aligned} \Delta _n(t) = \int _0^t (d X(s))^n \end{aligned}$$

are defined, and \(\mathbb E \left[\Delta _n(t)\right] = r_n t\). Moreover, by Corollary 12 of the same paper, for polynomial \(f\)

$$\begin{aligned} f(X(t)) = \sum _{n=1}^\infty \int _0^t (I \otimes \mathbb{E } \otimes \ldots \otimes \mathbb{E } \otimes I) \left[(\partial ^n f)(X(s), \ldots , X(s)) \right] \sharp \,d\Delta _n(s), \end{aligned}$$

where \(\partial ^n\) is defined recursively by

$$\begin{aligned} \partial ^n = (I \otimes \cdots \otimes I \otimes \partial ) \partial ^{n-1} \end{aligned}$$

(this notation differs by a factor of \(n!\) from [2]), and

$$\begin{aligned} \int _0^t (A(x) \otimes B(s)) \sharp \,dX(s) = \int _0^t A(x) \,dX(s) \, B(s). \end{aligned}$$

In other words,

$$\begin{aligned} f(X(t)) = \sum _{n=1}^\infty \int _0^t (\partial L_{\mu _s}^{n-1} f)(X(s)) \sharp \,d\Delta _n(s). \end{aligned}$$

It follows that for the generator (21), the martingale from Lemma 2 can be written explicitly as

$$\begin{aligned}&f(X(t)) - \int _0^t (A_s f)(X(s)) \,ds \\&= \sum _{n=1}^\infty \int _0^t (\partial L_{\mu _s}^{n-1} f)(X(s)) \sharp \,d\Delta _n(s) - \sum _{n=1}^\infty \int _0^t r_n (\partial _x L_{\mu _s}^{n-1} f) (X(s)) \,ds \\&= \sum _{n=1}^\infty \int _0^t (\partial L_{\mu _s}^{n-1} f)(X(s)) \,\sharp \,d(\Delta _n(s) - r_n s). \end{aligned}$$

It would be interesting to find such a representation for more general \(\mu \).

Remark 10

A matricial interpolation between classical and free Lévy processes was constructed in [17], where the generator of such a matricial process is also computed.

4.4 Free Lévy processes of the second kind

The semigroup \(\left\{ \mathcal{K }_t\right\} \) of transition operators corresponding to a free Lévy process of the second kind is characterized by

$$\begin{aligned} \mathcal{K }_t \left[ \frac{1}{z - x} \right] = \frac{1}{F_{\mu _t}(z) - x}, \end{aligned}$$

and \(\left\{ F_{\mu _t}\right\} \) form a semigroup with respect to composition.

The transition operators for a monotone Lévy process corresponding to the family \(\left\{ \mu _t\right\} \) are exactly the same, and in fact \(\left\{ \mu _t\right\} \) form a monotone convolution semigroup, see Corollary 5.3 of [22]. According to Theorem 5.1 of [21], at least for the compactly supported case, on bounded continuous functions

$$\begin{aligned} \mathcal{K }_t f (x) = \int _{\mathbb{R }} f(y) \,d(\delta _x \triangleright \mu _t)(y). \end{aligned}$$

Since \(\left\{ \mathcal{K }_t\right\} \) form a semigroup, we only need to compute the generator \(A\) at zero. By Proposition 5.1 of [21], the generator is

$$\begin{aligned} A f (x) = \alpha \partial _x f(x) + \int _{\mathbb{R }} \partial _x \frac{f(x) - f(y)}{x - y} \,d\rho (y), \end{aligned}$$
(22)

where

$$\begin{aligned} - \left. \frac{\partial }{\partial t} \right|_{t=0} F_{\mu _t}(z) = z^2 \left. \frac{\partial }{\partial t} \right|_{t=0} G_{\mu _t}(z) = \alpha + \int _{\mathbb{R }} \frac{1}{z - x} \,d\rho (x), \end{aligned}$$

so that \((\alpha , \rho )\) is the monotone canonical pair (note our choice of signs is the opposite of [21]). As pointed out in Theorem 4.5 of [10], only certain \(\rho \) correspond to processes with free increments in this way (note that unlike Biane, we have assumed \(\mu _0 = \delta _0\)). We repeat Biane’s question (Sect. 4.7): it would be interesting to have a more direct description of which \(\rho \) do so appear. In particular, according to J.C. Wang, a centered FL2 process cannot have finite variance, and as of this writing, no non-trivial examples of FL2 processes are known.

Remark 11

Let \(\left\{ X(t)\right\} \) be a process whose increments are stationary and independent in a certain sense, \(\left\{ \mathcal{K }_{s,t}\right\} \) the corresponding transition operators, and \(\left\{ \mu _t, \star \right\} \) be the corresponding convolution semigroup. Since \(\mu _0 = \delta _0\), we observe that

$$\begin{aligned} \mu _t[f] = \mathbb E \left[f(X(t))\right] = \mathbb E \left[(\mathcal{K }_{0,t} f)(X(0))\right] = \mu _0[\mathcal{K }_{0,t} f] = (\mathcal{K }_{0,t} f)(0). \end{aligned}$$

Therefore the corresponding cumulant functional (7) is

$$\begin{aligned} C_\mu [f] = \left. \frac{\partial }{\partial t} \right|_{t=0} \mu _t[f] = (A_0 f)(0). \end{aligned}$$

We note that indeed, if we set \(t=0\) and \(x=0\) in formulas (9), (17) and (22), in all three cases we get formula (8). Note that in these three cases, \(\rho \) is interpreted as the classical, free, and monotone canonical measure, respectively. In particular, in all three cases, the cumulant functionals are defined at least on the domain of the corresponding generators. On the other hand, for \(t > 0\)

$$\begin{aligned} L_t[f] = \partial _t \mu _t[f] = (\mathcal{K }_{0,t} A_t f)(0) = \mu _t[A_t f] \end{aligned}$$

will depend on the type of semigroup considered.

4.5 Semigroups for generators of the free Lévy processes of the first kind

We noted before that for a free Lévy process of the first kind, \((A_t, \mathcal{D }(A_t))\) is closable. We now show explicitly that its closure generates a contraction semigroup.

Let \(\alpha \in \mathbb{R }\), and let \(\rho \) be a probability measure. For each \(s \ge 0\), denote by \(\left\{ F^{(s)}_t : t \ge 0\right\} \) the solution of

$$\begin{aligned} - \partial _t F^{(s)}_t(z) = \alpha + G_{\rho \boxplus \gamma _s}(F^{(s)}_t(z)) \end{aligned}$$

which, by Theorem 4.5 of [10], exists and moreover satisfies

$$\begin{aligned} F^{(s)}_t(z) = F_{\tau ^{(s)}_t}(z) \end{aligned}$$

for a probability measure \(\tau ^{(s)}_t\). In fact \(F^{(s)}_{t_1} \circ F^{(s)}_{t_2} = F^{(s)}_{t_1 + t_2}\), and the corresponding measures form a monotone convolution semigroup. Denote now

$$\begin{aligned} \mathcal{L }^{(s)}_t f(x) = \int _{\mathbb{R }} f(y) d (\delta _x \triangleright \tau ^{(s)}_t)(y). \end{aligned}$$

At least in the compactly supported case, as noted above, these are transition operators for the corresponding process with monotone independent increments, but we will not use this property directly. Instead, we note the following.

Theorem 16

Let \(\left\{ \mathcal{L }^{(s)}_t\right\} \) be as above, and \(\left\{ \mathcal{K }_{s,t}\right\} \), \(\left\{ A_t\right\} \) be as in Theorem 15.

  1. (a)

    For each \(s\), the operators \(\left\{ \mathcal{L }^{(s)}_t\right\} \) form a strongly continuous semigroup of contractions on \(C_0(\mathbb{R })\) and \(L^p(\mathbb{R }, \,dx)\).

  2. (b)

    The generator \(B_s\) of this semigroup is a closed operator for which \(\mathcal{D }\) is a core.

  3. (c)

    For \(f \in \mathcal{D }\), \(B_s f = A_s f = (\alpha \partial _x + \partial _x L_{\rho \boxplus \gamma _s}) f\).

  4. (d)

    On \(C_0(\mathbb{R })\) and \(L^p(\mathbb{R }, \,dx)\),

    $$\begin{aligned} \lim _{n \rightarrow \infty } \left( \mathcal{K }_{s, s + t/n} \right)^n = \mathcal{L }^{(s)}_t \end{aligned}$$
    (23)

    strongly.

  5. (e)

    \(B_s = \overline{A_s}\), and \(\mathcal{D }\) is a core for \(A_s\).

Proof

The proofs are very similar to the results for \(\mathcal{K }_{s,t}\), and are mostly omitted. For part (a), see Propositions 5 and 7. For the semigroup property, we compute

$$\begin{aligned} \mathcal{L }^{(s)}_{t_1} \mathcal{L }^{(s)}_{t_2} f(x)&= \int _{\mathbb{R }} \left( \int _{\mathbb{R }} f(z) d(\delta _y \triangleright \tau ^{(s)}_{t_2})(z) \right) d(\delta _x \triangleright \tau ^{(s)}_{t_1})(y) \\&= \int _{\mathbb{R }} f(z) d((\delta _x \triangleright \tau ^{(s)}_{t_1}) \triangleright \tau ^{(s)}_{t_2})(z) \\&= \int _{\mathbb{R }} f(z) d(\delta _x \triangleright \tau ^{(s)}_{t_1 + t_2})(z) = \mathcal{L }^{(s)}_{t_1 + t_2} f(x), \end{aligned}$$

since \(\triangleright \) is associative and distributive in the first variable.

The generator of a strongly continuous semigroup is closed. \(\mathcal{D }\) is dense and invariant under all \(\mathcal{L }^{(s)}_t\), so by Theorem X.49 of [32] it is a core for \(B_s\). The proof of part (c) is similar to that of Proposition 12. Part (d) follows from the Chernoff Product Formula, Theorem III.5.2 of [20], applied to \((A_s, \mathcal{D }) = (B_s, \mathcal{D })\) (the density of the range condition is satisfied since \(B_s\) generates a contraction semigroup).

Finally, for part (e), we apply the Chernoff Product Formula to \((A_s, \mathcal{D }(A_s))\). The density of the range condition holds since it was already satisfied for \(B_s\) on \(\mathcal{D }\). The theorem implies that \(\overline{A_s}\) generates precisely the same semigroup (23). Therefore \(\overline{A_s} = B_s\). \(\square \)

Remark 12

An alternative general approach to the non-autonomous Cauchy problem is via evolution semigroups, see Sect. VI.9(b) of [20]. We briefly describe this approach in our case. Let \(f \in C_0(\mathbb{R }^2)\), and denote \(f_t(x) = f(t,x)\). Then the operators

$$\begin{aligned} (T_t f) (s,x) = (\mathcal{K }_{s,s+t} f_{s+t})(x) \end{aligned}$$

form a contraction semigroup with respect to \(\Vert \cdot \Vert _\infty \). Its generator is a closed, dissipative operator. At least formally, it is related to the generators of the family \(\left\{ \mathcal{K }_{s,t}\right\} \) by

$$\begin{aligned} (\mathbf{A } f)(t,x) = A_t f_t(x) + \partial _t f_t(x). \end{aligned}$$

We also note that \(\mathbf{A } f = 0\) if \(f\) is fixed by \(T_t\), in other words if

$$\begin{aligned} \mathcal{K }_{s,s+t} f_{s+t} = f_s. \end{aligned}$$

This is precisely the condition for \(f_t(X_t)\) to be a martingale.

4.6 Further remarks on the properties of \(L_\nu \) and \(\partial _x L_\nu \)

Proposition 17

Let \(\mu \) be a probability measure with mean \(\alpha \) and variance \(\beta \). Denote \(\nu = \mathcal{J }[\mu ]\). Then \(L_\mu \) is a multiple of a unitary operator

$$\begin{aligned} \left\{ f \in L^2(\mu ): \mu [f] = 0\right\} \rightarrow L^2(\nu ), \end{aligned}$$

with inverse \(x - \alpha - \beta L_\nu \).

Note that if polynomials are dense in \(L^2(\mu )\), this result follows from the proof of Proposition 10 in [4], and the statement about the inverse from that proposition and Corollary 11.

Proof

First we show that

$$\begin{aligned} \widetilde{\mathcal{D }} = \mathrm Span \left\{ \frac{1}{z - x} : z \in \mathbb{C }{\setminus }\mathbb{R }\right\} \end{aligned}$$

is dense in \(L^2(\mu )\). Indeed, if \(f \in \widetilde{\mathcal{D }}^\perp \), then

$$\begin{aligned} \left\langle f, \frac{1}{\bar{z} - x} \right\rangle _\mu = \int _{\mathbb{R }} \frac{f(x)}{z - x} \,d\mu (x) = G_{f \mu }(z) = 0 \end{aligned}$$

for all \(z \in \mathbb{C }{\setminus }\mathbb{R }\). By Stieltjes inversion, it follows that \(f = 0\,\mu \)-a.e.

By the density of \(\widetilde{\mathcal{D }}\), it suffices to verify the isometry property (up to the factor \(\beta \)) on resolvents:

$$\begin{aligned} \left\langle \frac{1}{z - x} - G_\mu (z), \frac{1}{w - x} - G_\mu (w) \right\rangle _\mu&= \mu \left[ \frac{1}{z - x} \frac{1}{\overline{w} - x} \right] - G_\mu (z) G_\mu (\overline{w}) \\&= \frac{1}{z - \overline{w}} (G_\mu (\overline{w}) - G_\mu (z)) - G_\mu (z) G_\mu (\overline{w}) \\&= \frac{G_\mu (z) G_\mu (\overline{w})}{\overline{w} \!-\! z} \left( (z \!-\! F_\mu (z))\!-\!(\overline{w}\!-\!F_\mu (\overline{w})) \right) \\&= \frac{G_\mu (z) G_\mu (\overline{w})}{\overline{w} - z} (\beta G_\nu (z) - \beta G_\nu (\overline{w})) \\&= \beta \left\langle \frac{1}{z - x} G_\mu (z), \frac{1}{w - x} G_\mu (w) \right\rangle _\nu \\&= \beta \left\langle L_\mu \left[ \frac{1}{z - x} \right], L_\mu \left[ \frac{1}{w - x} \right] \right\rangle _\nu . \end{aligned}$$

Note that the computation above is independent of \(\alpha \). Also,

$$\begin{aligned}&(x - \alpha - \beta L_\nu ) \circ L_\mu \left[ \frac{1}{z - x} \right] = \left( -(z - x) + z - \alpha - \beta G_\nu (z) \right) \frac{1}{z - x} G_\mu (z)\\&\quad = - G_\mu (z) + \frac{1}{z - x}. \end{aligned}$$

and the computation for the composition in the opposite order is similar. \(\square \)
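
The isometry computation can be illustrated numerically. The sketch below (not part of the paper; all parameter choices are illustrative) takes \(\mu = \nu \) to be the standard semicircle law, for which \(\mathcal{J }[\mu ] = \mu \) with \(\alpha = 0\), \(\beta = 1\), and checks the resolvent identity by quadrature, using the substitution \(x = 2 \cos (\theta )\) to make the integrands smooth.

```python
import math

def sc_integral(h, n=20000):
    """Integrate h against the standard semicircle law on [-2, 2];
    with x = 2 cos(theta) the density becomes (2/pi) sin^2(theta) d(theta)."""
    total = 0j
    for k in range(n + 1):
        th = math.pi * k / n
        weight = 0.5 if k in (0, n) else 1.0   # trapezoid rule
        total += weight * h(2 * math.cos(th)) * (2 / math.pi) * math.sin(th) ** 2
    return total * (math.pi / n)

z, w = 1.0 + 2.0j, -0.5 + 1.0j
G = lambda u: sc_integral(lambda x: 1 / (u - x))   # Cauchy transform G_mu
Gz, Gw = G(z), G(w)

# <1/(z-x) - G(z), 1/(w-x) - G(w)>_mu  (second argument conjugated)
lhs = sc_integral(lambda x: (1 / (z - x) - Gz) * (1 / (w - x) - Gw).conjugate())
# beta * <G(z)/(z-x), G(w)/(w-x)>_nu  with beta = 1 and nu = mu
rhs = sc_integral(lambda x: (Gz / (z - x)) * (Gw / (w - x)).conjugate())
```

The second assertion below checks the computed Cauchy transform against the semicircle functional equation \(G(z) = 1/(z - G(z))\).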

In Proposition 1 combined with Theorem 15, we noted that a generator of the transition operator family for a free Lévy process is dissipative, and that on \(W^p\) it coincides with \(\alpha \partial _x + \partial _x L_\nu \). We now give a more explicit proof of this result for \(p=2\), which may be of independent interest.

Proposition 18

Let \(\nu \) be a finite measure, and denote \(A = \alpha \partial _x + \partial _x L_\nu \) and

$$\begin{aligned} D(f, \bar{g}) = \int _{\mathbb{R }} (\partial f)(x,y) (\partial g)(x,y) \,d\nu (y). \end{aligned}$$

Then \(D\) is well-defined for \(f, g \in W^{1, \infty } \cap W^{1, 2}\), while for \(f, g \in W^2\)

$$\begin{aligned} D(f, \bar{g}) = A(f g) - f A(g) - A(f) g \end{aligned}$$

for all \(\alpha \), so that \(D\) is the carré du champ operator corresponding to \(A\). It follows that for such \(f\),

$$\begin{aligned} \mathfrak R \left\langle A f, f \right\rangle \le 0, \end{aligned}$$

so \(\alpha \partial _x + \partial _x L_\nu \) on \(W^2\) is \(L^2\)-dissipative.

Proof

We compute

$$\begin{aligned}&\int _{\mathbb{R }} D(f, f) \,dx = \int \int _{\mathbb{R }^2} \left|{\frac{f(x) - f(y)}{x - y}}\right|^2 \,d\nu (y) \,dx \\&\quad = \int _{\mathbb{R }} \int _{[y - a, y + a]} \left|{\frac{f(x) - f(y)}{x - y}}\right|^2 \,dx \,d\nu (y) \\&\qquad \, +\int _{\mathbb{R }} \int _{[y - a, y + a]^c} \left|{\frac{f(x) - f(y)}{x - y}}\right|^2 \,dx \,d\nu (y) \\&\quad \le \int _{\mathbb{R }} 2 a \Vert f^{\prime } \Vert _\infty ^2 \,d\nu (y) + \int _{\mathbb{R }} \int _{[y - a, y + a]^c} \frac{1}{(x - y)^2} \left|{\int _y^x f^{\prime }(u) \,du}\right|^2 \,dx \,d\nu (y) \\&\quad \le 2 a \Vert f^{\prime } \Vert _\infty ^2 \nu (\mathbb{R }) + \Vert f^{\prime } \Vert _1^2 \int _{\mathbb{R }} \frac{2}{a} \,d\nu (y) = 2 \nu (\mathbb{R }) \left( a \Vert f^{\prime } \Vert _\infty ^2 + \frac{1}{a} \Vert f^{\prime } \Vert _1^2 \right). \end{aligned}$$

For \(a = 1\), we get

$$\begin{aligned} \Vert D(f, f) \Vert _1 \le 2 \nu (\mathbb{R }) \left( \Vert f \Vert _{1, \infty } + \Vert f \Vert _{1,1} \right)^2. \end{aligned}$$

By polarization, \(D(f,g)\) is well defined for \(f, g \in W^{1, \infty } \cap W^{1,1}\).
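
As a numerical sanity check of this estimate (not part of the proof; the choices \(\nu = \delta _0\) and \(f(x) = e^{-x^2}\) are illustrative): in this case \(\int _{\mathbb{R }} D(f,f) \,dx = \int _{\mathbb{R }} \vert (f(x) - f(0))/x \vert ^2 \,dx\), while \(\Vert f^{\prime } \Vert _\infty ^2 = 2/e\) and \(\Vert f^{\prime } \Vert _1 = 2\).

```python
import math

def f(x):
    return math.exp(-x * x)

def integrand(x):
    # |(f(x) - f(0)) / x|^2, with the removable singularity at x = 0
    return 0.0 if x == 0 else ((f(x) - 1.0) / x) ** 2

def trapezoid(h, a, b, n=400000):
    s = 0.5 * (h(a) + h(b))
    for k in range(1, n):
        s += h(a + (b - a) * k / n)
    return s * (b - a) / n

# ||D(f, f)||_1 for nu = delta_0 (so nu(R) = 1)
lhs = trapezoid(integrand, -200.0, 200.0)
# the bound 2 nu(R) (a ||f'||_inf^2 + (1/a) ||f'||_1^2) at a = 1:
# sup |f'|^2 = 2/e (attained at x = 1/sqrt(2)), and ||f'||_1 = 2
rhs = 2.0 * (2.0 / math.e + 4.0)
```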

Next, we compute

$$\begin{aligned}&\left(A(f g) - f A(g) - A(f) g \right) (x)\\&\quad = \int _{\mathbb{R }} \left[ \frac{f^{\prime }(x) g(x) - f(x) g^{\prime }(x)}{x - y} - \frac{f(x) g(x) - f(y) g(y)}{(x - y)^2} \right] \,d\nu (y) \\&\qquad - \int _{\mathbb{R }} \left[ \frac{f^{\prime }(x) g(x)}{x - y} - \frac{f(x) - f(y)}{(x - y)^2} g(x) \right] \,d\nu (y) \\&\qquad - \int _{\mathbb{R }} \left[ \frac{f(x) g^{\prime }(x)}{x - y} - f(x) \frac{g(x) - g(y)}{(x - y)^2} \right] \,d\nu (y) \\&\qquad +\, \alpha ((fg)^{\prime } - f g^{\prime } - f^{\prime } g)(x) \\&\quad = \int _{\mathbb{R }} \frac{f(x) - f(y)}{x - y} \frac{g(x) - g(y)}{x - y} \,d\nu (y). \end{aligned}$$

Moreover,

$$\begin{aligned} \vert L_\nu (f)(x) \vert&\le \int _{[x - a, x + a]} \frac{1}{\vert x - y \vert } \left|{\int _y^x f^{\prime }(u) \,du}\right| \,d\nu (y)\\&+ \int _{[x - a, x + a]^c} \frac{1}{\vert x - y \vert } \left|{\int _y^x f^{\prime }(u) \,du}\right| \,d\nu (y) \\&\le \Vert f^{\prime } \Vert _\infty \nu ([x - a, x + a]) \\&+ \int _{[x - a, x + a]^c} \frac{1}{\vert x - y \vert } \left( \int _y^x \,du \right)^{1/2} \left|{ \int _y^x \vert f^{\prime }(u) \vert ^2 \,du}\right|^{1/2} \,d\nu (y) \\&\le \Vert f^{\prime } \Vert _\infty \nu ([x - a, x + a]) + \frac{1}{\sqrt{a}} \nu (\mathbb{R }) \Vert f^{\prime } \Vert _2. \end{aligned}$$

So for \(a = 1/\varepsilon ^2\),

$$\begin{aligned} \limsup _{x \rightarrow \infty } \vert L_\nu (f)(x) \vert \le \varepsilon \nu (\mathbb{R }) \Vert f^{\prime } \Vert _2 \end{aligned}$$

as long as \(\Vert f^{\prime } \Vert _\infty < \infty \). Thus for \(f \in W^{1, \infty } \cap W^{1, 2}\),

$$\begin{aligned} \limsup _{x \rightarrow \infty } \vert L_\nu (f)(x) \vert = 0. \end{aligned}$$
(24)

Therefore for \(f \in W^2\),

$$\begin{aligned}&2 \mathfrak R \left\langle A f, f \right\rangle = \left\langle A f, f \right\rangle + \left\langle f, A f \right\rangle \nonumber \\&\quad = \int _{-\infty }^{\infty } \left[A(\vert f \vert ^2)(x) - D(f, \bar{f})(x) \right] \,dx \nonumber \\&\quad = \int _{-\infty }^{\infty } \left[ \partial _x \left( L_\nu (\vert f \vert ^2)(x) + \alpha \vert f(x) \vert ^2 \right)- D(f, \bar{f})(x) \right] \,dx\nonumber \\&\quad = - \int \int _{\mathbb{R }^2} \left|{\frac{f(x) - f(y)}{x - y}}\right|^2 \,d\nu (y) \,dx \le 0. \end{aligned}$$
(25)

\(\square \)

Note that the Dirichlet form from [11] is \(\mathcal{D }(f, g) = \int _{\mathbb{R }} D(f, g)(x) \,d\nu (x)\), which is not the same as the right-hand side of Eq. (25).

Proposition 19

\(\alpha \partial _x + \partial _x L_\nu \) is \(C_0\)-dissipative on \(W^\infty \).

Proof

For \(f \in C_0(\mathbb{R })\) such that \(\vert f(x_0) \vert = \max _{x \in \mathbb{R }} \vert f(x) \vert \), a normalized tangent functional is \(\phi _f = \overline{f(x_0)} \, \delta _{x_0}\), that is, \(\phi _f(g) = \overline{f(x_0)} \, g(x_0)\). Then for \(f \in W^\infty \),

$$\begin{aligned}&\mathfrak R \phi _f[\alpha \partial _x f + \partial _x L_\nu f]\\&\quad = \mathfrak R \delta _{x_0} \left[ \alpha \overline{f(x_0)} f^{\prime }(x) + \overline{f(x_0)} \int _{\mathbb{R }} \left( \frac{f^{\prime }(x)}{x-y} - \frac{f(x) - f(y)}{(x-y)^2} \right) \,d\nu (y) \right] \\&\quad = - \mathfrak R \left( \overline{f(x_0)} \int _{\mathbb{R }} \frac{f(x_0) - f(y)}{(x_0 - y)^2} \,d\nu (y) \right) \\&\quad = - \int _{\mathbb{R }} \frac{\vert f(x_0) \vert ^2 - \mathfrak R (\overline{f(x_0)} f(y))}{(x_0 - y)^2} \,d\nu (y) \le 0, \end{aligned}$$

since

$$\begin{aligned} 2 \mathfrak R (\overline{f(x_0)} f^{\prime }(x_0)) = (\overline{f} f)^{\prime }(x_0) = (\vert f \vert ^2)^{\prime }(x_0) = 0. \end{aligned}$$

\(\square \)

5 The \(q\)-Brownian motion

Let \(q \in (-1,1)\). The \(q\)-Brownian motion \(\left\{ X(t) : t \ge 0\right\} \) is a non-commutative stochastic process constructed in [15]. The distribution of each \(X(t)\) is the \(q\)-Gaussian distribution

$$\begin{aligned} d\gamma _{t; q}(y)&= \frac{\sqrt{1 - q}}{\pi \sqrt{t}} \sin (\theta ) (q; q)_\infty \vert (q e^{2 i \theta }; q)_\infty \vert ^2 dy \nonumber \\&= (q; q)_\infty \vert (q e^{2 i \theta }; q)_\infty \vert ^2 d\gamma _t(\sqrt{1-q} \, y) \end{aligned}$$
(26)

supported on the interval

$$\begin{aligned} \left[- \frac{2 \sqrt{t}}{\sqrt{1-q}}, \frac{2 \sqrt{t}}{\sqrt{1-q}}\right]. \end{aligned}$$

Here we have used the change of variables (27) and the \(q\)-Pochhammer symbol

$$\begin{aligned} (a_1, \ldots , a_k; q)_\infty = \prod _{j=1}^k \prod _{i=0}^\infty (1 - a_j q^i). \end{aligned}$$
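
As a numerical sanity check (a sketch with illustrative parameters, not part of the paper): under the change of variables (27), \(d\gamma _{t; q}\) becomes \(\frac{2}{\pi } (q; q)_\infty \sin ^2(\theta ) \vert (q e^{2 i \theta }; q)_\infty \vert ^2 \,d\theta \) on \([0, \pi ]\), and quadrature confirms that it is a probability measure with mean \(0\), variance \(t\), and fourth moment \((2+q) t^2\).

```python
import cmath
import math

def qpoch(a, q, terms=150):
    """Truncated q-Pochhammer symbol (a; q)_infty."""
    p = complex(1)
    for i in range(terms):
        p *= 1 - a * q ** i
    return p

def qgauss_moment(k, t, q, n=4000):
    """k-th moment of gamma_{t;q}, via y = 2 sqrt(t) cos(theta) / sqrt(1 - q)."""
    c = (2 / math.pi) * qpoch(q, q).real
    s = 0.0
    for j in range(n + 1):
        th = math.pi * j / n
        weight = 0.5 if j in (0, n) else 1.0   # trapezoid rule
        y = 2 * math.sqrt(t) * math.cos(th) / math.sqrt(1 - q)
        dens = c * math.sin(th) ** 2 * abs(qpoch(q * cmath.exp(2j * th), q)) ** 2
        s += weight * y ** k * dens
    return s * math.pi / n

t, q = 2.0, 0.5
```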

According to Corollary 3.10 of [13], the \(q\)-Brownian motion is a Markov process, and moreover the \(q\)-Hermite polynomials are martingale polynomials with respect to it. Here the (Rogers) continuous \(q\)-Hermite polynomials

$$\begin{aligned} H_n(y, t; q) = t^{n/2} H_n(y/\sqrt{t}; q) \end{aligned}$$

are the monic orthogonal polynomials with respect to the measure (26),

$$\begin{aligned} \int _{\mathbb{R }} H_n(y, t; q) H_k(y, t; q) \,d\gamma _{t; q}(y) = \delta _{n=k} [n]_q! t^n. \end{aligned}$$

They also satisfy the three-term recursion relation

$$\begin{aligned} y H_n(y, t; q) = H_{n+1} (y, t; q) + [n]_q t H_{n-1}(y, t; q), \end{aligned}$$

where \([n]_q = 1 + q + \cdots + q^{n-1}\).
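
The orthogonality relation can be checked numerically from the recursion alone. The sketch below (illustrative parameters, not from the paper) generates \(H_n(y, t; q)\) by the three-term recursion and evaluates \(\int H_n H_k \,d\gamma _{t; q}\) by quadrature in the variable \(\theta \) of (27):

```python
import cmath
import math

def qpoch(a, q, terms=150):
    """Truncated q-Pochhammer symbol (a; q)_infty."""
    p = complex(1)
    for i in range(terms):
        p *= 1 - a * q ** i
    return p

def hermite_q(m, y, t, q):
    """Monic q-Hermite H_m(y, t; q) via y H_n = H_{n+1} + [n]_q t H_{n-1}."""
    if m == 0:
        return 1.0
    h_prev, h = 1.0, y
    for n in range(1, m):
        qn = (1 - q ** n) / (1 - q)            # [n]_q
        h_prev, h = h, y * h - qn * t * h_prev
    return h

def pair_integral(m, k, t, q, n=4000):
    """Integral of H_m H_k against gamma_{t;q}, via theta-quadrature."""
    c = (2 / math.pi) * qpoch(q, q).real
    s = 0.0
    for j in range(n + 1):
        th = math.pi * j / n
        weight = 0.5 if j in (0, n) else 1.0   # trapezoid rule
        y = 2 * math.sqrt(t) * math.cos(th) / math.sqrt(1 - q)
        dens = c * math.sin(th) ** 2 * abs(qpoch(q * cmath.exp(2j * th), q)) ** 2
        s += weight * hermite_q(m, y, t, q) * hermite_q(k, y, t, q) * dens
    return s * math.pi / n

t, q = 1.5, 0.4

def q_factorial(m):
    return math.prod((1 - q ** n) / (1 - q) for n in range(1, m + 1))
```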

Lemma 20

The transition operators

$$\begin{aligned} \mathcal{K }_{s, t; q} f(x) = \int _{\mathbb{R }} f(y) \mathcal{K }_{s, t; q}(x, dy) \end{aligned}$$

of the \(q\)-Brownian motion are

$$\begin{aligned} \mathcal{K }_{s,t; q}(x, dy)&= \frac{\sqrt{1 - q}}{\pi \sqrt{t}} (q; q)_\infty \sin (\theta ) \vert (q e^{2 i \theta };q)_\infty \vert ^2\\&\times \frac{(s/t; q)_\infty }{\vert (\sqrt{s/t} e^{i(\phi + \theta )}, \sqrt{s/t} e^{i(\phi - \theta )}; q)_\infty \vert ^2} dy, \\&= (q; q)_\infty \vert (q e^{2 i \theta };q)_\infty \vert ^2 \frac{(s/t; q)_\infty }{\vert (\sqrt{s/t} e^{i(\phi + \theta )}, \sqrt{s/t} e^{i(\phi - \theta )}; q)_\infty \vert ^2}\\&\times d\gamma _t(\sqrt{1 - q} \, y), \end{aligned}$$

where

$$\begin{aligned} x = \frac{2 \sqrt{s}}{\sqrt{1-q}} \cos (\phi ), \quad y = \frac{2 \sqrt{t}}{\sqrt{1-q}} \cos (\theta ), \quad \phi , \theta \in [0, \pi ]. \end{aligned}$$
(27)

Proof

Using the martingale property

$$\begin{aligned} (\mathcal{K }_{s,t; q} H_n(y, t; q))(x) = H_n(x, s; q) \end{aligned}$$
(28)

and the orthogonality and density of \(q\)-Hermite polynomials,

$$\begin{aligned} \mathcal{K }_{s, t; q}(x, dy)&= \sum _{n=0}^\infty \frac{1}{[n]_q! t^n} H_n(x, s; q) H_n(y, t; q) \,d\gamma _{t; q}(y) \\&= \sum _{n=0}^\infty \frac{(s/t)^{n/2}}{[n]_q!} H_n(x/\sqrt{s}; q) H_n(y/\sqrt{t}; q) \,d\gamma _{t; q}(y). \end{aligned}$$

The result now follows from the \(q\)-Mehler formula

$$\begin{aligned} \sum _{n=0}^\infty \frac{r^n}{[n]_q!} H_n(x; q) H_n(y; q) = \frac{(r^2; q)_\infty }{\vert (r e^{i(\phi + \theta )}, r e^{i(\phi - \theta )}; q)_\infty \vert ^2}. \end{aligned}$$
(29)

See Theorem 4.6 of [13] for more details. \(\square \)
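
The \(q\)-Mehler formula itself is easy to test numerically. In the sketch below (illustrative values, not from the paper), the left side of (29) is summed using the \(t = 1\) recursion \(x H_n = H_{n+1} + [n]_q H_{n-1}\), with \(x = 2 \cos (\phi )/\sqrt{1-q}\) and \(y = 2 \cos (\theta )/\sqrt{1-q}\) as in (27), and the right side uses truncated \(q\)-Pochhammer products:

```python
import cmath
import math

q, r, phi, theta = 0.3, 0.4, 0.7, 1.1
x = 2 * math.cos(phi) / math.sqrt(1 - q)
y = 2 * math.cos(theta) / math.sqrt(1 - q)

def qpoch(a, terms=200):
    """Truncated q-Pochhammer symbol (a; q)_infty."""
    p = complex(1)
    for i in range(terms):
        p *= 1 - a * q ** i
    return p

# Left side of (29): sum over n of r^n / [n]_q!  H_n(x; q) H_n(y; q)
N = 80
hx, hy = [1.0, x], [1.0, y]
for n in range(1, N):
    qn = (1 - q ** n) / (1 - q)                # [n]_q
    hx.append(x * hx[n] - qn * hx[n - 1])
    hy.append(y * hy[n] - qn * hy[n - 1])
lhs, fact = 0.0, 1.0
for n in range(N + 1):
    if n > 0:
        fact *= (1 - q ** n) / (1 - q)         # [n]_q!
    lhs += r ** n / fact * hx[n] * hy[n]

# Right side of (29)
rhs = (qpoch(r * r)
       / abs(qpoch(r * cmath.exp(1j * (phi + theta)))
             * qpoch(r * cmath.exp(1j * (phi - theta)))) ** 2).real
```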

Remark 13

According to [19], the Itô product formula for the \(q\)-Brownian motion has the form

$$\begin{aligned} \left(\int _0^\infty U(t) \sharp dX(t) \right) \left(\int _0^\infty V(t) \sharp dX(t) \right)&= \int _0^\infty A(t) d X(t) \left(B(t) \int _0^t V(s) \sharp dX(s) \right)\nonumber \\&+ \int _0^\infty \left(\int _0^t U(s) \sharp dX(s) \right) C(t) dX(t) D(t) \nonumber \\&+ \int _0^\infty A(t) \Gamma _q \left[B(t) C(t) \right] D(t) dt, \end{aligned}$$
(30)

where \(U = A \otimes B\), \(V = C \otimes D\) are adapted biprocesses satisfying a technical condition. Here \(\Gamma _q\) is a certain completely positive map on the von Neumann algebra \(W^*(\left\{ X(t), t \ge 0\right\} )\). In this paper we are only interested in the action of this map on the von Neumann algebra generated by a single operator \(X(t)\). This algebra is commutative and isomorphic to

$$\begin{aligned} L^\infty \left[- \frac{2 \sqrt{t}}{\sqrt{1-q}}, \frac{2 \sqrt{t}}{\sqrt{1-q}}\right]. \end{aligned}$$

On this algebra, the map is determined by the property that

$$\begin{aligned} \Gamma _{t;q}[H_n(x, t; q)] = q^n H_n(x, t; q) = H_n(q x, q^2 t; q), \end{aligned}$$
(31)

that is, it is a multiplier for the \(q\)-Hermite polynomials. Comparing with Eq. (28), we see that

$$\begin{aligned} \Gamma _{t; q}(x, dy) = \mathcal{K }_{q^2 t, t; q}(q x, dy). \end{aligned}$$
(32)

In particular, \(\Gamma _{t; q}\) is an integral operator

$$\begin{aligned} \Gamma _{t; q}(x, dy)&= \frac{\sqrt{1 - q}}{\pi \sqrt{t}} \sin (\theta ) (q^2; q)_\infty (q; q)_\infty \frac{\vert (q e^{2 i \theta };q)_\infty \vert ^2}{\vert (q e^{i(\phi + \theta )}, q e^{i(\phi - \theta )}; q)_\infty \vert ^2} \,dy \\&= (q^2; q)_\infty (q; q)_\infty \frac{\vert (q e^{2 i \theta };q)_\infty \vert ^2}{\vert (q e^{i(\phi + \theta )}, q e^{i(\phi - \theta )}; q)_\infty \vert ^2} d\gamma _t(\sqrt{1 - q} \, y). \end{aligned}$$

Proposition 21

(Functional Itô formula) Let \(f\) be a polynomial. Then

$$\begin{aligned} f(X(t)) = \int _0^t (\partial f)(X(s), X(s)) \,\sharp \, dX(s) + \int _0^t (\Delta _{s; q} f)(X(s)) ds. \end{aligned}$$
(33)

Here

$$\begin{aligned} \Delta _{s; q} f (x) = \int _{\mathbb{R }} \left( \partial _x \frac{f(x) - f(y)}{x - y} \right) \Gamma _{s; q}(x, dy) = \int _{\mathbb{R }} (\partial _x \partial f) (x,y) \Gamma _{s; q}(x, dy). \end{aligned}$$

Proof

First we show that all the terms in the functional Itô formula satisfy the technical condition of Theorem 3.2 from [19]. All the integrands are polynomials in \(X(s)\). So it suffices to show all the properties for the process \(\left\{ X(t)\right\} \). It is clearly adapted and uniformly bounded on the interval \([0,t]\). Now let

$$\begin{aligned} \mathcal{I } = \left\{ 0 = t_0 < t_1 < \cdots < t_n = t\right\} \end{aligned}$$

be a subdivision of \([0,t]\), and \(\delta (\mathcal{I })\) be the length of the largest interval in this subdivision. Let

$$\begin{aligned} X^{\mathcal{I }}(s) = \sum _{i=0}^{n-1} X(t_i) \mathbf 1 _{[t_i, t_{i+1})}(s). \end{aligned}$$

Then

$$\begin{aligned} \int _0^t \Vert X(s) - X^{\mathcal{I }}(s) \Vert _\infty ^2 ds = \sum _{i=0}^{n-1} \int _{t_i}^{t_{i+1}} \Vert X(s) - X(t_i) \Vert _\infty ^2 ds. \end{aligned}$$

But \(\Vert X(s) - X(t_i) \Vert _\infty ^2 = \Vert X(s - t_i) \Vert _\infty ^2 = \frac{4}{1-q} (s - t_i)\). Therefore the preceding sum is

$$\begin{aligned} \sum _{i=0}^{n-1} \int _{t_i}^{t_{i+1}} \frac{4}{1-q} (s - t_i) ds = \frac{2}{1-q} \sum _{i=0}^{n-1} (t_{i+1} - t_i)^2 \le \frac{2 t}{1-q} \delta (\mathcal{I }) \rightarrow 0 \end{aligned}$$

as \(\delta (\mathcal{I }) \rightarrow 0\).

The rest of the proof proceeds by induction. Assuming formula (33) for \(f\), and using the Itô product formula (30), we get

$$\begin{aligned} f(X(t)) X(t)&= \int _0^t (\partial f) (X(s), X(s)) \,\sharp (I \otimes X(s)) \,\sharp \,dX(s) + \int _0^t f(X(s)) \,dX(s)\\&+ \int _0^t [(I \otimes \Gamma _q) (\partial f)](X(s)) \,ds + \int _0^t \Delta _{s; q}(f)[X(s)] X(s) \,ds \\&= \int _0^t (\partial f) (X(s), X(s)) \,\sharp (I \otimes X(s)) \,\sharp \,dX(s) + \int _0^t f(X(s)) \,dX(s) \\&+ \int _0^t [(I \otimes \Gamma _q) (\partial f)](X(s)) \,ds \!+\! \int _0^t [(I \otimes \Gamma _q)(\partial _x \partial f)] (X(s)) X(s) \,ds. \end{aligned}$$

The result now follows for \(x f(x)\) by observing that

$$\begin{aligned} (\partial f)(x,y) y + f(x) = (\partial (x f)) (x,y), \end{aligned}$$

and

$$\begin{aligned}&(\partial f)(x,y) + \partial _x (\partial f)(x,y) x = \partial _x(x (\partial f)(x,y)) = \partial _x \left(\partial (x f)(x,y) - f(y) \right)\\&\quad = \partial _x \partial (x f)(x,y). \end{aligned}$$

\(\square \)

Corollary 22

On the domain \(\mathcal{P }\) of polynomials, the generators of the \(q\)-Brownian motion are

$$\begin{aligned} A_t f(x) = \Delta _{t; q} f(x) = \int (\partial _x \partial f)(x, y) \Gamma _{t; q}(x, dy). \end{aligned}$$

More explicitly,

$$\begin{aligned} A_t f(x)&= \int \left(\partial _x \frac{f(x) - f(y)}{x-y} \right) \frac{(q^2; q)_\infty }{\vert (q e^{i(\phi + \theta )}, q e^{i(\phi - \theta )}; q)_\infty \vert ^2} \;d\gamma _{t; q}(y) \\&= \int \left(\partial _x \frac{f(x) - f(y)}{x-y} \right) (q^2; q)_\infty (q; q)_\infty \frac{\vert (q e^{2 i \theta };q)_\infty \vert ^2}{\vert (q e^{i(\phi + \theta )}, q e^{i(\phi - \theta )}; q)_\infty \vert ^2} \,d\gamma _t(\sqrt{1-q}\, y) \end{aligned}$$

with the change of variables (27).

Proof

It follows from the functional Itô formula in Proposition 21 that for a polynomial \(f\),

$$\begin{aligned} f(X(t)) - \int _0^t \Delta _{s; q} f(X(s)) \,ds \end{aligned}$$

is a martingale. Therefore by Lemma 2, \(\Delta _{t; q}\) is the generator of the process at time \(t\). Note that since the support of \(\gamma _{t; q}\) is infinite, polynomials are determined by their values on it. The explicit formula follows. \(\square \)

Theorem 23

The operator \(\Delta _{t; q}\) described in Corollary 22 is the generator of the \(q\)-Brownian motion at time \(t\) on the domain \(W^\infty \subset C_0(\mathbb{R })\).

Proof

First, using the beginning of the proof of Proposition 14,

$$\begin{aligned} \vert \Delta _{t; q} f(x) \vert = \left|{\int _{\mathbb{R }} (\partial _x \partial f)(x, y) \Gamma _{t; q}(x, dy)}\right| \le \Vert f^{\prime \prime } \Vert _\infty \left|{\int _{\mathbb{R }} \Gamma _{t; q}(x, dy)}\right| \le \Vert f^{\prime \prime } \Vert _\infty , \end{aligned}$$

where in the last step we used Eq. (31) for \(n=0\). It is also clear that \(\mathcal{K }_{s, t; q}\) is a contraction on \(L^\infty (\mathbb{R }, \,dx)\). By a standard argument, polynomials are dense with respect to the \(W^\infty \) norm in \(C\left[- \frac{2 \sqrt{t}}{\sqrt{1-q}}, \frac{2 \sqrt{t}}{\sqrt{1-q}}\right]\). Finally, the strong continuity of \(\mathcal{K }_{s, t; q}\) and \(\Gamma _{t; q}\) on polynomials follows from the martingale property of the \(q\)-Hermite polynomials, and formula (31). It remains to apply Proposition 1. \(\square \)

Remark 14

Setting \(q=0\) in the formula in Corollary 22, we get

$$\begin{aligned} A_t f(x) = \int \left(\partial _x \frac{f(x) - f(y)}{x-y} \right) \,d\gamma _t(y), \end{aligned}$$

as expected. On the other hand, setting \(q=1\) in formula (31) we see that \(\Gamma _{t; 1}(x, dy) = \delta (x - y) \,dy\) is the identity operator. So in this case,

$$\begin{aligned} A_t f(x) = \int \left(\partial _x \frac{f(x) - f(y)}{x-y} \right) \delta (x-y) \,dy = \frac{1}{2} f^{\prime \prime }(x), \end{aligned}$$

again as expected.

6 Two-state free Brownian motions

In [5], we considered Brownian motions in the context of two-state free probability theory \((\mathcal{A }, \mathbb{E }, E)\). A priori, any process with two-state freely independent increments whose \(E\)-distributions form a free convolution semigroup \(\left\{ \nu _t\right\} \) and whose \(\mathbb{E }\)-distributions \(\mu _t\) satisfy

$$\begin{aligned} \mathcal{J }[\mu _t] = \nu _t, \quad \mu _t[x] = 0, \mu _t[x^2] = t \end{aligned}$$

can be considered a two-state free Brownian motion. We proved, however, that if we require \(\mathbb{E }\) to be a faithful normal state and \(E\) to be normal, then \(\nu _t\) has to be the semicircular distribution with (possibly non-zero) mean \(\alpha t\) and variance \(t\). Such a process is not Markov (in fact, \(\mathbb{E }\) is not tracial, and \(\mathbb{E }\)-preserving conditional expectations do not exist); however, its classical version is. We now construct generators of these processes.

Proposition 24

The generator of the two-state free Brownian motion \(\left\{ X(t)\right\} \) with parameter \(\alpha \) at time \(t\) is

$$\begin{aligned} \alpha (\partial _x - L_{\mu _t}) + \partial _x L_{\nu _t}, \end{aligned}$$

on the domain \(W^{\infty }\).

Proof

This result was proved in [5] for polynomial \(f\). Also,

$$\begin{aligned} \Vert (\alpha (\partial _x - L_{\mu _t}) + \partial _x L_{\nu _t}) f \Vert \le 2 \vert \alpha \vert \Vert f^{\prime } \Vert _\infty + \Vert f^{\prime \prime } \Vert _\infty \le 2 \vert \alpha \vert \Vert f \Vert _\infty + (2 \vert \alpha \vert + 1) \Vert f^{\prime \prime } \Vert _\infty . \end{aligned}$$

Since the measures \(\mu _t, \nu _t\) are all uniformly compactly supported, the full result follows as in Theorem 23. \(\square \)

Remark 15

(Itô formula) By the same methods as in [2, 12], for sufficiently nice \(f\),

$$\begin{aligned} f(X(t)) = f(0) + \int _0^t \partial f(X(s)) \sharp \,dX(s) + \int _0^t (\partial _x \otimes E) \partial f(X(s)) \,ds. \end{aligned}$$
(34)

Using Lemma 2.1 of [14] and the observation that the process \(\left\{ X(t)\right\} \) is \(\mathbb{E }\)-centered (see Remark 6 of [5] for more details), we see that

$$\begin{aligned} \mathbb E \left[f(X(t))\right] = f(0) + \int _0^t \mathbb E \left[\left( \alpha \partial _x - \alpha (1 \otimes \mathbb{E }) \partial + (\partial _x \otimes E) \partial \right) f(X(s))\right] \,ds. \end{aligned}$$

This result is consistent with the generator formula in the preceding proposition.