Abstract
We consider random walks and Lévy processes in a homogeneous group G. For all \(p > 0\), we completely characterise (almost) all G-valued Lévy processes whose sample paths have finite p-variation, and give sufficient conditions under which a sequence of G-valued random walks converges in law to a Lévy process in p-variation topology. In the case that G is the free nilpotent Lie group over \(\mathbb {R}^d\), so that processes of finite p-variation are identified with rough paths, we demonstrate applications of our results to weak convergence of stochastic flows and provide a Lévy–Khintchine formula for the characteristic function of the signature of a Lévy process. At the heart of our analysis is a criterion for tightness of p-variation for a collection of càdlàg strong Markov processes.
1 Introduction
This paper addresses several questions regarding Lévy processes and random walks in homogeneous groups, with a particular focus on applications to rough paths theory. Let G be a homogeneous group (in the sense of [15]) equipped with a sub-additive homogeneous norm and the corresponding left-invariant metric. We can summarise the three main results of the paper as follows.
-
(Theorem 5.1) Given a Lévy process \(\mathbf {X}\) in G, we determine (almost) all values of \(p > 0\) for which the sample paths of \(\mathbf {X}\) have almost surely finite p-variation.
-
(Theorem 5.5) We give sufficient conditions for a sequence of (interpolated and reparametrised) random walks in G to converge weakly to an (interpolated and reparametrised) Lévy process in G in p-variation topology.
-
(Theorem 5.17) In the case that \(G = G^N(\mathbb {R}^d)\), the step-N free nilpotent Lie group over \(\mathbb {R}^d\), we determine a Lévy–Khintchine formula for the characteristic function (in the sense of [11]) of the signature of the random rough path constructed from a Lévy process in G.
We apply the second of these results in the context of rough paths to show weak convergence of stochastic flows in several examples. Notably, we provide a significant generalisation of a result of Kunita [29] and of a related result of Breuillard, Friz and Huesmann [7].
We take a moment to discuss how our work relates to the appearance of càdlàg rough paths in the current literature. Friz and Shekhar [17] recently introduced a broad extension of rough paths theory to the càdlàg setting. Their work in particular generalises the notions of rough integration and RDEs, and significantly extends earlier work of Williams [36], who gave pathwise solutions to differential equations driven by Lévy processes in \(\mathbb {R}^d\).
As a family of càdlàg rough paths of particular interest, Lévy processes in \(G^N(\mathbb {R}^d)\) of finite p-variation for some \(1 \le p < N +1\) were studied in [17]. Such Lévy p-rough paths bear a resemblance to Markovian rough paths constructed from subelliptic Dirichlet forms on \(L^2(G^N(\mathbb {R}^d))\), first studied in [19] and recently in [10,11,12], in the sense that both processes may be viewed as stochastic rough paths whose evolution depends entirely on their first N iterated integrals.
The method we employ here to give meaning to càdlàg rough paths is to connect left- and right-limits with continuous paths and treat the resulting object as a classical rough path. We therefore do not address directly the concept of a càdlàg RDE in this paper, but emphasise that our methods relate closely to Marcus SDEs and that Theorems 5.1 and 5.17 can be seen as generalisations of two related results in [17] (discussed further in Sect. 5). We mention however that the method of proof used for our main results, which is based on approximating a Lévy process by a sequence of random walks, is different to the methods used in [17].
We also point out that our methods treat general interpolations, which depend arbitrarily on the endpoints of jumps, on the same footing as the simpler linear interpolation used in Marcus SDEs. Examples of interest of such non-linear interpolations date back to the works of McShane [33] and Sussmann [35] on approximations of Brownian motion (discussed further in Examples 5.12 and 5.14), and appear more recently in the work of Flint, Hambly and Lyons [14].
A crucial result for our analysis, which we believe to be of independent interest, is a criterion for tightness of p-variation of strong Markov processes taking values in a Polish space (Theorem 4.8). This result is a generalisation of the main result of Manstavičius [32], which provides a criterion for a strong Markov process to have sample paths of a.s. finite p-variation. Our proof of Theorem 4.8 is a simplification of the stopping times technique adopted in [32].
Finally, we mention that while most applications presented in this paper concern geometric rough paths, and thus only require consideration of the free nilpotent Lie group, we have attempted to make statements in their natural level of generality. In particular, we believe that our results may prove to be of interest for studying random walks and Lévy processes in the Butcher group, which correspond to branched rough paths in the sense of [21, 22] (see also Remark 3.2 below).
1.1 Outline of the paper
In Sect. 2 we discuss iid arrays and Lévy processes taking values in a general Lie group. Our only contribution in this section is the construction of a sequence of random walks \((\mathbf {X}^n)_{n \ge 1}\) associated with a Lévy process \(\mathbf {X}\) such that \(\mathbf {X}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) in the Skorokhod topology, and for which tightness of p-variation is simple to verify. In Sect. 3 we recall several preliminary facts about homogeneous groups and spaces of paths of finite p-variation.
Section 4 is devoted to the proof of Theorem 4.1, which shows tightness of p-variation for a collection of random walks in a homogeneous group. This is a central result used in the proofs of the three main aforementioned theorems, which we state and prove in Sect. 5. In Sect. 5.3.1 we also provide several applications of Theorem 5.5 to weak convergence of stochastic flows.
In “Appendix A” we introduce the concept of path functions, which serve to connect the left- and right-limits of càdlàg paths, and collect several technical results used throughout Sect. 5. In “Appendix B” we describe conditions under which sample paths of a Lévy process possess infinite p-variation (used to complete the proof of Theorem 5.1).
2 Iid arrays and Lévy processes in Lie groups
2.1 Notation
Throughout this section, we fix a Lie group G with Lie algebra \(\mathfrak g\), and identify \(\mathfrak g\) with the space of left-invariant vector fields on G. Let \(u_1, \ldots , u_m\) be a basis for \(\mathfrak g\). We equip \(\mathfrak g\) with the inner product for which \(u_1,\ldots , u_m\) is an orthonormal basis. For an element \(y \in \mathfrak g\) we write \(y = \sum _{i=1}^m y^iu_i\). When x is an element of a normed space, we denote its norm by |x|.
We further fix an open neighbourhood \(U \subset G\) of the identity \(1_G \in G\), such that U has compact closure and \(\exp : \mathfrak g\mapsto G\) is a diffeomorphism from a neighbourhood of zero in \(\mathfrak g\) onto U. Let \(\xi _i \in C^\infty _c(G, \mathbb {R})\) be smooth functions of compact support such that \(\log (x) = \sum _{i=1}^m \xi _i(x) u_i\) for all \(x \in U\) (that is, \(\xi _i\) provide exponential coordinates of the first kind on U). We denote \(\xi : G \mapsto \mathfrak g, \xi (x) = \sum _{i=1}^m \xi _i(x) u_i\).
For a metric space E, denote by D([0, T], E) the space of càdlàg functions \(\mathbf {x}: [0,T] \mapsto E\) equipped with the Skorokhod topology (see, e.g., [3, Section 12]). We shall use the symbol o to denote spaces of paths whose starting point is the identity element \(1_G\). For example \(D_o([0,T],G)\) denotes the set of all \(\mathbf {x}\in D([0,T],G)\) such that \(\mathbf {x}_0 = 1_G\).
2.2 Preliminaries on iid arrays and Lévy processes
An array in G is a sequence of finite collections of G-valued random variables \(\left( X_{n1},\ldots , X_{nn}\right) _{n \ge 1}\). We call the array iid if, for every \(n \ge 1\), \(X_{n1},\ldots , X_{nn}\) are iid. We will always suppose that an iid array \(X_{nj}\) is infinitesimal, i.e., \(\lim _{n \rightarrow \infty } \mathbb {P} \left[ X_{n1} \notin V \right] = 0\) for every neighbourhood V of \(1_G\). Furthermore, for all \(n \ge 1\) we let
and for all \(i,j\in \{1,\ldots , m\}\)
For a collection of elements \(x_{1},\ldots , x_{n}\) in G, we define the associated walk \(\mathbf {x}\in D_o([0,1],G)\) by
and for an array \(X_{nj}\), we refer to the associated random walk \(\mathbf {X}^n\) to mean the sequence of associated walks built from the collections \((X_{n1},\ldots ,X_{nn})\).
Recall that a (left) Lévy process in G is a \(D_o([0,T], G)\)-valued random variable \(\mathbf {X}\) with independent and stationary (right) increments. We refer to Liao [30] for further details.
We call a Lévy triplet (or simply triplet) a collection \((A, B, \Pi )\) of an \(m\times m\) covariance matrix \((A^{i,j})_{i,j = 1}^m\), an element \(B = \sum _{i=1}^m B^i u_i \in \mathfrak g\), and a Lévy measure \(\Pi \) on G (see [30, p. 12]).
A classical theorem of Hunt [26] asserts that for every Lévy process \(\mathbf {X}\) in G, there exists a unique triplet \((A,B,\Pi )\) such that the generator of \(\mathbf {X}\) is given for all \(f \in C^2_0(G)\) and \(x \in G\) by
$$\begin{aligned} \mathcal {L}f(x) = \frac{1}{2}\sum _{i,j=1}^m A^{i,j} u_iu_jf(x) + \sum _{i=1}^m B^i u_if(x) + \int _G \Big ( f(xy) - f(x) - \sum _{i=1}^m \xi _i(y)u_if(x) \Big ) \Pi (dy). \end{aligned}$$
Conversely, every Lévy triplet gives rise to a unique Lévy process.
We will heavily use a characterisation due to Feinsilver [13] of when a G-valued random walk converges in law to a Markov process as a \(D_o([0,1], G)\)-valued random variable. The following is a special case of the main results of [13].
Theorem 2.1
(Feinsilver [13]). Let \(X_{nj}\) be an iid array of G-valued random variables and \(\mathbf {X}^n\) the associated random walk. Denote by \(F_n\) the probability measure on G associated with \(X_{n1}\). Let \(\mathbf {X}\) be a Lévy process in G with triplet \((A,B,\Pi )\).
Then \(\mathbf {X}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as \(D_o([0,1],G)\)-valued random variables if and only if
-
(1)
\(\lim _{n \rightarrow \infty } n F_n(f) = \Pi (f)\) for every \(f \in C_b(G)\) which is identically zero on a neighbourhood of \(1_G\),
-
(2)
\(\lim _{n \rightarrow \infty } n B_n = B\), and
-
(3)
for all \(i,j\in \{1,\ldots , m\}\),
$$\begin{aligned} \lim _{n \rightarrow \infty } nA^{i,j}_n = A^{i,j} + \int _G \xi _i(x)\xi _j(x) \Pi (dx). \end{aligned}$$
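In the abelian case \(G = \mathbb {R}\), conditions (1)–(3) can be checked numerically for a concrete array. The following sketch is our own illustration, not part of the paper: it assumes \(\xi (x) = x\) near the identity and takes \(X_{n1} = b/n + Z/\sqrt{n}\) with \(Z \sim N(0,\sigma ^2)\), so that the limit is Brownian motion with drift and \(\Pi = 0\); all parameter values are illustrative.

```python
import numpy as np

# Monte Carlo check of Feinsilver's conditions for G = R with xi(x) = x near
# the identity and triplet (A, B, Pi) = (sigma^2, b, 0).  The array is
# X_{n1} = b/n + Z/sqrt(n), Z ~ N(0, sigma^2); names and values are
# illustrative assumptions, not the paper's notation.
rng = np.random.default_rng(0)
n, N = 1000, 200_000
b, sigma = 0.5, 1.0

X = b / n + sigma * rng.standard_normal(N) / np.sqrt(n)  # samples of X_{n1}

nB_n = n * X.mean()   # condition (2): should approach B = b = 0.5
nA_n = n * X.var()    # condition (3): should approach A + 0 = sigma^2 = 1.0
# condition (1): n F_n(f) -> Pi(f) = 0 for f vanishing near 1_G; here we use
# f = indicator of |x| > 0.2, so jumps of that size must be vanishingly rare.
nF_n = n * (np.abs(X) > 0.2).mean()
print(nB_n, nA_n, nF_n)
```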
The following notion of a scaling function will be used throughout the paper.
Definition 2.2
(Scaling function). A continuous bounded function \(\theta : G \rightarrow \mathbb {R}\) is called a scaling function if
-
(i)
\(\theta (1_G) = 0\),
-
(ii)
\(\theta (x) > 0\) for all \(x \ne 1_G\),
-
(iii)
there exists \(C > 0\) such that \(|\xi |^2 \le C\theta \), and
-
(iv)
there exists \(c > 0\) such that \(\theta (x) > c\) for all \(x \in G{\setminus } U\).
Let \(X_{nj}\) be an iid array in G. We say that \(\theta \) scales the array \(X_{nj}\) if
The importance of the above definition is that, given a scaling function \(\theta \) which scales \(X_{nj}\), the rate at which \(\theta \) decays at \(1_G\) will determine the values of \(p > 0\) for which the p-variation of the associated random walk is tight (Theorem 4.1).
Example 2.3
In the case \(G = \mathbb {R}^d\), the prototypical example of a scaling function is \(1 \wedge |\cdot |^2\). For a general Lie group G, the example extends as follows: let \(c > 0\) be sufficiently small such that \(W \,{:=}\, \{\exp (y) \mid y \in \mathfrak g, |y| \le c\}\) is contained in U. Then
is a scaling function.
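In code, the \(\mathbb {R}^d\) prototype and properties (i)–(iv) of Definition 2.2 can be spot-checked as follows. This is a sketch of ours, under the assumption that U is the open unit ball and \(\xi (x) = x\) on U, extended boundedly outside.

```python
import numpy as np

# The prototypical scaling function theta(x) = 1 ∧ |x|^2 on G = R^d, with
# numerical spot checks of properties (i)-(iv) of Definition 2.2.  We assume
# U is the open unit ball and xi(x) = x on U, cut off to stay bounded.

def theta(x):
    return min(1.0, float(np.dot(x, x)))

def xi(x):
    r = np.linalg.norm(x)
    return x if r < 1 else x / r   # bounded extension outside U

rng = np.random.default_rng(1)
pts = list(2.0 * rng.standard_normal((1000, 3)))

ok_i = theta(np.zeros(3)) == 0.0                                      # (i)
ok_ii = all(theta(p) > 0 for p in pts)                                # (ii)
ok_iii = all(np.dot(xi(p), xi(p)) <= theta(p) + 1e-12 for p in pts)   # (iii), C = 1
ok_iv = all(theta(p) > 0.5 for p in pts if np.linalg.norm(p) >= 1.0)  # (iv), c = 1/2
print(ok_i, ok_ii, ok_iii, ok_iv)
```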
Remark 2.4
Suppose that \(\theta \) is defined by (2.1) and that \(X_{nj}\) is an iid array in G such that the associated random walk converges in law to a Lévy process. Then a simple consequence of Theorem 2.1 is that \(\theta \) scales the array \(X_{nj}\).
2.3 Approximating walk
In this subsection, given a Lévy process \(\mathbf {X}\) in G, we construct an iid array \(X_{nj}\) for which the associated random walk \(\mathbf {X}^n\) converges in law to \(\mathbf {X}\). The array \(X_{nj}\) has the advantage that it takes values in either the support of the Lévy measure of \(\mathbf {X}\), or in a set which shrinks to the identity as \(n \rightarrow \infty \). This makes the walk \(\mathbf {X}^n\) significantly easier to analyse than the increments of \(\mathbf {X}\) itself and will be used in the proofs of Theorems 5.1 and 5.17.
Throughout this subsection, let \(\mathbf {X}\) be a Lévy process in G with triplet \((A, B, \Pi )\). For \(i\in \{1, \ldots , m\}\) define
Define also the sets of indices
For \(k \in \widetilde{K}\) define
and let .
For n sufficiently large so that \(\Pi (U^c) < n/2\), let
Define and note that \(w_n \,{:=}\, \Pi \{U_n^c\} \le n/2\). Remark that \(\lim _{n \rightarrow \infty } h_n = 0\) which implies that \(U_n\) shrinks to \(1_G\) as \(n \rightarrow \infty \).
Define on G the probability measure \(\mu _n(dx) \,{:=}\, w_n^{-1}\mathbf 1 \{x \in U_n^c\} \Pi (dx)\). Observe that by Hölder’s inequality, for all \(q \ge 1\)
For every \(n \ge 1\), let \(Y_n = Y_n^1u_1 + \cdots + Y_n^m u_m\) be a \(\mathfrak g\)-valued random variable such that for all \(k \in \widetilde{K}\)
and for all \(k \notin \widetilde{K}\)
and with covariances for all \(i,j\in \{1,\ldots , m\}\)
In particular, note that \(Y_n^i = b_n^i\) a.s. for all \(i \notin J\). Remark that setting \(q = 2\) in (2.2) implies
from which it follows that \(\sup _{n \ge 1}n|b^i_n| < \infty \). Moreover, it holds that
It follows that we can choose \(Y_n\) such that \(\exp (Y_n)\) has support in a neighbourhood \(V_n\) of \(1_G\), such that \(V_n\) shrinks to \(1_G\) as \(n \rightarrow \infty \). Denote by \(\nu _n\) the probability measure of the G-valued random variable \(\exp (Y_n)\).
Finally, let \(X_{n1}\) be the G-valued random variable associated to the probability measure \((w_n/n) \mu _n + (1-w_n/n) \nu _n\), and let \(X_{n2}, \ldots , X_{nn}\) be independent copies of \(X_{n1}\).
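For \(G = \mathbb {R}\) the construction can be sketched as follows. This is our own simplified illustration (not the paper's exact construction) with a finite Lévy measure supported away from 0, so that the shrinking sets \(U_n\) play no role; the law of \(X_{n1}\) picks a genuine jump from the normalised Lévy measure with probability \(w_n/n\) and otherwise a small increment concentrating at the identity. All parameter values are assumptions.

```python
import numpy as np

# Simplified sketch of the approximating walk for G = R with triplet
# (sigma^2, b, Pi), where Pi = lam * Uniform([1, 2]) has finite mass and is
# supported away from 0.  X_{n1} is a mixture: a jump from mu_n = Pi/lam
# with probability w_n/n = lam/n, otherwise a small Gaussian increment nu_n.
rng = np.random.default_rng(2)
b, sigma, lam = 0.3, 1.0, 2.0
n, N = 500, 20_000

def walk_endpoint():
    """X^n_1: the product (in R, the sum) of n iid mixture increments."""
    big = rng.random(n) < lam / n                                # jump events
    jumps = rng.uniform(1.0, 2.0, size=n)                        # mu_n
    small = b / n + sigma * rng.standard_normal(n) / np.sqrt(n)  # nu_n
    return np.sum(np.where(big, jumps, small))

ends = np.array([walk_endpoint() for _ in range(N)])
# Moments of the limiting Levy process at time 1:
mean_true = b + lam * 1.5          # E[X_1] = b + lam * E[J],     E[J] = 3/2
var_true = sigma**2 + lam * 7 / 3  # Var[X_1] = sigma^2 + lam * E[J^2], E[J^2] = 7/3
print(ends.mean(), mean_true, ends.var(), var_true)
```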
Consider the random walk \(\mathbf {X}^n\) associated with \(X_{nj}\). Then a straightforward application of Theorem 2.1 implies that \(\mathbf {X}^n\,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as \(D_o([0,1],G)\)-valued random variables. We also record the following two simple lemmas whose proofs we omit.
Lemma 2.5
Let \(0 < q_1, \ldots , q_m \le 2\) be real numbers such that \(q_i \notin \Gamma _i\) for all \(i \in \{1,\ldots , m\}\), \(q_i = 2\) for all \(i \in J\), and \(q_i \ge 1\) for all \(i \in K\). Let \(\theta \) be a scaling function such that \(\theta (x) = \sum _{i = 1}^m |\xi _i(x)|^{q_i}\) for x in a neighbourhood of \(1_G\). Then \(\theta \) scales the array \(X_{n1},\ldots , X_{nn}\).
Lemma 2.6
Let \(\theta \) be a scaling function on G which scales \(X_{nj}\). Let V be a neighbourhood of \(1_G\), and let \(f : supp (\Pi ) \cup V \mapsto \mathbb {R}\) be a bounded measurable function such that f is continuous on \( supp (\Pi )\). Furthermore, suppose that
Then for all n sufficiently large, \(X_{n1} \in supp (\Pi ) \cup V\) a.s., and
where
3 Homogeneous groups
In this section we collect several preliminary facts about homogeneous groups. For details, we refer to [15] and [25].
Throughout this section, we fix a homogeneous group G. That is, G is a nilpotent, connected, and simply connected Lie group endowed with a one-parameter family of dilations (group automorphisms) \((\delta _\lambda )_{\lambda > 0}\), which, upon identifying G with its Lie algebra \(\mathfrak g\) by the \(\exp \) map, is given by
for a basis \(u_1,\ldots , u_m\) of \(\mathfrak g\) and real numbers \(d_m \ge \cdots \ge d_1 \ge 1\). We equip G with a sub-additive homogeneous norm \(\left| \left| \cdot \right| \right| \) which induces a left-invariant metric \(d(x,y) = \left| \left| x^{-1}y \right| \right| \) (see [25]).
For the remainder of the section, we identify G with \(\mathfrak g\) by the diffeomorphism \(\exp : \mathfrak g\mapsto G\), and write \(x = \sum x^i u_i\) for \(x \in G\). Note that \(\left| \left| \left| x \right| \right| \right| = \sum _{i=1}^m |x^i|^{1/d_i}\) is also a homogeneous norm on G and thus equivalent to \(\left| \left| \cdot \right| \right| \).
For a multi-index \(\alpha = (\alpha ^1,\ldots , \alpha ^m)\), \(\alpha ^i \ge 0\), we define \(\deg (\alpha ) = \sum _{i=1}^m \alpha ^i d_i\), and for \(x \in G\), write \(x^\alpha = (x^1)^{\alpha ^1}\ldots (x^m)^{\alpha ^m}\). By the Campbell–Baker–Hausdorff (CBH) formula, for all \(i \in \{1,\ldots , m\}\) there exist constants \(C^i_{\alpha ,\beta }\) such that
where the (finite) sum runs over all non-zero multi-indices \(\alpha ,\beta \) such that \(\deg (\alpha ) + \deg (\beta ) = d_i\).
Example 3.1
Recall that a Lie group G is called graded if its Lie algebra is endowed with a decomposition
such that \([\mathfrak g^i,\mathfrak g^j] \subseteq \mathfrak g^{i+j}\), where \(\mathfrak g^k = 0\) for \(k > N\) (and where we allow the possibility that \(\mathfrak g^k = 0\) for some \(k \le N\)). Every graded Lie group can be equipped with a natural family of dilations \((\delta _{\lambda })_{\lambda > 0}\), and thus a homogeneous structure, for which \(d_1,\ldots , d_m\) are rational numbers with \(d_1 = 1\), given by \(\delta _\lambda (u) = \lambda ^{k/\alpha }u\) for all \(u \in \mathfrak g^k\), where \(\alpha = \min \{k \ge 1 \mid \mathfrak g^k \ne 0\}\) (and conversely, if \(d_1,\ldots , d_m\) are rational for a homogeneous group G, then G can be given a graded structure [15, p. 5]).
Recall also that a graded Lie group G is called a step-N Carnot group (or stratified group in the terminology of [15]) if the decomposition (3.2) further satisfies \([\mathfrak g_i,\mathfrak g_j] = \mathfrak g_{i+j}\), where \(\mathfrak g_k = 0\) for \(k > N\). Every Carnot group is a homogeneous group with a natural family of dilations given by \(\delta _\lambda (u) = \lambda ^k u\) for all \(u \in \mathfrak g_k\) (so that \(d_i \in \{1,\ldots , N\}\)), and for which the metric d can be taken as the Carnot–Carathéodory distance [2, p. 38].
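As a concrete instance of these structures, the following sketch (an illustration of ours, not from the paper) implements the step-2 free nilpotent group over \(\mathbb {R}^2\) (the Heisenberg group) in exponential coordinates, with dilation exponents \(d = (1,1,2)\) and the homogeneous norm \(\left| \left| \left| x \right| \right| \right| = \sum _i |x^i|^{1/d_i}\); homogeneity of the norm and left-invariance of the induced metric can then be verified numerically.

```python
import numpy as np

# The Heisenberg group in exponential coordinates x = (x1, x2, x3) with
# [u1, u2] = u3.  At step 2 the CBH formula is exact and gives the product
# below; delta_lambda scales the i-th coordinate by lambda^{d_i}.

def mult(x, y):
    return np.array([x[0] + y[0],
                     x[1] + y[1],
                     x[2] + y[2] + 0.5 * (x[0] * y[1] - x[1] * y[0])])

def inv(x):
    return -np.asarray(x, dtype=float)   # exp(X)^{-1} = exp(-X)

def dil(lam, x):
    return np.array([lam * x[0], lam * x[1], lam ** 2 * x[2]])

def hnorm(x):
    return abs(x[0]) + abs(x[1]) + abs(x[2]) ** 0.5   # sum_i |x^i|^{1/d_i}

def dist(x, y):
    return hnorm(mult(inv(x), y))        # left-invariant metric d(x, y)

x = np.array([1.0, 2.0, 3.0])
y = np.array([-1.0, 0.5, 2.0])
z = np.array([0.3, -0.7, 1.1])

homog = abs(hnorm(dil(2.0, x)) - 2.0 * hnorm(x))           # homogeneity of the norm
left_inv = abs(dist(mult(z, x), mult(z, y)) - dist(x, y))  # left-invariance of d
assoc = np.max(np.abs(mult(mult(x, y), z) - mult(x, mult(y, z))))
print(homog, left_inv, assoc)
```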
The Carnot group which will be particularly relevant in Sect. 5.3 for applications in rough paths theory is the step-N free nilpotent Lie group \(G^N(\mathbb {R}^d)\) over \(\mathbb {R}^d\), which we recall is, by definition, the space in which geometric p-rough paths (for \(\lfloor p \rfloor = N\)) take values. For further details concerning the theory of geometric rough paths, we refer to [18].
Remark 3.2
Another homogeneous group which plays an important role in the theory of rough paths is the step-N Butcher group \(\mathcal {G}^N(\mathbb {R}^d)\) over \(\mathbb {R}^d\) (see [21, 22]). Recall that \(G^N(\mathbb {R}^d)\) is canonically embedded in \(\mathcal {G}^N(\mathbb {R}^d)\), and that \(\mathcal {G}^N(\mathbb {R}^d)\) admits a natural grading under which \(\mathcal {G}^N(\mathbb {R}^d)\) is not a Carnot group (see [22, Remark 2.15]).
The group \(\mathcal {G}^N(\mathbb {R}^d)\) is, by definition, the space in which branched rough paths take values (and these form a genuine extension of the notion of geometric rough paths). We mention that branched rough paths were recently studied in [8] to give a rough path perspective on renormalisation of stochastic PDEs in the theory of regularity structures [9, 23]. Lévy processes in \(\mathcal {G}^N(\mathbb {R}^d)\) in particular form a family of stationary stochastic processes closed under appropriate renormalisation maps (see [8, Section 4]).
3.1 Paths of finite p-variation
For \(p > 0\) and functions \(\mathbf {x},\mathbf {y}: [s,t] \mapsto G\), define the p-variation distance
$$\begin{aligned} d_{p -var ;[s,t]}(\mathbf {x},\mathbf {y}) \,{:=}\, \sup _{\mathcal {D}} \Big ( \sum _{t_i \in \mathcal {D}} d(\mathbf {x}_{t_i,t_{i+1}}, \mathbf {y}_{t_i,t_{i+1}})^p \Big )^{1/p}, \end{aligned}$$
where the supremum runs over all partitions \(\mathcal {D}\) of [s, t] and where we have used the shorthand notation \(\mathbf {x}_{u,v} = \mathbf {x}_u^{-1} \mathbf {x}_v\). Define also the p-variation of \(\mathbf {x}\) by \(\left| \left| \mathbf {x} \right| \right| _{p -var ;[s,t]} = d_{p -var ;[s,t]}(\mathbf {x},1_G)\), and
and
We will drop the reference to the interval [s, t] when it is clear from the context. For convenience, we record the following standard interpolation estimates.
Lemma 3.3
-
(1)
For all \(p'> p > 0\) and \(\mathbf {x},\mathbf {y}: [s,t] \mapsto G\)
$$\begin{aligned} d_{p' -var }(\mathbf {x},\mathbf {y}) \le 2^{\max \{0,(1-p)/p'\}}(\left| \left| \mathbf {x} \right| \right| _{p -var } + \left| \left| \mathbf {y} \right| \right| _{p -var })^{p/p'} d_{0}(\mathbf {x},\mathbf {y})^{1-p/p'}. \end{aligned}$$ -
(2)
There exists \(C > 0\) such that for all \(\mathbf {x},\mathbf {y}: [s,t] \mapsto G\) with \(\mathbf {x}_s = \mathbf {y}_s\)
$$\begin{aligned} d_{\infty }(\mathbf {x},\mathbf {y}) \le d_{0}(\mathbf {x},\mathbf {y}) \le C \max \{d_{\infty }(\mathbf {x},\mathbf {y}), d_{\infty }(\mathbf {x},\mathbf {y})^{1/d_m}(\left| \left| \mathbf {x} \right| \right| _{\infty } + \left| \left| \mathbf {y} \right| \right| _{\infty })^{1-1/d_m}\}. \end{aligned}$$
Proof
(1) is obvious. To show (2), it follows from an application of the CBH formula (3.1) and the equivalence of \(\left| \left| \cdot \right| \right| \) and \(\left| \left| \left| \cdot \right| \right| \right| \) that for all \(g,h \in G\)
The conclusion now follows by the identical argument used to prove [18, Proposition 8.15]. \(\square \)
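In the abelian case \(G = \mathbb {R}\) (where \(\mathbf {x}_{u,v} = \mathbf {x}_v - \mathbf {x}_u\)), the supremum defining the p-variation of a finitely sampled path can be computed exactly by dynamic programming over partitions through the sample points. The following sketch is ours, not part of the paper.

```python
# p-variation of a discrete real-valued path: best[i] is the supremum of
# sum_j |x_{t_{j+1}} - x_{t_j}|^p over all partitions ending at index i;
# the O(n^2) recursion tries every possible last partition point j.

def p_variation(x, p):
    """Return ||x||_{p-var}^p for the discrete path x."""
    best = [0.0] * len(x)
    for i in range(1, len(x)):
        best[i] = max(best[j] + abs(x[i] - x[j]) ** p for j in range(i))
    return best[-1]

path = [0.0, 1.0, 0.5, 2.0, 1.5, 3.0]
print(p_variation(path, 1.0))  # 5.0: for p = 1 the full partition is optimal
print(p_variation(path, 2.0))  # 9.0: for p = 2 the single jump 0 -> 3 dominates
```

Note how increasing p coarsens the optimal partition: for p = 1 every sample point contributes, while for p = 2 the supremum is attained by the single increment from start to end.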
For \(p \ge 1\), let \(C^{p -var }([0,T],G)\) denote the space of continuous paths of finite p-variation equipped with the metric \(d_{p -var ;[0,T]}\). Note that \(C^{p -var }([0,T],G)\) is a complete metric space due to the lower semi-continuity of \(\mathbf {x}\mapsto \left| \left| \mathbf {x} \right| \right| _{p -var ;[0,T]}\) (under pointwise convergence).
Note that, except in trivial cases, \(C^{p -var }([0,T],G)\) is non-separable. However, it is not difficult to show that \(C^{p' -var }([0,T],G)\) contains a separable subset \(C^{0,p' -var }([0,T],G)\) which contains \(C^{p -var }([0,T],G)\) for all \(1 \le p < p'\). Indeed, let \(C^g([0,T],G)\) denote the space of curves which are concatenations of one-parameter subgroups of G, i.e., all curves \(\gamma : [0,T] \mapsto G\) of the form
where \(\mathcal {D}= (t_0 = 0< t_1< \cdots < t_n = T)\) is a partition of [0, T] and \(x_1,\ldots , x_n \in G\) (and where for clarity we have broken the convention of identifying G with \(\mathfrak g\)). Then for \(p \ge 1\), define \(C^{0,p -var }([0,T],G)\) as the closure of \(C^{g}([0,T],G) \cap C^{p -var }([0,T],G)\) in \(C^{p -var }([0,T],G)\).
Remark 3.4
In the case that G is a Carnot group with decomposition (3.2), \(C^{0,p -var }([0,T],G)\) is precisely the closure of the horizontal lifts of smooth paths \(\gamma \in C^\infty ([0,T], \mathfrak g^1)\).
To show the claimed properties of \(C^{0,p -var }([0,T],G)\), note that for \(x \in G\), the path \(\gamma : t \mapsto \exp (t\log x)\) has finite p-variation if and only if \(x^i = 0\) for all \(i \in \{1,\ldots , m\}\) such that \(d_i > p\), in which case there exists \(C_1 = C_1(p,G)>0\) such that \(\left| \left| \gamma \right| \right| _{p -var ;[0,1]} \le C_1 \left| \left| x \right| \right| \). For \(\mathbf {x}: [0,T] \mapsto G\), and a partition \(\mathcal {D}\subset [0,T]\), let \(\mathbf {x}^\mathcal {D}\in C^g([0,T], G)\) be the interpolation of \(\mathbf {x}\) along \(\mathcal {D}\) defined as \(\gamma \) in (3.3) with \(x_k = \mathbf {x}^{-1}_{t_{k-1}}\mathbf {x}_{t_k}\) and \(\mathbf {x}^\mathcal {D}_0 = \mathbf {x}_0\). One can then readily show (e.g., by Lemma A.5) that \(\sup _{\mathcal {D}\subset [0,T]}\left| \left| \mathbf {x}^\mathcal {D} \right| \right| _{p -var } \le C_2\left| \left| \mathbf {x} \right| \right| _{p -var }\). Hence for all \(\mathbf {x}\in C^{p -var }([0,T],G)\) and \(p' > p \ge 1\), by Lemma 3.3, \(d_{p' -var ;[0,T]}(\mathbf {x}^\mathcal {D},\mathbf {x}) \rightarrow 0\) as \(|\mathcal {D}| \rightarrow 0\), which shows that \(C^{p -var }([0,T],G) \subseteq C^{0,p' -var }([0,T],G)\) as claimed. The fact that \(C^{0,p -var }([0,T],G)\) is separable (and thus Polish) is also easy to show (e.g., by considering \(\gamma \in C^g([0,T],G)\) with rational coordinates and using a similar argument as the proof of Lemma A.5).
The following result will be important in our classification of G-valued Lévy processes of finite p-variation.
Proposition 3.5
Let \(p > 0\) and \((\mathbf {X}_n)_{n \ge 1}\) be a sequence of D([0, T], G)-valued random variables such that \((\left| \left| \mathbf {X}_n \right| \right| _{p -var ;[0,T]})_{n \ge 1}\) is a tight collection of real random variables. Suppose that \(\mathbf {X}_n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as D([0, T], G)-valued random variables.
-
(1)
It holds that \(\left| \left| \mathbf {X} \right| \right| _{p -var ;[0,T]} < \infty \) a.s.
-
(2)
Suppose further that \(p \ge 1\) and that \(\mathbf {X}_n, \mathbf {X}\) are C([0, T], G)-valued random variables. Then for all \(p' > p\), \(\mathbf {X}_n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as \(C^{0,p' -var }([0,T], G)\)-valued random variables.
Proof
-
(1)
Note that \(\mathbf {x}\mapsto \left| \left| \mathbf {x} \right| \right| _{p -var }\) is a lower semi-continuous function on D. Since D([0, T], G) is Polish, we may apply the Skorokhod representation theorem [27, Theorem 3.30], from which the conclusion easily follows.
-
(2)
It follows from Lemma 3.3 that every set of the form \(A \cap \{\mathbf {x}\in C([0,T],G) \mid \left| \left| \mathbf {x} \right| \right| _{p -var ;[0,T]} < R\}\), where \(R > 0\) and A is a compact subset of C([0, T], G) (for uniform topology), is a compact subset of \(C^{0,p' -var }([0,T],G)\). Hence \((\mathbf {X}_n)_{n \ge 1}\) is a tight collection of \(C^{0,p' -var }([0,T],G)\)-valued r.v.’s, and so converges in law along a subsequence to some \(C^{0,p' -var }([0,T],G)\)-valued r.v. \(\widetilde{\mathbf {X}}\). Since \(\mathbf {X}_n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as C([0, T], G)-valued r.v.’s, it necessarily follows that \(\widetilde{\mathbf {X}} \,{\buildrel \mathcal {D}\over =}\,\mathbf {X}\), which concludes the proof.
\(\square \)
Remark 3.6
A version of Helly’s selection principle (see [34, Theorem 2.4]) states that any uniformly bounded sequence of functions \(\mathbf {x}^n : [0,T] \mapsto G\) for which \(\sup _{n \ge 1} \left| \left| \mathbf {x}^n \right| \right| _{p -var ;[0,T]} < \infty \) for some \(p \ge 1\), has a subsequence such that \(\mathbf {x}^{n_k} \rightarrow \mathbf {x}\) pointwise.
4 p-variation tightness of random walks
We continue to use the notation of the previous section. Consider an iid array \(X_{nk}\) in the homogeneous group G, and let \(\mathbf {X}^n\) be the associated random walk. The main result of this section is Theorem 4.1, which provides sufficient conditions under which \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is tight. In its simplest form, Theorem 4.1 implies that whenever \(\mathbf {X}^n\) converges in law to a Lévy process in G, and the array \(X_{nk}\) is scaled by a scaling function \(\theta \), then \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is tight for all \(p > \kappa \), where \(\kappa > 0\) depends only on the scaling function \(\theta \).
Let \(\xi _1,\ldots ,\xi _m \in C^\infty _c(G)\) and \(\xi : G \mapsto \mathfrak g\) be smooth functions and U a neighbourhood of \(1_G\) for which the conditions at the start of Sect. 2 are satisfied with respect to the basis \(u_1,\ldots , u_m\).
Theorem 4.1
Let \(X_{n1}, \ldots , X_{nn}\) be an iid array of G-valued random variables and \(\mathbf {X}^n\) the associated random walk. For every \(i\in \{1,\ldots , m\}\), let \(0 < q_i \le 2\) be a real number, and define
Consider the following conditions:
-
(A)
for every fixed \(h \in [0,1]\), \((\mathbf {X}^n_h)_{n \ge 1}\) is a tight collection of G-valued random variables;
-
(B)
for all \(i \in \{1,\ldots , m\}\), \(\sup _{n \ge 1} n\left| \mathbb E \left[ \xi _i(X_{n1}) \right] \right| < \infty \);
-
(C)
the array \(X_{nk}\) is scaled by a scaling function \(\theta \), where \(\theta \equiv \sum _{i=1}^m |\xi _i|^{q_i}\) on a neighbourhood of \(1_G\).
Then, provided (A), (B) and (C) hold, \((\mathbf {X}^n)_{n \ge 1}\) is a tight collection of \(D_o([0,1],G)\)-valued random variables and, for every \(p > \kappa \), \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is a tight collection of real random variables.
Remark 4.2
Suppose that for a Lévy process \(\mathbf {X}\) in G, \(\mathbf {X}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as \(D_o([0,T], G)\)-valued random variables. Then conditions (A) and (B) are automatically satisfied by Theorem 2.1 (and (C) is satisfied upon choosing \(q_i = 2\) for all \(i\in \{1,\ldots , m\}\) by Remark 2.4).
The remainder of the section is devoted to the proof of Theorem 4.1, which can be split into three parts. The first part is collected in Sect. 4.1 and comprises a general p-variation tightness criterion for strong Markov processes. The second part, which is the most technical part of the proof, is collected in Sect. 4.2 and establishes the bounds required to apply the results of Sect. 4.1 for the case \(p > d_m\). The third part is collected in Sect. 4.3 and treats the case \(p \le d_m\). Roughly speaking, in the third part we decompose \(\mathbf {X}^n\) into the lift of a walk in a lower level group, for which the previous two parts apply, and a perturbation on the higher levels, for which the p-variation can be controlled directly.
4.1 p-Variation tightness of strong Markov processes
In this section we give a criterion for p-variation tightness of strong Markov processes in a Polish space (Theorem 4.8), which is inspired by the work of Manstavičius [32].
Let (E, d) be a metric space and \(\mathbf {x}: [0,T] \mapsto E\) a function. Define
and, for \(\delta > 0\),
Note that the quantity \(\nu _\delta (\mathbf {x})\) measures the maximum number of oscillations of \(\mathbf {x}\) of magnitude greater than \(\delta \) over non-overlapping intervals. Observe the following basic inequality which serves to control \(\left| \left| \mathbf {x} \right| \right| _{p -var ;[0,T]}\):
For \(\delta > 0\), define the increasing sequence of times \((\tau ^\delta _j(\mathbf {x}))_{j=0}^\infty \) by \(\tau ^\delta _0(\mathbf {x}) = 0\) and for \(j \ge 1\)
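For a finitely sampled real-valued path, the count \(\nu _\delta \) and the times \(\tau ^\delta _j\) admit a simple greedy computation. The sketch below is our own discrete analogue, under the assumed convention that \(\tau ^\delta _j\) is the first time at which the oscillation of the path since \(\tau ^\delta _{j-1}\) exceeds \(\delta \).

```python
# Greedy computation of nu_delta for a discrete path in (R, |.|): track the
# running oscillation (max - min) since the last stopping time tau_j and
# count each time it exceeds delta.  Stopping as early as possible never
# misses an admissible oscillation interval, so the greedy count realises
# the supremum over non-overlapping intervals.

def nu_delta(x, delta):
    count = 0
    lo = hi = x[0]
    for v in x[1:]:
        lo, hi = min(lo, v), max(hi, v)
        if hi - lo > delta:
            count += 1
            lo = hi = v   # restart the oscillation tracking at tau_j
    return count

path = [0.0, 0.3, -0.3, 0.4, 0.0, 1.0, 0.9, 0.2]
print(nu_delta(path, 0.5))  # 4 oscillations of magnitude > 1/2
print(nu_delta(path, 2.0))  # 0: no oscillation ever exceeds 2
```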
Lemma 4.3
Let \(\mathbf {X}\) be a D([0, T], E)-valued random variable for a Polish space (E, d). Let \(\delta ,h > 0\) be such that there exists \(q \in (0,1)\) for which a.s. for all \(i \ge 0\)
(where we use the convention \(\infty - \infty = \infty \)). Then
Proof
Note that for any function \(\mathbf {x}: [0,T] \mapsto E\), it holds that \(\nu _{\delta }(\mathbf {x})\) is the largest integer j for which \(\tau ^\delta _j(\mathbf {x}) \le T\), and thus
For \(i \ge 0\), consider the event \(A_i = \{\tau _{i+1}^{\delta }(\mathbf {X}) - \tau _{i}^\delta (\mathbf {X}) > h\}\), and note that
Consider a real random variable Z distributed according to the negative binomial distribution with parameters \((\lceil T/h \rceil ,q)\), i.e., Z counts the total number of iid Bernoulli trials with success probability q until exactly \(\lceil T/h \rceil \) failures occur. It follows from the uniform bound (4.2) that
(where one considers \(A_i\) as a failure with probability at least \(1-q\)), so that
\(\square \)
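The distribution of Z in the proof above is easy to simulate, which gives a quick sanity check (ours, with illustrative parameters) of the comparison used: with \(r = \lceil T/h \rceil \) failures required, \(\mathbb {E}[Z] = r/(1-q)\).

```python
import numpy as np

# Simulate Z: the total number of iid Bernoulli(q) trials until exactly r
# failures occur (negative binomial with parameters (r, q)), and compare
# the empirical mean with r / (1 - q).  Parameters are illustrative.
rng = np.random.default_rng(3)

def sample_Z(r, q):
    trials = failures = 0
    while failures < r:
        trials += 1
        if rng.random() >= q:   # a failure occurs with probability 1 - q
            failures += 1
    return trials

r, q = 10, 0.3
zs = np.array([sample_Z(r, q) for _ in range(20_000)])
print(zs.mean(), r / (1 - q))
```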
We now show how one can verify the condition of Lemma 4.3 for a strong Markov process. We first restrict attention to the set of times on which a process is allowed to move.
Definition 4.4
For a metric space (E, d) and a D([0, T], E)-valued random variable \(\mathbf {X}\), call a (deterministic) open interval \((s,t) \subset [0,T]\) stationary if
Let \(Z_\mathbf {X}\subseteq [0,T]\) denote the union of all stationary intervals, and let \(R_{\mathbf {X}} = [0,T]{\setminus } Z_\mathbf {X}\) be its complement.
Example 4.5
For the random walk \(\mathbf {X}^n \in D([0,1], G)\) associated with an iid array \(X_{nj}\) in a Lie group G, we have \(R_{\mathbf {X}^n} = \{0,1/n,\ldots , (n-1)/n, 1\}\).
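For a real-valued walk (taking the group operation to be addition, purely for illustration), the picture behind this example can be sketched as follows: the path is constant on each open interval \((k/n, (k+1)/n)\), so every such interval is stationary and only the grid points remain in \(R_{\mathbf {X}^n}\). The function name below is ours.

```python
import numpy as np

def walk_path(increments, t):
    """Cadlag random-walk path X^n_t = x_1 + ... + x_{floor(n t)}
    for a real-valued walk on [0, 1]."""
    n = len(increments)
    k = int(np.floor(n * t))
    return float(np.sum(increments[:k]))

inc = [1.0, 2.0, 3.0, 4.0]   # n = 4 increments
# constant on each open interval (k/4, (k+1)/4): these are stationary
for k in range(4):
    ts = np.linspace(k / 4 + 1e-6, (k + 1) / 4 - 1e-6, 5)
    assert len({walk_path(inc, t) for t in ts}) == 1
# the path jumps exactly at the grid points k/n
assert walk_path(inc, 0.5) == 3.0 and walk_path(inc, 0.5 - 1e-6) == 1.0
```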
We emphasise that the role of \(R_\mathbf {X}\) is purely technical: it allows us to formulate bounds uniform in \(s \in R_\mathbf {X}\) (such as those in Theorem 4.8 and Corollary 4.9) which hold for random walks, whereas the same bounds would fail (for completely harmless reasons) if taken uniformly over all \(s \in [0,T]\). The following lemma is a variant of Gīhman–Skorokhod [20, Lemma 2, p. 420] (in which the notion of \(R_\mathbf {X}\) does not appear).
Lemma 4.6
(Maximum inequality). Let \(\mathbf {X}\) be a càdlàg (not necessarily strong) Markov process taking values in a Polish space (E, d).
Let \(h,\delta > 0\) and suppose there exists \(c \in [0,1)\) such that
Then for all \(s \in R_\mathbf {X}\) and \(x \in E\), it holds that
Proof
Let \(s \in R_\mathbf {X}\) and observe that a.s.
Consider a nested sequence of partitions \(\mathcal {D}_n \subset [s,s+h] \cap R_\mathbf {X}\) such that
where . Since \(\mathbf {X}\) is càdlàg, it holds that
where the right side is non-decreasing in n since \(\mathcal {D}_n\) are nested.
It thus suffices to show that for any partition \(\mathcal {D}= (t_0=s,\ldots , t_n) \subset [s,s+h]\cap R_\mathbf {X}\), we have
To this end, for \(i \in \{0,\ldots , n\}\), consider the events
and
Define the \(\sigma \)-algebras \(\mathcal {F}_{s,t} \,{:=}\, \sigma (\mathbf {X}_u)_{s\le u \le t}\). Observe that (4.3) implies that a.s.
Moreover, consider the disjoint events \(F_i \,{:=}\, B^c_1 \cap \ldots \cap B^c_{i-1} \cap B_{i}\). Then for all \(i \in \{0,\ldots , n\}\)
Since each \(F_i\cap C^c_i\) is disjoint and \(F_i\) is \(\mathcal {F}_{s,t_i}\)-measurable, we have
Finally, (4.4) now follows from the fact that
\(\square \)
Corollary 4.7
Let \(\mathbf {X}\) be a càdlàg strong Markov process taking values in a Polish space (E, d). Let \(h,\delta > 0\) and \(c \in [0,1)\) satisfy (4.3). Then for all \(i \ge 0\), a.s.
Proof
Observe that \(\tau ^{4\delta }_i(\mathbf {X})\) takes values a.s. in \(R_\mathbf {X}\), and that the event \(\{\tau _{i+1}^{4\delta }(\mathbf {X}) - \tau _i^{4\delta }(\mathbf {X}) \le h\}\) is contained inside \(\{\sup _{t \in [\tau _i,\tau _i + h]} d(\mathbf {X}_{\tau _i},\mathbf {X}_{t}) > 2\delta \}\). Conditioning on the stopping times \(\{\tau ^{4\delta }_i(\mathbf {X}),\ldots ,\tau ^{4\delta }_0(\mathbf {X})\}\) and using the assumption that \(\mathbf {X}\) is a strong Markov process, the desired result now follows from Lemma 4.6. \(\square \)
We now obtain the following p-variation tightness criterion for strong Markov processes. Recall the quantity \(M(\mathbf {X}) = \sup _{s,t \in [0,T]} d(\mathbf {X}_t, \mathbf {X}_s)\).
Theorem 4.8
Let \(\mathcal {M}\) be a collection of càdlàg strong Markov processes on [0, T] taking values in a Polish space (E, d). Suppose that
-
(a)
\((M(\mathbf {X}))_{\mathbf {X}\in \mathcal {M}}\) is tight, and
-
(b)
there exist constants \(a, \kappa , b > 0\) and \(c \in [0,1/2)\) such that for all \(\delta \in (0, b]\)
$$\begin{aligned} \sup _{\mathbf {X}\in \mathcal {M}} \sup _{s \in R_{\mathbf {X}}} \sup _{x \in E} \sup _{t \in [s,s+h(\delta )]} \mathbb P^{s,x} \left[ d(\mathbf {X}_s,\mathbf {X}_{t}) > \delta \right] \le c, \end{aligned}$$where \(h(\delta ) \,{:=}\, a\delta ^\kappa \).
Then for any \(p > \kappa \), \((\left| \left| \mathbf {X} \right| \right| _{p -var ;[0,T]})_{\mathbf {X}\in \mathcal {M}}\) is a tight collection of real random variables.
Proof
Let \(p > \kappa \). We claim that it suffices to show
Indeed, observe that (4.5) implies that \((\nu _{1}(\mathbf {X}))_{\mathbf {X}\in \mathcal {M}}\) is tight. It then follows, by (a) and the estimate (4.1), that (4.5) implies \((\left| \left| \mathbf {X} \right| \right| _{p -var ;[0,T]})_{\mathbf {X}\in \mathcal {M}}\) is tight as claimed.
It thus remains to show (4.5). By (b) and Corollary 4.7, it holds that for all \(\delta \in (0,b]\)
Hence, by Lemma 4.3, for all \(\delta \in (0,b]\)
from which (4.5) readily follows. \(\square \)
Corollary 4.9
Let \((\mathbf {X}^n)_{n \ge 1}\) be a sequence of càdlàg strong Markov processes on [0, T] taking values in a Polish space (E, d). Suppose that
-
(i)
for every fixed rational \(h \in [0,T]\), \((\mathbf {X}_h^n)_{n \ge 1}\) is a tight collection of E-valued random variables, and
-
(ii)
there exist constants \(K, \beta , \gamma , b > 0\) such that for all \(\delta \in (0, b]\) and \(h > 0\)
$$\begin{aligned} \sup _{n \ge 1} \sup _{s \in R_{\mathbf {X}^n}} \sup _{x \in E} \sup _{t \in [s,s+h]} \mathbb P^{s,x} \left[ d(\mathbf {X}^n_s,\mathbf {X}^n_t) > \delta \right] \le K\frac{h^\beta }{\delta ^\gamma }. \end{aligned}$$
Then \((\mathbf {X}^n)_{n \ge 1}\) is a tight collection of D([0, T], E)-valued random variables, and for any \(p > \gamma /\beta \), \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,T]})_{n \ge 1}\) is a tight collection of real random variables.
Proof
First, note that (ii) applied to small h allows us to verify the Aldous condition for the sequence \((\mathbf {X}^n)_{n \ge 1}\) (see, e.g., [28, p. 188], though note that one should restrict attention to sequences of stopping times \(\tau _n\) taking values in \(R_{\mathbf {X}^n}\) a.s., a trivial modification of the usual Aldous condition). Together with (i), it follows that \((\mathbf {X}^n)_{n \ge 1}\) is a tight collection of D([0, T], E)-valued random variables ([28, Theorems 4.8.1, 4.8.2]).
Observe that M is a continuous function on D([0, T], E), from which it follows that \((M(\mathbf {X}^n))_{n \ge 1}\) is tight. Moreover, observe that (ii) implies that there exists \(a > 0\) such that for all \(\delta \in (0,b]\)
where \(h = a\delta ^{\gamma /\beta }\). It follows that the conditions of Theorem 4.8 are satisfied with \(\kappa = \gamma /\beta \), so that indeed \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,T]})_{n \ge 1}\) is tight for all \(p > \gamma /\beta \). \(\square \)
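For the reader's convenience, the computation behind the choice of \(a\) in the preceding proof can be spelled out (a routine step, not stated explicitly above): with \(h = a\delta ^{\gamma /\beta }\), the bound in (ii) becomes independent of \(\delta \),
$$\begin{aligned} \sup _{t \in [s,s+h]} \mathbb P^{s,x} \left[ d(\mathbf {X}^n_s,\mathbf {X}^n_t) > \delta \right] \le K\frac{(a\delta ^{\gamma /\beta })^{\beta }}{\delta ^{\gamma }} = Ka^{\beta }, \end{aligned}$$
so that choosing, e.g., \(a \,{:=}\, (4K)^{-1/\beta }\) yields the constant bound \(1/4 \in [0,1/2)\), as required in condition (b) of Theorem 4.8 with \(\kappa = \gamma /\beta \).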
4.2 Proof of Theorem 4.1 in the case \(p > d_m\)
We continue using the notation of Sect. 3. In particular, we identify G with \(\mathfrak g\) via the \(\exp \) map.
Remark 4.10
We note here that Corollary 4.9 and the bound (4.6) in the upcoming Lemma 4.12 are sufficient to establish that conditions (A), (B) and (C) imply that \((\mathbf {X}^n)_{n \ge 1}\) is a tight collection of \(D_o([0,1],G)\)-valued random variables and that \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is tight for all \(p > \kappa \vee d_m\), which proves the statement of Theorem 4.1 subject to the restriction \(p > d_m\).
Observe that an inductive application of the CBH formula (3.1), along with the multinomial identity \((z_1+\cdots + z_n)^{j} = \sum _{k_1+\cdots + k_n = j} \left( {\begin{array}{c}j\\ k_1,\ldots ,k_n\end{array}}\right) z_1^{k_1}\ldots z_n^{k_n}\), yields the following lemma.
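The multinomial identity quoted here is elementary but easy to check mechanically. The sketch below (function names are ours) verifies it for commuting scalar variables, by comparing the monomial coefficients of \((z_1+\cdots +z_n)^j\) with the multinomial coefficients; in the lemma itself the identity is applied within the graded CBH expansion, which this scalar check does not capture.

```python
from math import comb
from collections import defaultdict

def multinomial(j, ks):
    """j! / (k_1! ... k_n!), computed via iterated binomials."""
    out, rem = 1, j
    for k in ks:
        out *= comb(rem, k)
        rem -= k
    return out

def expand_power(n_vars, j):
    """Monomial coefficients of (z_1 + ... + z_{n_vars})^j,
    obtained by repeated multiplication by the linear factor."""
    poly = defaultdict(int)
    poly[(0,) * n_vars] = 1
    for _ in range(j):
        new = defaultdict(int)
        for mono, c in poly.items():
            for v in range(n_vars):  # distribute one extra factor z_v
                m = list(mono)
                m[v] += 1
                new[tuple(m)] += c
        poly = new
    return dict(poly)

coeffs = expand_power(3, 5)
assert all(c == multinomial(5, mono) for mono, c in coeffs.items())
assert sum(coeffs.values()) == 3 ** 5   # consistency: evaluate at z_i = 1
```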
Lemma 4.11
For all \(x_1,\ldots , x_k \in G\) and every index \(i \in \{1,\ldots , m\}\), it holds that
where \(\sum _{\alpha _1,\ldots , \alpha _r}\) indicates the (finite) sum over all non-zero multi-indexes \(\alpha _1,\ldots , \alpha _r\) such that \(\deg (\alpha _1) + \cdots + \deg (\alpha _r) = d_i\) and \(c^i_{\alpha _1,\ldots ,\alpha _r}\) are constants.
Recall that \(\mathbf {X}^n\) denotes the random walk associated to the iid array \(X_{n1},\ldots , X_{nn}\).
Lemma 4.12
Use the notation from Theorem 4.1 and suppose that (B) and (C) hold. Let \(\gamma \,{:=}\, d_m \vee \kappa \), and for \(i \in \{1,\ldots , m\}\) denote by \(\mathbf {Y}^{n,i} \in D_o([0,1],\mathbb {R})\) the random walk associated with the \(\mathbb {R}\)-valued iid array \(X^i_{nk}\).
Then there exists \(K > 0\) such that for all \(n \ge 1\), \(k \in \{1,\ldots , n\}\) and \(\delta \in (0,1]\)
and, for all \(i \in \{1,\ldots , m\}\) such that \(q_i \le 1\),
Proof
We first claim that it suffices to consider the case \(\left| \left| X_{n1} \right| \right| \le \varepsilon \) a.s. for all \(n \ge 1\), where \(\varepsilon > 0\) may be taken arbitrarily small. Indeed, let \(\varepsilon > 0\) and note that there exists \(c > 0\) such that \(\theta (x) > c\) for all \(x \in G\) with \(\left| \left| x \right| \right| > \varepsilon \). Since \(\theta \) scales \(X_{nk}\), it follows that there exists \(C_1 > 0\) such that for all \(n \ge 1\)
and hence
It follows that for all \(n \ge 1\) and \(k \in \{1,\ldots , n\}\)
and similarly for \(\mathbb {P} \left[ \left| \mathbf {Y}^{n,i}_{k/n} \right| > \delta \right] \). Replacing \(X_{nk}\) by
we note that (B) and (C) imply that the same conditions hold for the iid array \(X_{nk}'\). It thus suffices to prove the statement of the lemma for the iid array \(X_{nk}'\) instead as claimed.
We henceforth assume that \(\left| \left| X_{n1} \right| \right| < \varepsilon \) a.s., where \(\varepsilon > 0\) is sufficiently small so that \(x \in U\) whenever \(\left| \left| x \right| \right| < \varepsilon \). We first show (4.7). Let \(i \in \{1,\ldots , m\}\) such that \(q_i \le 1\). Then there exists \(C_2 > 0\) such that
where the second inequality is due to (C). It follows by Markov’s inequality that there exists \(K > 0\) such that (4.7) holds for all \(n \ge 1\), \(k \in \{1,\ldots , n\}\), and \(\delta \in (0,1]\).
We now show (4.6). By Lemma 4.11, it suffices to show that for all \(i \in \{1,\ldots , m\}\), \(r \in \{1,\ldots , \lfloor d_i \rfloor \}\), and multi-indexes \(\alpha _1,\ldots , \alpha _r\) such that \(\deg (\alpha _1) + \cdots +\deg (\alpha _r) = d_i\) (with \(\alpha _1^i = 1\) in the case that \(r=1\)), there exists \(K > 0\) such that for all \(n \ge 1\), \(k \in \{1,\ldots ,n\}\) and \(\delta \in (0,1]\)
To this end, let us fix \(i \in \{1,\ldots , m\}\), \(r \in \{1,\ldots , \lfloor d_i \rfloor \}\), and multi-indexes \(\alpha _1,\ldots , \alpha _r\) such that \(\deg (\alpha _1)+\cdots +\deg (\alpha _r) = d_i\). Consider first the case \(r \ge 2\). Define
By Markov’s and Jensen’s inequalities (observing that \(\gamma _i \le 2d_i\))
To bound the last expression, for a multi-index \(\alpha = (\alpha ^1,\ldots ,\alpha ^m)\), denote \(|\alpha | = \alpha ^1 + \cdots + \alpha ^m\). Note that due to the assumption \(\left| \left| X_{n1} \right| \right| < \varepsilon \) a.s., (B) is equivalent to
Furthermore, by (C) and the Cauchy–Schwarz inequality,
Consider now the expression
Since \(X_{n1},\ldots , X_{nn}\) are independent, (4.12) splits into a sum of terms of the form \(\mathbb E \left[ X_{n1}^{\beta _1} \right] \ldots \mathbb E \left[ X_{nk}^{\beta _k} \right] \) with \(\beta _i \ge 0\). Call the number of indices i with \(\beta _i > 0\) the simple degree of such a term. The minimum simple degree of any term is evidently r and the maximum is 2r, and one readily sees that there exists \(C_3 > 0\) such that for all \(n \ge 1\) and \(k \in \{ 1,\ldots , n\}\), the number of terms of simple degree \(s \in \{r,\ldots , 2r\}\) is bounded above by \(C_3 k^s\). Furthermore, since \(X_{n1},\ldots , X_{nn}\) are identically distributed, it follows from (4.10) and (4.11) that there exists \(C_4 > 0\) such that the absolute value of every term of simple degree s is bounded above by \(C_4 n^{-s}\). Since \(2 \le r \le s\) and \(k \le n\), it follows that
Therefore, from (4.9) and the fact that \(d_i \le \gamma _i \le \gamma \), we obtain (4.8). This completes the case \(r \ge 2\).
It remains to consider the case \(r = 1\). Define now \(\gamma _i \,{:=}\, d_i(q_i\vee 1)\). It holds that
Denote \(\mu _{n} = \mathbb E \left[ X^i_{n1} \right] \). Then there exist \(C_6, C_7 > 0\) such that
where the first inequality is due to (4.10), and the second inequality is due to the (discrete) Burkholder–Davis–Gundy inequality and the fact that \(q_i \le 2\). It now follows from (C) and (4.10) that
Since \(\gamma _i \le \gamma \), this completes the case \(r=1\) and the proof of the lemma. \(\square \)
As mentioned in Remark 4.10, Corollary 4.9 and the bound (4.6) are now sufficient to prove Theorem 4.1 for the case that \(p > d_m\).
4.3 Proof of Theorem 4.1 in the case \(p \le d_m\)
Lemma 4.13
Use the notation from Lemma 4.12 and suppose that (A), (B) and (C) hold. For all \(i \in \{1,\ldots , m\}\), it holds that \((\mathbf {Y}^{n,i}_h)_{n \ge 1, h \in [0,1]}\) is a tight collection of real random variables.
Proof
By Remark 4.10, \((\mathbf {X}^n)_{n \ge 1}\) is a tight collection of \(D_o([0,1],G)\)-valued random variables, from which it follows that \((\max _{1 \le k \le n}|X^i_{nk}|)_{n \ge 1}\) is tight for all \(i \in \{1,\ldots , m\}\). We may thus suppose that \(\left| \left| X_{n1} \right| \right| \le R\) a.s. for some large \(R>0\) and all \(n \ge 1\).
Consider the decomposition \(X^i_{nk} = A_{nk} + B_{nk}\) where
and
We take here \(\varepsilon > 0\) sufficiently small so that \(\left| \left| x \right| \right| < \varepsilon \) implies \(x \in U\). It suffices to prove that \((\sum _{a=1}^k B_{na})_{n \ge 1, k \in \{1, \ldots , n\}}\) and \((\sum _{a=1}^k A_{na})_{n \ge 1, k \in \{1, \ldots , n\}}\) are tight collections of real random variables.
Let \(C_1 = C_1(\varepsilon ) > 0\) be such that \(C_1\theta (x) > |x^i|\mathbf 1 \{\varepsilon \le \left| \left| x \right| \right| \le R\}\) for all \(x\in G\). Since \(\theta \) scales \(X_{nk}\), it holds that
and thus \((\sum _{a=1}^k B_{na})_{n \ge 1, k \in \{1, \ldots , n\}}\) is tight.
Now observe that (B) and (C) imply that \(\sup _{n \ge 1} n\left| \mathbb E \left[ A_{n1} \right] \right| < \infty \). Moreover (C) implies that there exists \(C_2 > 0\) such that \(|\xi _i(x)|^2 \le C_2\theta (x)\) for all \(x \in G\) and \(i \in \{1,\ldots , m\}\). Since \(A_{na} = \mathbf 1 \{\left| \left| X_{na} \right| \right| < \varepsilon \}\xi _i(X_{na})\), and \(A_{n1},\ldots , A_{nn}\) are iid, it follows that
and thus \((\sum _{a=1}^k A_{na})_{n \ge 1, k \in \{1, \ldots , n\}}\) is also tight. \(\square \)
For \(i \in \{1,\ldots , m\}\), let \(\mathfrak g^{> i}\) be the subspace of \(\mathfrak g\) spanned by \(\{u_j \mid j > i\}\). Note that \(\mathfrak g^{> i}\) is an ideal of \(\mathfrak g\), and so we can define the Lie algebra \(\mathfrak g^i = \mathfrak g/\mathfrak g^{> i}\) and the projection map \(\pi ^i : \mathfrak g\mapsto \mathfrak g^i\). The dilations \(\delta _\lambda \) on \(\mathfrak g\) give rise to a natural family of dilations on \(\mathfrak g^{i}\), and thus to a homogeneous group \(G^i\) associated with \(\mathfrak g^i\). Equivalently, \(G^i = \mathfrak g/\mathfrak g^{>i}\), where we have identified \(\mathfrak g\) with G and \(\mathfrak g^{> i}\) with a normal subgroup of G. We implicitly equip \(G^i\) with an arbitrary sub-additive homogeneous norm \(\left| \left| \cdot \right| \right| \). For notational convenience, we also let \(G^0 = \{1\}\) be the trivial group and \(\pi ^0 : G \mapsto G^0\) the trivial map.
Corollary 4.14
Use the notation from Lemma 4.12 and suppose that (A), (B) and (C) hold.
-
(i)
Let \(i \in \{1,\ldots , m\}\) and \(p > d_i \vee \kappa \). Then \((\left| \left| \pi ^{i}\mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is tight.
-
(ii)
For every \(i \in \{1, \ldots , m\}\) such that \(q_i \le 1\), \((\left| \left| \mathbf {Y}^{n,i} \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is tight for all \(p > q_i\).
Proof
-
(i)
Observe that \(\pi ^i\mathbf {X}^n\) is the random walk associated with the \(G^{i}\)-valued iid array \(\pi ^{i} X_{nk}\), from which the conclusion follows by Corollary 4.9 and the bound (4.6) of Lemma 4.12 (cf. Remark 4.10).
-
(ii)
From Corollary 4.9 and the bound (4.7) of Lemma 4.12, it suffices to check that condition (i) of Corollary 4.9 holds for the processes \((\mathbf {Y}^{n,i})_{n \ge 1}\). However this follows from Lemma 4.13.
\(\square \)
Recall that we identify G with \(\mathfrak g\) via the \(\exp \) map. For functions \(\mathbf {z}: [0,T] \mapsto G\) and \(\mathbf {y}: [0,T] \mapsto \mathbb {R}\), define the function
(where addition is taken in \(\mathfrak g\)). The following lemma is a simple consequence of the fact that \((\mathbf {x}_s^{-1}\mathbf {x}_t)^i = (\mathbf {z}_s^{-1}\mathbf {z}_t)^i\) for all \(i \in \{1,\ldots , m-1\}\), and
Lemma 4.15
Let \(\mathbf {z}: [0,T] \mapsto G\) and \(\mathbf {y}: [0,T] \mapsto \mathbb {R}\) be functions, and let \(\mathbf {x}= \mathbf {z}+ \mathbf {y}\). Then for any \(p > 0\) there exists \(C=C(p,G) > 0\) such that
Lemma 4.16
Let \(p > 0\) and \(i \in \{1,\ldots , m\}\) be the largest index such that \(d_i \le p\) (with \(i=0\) if no such index exists). Consider elements \(x_1,\ldots , x_n \in G\) and let \(\mathbf {x}\in D_o([0,1], G)\) be the associated walk. For \(j \in \{i + 1, \ldots , m\}\), let \(\mathbf {y}^j \in D_o([0,1], \mathbb {R})\) be the walk associated with the real numbers \(x^j_1,\ldots , x^j_n\).
Then there exists \(C = C(p,G) > 0\), such that
Proof
By induction on m and Lemma 4.15, it suffices to show that if \(p < d_m\) and \(x^m_k = 0\) for all \(k \in \{1,\ldots , n\}\), then \(\left| \left| \mathbf {x} \right| \right| _{p -var ;[0,1]} \le C_1\left| \left| \pi ^{m-1} \mathbf {x} \right| \right| _{p -var ;[0,1]}\). This in turn follows from the CBH formula (3.1) and an application of Young’s partition coarsening argument (see, e.g., [31, p. 50]). \(\square \)
We now have all the ingredients for the proof of Theorem 4.1.
Proof of Theorem 4.1
The fact that \((\mathbf {X}^n)_{n \ge 1}\) is a tight collection of \(D_o([0,1],G)\)-valued random variables follows directly from Corollary 4.9 and the bound (4.6) of Lemma 4.12 (cf. Remark 4.10).
Let \(p > \kappa \). Decreasing p if necessary, we may suppose \(p \ne d_i\) for all \(i \in \{1,\ldots , m\}\). Let \(i \in \{1,\ldots , m\}\) be the largest index such that \(d_i < p\) (with \(i=0\) if no such index exists). Define \(\mathbf {Y}^{n,j}\) as in Lemma 4.12, and note that \(q_j< p/d_j < 1\) for all \(j \in \{i + 1,\ldots , m\}\). It follows by Corollary 4.14 that \((\left| \left| \pi ^i \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) and \((\left| \left| \mathbf {Y}^{n,j} \right| \right| _{p/d_j -var ;[0,1]})_{n \ge 1}\) are tight for all \(j \in \{i+1,\ldots , m\}\). We conclude by Lemma 4.16 that \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is also tight. \(\square \)
5 Lévy processes in homogeneous groups
5.1 Finite p-variation of Lévy processes
Consider a homogeneous group G and recall the notation of Sect. 3. Recall also the definitions of \(\Gamma _i\), J, and K from Sect. 2.3. The following is the main result of this subsection.
Theorem 5.1
Let \(p > 0\) and \(\mathbf {X}\) be a Lévy process in G with triplet \((A, B, \Pi )\).
-
(1)
Then \(\left| \left| \mathbf {X} \right| \right| _{p -var ;[0,1]} < \infty \) a.s. provided that all of the following hold:
-
(i)
\(p > 2d_j\) for all \(j \in J\);
-
(ii)
\(p > d_k\) for all \(k \in K\);
-
(iii)
\(p/d_i > \sup \{\Gamma _i\}\) for all \(i \in \{1, \ldots , m\}\).
-
(i)
-
(2)
Then \(\left| \left| \mathbf {X} \right| \right| _{p -var ;[0,1]} = \infty \) a.s. provided that one of the following holds:
-
(iv)
\(p \le 2d_j\) for some \(j \in J\);
-
(v)
\(p < d_k\) for some \(k \in K\);
-
(vi)
\(p/d_i \in \Gamma _i\) for some \(i \in \{1,\ldots , m\}\).
-
(iv)
Remark 5.2
Note that Theorem 5.1 does not completely determine all values of p for which \(\left| \left| \mathbf {X} \right| \right| _{p -var ;[0,1]} < \infty \) a.s. (e.g., when \(p/d_i = \sup \{\Gamma _i\} \notin \Gamma _i\) for some \(i \in \{1, \ldots , m\}\)). Comparing Theorem 5.1 with known results for \(\mathbb {R}\)-valued Lévy processes [6], we suspect that (ii) and (iii) can be replaced by \(p \ge d_k, \forall k \in K\), and \(p/d_i \notin \Gamma _i, \forall i \in \{1,\ldots , m\}\), respectively, which would complete the characterisation.
Remark 5.3
In [17], the authors determined sufficient conditions under which a Lévy process in the step-2 free nilpotent Lie group \(G^2(\mathbb {R}^d)\) possesses finite p-variation for \(p \in (2,3)\), along with a partial converse that their conditions cannot in general be weakened ([17, Theorem 50]). In this context, Theorem 5.1 generalises this result to all \(N \ge 1\) and \(p > 0\) and provides a sharp converse. In particular, the Carnot–Carathéodory Blumenthal–Getoor index \(\beta \) introduced in [17] for a Lévy measure on \(G^N(\mathbb {R}^d)\) relates to our definition of \(\Gamma _i\) by \(\beta = \max \{d_1\sup \{\Gamma _1\},\ldots , d_m\sup \{\Gamma _m\}\}\), in which case (iii) reads \(p > \beta \).
For the proof of Theorem 5.1, we require the following lemma.
Lemma 5.4
Let \(\mathbf {X}\) be a Lévy process in G with triplet \((A,B,\Pi )\). Assume \(p > 0\) satisfies (i), (ii), and (iii) of Theorem 5.1.
Let \(X_{nj}\) be the associated iid array constructed in Sect. 2.3 and \(\mathbf {X}^n\) the associated random walk. Then \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is tight.
Proof
Let \(0< p' < p\) be such that \(p'\) also satisfies (i), (ii), and (iii) of Theorem 5.1. For all \(i \in \{1,\ldots , m\}\), define \(q_i \,{:=}\, 2\wedge (p'/d_i)\), and let \(\theta \) be a scaling function on G such that \(\theta \equiv \sum _{i = 1}^m |\xi _i|^{q_i}\) in a neighbourhood of \(1_G\).
Observe that \(q_i \notin \Gamma _i\) for all \(i \in \{1,\ldots , m\}\), \(q_j = 2\) for all \(j \in J\), and \(q_k > 1\) for all \(k \in K\). Thus, by Lemma 2.5, \(\theta \) scales the array \(X_{nj}\). Moreover, since \(\mathbf {X}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as \(D_o([0,1],G)\)-valued random variables, it follows that the array \(X_{nj}\) satisfies the conditions of Theorem 4.1 with the above \(\theta \) and \(q_1,\ldots , q_m\) (see Remark 4.2). Since \(p > \max \{q_1 d_1,\ldots , q_m d_m\}\), it follows that \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is tight. \(\square \)
Proof of Theorem 5.1
(1) follows from Lemma 5.4 and part (1) of Proposition 3.5, while (2) follows directly from Corollary B.3 and Proposition B.4. \(\square \)
5.2 Convergence in p-variation
In this subsection we consider continuous random paths \((\mathbf {X}^{n,\phi })_{n \ge 1}, \mathbf {X}^\phi \), constructed from a random walk \(\mathbf {X}^n\) and a Lévy process \(\mathbf {X}\) by connecting their left- and right-limits with a path function \(\phi \), and give conditions under which \(\mathbf {X}^{n,\phi } \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^\phi \) as \(C^{p -var }([0,1],G)\)-valued random variables. All relevant material on path functions is collected in “Appendix A”.
Theorem 5.5
Let \(X_{nj}\) be an iid array in G and \(\mathbf {X}^n\) the associated random walk. Let \(\mathbf {X}\) be a Lévy process in G with triplet \((A,B,\Pi )\). Suppose that \(\mathbf {X}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as \(D_o([0,1],G)\)-valued random variables and that \(\theta \) scales \(X_{nj}\), where \(\theta \equiv \sum _{i=1}^m |\xi _i|^{q_i}\) in a neighbourhood of \(1_G\) for some \(0 < q_i \le 2\).
Let \(W \subseteq G\) be a closed subset such that \( supp (\Pi ) \subseteq W\) and \(X_{n1} \in W\) a.s. for all \(n \ge 1\). Let \(p > \max \{1, q_1 d_1,\ldots ,q_m d_m\}\) and let \(\phi : W \mapsto C_o^{p -var }([0,1],G)\) be a p-approximating, endpoint continuous path function.
Then \(\left| \left| \mathbf {X}^\phi \right| \right| _{p -var ;[0,1]} < \infty \) a.s., and for every \(p' > p\), \(\mathbf {X}^{n,\phi } \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^\phi \) as \(C_o^{0,p' -var }([0,1], G)\)-valued random variables.
Remark 5.6
In the statement of Theorem 5.5, note that, a.s., \(\mathbf {X}_{t-}^{-1}\mathbf {X}_t \in supp (\Pi )\) for every jump time t of \(\mathbf {X}\) (e.g., [30, Proposition 1.4]). Hence, for any (measurable) path function \(\phi \) defined on \( supp (\Pi )\), \(\mathbf {X}^\phi \) is indeed a well-defined \(C_o([0,1],G)\)-valued random variable.
Proof
By Theorem 4.1, it holds that \((\left| \left| \mathbf {X}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is tight, and thus, by Proposition A.7, \((\left| \left| \mathbf {X}^{n,\phi } \right| \right| _{p -var ;[0,1]})_{n \ge 1}\) is also tight. Since \(\phi \) is endpoint continuous on W, it follows by Proposition A.4 that \(\mathbf {X}^{n,\phi } \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^{\phi }\) as \(C_o([0,1],G)\)-valued random variables. The conclusion now follows from Proposition 3.5. \(\square \)
5.3 Applications to rough paths theory
We apply the results so far developed in the paper to the theory of rough paths and stochastic flows. Following Example 3.1, denote by \(G^N(\mathbb {R}^d)\) the step-N free nilpotent Lie group over \(\mathbb {R}^d\) and let \(\mathfrak g^N(\mathbb {R}^d)\) be its Lie algebra. For the remainder of the paper, unless otherwise stated, we shall always let \(G = G^N(\mathbb {R}^d)\) and \(\mathfrak g= \mathfrak g^N(\mathbb {R}^d)\). Being a Carnot group, G comes equipped with a natural homogeneous structure and we note that \(u_1,\ldots , u_d\) can be identified with a basis of \(\mathbb {R}^d\).
For \(1 \le p < N+1\), we let \(WG\Omega _p(\mathbb {R}^d) \,{:=}\, C^{p -var }([0,T], G)\), equipped with the metric \(d_{p -var ;[0,T]}\), denote the space of weakly geometric p-rough paths. Given an element \(\mathbf {x}\in WG\Omega _p(\mathbb {R}^d)\), and a collection \((f_i)_{i=1}^d\) of vector fields in \( Lip ^{\gamma +k-1}(\mathbb {R}^e)\) for \(\gamma > p \ge 1\) and an integer \(k \ge 1\), there is a unique solution to the rough differential equation (RDE)
We refer to [18] for further details on (geometric) rough paths theory.
5.3.1 Stochastic flows
Let \(U^\mathbf {x}_{T\leftarrow 0} : \mathbf {y}_0 \mapsto \mathbf {y}_T\) denote the flow associated to (5.1), which we recall is an element of \( Diff ^k(\mathbb {R}^e)\), the group of \(C^k\)-diffeomorphisms of \(\mathbb {R}^e\). Recall that the map \(U^\cdot _{T\leftarrow 0} : WG\Omega _p(\mathbb {R}^d) \mapsto Diff ^k(\mathbb {R}^e)\) is a continuous function on \(WG\Omega _p(\mathbb {R}^d)\) when \( Diff ^k(\mathbb {R}^e)\) is equipped with the \(C^k\)-topology ([18, Theorem 11.12]). The following result is now an immediate corollary of Theorem 5.5.
Corollary 5.7
Suppose the assumptions of Theorem 5.5 are verified for some \(1 \le p < N+1\). Let \(\gamma > p\), \(k \ge 1\) an integer, and \((f_i)_{i=1}^d\) a collection of vector fields in \( Lip ^{\gamma +k-1}(\mathbb {R}^e)\). Let \(U^\cdot _{1\leftarrow 0} : WG\Omega _p(\mathbb {R}^d) \mapsto Diff ^k(\mathbb {R}^e)\) be the associated flow map.
Then \(U^{\mathbf {X}^{n,\phi }}_{1\leftarrow 0} \,{\buildrel \mathcal {D}\over \rightarrow }\,U^{\mathbf {X}^\phi }_{1\leftarrow 0}\) as \( Diff ^k(\mathbb {R}^e)\)-valued random variables.
We demonstrate how one can apply Corollary 5.7 to show weak convergence of stochastic flows in the following three examples, the first of which extends a result of Kunita [29].
Example 5.8
(Linear interpolation, Kunita [29]). Let \(Y_{n1},\ldots , Y_{nn}\) be an iid array in \(\mathbb {R}^d\) such that the associated random walk \(\mathbf {Y}^n\) converges in law as a \(D_o([0,1],\mathbb {R}^d)\)-valued random variable to a Lévy process \(\mathbf {Y}\) in \(\mathbb {R}^d\).
We claim that ODE flows driven by the piecewise linear interpolation of the random walk \(\mathbf {Y}^n\) along \( Lip ^{\gamma +k-1}\) vector fields, for any \(\gamma > 2\), \(k \ge 1\), converge in law as \( Diff ^k(\mathbb {R}^e)\)-valued random variables.
Indeed, setting \(G \,{:=}\, G^2(\mathbb {R}^d)\), consider the G-valued iid array \(X_{nj} \,{:=}\, e^{Y_{nj}}\). It follows that \(X_{nj}\) is scaled by any scaling function \(\theta \) on G for which \(\theta \ge \sum _{i=1}^d |\xi _i|^2\). Moreover, using the fact that \(\xi _i\circ \exp \in C^\infty _c(\mathbb {R}^d)\), one can readily see by Theorem 2.1 that \(\mathbf {X}^n\,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as \(D_o([0,1],G)\)-valued random variables, where \(\mathbf {X}\) is a G-valued Lévy process. Finally, consider the 1-approximating, endpoint continuous path function
Then \(\mathbf {X}^{n,\phi }\) is (a reparametrisation of) the lift of the piecewise linear interpolation of \(\mathbf {Y}^n\). Furthermore, the conditions of Theorem 5.5 are satisfied for all \(p > 2\), so that \(\mathbf {X}^{n,\phi } \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^{\phi }\) as \(C_o^{0,p -var }([0,1],G)\)-valued random variables, from which the desired claim follows (Corollary 5.7).
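The geometric fact behind this example — that the step-2 lift of a linear chord \(t \mapsto tY\) is \(e^{Y}\), i.e. has symmetric second level \(\tfrac{1}{2}Y\otimes Y\) and vanishing Lévy area — can be checked numerically. The Riemann-sum discretisation below is a sketch under our own naming, not the paper's construction.

```python
import numpy as np

def level2(path):
    """Level-2 increments int_0^1 (x_s - x_0) (x) dx_s of a discrete
    path, via left-point Riemann sums (sufficient for smooth paths)."""
    incs = np.diff(path, axis=0)      # increments, shape (n-1, d)
    x = path[:-1] - path[0]           # left endpoints relative to start
    return x.T @ incs                 # d x d matrix

y = np.array([0.7, -1.3, 0.4])
n = 5000
line = np.linspace(0.0, 1.0, n)[:, None] * y[None, :]   # chord t -> t*y

S2 = level2(line)
# symmetric part: (1/2) y (x) y; antisymmetric part (Levy area): zero
assert np.allclose(S2, 0.5 * np.outer(y, y), atol=1e-3)
assert np.abs(S2 - S2.T).max() < 1e-9
```

A non-linear chord with non-vanishing area would contribute a genuine second-level correction, which is the phenomenon at play in the non-linear interpolations of Example 5.12.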
Remark 5.9
In the previous example, it is easy to see that RDEs driven by \(\mathbf {X}^\phi \) coincide (up to reparametrisation) with general (Marcus) RDEs driven by \(\mathbf {X}\) (in the sense of [17, Section 6]) and thus with Marcus SDEs driven by \(\mathbf {Y}\).
Remark 5.10
The previous example extends the main result of Kunita [29, Theorem 4, Corollary p. 329]. The main restriction of Kunita’s result is the assumption that the vector fields \(f_1,\ldots , f_d\), along which \(\mathbf {Y}^n\) drives an ODE, generate a finite dimensional Lie algebra, which essentially allows one to reduce the problem to a random walk on a Lie group (see [29, p. 340]). Our approach, based on convergence under rough path topologies, bypasses this restriction and provides a natural interpretation of the limiting stochastic flow as the solution of an RDE.
Remark 5.11
Breuillard, Friz and Huesmann [7] showed a result analogous to the above example in a special case where the limiting Lévy process \(\mathbf {Y}\) is Brownian motion. The main analytic tool used in [7] is the Kolmogorov–Lamperti criterion, applied to show tightness of the Hölder norms of \(\mathbf {Y}^n\). This is of course stronger than tightness of \((\left| \left| \mathbf {Y}^n \right| \right| _{p -var ;[0,1]})_{n \ge 1}\), and cannot hold whenever the limiting Lévy process has jumps, which demonstrates an example where the tightness criterion Theorem 4.8 can be used as an effective alternative to the classical Kolmogorov–Lamperti criterion.
In the following example we demonstrate how Example 5.8 generalises to non-linear interpolations with essentially no extra effort.
Example 5.12
(Non-linear interpolation). As in Example 5.8, let \(Y_{nj}\) be an iid array in \(\mathbb {R}^d\) such that \(\mathbf {Y}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {Y}\) for a Lévy process \(\mathbf {Y}\) in \(\mathbb {R}^d\).
Instead of piecewise interpolations, consider now any q-approximating endpoint continuous path function \(\psi : \mathbb {R}^d \mapsto C_o^{q -var }([0,1], \mathbb {R}^d)\) for some \(1 \le q < 2\). Set again \(G = G^2(\mathbb {R}^d)\) and define the injective map \(f : \mathbb {R}^d \mapsto G\) by
where \(S_2 : C_o^{q -var }([0,1],\mathbb {R}^d) \mapsto C_o^{q -var }([0,1],G)\) denotes the level-2 lifting map.
Consider the iid array \(X_{nj} \,{:=}\, f(Y_{nj})\). It follows readily from the assumption that \(\psi \) is q-approximating that \(X_{nj}\) is again scaled by any scaling function \(\theta \) on G for which \(\theta \ge \sum _{i=1}^d |\xi _i|^2\). We now make the assumption on \(\psi \) and \(Y_{n1}\) that for all \(i,j \in \{1,\ldots , m\}\) the following limits exist:
This occurs, for example, whenever every \(\xi _i \circ f\) is twice differentiable at zero, but in general will depend on the array \(Y_{nj}\) and the path function \(\psi \).
Under this assumption, it follows from Theorem 2.1 that the random walk \(\mathbf {X}^n\) associated with the array \(X_{nj}\) converges in law to the Lévy process \(\mathbf {X}\) with triplet \((C,D,\Xi )\), where \(\Xi \) is the pushforward of \(\Pi \) by f.
Define now the q-approximating, endpoint continuous path function \(\phi : f(\mathbb {R}^d) \mapsto C_o^{q -var }([0,1],G)\) by
Observe that the conditions of Theorem 5.5 are again satisfied for all \(p > 2\), so that \(\mathbf {X}^{n,\phi } \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^{\phi }\) as \(C_o^{0,p -var }([0,1],G)\)-valued random variables.
Note that \(\mathbf {X}^{n,\phi }\) is, up to reparametrisation, the lift of \(\mathbf {Y}^{n,\psi }\) (which is itself, up to reparametrisation, the random walk \(\mathbf {Y}^n\) interpolated by the path function \(\psi \)). It follows that ODE flows driven by \(\mathbf {Y}^{n,\psi }\) along \( Lip ^{\gamma +k-1}\) vector fields, for any \(\gamma > 2\), \(k \ge 1\), converge in law as \( Diff ^k\)-valued r.v.’s to the corresponding RDE flow driven by \(\mathbf {X}^\phi \) (Corollary 5.7).
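As a numerical illustration of the construction in Example 5.12 (not part of the original analysis), the sketch below builds a random walk in \(\mathbb {R}^2\) interpolated by a hypothetical non-linear path function \(\psi \): a quadratic curve from 0 to y through a rotated midpoint. The specific choice of \(\psi \), the Gaussian increments, and the Donsker-type scaling are all illustrative assumptions.

```python
import numpy as np

def psi(y, t):
    """A hypothetical non-linear path function R^2 -> C_o([0,1], R^2):
    a quadratic curve from 0 to y through a rotated midpoint."""
    mid = 0.5 * y + 0.1 * np.linalg.norm(y) * np.array([-y[1], y[0]])
    return 2 * (1 - t) * t * mid + t ** 2 * y   # psi(y, 0) = 0, psi(y, 1) = y

def interpolated_walk(increments, pts_per_step=20):
    """Concatenate left-translated copies of psi over the walk's increments."""
    path, pos = [np.zeros(2)], np.zeros(2)
    for y in increments:
        for t in np.linspace(0, 1, pts_per_step)[1:]:
            path.append(pos + psi(y, t))
        pos = pos + y
    return np.array(path)

rng = np.random.default_rng(0)
n = 100
incs = rng.normal(size=(n, 2)) / np.sqrt(n)   # Donsker-type scaling
walk = interpolated_walk(incs)
# the interpolation only reshapes the path between the walk's points
assert np.allclose(walk[-1], incs.sum(axis=0))
```

Since \(\psi \) fixes the endpoints of each step, the interpolated path visits exactly the points of the underlying walk, which is the property the example exploits.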
Remark 5.13
McShane [33] considered non-linear interpolations of the increments of Brownian motion and showed strong convergence of the corresponding ODEs to the associated Stratonovich SDE with an adjusted drift. We note that the family of path functions \(\psi \) to which the above example applies includes the non-linear interpolations considered by McShane ([33, p. 285]) (provided that the \(Y_{nk}\) are also sufficiently well behaved, e.g., the increments of Brownian motion, to ensure that the limits \(C^{i,j}\) and \(D^i\) exist). The above example can thus be seen as a weak convergence analogue, for general Lévy processes, of the results in [33]. In a similar way, the following example is analogous to the results of Sussmann [35] on non-linear approximations of Brownian motion.
Example 5.14
(Perturbed walk). As in Examples 5.8 and 5.12, let \(Y_{nj}\) be an iid array in \(\mathbb {R}^d\) such that \(\mathbf {Y}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {Y}\) for a Lévy process \(\mathbf {Y}\) in \(\mathbb {R}^d\).
Let \(N \ge 2\) and as before denote \(G = G^N(\mathbb {R}^d)\) and \(\mathfrak g= \mathfrak g^N(\mathbb {R}^d)\). Fix a path \(\gamma \in C_o^{1 -var }([0,1], \mathbb {R}^d)\) such that \(v \,{:=}\, \log (S_N(\gamma )_{0,1})\) is in the center of \(\mathfrak g\) (that is, \(v^i = 0\) for all \(i \in \{1,\ldots , m\}\) such that \(d_i < N\)).
In this example we wish to consider the random path \(\mathbf {Z}^n \in C^{1 -var }([0,1], \mathbb {R}^d)\) defined by linearly joining the points of \(\mathbf {Y}^n\), and, between each linear chord, running along the path \(n^{-1/N}\gamma \).
Define the closed subset
Note that every \(x \in W\) decomposes uniquely as \(x = \exp (y)\exp (\lambda v)\) for some \(y \in \mathbb {R}^d\) and \(\lambda \ge 0\). Define then the 1-approximating, endpoint continuous path function \(\phi : W \mapsto C_o^{1 -var }([0,1],G)\) by
Consider the G-valued iid array \(X_{nj} \,{:=}\, \exp (Y_{nj})\exp (n^{-1} v)\) and the associated random walk \(\mathbf {X}^n\). Observe that \(\mathbf {X}^{n,\phi }\) is (a reparametrisation of) the level-N lift of the path \(\mathbf {Z}^n\) described above.
We now claim that \(\mathbf {X}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) for a Lévy process \(\mathbf {X}\) in G. A straightforward way to show this is to take local coordinates \(\sigma _1,\ldots , \sigma _d \in C^\infty _c(\mathbb {R}^d)\) so that \(\sigma \,{:=}\, \sum _{i=1}^d \sigma _iu_i\) is the identity in a neighbourhood of zero, and write the triplet of \(\mathbf {Y}\) as \((A,B,\Pi )\) with respect to \(\sigma _1,\ldots , \sigma _d\). Define the functions
so that \(\xi (X_{nk}) = f_n(Y_{nk})\). Note that, since v is in the centre of \(\mathfrak g\), there exists a neighbourhood of zero \(V \subset \mathbb {R}^d\) and \(n_0 > 0\) such that for all \(n \ge n_0\)
where \(h_n \equiv 0\) on V. It readily follows that
where
Observe now that for all \(y \in \mathbb {R}^d\), \(\lim _{n\rightarrow \infty }h_n(y) = \xi (e^y) - \sigma (y)\), so that by dominated convergence,
from which it follows that
Since \(nf_n(0) = v\) for all n sufficiently large, we obtain that the following limit exists:
Furthermore, letting \(\Xi \) denote the pushforward of \(\Pi \) by \(\exp \), one can show in exactly the same way that
exists for all \(i,j \in \{1,\ldots , m\}\), and that
for every \(f \in C_b(G)\) which is identically zero on a neighbourhood of \(1_G\). It follows by Theorem 2.1 that \(\mathbf {X}^n \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as claimed, where \(\mathbf {X}\) is the Lévy process with triplet \((C,D,\Xi )\).
Finally, one readily sees that \(X_{nj}\) is scaled by any scaling function \(\theta \) on G for which
It now follows by Theorem 5.5 that for all \(p > N\), \(\mathbf {X}^{n,\phi } \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^{\phi }\) as \(C_o^{0,p -var }([0,1],G)\)-valued r.v.’s. In particular, ODE flows driven by the random paths \(\mathbf {Z}^n\) along \( Lip ^{\gamma +k-1}\) vector fields, for any \(\gamma > N\), \(k \ge 1\), converge in law as \( Diff ^k\)-valued r.v.’s to the corresponding RDE flow driven by \(\mathbf {X}^\phi \) (Corollary 5.7).
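A numerical sketch of the perturbed walk \(\mathbf {Z}^n\) may help fix ideas (illustrative assumptions throughout: \(d = N = 2\), Gaussian increments, and \(\gamma \) a unit circle, a closed loop whose level-2 log-signature is a pure area element and hence lies in the centre of \(\mathfrak g\)).

```python
import numpy as np

def loop(t):
    """A closed path gamma in R^2 (a circle), so that log S_2(gamma)_{0,1}
    is a pure level-2 (area) element, i.e. central in g^2(R^2)."""
    return np.array([np.cos(2 * np.pi * t) - 1.0, np.sin(2 * np.pi * t)])

def perturbed_walk(increments, n, N=2, pts=20):
    """Linear chord to each new point of the walk, then run along the
    rescaled loop n^{-1/N} * gamma before the next chord."""
    path, pos = [np.zeros(2)], np.zeros(2)
    ts = np.linspace(0, 1, pts)[1:]
    for y in increments:
        for t in ts:                              # linear chord
            path.append(pos + t * y)
        pos = pos + y
        for t in ts:                              # scaled loop, returns to pos
            path.append(pos + n ** (-1.0 / N) * loop(t))
    return np.array(path)

rng = np.random.default_rng(1)
n = 50
incs = rng.normal(size=(n, 2)) / np.sqrt(n)
Z = perturbed_walk(incs, n)
assert np.allclose(Z[-1], incs.sum(axis=0))   # loops do not move the endpoint
```

At first level the loops are invisible (they return to their starting point), but their accumulated area survives in the limit, which is exactly the adjusted drift phenomenon of Remark 5.15.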
Remark 5.15
Note that the previous Example 5.8 is a special case of Example 5.14 by taking \(v = 0\) and \(\gamma \) the constant path \(\gamma \equiv 0\). Building on Remark 5.9, one can verify that RDEs driven by \(\mathbf {X}^\phi \) coincide (up to reparametrisation) with the associated Marcus SDEs driven by \(\mathbf {Y}\) with an adjusted drift given by appropriate N-th level Lie brackets of the driving vector fields (cf. [16] and [18, Section 13.3.4]).
5.3.2 The Lévy–Khintchine formula for Lévy rough paths
In this subsection we determine a formula for the characteristic function (in the sense of [11]) of the signature of a Lévy rough path.
Recall that for every \(\mathbf {x}\in WG\Omega _p(\mathbb {R}^d)\), there exists an element
called the signature of \(\mathbf {x}\), where \(S(\mathbf {x})_{0,T}^k\) encodes all the k-fold iterated integrals of \(\mathbf {x}\). A fundamental result in rough paths theory is that \(S(\mathbf {x})_{0,T}\) belongs to a certain group \(G(\mathbb {R}^d)\) contained in the set of group-like elements of \(T((\mathbb {R}^d))\) (for the tensor Hopf algebra structure). Furthermore, for every linear map \(f \in \mathbf L(\mathbb {R}^d,\mathbf L(\mathbb {R}^e, \mathbb {R}^e))\), the series \(\sum _{k=0}^\infty f^{\otimes k}(S(\mathbf {x})_{0,T}^k)\) converges absolutely to an operator \(f(S(\mathbf {x})_{0,T}) \in \mathbf L(\mathbb {R}^e,\mathbb {R}^e)\) (which is precisely the flow \(U^\mathbf {x}_{T\leftarrow 0}\) associated with the RDE (5.1) upon treating f as a collection of linear vector fields on \(\mathbb {R}^e\)).
For a finite-dimensional complex Hilbert space H, let \(\mathfrak u(H) \subset \mathbf L(H,H)\) denote the Lie algebra of anti-Hermitian operators on H and \(\mathcal {U}(H) \subset \mathbf L(H,H)\) the group of unitary operators on H. Note that every \(f \in \mathbf L(\mathbb {R}^d, \mathfrak u(H))\) naturally induces a map \(f : G(\mathbb {R}^d) \mapsto \mathbf L(H,H)\) (which is continuous for the topology on \(G(\mathbb {R}^d)\) introduced in [11]) given by \(f(x) = \sum _{k=0}^\infty f^{\otimes k}(x^k)\), where \(x^k \in (\mathbb {R}^d)^{\otimes k}\) denotes the level-k projection of x. Note that f satisfies \(f(x) \in \mathcal {U}(H)\) and \(f(xy) = f(x)f(y)\) for all \(x,y \in G(\mathbb {R}^d)\), i.e., \(f : G(\mathbb {R}^d) \mapsto \mathcal {U}(H)\) is a unitary representation of \(G(\mathbb {R}^d)\).
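The series \(f(x) = \sum _{k=0}^\infty f^{\otimes k}(x^k)\), its unitarity, and the homomorphism property can be checked numerically on truncated group-like elements. The sketch below (all matrices, dimensions, and scalings are illustrative choices, not taken from the paper) builds \(x = \exp _\otimes (a) \otimes \exp _\otimes (b)\) truncated at level 12 and verifies, up to truncation error, that \(f(x)\) is unitary and equals \(e^{f(a)}e^{f(b)}\).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
d, dim_H, K = 2, 3, 12        # R^d, H = C^3, truncation level (illustrative)

# f in L(R^d, u(H)): d small anti-Hermitian matrices (scaled so the
# truncated series converges well below the tolerance used below)
A = rng.normal(size=(d, dim_H, dim_H)) + 1j * rng.normal(size=(d, dim_H, dim_H))
F = [0.02 * (M - M.conj().T) for M in A]

def exp_tensor(a, K):
    """Truncated signature of a straight line with increment a: level k is a^{(x)k}/k!."""
    levels = [np.ones(())]
    for k in range(1, K + 1):
        levels.append(np.multiply.outer(levels[-1], a) / k)
    return levels

def tensor_mul(x, y, K):
    """Truncated tensor-algebra product (Chen's identity for concatenation)."""
    return [sum(np.multiply.outer(x[i], y[k - i]) for i in range(k + 1))
            for k in range(K + 1)]

def f_of(x):
    """f(x) = sum_k f^{(x)k}(x^k), with f^{(x)k}(e_{i1}(x)...(x)e_{ik}) = F_{i1}...F_{ik}."""
    total = x[0].item() * np.eye(dim_H, dtype=complex)
    for k in range(1, len(x)):
        flat = x[k].reshape(-1)
        for pos, c in enumerate(flat):
            M = np.eye(dim_H, dtype=complex)
            for i in np.unravel_index(pos, (d,) * k):
                M = M @ F[i]
            total = total + c * M
    return total

a, b = rng.normal(size=d), rng.normal(size=d)
U = f_of(tensor_mul(exp_tensor(a, K), exp_tensor(b, K), K))

fa = sum(a[i] * F[i] for i in range(d))
fb = sum(b[i] * F[i] for i in range(d))
assert np.allclose(U @ U.conj().T, np.eye(dim_H), atol=1e-8)   # f(x) in U(H)
assert np.allclose(U, expm(fa) @ expm(fb), atol=1e-8)          # f(xy) = f(x)f(y)
```

The second assertion is the homomorphism property on a product of two exponentials: applying f level by level to \(\exp _\otimes (a) \otimes \exp _\otimes (b)\) reproduces the product of the operator exponentials \(e^{f(a)}e^{f(b)}\).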
One of the main results of [11] is that for any \(WG\Omega _p(\mathbb {R}^d)\)-valued random variable \(\mathbf {X}\), the following characteristic function
where H varies over all finite dimensional complex Hilbert spaces, uniquely determines \(S(\mathbf {X})_{0,T}\) as a \(G(\mathbb {R}^d)\)-valued random variable (more generally, this result holds for every \(G(\mathbb {R}^d)\)-valued random variable).
Remark 5.16
Boedihardjo et al. [5] have recently established a conjecture of Hambly–Lyons [24] on the kernel of the map \(S : WG\Omega _p(\mathbb {R}^d) \mapsto T((\mathbb {R}^d))\). A consequence of the main result of [5] is that for all \(\mathbf {x},\mathbf {y}\in WG\Omega _p(\mathbb {R}^d)\), \(S(\mathbf {x})_{0,T} = S(\mathbf {y})_{0,T} \Leftrightarrow U^\mathbf {x}_{T\leftarrow 0} = U^\mathbf {y}_{T\leftarrow 0}\) for all collections \((f_i)_{i=1}^d\) of vector fields in \( Lip ^{\gamma }(\mathbb {R}^e)\) with \(\gamma > p\) (not necessarily linear). In combination with the results from [11], it follows that for any \(WG\Omega _p(\mathbb {R}^d)\)-valued random variable \(\mathbf {X}\), knowledge of the map (5.2) uniquely determines the law of every RDE driven by \(\mathbf {X}\).
We now state the aforementioned formula for the characteristic function of the signature of a Lévy rough path. For a subset \(W\subseteq G\), path function \(\phi : W \mapsto C^{p -var }_o([0,1], G)\), and a linear map \(f \in \mathbf L(\mathbb {R}^d,\mathbf L(\mathbb {R}^e,\mathbb {R}^e))\), we adopt the shorthand notation
By interpolation (Lemma 3.3), one can readily verify that \(f_\phi \) is continuous whenever \(\phi \) is p-approximating and endpoint continuous. Finally, we canonically treat \(\mathfrak g= \mathfrak g^N(\mathbb {R}^d)\) as a subspace of the tensor algebra \(T(\mathbb {R}^d)\), so that for any Lie algebra \(\mathfrak h\), every \(f \in \mathbf L(\mathbb {R}^d, \mathfrak h)\) extends uniquely to a linear map \(f : \mathfrak g\mapsto \mathfrak h\).
Theorem 5.17
(Lévy–Khintchine formula). Let \(\mathbf {X}\) be a Lévy process in G with triplet \((A, B, \Pi )\). Suppose that for some \(1 \le p < N+1\), \(\left| \left| \mathbf {X} \right| \right| _{p -var ;[0,T]} < \infty \) a.s. Let \(\phi : supp (\Pi ) \mapsto C_o^{p -var }([0,1],G)\) be a p-approximating, endpoint continuous path function.
Then for every finite-dimensional complex Hilbert space H and \(f \in \mathbf L(\mathbb {R}^d,\mathfrak u(H))\), it holds that the function
is \(\Pi \)-integrable, and that
where
Remark 5.18
Note that every pair \((\mathbf {X},\phi )\) as in Theorem 5.17 naturally gives rise to a convolution semigroup \((\mu _t)_{t > 0}\) of probability measures on \(G(\mathbb {R}^d)\) (which we recall is a Polish but, if \(d > 1\), non-locally compact group, [11]) given by \(\mu _t = Law \left[ S(\mathbf {X}^\phi _{[0,t]})_{0,t} \right] \), where \(\mathbf {X}_{[s,t]}^\phi \in C([s,t],G)\) denotes the connecting map applied to the restriction \({\left. \mathbf {X} \phantom {\big |} \right| _{[s,t]} }\). Moreover, treating \(\phi \) as a map \( supp (\Pi ) \mapsto G(\mathbb {R}^d)\), \(x \mapsto S(\phi (x))_{0,1}\), and every \(f \in \mathbf L(\mathbb {R}^d,\mathfrak u(H))\) as a unitary representation of \(G(\mathbb {R}^d)\), Theorem 5.17 bears a close resemblance to other forms of the Lévy–Khintchine formula stated in terms of unitary representations of Lie groups (see, e.g., [1, Section 5.5]).
Remark 5.19
Theorem 5.17 can be seen as an extension of a related result on the expected signature of a Lévy p-rough path for \(1 \le p < 3\) ([17, Theorem 53]) in which \(\phi \) is taken as the log-linear path function \(\phi (e^x) = e^{tx}\), \(\forall x \in \mathfrak g\), and additional moment assumptions on the Lévy measure are required to ensure existence of the expected signature.
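For orientation, we record a standard fact (stated here as an aside rather than taken from [17]): since \(t \mapsto e^{tx}\) is a one-parameter subgroup of G, multiplicativity of the signature gives a closed form for each log-linear interpolating segment,

```latex
S\bigl(\phi(e^{x})\bigr)_{0,1}
  \;=\; \exp_{\otimes}(x)
  \;=\; \sum_{k=0}^{\infty} \frac{x^{\otimes k}}{k!},
\qquad x \in \mathfrak g,
\qquad\text{and hence}\qquad
f\bigl(S(\phi(e^{x}))_{0,1}\bigr) \;=\; e^{f(x)} \in \mathcal U(H),
```

where \(f(x)\) denotes the unique linear extension of \(f \in \mathbf L(\mathbb {R}^d, \mathfrak u(H))\) to \(\mathfrak g\) described before Theorem 5.17.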
We first record the following estimate which is readily derived from standard Euler approximations to RDEs ([18, Corollary 10.15]).
Lemma 5.20
Let \(1 \le p< \gamma < N+1\), \(\theta \) a scaling function on G, \(W \subseteq G\) a subset, and \(\phi \) a path function defined on W such that \(\lim _{x \rightarrow 1_G}\left| \left| \phi (x) \right| \right| ^\gamma _{p -var ;[0,1]}/\theta (x) = 0\).
Then for all \(f \in \mathbf L(\mathbb {R}^d, \mathbf L(\mathbb {R}^e,\mathbb {R}^e))\), it holds that
Proof of Theorem 5.17
Without loss of generality, we can assume \(T=1\). Let V be a bounded neighbourhood of \(1_G\) and \(W \,{:=}\, supp (\Pi ) \cup V\). Note that \(\phi \) shrinks on the diagonal (see Remark A.2), so we can find a path function \(\psi : W \mapsto C([0,1],G)\) which is also p-approximating and shrinks on the diagonal and such that \(\psi \equiv \phi \) on \( supp (\Pi )\) (e.g., let \(\psi (x)\) be a geodesic from \(1_G\) to x for all \(x \in V{\setminus } supp (\Pi )\)).
Let \(X_{n1}, \ldots , X_{nn}\) be the iid array constructed in Sect. 2.3 associated to \(\mathbf {X}\), and let \(\mathbf {X}^n\) be the associated random walk. Due to the shrinking support of the random variables \(Y_{nj}\) from Sect. 2.3, observe that
where we recall the notation \( supp (\Pi )^\varepsilon \) from Section A.1. In particular, for all n sufficiently large, \(\mathbf {X}^n \in W^0\) a.s., so that \(\mathbf {X}^{n,\psi }\) is well-defined. Observe that, due to (5.4) and Proposition A.4, \(\mathbf {X}^{n, \psi } \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^{\psi }\) as \(C_o([0,1], G)\)-valued random variables.
Let \(p< p' < N+1\). Since \(\left| \left| \mathbf {X} \right| \right| _{p -var ;[0,1]} < \infty \) a.s. by assumption, we deduce from Theorem 5.1, Lemma 5.4, and Proposition 3.5, that
where the equality in law follows from the fact that \(\psi \equiv \phi \) on \( supp (\Pi )\) and \(\mathbf {X}\in supp (\Pi )^0\) a.s. (see Remark 5.6).
For all \(i \in \{1,\ldots , m\}\), define \(q_i \,{:=}\, 2\wedge (p/d_i)\), and let \(\theta \) be a scaling function on G such that \(\theta \equiv \sum _{i=1}^m |\xi _i|^{q_i}\) in a neighbourhood of \(1_G\). It follows from Lemma 2.5 and part (2) of Theorem 5.1 that \(\theta \) scales the array \(X_{nj}\).
Since \(\psi \) is p-approximating, it holds that \(\lim _{x \rightarrow 1_G}\left| \left| \psi (x) \right| \right| _{p -var ;[0,1]}^\gamma /\theta (x) = 0\) for all \(\gamma > p\). For \(f \in \mathbf L(\mathbb {R}^d, \mathfrak u(H))\), observe that \(f_{\psi }\) is a map from W to the unitary operators on H (thus bounded) and is continuous on \( supp (\Pi )\). Since \(f_{\psi } \equiv f_{\phi }\) on \( supp (\Pi )\), it now follows from Lemmas 5.20 and 2.6 that
and thus
Since the array \(X_{nj}\) is iid, note that for all \(n \ge 1\)
Since \(\mathbf {X}^{n,\psi } \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^\phi \) as \(WG\Omega _{p'}(\mathbb {R}^d)\)-valued r.v.’s, and \(\mathbf {x}\mapsto f(S(\mathbf {x})_{0,1})\) is a continuous bounded function on \(WG\Omega _{p'}(\mathbb {R}^d)\), we obtain (5.3). \(\square \)
References
Applebaum, D.: Probability on Compact Lie Groups. Probability Theory and Stochastic Modelling, vol. 70, pp. xxvi+217. With a foreword by Herbert Heyer. Springer, Cham. ISBN: 978-3-319-07841-0; 978-3-319-07842-7. doi:10.1007/978-3-319-07842-7 (2014)
Baudoin, F.: An Introduction to the Geometry of Stochastic Flows. Imperial College Press, London, pp. x+140. ISBN: 1-86094-481-7. doi:10.1142/9781860947261 (2004)
Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley Series in Probability and Statistics: Probability and Statistics. A Wiley-Interscience Publication, pp. x+277. Wiley, New York. ISBN: 0-471-19745-9. doi:10.1002/9780470316962 (1999)
Blumenthal, R.M., Getoor, R.K.: Sample functions of stochastic processes with stationary independent increments. J. Math. Mech. 10, 493–516 (1961)
Boedihardjo, H., Geng, X., Lyons, T., Yang, D.: The signature of a rough path: uniqueness. Adv. Math. 293, 720–737. ISSN: 0001-8708. doi:10.1016/j.aim.2016.02.011 (2016)
Bretagnolle, J.: \(p\)-variation de fonctions aléatoires. II. Processus à accroissements indépendants. In: Séminaire de Probabilités, VI (Univ. Strasbourg, année universitaire 1970–1971). Lecture Notes in Mathematics, vol. 258, pp. 64–71. Springer, Berlin (1972)
Breuillard, E., Friz, P., Huesmann, M.: From random walks to rough paths. Proc. Am. Math. Soc. 137(10), 3487–3496. ISSN: 0002-9939. doi:10.1090/S0002-9939-09-09930-4 (2009)
Bruned, Y., Chevyrev, I., Friz, P.K., Preiss, R.: A Rough Path Perspective on Renormalization. ArXiv e-prints. arXiv:1701.01152 [math.PR] (2017)
Bruned, Y., Hairer, M., Zambotti, L.: Algebraic renormalisation of regularity structures. ArXiv e-prints. arXiv:1610.08468 [math.RA] (2016)
Cass, T., Ogrodnik, M.: Tail estimates for Markovian rough paths. ArXiv e-prints. To appear in Annals of Probability. arXiv:1411.5189 [math.PR] (2014)
Chevyrev, I., Lyons, T.: Characteristic functions of measures on geometric rough paths. Ann. Probab. 44(6), 4049–4082. ISSN: 0091-1798. doi:10.1214/15-AOP1068 (2016)
Chevyrev, I., Ogrodnik, M.: A support and density theorem for Markovian rough paths. ArXiv e-prints. arXiv:1701.03002 [math.PR] (2017)
Feinsilver, P.: Processes with independent increments on a Lie group. Trans. Am. Math. Soc. 242, 73–121. ISSN: 0002-9947 (1978)
Flint, G., Hambly, B., Lyons, T.: Discretely sampled signals and the rough Hoff process. Stochastic Process. Appl. 126(9), 2593–2614. ISSN: 0304-4149. doi:10.1016/j.spa.2016.02.011 (2016)
Folland, G.B., Stein, E.M.: Hardy spaces on homogeneous groups. Mathematical Notes, vol. 28, pp. xii+285. Princeton University Press, Princeton, NJ; University of Tokyo Press, Tokyo. ISBN: 0-691-08310-X (1982)
Friz, P., Oberhauser, H.: Rough path limits of the Wong–Zakai type with a modified drift term. J. Funct. Anal. 256(10), 3236–3256. ISSN: 0022-1236. doi:10.1016/j.jfa.2009.02.010 (2009)
Friz, P., Shekhar, A.: General rough integration, Levy rough paths and a Levy–Kintchine type formula. ArXiv e-prints. To appear in Annals of Probability. arXiv:1212.5888 [math.PR] (2012)
Friz, P.K., Victoir, N.B.: Multidimensional Stochastic Processes as Rough Paths. Cambridge Studies in Advanced Mathematics, vol. 120, pp. xiv+656. Cambridge University Press, Cambridge. ISBN: 978-0-521-87607-0 (2010)
Friz, P., Victoir, N.: On uniformly subelliptic operators and stochastic area. Probab. Theory Relat. Fields 142(3–4), 475–523. ISSN: 0178-8051. doi:10.1007/s00440-007-0113-y (2008)
Gīhman, I.I., Skorohod, A.V.: The theory of stochastic processes. I. Translated from the Russian by S. Kotz, Die Grundlehren der mathematischen Wissenschaften, Band 210, pp. viii+570. Springer, New York (1974)
Gubinelli, M.: Ramification of rough paths. J. Differ. Equ. 248(4), 693–721. ISSN: 0022-0396. doi:10.1016/j.jde.2009.11.015 (2010)
Hairer, M., Kelly, D.: Geometric versus non-geometric rough paths. Ann. Inst. Henri Poincaré Probab. Stat. 51(1), 207–251. ISSN: 0246-0203. doi:10.1214/13-AIHP564 (2015)
Hairer, M.: A theory of regularity structures. Invent. Math. 198(2), 269–504. ISSN: 0020-9910. doi:10.1007/s00222-014-0505-4 (2014)
Hambly, B., Lyons, T.: Uniqueness for the signature of a path of bounded variation and the reduced path group. Ann. Math. 171(1), 109–167. ISSN: 0003-486X. doi:10.4007/annals.2010.171.109 (2010)
Hebisch, W., Sikora, A.: A smooth subadditive homogeneous norm on a homogeneous group. Stud. Math. 96(3), 231–236. ISSN: 0039-3223 (1990)
Hunt, G.A.: Semi-groups of measures on Lie groups. Trans. Am. Math. Soc. 81, 264–293. ISSN: 0002-9947 (1956)
Kallenberg, O.: Foundations of modern probability. Probability and its Applications (New York), pp. xii+523. Springer, New York. ISBN: 0-387-94957-7 (1997)
Kolokoltsov, V.N.: Markov processes, semigroups and generators. de Gruyter Studies in Mathematics, vol. 38, pp. xviii+430. Walter de Gruyter & Co., Berlin. ISBN: 978-3-11-025010-7 (2011)
Kunita, H.: Some problems concerning Lévy processes on Lie groups. Stochastic analysis (Ithaca, NY, 1993). In: Proceedings of Symposium. Pure American Mathematical Society, Providence, RI, vol. 57, pp. 323–341. doi:10.1090/pspum/057/1335479 (1995)
Liao, M.: Lévy Processes in Lie Groups. Cambridge Tracts in Mathematics, vol. 162, pp. x+266. Cambridge University Press, Cambridge. ISBN: 0-521-83653-0. doi:10.1017/CBO9780511546624 (2004)
Lyons, T.J., Caruana, M., Lévy, T.: Differential equations driven by rough paths. Lecture Notes in Mathematics, vol. 1908, pp. xviii+109. Springer, Berlin. ISBN: 978-3-540-71284-8; 3-540-71284-4 (2007)
Manstavičius, M.: \(p\)-variation of strong Markov processes. Ann. Probab. 32(3A), 2053–2066. ISSN: 0091-1798. doi:10.1214/009117904000000423 (2004)
McShane, E.J.: Stochastic differential equations and models of random processes. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. III: Probability theory. Univ. California Press, Berkeley, California, pp. 263–294 (1972)
Porter, J.E.: Helly’s selection principle for functions of bounded \(p\)-variation. Rocky Mt. J. Math. 35(2), 675–679. ISSN: 0035-7596. doi:10.1216/rmjm/1181069753 (2005)
Sussmann, H.J.: Limits of the Wong–Zakai type with a modified drift term. In: Stochastic Analysis. Academic Press, Boston (1991)
Williams, D.R.E.: Path-wise solutions of stochastic differential equations driven by Lévy processes. Rev. Mat. Iberoamericana 17(2), 295–329. ISSN: 0213-2230. doi:10.4171/RMI/296 (2001)
Acknowledgements
Part of this work is contained in the author’s D.Phil. thesis, written under the guidance of Prof. Terry Lyons, to whom the author is sincerely grateful. The author would also like to thank Atul Shekhar, Guy Flint, and Prof. Peter Friz for helpful conversations on this topic, and the anonymous referees, whose suggestions helped to significantly improve this paper.
Additional information
The author is supported by a Junior Research Fellowship of St John’s College, Oxford.
Appendices
Appendix A: Path functions
In this section we introduce and study the basic properties of path functions, which serve to systematically connect the left- and right-limits of a càdlàg path. Throughout the section, let (E, d) be a metric space and equip C([0, T], E) and D([0, T], E) with the uniform and the Skorokhod topology respectively.
Definition A.1
For a subset \(J \subseteq E \times E\), we call \(\phi : J \mapsto C([0,1], E)\) a path function defined on J if
We say \(\phi \) is endpoint continuous if \(\phi \) is continuous and
For \(p \ge 1\), we say \(\phi \) is p-approximating if for every \(r > 0\) there exists \(C > 0\) such that for all \((x,y) \in J\) with \(d(x,y) < r\)
When E is a Lie group, we say \(\phi \) is left-invariant if there exists a subset \(B \subseteq E\) such that and
Note that for a Lie group G and a subset \(B \subseteq G\), there is a canonical bijection between functions \(\phi : B \mapsto C([0,1], G)\), for which
and left-invariant path functions defined on \(J \,{:=}\, \{(x,y) \mid x^{-1}y \in B\}\). Whenever we speak of a path function \(\phi \) defined on a subset \(B\subseteq G\), we shall mean that \(\phi \) satisfies (A.1) and shall identify \(\phi \) with the corresponding left-invariant path function defined on \(\{(x,y) \mid x^{-1}y \in B\}\).
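The bijection between maps \(\phi : B \mapsto C([0,1], G)\) satisfying (A.1) and left-invariant path functions can be made concrete in a matrix group. The sketch below is an illustration only, using the Heisenberg group of unipotent upper-triangular matrices and a log-linear choice of \(\phi \) as assumed ingredients; it verifies the endpoint and left-invariance properties \(\phi (x,y)(0) = x\), \(\phi (x,y)(1) = y\), and \(\phi (zx, zy) = z\,\phi (x,y)\).

```python
import numpy as np

# Heisenberg group: unipotent upper-triangular 3x3 matrices.
def h_exp(m):        # exact exp for strictly upper-triangular m (m^3 = 0)
    return np.eye(3) + m + m @ m / 2

def h_log(g):        # exact log for unipotent g = I + n
    n = g - np.eye(3)
    return n - n @ n / 2

def phi_B(b, t):
    """Log-linear path from the identity to b: t -> exp(t log b)."""
    return h_exp(t * h_log(b))

def phi(x, y, t):
    """Left-invariant path function on J = {(x, y) : x^{-1} y in B}:
    phi(x, y)(t) = x . phi_B(x^{-1} y)(t)."""
    return x @ phi_B(np.linalg.inv(x) @ y, t)

def h(a, b, c):      # group element with coordinates (a, b, c)
    return np.array([[1.0, a, c], [0.0, 1.0, b], [0.0, 0.0, 1.0]])

x, y, z = h(1.0, 2.0, 0.5), h(-0.5, 1.0, 3.0), h(0.3, -1.2, 0.7)
assert np.allclose(phi(x, y, 0.0), x)                      # starts at x
assert np.allclose(phi(x, y, 1.0), y)                      # ends at y
assert np.allclose(phi(z @ x, z @ y, 0.5), z @ phi(x, y, 0.5))  # left-invariant
```

Only the one-variable map \(\phi _B\) needs to be specified; the two-variable path function is then recovered by left translation, exactly as in the identification above.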
A.1 The connecting map on the Skorokhod space
For a path \(\mathbf {x}\in D([0,T], E)\) and a time \(t \in [0,T]\), define \(\Delta \mathbf {x}_t \,{:=}\, (\mathbf {x}_{t-},\mathbf {x}_t) \in E \times E\) and \(\left| \left| \Delta \mathbf {x}_t \right| \right| \,{:=}\, d(\mathbf {x}_{t-},\mathbf {x}_t)\). We call t a jump time of \(\mathbf {x}\) if \(\left| \left| \Delta \mathbf {x}_t \right| \right| > 0\).
For a subset \(J \subseteq E \times E\) and \(\varepsilon \ge 0\) define the subset of càdlàg paths
In particular, \(\mathbf {x}\in J^0\) if and only if all the jumps of \(\mathbf {x}\) are in J. In the case that E is a Lie group and \(B \subseteq E\), we set \(B^\varepsilon \,{:=}\, J^\varepsilon \) where .
For a path function \(\phi : J \mapsto C([0,1], E)\), we now define a map \(\mathbf {x}\mapsto \mathbf {x}^\phi \) from \(J^0\) to C([0, T], E). The construction is similar to the method considered in [17] and [36] of adding fictitious times over which to traverse the jumps.
Consider \(\mathbf {x}\in J^0\). Let \(t_1, t_2, \ldots \) be the jump times of \(\mathbf {x}\) ordered so that \(\left| \left| \Delta \mathbf {x}_{t_1} \right| \right| \ge \left| \left| \Delta \mathbf {x}_{t_2} \right| \right| \ge \ldots \) with \(t_j < t_{j+1}\) if \(\left| \left| \Delta \mathbf {x}_{t_j} \right| \right| = \left| \left| \Delta \mathbf {x}_{t_{j+1}} \right| \right| \). Let \(0 \le m \le \infty \) be the number of jumps of \(\mathbf {x}\). We call the sequence \((t_j)_{j=1}^m\) the canonically ordered jump times of \(\mathbf {x}\).
We henceforth fix a strictly decreasing sequence \((r_i)_{i=1}^\infty \) of positive real numbers such that \(\sum _{i=1}^\infty r_i < \infty \). Define the sequence \((n_k)_{k = 0}^m\) by \(n_0 = 0\), and for \(1 \le k < m+1\) let \(n_k\) be the smallest integer such that \(n_k > n_{k-1}\) and \(r_{n_k} < \left| \left| \Delta \mathbf {x}_{t_{k}} \right| \right| \).
Let \(r \,{:=}\, \sum _{k=1}^m r_{n_k}\). Define the strictly increasing (càdlàg) function
Note that \(\tau (t-) < \tau (t)\) if and only if \(t = t_k\) for some \(1 \le k < m+1\). Moreover, note that the interval \([\tau (t_k-), \tau (t_k))\) is of length \(r_{n_k}\).
We now define \(\widehat{\mathbf {x}} \in C([0,T+r],E)\) by
Denote by \(\tau _r\) the linear bijection from [0, T] to \([0,T+r]\). We finally define
and the associated time change
We call the map \(\mathbf {x}\mapsto \mathbf {x}^\phi \) from \(J^0\) to C([0, T], E) the connecting map.
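The construction of the connecting map is algorithmic, and a simplified sketch may clarify it. The code below is an illustration under assumed choices (real-valued pure-jump paths, \(r_i = 2^{-i}\), and a linear path function): it orders the jumps canonically, allocates the fictitious intervals of lengths \(r_{n_k}\), builds \(\widehat{\mathbf {x}}\) on \([0, T+r]\), and reparametrises back to [0, T].

```python
import numpy as np

def connecting_map(T, jumps, phi, r=lambda i: 2.0 ** -i):
    """Evaluate x^phi for a real-valued pure-jump cadlag path on [0, T].

    jumps: time-ordered list of (t_k, x_{t_k-}, x_{t_k}); phi(a, b, s)
    traverses the jump from a to b over s in [0, 1]."""
    # canonical order: decreasing jump size, ties broken by time
    order = sorted(range(len(jumps)),
                   key=lambda k: (-abs(jumps[k][2] - jumps[k][1]), jumps[k][0]))
    n, widths = 0, {}
    for k in order:          # n_k: smallest n > n_{k-1} with r(n) < |jump k|
        n += 1
        while r(n) >= abs(jumps[k][2] - jumps[k][1]):
            n += 1
        widths[k] = r(n)
    total = sum(widths.values())

    def x_hat(u):
        """Path on [0, T + total]: jump k is traversed by phi over an
        inserted interval of length widths[k] starting at tau(t_k-)."""
        shift, val = 0.0, (jumps[0][1] if jumps else 0.0)
        for k, (t, a, b) in enumerate(jumps):
            start = t + shift
            if u < start:
                return val
            if u < start + widths[k]:
                return phi(a, b, (u - start) / widths[k])
            shift, val = shift + widths[k], b
        return val

    return lambda t: x_hat(t * (T + total) / T)   # precompose with tau_r

phi_lin = lambda a, b, s: a + s * (b - a)
# jumps of sizes 1 and 0.5 at times 0.3 and 0.6  ->  widths 1/2 and 1/4
x_phi = connecting_map(1.0, [(0.3, 0.0, 1.0), (0.6, 1.0, 1.5)], phi_lin)
assert abs(x_phi(0.0) - 0.0) < 1e-12 and abs(x_phi(1.0) - 1.5) < 1e-12
assert abs(x_phi(0.55 / 1.75) - 0.5) < 1e-9   # middle of the first jump interval
```

The summability of \((r_i)\) is what keeps the total inserted time finite even when the path has infinitely many jumps; in the finite-jump sketch above it simply fixes the lengths \(r_{n_k}\).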
A.2 Measurability and continuity
The main result of this subsection is a continuity property of the connecting map, which we summarise in Proposition A.4.
For a subset \(J \subseteq E \times E\), we say that a path function \(\phi : J \mapsto C([0,T], E)\) shrinks on the diagonal if for every bounded set \(B \subseteq J\) and \(\varepsilon > 0\), there exists \(\delta > 0\) such that for all \((x,y) \in B\) with \(d(x,y) < \delta \)
Remark A.2
Observe that every left-invariant, endpoint continuous path function defined on a subset of a Lie group shrinks on the diagonal.
Lemma A.3
Consider \(J\subseteq E\times E\) and a path function \(\phi : J \mapsto C([0,1], E)\) which shrinks on the diagonal. Suppose that \(\phi \) is endpoint continuous on a subset \(K \subseteq J\).
Let \(\mathbf {x}\in K^0\) and a sequence \(\mathbf {x}(n) \in J^0\) such that \(\mathbf {x}(n) \rightarrow \mathbf {x}\) in the Skorokhod topology as \(n \rightarrow \infty \), and such that for every \(\varepsilon > 0\), there exists \(n_0 > 0\) such that \(\mathbf {x}(n) \in K^\varepsilon \) for all \(n \ge n_0\).
Then \(\lim _{n \rightarrow \infty } d_{\infty ;[0,T]}(\mathbf {x}(n)^\phi , \mathbf {x}^\phi ) = 0\).
Proof
Let \(\varepsilon > 0\). By uniform continuity of \(\mathbf {x}^\phi \), there exists \(\eta > 0\) such that
Let \(t_1, t_2, \ldots \) be the canonically ordered jump times of \(\mathbf {x}\), and denote \([s_i, u_i) \,{:=}\, [\tau _\mathbf {x}(t_i-), \tau _\mathbf {x}(t_i))\). For another element \(\mathbf {y}\in D([0,T], E)\), let \(t_1', t_2', \ldots \) be the jump times of \(\mathbf {y}\) and \([s_i', u_i') \,{:=}\, [\tau _\mathbf {y}(t_i'-), \tau _\mathbf {y}(t_i'))\). Let \((n_k)_{k=0}^m\) and \((n_k')_{k=1}^{m'}\) be the corresponding sequences for \(\mathbf {x}\) and \(\mathbf {y}\) respectively (where m and \(m'\) are the number of jumps of \(\mathbf {x}\) and \(\mathbf {y}\)).
Denote by \(\Lambda ^*\) the set of continuous, strictly increasing bijections \(\lambda : [0,T] \mapsto [0,T]\) and let \( id \in \Lambda ^*\) be the identity map. Consider on D([0, T], E) the Skorokhod metric
Let \(\delta > 0\) (which we shall send to zero), and suppose that there exists \(\lambda \in \Lambda ^*\) such that \(d_{\infty ;[0,T]}(\mathbf {x}\circ \lambda , \mathbf {y}) < \delta \) and \(d_{\infty ;[0,T]}(\lambda , id ) < \delta \).
Observe that there exists an integer \(k \ge 1\) such that \(\lambda (t_i') = t_i\) for all \(i \in \{1, \ldots , k\}\), and, denoting by \(v_1< \cdots < v_k\) (resp. \(v_1'< \cdots < v_k'\)) the same set of points as \(t_1, \ldots , t_k\) (resp. \(t_1', \ldots , t_k'\)) ordered monotonically, it holds that \(\lambda (t') \in [v_i, v_{i+1})\) for all \(t' \in [v_i', v_{i+1}')\). In particular, it holds that \(d(\mathbf {y}_{t_i'-},\mathbf {x}_{t_i-}), d(\mathbf {y}_{t_i'}, \mathbf {x}_{t_i}) < \delta \) for all \(i \in \{1, \ldots , k\}\).
Moreover, by choosing \(\delta \) sufficiently small, we can assume that \(n_i = n_i'\) for all \(i \in \{1,\ldots , k\}\) and that k is sufficiently large so that, by making \(\sum _{j = k+1}^\infty r_j\) sufficiently small, it holds that \(|\tau _{\mathbf {y}}(t') - \tau _\mathbf {x}(\lambda (t'))| < \eta \) for all \(t' \in [v_i', v_{i+1}')\) (this is where we have used the condition \(r_{n_j} < \left| \left| \Delta \mathbf {x}_{t_j} \right| \right| \)).
In particular, it holds that for all \(t' \in [v_i', v_{i+1}')\)
This covers all points not in an interval \([s_i',u_i')\) (note that no assumptions on \(\phi \) were needed yet except that \(\phi (x,y)\) itself is a continuous path for each \((x,y) \in J\)).
Now we let \(\mathbf {y}= \mathbf {x}(n)\) for some n. We may choose n sufficiently large, such that \(\sigma (\mathbf {x},\mathbf {y}) < \delta \) and such that \(\Delta \mathbf {y}_{t_i'} \in K\) for all \(i \in \{1, \ldots , k\}\).
Due to the continuity of \(\phi : K \mapsto C([0,1], E)\) at \(\Delta \mathbf {x}_{t_i} \in K\), it follows that for all \(w' \in [s_i', u_i')\) and \(i \in \{1,\ldots , k\}\), there exists \(w \in [s_i, u_i)\) such that \(|w' - w| < \eta \) and \(d(\mathbf {x}^\phi _w, \mathbf {y}^\phi _{w'}) < \varepsilon \), so that
Finally, since \(\phi \) shrinks on the diagonal, we may further decrease \(\delta \) if necessary so that for all \(w' \in [s_j', u_j')\) and \(j > k\), it holds that \(|w' - u_j'| < \eta \) and \(d(\mathbf {y}^\phi _{w'}, \mathbf {y}^\phi _{u_j'}) < \varepsilon \). Now \(u_j' = \tau _\mathbf {y}(t')\) for some \(t' \in [0,T]\), and thus, by (A.2),
from which it follows that
\(\square \)
In the following proposition, we equip all topological spaces with their respective Borel \(\sigma \)-algebras.
Proposition A.4
Suppose E is a Polish space, \(K \subseteq J \subseteq E\times E\) are measurable sets, and \(\phi : J \mapsto C([0,1],E)\) is a measurable path function.
-
(i)
Then \(J^\varepsilon \) is a measurable subset of D([0, T], E) for all \(\varepsilon \ge 0\), and the connecting map \(\cdot ^\phi : J^0 \mapsto C([0,T], E)\) is measurable.
-
(ii)
Let \(\mathbf {X}\) be a D([0, T], E)-valued random variable such that \(\mathbf {X}\in K^0\) a.s. Let \((\mathbf {X}_n)_{n \ge 1}\) be a collection of D([0, T], E)-valued random variables such that \(\mathbf {X}_n \in J^0\) a.s. and \(\mathbf {X}_n\,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}\) as D([0, T], E)-valued random variables. Suppose further that for every \(\varepsilon > 0\), \(\lim _{n \rightarrow \infty }\mathbb {P} \left[ \mathbf {X}_n \notin K^\varepsilon \right] = 0\), and that \(\phi \) is endpoint continuous on K and shrinks on the diagonal in J.
Then \(\mathbf {X}_n^\phi \,{\buildrel \mathcal {D}\over \rightarrow }\,\mathbf {X}^\phi \) as C([0, T], E)-valued random variables.
Proof
-
(i)
Measurability of \(J^\varepsilon \) is an easy consequence of the measurability of the finite-dimensional projections \(\mathbf {x}\mapsto \mathbf {x}_t\) on D([0, T], E) ([3, Theorem 12.5]). To show measurability of the connecting map, note that it suffices to show that for every \(s \in [0,T]\), the map \(\mathbf {x}\mapsto \mathbf {x}^\phi (s)\) is measurable (cf. [3, Example 1.3]), which in turn follows easily from the construction of \(\mathbf {x}^\phi \).
-
(ii)
Note that the condition \(\lim _{n \rightarrow \infty }\mathbb {P} \left[ \mathbf {X}_n \notin K^\varepsilon \right] = 0\) for all \(\varepsilon > 0\) implies that
$$\begin{aligned} \mathbf {Y}_n \,{:=}\, (\mathbf {X}_n, \mathbf 1 \{\mathbf {X}_n \notin K^{1}\}, \mathbf 1 \{\mathbf {X}_n \notin K^{1/2}\}, \mathbf 1 \{\mathbf {X}_n \notin K^{1/4}\}, \ldots ) \end{aligned}$$is a sequence of \(D([0,T],E) \times \{0,1\}^{\mathbb {N}}\)-valued random variables which converges in law to \(\mathbf {Y}\,{:=}\, (\mathbf {X}, 0, 0, \ldots )\). The conclusion now follows from an application of the Skorokhod representation theorem [27, Theorem 3.30] and Lemma A.3.
\(\square \)
A.3 p-variation
The main result of this subsection is Proposition A.7, which shows that a p-approximating path function does not significantly increase the p-variation of a càdlàg path.
We first require the following lemma, whose proof was inspired by [17, Lemma 22].
Lemma A.5
Let (E, d) be a metric space and \(\mathbf {x}: [0,T] \rightarrow E\) a function (not necessarily càdlàg). Let \((I_n)_{n \ge 1}\) be a countable collection of disjoint open subintervals of [0, T] and set \(I \,{:=}\, \cup _{n \ge 1}I_n\). Define \(\mathbf {y}: [0,T] \rightarrow E\) by \(\mathbf {y}_t = \mathbf {x}_t\) for \(t \in [0,T]{\setminus } I\), and \(\mathbf {y}_t = \mathbf {x}_{c_n}\) for \(t \in (c_n,d_n) \,{:=}\, I_n\). Let \(p > 0\) and \(C = 1+2\cdot 2^{(p-1)\vee 0} + 3^{(p-1)\vee 0}\). Then
Proof
Define the super-additive functions \(\omega _\mathbf {x}(s,t) = \left| \left| \mathbf {x} \right| \right| _{p -var ;[s,t]}^p\) and \(\omega _\mathbf {y}(s,t) = \left| \left| \mathbf {y} \right| \right| _{p -var ;[s,t]}^p\). Let \(\mathcal {D}= (t_0, t_1, \ldots , t_k)\) be a partition of [0, T].
Denote by \(J_1 = (a_1,b_1), \ldots , J_m = (a_m, b_m)\) those intervals \(I_n\) which contain some partition point \(t_j \in \mathcal {D}\), ordered so that \(b_j < a_{j+1}\) for all \(j \in \{1,\ldots, m-1\}\). Call a consecutive run of partition points \(t_j, t_{j+1}, \ldots , t_n\) a block if the points lie either all in \(J_r\) for some \(r \in \{1,\ldots , m\}\), in which case we call the block red, or all outside I, in which case we call it blue. We call a consecutive pair of partition points \(t_i, t_{i+1} \in \mathcal {D}\) which lie in different blocks either red-red, red-blue, or blue-red according to their respective blocks (note there are no blue-blue pairs). For convenience of notation, set \(J_0 = (a_0,b_0) \,{:=}\, (-\infty , 0)\) and \(J_{m+1} = (a_{m+1},b_{m+1}) \,{:=}\, (T, \infty )\).
For \(r \in \{1,\ldots , m\}\) and a red block \(t_j, t_{j+1}, \ldots , t_n\) in \(J_r\) we have
For \(r \in \{0,\ldots , m\}\) and a blue block \(t_j, t_{j+1}, \ldots , t_n\) between \(J_r, J_{r+1}\) we have
For \(r \in \{1,\ldots , m\}\) and a red-blue pair \(t_i, t_{i+1}\) with \(t_i \in J_r\) and \(t_{i+1}\) between \(J_r,J_{r+1}\), we have
For \(r \in \{0,\ldots , m-1\}\) and a blue-red pair \(t_i, t_{i+1}\) with \(t_i\) between \(J_{r},J_{r+1}\) and \(t_{i+1} \in J_{r+1}\), we have
Finally for \(r \in \{1,\ldots , m-1\}\) a red-red pair \(t_i, t_{i+1}\) with \(t_i \in J_r\) and \(t_{i+1} \in J_{r+1}\), we have
Since \(\omega _\mathbf {x}\) and \(\omega _\mathbf {y}\) are super-additive, the conclusion now follows from splitting the sum \(\sum _{i=1}^k d(\mathbf {x}_{t_{i-1}}, \mathbf {x}_{t_i})^p\) into blocks and consecutive pairs in different blocks. \(\square \)
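The flattening construction of Lemma A.5 can be checked numerically. The following sketch (our illustration, not from the paper) computes the discrete p-variation of a finite sequence of points by dynamic programming, flattens a sample path over two disjoint open intervals as in the lemma (replacing values there by the value at the left endpoint), and verifies the bound with the constant \(C = 1+2\cdot 2^{(p-1)\vee 0} + 3^{(p-1)\vee 0}\). The function name `p_var` and the sample data are ours.

```python
import math
import random

def p_var(xs, p):
    """Discrete p-variation: sup over partitions of sum |x_j - x_i|^p,
    computed by dynamic programming V[j] = max_{i<j} (V[i] + |x_j - x_i|^p)."""
    V = [0.0] * len(xs)
    for j in range(1, len(xs)):
        V[j] = max(V[i] + abs(xs[j] - xs[i]) ** p for i in range(j))
    return V[-1]

random.seed(0)
p = 2.0
C = 1 + 2 * 2 ** max(p - 1, 0) + 3 ** max(p - 1, 0)

# a path x sampled on a grid of 21 points
xs = [random.gauss(0.0, 1.0) for _ in range(21)]

# flatten x over two disjoint open intervals (c_n, d_n), given by grid
# indices, replacing x_t there by the left-endpoint value (Lemma A.5's y)
ys = list(xs)
for (c, d) in [(4, 8), (12, 18)]:
    for k in range(c + 1, d):
        ys[k] = xs[c]

assert p_var(ys, p) <= C * p_var(xs, p)
```

The dynamic program is exact over all partitions drawn from the grid points, for any \(p > 0\); the assertion is the discrete analogue of the lemma's conclusion.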
Corollary A.6
Let \(p \ge 1\), \(J \subseteq E\times E\), \(\phi : J \rightarrow C([0,T], E)\) a path function, and \(\mathbf {z}\in J^0\). Suppose that there exists \(C > 0\) such that \(\left| \left| \phi (\mathbf {z}_{t-},\mathbf {z}_t) \right| \right| _{p -var ;[0,1]} \le Cd(\mathbf {z}_{t-},\mathbf {z}_t)\) for every jump time t of \(\mathbf {z}\). Then
The following result now follows immediately from Corollary A.6 and from the definition of a p-approximating path function (Definition A.1).
Proposition A.7
Let \(p \ge 1\), \(J \subseteq E\times E\), and \(\phi : J \rightarrow C([0,T], E)\) a p-approximating path function. There exists a continuous function \(\psi : [0,\infty ) \rightarrow [0,\infty )\) such that for some \(R,\varepsilon > 0\), \(\psi (x) = Rx\) for all \(x \in [0,\varepsilon )\), and such that \(\left| \left| \mathbf {x}^\phi \right| \right| _{p -var ;[0,T]} \le \psi (\left| \left| \mathbf {x} \right| \right| _{p -var ;[0,T]})\) for all \(\mathbf {x}\in J^0\).
Appendix B: Infinite p-variation of Lévy processes in Lie groups
The purpose of this section is to establish conditions under which sample paths of a Lévy process have infinite p-variation. The methods are all well-known for the case \(G = \mathbb {R}^d\), so we mostly provide indications of how they extend to a general Lie group. Throughout this section, we use the notation from Section 2.
Let \(\mathbf {X}\) be a Lévy process in a Lie group G with triplet \((A, B, \Pi )\) and let be the first exit time of \(\mathbf {X}\) from U. Let \(y_0 \in \mathfrak g{\setminus } \log (U)\) be a distinguished point and consider the \(\mathfrak g\)-valued process
For \(\varepsilon > 0\), define the \(\mathfrak g\)-valued process \(\mathbf {Y}^\varepsilon _t \,{:=}\, \varepsilon ^{-1/2}(\mathbf {Y}_{\varepsilon t})\) for \(t \in [0,1]\). Let \(\mathbf {B}\) be a \(\mathfrak g\)-valued centred Brownian motion starting from zero with covariance matrix \((A^{i,j})_{i,j=1}^m\) with respect to the basis \(u_1,\ldots , u_m\).
Lemma B.1
It holds that \(\mathbf {Y}^\varepsilon \xrightarrow [\varepsilon \rightarrow 0]{\mathcal {D}} \mathbf {B}\) as \(D_o([0,1], \mathfrak g)\)-valued random variables.
Proof
Note that for every \(\varepsilon > 0\), \(\mathbf {Y}^\varepsilon \) can be considered as a \(\mathfrak g\)-valued Markov process (for which every point outside \(\varepsilon ^{-1/2}\log (U)\) is absorbing). Writing \(L^\varepsilon \) and \(L_\mathbf {B}\) for the generators of \(\mathbf {Y}^\varepsilon \) and \(\mathbf {B}\) respectively, it suffices to show that \(L^\varepsilon f \rightarrow L_\mathbf {B}f\) in \(C_0(\mathfrak g)\) for all \(f \in C^\infty _c(\mathfrak g)\) (see, e.g., [27, Chapter 17]). This in turn follows from writing the generator of \(\mathbf {X}\) in the \(\log \) chart and performing a straightforward limiting argument. \(\square \)
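As a sanity check on this scaling limit, here is a scalar sketch (our illustration, not from the paper: we take \(G = \mathbb {R}\), so \(\mathbf {Y}\) is just \(\mathbf {X}\) itself, with \(a = A^{1,1}\) and a compound-Poisson jump part). Note the convergence is in law, not in \(L^2\): the jumps contribute to the variance at every scale but vanish in probability, so we test a robust statistic, the empirical probability of the one-standard-deviation band of N(0, a). All names and parameters below are ours.

```python
import math
import random

random.seed(1)

a = 0.5                # diffusion coefficient A^{1,1} (scalar case)
lam, jump = 3.0, 1.0   # compound-Poisson rate and (fixed) jump size

def poisson(mean):
    """Sample Poisson(mean) by inversion of the CDF."""
    u, k = random.random(), 0
    term = cum = math.exp(-mean)
    while u > cum:
        k += 1
        term *= mean / k
        cum += term
    return k

def X(t):
    """One sample of the scalar Levy process at time t:
    Brownian part N(0, a*t) plus compound-Poisson jumps."""
    return random.gauss(0.0, math.sqrt(a * t)) + jump * poisson(lam * t)

eps = 1e-4
samples = [X(eps) / math.sqrt(eps) for _ in range(20000)]

# Lemma B.1 predicts eps^{-1/2} X_eps -> N(0, a) in law as eps -> 0;
# the probability of the band |x| <= sqrt(a) should be close to 0.6827
frac = sum(1 for x in samples if abs(x) <= math.sqrt(a)) / len(samples)
assert abs(frac - 0.6827) < 0.02
```

At this scale a jump occurs in roughly \(\lambda \varepsilon = 3\times 10^{-4}\) of the samples, which is why the band probability matches the Gaussian one even though the sample variance does not.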
Proposition B.2
Suppose \(A^{i,i} > 0\) for some \(i \in \{1,\ldots , m\}\). Then
Proof
Let \(c > 0\). Lemma B.1 implies that there exist \(\delta , \varepsilon _0 > 0\) such that for all \(0 < \varepsilon \le \varepsilon _0\)
Observe that, by the CBH formula, there exist a neighbourhood V of \(1_G\) and a constant \(C > 0\) such that for all \(x,y \in V\)
For \(k \ge 1\), define \(\varepsilon _k \,{:=}\, 3^{-k}\varepsilon _0\). Then by Lemma B.1 and the first Borel–Cantelli lemma there exists a strictly increasing sequence \((k(n))_{n \ge 1}\) such that
On the other hand, since \(\mathbf {X}_{\varepsilon _k,\varepsilon _{k-1}} \,{\buildrel \mathcal {D}\over =}\,\mathbf {X}_{\varepsilon _{k-1} - \varepsilon _k}\) and the random variables \((\mathbf {X}_{\varepsilon _k, \varepsilon _{k-1}})_{k \ge 1}\) are independent, the second Borel–Cantelli lemma yields
Since \(\lim _{t \rightarrow 0}\mathbf {X}_t = 1_G\) a.s., we now readily deduce from (B.1), (B.2), and the definition of \(\varepsilon _k\) that
As \(c>0\) was arbitrary, the conclusion follows. \(\square \)
Corollary B.3
Suppose \(A^{i,i} > 0\) for some \(i \in \{1,\ldots , m\}\). Then
Proof
By Proposition B.2, \(\limsup _{t \rightarrow 0} t^{-1}|\xi _i(\mathbf {X}_t)|^2 = \infty \) a.s. Since \(\mathbf {X}\) has stationary and independent increments, the conclusion follows from a Vitali covering argument (see [6, Proposition p. 68] or [18, Theorem 13.69]). \(\square \)
The following is a form of the classical Blumenthal–Getoor index [4] adapted to the setting of Lie groups. Recall the definitions of \(\Gamma _i\) and K from Section 2.3.
Proposition B.4
(Blumenthal–Getoor index). Let \(i \in \{1,\ldots , m\}\) and \(q > 0\). Then
if either
-
(i)
\(q \in \Gamma _i\), or
-
(ii)
\(i \in K\) and \(q < 1\).
Proof
Define \(f \in C_c(G)\) by \(f(x) = 1-\exp (-|\xi _i(x)|^q)\). Since \(\mathbf {X}\) has independent and stationary increments, we can readily show (cf. [4, p. 499]) that (B.3) holds whenever
It thus suffices to show that (B.4) holds in both cases of (i) and (ii):
-
(i)
Let \((\psi _n)_{n \ge 1}\) be a non-decreasing sequence of non-negative functions in \(C^\infty (\mathbb {R})\), each vanishing on some neighbourhood of zero, and such that \(\lim _{n \rightarrow \infty }\psi _n (x) = |x|^{q}\) for all \(x \in \mathbb {R}\). Then for \(f_n(x) \,{:=}\, 1-\exp (-\psi _n(\xi _i(x)))\), we have
$$\begin{aligned} \lim _{t \rightarrow 0} t^{-1}\mathbb E \left[ f_n(\mathbf {X}_t) \right] = \int _G f_n(x) \Pi (dx) \ge c\int _G \psi _n(\xi _i(x)) \Pi (dx) \xrightarrow [n \rightarrow \infty ]{} \infty , \end{aligned}$$where the final convergence follows from \(q \in \Gamma _i\). Since \(0 \le f_n \le f\), we obtain (B.4).
-
(ii)
Since \(q < 1\), for every integer \(n \ge 1\) we can find \(\psi _n \in C^\infty (\mathbb {R})\) such that \(|\psi _n(x)| \le |x|^q\) for all \(x \in \mathbb {R}\) and such that \(\psi _n(x) = nx/\widetilde{B}^i\) for all x in a neighbourhood \(V_n\) of zero. Note that we may suppose \(A^{i,i} = 0\) and \(q \notin \Gamma _i\) (as otherwise the desired result follows by Corollary B.3 or by case (i)). Then for \(f_n(x) \,{:=}\, 1-\exp (-\psi _n(\xi _i(x)))\), a straightforward calculation shows that
$$\begin{aligned} \lim _{t \rightarrow 0}t^{-1}\mathbb E \left[ f_n(\mathbf {X}_t) \right] = n \left( 1 + n^{-1}\int _G f_n(x)\Pi (dx)\right) \xrightarrow [n \rightarrow \infty ]{} \infty , \end{aligned}$$where the final convergence follows from \(q \notin \Gamma _i\) and \(|f_n(x)| \le C|\psi _n(\xi _i(x))|\). Since again \(f_n \le f\), we obtain (B.4).
\(\square \)
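The dichotomy behind Proposition B.4 can be made concrete in the model case of a symmetric \(\alpha \)-stable process (our illustration, with \(G = \mathbb {R}\) and Lévy measure \(\Pi (dx) = |x|^{-1-\alpha }\,dx\), for which \(\Gamma \) is roughly the set \(q \le \alpha \)): the truncated moment \(\int _{\varepsilon \le |x| \le 1}|x|^q\,\Pi (dx)\) has a closed form, and stays bounded as \(\varepsilon \rightarrow 0\) exactly when \(q > \alpha \). The function name below is ours.

```python
import math

def truncated_moment(q, alpha, eps):
    """Closed form of the integral of |x|^q over eps <= |x| <= 1 against
    the symmetric alpha-stable Levy measure Pi(dx) = |x|^{-1-alpha} dx."""
    if q == alpha:
        return 2.0 * math.log(1.0 / eps)
    return 2.0 * (1.0 - eps ** (q - alpha)) / (q - alpha)

alpha = 1.5

# q = 2 > alpha: the moment converges as eps -> 0 (q not in Gamma),
# consistent with alpha-stable paths having finite p-variation for p > alpha
fin = [truncated_moment(2.0, alpha, 10.0 ** -k) for k in (4, 8, 12)]
assert fin[2] - fin[1] < 1e-3

# q = 1 < alpha: the moment diverges like eps^{-(alpha - q)} (q in Gamma)
div = [truncated_moment(1.0, alpha, 10.0 ** -k) for k in (4, 8, 12)]
assert div[2] > 50 * div[1]
```

This matches the classical Blumenthal–Getoor picture in \(\mathbb {R}^d\) [4], where the index \(\alpha \) is exactly the critical exponent for finite p-variation.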
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Chevyrev, I. Random walks and Lévy processes as rough paths. Probab. Theory Relat. Fields 170, 891–932 (2018). https://doi.org/10.1007/s00440-017-0781-1
Keywords
- Homogeneous groups
- Rough paths
- Lévy processes
- Random walks
- Tightness of p-variation
- Stochastic flows
- Characteristic functions of signatures