Abstract
For \(\alpha > 0\), the \(\alpha \)-Lipschitz minorant of a function\(f\!:\!\mathbb{R }\rightarrow \mathbb{R }\) is the greatest function \(m : \mathbb{R }\rightarrow \mathbb{R }\) such that \(m \le f\) and \(|m(s)-m(t)| \le \alpha |s-t|\) for all \(s,t \in \mathbb{R }\), should such a function exist. If \(X=(X_t)_{t \in \mathbb{R }}\) is a real-valued Lévy process that is not pure linear drift with slope \(\pm \alpha \), then the sample paths of \(X\) have an \(\alpha \)-Lipschitz minorant almost surely if and only if \(| \mathbb{E }[X_1] | < \alpha \). Denoting the minorant by \(M\), we investigate properties of the random closed set \(\fancyscript{Z} := \{ t \in \mathbb{R }: M_t = X_t \wedge X_{t-} \}\), which, since it is regenerative and stationary, has the distribution of the closed range of some subordinator “made stationary” in a suitable sense. We give conditions for the contact set \(\fancyscript{Z}\) to be countable or to have zero Lebesgue measure, and we obtain formulas that characterize the Lévy measure of the associated subordinator. We study the limit of \(\fancyscript{Z}\) as \(\alpha \rightarrow \infty \) and find for the so-called abrupt Lévy processes introduced by Vigon that this limit is the set of local infima of \(X\). When \(X\) is a Brownian motion with drift \(\beta \) such that \(|\beta | < \alpha \), we calculate explicitly the densities of various random variables related to the minorant.
1 Introduction
A function \(g: \mathbb{R }\rightarrow \mathbb{R }\) is \(\alpha \)-Lipschitz for some \(\alpha > 0\) if \(|g(s) - g(t)| \le \alpha |s - t|\) for all \(s,t \in \mathbb{R }\). If \(\varGamma \) is a set of \(\alpha \)-Lipschitz functions from \(\mathbb{R }\) to \(\mathbb{R }\) such that \(\sup \{g(t_0) : g \in \varGamma \} < \infty \) for some \(t_0 \in \mathbb{R }\), then the function \(g^*: \mathbb{R }\rightarrow \mathbb{R }\) defined by \(g^*(t) = \sup \{g(t) : g \in \varGamma \},\,t \in \mathbb{R }\), is \(\alpha \)-Lipschitz. Also, if \(f: \mathbb{R }\rightarrow \mathbb{R }\) is an arbitrary function, then the set of \(\alpha \)-Lipschitz functions dominated by \(f\) is non-empty if and only if \(f\) is bounded below on compact intervals and satisfies \(\liminf _{t \rightarrow -\infty } f(t) - \alpha t > - \infty \) and \(\liminf _{t \rightarrow +\infty } f(t) + \alpha t > - \infty \). Therefore, in this case there is a unique greatest \(\alpha \)-Lipschitz function dominated by \(f\), and we call this function the \(\alpha \) -Lipschitz minorant of \(f\). See Fig. 1 for an example of the minorant of a Brownian sample path.
Denoting the \(\alpha \)-Lipschitz minorant of \(f\) by \(m\), an explicit formula for \(m\) is
$$\begin{aligned} m(t) = \sup \{ h \in \mathbb{R }: h - \alpha |t-s| \le f(s) \text{ for all } s \in \mathbb{R }\} = \inf \{ f(s) + \alpha |t-s| : s \in \mathbb{R }\}. \end{aligned}$$(1.1)
For the sake of completeness, we present a proof of these equalities in Lemma 9.1. The first equality says that for each \(t \in \mathbb{R }\) we construct \(m(t)\) by considering the set of “tent” functions \(s \mapsto h - \alpha |t-s|\) that have a peak of height \(h\) at the position \(t\) and are dominated by \(f\), and then taking the supremum of those peak heights—see Fig. 2 in Sect. 9. The second equality is simply a rephrasing of the first.
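As a concrete illustration, the second expression in (1.1) can be evaluated directly on a grid. The following sketch (our own; the function and variable names are not from the paper) computes a discretized minorant of a sampled function and checks the defining properties on the sampling window; on all of \(\mathbb{R }\) one would additionally need the growth conditions from the Introduction, but on a compact window the discrete infimum always exists.

```python
import math

def lipschitz_minorant(f_vals, dt, alpha):
    """Discrete analogue of m(t) = inf_s f(s) + alpha*|t - s| on a uniform
    grid with spacing dt.  O(n^2) for clarity; a forward pass
    a_i = min(f_i, a_{i-1} + alpha*dt) followed by the symmetric backward
    pass computes the same thing in O(n)."""
    n = len(f_vals)
    return [min(f_vals[j] + alpha * dt * abs(i - j) for j in range(n))
            for i in range(n)]

# Sample f(t) = cos(t) on [0, 2*pi].
n = 201
dt = 2 * math.pi / (n - 1)
f = [math.cos(i * dt) for i in range(n)]
m = lipschitz_minorant(f, dt, alpha=0.5)

# The minorant is dominated by f ...
assert all(mi <= fi + 1e-12 for mi, fi in zip(m, f))
# ... and is alpha-Lipschitz between consecutive grid points.
assert all(abs(m[i + 1] - m[i]) <= 0.5 * dt + 1e-12 for i in range(n - 1))
# A larger alpha gives a larger minorant (it increases to f as alpha grows).
m2 = lipschitz_minorant(f, dt, alpha=4.0)
assert all(a <= b + 1e-12 and b <= fi + 1e-12 for a, b, fi in zip(m, m2, f))
```

The quadratic loop mirrors the "tent" picture: each grid point contributes the tent \(s \mapsto f(s) + \alpha |t-s|\), and the minorant is their pointwise infimum.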
The property that the pointwise supremum of a suitable family of \(\alpha \)-Lipschitz functions is also \(\alpha \)-Lipschitz is reminiscent of the fact that the pointwise supremum of a suitable family of convex functions is also convex, and so the notion of the \(\alpha \)-Lipschitz minorant of a function is analogous to that of the convex minorant. Indeed, there is a well-developed theory of abstract or generalized convexity that subsumes both of these concepts and is used widely in nonlinear optimization, particularly in the theory of optimal mass transport—see [2, 12, 14, 26], Sect. 3.3 of [41] and Chapter 5 of [50]. Lipschitz minorants have also been studied in convex analysis for their Lipschitz regularization and Lipschitz extension properties, and in this area are known as Pasch-Hausdorff envelopes [22, 23, 27, 42].
Furthermore, the second expression in (1.1) can be thought of as producing a function analogous to the smoothing of the function \(f\) by an integral kernel (that is, a function of the form \(t \mapsto \int _\mathbb{R }K(|t-s|) f(s) \, ds\) for some suitable kernel \(K: \mathbb{R }\rightarrow \mathbb{R }\)) where one has taken the “min-plus” or “tropical” point of view and replaced the algebraic operations of + and \(\times \) by, respectively, \(\wedge \) and +, so that integrals are replaced by infima. Note that if \(f\) is a continuous function that possesses an \(\alpha _0\)-Lipschitz minorant for some \(\alpha _0\) (and hence an \(\alpha \)-Lipschitz minorant for all \(\alpha \ge \alpha _0\)), then the \(\alpha \)-Lipschitz minorants converge pointwise monotonically up to \(f\) as \(\alpha \rightarrow +\infty \). Standard methods in optimization theory involve approximating a general function by a Lipschitz function and then determining approximate optima of the original function by finding optima of its Lipschitz approximant [20, 21, 24, 36].
We investigate here the stochastic process \((M_t)_{t \in \mathbb{R }}\) obtained by taking the \(\alpha \)-Lipschitz minorant of the sample path of a real-valued Lévy process \(X=(X_t)_{t \in \mathbb{R }}\) for which the \(\alpha \)-Lipschitz minorant almost surely exists, a condition that turns out to be equivalent to \(| \mathbb{E }[X_1] | \!<\! \alpha \) when \(X_0 = 0\) (excluding the trivial case where \(X_t = \pm \alpha t\) for \(t \in \mathbb{R }\))—see Proposition 2.1. Our original motivation for this undertaking was the abovementioned analogy between \(\alpha \)-Lipschitz minorants and convex minorants and the rich (and growing) literature on convex minorants of Brownian motion and Lévy processes in general [1, 3, 7, 9, 11, 19, 38, 39, 46].
In particular, we study properties of the contact set \({\fancyscript{Z}}:=\{ t \in \mathbb{R }: M_t = X_t \wedge X_{t-} \}\). This random set is stationary and regenerative as shown in Theorem 2.6. Consequently, its distribution is that of the closed range of a subordinator “made stationary” in a suitable manner. For a broad class of Lévy processes we are able to identify the associated subordinator in the sense that we can determine its Laplace exponent—see Theorem 4.10.
We then consider the Lebesgue measure of the random set \(\fancyscript{Z}\) in Theorem 4.1 and Remark 4.2. If the paths of the Lévy process have either unbounded variation or bounded variation with drift \(d\) satisfying \(|d |> \alpha \), then the associated subordinator has zero drift, and hence the random set \(\fancyscript{Z}\) has zero Lebesgue measure almost surely. Conversely, if the paths of the Lévy process have bounded variation and drift \(d\) satisfying \(|d| < \alpha \), then the associated subordinator has positive drift, and hence the random set \(\fancyscript{Z}\) has infinite Lebesgue measure almost surely. In Theorem 4.7 we give conditions under which the Lévy measure of the subordinator associated to the set \(\fancyscript{Z}\) has finite total mass, which implies that \(\fancyscript{Z}\) is a discrete set in the case where it has zero Lebesgue measure. This relies on a result of independent interest concerning the local behavior of a Lévy process at its local extrema—see Theorem 3.1.
If for the moment we write \(\fancyscript{Z}_\alpha \) instead of \(\fancyscript{Z}\) to stress the dependence on \(\alpha \), then it is clear that \(\fancyscript{Z}_{\alpha ^{\prime }} \subseteq \fancyscript{Z}_{\alpha ^{\prime \prime }}\) for \(\alpha ^{\prime } \le \alpha ^{\prime \prime }\). We find in Theorem 5.5 that if the Lévy process is abrupt, that is, its paths have unbounded variation and “sharp” local extrema in a suitable sense (see Definition 5.1 for a precise definition), then the set \(\bigcup _\alpha \fancyscript{Z}_\alpha \) is almost surely the set of local infima of the Lévy process.
Lastly, when the Lévy process is a Brownian motion with drift, we can compute explicitly the distributions of a number of functionals of the \(\alpha \)-Lipschitz minorant process. In order to describe these results, we first note that it follows from Lemma 9.3 below that the graph of the \(\alpha \)-Lipschitz minorant \(M\) over one of the connected components of the complement of \(\fancyscript{Z}\) is almost surely a “sawtooth” that consists of a line of slope \(+\alpha \) followed by a line of slope \(-\alpha \). Set \(G := \sup \{ t < 0 : t \in \fancyscript{Z} \}\) and \(D := \inf \{ t > 0 : t \in \fancyscript{Z} \}\), and put \(K := D-G\). Let \(T\) be the unique \(t \in [G,D]\) such that \(M(t) = \max \{M(s) : s \in [G,D]\}\). That is, \(T\) is the place where the peak of the sawtooth occurs. Further, let \(H := X_T - M_T\) be the distance between the Brownian path and the \(\alpha \)-Lipschitz minorant at the time where the peak occurs.
The following theorem summarizes a series of results that we establish in Sect. 8.
Theorem 1.1
Suppose that \(X\) is a Brownian motion with drift \(\beta \), where \(|\beta |<\alpha \). Then, the following hold.
-
(a)
The Lévy measure \(\varLambda \) of the subordinator associated to the contact set \(\fancyscript{Z}\) has finite mass and is characterized up to a scalar multiple by
$$\begin{aligned} \frac{\displaystyle \int \nolimits _{\mathbb{R }_+} 1\! -\! e^{-\theta x} \, \varLambda (dx)}{\displaystyle \int \nolimits _{\mathbb{R }_+} x \, \varLambda (dx)} \!=\! \frac{4 (\alpha ^2 - \beta ^2) \theta }{ \left( \sqrt{2 \theta \!+\! (\alpha - \beta )^2} + \alpha - \beta \right) \left( \sqrt{2 \theta \!+\! (\alpha + \beta )^2} + \alpha + \beta \right) } \end{aligned}$$ -
(b)
When \(\beta = 0\) the measure \(\varLambda \) is absolutely continuous with respect to Lebesgue measure with
$$\begin{aligned} \frac{\varLambda (d x)}{\varLambda (\mathbb{R }_+)} = \left[ \frac{2 \alpha }{ \sqrt{2 \pi } } x^{-\frac{1}{2}} e^{- \frac{\alpha ^2 x}{2}} - 2 \alpha ^2 \varPhi (- \alpha x^{\frac{1}{2}}) \right] \, dx, \end{aligned}$$where \(\varPhi \) is the standard normal cumulative distribution function (that is, \(\varPhi (z) := \int _{-\infty }^{z} \frac{1}{\sqrt{2 \pi }} e^{-\frac{t^2}{2}} \, dt\)).
-
(c)
The distribution of \(T\) is characterized by
$$\begin{aligned}&\mathbb{E }[e^{-\theta T}] = 8 \alpha (\alpha ^2-\beta ^2)\\&\quad \frac{1}{\theta } \left( \frac{1}{ \sqrt{(\alpha +\beta )^2 - 2 \theta } + 3 \alpha - \beta } - \frac{1}{ \sqrt{(\alpha -\beta )^2 + 2 \theta } + 3 \alpha + \beta } \right) \end{aligned}$$for \(- \frac{(\alpha -\beta )^2}{2} \le \theta \le \frac{(\alpha +\beta )^2}{2} \). Also,
$$\begin{aligned} \mathbb{P }\{T>0\} = \frac{1}{2} \left( 1 + \frac{\beta }{\alpha } \right) \!. \end{aligned}$$ -
(d)
The random variable \(H\) has a \({\mathrm{Gamma}}(2, 4 \alpha )\) distribution; that is, the distribution of \(H\) is absolutely continuous with respect to Lebesgue measure with density \(h \mapsto (4 \alpha )^2 h e^{-4 \alpha h},\,h \ge 0\).
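The formulas in Theorem 1.1 lend themselves to quick numerical sanity checks. The sketch below (ours, not from the paper) verifies that the density \(\frac{2\alpha }{\sqrt{2\pi }} x^{-1/2} e^{-\alpha ^2 x/2} - 2\alpha ^2 \varPhi (-\alpha x^{1/2})\) appearing in part (b) is a probability density whose mean is \(1/(2\alpha ^2)\) (consistent with part (a)), that the Laplace transform in part (c) tends to \(1\) as \(\theta \rightarrow 0\) and is even in \(\theta \) when \(\beta = 0\) (as it must be, since \(T\) is then symmetric), and that the \(\mathrm{Gamma}(2, 4\alpha )\) law in part (d) has mean \(1/(2\alpha )\):

```python
import math

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

alpha, beta = 1.3, 0.4

# --- part (b), with beta = 0: the normalized Levy measure density.
a = alpha
def g(x):
    return (2 * a / math.sqrt(2 * math.pi)) * x ** -0.5 * math.exp(-a * a * x / 2) \
        - 2 * a * a * Phi(-a * math.sqrt(x))

# Integrate with the substitution x = u^2, which removes the
# x^{-1/2} singularity at the origin.
U, nsteps = 8.0 / a, 100000
du = U / nsteps
total = mean = 0.0
for k in range(1, nsteps):
    u = k * du
    w = g(u * u) * 2 * u * du
    total += w
    mean += u * u * w
assert abs(total - 1.0) < 1e-3               # a probability density
assert abs(mean - 1.0 / (2 * a * a)) < 1e-3  # mean 1/(2 alpha^2)

# --- part (c): the Laplace transform of T.
def laplace_T(theta, al, be):
    return 8 * al * (al ** 2 - be ** 2) / theta * (
        1.0 / (math.sqrt((al + be) ** 2 - 2 * theta) + 3 * al - be)
        - 1.0 / (math.sqrt((al - be) ** 2 + 2 * theta) + 3 * al + be))

assert abs(laplace_T(1e-7, alpha, beta) - 1.0) < 1e-5          # -> 1 at 0
assert abs(laplace_T(0.3, alpha, 0.0) - laplace_T(-0.3, alpha, 0.0)) < 1e-12

# --- part (d): Gamma(2, 4*alpha) has mean 2/(4*alpha) = 1/(2*alpha).
dh, H_mean = 1e-4, 0.0
for k in range(1, 200000):
    h = k * dh
    H_mean += h * (4 * alpha) ** 2 * h * math.exp(-4 * alpha * h) * dh
assert abs(H_mean - 1.0 / (2 * alpha)) < 1e-3
```

The parameter values \(\alpha = 1.3,\,\beta = 0.4\) are arbitrary choices satisfying \(|\beta | < \alpha \); any such pair passes the same checks.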
The rest of this article is organized as follows. In Sect. 2 we provide precise definitions and give some preliminary results relating to the nature of the contact set. In Sect. 3 we introduce some results relating to the local behavior of a Lévy process at its local extrema. In Sect. 4 we describe the subordinator associated with the contact set, and in Sect. 5 we describe the limit of the contact set as \(\alpha \rightarrow \infty \). In order to prove Theorem 4.10 we need some preliminary results relating to the future infimum of a Lévy process, which we give in Sect. 6, and then we prove Theorem 4.10 in Sect. 7. In Sect. 8 we cover the special case when \(X\) is a two-sided Brownian motion with drift in detail. Finally, in Sect. 9 we give some basic facts about the \(\alpha \)-Lipschitz minorant of a function that are helpful throughout the paper.
2 Definitions and preliminary results
2.1 Basic definitions
Let \(X = (X_t)_{t \in \mathbb{R }}\) be a real-valued Lévy process. That is, \(X\) has càdlàg sample paths, \(X_0=0\), and \(X_t-X_s\) is independent of \(\{X_r : r \le s\}\) with the same distribution as \(X_{t-s}\) for \(s,t \in \mathbb{R }\) with \(s<t\).
The Lévy-Khintchine formula says that for \(t \in \mathbb R \) the characteristic function of \(X_t\) is given by \(\mathbb{E }[e^{i \theta X_t}] = e^{|t| \varPsi (\text{ sgn }(t) \theta ) }\) for \(\theta \in \mathbb{R }\), where
$$\begin{aligned} \varPsi (\theta ) = i a \theta - \frac{\sigma ^2 \theta ^2}{2} + \int _\mathbb{R }\left( e^{i \theta x} - 1 - i \theta x \mathbb{1 }_{\{ |x| \le 1 \}} \right) \, \varPi (dx) \end{aligned}$$
with \(a \in \mathbb{R },\,\sigma \in \mathbb{R }_+\), and \(\varPi \) a \(\sigma \)-finite measure concentrated on \(\mathbb{R }{\setminus } \{0\}\) satisfying \(\int _\mathbb{R }(1 \wedge x^2) \, \varPi (dx) < \infty \). We call \(\sigma ^2\) the infinitesimal variance of the Brownian component of \(X\) and \(\varPi \) the Lévy measure of \(X\).
The sample paths of \(X\) have bounded variation almost surely if and only if \(\sigma = 0\) and \(\int _\mathbb{R }(1 \wedge |x| ) \, \varPi (dx) < \infty \). In this case \(\varPsi \) can be rewritten as
$$\begin{aligned} \varPsi (\theta ) = i d \theta + \int _\mathbb{R }\left( e^{i \theta x} - 1 \right) \, \varPi (dx), \quad \text{where } d := a - \int _{\{|x| \le 1\}} x \, \varPi (dx). \end{aligned}$$
We call \(d \in \mathbb{R }\) the drift coefficient. For full details of these definitions see [4].
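For instance (our illustration, not from the paper), if \(X\) is a compound Poisson process with jump rate \(\lambda \) and jump distribution \(\rho \) plus a deterministic drift \(d\), then \(\varPi = \lambda \rho \) and
$$\begin{aligned} \varPsi (\theta ) = i d \theta + \lambda \int _\mathbb{R }\left( e^{i \theta x} - 1 \right) \, \rho (dx), \end{aligned}$$
so that the drift coefficient \(d\) is simply the slope of the paths between jumps.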
We will often need the result of Shtatland [45] that if \(X\) has paths of bounded variation with drift \(d\), then
$$\begin{aligned} \lim _{t \downarrow 0} t^{-1} X_t = d \quad \text{almost surely.} \end{aligned}$$(2.1)
The counterpart of Shtatland’s result when \(X\) has paths of unbounded variation is Rogozin’s result
$$\begin{aligned} \limsup _{t \downarrow 0} t^{-1} X_t = + \infty \quad \text{and} \quad \liminf _{t \downarrow 0} t^{-1} X_t = - \infty \quad \text{almost surely.} \end{aligned}$$(2.2)
For the sake of reference, we also record here a regularity criterion due to Rogozin [44] (see also [4, Proposition VI.11]) that we use frequently: zero is regular for \((-\infty ,0]\) if and only if
$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{ X_t \le 0 \} \, dt = \infty . \end{aligned}$$(2.3)
Of course, (2.3) has an obvious analogue that determines when zero is regular for \([0,\infty )\). Note that, except for the case \(\varPi (\mathbb{R })<\infty \) and \(d=0\), zero is regular for \((-\infty ,0]\) (resp. \([0,\infty )\)) if and only if zero is regular for \((-\infty ,0)\) (resp. \((0,\infty )\)) — see the remarks preceding [4, Proposition VI.11].
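As a simple illustration (ours, not from the paper), for standard Brownian motion \(\mathbb{P }\{X_t \le 0\} = \frac{1}{2}\) for every \(t > 0\), so
$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{ X_t \le 0 \} \, dt = \int _0^1 \frac{dt}{2t} = \infty , \end{aligned}$$
and zero is regular for \((-\infty ,0]\); by symmetry it is also regular for \([0,\infty )\), consistent with the unbounded variation behavior recorded above.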
2.2 Existence of a minorant
Proposition 2.1
Let \(X\) be a Lévy process. The \(\alpha \)-Lipschitz minorant of \(X\) exists almost surely if and only if either \(\sigma = 0,\,\varPi =0\) and \(|d| = \alpha \) (equivalently, \(X_t = \alpha t\) for all \(t \in \mathbb{R }\) or \(X_t = -\alpha t\) for all \(t \in \mathbb{R }\)), or \(\mathbb{E }[|X_1|] < \infty \) and \(| \mathbb{E }[X_1]| < \alpha \).
Proof
As we remarked in the Introduction, the \(\alpha \)-Lipschitz minorant of a function \(f: \mathbb{R }\rightarrow \mathbb{R }\) exists if and only if \(f\) is bounded below on compact intervals and satisfies \(\liminf _{t \rightarrow -\infty } f(t) - \alpha t > - \infty \) and \(\liminf _{t \rightarrow +\infty } f(t) + \alpha t > - \infty \).
Since the sample paths of a Lévy process are almost surely bounded on compact intervals, we need necessary and sufficient conditions for \(\liminf _{t \rightarrow -\infty } X_t - \alpha t > - \infty \) and \(\liminf _{t \rightarrow +\infty } X_t + \alpha t > - \infty \) to hold almost surely. This is equivalent to requiring that
$$\begin{aligned} \liminf _{t \rightarrow \infty } (X_t + \alpha t) > - \infty \quad \text{and} \quad \limsup _{t \rightarrow \infty } (X_t - \alpha t) < + \infty \quad \text{almost surely.} \end{aligned}$$(2.4)
It is obvious that the two conditions in (2.4) hold if \(\sigma = 0,\,\varPi =0\) and \(|d| = \alpha \). It is clear from the strong law of large numbers that they also hold if \(\mathbb{E }[|X_1|] < \infty \) and \(| \mathbb{E }[X_1]| < \alpha \).
Consider the converse. Writing \(x^+ := x \vee 0\) and \(x^- := - (x \wedge 0)\) for \(x \in \mathbb{R }\), the strong law of large numbers precludes any case where either \(\mathbb{E }[X_1^+] = +\infty \) and \(\mathbb{E }[X_1^-] < + \infty \) or \(\mathbb{E }[X_1^+] < +\infty \) and \(\mathbb{E }[X_1^-] = +\infty \). A result of Erickson [13, Chapter 4, Theorem 15] rules out the possibility \(\mathbb{E }[X_1^+] = \mathbb{E }[X_1^-] = +\infty \), and so \(\mathbb{E }[|X_1|] < \infty \). It now follows from the strong law of large numbers that \(\lim _{t \rightarrow \infty } t^{-1} X_t = \mathbb{E }[X_1]\) and so \(|\mathbb{E }[X_1]| \le \alpha \). Suppose that \(X_t\) is non-degenerate for \(t \ne 0\) (that is, that \(\sigma \ne 0\) or \(\varPi \ne 0\)). Then, \(\limsup _{t \rightarrow \infty } X_t - \mathbb{E }[X_1] t = +\infty \) almost surely and \(\liminf _{t \rightarrow \infty } X_t - \mathbb{E }[X_1] t = -\infty \) almost surely (see, for example, [25, Corollary 9.14]), and so \(|\mathbb{E }[X_1]| < \alpha \) in this case. \(\square \)
Hypothesis 2.2
From now on we assume, unless we note otherwise, that the Lévy process \(X = (X_t)_{t \in \mathbb{R }}\) has the properties:
-
\(X_0 = 0\);
-
\(X_t\) is non-degenerate for \(t \ne 0\);
-
\(\mathbb{E }[|X_1|]<\infty \);
-
\(| \mathbb{E }[X_1]| < \alpha \).
Notation 2.3
As in the Introduction, let \(M = (M_t)_{t \in \mathbb{R }}\) be the \(\alpha \)-Lipschitz minorant of \(X\). Put \(\fancyscript{Z} = \{ t \in \mathbb{R }: M_t = X_t \wedge X_{t-} \}\).
2.3 The contact set is stationary and regenerative
It follows fairly directly from our standing assumptions Hypothesis 2.2 that the random set \(\fancyscript{Z}\) is almost surely unbounded above and below. (Alternatively, it follows even more easily from Hypothesis 2.2 that \(\fancyscript{Z}\) is non-empty almost surely. We show below that \(\fancyscript{Z}\) is stationary, and any non-empty stationary random set is necessarily almost surely unbounded above and below.)
We now show that the contact set \(\fancyscript{Z}\) is stationary and that it is also regenerative in the sense of Fitzsimmons and Taksar [16]. For simplicity, we specialize the definition in [16] somewhat as follows by only considering random sets defined on probability spaces (rather than general \(\sigma \)-finite measure spaces).
Let \(\varOmega ^0\) denote the class of closed subsets of \(\mathbb{R }\). For \(t \in \mathbb R \) and \(\omega ^0 \in \varOmega ^0\), define
$$\begin{aligned} d_t(\omega ^0) := \inf \{ s > t : s \in \omega ^0 \}, \qquad r_t(\omega ^0) := d_t(\omega ^0) - t, \end{aligned}$$
and
$$\begin{aligned} \tau _t(\omega ^0) := \mathbf{cl}\{ s - t : s \in \omega ^0, \, s \ge t \}. \end{aligned}$$
Here \(\mathbf{cl}\) denotes closure and we adopt the convention \(\inf \emptyset = + \infty \). Note that \(t \in \omega ^0\) if and only if \(\lim _{s \uparrow t} r_s(\omega ^0) = 0\), and so \(\omega ^0 \cap (-\infty ,t]\) can be reconstructed from \(r_s(\omega ^0),\,s \le t\), for any \(t \in \mathbb R \). Set \(\fancyscript{G}^0 = \sigma \{ r_s : s \in \mathbb{R }\}\) and \(\fancyscript{G}_t^0 = \sigma \{ r_s:s \le t \}\). Clearly, \((d_t)_{t \in \mathbb{R }}\) is an increasing càdlàg process adapted to the filtration \((\fancyscript{G}_t^0)_{t \in \mathbb{R }}\), and \(d_t \ge t\) for all \(t \in \mathbb{R }\).
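As a toy illustration of these maps (our own code and naming; here \(\omega ^0\) is a finite set standing in for a general closed set, so the closure operation is trivial):

```python
import math

def d(t, omega):
    """d_t(omega) = inf{s > t : s in omega}, with inf(empty set) = +infinity."""
    later = [s for s in omega if s > t]
    return min(later) if later else math.inf

def r(t, omega):
    """r_t(omega) = d_t(omega) - t, the waiting time until the next point."""
    return d(t, omega) - t

def tau(t, omega):
    """The part of omega from time t onwards, shifted so that t is the origin."""
    return sorted(s - t for s in omega if s >= t)

omega = [-2.0, 0.0, 1.0, 3.0]
assert d(0.0, omega) == 1.0 and r(0.0, omega) == 1.0   # note the strict inequality s > t
assert d(1.5, omega) == 3.0
assert d(5.0, omega) == math.inf                       # nothing in omega after t = 5
assert all(d(t, omega) >= t for t in (-3.0, 0.5, 2.0))
# t lies in omega exactly when r_s -> 0 as s increases to t:
eps = 1e-9
assert r(1.0 - eps, omega) <= 2 * eps                  # 1.0 is a point of omega
assert r(2.0 - eps, omega) > 0.5                       # 2.0 is not
# tau applied at d_t always produces a set starting at the origin:
assert tau(d(1.5, omega), omega) == [0.0]
```

The last assertion reflects why the regeneration property in Definition 2.4 is phrased through \(\tau _{d_t}\): the shifted set always contains \(0\), matching a regeneration law carried by closed subsets of \(\mathbb{R }_+\).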
A random set is a measurable mapping \(S\) from a measurable space \((\varOmega ,\fancyscript{F})\) into \((\varOmega ^0,\fancyscript{G}^0)\).
Definition 2.4
A probability measure \(\mathbb{Q }\) on \((\varOmega ^0,\fancyscript{G}^0)\) is regenerative with regeneration law \(\mathbb{Q }^0\) if
-
(i)
\(\mathbb{Q }\{d_t = +\infty \} = 0\), for all \(t \in \mathbb{R }\);
-
(ii)
for all \(t \in \mathbb{R }\) and for all \(\fancyscript{G}^0\)-measurable nonnegative functions \(F\),
$$\begin{aligned} \mathbb{Q }\left[ F(\tau _{d_t}) \, | \, \fancyscript{G}_{t+}^0 \right] = \mathbb{Q }^0[F], \end{aligned}$$(2.5)where we write \(\mathbb{Q }[\cdot ]\) and \(\mathbb{Q }^0[\cdot ]\) for expectations with respect to \(\mathbb{Q }\) and \(\mathbb{Q }^0\). A random set \(S\) defined on a probability space \((\varOmega ,\fancyscript{F}, \mathbb{P })\) is a regenerative set if the push-forward of \(\mathbb{P }\) by the map \(S\) (that is, the distribution of \(S\)) is a regenerative probability measure.
Remark 2.5
Suppose that the probability measure \(\mathbb{Q }\) on \((\varOmega ^0,\fancyscript{G}^0)\) is stationary; that is, if \(S^0\) is the identity map on \(\varOmega ^0\), then the random set \(S^0\) on \((\varOmega ^0,\fancyscript{G}^0, \mathbb{Q })\) has the same distribution as \(u+S^0\) for any \(u \in \mathbb{R }\) or, equivalently, that the process \((r_t)_{t \in \mathbb{R }}\) has the same distribution as \((r_{t-u})_{t \in \mathbb{R }}\) for any \(u \in \mathbb{R }\). Then, in order to check conditions (i) and (ii) of Definition 2.4 it suffices to check them for the case \(t=0\).
The probability measure \(\mathbb{Q }^0\) is itself regenerative. It assigns all of its mass to the collection of closed subsets of \(\mathbb{R }_+\). As remarked in [16], it is well known that any regenerative probability measure with this property arises as the distribution of a random set of the form \(\mathbf{cl}\{Y_t : Y_t > Y_0, \, t \ge 0\}\), where \((Y_t)_{t \ge 0}\) is a subordinator (that is, a non-decreasing, real-valued Lévy process) with \(Y_0 = 0\)—see [28, 29]. Note that \(\mathbf{cl}\{Y_t : Y_t > Y_0, \, t \ge 0\}\) has the same distribution as \(\mathbf{cl}\{Y_{ct} : Y_{ct} > Y_0, \, t \ge 0\}\) for any \(c > 0\), and so the distribution of the subordinator associated with a regeneration law can at most be determined up to linear time change (equivalently, the corresponding drift and Lévy measure can at most be determined up to a common constant multiple). It turns out that the distribution of the subordinator is unique except for this ambiguity—again see [28, 29].
We refer the reader to [16, Theorem \(3.5\)] for a description of the sense in which a stationary regenerative probability measure \(\mathbb{Q }\) with regeneration law \(\mathbb{Q }^0\) can be regarded as \(\mathbb{Q }^0\) “made stationary”. Note that if \(\varLambda \) is the Lévy measure of the subordinator associated with \(\mathbb{Q }\) in this way, then, by stationarity, it must be the case that \(\int _{\mathbb{R }_+} y \, \varLambda (dy) < \infty \).
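A discrete-time caricature (ours, not from the paper) of “making a regenerative set stationary”: for a renewal set with interarrival law \(T\) and \(\mathbb{E }[T] < \infty \)—the analogue of the condition \(\int _{\mathbb{R }_+} y \, \varLambda (dy) < \infty \) above—the stationary version has the gap straddling any fixed time length-biased, \(\mathbb{P }\{\text{gap} = x\} = x \, \mathbb{P }\{T = x\}/\mathbb{E }[T]\). A short Monte Carlo check:

```python
import random

random.seed(7)

def straddling_gap():
    """Run a renewal set from far in the past (with a uniform phase) and
    return the length of the interarrival gap that straddles time 0."""
    s = -200.0 + random.random()
    while True:
        t = random.choice([1.0, 2.0])   # interarrival law: 1 or 2, each w.p. 1/2
        if s + t > 0:
            return t
        s += t

n = 20000
frac2 = sum(straddling_gap() == 2.0 for _ in range(n)) / n
# Length-biased prediction: P{gap = 2} = 2 * (1/2) / E[T] = 1 / 1.5 = 2/3,
# even though only half of all gaps have length 2 (the inspection paradox).
assert abs(frac2 - 2.0 / 3.0) < 0.02
```

The burn-in from time \(-200\) with a uniform phase is a crude stand-in for the genuinely stationary construction of [16]; it suffices here because the renewal set forgets its starting point quickly.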
Theorem 2.6
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Then, the random (closed) set \(\fancyscript{Z}\) is stationary and regenerative.
Proof
It follows from Lemma 9.3 that \(\fancyscript{Z}\) is almost surely closed.
We next show that \(\fancyscript{Z}\) is stationary. Note for \(u \in \mathbb{R }\) that \(u + \fancyscript{Z} = \{t \in \mathbb{R }\!\!:\!X_{(t-u)} \wedge X_{(t-u)-} = M_{(t-u)}\}\). Define \((\breve{X}_t)_{t \in \mathbb{R }}\) by \(\breve{X}_t = X_{t-u} - X_{(-u)}\) for \(t \in \mathbb{R }\) and let \(\breve{M}\) be the \(\alpha \)-Lipschitz minorant of \(\breve{X}\). Note that \(\breve{M}_t = M_{t-u} - X_{(-u)}\) for \(t \in \mathbb{R }\). Therefore, \(u + \fancyscript{Z} = \{t \in \mathbb{R }: \breve{X}_t \wedge \breve{X}_{t-} = \breve{M}_t\}\) and hence \(u + \fancyscript{Z}\) has the same distribution as \(\fancyscript{Z}\) because \(\breve{X}\) has the same distribution as \(X\).
We now show that \(\fancyscript{Z}\) is regenerative. For \(t \in \mathbb{R }\) set
$$\begin{aligned} S_t&:= \inf \{ s \ge t : X_s \wedge X_{s-} - \alpha (s-t) \le \inf \{ X_u - \alpha (u-t) : u \le t \} \},\\ D_t&:= \sup \{ s \ge S_t : X_s \wedge X_{s-} + \alpha (s - S_t) = \inf \{ X_u \wedge X_{u-} + \alpha (u - S_t) : u \ge S_t \} \}, \end{aligned}$$
and \(\fancyscript{F}_t := \bigcap _{s > t} \sigma \{X_u : u \le s\}\).
We claim that \(X_{S_t} \le X_{S_t-}\) almost surely. Suppose to the contrary that the event \(A := \{X_{S_t} > X_{S_t-}\}\) has positive probability. On the event \(A,\,X_s > X_{S_t} - (X_{S_t} - X_{S_t-})/2\) for \(s \in (S_t, S_t + \delta )\) for some (random) \(\delta > 0\). Hence, \(X_{s-} \ge X_{S_t} - (X_{S_t} - X_{S_t-})/2\) for \(s \in (S_t, S_t + \delta )\), and so \(X_s \wedge X_{s-} - \alpha (s - t) > X_{S_t} \wedge X_{S_t-} - \alpha (S_t-t) = X_{S_t-} - \alpha (S_t-t)\) for \(s \in (S_t, S_t + \delta )\) on the event \(A\) provided \(\delta \) is sufficiently small. It follows that, on the event \(A,\,X_{S_t-} - \alpha (S_t-t) \le \inf \{ X_u - \alpha (u-t) : u \le t\}\) and \(X_s \wedge X_{s-} - \alpha (s-t) > \inf \{ X_u - \alpha (u-t) : u \le t\}\) for \(t \le s < S_t\). Define a non-decreasing sequence of stopping times \(\{U_n\}_{n \in \mathbb N }\) by
$$\begin{aligned} U_n := \inf \{ s \ge t : X_s \wedge X_{s-} - \alpha (s-t) \le \inf \{ X_u - \alpha (u-t) : u \le t \} + 1/n \} \end{aligned}$$
and set \(U_\infty := \sup _{n \in \mathbb N } U_n\). We have shown that, on the event \(A,\,U_n < S_t\) for all \(n \in \mathbb N \) and \(U_\infty = S_t\). By the quasi-left-continuity of \(X,\,\lim _{n \rightarrow \infty } X_{U_n} = X_{U_\infty }\) almost surely. In particular, \(X_{S_t} = X_{S_t-}\) almost surely on the event \(A\), and so \(A\) cannot have positive probability.
Lemma 9.4 now gives that almost surely
$$\begin{aligned} D_t = \inf \{ s > t : s \in \fancyscript{Z} \}. \end{aligned}$$
We have already remarked that \(\fancyscript{Z}\) is almost surely unbounded above and below, and hence condition (i) of Definition 2.4 holds. By Remark 2.5, in order to check condition (ii) of Definition 2.4, it suffices to consider the case \(t=0\).
For notational simplicity, set \(S := S_0\) and \(D := D_0\)—see Fig. 3 for two illustrations of the construction of \(S\) and \(D\) from a sample path. For a random time \(U\), let \(\fancyscript{F}_{U}\) be the \(\sigma \)-field generated by random variables of the form \(\xi _U\), where \((\xi _t)_{t \in \mathbb{R }}\) is some optional process for the filtration \((\fancyscript{F}_t)_{t \in \mathbb{R }}\) (cf. Millar [32, 33]). It follows from Corollary 9.2 (where we are thinking intuitively of removing the process to the right of \(D\) rather than to the right of zero) that \(\bigcap _{\epsilon > 0} \sigma \{R_s : s \le \epsilon \} \subseteq \fancyscript{F}_{D}\).
Put
$$\begin{aligned} \tilde{X}_t := X_{S + t} - X_S + \alpha t, \quad t \ge 0. \end{aligned}$$
By the strong Markov property at the stopping time \(S\) and the spatial homogeneity of \(X\), the process \(\tilde{X}\) is independent of \(\fancyscript{F}_S\) with the same distribution as the Lévy process \((X_t + \alpha t)_{t \ge 0}\). Suppose for the Lévy process \((X_t + \alpha t)_{t \ge 0}\) that zero is regular for the interval \((0,\infty )\). A result of Millar [31, Proposition \(2.4\)] implies that almost surely there is a unique time \(\tilde{T}\) such that \(\tilde{X}_{\tilde{T}} = \inf \{\tilde{X}_s: s \ge 0\}\) and that if \(\bar{T}\) is such that \(\tilde{X}_{\bar{T} -} = \inf \{\tilde{X}_s: s \ge 0\}\), then \(\bar{T} = \tilde{T}\). Thus, \(\tilde{T} = \sup \{ t \ge 0: \tilde{X}_t \wedge \tilde{X}_{t-} = \inf \{\tilde{X}_s: s \ge 0\} \}\) and \(D = S + \tilde{T}\). Combining this observation with the main result of Millar [32] (see Remark 2.7 below) and the fact that \(\tilde{X}_{\tilde{T}} = \inf \{\tilde{X}_s: s \ge 0\}\) gives that \((\tilde{X}_{\tilde{T}+t})_{t \ge 0}\) is conditionally independent of \(\fancyscript{F}_D\) given \(\tilde{X}_{\tilde{T}}\). Thus, again by the spatial homogeneity of \(\tilde{X},\,(\tilde{X}_{\tilde{T}+t} - \tilde{X}_{\tilde{T}})_{t \ge 0}\) is independent of \(\fancyscript{F}_D\). This establishes condition (ii) of Definition 2.4 for \(t=0\).
If zero is not regular for the interval \((0,\infty )\) for the Lévy process \((X_t + \alpha t)_{t \ge 0}\), then zero is necessarily regular for the interval \((0,\infty )\) for the Lévy process \((X_{-t-} + \alpha t)_{t \ge 0}\) because this latter process has the same distribution as \((-(X_t + \alpha t) + 2 \alpha t)_{t \ge 0}\). The argument above then establishes that the random set \(-\fancyscript{Z}\) is regenerative. It follows from [16, Theorem 4.1] that \(\fancyscript{Z}\) is regenerative with the same distribution as \(-\fancyscript{Z}\).
\(\square \)
Remark 2.7
A key ingredient in the proof of Theorem 2.6 was the result of Millar from [32] which says that, under suitable conditions, the future evolution of a càdlàg strong Markov process after the time it attains its global minimum is conditionally independent of the past up to that time given the value of the process and its left limit at that time. That result follows in turn from results in [18] on last exit decompositions or results in [40] on analogues of the strong Markov property at general coterminal times. Note that in the case of Lévy processes the independence can also be proved using Pitman and Uribe Bravo’s description of the convex minorant of a Lévy process [38], since in this description the pre and post minimum processes are obtained from restricting a Poisson point process to disjoint sets. We did not apply Millar’s result directly; rather, we considered a random time \(D = D_0\) that was the last time after a stopping time that a strong Markov process attained its infimum over times greater than the stopping time and combined Millar’s result with the strong Markov property at the stopping time. An alternative route would have been to observe that the random time \(D\) is a randomized coterminal time in the sense of [33] for a suitable strong Markov process.
3 Behavior of Lévy processes at local extrema
To identify properties of the subordinator associated with the Lipschitz minorant, Theorem 4.7 will make use of the main result of this section, Theorem 3.1, concerning the behavior of Lévy processes at local extrema. We also include in this section related results that we do not use later but that we believe are of some interest. Write
$$\begin{aligned} \fancyscript{M}^- := \{ t \in \mathbb{R }: \text{there exists } \varepsilon > 0 \text{ such that } X_t \wedge X_{t-} \le X_s \text{ for all } s \in [t - \varepsilon , t + \varepsilon ] \} \end{aligned}$$
for the set of local infima of the path of \(X\) and
$$\begin{aligned} \fancyscript{M}^+ := \{ t \in \mathbb{R }: \text{there exists } \varepsilon > 0 \text{ such that } X_t \vee X_{t-} \ge X_s \text{ for all } s \in [t - \varepsilon , t + \varepsilon ] \} \end{aligned}$$
for the set of local suprema.
Theorem 3.1
Let \(X\) be a Lévy process such that \(\sigma \ne 0\) or \(\varPi (\mathbb{R }) = \infty \). For \(r \ge 0\) and \(t \ge 0\), define
$$\begin{aligned} T_t^{(r)} := \inf \{ s > 0 : X_{t+s} - X_t \wedge X_{t-} < r s \}. \end{aligned}$$
Then, \(T_t^{(r)} > 0\) for all \(t \in \fancyscript{M}^-\) almost surely if and only if
$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{ X_t \in [0, rt] \} \, dt < \infty . \end{aligned}$$(3.3)
Similarly, for \(r < 0\) and \(t \ge 0\), define
$$\begin{aligned} T_t^{(r)} := \inf \{ s > 0 : X_{t+s} - X_t \vee X_{t-} > r s \}. \end{aligned}$$
Then, \(T_t^{(r)} > 0\) for all \(t \in \fancyscript{M}^+\) almost surely if and only if
$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{ X_t \in [rt, 0] \} \, dt < \infty . \end{aligned}$$
Proof
We need prove only the result relating to local infima, since the result for local suprema follows by a time reversal argument.
We note from Remark 6.1 below that for \(0 \le a < b < \infty \) there is almost surely a unique time \(U \in [a,b]\) such that \(X_U \wedge X_{U-} = \inf \{X_t : t \in [a,b]\} = \inf \{X_t \wedge X_{t-}: t \in [a,b]\}\). We use the notation \(\arg \inf _{t \in [a,b]} X_t\) for the random time \(U\).
Let \(\xi \) be an exponentially distributed random time with parameter \(0 < q < \infty \) that is independent of \(X\). Define
$$\begin{aligned} \rho := \mathop {\arg \inf }_{t \in [0, \xi ]} X_t. \end{aligned}$$
We claim that by a classical localization argument it is sufficient to show that the result holds at \(\rho \) almost surely rather than at every \(t \in \fancyscript{M}^-\). To be explicit, suppose we have shown that the result holds at \(\rho \) almost surely. It is then true with \(\rho \) replaced by \(\arg \inf _{0 \le s \le t} X_s\) for all \(t>0\) in some set whose complement can be contained in a Lebesgue-null set. Let \(\{ t_n \}_{n \in \mathbb N }\) be a sequence of such times with \(t_n \downarrow 0\). For \(n \in \mathbb N \) define
$$\begin{aligned} \fancyscript{I}_n := \left\{ \mathop {\arg \inf }_{s \in [q, q + t_n]} X_s : q \in \mathbb{Q }\right\} , \end{aligned}$$
so that the result is true almost surely for every element of \(\fancyscript{I}_n\), and define
$$\begin{aligned} \fancyscript{M}^-_n := \{ t \in \fancyscript{M}^- : X_t \wedge X_{t-} \le X_s \text{ for all } s \in [t - t_n, t + t_n] \}, \end{aligned}$$
so that \(\fancyscript{M}^- = \bigcup _{n \ge 1} \fancyscript{M}^-_n\). The localization argument is completed by noting that \(\fancyscript{M}^-_n \subseteq \fancyscript{I}_n\) for every \(n \in \mathbb N \).
To show that the result is true at \(\rho \) we use a technique involving the convex minorant of the path of a Lévy process. By results of Pitman and Uribe Bravo [38, Corollary 2], the linear segments of the convex minorant of the process \((X_t)_{0 \le t \le \xi }\) define a Poisson point process on \(\mathbb{R }_+ \times \mathbb{R }\), where a point at \((t, x)\) represents a segment with length \(t\) and increment \(x\). The intensity measure of the Poisson point process is the measure on \(\mathbb{R }_+ \times \mathbb{R }\) given by
$$\begin{aligned} \mu (dt, dx) = e^{-qt} t^{-1} \, \mathbb{P }\{ X_t \in dx \} \, dt. \end{aligned}$$(3.5)
In order to recreate the convex minorant from the point process, the segments are pieced together in increasing order of slope—[38, Proposition 1] states that almost surely no two segments may have the same slope. Note that the time \(\rho \) is the supremum of the right-hand endpoints of the ordered segments with negative slope and the infimum of the left-hand endpoints of the segments with positive slope. Observe also that, up to translations in time and space, the restriction of the convex minorant to the time interval \([\rho ,\xi ]\) can be recreated by only piecing together the segments of positive slope. Let \(\fancyscript{S}\) be the infimum of the slopes of all segments of the convex minorant of \((X_t)_{0 \le t \le \xi }\) that have positive slope. It follows from (3.5) that for \(r > 0\),
$$\begin{aligned} \mathbb{P }\{ \fancyscript{S} \ge r \} = \exp \left( - \int _0^\infty e^{-qt} t^{-1} \mathbb{P }\{ X_t \in (0, rt) \} \, dt \right) . \end{aligned}$$
Since almost surely no two segments may have the same slope, we must have \(\int _0^\infty e^{-qt} t^{-1} \mathbb{P }\{ X_t = rt \} \, dt = 0\) for all \(r \ge 0\), and hence for all \(r \ge 0\) we have
$$\begin{aligned} \mathbb{P }\{ \fancyscript{S} \ge r \} = \exp \left( - \int _0^\infty e^{-qt} t^{-1} \mathbb{P }\{ X_t \in [0, rt] \} \, dt \right) . \end{aligned}$$
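For standard Brownian motion one can evaluate this void-probability expression \(\mathbb{P }\{\fancyscript{S} \ge r\} = \exp ( - \int _0^\infty e^{-qt} t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt )\) numerically, since \(\mathbb{P }\{X_t \in [0, rt]\} = \varPhi (r \sqrt{t}) - \frac{1}{2}\). A sketch (ours, with arbitrary quadrature parameters):

```python
import math

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_slope_ge(r, q=1.0):
    """P{S >= r} = exp(-I(r)) for standard Brownian motion, where
    I(r) = int_0^inf e^{-qt} t^{-1} (Phi(r*sqrt(t)) - 1/2) dt.
    The substitution t = u^2 makes the integrand bounded near 0."""
    U, n = 8.0, 20000
    du = U / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * du   # midpoint rule; e^{-q u^2} kills the tail beyond U
        total += 2.0 * math.exp(-q * u * u) * (Phi(r * u) - 0.5) / u * du
    return math.exp(-total)

# The integral is finite for every r > 0, so the minimal positive slope S is
# bounded away from zero with positive probability (Brownian motion is abrupt).
p1, p2 = prob_slope_ge(0.25), prob_slope_ge(0.5)
assert 0.0 < p2 < p1 < 1.0        # decreasing in r, strictly between 0 and 1
assert prob_slope_ge(1e-3) > 0.99  # P{S >= r} -> 1 as r -> 0
```

This is exactly the situation of the first case in the proof: the finiteness of the integral for all \(r\) forces \(T_\rho ^{(r)} > 0\) almost surely.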
Suppose first that \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt < \infty \). Then, \(\mathbb{P }\{ \fancyscript{S} \ge r \} > 0\) and thus, with positive probability, there exists \(\varepsilon >0\) such that \(X_{\rho +t}- X_\rho \wedge X_{\rho -} \ge r t\) for all \(0 \le t \le \varepsilon \). Hence, by Millar’s zero-one law at the infimum of a Lévy process, such an \(\varepsilon \) exists almost surely. By the almost sure uniqueness of the value of the infimum of a Lévy process that is not a compound Poisson process with zero drift [4, Proposition VI.4], there exists almost surely \(\varepsilon > 0\) such that \(X_{\rho +t}- X_\rho \wedge X_{\rho -} > r t\) for all \(0 < t \le \varepsilon \). Hence, \(T_\rho ^{(r)} > 0\) almost surely.
Now suppose that \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, r^* t]\} \, dt = \infty \). Then, \(\mathbb{P }\{ \fancyscript{S} \ge r^* \} = 0\), and since the convex minorant of \((X_t)_{0 \le t \le \xi }\) then almost surely contains linear segments with positive slope less than \(r^*\), it follows that \(T_\rho ^{(r^*)} = 0\) almost surely. \(\square \)
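To illustrate the integral test in Theorem 3.1, take \(X\) to be standard Brownian motion: \(\mathbb{P }\{X_t \in [0, rt]\} = \tfrac{1}{2}\operatorname{erf}(r\sqrt{t/2}) \sim r\sqrt{t/(2\pi )}\) as \(t \downarrow 0\), so \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt < \infty \) for every \(r \ge 0\). A minimal numeric sketch of this computation (the function names below are ours):

```python
import math

def p_cone(r, t):
    # P{B_t in [0, r t]} for standard Brownian motion B:
    # B_t ~ N(0, t), so this equals Phi(r*sqrt(t)) - 1/2 = erf(r*sqrt(t/2))/2
    return 0.5 * math.erf(r * math.sqrt(t / 2.0))

def integral_test(r, n=100000):
    # midpoint rule for \int_0^1 t^{-1} P{B_t in [0, r t]} dt; the integrand
    # behaves like r / sqrt(2*pi*t) near 0, an integrable singularity
    h = 1.0 / n
    return h * sum(p_cone(r, (k + 0.5) * h) / ((k + 0.5) * h) for k in range(n))
```

Since the integral is finite for every \(r\), the argument above gives \(T_\rho ^{(r)} > 0\) almost surely for every \(r\), in line with the abruptness of Brownian motion discussed in Sect. 5.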
The following easy corollary is essentially a result of Vigon [49, Proposition 3.6] in the unbounded variation case. In the bounded variation case it can be easily deduced from the forthcoming Proposition 3.4.
Corollary 3.2
Let \(X\) be a Lévy process such that \(\sigma \ne 0\) or \(\varPi (\mathbb{R }) = \infty \). Define
Then,
for all \(t \in \fancyscript{M}^-\) almost surely. Similarly, define
Then,
for all \(t \in \fancyscript{M}^+\) almost surely.
Proof
We will use the same notation as in Theorem 3.1. As before, we need only prove the local infima case, and again it is enough to show that the result holds at \(\rho \) almost surely rather than at every \(t \in \fancyscript{M}^-\).
For any \(0 \le r < r^*_-\) we have \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt < \infty \). Theorem 3.1 implies that \(T^{(r)}_\rho > 0\) almost surely, and thus
for all \(t \in \fancyscript{M}^-\) almost surely, for all \(0 \le r < r^*_-\).
For any \(r > r^*_-\) we have \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt = \infty \). Theorem 3.1 implies that \(T_\rho ^{(r)} = 0\) almost surely. Hence, for all \(r > r^*_-\),
for all \(t \in \fancyscript{M}^-\) almost surely. \(\square \)
Remark 3.3
When \(X\) has paths of bounded variation and drift \(d > 0\), then (2.1) implies that zero is regular for the interval \([0,\infty )\) for the process \((X_t - d^{\prime } t)_{t \ge 0}\) if \(d^{\prime } < d\), but not if \(d^{\prime } > d\), and hence (2.3) implies that \(r^*_- = d\). By (2.3), the integral in (3.3) in the interesting case of \(r=r^*_- = d\), i.e. \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, d t]\} \, dt\), will be finite if and only if zero is not regular for the interval \((-\infty ,0]\) for the process \((X_t - d t)_{t \ge 0}\). An integral condition due to Bertoin involving the Lévy measure \(\varPi \) can be used to determine whether this is the case or not [6]. Note that the reasoning as to why \(r^*_- = d\) also shows that \(r^*_+ = -\infty \)—as will be shown in Proposition 3.4 this is because there is a downwards jump at every \(t \in \fancyscript{M}^+\). Comments similar to all the preceding can be made when \(d < 0\). If \(d=0\), then \(r^*_-\) and \(-r^*_+\) are either infinite or zero, and at least one of them must be zero since \(\varPi (\mathbb{R }) = \infty \).
In the unbounded variation case some special cases are worth mentioning. The value of \(r^*_-\) is infinite when \(X\) has a non-zero Brownian component or is a stable process with stability parameter in the interval \((1,2]\)—see the discussion of abrupt processes in Sect. 5. Vigon provides in an unpublished work [47] a practical method for determining whether the integral in (3.3) is finite for processes with paths of unbounded variation:
where \(\varPsi \) is as defined in Sect. 2.1.
The following proposition completes the picture of the behavior at local extrema. Recall from the comments following (2.3) that, except for the case \(\varPi (\mathbb{R })<\infty \) and \(d=0\), zero is regular for \((-\infty ,0]\) (resp. \([0,\infty )\)) if and only if zero is regular for \((-\infty ,0)\) (resp. \((0,\infty )\)).
Proposition 3.4
Suppose that \(X\) is a Lévy process with paths of unbounded variation. Then, \(X_t = X_{t-}\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely.
Next, suppose that \(X\) is a Lévy process with paths of bounded variation and drift \(d \ne 0\). Then, \(X_t \ne X_{t-}\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely. Moreover, \(\lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{t + \varepsilon } - X_t) = d\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely.
Suppose finally that \(X\) is a Lévy process with paths of bounded variation, drift \(d=0\), and \(\varPi (\mathbb{R }) = \infty \). Then,
-
(i)
If zero is not regular for \([0,\infty )\), then \(X_t > X_{t-}\) for all \(t \in \fancyscript{M}^-\) and \(X_t = X_{t-}\) for all \(t \in \fancyscript{M}^+\) almost surely.
-
(ii)
If zero is not regular for \((-\infty ,0]\), then \(X_t = X_{t-}\) for all \(t \in \fancyscript{M}^-\) and \(X_t < X_{t-}\) for all \(t \in \fancyscript{M}^+\) almost surely.
-
(iii)
If zero is regular for both \((-\infty ,0]\) and \([0,\infty )\), then \(X_t = X_{t-}\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely.
Moreover, in all cases \(\lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{t + \varepsilon } - X_t) = 0\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely.
Proof
As before, we only need prove the local infima cases, and again it is enough to show that the result holds at \(\rho \) almost surely rather than at every \(t \in \fancyscript{M}^-\).
Suppose that \(X\) has paths of unbounded variation. Then, \(\liminf _{t \downarrow 0} t^{-1} X_t = -\infty \) and \(\limsup _{t \downarrow 0} t^{-1} X_t = +\infty \) almost surely by (2.2), and hence zero is regular for both \((-\infty ,0]\) and \([0,\infty )\). That \(X_\rho = X_{\rho -}\) then follows from [31, Theorem 3.1].
Next, suppose that \(X\) is a Lévy process with paths of bounded variation and drift \(d \ne 0\). A result of Millar states that any Lévy process for which zero is not regular for \([0,\infty )\) must jump out of its global infimum and that any Lévy process for which zero is not regular for \((-\infty ,0]\) must jump into its global infimum—see [31, Theorem 3.1]. By (2.1), \(\lim _{t \downarrow 0} t^{-1} X_t = d\) almost surely, and so one of these alternatives must hold. Hence, in either case, \(X_\rho \ne X_{\rho -}\).
Moreover, the fact that \(\rho \) is a jump time of \(X\) implies \(\lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{\rho + \varepsilon } - X_\rho ) = d\). To see this, we argue as in [38]. For \(\delta >0\), let \(0 < J_1^\delta < J_2^\delta < \ldots \) be the successive times at which \(X\) has jumps of size greater than \(\delta \) in absolute value. The strong Markov property applied at the stopping time \(J_i^\delta \), together with (2.1), gives that
Hence, at any random time \(V\) such that \(X_{V} \ne X_{V-}\) almost surely we have
Suppose finally that \(X\) has paths of bounded variation with drift \(d=0\) and with \(\varPi (\mathbb{R }) = \infty \). Since \(\varPi (\mathbb{R }) = \infty \), zero must be regular for at least one of \((-\infty ,0]\) and \([0,\infty )\). Results (i), (ii) and (iii) are then direct consequences of [31, Theorem 3.1]. In case (iii), Remark 3.3 explains why \(r^*_- = 0\) and thus Corollary 3.2 implies that \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{\rho + \varepsilon } - X_\rho ) = 0\) almost surely. In cases (i) and (ii), since \(\rho \) is a jump time of \(X\), as before it must be the case that \(\lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{\rho + \varepsilon } - X_\rho ) = d = 0\) almost surely. \(\square \)
Remark 3.5
The relationships between \(X_t\) and \(X_{t-}\) given in Proposition 3.4 are essentially results of Millar [31, Theorem 3.1].
4 Identification of the associated subordinator
Let \(Y = (Y_t)_{t \ge 0}\) be “the” subordinator associated with the regenerative set \(\fancyscript{Z}\). Write \(\delta \) and \(\varLambda \) for the drift coefficient and Lévy measure of \(Y\). Recall that these quantities are unique up to a common scalar multiple. The closed range of \(Y\) either has zero Lebesgue measure almost surely or infinite Lebesgue measure almost surely according to whether \(\delta \) is zero or positive [13, Chapter 2, Theorem 3]. Consequently, the same dichotomy holds for the contact set \(\fancyscript{Z}\). It follows from Fubini’s theorem and the stationarity of \(\fancyscript{Z}\) that \(\fancyscript{Z}\) has infinite Lebesgue measure almost surely if and only if \(\mathbb{P }\{ 0 \in \fancyscript{Z} \} > 0\). The following result gives necessary and sufficient conditions for each alternative in this dichotomy.
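Before turning to the dichotomy, it may help to see the objects concretely. The following sketch is our own illustration (function names are hypothetical): left limits are invisible on a grid, and the finite window is treated as the whole time domain, so the minorant is only approximate near the window's edges. It computes the discrete analogue of the \(\alpha \)-Lipschitz minorant \(m(t) = \inf _s \{ f(s) + \alpha |t-s| \}\) and of the contact set \(\fancyscript{Z}\) via the standard two-pass inf-convolution (distance-transform) recursion:

```python
def lipschitz_minorant(path, alpha, dt):
    """alpha-Lipschitz minorant of a discretely sampled path on a uniform
    grid with spacing dt: m(t) = min_s (f(s) + alpha*|t - s|), computed by
    a forward pass (minorize from the left) and a backward pass."""
    m = list(path)
    for i in range(1, len(m)):            # forward: m[i] <= m[i-1] + alpha*dt
        m[i] = min(m[i], m[i - 1] + alpha * dt)
    for i in range(len(m) - 2, -1, -1):   # backward: m[i] <= m[i+1] + alpha*dt
        m[i] = min(m[i], m[i + 1] + alpha * dt)
    return m

def contact_indices(path, m, tol=1e-12):
    """Grid points where the path touches its minorant (discrete contact set)."""
    return [i for i, (f, g) in enumerate(zip(path, m)) if abs(f - g) <= tol]
```

For example, on the grid path \((0, 1, -1, 0.5)\) with \(\alpha = 1\) and unit spacing, the minorant is \((0, 0, -1, 0)\) and the contact set is \(\{0, 2\}\).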
Theorem 4.1
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. If \(\sigma =0,\,\varPi (\mathbb{R }) < \infty \), and \(|d|=\alpha \), then the Lebesgue measure of \(\fancyscript{Z}\) is almost surely infinite. If \(X\) is not of this form, then the Lebesgue measure of \(\fancyscript{Z}\) is almost surely zero if and only if zero is regular for the interval \((-\infty ,0]\) for at least one of the Lévy processes \((X_t + \alpha t)_{t\ge 0}\) and \((-X_t+\alpha t)_{t \ge 0}\).
Proof
Suppose first that \(\sigma =0,\,\varPi (\mathbb{R }) < \infty \) and \(|d|=\alpha \). In this case, the paths of \(X\) are piecewise linear with slope \(d\). Our standing assumption \(|\mathbb{E }[X_1]| < \alpha \) and the strong law of large numbers give \(\lim _{t \rightarrow -\infty } t^{-1} X_t = \lim _{t \rightarrow +\infty } t^{-1} X_t = \mathbb{E }[X_1]\). It is now clear that \(\fancyscript{Z}\) has positive Lebesgue measure with positive probability and hence infinite Lebesgue measure almost surely.
Suppose now that \(X\) is not of this special form. It suffices by the remarks prior to the statement of the result to show that \(\mathbb{P }\{ 0 \in \fancyscript{Z} \} > 0\) if and only if zero is not regular for \((-\infty ,0]\) for both of the Lévy processes \((X_t + \alpha t)_{t \ge 0}\) and \((-X_t+\alpha t)_{t \ge 0}\).
Set \(I^- := \inf \{X_t - \alpha t : t \le 0\}\) and \(I^+ := \inf \{X_t + \alpha t : t \ge 0\}\). Recall from (1.1) that \(M_0 = I^{-} \wedge I^+\). Therefore, since \(I^-\) and \(I^+\) are independent and both are at most \(X_0 = 0\),
$$\begin{aligned} \mathbb{P }\{ 0 \in \fancyscript{Z} \} = \mathbb{P }\{ I^- \wedge I^+ = 0 \} = \mathbb{P }\{ I^- = 0 \} \, \mathbb{P }\{ I^+ = 0 \}, \end{aligned}$$
and so \(\mathbb{P }\{0 \in \fancyscript{Z}\} > 0\) if and only if \(\mathbb{P }\{I^- = 0\} > 0\) and \(\mathbb{P }\{I^+ = 0\} > 0\).
Note that \(I^-\) has the same distribution as \(\inf \{-X_t + \alpha t : t \ge 0\}\). From the formulas of Pecherskii and Rogozin [37] (or [4, Theorem VI.5]),
and
Taking the limit as \(\theta \rightarrow \infty \) and applying monotone convergence in (4.1) and in (4.2) gives
and
Since we are assuming that it is not the case that \(\sigma = 0\), \(\varPi (\mathbb{R }) < \infty \) and \(|d|=\alpha \), we have \(\mathbb{P }\{X_t + \alpha t = 0\} = \mathbb{P }\{-X_{t} + \alpha t = 0\} = 0\) for all \(t>0\). Moreover, by our standing assumption \(|\mathbb{E }[X_1] | < \alpha \) it certainly follows that both \(X_t + \alpha t\) and \(-X_{t} + \alpha t\) drift to \(+\infty \). Hence, by a result of Rogozin [44] (or see [4, Theorem VI.12])
The result now follows from (2.3) which implies that zero is not regular for the interval \((-\infty ,0]\) for both \((-X_{t}+\alpha t)_{t \ge 0}\) and \((X_t + \alpha t)_{t \ge 0}\) if and only if
\(\square \)
Remark 4.2
-
(i)
Note that zero is regular for the interval \((-\infty ,0]\) for both \((X_t + \alpha t)_{t \ge 0}\) and \((-X_{t}+\alpha t)_{t \ge 0}\) when \(X\) has paths of unbounded variation, since then \(\liminf _{t \downarrow 0} t^{-1} X_t = -\infty \) and \(\limsup _{t \downarrow 0} t^{-1} X_t = +\infty \) by (2.2).
-
(ii)
If \(X\) has paths of bounded variation and drift coefficient \(d\), then \(\lim _{t \downarrow 0} t^{-1}(X_t+ \alpha t) = d+ \alpha \) and \(\lim _{t \downarrow 0} t^{-1}(-X_{t}+ \alpha t) = -d + \alpha \) by (2.1). Thus, if \(|d| < \alpha \), then zero is regular for \((-\infty ,0]\) for neither \((X_t + \alpha t)_{t \ge 0}\) nor \((-X_{t}+\alpha t)_{t \ge 0}\), whereas if \(|d| > \alpha \), then zero is regular for \((-\infty ,0]\) for exactly one of those two processes.
-
(iii)
If \(X\) has paths of bounded variation and \(|d| = \alpha \), then an integral condition due to Bertoin [6] involving the Lévy measure \(\varPi \) determines whether zero is regular for the interval \((-\infty ,0]\) for whichever of the processes \((X_t +\alpha t)_{t \ge 0}\) or \((-X_{t}+\alpha t)_{t \ge 0}\) has zero drift coefficient.
Remark 4.3
Recall the notation \(G = \sup \{ t < 0 : t \in \fancyscript{Z} \},\,D = \inf \{ t > 0 : t \in \fancyscript{Z} \}\) and \(K = D-G\) (note that \(D = d_0 \circ \fancyscript{Z}\)). Recall also that \(\varLambda \) and \(\delta \) are choices (unique up to a common scalar multiple) of the Lévy measure and drift of “the” subordinator associated with the stationary regenerative set \(\fancyscript{Z}\). If \(\delta = 0\), a condition equivalent to the Lebesgue measure of \(\fancyscript{Z}\) being almost surely zero and \(0 \notin \fancyscript{Z}\) almost surely, then \(G < 0 < D\) almost surely, and the distribution of \(K\) is obtained by size-biasing the Lévy measure \(\varLambda \); that is,
$$\begin{aligned} \mathbb{P }\{ K \in dy \} = \frac{y \, \varLambda (dy)}{\int _{\mathbb{R }_+} u \, \varLambda (du)}, \quad y \in \mathbb{R }_+ \end{aligned}$$
(recall that \(\int _{\mathbb{R }_+} y \, \varLambda (dy) < \infty \) since \(\fancyscript{Z}\) is stationary).
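Size-biasing is the familiar inspection-paradox effect: a gap is sampled with probability proportional to its length, since longer gaps are more likely to straddle the origin. A minimal sketch for a hypothetical purely atomic choice of \(\varLambda \) (function names are ours):

```python
def size_biased(atoms):
    """Size-bias a finite measure given as {gap length y: mass}: the gap
    K = D - G covering a fixed point satisfies P{K = y} proportional to
    y * mass(y)."""
    norm = sum(y * w for y, w in atoms.items())
    return {y: y * w / norm for y, w in atoms.items()}
```

For instance, if \(\varLambda \) puts mass \(1/2\) on gaps of length \(1\) and mass \(1/4\) on gaps of length \(2\), the length-\(2\) gaps, though half as numerous, are equally likely to cover the origin: both size-biased probabilities are \(1/2\).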
If the Lebesgue measure of \(\fancyscript{Z}\) is positive almost surely (and hence \(\delta > 0\)), then \(\mathbb{P }\{K=0\} > 0\) and we see by multiplying together (4.3) and (4.4) that
In this latter case, the conditional distribution of \(K\) given \(K > 0\) is the size-biasing of \(\varLambda \). The relationship between \(\delta \) and \(\varLambda \) is easily deduced since \(\mathbb{P }\{ K = 0 \}\) is the expected proportion of the real line that is covered by the range of the subordinator associated with \(\fancyscript{Z}\). Thus,
$$\begin{aligned} \mathbb{P }\{ K = 0 \} = \frac{\delta }{\delta + \int _{\mathbb{R }_+} y \, \varLambda (dy)}. \end{aligned}$$
Remark 4.4
Theorem 4.1 and Remark 4.2 provide conditions for deciding whether the Lebesgue measure of the contact set \(\fancyscript{Z}\) is zero almost surely or positive almost surely. Recall that the Lebesgue measure of \(\fancyscript{Z}\) is zero almost surely if and only if \(0 \notin \fancyscript{Z}\) almost surely or, equivalently, that \(G < 0 < D\) almost surely. This is in turn equivalent to \(\delta = 0\). When \(\delta = 0\), we have \(\varLambda (\mathbb{R }_+) < \infty \) if and only if the contact set \(\fancyscript{Z}\) is a discrete set, and this is equivalent to \(D^{\prime } > D\) almost surely, where
$$\begin{aligned} D^{\prime } := \inf \{ t \in \fancyscript{Z} : t > D \}. \end{aligned}$$
Clearly, if \(\sigma = 0\) and \(\varPi (\mathbb{R }) < \infty \), then \(\varLambda (\mathbb{R }_+) < \infty \). In the case \(\sigma =0,\,\varPi (\mathbb{R }) = \infty \), and \(\delta > 0\), we claim that \(\varLambda (\mathbb{R }_+) = \infty \). To see this, suppose to the contrary that \(\varLambda (\mathbb{R }_+)<\infty \). Then there almost surely exist times \(t_1 < t_2\) such that \(X_t \wedge X_{t-} = M_t\) for all \(t_1 < t < t_2\). Because \(t \mapsto M_t\) is continuous, the paths of \(X\) cannot jump between times \(t_1\) and \(t_2\). However, when \(\varPi (\mathbb{R }) = \infty \) the jump times of \(X\) are almost surely dense in \(\mathbb{R }\).
In Theorem 4.7 we provide conditions for deciding whether \(\varLambda (\mathbb{R }_+) < \infty \) or \(\varLambda (\mathbb{R }_+) = \infty \) for the remaining case \(\delta = 0\) and \(\varPi (\mathbb{R }) = \infty \).
We first give some relevant preliminary results on the behavior of the path of the Lipschitz minorant \(M\) on the interval \([G,D]\). By Proposition 9.3, when \(G < 0 < D\) the path of \(M\) on \([G,D]\) consists of a line of slope \(+\alpha \) on the interval \([G,T]\) followed by a line of slope \(-\alpha \) on the interval \([T,D]\), where \(T\) is the unique time that \(M\) attains its maximum over the interval \([G,D]\). Recall that
As in the proof of Theorem 2.6, it follows from Lemma 9.4 that almost surely
Moreover, from Corollary 9.5, \(G \le T \le S \le D\) and \(T=S=D\) on the event \(\{T=S\}\).
Proposition 4.5
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Then, \(\mathbb{P }\{0 \notin \fancyscript{Z}, \, S=0\} = 0\). In addition,
-
(a)
If \(X\) has paths of unbounded variation, then \(G<T<S<D\) almost surely.
-
(b)
If \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(d < - \alpha \), then \(G < T<S<D\) almost surely, and if \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(d > \alpha \), then \(G < T<S \le D\) almost surely.
-
(c)
If \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| < \alpha \), then almost surely either \(0 \in \fancyscript{Z}\) and \(G = T = S = D = 0\), or \(0 \notin \fancyscript{Z}\) and \(G \le T \le S \le D\). Furthermore, \(T=S=D\) almost surely on the event \(\{T=S\}\).
Proof
Firstly, if \(0 \notin \fancyscript{Z}\), then \(\inf \{ X_u - \alpha u : u \le 0 \} < 0\), and thus \(S>0\) almost surely on the event \(\{0 \notin \fancyscript{Z}\}\).
-
(a)
Suppose that \(X\) has paths of unbounded variation. We have from Theorem 4.1 (see Remark 4.2(i)) that \(0 \notin \fancyscript{Z}\) almost surely. It follows from (2.2) that at the stopping time \(S\)
$$\begin{aligned} - \liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{S+\varepsilon } - X_S) = \limsup _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{S+\varepsilon } - X_S) = \infty , \end{aligned}$$and hence it is not possible for the \(\alpha \)-Lipschitz minorant to meet the path of \(X\) at time \(S\). Thus, \(T < S < D\) almost surely by Corollary 9.5. By time reversal, \(G < T\) almost surely.
-
(b)
Suppose that \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| > \alpha \). Then we have from Theorem 4.1 (see Remark 4.2(ii)) that \(0 \notin \fancyscript{Z}\) almost surely. Therefore, by Corollary 9.5, if \(T = S\), then \(T=S=D\). Suppose that \(d<-\alpha \). It follows from (2.1) that
$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{S+\varepsilon } - X_S) = d, \quad \text{ a.s. } \end{aligned}$$Thus, \(S \notin \fancyscript{Z}\) and, in particular, \(S < D\), so that \(T < S < D\) almost surely. On the other hand, if \(d > \alpha \), then the Lévy process \((X_t - \alpha t)_{t \ge 0}\) has positive drift and so the associated descending ladder process has zero drift coefficient [13, p. 56]. In that case, for any \(x < 0\) we have \(X_V - \alpha V < x\) almost surely, where \(V := \inf \{t \ge 0 : X_t - \alpha t \le x\}\) [4, Theorem III.4]. Therefore,
$$\begin{aligned} X_S - \alpha S < \inf \{ X_{u} - \alpha u : u \le 0 \} \quad \text{ a.s. } \end{aligned}$$If \(T=S\), then \(T=S=D\) by Corollary 9.5, and then
$$\begin{aligned} X_S&= X_D \wedge X_{D-} \\&= X_G \wedge X_{G-} + \alpha (D-G) \\&= X_G \wedge X_{G-} + \alpha (S-G), \end{aligned}$$which results in the contradiction
$$\begin{aligned} X_G \wedge X_{G-} - \alpha G = X_S - \alpha S < \inf \{ X_{u} - \alpha u : u \le 0 \}. \end{aligned}$$Thus, \(T<S \le D\) almost surely. The results for \(G\) now follow by a time reversal argument.
-
(c)
Suppose \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| < \alpha \). We know from Theorem 4.1 and Remark 4.2 that the subordinator associated with \(\fancyscript{Z}\) has non-zero drift and so \(\fancyscript{Z}\) has positive Lebesgue measure almost surely. The subset of points of \(\fancyscript{Z}\) that are isolated on either the left or the right is countable and hence has zero Lebesgue measure. It follows from the stationarity of \(\fancyscript{Z}\) that \(G = T = S = D = 0\) almost surely on the event \(\{0 \in \fancyscript{Z}\}\). The remaining statements can be read from Corollary 9.5. \(\square \)
Corollary 4.6
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Suppose further that \(\delta = 0\), so that \(G < 0 < D\) almost surely, and that when \(X\) has paths of bounded variation the drift coefficient \(d\) satisfies \(|d| \ne \alpha \). Then, \(G<T<D\) almost surely.
Proof
We know from parts (a) and (b) of Proposition 4.5 that the result holds when \(X\) has paths of unbounded variation or \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| > \alpha \). As remarked in the proof of part (c) of Proposition 4.5, it follows from Theorem 4.1 and Remark 4.2 that \(\delta >0\) when \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| < \alpha \). \(\square \)
Theorem 4.7
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2 and \(\varPi (\mathbb{R }) = \infty \). Then, \(\varLambda (\mathbb{R }_+) < \infty \) if and only if
Proof
Suppose first that (4.9) holds. Note that one of the integrals in (4.6) must then be infinite, and hence, as noted in the proof of Theorem 4.1, \(\delta = 0\) and so \(G < 0 < D\) almost surely. Now, \(X_t \wedge X_{t-} > M_t = X_{D} \wedge X_{D-} - \alpha (t-D)\) for \(T \le t < D\). Moreover, \(X_t \wedge X_{t-} \ge M_t \ge X_{D} \wedge X_{D-} - \alpha (t-D)\) for all \(t \ge D\) by the definition of the Lipschitz minorant. It follows that \(X_t \wedge X_{t-} + \alpha t \ge X_{D} \wedge X_{D-} + \alpha D\) for \(t \ge T\) and, in particular, \(D\) is the time of a local infimum of the process \((X_t + \alpha t)_{t \ge 0}\) on the event \(\{T < D\}\). Theorem 3.1 and (4.9) imply that almost surely on the event \(\{T < D\}\) there exists \(\varepsilon >0\) such that \((X_{D + s} + \alpha s)-X_D \wedge X_{D-} > 2 \alpha s\) for all \(0 < s \le \varepsilon \). Thus, if
then \(D^{\prime }>D\) almost surely on the event \(\{T < D\}\). By the regenerative property of \(\fancyscript{Z}\), the event \(\{D^{\prime }=D\}\) has probability zero or one, and in the latter case \(\fancyscript{Z}\) is discrete almost surely.
Thus, \(\mathbb{P }\{T < D\} > 0\) implies that \(\fancyscript{Z}\) is discrete almost surely and hence \(\varLambda (\mathbb{R }_+) < \infty \). However, if \(\mathbb{P }\{T < D\} = 0\), then \(\mathbb{P }\{T > G\} = 1\), and the above argument combined with a time reversal shows that \(\varLambda (\mathbb{R }_+) < \infty \) in this case as well.
Turning to the converse, suppose that (4.9) fails. If \(\sigma = 0\) and \(\delta > 0\), then \(\varLambda (\mathbb{R }_+) = \infty \) as discussed in Remark 4.4, and if \(\sigma > 0\), then \(\delta = 0\) as discussed in Remark 4.2(i). Thus we henceforth assume that \(\delta = 0\).
As above, \(D\) is the time of a local infimum of the process \((X_t + \alpha t)_{t \ge 0}\) on the event \(\{T < D\}\). Suppose for the moment that \(\mathbb{P }\{T < D\} > 0\). If it were the case that \(\varLambda (\mathbb{R }_+) < \infty \) and hence \(D^{\prime } > D\) almost surely, then it would follow that
almost surely. However, the failure of (4.9) and Theorem 3.1 imply that \(T_D^{(\alpha )} = 0\) on the event \(\{T<D\}\), and hence \(D^{\prime } = D\) almost surely. Since \(\delta = 0\), this is only possible if \(\varLambda (\mathbb{R }_+) = \infty \).
Lastly, if \(\mathbb{P }\{T < D\} = 0\), then \(\mathbb{P }\{T > G\} = 1\), and the above argument combined with a time reversal shows that \(\varLambda (\mathbb{R }_+) = \infty \) in this case as well.\(\square \)
Remark 4.8
An example of a process that satisfies our standing assumptions and for which (4.9) fails for all \(\alpha >0\) is given by truncating the Lévy measure of the symmetric Cauchy process to remove all jumps with magnitude greater than \(m\), so that the Lévy measure becomes \(1_{|x| \le m}x^{-2} \, dx\). To see this, first note that (4.9) fails for the symmetric Cauchy process because, by the self-similarity properties of this process, the probability that it lies in an interval of the form \((at, bt)\) at time \(t > 0\) does not depend on \(t\) and \(\int _0^1 t^{-1} \, dt = \infty \). Then observe that the difference between the probabilities that the truncated process and the symmetric Cauchy process lie in some interval at time \(t\) is at most the probability that the symmetric Cauchy process has at least one jump of size greater than \(m\) before time \(t\). The latter probability is \(1-e^{-\lambda t}\) with \(\lambda = 2 \int _m^\infty x^{-2} \, dx < \infty \), and \(\int _0^1 t^{-1}(1-e^{-\lambda t}) \, dt < \infty \).
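Both numerical facts used in this remark are elementary to check. A sketch (function names are ours): the rate of jumps of magnitude greater than \(m\) is \(\lambda = 2\int _m^\infty x^{-2} \, dx = 2/m\), and the integrand \(t^{-1}(1-e^{-\lambda t})\) tends to \(\lambda \) as \(t \downarrow 0\) and is bounded, so its integral over \((0,1]\) is finite.

```python
import math

def jump_rate(m):
    # rate of jumps of magnitude > m under the truncated Cauchy Levy
    # measure's complement: lambda = 2 * \int_m^infty x^{-2} dx = 2/m
    return 2.0 / m

def tail_integral(lam, n=100000):
    # midpoint rule for \int_0^1 t^{-1} (1 - e^{-lam*t}) dt; the integrand
    # tends to lam as t -> 0 and is bounded, so the integral converges
    h = 1.0 / n
    return h * sum((1.0 - math.exp(-lam * (k + 0.5) * h)) / ((k + 0.5) * h)
                   for k in range(n))
```

For \(m = 1\) this gives \(\lambda = 2\), and the integral evaluates to roughly \(1.32\) (in closed form, \(\gamma + \ln \lambda + E_1(\lambda )\)).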
Remark 4.9
If \(X\) is a symmetric \(\beta \)-stable process for \(1 < \beta \le 2\), then (4.9) holds for all \(\alpha >0\). To see this, first note that \(X_1\) has a bounded density. Hence, by scaling, \(\mathbb{P }\{X_t \in [-\alpha t, \alpha t]\} = \mathbb{P }\{t^{1/\beta } X_1 \in [-\alpha t, \alpha t]\} \le c t^{1-1/\beta }\) for some constant \(c\) depending on \(\alpha \), and then observe that \(\int _0^1 t^{-1} t^{1-1/\beta } \, dt < \infty \). Further cases where (4.9) holds for all \(\alpha >0\) are discussed in Remark 5.3.
In Sect. 7 we prove the following result, which characterizes the subordinator associated with \(\fancyscript{Z}\) when \(X\) has paths of unbounded variation and satisfies certain extra conditions. In Corollary 6.5 we show that these extra conditions hold when \(X\) has non-zero Brownian component. Note that the conclusion \(\delta = 0\) in the result follows from Remark 4.2(i).
Theorem 4.10
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2 and has paths of unbounded variation almost surely. Suppose further that \(X_t\) has absolutely continuous distribution for all \(t \ne 0\), and that the densities of the random variables \(\inf _{t \ge 0} \{ X_t + \alpha t \}\) and \(\inf _{t \ge 0} \{ X_{-t} + \alpha t \}\) are square integrable. Then, \(\delta = 0\) and \(\varLambda \) is characterized by
for \(\theta \ge 0\), and, moreover, \(\varLambda (\mathbb{R }_+) < \infty \).
Note that the existence of the densities of the infima in the hypotheses of Theorem 4.10 comes from the assumption that \(X_t\) has an absolutely continuous distribution for all \(t \ne 0\)—see Lemma 6.2.
When the conditions of Theorem 4.10 are not satisfied, we are able to give a characterization of \(\varLambda \) as a limit of integrals in the following way. Let \(X^\varepsilon = X + \varepsilon B\), with \(B\) a (two-sided) standard Brownian motion independent of \(X\), and let \(\varLambda ^\varepsilon \) be the Lévy measure of the subordinator associated with the contact set for \(X^\varepsilon \). Then, in the case \(\delta = 0\) we have the representation
See Lemma 7.3 in Sect. 7 for details of this limit and (7.10) for a proof of the above equality.
Theorem 4.7, together with the conclusion \(\varLambda (\mathbb{R }_+) < \infty \) of Theorem 4.10, yields the following.
Corollary 4.11
If the conditions of Theorem 4.10 are satisfied, then
5 The limit of the contact set for increasing slopes
We now investigate how \(\fancyscript{Z}\) changes as \(\alpha \) increases. For the sake of clarity, let \(X\) be a fixed Lévy process with \(X_0=0\) and \(\mathbb{E }[|X_1|] < \infty \). Write \(M^{(\alpha )} = (M^{(\alpha )}_t)_{t \in \mathbb{R }}\) for the \(\alpha \)-Lipschitz minorant of \(X\) for \(\alpha > |\mathbb{E }[X_1]|\), and put \(\fancyscript{Z}_\alpha := \{ t \in \mathbb{R }: X_t \wedge X_{t-} = M^{(\alpha )}_t \}\). For \(|\mathbb{E }[X_1]| < \alpha ^{\prime } \le \alpha ^{\prime \prime }\), we have \(M^{(\alpha ^{\prime })}_t \le M^{(\alpha ^{\prime \prime })}_t \le X_t\) for all \(t \in \mathbb{R }\) (because any \(\alpha ^{\prime }\)-Lipschitz function is also \(\alpha ^{\prime \prime }\)-Lipschitz), and so \(\fancyscript{Z}_{\alpha ^{\prime }} \subseteq \fancyscript{Z}_{\alpha ^{\prime \prime }}\). We note in passing that \(\fancyscript{Z}_{\alpha ^{\prime }}\) is regeneratively embedded in \(\fancyscript{Z}_{\alpha ^{\prime \prime }}\) in the sense of Bertoin [5].
If \(X\) has paths of bounded variation and drift coefficient \(d\), then \(|d| < \alpha \) for all \(\alpha \) large enough. Since \(\lim _{t \downarrow 0} t^{-1} X_t = - \lim _{t \downarrow 0} t^{-1} X_{-t} = d\), the law of large numbers implies that
and thus the set \(\bigcup _{\alpha > | \mathbb{E }[X_1] | } \fancyscript{Z}_\alpha \) has full Lebesgue measure.
We now consider the case where \(X\) has paths of unbounded variation. In order to state our result, we need to recall the definition of the so-called abrupt Lévy processes introduced by Vigon [49]. Recall from (3.1) that \(\fancyscript{M}^-\) is the set of times of local infima of the path of \(X\), and from Proposition 3.4 that, if the paths of \(X\) have unbounded variation, then almost surely \(X_{t-} = X_t\) for all \(t \in \fancyscript{M}^-\).
Definition 5.1
A Lévy process \(X\) is abrupt if its paths have unbounded variation and almost surely for all \(t \in \fancyscript{M}^-\)
Remark 5.2
An equivalent definition may be made in terms of local suprema [49, Remark 1.2]: a Lévy process \(X\) with paths of unbounded variation is abrupt if almost surely for any \(t\) that is the time of a local supremum,
Remark 5.3
A Lévy process \(X\) with paths of unbounded variation is abrupt if and only if
(see [49, Theorem 1.3]). Examples of abrupt Lévy processes include stable processes with stability parameter in the interval \((1,2]\), processes with non-zero Brownian component, and any process that creeps upwards or downwards. An example of an unbounded variation process that is not abrupt is the symmetric Cauchy process.
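The dichotomy behind (5.1) is a pure scaling computation, sketched below (function names are ours): for the symmetric Cauchy process \(X_t/t\) has the standard Cauchy law, so the cone probability is constant in \(t\) and the integral diverges like \(\int _0^1 t^{-1} \, dt\), whereas for Brownian motion \(X_t/\sqrt{t}\) is standard normal, so the probability decays like \(\sqrt{t}\) and the integral converges.

```python
import math

def cauchy_cone_prob(a, b, t):
    # symmetric Cauchy: X_t / t has the standard Cauchy law, so
    # P{X_t in [a t, b t]} = (arctan b - arctan a)/pi, independent of t
    return (math.atan(b) - math.atan(a)) / math.pi

def brownian_cone_prob(a, b, t):
    # Brownian motion: X_t / sqrt(t) is standard normal, so
    # P{X_t in [a t, b t]} decays like sqrt(t) as t -> 0
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return Phi(b * math.sqrt(t)) - Phi(a * math.sqrt(t))
```

With \(a = -1,\,b = 1\), the Cauchy probability equals \(1/2\) at every \(t\), while the Brownian one vanishes as \(t \downarrow 0\).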
Remark 5.4
The analytic condition (5.1) for a Lévy process \(X\) to be abrupt in Remark 5.3 has an interpretation in terms of the smoothness of the convex minorant of \(X\) over a finite interval. The results of Pitman and Uribe Bravo [38] imply that the number of segments of the convex minorant of \(X\) over a finite interval that have slope between \(a\) and \(b\) is finite for all \(a < b\) if and only if (5.1) holds.
We now return to the question of the limit of \(\fancyscript{Z}_\alpha \) as \(\alpha \rightarrow +\infty \).
Theorem 5.5
Let \(X\) be a Lévy process with \(X_0=0\) and \(\mathbb{E }[ |X_1| ] < \infty \). Then \( \bigcup _{\alpha > | \mathbb{E }[X_1] |} \fancyscript{Z}_\alpha \supseteq \fancyscript{M}^-\). Furthermore, if \(X\) is abrupt, then \( \bigcup _{\alpha > | \mathbb{E }[X_1] |} \fancyscript{Z}_\alpha = \fancyscript{M}^-\).
Proof
Suppose that \(t \in \fancyscript{M}^-\) so that there exists \(\varepsilon > 0\) such that \(\inf \{X_s : t-\varepsilon < s < t+\varepsilon \} = X_t = X_{t-}\). Fix any \(\beta > | \mathbb{E }[X_1] |\). Then, by the strong law of large numbers, \(\inf \{ X_s + \beta s : s \ge 0\} > - \infty \) and \(\inf \{ X_s - \beta s : s \le 0 \} > - \infty \). It is clear that if \(\alpha \in \mathbb{R }\) is such that
then \(X_t = X_{t-} = M^{(\alpha )}_t\) and \(t \in \fancyscript{Z}_\alpha \). Hence \( \bigcup _{\alpha > | \mathbb{E }[X_1] |} \fancyscript{Z}_\alpha \supseteq \fancyscript{M}^-\).
Now suppose that \(X\) is abrupt, and let \(t \in \fancyscript{Z}_\alpha \) for some \(\alpha > | \mathbb{E }[X_1] |\). Then, one of the following three possibilities must occur:
-
(a)
\(X_t > X_{t-}\) and \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) \le \alpha \);
-
(b)
\(X_{t-} > X_t\) and \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) \ge - \alpha \);
-
(c)
\(X_{t-} = X_t\) and \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) \le \alpha ,\,\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) \ge -\alpha \).
We rule out possibilities (a) and (b) by assuming that \(t\) is a jump time of \(X\) and then showing that the \(\liminf \) or \(\limsup \) part of the statement cannot occur. Our argument borrows heavily from the proof of Property 2 in [38, Proposition 1], which itself is based on the proof of [31, Proposition 2.4], but is more detailed.
Arguing as in the proof of Proposition 3.4, for \(\delta >0\), let \(0 < J_1^\delta < J_2^\delta < \ldots \) be the successive times at which \(X\) has jumps of size greater than \(\delta \) in absolute value. The strong Markov property applied at the stopping time \(J_i^\delta \), together with (2.2), gives that
Hence, at any random time \(V\) such that \(X_{V} \ne X_{V-}\) almost surely we have
and, by a time reversal,
Thus, neither possibility (a) nor (b) holds, and so (c) must hold. It then follows from Theorem 5.6 below that \(X\) must have a local infimum or supremum at \(t\). However, \(X\) cannot have a local supremum at \(t\) by Remark 5.2, and so \(X\) must have a local infimum at \(t\). \(\square \)
The key to proving Theorem 5.5 in the abrupt case was the following theorem that describes the local behavior of an abrupt Lévy process at arbitrary times. This result is an immediate corollary of [49, Theorem 2.6] once we use the fact that almost surely the paths of a Lévy process cannot have both points of increase and points of decrease [17].
Theorem 5.6
Let \(X\) be an abrupt Lévy process. Then, almost surely for all \(t\) one of the following possibilities must hold:
- (i) \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) = +\infty \) and \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) = -\infty \);
- (ii) \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) < +\infty \) and \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) = -\infty \);
- (iii) \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) = +\infty \) and \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) > -\infty \);
- (iv) \(X\) has a local infimum or supremum at \(t\).
Remark 5.7
Theorem 5.5 shows that the \(\alpha \)-Lipschitz minorant provides a method for “sieving out” a certain discrete set of times of local infima of an abrupt process. This method has the property that if we let \(\alpha \rightarrow \infty \), then eventually we collect all the times of local infima. Alternative methods for sieving out the local minima of Brownian motion are discussed in [34, 35]. One method is to take all local infima times \(t\) such that \(X_{t+s} - X_t > 0\) for all \(s \in ( -h, h)\) for some fixed \(h\), and then let \(h \rightarrow 0\). Another is to take all local infima times \(t\) such that \(X_{s_+} - X_t \ge h\) for some time \(s_+ \in (0, \inf \{ s > 0 : X_s - X_t = 0 \})\) and such that \(X_{s_-} - X_t \ge h\) for some time \(s_- \in (0, \inf \{ s < 0 : X_s - X_t = 0 \})\), and then again let \(h \rightarrow 0\). This work is extended to Brownian motion with drift in [15].
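The first of these sieving schemes is simple enough to experiment with numerically. The following sketch (ours, not from the paper; the grid, path, and window choices are arbitrary) collects the \(h\)-local minima of a discretized Brownian path and illustrates that shrinking \(h\) only ever enlarges the collection:

```python
import numpy as np

def h_local_minima(x, h, dt):
    """Indices i with x[j] > x[i] for every j != i within h/dt grid steps:
    a discrete stand-in for times t with X_{t+s} - X_t > 0 for s in (-h, h)."""
    w = max(1, int(h / dt))
    idx = []
    for i in range(len(x)):
        lo, hi = max(0, i - w), min(len(x), i + w + 1)
        window = np.concatenate([x[lo:i], x[i + 1:hi]])
        if window.size and np.all(window > x[i]):
            idx.append(i)
    return idx

rng = np.random.default_rng(0)
dt = 1e-3
x = np.cumsum(rng.normal(0.0, np.sqrt(dt), 5000))  # discretized Brownian path

# Shrinking h keeps more local minima: the window condition is over a
# smaller set of neighbors, so every h-local minimum survives for smaller h.
counts = [len(h_local_minima(x, h, dt)) for h in (0.5, 0.1, 0.02)]
print(counts)
```

Any \(h_2\)-local minimum is automatically an \(h_1\)-local minimum for \(h_1 < h_2\), which is why the counts are monotone; letting \(h \downarrow 0\) recovers all local minima of the discrete path.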
6 Future infimum of a Lévy process
For future use, we collect together in this section some preliminary results concerning the distribution of the infimum of a Lévy process \((Z_t)_{t \ge 0}\) and the time at which the infimum is attained. Further interesting results in this direction, proved using excursion theory, may be found in recent work by Chaumont [10].
Remark 6.1
Let \(Z = (Z_t)_{t \ge 0}\) be a Lévy process such that \(Z_0 = 0\). Set \(\underline{Z}_t := \inf \{Z_s : 0 \le s \le t\},\,t \ge 0\). If \(Z\) is not a compound Poisson process (that is, either \(Z\) has a non-zero Brownian component or the Lévy measure of \(Z\) has infinite total mass or the Lévy measure has finite total mass but there is a non-zero drift coefficient), then
– see, for example, [4, Proposition VI.4]. Hence, almost surely for each \(t \ge 0\) there is a unique time \(U_t\) such that \(Z_{U_t} \wedge Z_{U_t-} = \underline{Z}_t\). If, in addition, \(\lim _{t \rightarrow \infty } Z_t = +\infty \), then almost surely there is a unique time \(U_\infty \) such that \(Z_{U_\infty } \wedge Z_{U_\infty -} = \underline{Z}_\infty := \inf \{Z_s : s \ge 0\}\).
Lemma 6.2
Let \(Z\) be a Lévy process such that \(Z_0 = 0,\,Z_t\) has an absolutely continuous distribution for each \(t>0\), and \(\lim _{t \rightarrow \infty } Z_t = +\infty \). Then, the distribution of \((U_\infty ,\underline{Z}_\infty )\) restricted to \((0,\infty ) \times (-\infty ,0]\) is absolutely continuous with respect to Lebesgue measure. Moreover, \(\mathbb{P }\{(U_\infty , \underline{Z}_\infty ) = (0,0)\} > 0\) if and only if zero is not regular for \((-\infty ,0)\).
Proof
Because the random variable \(Z_t\) has an absolutely continuous distribution for each \(t>0\), it follows from [38, Theorem 2] that for all \(t > 0\) the restriction of the distribution of the random vector \((U_t,\underline{Z}_t)\) is absolutely continuous with respect to Lebesgue measure on the set \((0,t] \times (-\infty ,0]\). Observe that
Thus, if \(A \subseteq (0,\infty ) \times (-\infty ,0]\) is Borel with zero Lebesgue measure, then
The proof of the claim concerning the atom at \((0,0)\) follows from the above formula, the fact that \(\mathbb{P }\{(U_t, \underline{Z}_t) = (0,0)\} > 0\) if and only if zero is not regular for the interval \((-\infty ,0)\) [38, Theorem 2], and the hypothesis that \(\lim _{t \rightarrow \infty } Z_t = +\infty \). \(\square \)
Remark 6.3
Note that if the process \(Z\) has a non-zero Brownian component, then the random variable \(Z_t\) has an absolutely continuous distribution for all \(t>0\). Moreover, in this case zero is regular for the interval \((-\infty ,0)\).
Let \(\tau = (\tau _t)_{t \ge 0}\) be the local time at zero for the process \(Z - \underline{Z}\). Write \(\tau ^{-1}\) for the inverse local time process. Set \(\underline{H}_t := \underline{Z}_{\tau ^{-1}(t)}\). The process \(\underline{H} := (\underline{H}_t)_{t \ge 0}\) is the descending ladder height process for \(Z\). If \(\lim _{t \rightarrow \infty } Z_t = +\infty \), then \(\hat{\underline{H}} := -\underline{H}\) is a subordinator killed at an independent exponential time (see, for example, [4, Lemma VI.2]).
For the sake of completeness, we include the following observation that combines well-known results and probably already exists in the literature—it can be easily concluded from Theorem 19 and the remarks at the top of page 172 of [4].
Lemma 6.4
Let \(Z\) be a Lévy process such that \(Z_0 = 0\) and \(\lim _{t \rightarrow \infty } Z_t = +\infty \). Then, the distribution of the random variable \(\underline{Z}_\infty \) is absolutely continuous with a bounded density if and only if the (killed) subordinator \(\hat{\underline{H}}\) has a positive drift coefficient.
Proof
Let \(S = (S_t)_{t \ge 0}\) be an (unkilled) subordinator with the same drift coefficient and Lévy measure as \(\hat{\underline{H}}\), so that \(-\underline{Z}_\infty \) has the same distribution as \(S_\zeta \), where \(\zeta \) is an independent, exponentially distributed random time. Therefore, for some \(q>0\),
for any Borel set \(A \subseteq \mathbb{R }\). By a result of Kesten for general Lévy processes (see, for example, [4, Theorem II.16]) the \(q\)-resolvent measure \(\int _0^\infty e^{-qt} \mathbb{P }\{S_t \in \cdot \} \, dt\) of \(S\) is absolutely continuous with a bounded density for all \(q>0\) (equivalently, for some \(q>0\)) if and only if points are not essentially polar for \(S\). Moreover, points are not essentially polar for a Lévy process with paths of bounded variation (and, in particular, for a subordinator) if and only if the process has a non-zero drift coefficient [4, Corollary II.20]. \(\square \)
Corollary 6.5
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2 and which has paths of unbounded variation almost surely. Then, the random variables \(\inf \{ X_t + \alpha t : t \ge 0 \}\) and \(\inf \{ X_t - \alpha t : t \le 0 \}\) both have absolutely continuous distributions with bounded densities if and only if \(X\) has a non-zero Brownian component.
Proof
By Lemma 6.4, the distributions in question are absolutely continuous with bounded densities if and only if the drift coefficients of the descending ladder processes for the two Lévy processes \((X_t + \alpha t)_{t \ge 0}\) and \((-X_{t} + \alpha t)_{t \ge 0}\) are non-zero. By the results of [30] (see also [4, Theorem VI.19]), this occurs if and only if both \((X_t + \alpha t)_{t \ge 0}\) and \((-X_{t} + \alpha t)_{t \ge 0}\) have positive probability of creeping down across \(x\) for some (equivalently, all) \(x<0\), where we recall that a Lévy process creeps down across \(x < 0\) if the first passage time in \((-\infty , x)\) is not a jump time for the path of the process. Equivalently, both densities exist and are bounded if and only if the Lévy process \((X_t + \alpha t)_{t \ge 0}\) creeps downwards and the Lévy process \((X_t - \alpha t)_{t \ge 0}\) creeps upwards, where the latter notion is defined in the obvious way.
A result of Vigon [48] (see also [13, Chapter 6, Corollary 9]) states that when the paths of \(X\) have unbounded variation, \((X_t + \alpha t)_{t \ge 0}\) creeps downward if and only if \(X\) creeps downward, and hence, in turn, if and only if \((X_t - \alpha t)_{t \ge 0}\) creeps downwards. A similar result applies to creeping upwards.
Thus, both densities exist and are bounded if and only if \(X\) creeps downwards and upwards. This occurs if and only if the ascending and descending ladder processes of \(X\) have positive drifts [4, Theorem VI.19], which happens if and only if \(X\) has a non-zero Brownian component [13, Chapter 4, Corollary 4(i)] (or see the remark after the proof of [4, Theorem VI.19]). \(\square \)
7 The complementary interval straddling zero
7.1 Distributions in the case of a non-zero Brownian component
Suppose that \(X = (X_t)_{t \in \mathbb{R }}\) is a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Also, suppose until further notice that \(X\) has a non-zero Brownian component.
Recall that \(M = (M_t)_{t \in \mathbb R }\) is the \(\alpha \)-Lipschitz minorant of \(X\) and \(\fancyscript{Z}\) is the stationary regenerative set \(\{t \in \mathbb{R }: X_t \wedge X_{t-} = M_t\}\). Recall also that \(K = D - G\), where \( G = \sup \{ t < 0 : X_t \wedge X_{t-} = M_t \} = \sup \{t < 0 : t \in \fancyscript{Z}\} \) and \( D = \inf \{ t > 0 : X_t \wedge X_{t-} = M_t \} = \inf \{t > 0 : t \in \fancyscript{Z}\} \). Lastly, recall that \(T\) is the unique \(t \in [G,D]\) such that \(M_t = \max \{M_s : s \in [G, D]\}\).
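As a rough numerical illustration (our own sketch, with an ad hoc discretization on a finite window rather than anything from the paper), one can approximate \(M\) by the inf-convolution formula of Lemma 9.1 and read off discrete analogues of \(\fancyscript{Z}\), \(G\), \(D\), and \(K\):

```python
import numpy as np

def lipschitz_minorant(t, f, alpha):
    """Grid approximation of the alpha-Lipschitz minorant via the
    inf-convolution m(t) = inf_s (f(s) + alpha*|t - s|); O(n^2) for clarity."""
    return np.array([np.min(f + alpha * np.abs(t - s)) for s in t])

rng = np.random.default_rng(1)
n, T, alpha, beta = 4001, 2.0, 2.0, 0.5       # drift beta with |beta| < alpha
t = np.linspace(-T, T, n)
dt = t[1] - t[0]
f = np.concatenate([[0.0], np.cumsum(rng.normal(beta * dt, np.sqrt(dt), n - 1))])
f -= f[n // 2]                                # center so that X_0 = 0

m = lipschitz_minorant(t, f, alpha)
contact = (f - m) <= 1e-9                     # discrete stand-in for the set Z

# G = last contact time before 0 and D = first contact time after 0, if any.
left = np.where(contact & (t < 0))[0]
right = np.where(contact & (t > 0))[0]
if left.size and right.size:
    G, D = t[left[-1]], t[right[0]]
    print(G, D, D - G)                        # K = D - G
```

The truncation to \([-T, T]\) and the grid make this only a caricature of the two-sided setup, but the computed \(m\) is \(\alpha \)-Lipschitz on the grid, dominated by \(f\), and touches \(f\) at the discrete contact set.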
Let \(f^+\) (respectively, \(f^-\)) be the joint density of the random variables we denoted by \((U_\infty ,\underline{Z}_\infty )\) in Lemma 6.2 in the case where the Lévy process \(Z\) is \((X_t + \alpha t)_{t \ge 0}\) (respectively, \((-X_t + \alpha t)_{t \ge 0}\)).
Proposition 7.1
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Suppose, moreover, that \(X\) has a non-zero Brownian component. Set \(L := T - G\) and \(R := D - T\). Then, the random vector \((T,L,R)\) has a distribution that is absolutely continuous with respect to Lebesgue measure with joint density
Therefore, \((T,G,D)\) also has an absolutely continuous distribution with joint density
and \(K\) has an absolutely continuous distribution with density
Proof
Observe that \(X\) is abrupt and so, by Theorem 4.7 and Remark 5.3, \(\fancyscript{Z}\) is a stationary discrete random set with intensity
Hence, the set of times of peaks of the \(\alpha \)-Lipschitz minorant \(M\) is also a stationary discrete random set with the same finite intensity. The point process consisting of a single point at time \(T\) is included in the set of times of peaks of \(M\), and so for \(A\) a Borel set with Lebesgue measure \(|A|\) we have
Thus, the distribution of \(T\) is absolutely continuous with respect to Lebesgue measure with density bounded above by \(\varLambda (\mathbb{R }_+)/\int _{\mathbb{R }_+} x \varLambda (dx)\).
It follows from the observations made in the proof of Theorem 2.6 about the nature of the global infimum of the process \(\tilde{X}\) that under our hypotheses, almost surely \(X_G = X_{G-} = M_T - \alpha |G-T| = M_T - \alpha L,\,X_D = X_{D-} = M_T - \alpha |D-T| = M_T - \alpha R\), and \(X_t \wedge X_{t-} > M_T - \alpha |t-T|\) for \(t \notin \{ G, D \}\). Thus,
and
Consequently,
Conversely, \((T,L,R)\) is the unique triple with \(T-L < 0 < T+R\) such that (7.1) holds.
Fix \(\tau \in \mathbb R \) and \(\lambda ,\rho \in \mathbb R _+\) such that \(\tau - \lambda < 0 < \tau + \rho \). Set
For \(0 < \varDelta \tau < \rho \) set
From (7.1) we have that
Similarly,
Observe that
By Corollary 6.5, the independent random variables \(\underline{Z}^-\) and \(\underline{Z}^+\) have densities bounded by some constant \(c\). Moreover, they are independent of \((X_{\tau +s})_{0 \le s \le \varDelta \tau }\). Conditioning on \(\underline{Z}^-\) and \((X_{\tau +s})_{0 \le s \le \varDelta \tau }\), we see that the last probability is, using \(| \cdot |\) to denote Lebesgue measure, at most
Consequently,
as \(\varDelta \tau \downarrow 0\).
The same argument shows that
as \(\varDelta \tau \downarrow 0\).
Now,
The random vectors \((U^-,\underline{Z}^-)\) and \((U^+, \underline{Z}^+)\) are independent with respective densities \(f^-\) and \(f^+\), and so the joint density of \((U^-, U^+, \underline{Z}^+ - \underline{Z}^- )\) is
Thus, using the facts that the random variable \(\underline{Z}^+ - \underline{Z}^-\) is independent of \(X_{\tau + \varDelta \tau } - X_\tau \) and the latter random variable has the same distribution as \(X_{\varDelta \tau }\),
By Fubini’s theorem,
Moreover, for any \(\epsilon > \varDelta \tau \),
Note that \((\varDelta \tau )^{-1} [(|X_{\varDelta \tau }|-(\epsilon -\varDelta \tau ))_+ \wedge (2 \varDelta \tau )] \le 2\) and that the random variable on the left of this inequality converges to \(0\) almost surely as \(\varDelta \tau \downarrow 0\). Hence, by bounded convergence,
Furthermore, the independent random variables \(\underline{Z}^-\) and \(\underline{Z}^+\) both have bounded densities by Corollary 6.5; that is, the functions \(h \mapsto \int _0^\infty du \, f^-(u,h)\) and \(h \mapsto \int _0^\infty dv \, f^+(v,h)\) both belong to \(L^1 \cap L^\infty \). Therefore, the functions \(h \mapsto \int _{\lambda }^\infty du \, f^-(u,h)\) and \(h \mapsto \int _{\rho }^\infty dv \, f^+(v,h)\) both certainly belong to \(L^1 \cap L^\infty \).
It now follows from the Lebesgue differentiation theorem that
for Lebesgue almost every \(h \in \mathbb{R }\). Moreover, the quantity on the left is bounded by \(\sup _{h \in \mathbb{R }} 2 \alpha \int _{\lambda }^\infty du \, f^-(u,h) < \infty \). Therefore, by (7.2), (7.3), (7.4), (7.5), and bounded convergence,
As we observed above, the measure \(\mathbb{P }\{T \in d\tau \}\) is absolutely continuous with density bounded above by \(\varLambda (\mathbb{R }_+) < \infty \), and so the same is certainly true of the measure \( \mathbb{P }\{ T \in d\tau , \, L > \lambda , \, R > \rho \} \) for fixed \(\lambda \) and \(\rho \). Therefore, by (7.6) and the Lebesgue differentiation theorem,
for any Borel set \(A \subseteq (-\rho ,\lambda )\), and this establishes that \((T,L,R)\) has the claimed density.
The remaining two claims follow immediately. \(\square \)
Corollary 7.2
Under the assumptions of Proposition 7.1,
Proof
From Proposition 7.1,
Viewing \(\int _0^\infty f^+(\xi ,h) e^{-\theta \xi } \, d \xi \) and \(\int _0^\infty f^-(\kappa ,h) e^{-\theta \kappa } \, d \kappa \) as functions of \(h\) that belong to \(L^1 \cap L^\infty \subset L^2\), we can use Plancherel’s Theorem and then the Pecherskii-Rogozin formulas [13, p. 28] to get that \(\mathbb{E }[e^{-\theta K}]\) is
\(\square \)
7.2 Extension to more general Lévy processes
Corollary 7.2 establishes Theorem 4.10 when \(X\) has a non-zero Brownian component. The next result allows us to establish the latter result for the class of Lévy processes described in its statement.
Lemma 7.3
Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Suppose, moreover, that if \(X\) has paths of bounded variation, then \(|d| \ne \alpha \). For \(\varepsilon > 0\) set \(X^\varepsilon = X + \varepsilon B\), where \(B\) is a standard Brownian motion on \(\mathbb{R }\), independent of \(X\). Define \(G^\varepsilon ,\,D^\varepsilon \) and \(K^\varepsilon = D^\varepsilon - G^\varepsilon \) to be the analogues of \(G,\,D\) and \(K\) with \(X\) replaced by \(X^\varepsilon \). Then, \((G^\varepsilon ,D^\varepsilon )\) converges almost surely to \((G,D)\) as \(\varepsilon \downarrow 0\), and so \(K^\varepsilon \) converges almost surely to \(K\) as \(\varepsilon \downarrow 0\).
Proof
By symmetry, it suffices to show that \(D^\varepsilon \) converges almost surely to \(D\) as \(\varepsilon \downarrow 0\). We first show the convergence on the event \(\{S>0\}\).
Let \(S^\varepsilon \) be the analogue of the stopping time \(S\) with \(X\) replaced by \(X^\varepsilon \). As we observed in the proof of Theorem 2.6, \(X_S - \alpha S = X_S \wedge X_{S-} - \alpha S \le \inf \{ X_u - \alpha u : u \le 0 \}\). If \(X\) has paths of unbounded variation or bounded variation with drift satisfying \(d < \alpha \), then, since \(S\) is a stopping time, \(\liminf _{u \downarrow S} (X_u - X_S - \alpha (u-S))/(u-S) < 0\). If \(X\) has paths of bounded variation with drift satisfying \(d > \alpha \), then by the remarks at the top of page 56 of [13], the downwards ladder height process of the process \((X_u - \alpha u)_{u \ge 0}\) (resp. \((-X_u + \alpha u)_{u \ge 0}\)) has zero drift (resp. non-zero drift). By Lemma 6.4, the distribution of \(\inf \{ X_u - \alpha u : u \le 0 \}\) is absolutely continuous with a bounded density, and hence
by Fubini’s theorem and the fact that the range of a subordinator with zero drift has zero Lebesgue measure almost surely.
For all three of these cases, given any \(\delta > 0\) we can, with probability one, thus find a time \(t \in (S,S+\delta )\) such that
By the strong law of large numbers for the Brownian motion \(B\),
Hence, \(X_t^\varepsilon \wedge X_{t-}^\varepsilon - \alpha t \le \inf \{ X_{u}^\varepsilon - \alpha u : u \le 0\}\) for \(\varepsilon \) sufficiently small, and so \(S^\varepsilon \le S + \delta \) for such an \(\varepsilon \). Therefore, \(\limsup _{\varepsilon \downarrow 0} S^\varepsilon \le S\).
On the other hand, for any \(\delta > 0\) we have
Thus, \(X_t^\varepsilon \wedge X_{t-}^\varepsilon - \alpha t > \inf \{ X_{u}^\varepsilon - \alpha u : u \le 0\}\) for all \(t \in [0, (S- \delta )_+]\) for \(\varepsilon \) sufficiently small, so that \(S^\varepsilon \ge (S - \delta )_+\). Therefore, \(\liminf _{\varepsilon \downarrow 0} S^\varepsilon \ge S\). Consequently, \(\lim _{\varepsilon \downarrow 0} S^\varepsilon = S\).
Now, as a result of the uniqueness of the global infima of Lévy processes that are not compound Poisson processes with zero drift [4, Proposition VI.4], and the law of large numbers applied to \(B\), we have
It follows readily that \(D^\varepsilon \) converges to \(D\) almost surely as \(\varepsilon \downarrow 0\) on the event \(\{S>0\}\).
Suppose now that we are on the event \(\{S=0\}\). Then, by Proposition 4.5, \(0 \in \fancyscript{Z}\) almost surely, and we may suppose that \(X\) satisfies the conditions of part (c) of that result, so that \(G=T=S=D=0\) almost surely. Then, by the strong law of large numbers for the Brownian motion \(B\), almost surely
Therefore, \(D^\varepsilon \) also converges to \(D\) almost surely as \(\varepsilon \downarrow 0\) on the event \(\{S = 0\}\).
\(\square \)
We are finally in a position to give the proof of Theorem 4.10. Suppose for the moment that \(X\) has a non-zero Brownian component. It follows from Theorem 4.1 that \(\delta = 0\), and hence from (4.7) we have that
By Corollary 7.2, this last integral is
as claimed in the theorem.
Now suppose \(X\) has zero Brownian component, but satisfies the conditions of Theorem 4.10. Since \(X\) has paths of unbounded variation almost surely, it follows from Remark 4.2 that \(\delta = 0\). Let \(X^\varepsilon = X + \varepsilon B\) and \(K^\varepsilon \) be as in Lemma 7.3, and let \(\varLambda ^\varepsilon \) be the Lévy measure of the subordinator associated with the set of points where \(X^\varepsilon \) meets its \(\alpha \)-Lipschitz minorant. By Lemma 7.3 we know that \(K^\varepsilon \rightarrow K\) almost surely, and thus since \(\delta = 0\), arguing as in (7.8) we have
Now, in the notation of the proof of Corollary 7.2, it can be seen that the square integrability of the densities of \(\inf _{t \ge 0} \{ X_t + \alpha t \}\) and \(\inf _{t \ge 0} \{ -X_t + \alpha t \}\) implies that
for all \(\theta \ge 0\). Thus, by the same methods used in the proof of Corollary 7.2 from the last line of (7.7) onwards, it follows that (7.9) is finite. Then, since for each fixed value of \(z\) the integrand in (7.9) is a product of characteristic functions of certain infima, and hence not equal to zero, we can apply Fubini’s theorem to swap the order of the integrals within the exponentials (here we are using the absolute continuity of the distribution of \(X_t\) for all \(t>0\)). We now have that the integrand for each fixed value of \(z\) with \(X_t\) replaced by \(X^\varepsilon _t\) converges to the integrand with just \(X_t\) as \(\varepsilon \rightarrow 0\). Then, by finiteness of (7.9), we have that (7.9) with \(X_t\) replaced by \(X^\varepsilon _t\) converges to (7.9).
It remains to show that \(\varLambda (\mathbb{R }_+) < \infty \). We have from (7.7) that
as \(\theta \rightarrow \infty \), and we conclude that \(\varLambda (\mathbb{R }_+) < \infty \). \(\square \)
8 Lipschitz minorants of Brownian motion
8.1 Williams’ path decomposition for Brownian motion with drift
We recall for later use a path decomposition due to David Williams that describes the distribution of a Brownian motion with positive drift in terms of the segment of the path up to the time the process achieves its global minimum and the segment of the path after that time—see [43, p. 436] or, for a concise description, [8, Sect. IV.5].
For \(\mu \in \mathbb{R }\), let \(Z^{(\mu )} = (Z_t^{(\mu )})_{t \ge 0}\) be a Brownian motion with drift \(\mu \) started at zero. Take \(\beta > 0\) and let \(E\) be a random variable that is independent of \(Z^{(-\beta )}\) and has an exponential distribution with mean \((2 \beta )^{-1}\). Set
Then, there is a diffusion \(W = (W_t)_{t \ge 0}\) with the properties
- (i) \(W\) is independent of \(Z^{(-\beta )}\) and \(E\);
- (ii) \(W_0 = 0\);
- (iii) \(W_t > 0\) for all \(t>0\) almost surely;
such that if we define a process \((\tilde{Z}_t)_{t \ge 0}\) by
then \(\tilde{Z}\) has the same distribution as \(Z^{(\beta )}\). Thus, in particular,
and the unique time that \(Z^{(\beta )}\) achieves its global minimum is distributed as \(T_E\). Recall also that
for \(h \le 0\) (see, for example, [8, page 295, equation \(2.2.0.1\)]).
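The exponential law of the global minimum is easy to corroborate by simulation. The following Monte Carlo sketch (ours; the horizon, step size, and tolerance are arbitrary choices) checks that the mean of \(-\inf _{t \ge 0} Z^{(\beta )}_t\) is close to \((2\beta )^{-1}\):

```python
import numpy as np

# Sanity check (a sketch, not from the paper): for beta > 0 the global
# infimum of a Brownian motion with drift beta is -Exp(2*beta) distributed,
# so the sample mean of -inf should be near 1/(2*beta), up to grid bias.
rng = np.random.default_rng(2)
beta, dt, horizon, n_paths = 1.0, 0.0025, 10.0, 4000
steps = int(horizon / dt)
mins = np.empty(n_paths)
for i in range(n_paths):
    path = np.cumsum(rng.normal(beta * dt, np.sqrt(dt), steps))
    mins[i] = min(path.min(), 0.0)            # include the starting value Z_0 = 0
est = -mins.mean()                            # target: 1/(2*beta) = 0.5
print(round(est, 3))
```

The estimate sits slightly below \(1/(2\beta )\) because a discrete grid misses the between-grid minima; the finite horizon costs essentially nothing here since the drift makes a late global minimum exponentially unlikely.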
8.2 Random variables related to the Brownian Lipschitz minorant
Proposition 8.1
Let \(X\) be a Brownian motion with drift \(\beta \), where \(|\beta | < \alpha \). Then, the distribution of \(K\) is characterized by
for \(\theta \ge 0\), and hence \(\varLambda \) is characterized by
for \(\theta \ge 0\).
Proof
We have from [8, page 269, Eq. 1.14.3(1)] that
and
Thus, from (7.7),
as required.
Now, by (7.8),
after a little algebra. \(\square \)
Remark 8.2
There is an alternative way to verify that the Laplace transform for \(K\) presented in Proposition 8.1 is correct. Recall from the proof of Theorem 2.6 that \(D = S + \tilde{T}\), where the independent random variables \(S\) and \(\tilde{T}\) are defined by
and
with
Set \(I^- := \inf \{ X_u - \alpha u : u \le 0\}\). Because \((X_{-t} + \alpha t)_{t \ge 0}\) is a Brownian motion with drift \(\alpha - \beta \), we know from Sect. 8.1 that \(-I^-\) has an exponential distribution with mean \((2(\alpha - \beta ))^{-1}\). Now \((X_t - \alpha t)_{t \ge 0}\) is a Brownian motion with drift \(\beta - \alpha \), and so, again from Sect. 8.1, \(S\) is distributed as the time until this process achieves its global minimum. It follows that
and
—see, for example, [8, page 266, Eq. 1.12.3(2)].
By stationarity, \(D\) has the same distribution as \(U(D-G) = UK\), where \(U\) is an independent random variable that is uniformly distributed on \([0,1]\). Thus,
and
This equality agrees with the one found in Proposition 8.1. Differentiating the expression on the right with respect to \(\theta \) and recalling the observation (7.8), we arrive at the expression for the Laplace transform of \(K\) in Proposition 8.1.
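The identity \(\mathbb{E }[e^{-\theta D}] = \mathbb{E }[e^{-\theta U K}] = \mathbb{E }[(1 - e^{-\theta K})/(\theta K)]\) behind this computation can be checked numerically; in the sketch below (ours) we substitute a toy exponential law for \(K\), since the identity holds for any positive distribution:

```python
import numpy as np

# Numeric check of the stationarity identity: if D =_d U*K with U ~ Uniform[0,1]
# independent of K, then E[exp(-theta*D)] = E[(1 - exp(-theta*K)) / (theta*K)],
# because the inner average over U of exp(-theta*u*K) is (1 - exp(-theta*K))/(theta*K).
rng = np.random.default_rng(3)
theta = 1.7
K = rng.exponential(1.0, 200_000)   # hypothetical stand-in law; any positive K works
U = rng.uniform(size=K.size)
lhs = np.exp(-theta * U * K).mean()                      # E[exp(-theta*U*K)]
rhs = ((1.0 - np.exp(-theta * K)) / (theta * K)).mean()  # conditional average over U
print(round(lhs, 3), round(rhs, 3))
```

For the exponential stand-in the common value is also available in closed form, \(\log (1+\theta )/\theta \), which the Monte Carlo estimates reproduce.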
Proposition 8.3
Let \(X\) be a Brownian motion with zero drift. Then,
where \(\varPhi \) is the standard normal cumulative distribution function. Thus,
Proof
We have from [8, page 269, Eq. 1.14.4(1)] that
Thus, by Proposition 7.1,
The change of variable \(y = \frac{1}{\xi (1-\xi )} -4\) gives that
for any \(c>0\), and hence
The further change of variable \(z = 2 \kappa ^{-1/2} h - \alpha \kappa ^{1/2} \) leads to
Because \(\varLambda (dx)\) is proportional to \(x^{-1} \mathbb{P }\{K \in dx\}\), we need only find \(\int _{\mathbb{R }_+} x^{-1} \mathbb{P }\{K \in dx\}\) to establish the claim for \(\varLambda \), and this can be done using methods of integration similar to those used in Remark 8.4 below to check that the density of \(K\) integrates to one. \(\square \)
Remark 8.4
We can check directly that the density given for \(K\) integrates to one. For the first term, we use the substitution \(\eta = \alpha ^2 \kappa /2\), and for the second we use the substitution \(\eta = \alpha ^2 \kappa \) and then change the order of integration to get that the integral of the claimed density is
Proposition 8.5
Let \(X\) be a Brownian motion with drift \(\beta \), where \(|\beta | < \alpha \). Recall that \(T := \arg \max \{M_t : G \le t \le D\}\) and \(H := X_T - M_T \). Then, \(H\) has a \(\mathrm{Gamma}(2, 4 \alpha )\) distribution; that is, the distribution of \(H\) is absolutely continuous with respect to Lebesgue measure with density \(h \mapsto (4 \alpha )^2 h e^{-4 \alpha h},\,h \ge 0\). Also,
and the distribution of \(T\) is characterized by
for \(- \frac{(\alpha -\beta )^2}{2} \le \theta \le \frac{(\alpha +\beta )^2}{2}\).
Proof
Consider the claim regarding the distribution of \(H\). A slight elaboration of the proof of Proposition 7.1 shows, in the notation of that result, that the random vector \((T,L,R,-H)\) has a distribution that is absolutely continuous with respect to Lebesgue measure with joint density \((\tau ,\lambda ,\rho ,\eta ) \mapsto 2 \alpha f^-(\lambda ,\eta ) f^+(\rho ,\eta ),\,\lambda ,\rho >0,\,\tau - \lambda < 0 < \tau + \rho ,\,\eta < 0\). Therefore,
By (8.2),
Combining this with (8.3) gives
Similarly,
and
Thus,
Note that \(T>0\) if and only if \(I^+ > I^-\), where
Recall from Sect. 8.1 that the independent random variables \(I^+\) and \(I^-\) are exponentially distributed with respective means \((2(\alpha + \beta ))^{-1}\) and \((2(\alpha - \beta ))^{-1}\). It follows that
We can also derive this last result from Proposition 7.1 as follows.
Substituting in (8.5) and (8.6), and then evaluating the resulting straightforward integral establishes the result.
The Laplace transform of \(T\) may be calculated using very similar methods. \(\square \)
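Two of the computations in the proof are easy to corroborate numerically. The sketch below (ours, with arbitrary parameter values) checks the exponential-race probability \(\mathbb{P }\{I^+ > I^-\}\), which by the standard race formula equals \((\alpha -\beta )/(2\alpha )\), and the normalization and mean of the \(\mathrm{Gamma}(2, 4\alpha )\) density of \(H\):

```python
import numpy as np

alpha, beta = 2.0, 0.5
rng = np.random.default_rng(4)

# P{T > 0} = P{I+ > I-} for independent exponentials with rates 2*(alpha+beta)
# and 2*(alpha-beta); an exponential race is won with probability
# (other rate) / (sum of rates), here (alpha - beta) / (2*alpha).
i_plus = rng.exponential(1.0 / (2 * (alpha + beta)), 200_000)
i_minus = rng.exponential(1.0 / (2 * (alpha - beta)), 200_000)
p_mc = (i_plus > i_minus).mean()
p_exact = (alpha - beta) / (2 * alpha)
print(round(p_mc, 3), p_exact)

# The Gamma(2, 4*alpha) density (4*alpha)^2 * h * exp(-4*alpha*h) of H
# integrates to one and has mean 2/(4*alpha) = 1/(2*alpha).
h = np.linspace(0.0, 5.0 / alpha, 200_001)
dh = h[1] - h[0]
dens = (4 * alpha) ** 2 * h * np.exp(-4 * alpha * h)
mass, mean = dens.sum() * dh, (h * dens).sum() * dh
print(round(mass, 4), round(mean, 4))
```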
9 Some facts about Lipschitz minorants
The following is a restatement of (1.1) accompanied by a proof.
Lemma 9.1
Suppose that the function \(f: \mathbb{R }\rightarrow \mathbb{R }\) has \(\alpha \)-Lipschitz minorant \(m: \mathbb{R }\rightarrow \mathbb{R }\). Then,
Proof
Consider the first equality. Fix \(t \in \mathbb{R }\). Because \(m\) is \(\alpha \)-Lipschitz, if \(h \le m(t)\), then \(h - \alpha |t-s| \le m(t) - \alpha |t-s| \le m(s) \le f(s)\) for all \(s \in \mathbb{R }\). On the other hand, if \(h > m(t)\), then \(s \mapsto (h - \alpha |t-s|) \vee m(s)\) is an \(\alpha \)-Lipschitz function that dominates \(m\) (strictly at \(t\)), and so \((h - \alpha |t-s|) \vee m(s) > f(s)\) for some \(s \in \mathbb{R }\). This implies that \(h - \alpha |t-s| > f(s)\), since \(m(s) \le f(s)\). The second equality is simply a rephrasing of the first. \(\square \)
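The characterization in Lemma 9.1 amounts to the inf-convolution formula \(m(t) = \inf \{ f(s) + \alpha |t-s| : s \in \mathbb{R }\}\), which can be tested on a grid. In the sketch below (ours) we take \(f(t) = t^2\), whose \(\alpha \)-Lipschitz minorant is explicit: \(t^2\) on \(|t| \le \alpha /2\), continued by the tangent lines \(\alpha |t| - \alpha ^2/4\) outside:

```python
import numpy as np

# Grid check of the inf-convolution formula m(t) = inf_s (f(s) + alpha*|t-s|)
# against the closed-form alpha-Lipschitz minorant of f(t) = t^2.
alpha = 2.0
t = np.linspace(-4.0, 4.0, 2001)
f = t ** 2
m = np.array([np.min(f + alpha * np.abs(t - s)) for s in t])
exact = np.where(np.abs(t) <= alpha / 2, t ** 2, alpha * np.abs(t) - alpha ** 2 / 4)
err = np.max(np.abs(m - exact))
print(err)   # small: grid minorant matches the closed form
```

Here the infimizing \(s\) always lies inside the window, so truncating \(\mathbb{R }\) to \([-4,4]\) introduces no boundary error; for a general \(f\) the window would have to be widened.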
We leave the proof of the following straightforward consequence of Lemma 9.1 to the reader.
Corollary 9.2
Suppose that the function \(f: \mathbb{R }\rightarrow \mathbb{R }\) has \(\alpha \)-Lipschitz minorant \(m: \mathbb{R }\rightarrow \mathbb{R }\). Define functions \(f^\leftarrow : \mathbb{R }\rightarrow \mathbb{R }\) and \(f^\rightarrow : \mathbb{R }\rightarrow \mathbb{R }\) by
and
Denote the \(\alpha \)-Lipschitz minorants of \(f^\leftarrow \) and \(f^\rightarrow \) by \(m^\leftarrow \) and \(m^\rightarrow \), respectively. Then, \(m^\leftarrow (t) = m(t)\) for all \(t \le 0\) and \(m^\rightarrow (t) = m(t)\) for all \(t \ge 0\).
The next result says that if \(f\) is a càdlàg function with \(\alpha \)-Lipschitz minorant \(m\), then on an open interval in the complement of the closed set \(\{t \in \mathbb{R }: m(t) = f(t) \wedge f(t-)\}\) the graph of the function \(m\) is either a straight line or a “sawtooth”.
Lemma 9.3
Suppose that \(f: \mathbb{R }\rightarrow \mathbb{R }\) is a càdlàg function with \(\alpha \)-Lipschitz minorant \(m : \mathbb{R }\rightarrow \mathbb{R }\). The set \(\{t \in \mathbb{R }: m(t) = f(t) \wedge f(t-)\}\) is closed. If \(t^{\prime } < t^{\prime \prime }\) are such that \(f(t^{\prime }) \wedge f(t^{\prime }-) = m(t^{\prime }),\,f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-) = m(t^{\prime \prime })\), and \(f(t) \wedge f(t-) > m(t)\) for \(t^{\prime } < t < t^{\prime \prime }\), then, setting \(t^* = (f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-) - f(t^{\prime }) \wedge f(t^{\prime }-) + \alpha (t^{\prime \prime } + t^{\prime }))/(2 \alpha )\),
Proof
We first show that the set \(\{t \in \mathbb{R }: m(t) = f(t) \wedge f(t-)\}\) is closed by showing that its complement is open. Suppose \(t\) is in the complement, so that \(f(t) \wedge f(t-) - m(t) =: \epsilon > 0\). Because \(f\) is càdlàg and \(m\) is continuous, there exists \(\delta > 0\) such that if \(|s - t| < \delta \), then \(f(s) > f(t) \wedge f(t-) - \epsilon /3\) and \(m(s) < m(t) + \epsilon /3\). Hence, \(f(s-) \ge f(t) \wedge f(t-) - \epsilon /3\) and \(f(s) \wedge f(s-) - m(s) > \epsilon /3\) for \(|s - t| < \delta \), showing that a neighborhood of \(t\) is also in the complement.
Turning to the second claim, define a function \(\tilde{m} : \mathbb{R }\rightarrow \mathbb{R }\) by
That is, \(\tilde{m}(t) = h^* - \alpha |t - t^*|\), where
Because \(m(t^{\prime }) = \tilde{m}(t^{\prime }),\,m(t^{\prime \prime }) = \tilde{m}(t^{\prime \prime })\), and \(m\) is \(\alpha \)-Lipschitz, we have \(m(t) \le \tilde{m}(t)\) for \(t \in [t^{\prime },t^{\prime \prime }]\) and \(m(t) \ge \tilde{m}(t)\) for \(t \notin [t^{\prime },t^{\prime \prime }]\). Suppose for some \(t_0 \in (t^{\prime },t^{\prime \prime })\) that \(m(t_0) < \tilde{m}(t_0)\). We must have that \(m(t_0) - \alpha |t^{\prime }-t_0| \le m(t^{\prime }) \le f(t^{\prime }) \wedge f(t^{\prime }-)\) and \(m(t_0) - \alpha |t^{\prime \prime }-t_0| \le m(t^{\prime \prime }) \le f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-)\). Moreover, both of these inequalities must be strict, because otherwise we would conclude that \(m(t_0) \ge \tilde{m}(t_0)\).
We can therefore choose \(\epsilon > 0\) sufficiently small so that \(m(t_0) + \epsilon - \alpha |t - t_0| < f(t) \wedge f(t-)\) for \(t \in [t^{\prime },t^{\prime \prime }]\). This implies that \(m(t_0) + \epsilon - \alpha |t - t_0| < \tilde{m}(t) \le m(t) \le f(t) \wedge f(t-)\) for \(t \notin [t^{\prime },t^{\prime \prime }]\). Thus, \(t \mapsto (m(t_0) + \epsilon - \alpha |t - t_0|) \vee m(t)\) is an \(\alpha \)-Lipschitz function that is dominated everywhere by \(f\) and strictly dominates \(m\) at the point \(t_0\), contradicting the definition of \(m\). \(\square \)
We have a recipe for finding \(\inf \{ t>0 : f(t) \wedge f(t-) = m(t) \}\) when \(f\) is a càdlàg function with \(\alpha \)-Lipschitz minorant \(m\). Figure 3 gives two examples of how the recipe applies to different paths (note that the value of \(\alpha \) differs for the two examples).
Lemma 9.4
Let \(f: \mathbb{R }\rightarrow \mathbb{R }\) be a càdlàg function with \(\alpha \)-Lipschitz minorant \(m : \mathbb{R }\rightarrow \mathbb{R }\). Set
and
Suppose that \(f(\mathbf s ) \le f(\mathbf s -)\). Then, \(\mathbf e =\mathbf d \).
Proof
It suffices to show the following:
For \(0 < t < \mathbf s \), it follows from the definition of \(\mathbf s \) that
For \(\mathbf s \le t < \mathbf e \), it follows from the definition of \(\mathbf e \) that
and hence
This completes the proof of (9.1).
Now \(f(\mathbf e ) \wedge f(\mathbf e -) + \alpha (\mathbf e - \mathbf s ) = \inf \{ f(u) + \alpha (u - \mathbf s ) : u \ge \mathbf s \} \), and so \(f(\mathbf e ) \wedge f(\mathbf e -) = \inf \{ f(u) + \alpha (u - \mathbf e ) : u \ge \mathbf s \} \). This certainly gives
Combined with the definition of \(\mathbf s \) and the assumption \(f(\mathbf s ) \le f(\mathbf s -)\), it also gives
\( f(\mathbf e ) \wedge f(\mathbf e -) + \alpha (\mathbf e - \mathbf s ) \le f(\mathbf s ) \wedge f(\mathbf s -) \le \alpha \mathbf s + \inf \{ f(u) \wedge f(u-) - \alpha u : u \le 0 \} . \)
Thus, \(f(\mathbf e ) \wedge f(\mathbf e -) + 2 \alpha (\mathbf e - \mathbf s ) \le \inf \{ f(r) + \alpha (\mathbf e - r) : r \le 0 \} \) and hence, a fortiori,
\( f(\mathbf e ) \wedge f(\mathbf e -) \le \inf \{ f(t) + \alpha |t - \mathbf e | : t \le 0 \} . \)  (9.5)
For \(0 < s < \mathbf s ,\,f(s) - \alpha s > \inf \{ f(r) - \alpha r : r \le 0 \}\), and so
\( f(\mathbf e ) \wedge f(\mathbf e -) \le \inf \{ f(t) + \alpha |t - \mathbf e | : 0 < t < \mathbf s \} . \)  (9.6)
Combining (9.4), (9.5) and (9.6) gives (9.2).
The proof of (9.3) is a straightforward consequence of Lemma 9.3 and we leave it to the reader. \(\square \)
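The recipe of Lemma 9.4 can be tried out on a discretized path. The following sketch is our own toy construction, not from the paper: the grid, the path values, and the variable names are illustrative, and exact floating-point comparisons work only because the example is built from small integers (on generic data one would compare with a tolerance).

```python
# Discretized illustration of the recipe in Lemma 9.4, with alpha = 1 and time 0
# in the interior of the sampling window. All data are made up for illustration.

ALPHA = 1.0
t = [-2, -1, 0, 1, 2, 3, 4]
f = [0.0, 3.0, 3.0, 3.0, 1.0, 3.0, 3.0]   # made-up cadlag sample values
i0 = t.index(0)

# alpha-Lipschitz minorant on the grid: m[i] = min_j (f[j] + ALPHA*|t[i]-t[j]|)
m = [min(f[j] + ALPHA * abs(t[i] - t[j]) for j in range(len(f)))
     for i in range(len(f))]

# d: first time > 0 at which the path touches its minorant
d = next(t[i] for i in range(i0 + 1, len(t)) if f[i] == m[i])

# s: first time > 0 at which f(t) - alpha*t drops to its infimum over t <= 0
left_min = min(f[i] - ALPHA * t[i] for i in range(i0 + 1))
s = next(t[i] for i in range(i0 + 1, len(t)) if f[i] - ALPHA * t[i] <= left_min)

# e: first time >= s at which f(t) + alpha*t attains its infimum over [s, oo)
tail = [i for i in range(len(t)) if t[i] >= s]
right_min = min(f[i] + ALPHA * t[i] for i in tail)
e = next(t[i] for i in tail if f[i] + ALPHA * t[i] == right_min)

print(s, e, d)   # 1 2 2  -> e == d, as the lemma predicts
```

The two one-sided minimizations play the roles of the definitions of \(\mathbf s \) and \(\mathbf e \) in the lemma, and the direct contact search plays the role of \(\mathbf d \).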
Corollary 9.5
Let \(f: \mathbb{R }\rightarrow \mathbb{R }\) be a càdlàg function with \(\alpha \)-Lipschitz minorant \(m : \mathbb{R }\rightarrow \mathbb{R }\). Define \(\mathbf d ,\,\mathbf s \), and \(\mathbf e \) as in Lemma 9.4. Assume that \(f(\mathbf s ) \le f(\mathbf s -)\), so that \(\mathbf e =\mathbf d \). Put \(\mathbf g := \sup \{ t<0 : f(t) \wedge f(t-) = m(t) \}\) and assume that \(f(0) \wedge f(0-) > m(0)\), so that \(f(t) \wedge f(t-) > m(t)\) for \(t \in (\mathbf g ,\mathbf d )\). Let \(\mathbf t := (f(\mathbf d ) \wedge f(\mathbf d -) - f(\mathbf g ) \wedge f(\mathbf g -) + \alpha (\mathbf d + \mathbf g ))/(2 \alpha )\) be the point in \([\mathbf g , \mathbf d ]\) at which the function \(m\) achieves its maximum. Then, \(\mathbf g \le \mathbf t \le \mathbf s \le \mathbf d \). Moreover, if \(\mathbf t =\mathbf s \), then \(\mathbf t =\mathbf s =\mathbf d \).
Proof
We first show that \(\mathbf g \le \mathbf t \le \mathbf s \le \mathbf d \). We certainly have \(\mathbf g \le \mathbf s \le \mathbf d \) and \(\mathbf g \le \mathbf t \le \mathbf d \), so it suffices to prove that \(\mathbf t \le \mathbf s \). Because \(\mathbf s \ge 0\), this is clear when \(\mathbf t < 0\), so it further suffices to consider the case where \(\mathbf t \ge 0\). Suppose, then, that \(\mathbf g \le 0 \le \mathbf s < \mathbf t \le \mathbf d \).
From Lemma 9.3 we have \(m(u) = f(\mathbf g ) \wedge f(\mathbf g -) + \alpha (u - \mathbf g )\) for \(\mathbf g \le u \le \mathbf t \) and \(f(u) \wedge f(u-) \ge f(\mathbf g ) \wedge f(\mathbf g -) + \alpha (u - \mathbf g )\) for \(u \le \mathbf t \). Therefore, \(\inf \{f(u) \wedge f(u-) - \alpha u : u \le 0\} \ge f(\mathbf g ) \wedge f(\mathbf g -) - \alpha \mathbf g \), and hence \(\inf \{f(u) \wedge f(u-) - \alpha u : u \le 0\} = f(\mathbf g ) \wedge f(\mathbf g -) - \alpha \mathbf g \). Now, by definition of \(\mathbf s ,\,f(\mathbf s ) \wedge f(\mathbf s- ) - \alpha \mathbf s \le \inf \{f(u) \wedge f(u-) - \alpha u : u \le 0\}\), and so
\( f(\mathbf s ) \wedge f(\mathbf s -) \le f(\mathbf g ) \wedge f(\mathbf g -) + \alpha (\mathbf s - \mathbf g ) = m(\mathbf s ) , \)
which contradicts \(\mathbf d = \inf \{u > 0 : f(u) \wedge f(u-) = m(u)\} = \inf \{u > 0 : f(u) \wedge f(u-) \le m(u)\}\) unless \(\mathbf s =0\) and \(f(0) \wedge f(0-) = m(0)\), but we have assumed that this is not the case.
A similar argument shows that if \(\mathbf t =\mathbf s \), then \(\mathbf t =\mathbf s =\mathbf d \). \(\square \)
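In the same toy discretized setting used above (again our own illustrative construction, not from the paper), one can compute \(\mathbf g ,\,\mathbf s ,\,\mathbf d \), and the peak location \(\mathbf t \) from the formula in Corollary 9.5 and check the ordering \(\mathbf g \le \mathbf t \le \mathbf s \le \mathbf d \).

```python
# Illustrating Corollary 9.5 on a made-up grid example with alpha = 1.
# Note f(0) > m(0) here, as the corollary requires.

ALPHA = 1.0
t = [-2, -1, 0, 1, 2, 3, 4]
f = [0.0, 3.0, 3.0, 3.0, 1.0, 3.0, 3.0]
i0 = t.index(0)

# alpha-Lipschitz minorant on the grid
m = [min(f[j] + ALPHA * abs(t[i] - t[j]) for j in range(len(f)))
     for i in range(len(f))]

g = max(t[i] for i in range(i0) if f[i] == m[i])               # last contact before 0
d = next(t[i] for i in range(i0 + 1, len(t)) if f[i] == m[i])  # first contact after 0
left_min = min(f[i] - ALPHA * t[i] for i in range(i0 + 1))
s = next(t[i] for i in range(i0 + 1, len(t)) if f[i] - ALPHA * t[i] <= left_min)

# peak of the minorant on [g, d], from the formula in Corollary 9.5
t_peak = (f[t.index(d)] - f[t.index(g)] + ALPHA * (d + g)) / (2 * ALPHA)

print(g, t_peak, s, d)   # -2 0.5 1 2, so g <= t <= s <= d
```

The peak value can also be cross-checked: \(m\) rises at slope \(\alpha \) from \((\mathbf g , f(\mathbf g ))\) and falls at slope \(\alpha \) to \((\mathbf d , f(\mathbf d ))\), and the two lines meet at \(\mathbf t \).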
References
Abramson, J., Pitman, J., Ross, N.: Convex minorants of random walks and Lévy processes. Electron. Commun. Probab. 16, 423–434 (2011)
Balder, E.J.: An extension of duality-stability relations to nonconvex optimization problems. SIAM J. Control Optim. 15(2), 329–343 (1977)
Bass, R.F.: Markov processes and convex minorants. In: Seminar on probability, XVIII, vol. 1059, lecture notes in Mathematics, pp. 29–41, Springer, Berlin (1984)
Bertoin, J.: Lévy processes, volume 121 of Cambridge tracts in mathematics. Cambridge University Press, Cambridge (1996)
Bertoin, J.: Regenerative embedding of Markov sets. Probab. Theory Relat. Fields 108(4), 559–571 (1997)
Bertoin, J.: Regularity of the half-line for Lévy processes. Bull. Sci. Math. 121(5), 345–354 (1997)
Bertoin, J.: The convex minorant of the Cauchy process. Electron. Commun. Probab. 5, 51–55 (2000)
Borodin, A.N., Salminen, P.: Handbook of Brownian motion—facts and formulae. Probability and its applications, 2nd edn. Birkhäuser, Basel (2002)
Carolan, C., Dykstra, R.: Marginal densities of the least concave majorant of Brownian motion. Ann. Stat. 29(6), 1732–1750 (2001)
Chaumont, L.: On the law of the supremum of Lévy processes. Ann. Probab. (to appear) (2011)
Çinlar, E.: Sunset over Brownistan. Stoch. Process. Appl. 40(1), 45–53 (1992)
Dietrich, H.: Zur \(c\)-Konvexität und \(c\)-Subdifferenzierbarkeit von Funktionalen. Optimization 19(3), 355–371 (1988)
Doney, R.A.: Fluctuation theory for Lévy processes. Lectures from the 35th summer school on probability theory, Saint-Flour, 6–23 July 2005 (foreword by Picard, J.). Lecture notes in mathematics, vol. 1897. Springer, Berlin (2007)
Elster, K.H., Nehse, R.: Zur Theorie der Polarfunktionale. Math. Operationsforsch. Stat. 5(1), 3–21 (1974)
Faggionato, A.: The alternating marked point process of \(h\)-slopes of drifted Brownian motion. Stoch. Process. Appl. 119(6), 1765–1791 (2009)
Fitzsimmons, P.J., Taksar, M.: Stationary regenerative sets and subordinators. Ann. Probab. 16(3), 1299–1305 (1988)
Fourati, S.: Points de croissance des processus de Lévy et théorie générale des processus. Probab. Theory Relat. Fields 110(1), 13–49 (1998)
Getoor, R.K., Sharpe, M.J.: Last exit decompositions and distributions. Indiana Univ. Math. J. 23, 377–404 (1973/74)
Groeneboom, P.: The concave majorant of Brownian motion. Ann. Probab. 11(4), 1016–1027 (1983)
Hansen, P., Jaumard, B., Lu, S.H.: Global optimization of univariate Lipschitz functions. I. Survey and properties. Math. Program. 55(3, Ser. A), 251–272 (1992)
Hansen, P., Jaumard, B., Lu, S.H.: Global optimization of univariate Lipschitz functions. II. New algorithms and computational comparison. Math. Program. 55(3, Ser. A), 273–292 (1992)
Hiriart-Urruty, J.-B.: Extension of Lipschitz functions. J. Math. Anal. Appl. 77(2), 539–554 (1980)
Hiriart-Urruty, J.-B.: Lipschitz \(r\)-continuity of the approximate subdifferential of a convex function. Math. Scand. 47(1), 123–134 (1980)
Horst, R., Tuy, H.: Global optimization: deterministic approaches, 2nd edn. Springer, Berlin (1993)
Kallenberg, O.: Foundations of modern probability. Probability and its applications, 2nd edn. Springer, New York (2002)
Levin, V.L.: Abstract convexity in measure theory and in convex analysis. J. Math. Sci. (N. Y.) 116(4), 3432–3467 (2003). (optimization and related topics, 4)
Lucet, Y.: What shape is your conjugate? A survey of computational convex analysis and its applications. SIAM J. Optim. 20(1), 216–250 (2009)
Maisonneuve, B.: Ensembles régénératifs, temps locaux et subordinateurs. In: Séminaire de Probabilités, V (University of Strasbourg, année universitaire 1969–1970), Lecture notes in mathematics, vol. 191, pp. 147–169, Springer, Berlin (1971)
Maisonneuve, B.: Ensembles régénératifs de la droite. Z. Wahrsch. Verw. Gebiete 63(4), 501–510 (1983)
Millar, P.W.: Exit properties of stochastic processes with stationary independent increments. Trans. Am. Math. Soc. 178, 459–479 (1973)
Millar, P.W.: Zero-one laws and the minimum of a Markov process. Trans. Am. Math. Soc. 226, 365–391 (1977)
Millar, P.W.: A path decomposition for Markov processes. Ann. Probab. 6(2), 345–348 (1978)
Millar, P.W.: Random times and decomposition theorems. In: Probability (proceedings of symposia in pure mathematics, vol. XXXI, University of Illinois, Urbana, Ill, 1976), pp. 91–103. American Mathematical Society, Providence (1977)
Neveu, J., Pitman, J.: Renewal property of the extrema and tree property of the excursion of a one-dimensional Brownian motion. In: Séminaire de Probabilités, XXIII, vol. 1372 of lecture notes in mathematics, pp. 239–247, Springer, Berlin (1989)
Neveu, J., Pitman, J.W.: The branching process in a Brownian excursion. In: Séminaire de Probabilités, XXIII, vol. 1372 of lecture notes in mathematics, pp. 248–257, Springer, Berlin (1989)
Norkin, V.I., Onishchenko, B.O.: Minorant methods of stochastic global optimization. Kibernet. Sistem. Anal. 41(2), 56–70 (2005)
Pecherskii, E.A., Rogozin, B.A.: On joint distributions of random variables associated with fluctuations of a process with independent increments. Theory Probab. Appl. 14(3), 410–423 (1969)
Pitman, J., Uribe Bravo, G.: The convex minorant of a Lévy process. Ann. Probab. (to appear) (2011)
Pitman, J.W.: Remarks on the convex minorant of Brownian motion. In: Seminar on stochastic processes, 1982 (Evanston, Ill, 1982), vol. 5 of progress in probability and statistics, pp. 219–227, Birkhäuser, Boston (1983)
Pittenger, A.O., Shih, C.T.: Coterminal families and the strong Markov property. Bull. Amer. Math. Soc. 78, 439–443 (1972)
Rachev, S.T., Rüschendorf, L.: Mass transportation problems, vol. I: theory. Probability and its applications (New York). Springer, New York (1998)
Rockafellar, R.T., Wets, R.J.B.: Variational analysis, volume 317 of Grundlehren der Mathematischen Wissenschaften (fundamental principles of mathematical sciences). Springer, Berlin (1998)
Rogers, L.C.G., Williams, D.: Diffusions, Markov processes, and martingales, vol. 2: Itô calculus. Wiley series in probability and mathematical statistics. Wiley, New York (1987)
Rogozin, B.A.: The local behavior of processes with independent increments. Teor. Verojatnost. i Primenen. 13, 507–512 (1968)
Štatland, E.S.: On local properties of processes with independent increments. Teor. Verojatnost. i Primenen. 10, 344–350 (1965)
Suidan, T.M.: Convex minorants of random walks and Brownian motion. Teor. Veroyatnost. i Primenen. 46(3), 498–512 (2001)
Vigon, V.: Dérivées de Dini des processus de Lévy. http://www-irma.u-strasbg.fr/~vigon/boulot/pas_publication/fichiers/conjecture.pdf
Vigon, V.: Votre Lévy rampe-t-il? J. Lond. Math. Soc. (2) 65(1), 243–256 (2002)
Vigon, V.: Abrupt Lévy processes. Stoch. Process. Appl. 103(1), 155–168 (2003)
Villani, C.: Optimal transport: old and new. Grundlehren der Mathematischen Wissenschaften (fundamental principles of mathematical sciences), vol. 338. Springer, Berlin (2009)
Acknowledgments
We thank an anonymous referee for a number of helpful suggestions. S. N. Evans was supported in part by National Science Foundation Grant DMS-0907630.
Abramson, J., Evans, S.N. Lipschitz minorants of Brownian motion and Lévy processes. Probab. Theory Relat. Fields 158, 809–857 (2014). https://doi.org/10.1007/s00440-013-0497-9
Keywords
- Fluctuation theory
- Regenerative set
- Subordinator
- Abrupt process
- Global minimum
- \(c\)-Convexity
- Pasch-Hausdorff envelope